The Oxford Handbook of
THE ECONOMICS OF CENTRAL BANKING
The Oxford Handbook of
THE ECONOMICS OF CENTRAL BANKING

Edited by
DAVID G. MAYES, PIERRE L. SIKLOS, and JAN-EGBERT STURM
Oxford University Press is a department of the University of Oxford. It furthers the University’s objective of excellence in research, scholarship, and education by publishing worldwide. Oxford is a registered trade mark of Oxford University Press in the UK and certain other countries.

Published in the United States of America by Oxford University Press
198 Madison Avenue, New York, NY 10016, United States of America.

© Oxford University Press 2019

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, without the prior permission in writing of Oxford University Press, or as expressly permitted by law, by license, or under terms agreed with the appropriate reproduction rights organization. Inquiries concerning reproduction outside the scope of the above should be sent to the Rights Department, Oxford University Press, at the address above. You must not circulate this work in any other form and you must impose this same condition on any acquirer.

Library of Congress Cataloging-in-Publication Data
Names: Mayes, David G., author. | Siklos, Pierre L., 1955– author. | Sturm, Jan-Egbert, 1969– author.
Title: The Oxford handbook of the economics of central banking / edited by David G. Mayes, Pierre L. Siklos, Jan-Egbert Sturm.
Description: New York, NY : Oxford University Press, [2018] | Includes bibliographical references and index.
Identifiers: LCCN 2018018218 (print) | LCCN 2018036142 (ebook) | ISBN 9780190626204 (UPDF) | ISBN 9780190626211 (EPUB) | ISBN 9780190626198 (hardcover : alk. paper)
Subjects: LCSH: Banks and banking, Central. | Monetary policy.
Classification: LCC HG1811 (ebook) | LCC HG1811 .O94 2018 (print) | DDC 332.1/1—dc23
LC record available at https://lccn.loc.gov/2018018218

1 3 5 7 9 8 6 4 2

Printed by Sheridan Books, Inc., United States of America
To David Mayes, a dear colleague, a dedicated policymaker, and a good friend to many
Contents
List of Contributors  xi

1. Central Banking’s Long March over the Decades  1
   David G. Mayes, Pierre L. Siklos, and Jan-Egbert Sturm

PART I. CENTRAL BANK GOVERNANCE AND VARIETIES OF INDEPENDENCE

2. Monetary Policy Committees and Voting Behavior  39
   Sylvester Eijffinger, Ronald Mahieu, and Louis Raes

3. Peaks and Troughs: Economics and Political Economy of Central Bank Independence Cycles  59
   Donato Masciandaro and Davide Romelli

4. The Governance of Central Banks: With Some Examples and Some Implications for All  99
   Forrest Capie and Geoffrey Wood

PART II. CENTRAL BANK FINANCING, BALANCE-SHEET MANAGEMENT, AND STRATEGY

5. Can the Central Bank Alleviate Fiscal Burdens?  131
   Ricardo Reis

6. The Impact of the Global Financial Crisis on Central Banking  171
   Alex Cukierman

7. Strategies for Conducting Monetary Policy: A Critical Appraisal  193
   D. L. Thornton

PART III. CENTRAL BANK COMMUNICATION AND EXPECTATIONS MANAGEMENT

8. Central Bank Communication: How to Manage Expectations?  231
   Jakob de Haan and Jan-Egbert Sturm

9. Central Bank Communications: A Case Study  263
   J. Scott Davis and Mark A. Wynne

10. Transparency of Monetary Policy in the Postcrisis World  287
    Nergiz Dincer, Barry Eichengreen, and Petra Geraats

PART IV. POLICY TRANSMISSION MECHANISMS AND OPERATIONS

11. Real Estate, Construction, Money, Credit, and Finance  337
    Peter Sinclair

12. Inside the Bank Box: Evidence on Interest-Rate Pass-Through and Monetary Policy Transmission  369
    Leonardo Gambacorta and Paul Mizen

13. Term Premium Variability and Monetary Policy  406
    Timothy S. Fuerst and Ronald Mau

14. Open Market Operations  436
    Jan Toporowski

PART V. THE NEW AGE OF CENTRAL BANKING: MANAGING MICRO- AND MACROPRUDENTIAL FRAMEWORKS

15. Central Banking and Prudential Regulation: How the Wheel Turns  457
    Kevin Davis

16. Central Banks’ New Macroprudential Consensus  482
    Michael W. Taylor, Douglas W. Arner, and Evan C. Gibson

17. Central Banks and the New Regulatory Regime for Banks  503
    David T. Llewellyn

18. Macroprudential Regulation of Banks and Financial Institutions  528
    Gerald P. Dwyer

PART VI. CENTRAL BANKING AND CRISIS MANAGEMENT

19. Central Banking and Crisis Management from the Perspective of Austrian Business Cycle Theory  551
    Gunther Schnabl

20. The Changing Role of the Central Bank in Crisis Avoidance and Management  585
    David G. Mayes

21. Managing Macrofinancial Crises: The Role of the Central Bank  619
    Patrick Honohan, Domenico Lombardi, and Samantha St. Amand

PART VII. EVOLUTION OR REVOLUTION IN POLICY MODELING?

22. Macromodeling, Default, and Money  655
    Charles Goodhart, Nikolaos Romanidis, Martin Shubik, and Dimitrios P. Tsomocos

23. Model Uncertainty in Macroeconomics: On the Implications of Financial Frictions  679
    Michael Binder, Philipp Lieberknecht, Jorge Quintana, and Volker Wieland

24. What Has Publishing Inflation Forecasts Accomplished? Central Banks and Their Competitors  737
    Pierre L. Siklos

Index  779
Contributors
Douglas W. Arner is the Kerry Holdings Professor in Law at the University of Hong Kong.

Michael Binder is the Chair for International Macroeconomics and Macroeconometrics at the University of Frankfurt.

Forrest Capie is Professor Emeritus of Economic History at the Cass Business School, City, University of London.

Alex Cukierman is Professor of Economics at Tel-Aviv University and Interdisciplinary Center Herzliya.

J. Scott Davis is Senior Research Economist and Policy Advisor at the Federal Reserve Bank of Dallas.

Kevin Davis is Professor of Finance at the University of Melbourne.

Jakob de Haan is Head of Research at De Nederlandsche Bank and Professor of Political Economy at the University of Groningen.

Nergiz Dincer is Professor of Economics at TED University.

Gerald P. Dwyer is Professor of Economics and BB&T Scholar at Clemson University.

Barry Eichengreen is George C. Pardee and Helen N. Pardee Professor of Economics and Professor of Political Science at the University of California, Berkeley.

Sylvester Eijffinger is Full Professor of Financial Economics and Jean Monnet Professor of European Financial and Monetary Integration at Tilburg University.

Timothy S. Fuerst was the William and Dorothy O’Neill Professor of Economics at the University of Notre Dame.

Leonardo Gambacorta is Research Adviser in the Monetary and Economic Department at the Bank for International Settlements.

Petra Geraats is University Lecturer in the Faculty of Economics at Cambridge University.

Evan C. Gibson is Senior Research Assistant at the Asian Institute of International Financial Law at the University of Hong Kong.
Charles Goodhart is former Director of the Financial Regulation Research Programme at the London School of Economics.

Patrick Honohan is Nonresident Senior Fellow at the Peterson Institute for International Economics.

Philipp Lieberknecht is an Economist at the Bundesbank, Frankfurt, Germany.

David T. Llewellyn is Emeritus Professor of Money and Banking at Loughborough University.

Domenico Lombardi is former Director of the Global Economy Program at the Centre for International Governance Innovation.

Ronald Mahieu was Professor of Finance and Innovation at Tilburg University.

Donato Masciandaro is Professor of Economics and Chair in Economics of Financial Regulation at Bocconi University and President of the Baffi CAREFIN Center for Applied Research on International Markets, Banking, Finance and Regulation.

Ronald Mau is a Ph.D. candidate in the Department of Economics at the University of Notre Dame.

David G. Mayes was BNZ Professor of Finance and Director of the Europe Institute at the University of Auckland.

Paul Mizen is Professor of Monetary Economics and Director of the Center for Finance, Credit and Macroeconomics at the University of Nottingham.

Jorge Quintana is Research Assistant at the Graduate School of Economics, Finance and Management, Goethe University.

Louis Raes is Assistant Professor at Tilburg School of Economics and Management, Tilburg University.

Ricardo Reis is Arthur Williams Phillips Professor of Economics at the London School of Economics and Political Science.

Nikolaos Romanidis is a doctoral candidate at the Saïd Business School at Oxford University.

Davide Romelli is Assistant Professor at the Department of Economics, Trinity College Dublin.

Gunther Schnabl is Full Professor and Director of the Institute for Economic Policy at Leipzig University.

Martin Shubik was Professor Emeritus of Management and Economics at Yale University.
Pierre L. Siklos is Professor of Economics at Wilfrid Laurier University and Balsillie School of International Affairs.

Peter Sinclair is Emeritus Professor of Economics at the University of Birmingham.

Samantha St. Amand was Senior Research Associate in the Global Economy Program at the Centre for International Governance Innovation.

Jan-Egbert Sturm is Professor of Applied Macroeconomics and Director of KOF Swiss Economic Institute, ETH Zurich.

Michael W. Taylor is Managing Director and Chief Credit Officer Asia Pacific at Moody’s Investors Service.

D. L. Thornton, Economics LLC, was Vice President and Economic Adviser at the Federal Reserve Bank of St. Louis.

Jan Toporowski is Professor of Economics and Finance at SOAS University of London.

Dimitrios P. Tsomocos is Professor of Financial Economics at Saïd Business School and a Fellow in Management at St Edmund Hall, University of Oxford.

Volker Wieland holds the IMFS Endowed Chair for Monetary Economics at the Institute for Monetary and Financial Stability, Goethe University Frankfurt.

Geoffrey Wood is Professor Emeritus of Economics at Cass Business School, City, University of London.

Mark A. Wynne is Vice President, Associate Director of Research, and Director of the Globalization and Monetary Policy Institute at the Federal Reserve Bank of Dallas.
Chapter 1
Central Banking’s Long March over the Decades
David G. Mayes, Pierre L. Siklos, and Jan-Egbert Sturm
1.1 Introduction

In a speech delivered at the beginning of 2000, Mervyn King, former governor of the Bank of England (BoE), argued that a “successful central bank should be boring” (King 2000, 6), a statement that has often been repeated to highlight aspirations once held by many central bankers. In other words, monetary policy should be an uncontroversial, predictable technical exercise whose objectives have general support across society and the political spectrum. The new century’s arrival, several years into a period that came to be called the “Great Moderation” (Bernanke 2004), supposedly coincided with central banks being at the apogee of their power and influence among public institutions, their reputation celebrated as the outcome of good policy practices supported by convincing theory.

As we approach the end of the second decade of the twenty-first century, much has changed in the world of central banking, and yet some important elements remain just as they were before the events that began in 2007 and are now referred to as the great financial crisis or the global financial crisis (either way, GFC). The GFC produced an outpouring of new research, memoirs, and personal accounts, which asked how it came to be that modern finance produced such a large financial crisis and where we could go from here. King himself, looking back on his years in central banking (King 2016), argues that what ails the financial system remains firmly in place and that a new and possibly even larger crisis is in the offing. Others (e.g., El-Erian 2016) underscore that in spite of the GFC, monetary authorities around the world remain “the only game in town.”
And yet there are equally powerful signs that many things are changing, and it remains unclear whether the accomplishments of central banks trumpeted almost two decades ago will become just another phase in the history of central banks. Where once policy rules were supposed to provide guidance and educate the public about how and why central banks change the stance of monetary policy, these are now replaced by a more artful view of how policy is set. Indeed, several central banks no longer rely on a single policy instrument, a policy interest rate, to signal the stance of policy. Instead, a complex mix of forward guidance, expressions of bias about the likely future direction of interest rates, not to mention the creation of a large number of new policy instruments, has now entered the vocabulary of central banking. Most of these, originally labeled unconventional monetary policies (UMP), have been in place and used for a decade. Today it is no longer obvious that the proliferation of instruments can still be thought of as unconventional.

Next, the belief that good practice in monetary policy goes hand in hand with the maintenance of financial stability is being replaced with experimentation about how achieving a monetary policy objective aimed at keeping inflation under control can operate in parallel with the desire to maintain financial system stability. The shift is more than technical, since also at stake are the institutional role of central banks and their relationship with other public agencies that are responsible for regulating and supervising the financial system.

Added to the above list of forces buffeting the central banking institution is pressure to revisit the widely accepted view that monetary policy should aim for low inflation.
While few doubt the wisdom that the control of purchasing power is a sensible objective, a growing number of voices, including some emanating from some central banks, are saying that low inflation, that is, inflation rates in the vicinity of 2 percent per annum,1 too often brings about the possibility that interest rates will also remain too low for too long. Indeed, there is a worry that central banks will more frequently face the so-called zero lower bound (ZLB) for nominal interest rates, especially if inflation rates also remain low for long, whether due to secular stagnation or demographic factors. This has not prevented some central banks from breaching this fictitious lower limit, with the result that we are now witnessing several examples of central banks maintaining negative interest rates, with the profession now asking, where is the effective lower bound? As this is written, the resistance to targeting higher inflation rates has been successful, but the topic has not been removed from consideration as one way to reform popular inflation-control regimes currently in place.2 Any list of forces making central banking less boring over the past decade would not be complete without mention of the tension between the precrisis consensus about certain truths regarding what drives the transmission mechanism of monetary policy and emerging challenges to these views. These tensions have now spilled over into rethinking how monetary policy interacts not only with the financial sector but with the real economy as well. As a result, as this volume goes to press, we find ourselves facing a new conundrum of sorts. Whereas there was little opposition to reducing central bank policy rates quickly as the effects of the GFC were beginning to be keenly felt,
central banks are equally keen to delay a return to more “normal” policy rates for fear of derailing a relatively weak recovery in economic growth. As a result, we are seeing a battle between those who would prefer to “lean against the wind” and the “data-dependent” view that growth beyond capacity together with higher inflation is just what economies need at the moment (e.g., see Svensson 2016; Filardo and Rungcharoenkitkul 2016). In any case, the emergence of macroprudential policies can deal with some of the distortionary consequences of interest rates that are too low for too long even if there is scant evidence that this is the case (e.g., see Lombardi and Siklos 2016 and part V of this volume).

The experience of the last decade is also producing new research that seeks to improve models used in policy analysis, not only by explicitly incorporating a nexus between the real and financial sectors but also by allowing for better ways to select among competing models in order to improve the quality of policy advice.

Ordinarily, a Handbook is intended to provide a reference of received knowledge in a particular field. If the foregoing interpretation of the state of central banking is reasonably accurate, then the time is surely appropriate to provide an account not only of where we are but, equally important, of where central banking might be headed. In other words, this Handbook is more than just a compendium of what central banking can do and has done; it is also an attempt to lay out the unanswered questions about existing monetary frameworks. The hope is that readers will obtain a glimpse of the sentiment expressed earlier, namely, that changes are afoot in the role and place of central banks in society.
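The “policy rules” whose fading role the discussion above notes can be made concrete. The best-known is the Taylor (1993) rule, which maps inflation and the output gap into a prescribed policy rate. The sketch below illustrates the rule with its original coefficients; it is an illustration only, not a description of any particular central bank’s practice.

```python
def taylor_rule(inflation, output_gap, r_star=2.0, pi_star=2.0):
    """Taylor (1993) rule, all quantities in percent:
    i = r* + pi + 0.5 * (pi - pi*) + 0.5 * output_gap,
    where r* is the equilibrium real rate and pi* the inflation target."""
    return r_star + inflation + 0.5 * (inflation - pi_star) + 0.5 * output_gap

# With inflation at 3 percent and output 1 percent above potential,
# the rule prescribes a 6 percent policy rate.
rate = taylor_rule(inflation=3.0, output_gap=1.0)
```

When both arguments sit at their targets (2 percent inflation, a zero gap), the rule returns a 4 percent “neutral” nominal rate; the zero-lower-bound worry raised earlier is precisely about episodes in which the prescribed rate turns negative.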
This book is divided into seven parts with titles that are self-explanatory: Central Bank Governance and Varieties of Independence; Central Bank Financing, Balance-Sheet Management, and Strategy; Central Bank Communication and Expectations Management; Policy Transmission Mechanisms and Operations; The New Age of Central Banking: Managing Micro- and Macroprudential Frameworks; Central Banking and Crisis Management; and Evolution or Revolution in Policy Modeling? To make clear that the book represents a beginning and not the end of an era in the study of central banks, the following pages provide a summary of some of the key contributions of each chapter along with related ideas and topics that could not be covered in such a vast area of study.
1.2 Part I: Central Bank Governance and Varieties of Independence

Much of the good fortune that allowed central bankers to extol their success at stabilizing inflation, if not relegating business cycles to history, was arguably due to acceptance of the idea that the monetary authority ought to be independent within government and not subject to the kind of political pressure that might lead to exploiting the short-run
trade-off believed to exist between inflation and real economic activity or unemployment. This was accomplished by granting central banks autonomy via legislation, that is, by granting de jure independence, or rather by appointing a central banker who is relatively more conservative about inflation than politicians concerned about reelection and shielded from political pressure to loosen monetary policy when this undermines the aim of achieving low and stable inflation. By the early 2000s, and in spite of several criticisms that the link between inflation and central bank independence (CBI) was weak, acceptance of CBI as the sine qua non of good institutional structure became common and was no longer widely debated.

Beyond awarding sole authority over the day-to-day operations of central banks, there were two other powerful forces at play during the 1990s and 2000s. Along with autonomy, both central bankers and politicians came to the conclusion that the objectives of monetary policy ought to be clearly and simply stated. Moreover, the lessons from the “great inflation” of the late 1960s through the early 1980s convinced policymakers and the public that low and stable inflation was the best, if not the only, objective that monetary policy should aim for. Armed with theory and empirical evidence, central banks, first in advanced economies and later in emerging-market economies, were assigned a mandate to control inflation. To be sure, there were considerable differences around the world in how explicit this mandate would be, as well as in how accountability for failing to reach an inflation objective would be penalized. Nevertheless, on the eve of the GFC, a significant majority of the global economy had adopted a policy strategy of this kind.

It also became more widely acknowledged that a successful central bank ought to deliberate policy options via a committee structure.
Not only would this ensure that diverse opinions could be heard inside the central bank, but in the presence of adequate transparency, it also would reassure the public that the individuals put in charge of monetary policy would be accountable for their decisions. However, the committee structure also raises many challenges. If the committee is too large, decisions risk taking too long; if the members of the committee are too much alike in their thinking about monetary policy, then the much-vaunted diversity necessary to air differing opinions is lost; finally, and depending on how votes are counted and on the manner in which motions are presented, not to mention whether committee members are individually accountable or the committee as a whole must answer for decisions taken, there is always the possibility that free riders or followers of the majority will not be willing to offer the necessary counterweight to the need for diverse thinking inside the committee. Chapter 2, by Eijffinger, Mahieu, and Raes, wades into the question of what we can learn by analyzing monetary policy committees (MPCs) and how they make decisions. Whereas economists often consider the number of dissenting voters in a committee, the distribution of voting over time given the extant macroeconomic environment, and the information content of policy reaction functions, the chapter argues that there is something to be learned from models (in this case, spatial voting models used in political science) to study votes taken by legislatures or in judicial decisions. Using data from Sweden’s Riksbank, Eijffinger, Mahieu, and Raes are able to rank members of its policymaking committee not only according to whether they are hawkish or dovish but
also by when their positions change over time. After all, there is no reason an individual needs to wear the hawkish or dovish label at all times, although this is clearly possible. Moreover, this kind of analysis also permits the creation of categories of central bankers over time according to whether they lean more heavily toward tightening or loosening monetary policy.

Of course, any model, no matter how enlightening it is about the positions taken by individual committee members, must confront the trade-off that inevitably exists between realism and complexity. The same is true of the spatial voting models of the variety used by Eijffinger, Mahieu, and Raes. Their work also underscores how important it is for outsiders to obtain the necessary information not only about how voting is conducted inside committees but also about the content of the deliberations, even if these are finely worded via the publication of minutes of central bank policymaking committees. Sadly, comparatively few central banks make this kind of information available. Indeed, the chapter by Eijffinger, Mahieu, and Raes also highlights the likely pivotal role played by the committee structure (e.g., central bank insiders versus outsiders), the size of the committee, which adds complexity to the policymaking process, and the numerous biases and other phenomena (e.g., groupthink, voting order, the precise wording of any motion) that complicate our understanding of the value of committee-based monetary policy decisions (see also Maier 2010). As the authors themselves acknowledge, there is a dearth of comparative analyses, as the extant literature tends to adopt the case-study approach.
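A crude flavor of this hawk-dove ranking exercise can be conveyed without any spatial voting machinery. The toy sketch below uses entirely hypothetical members and votes, and a deliberately naive score (the chapter’s actual method is a spatial voting model, not this tally): each member is scored by how often his or her preferred rate was at or above the rate the committee adopted, then sorted from most hawkish to most dovish.

```python
# Hypothetical illustration only: rank MPC members by a naive
# "hawkishness" score, the share of meetings in which a member's
# preferred policy rate was at or above the adopted rate. Names and
# votes are invented; serious analyses estimate positions far more
# carefully (e.g., with spatial voting models).

def rank_by_hawkishness(votes, adopted):
    """votes: {member: list of preferred rates, one per meeting};
    adopted: list of rates the committee actually adopted."""
    scores = {}
    for member, prefs in votes.items():
        hawkish_votes = sum(p >= a for p, a in zip(prefs, adopted))
        scores[member] = hawkish_votes / len(adopted)
    # Most hawkish first.
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

votes = {
    "A": [2.00, 2.25, 2.50, 2.50],  # persistent hawk
    "B": [1.75, 2.00, 2.25, 2.25],  # mostly votes with the majority
    "C": [1.50, 1.75, 2.00, 2.25],  # persistent dove
}
adopted = [1.75, 2.00, 2.25, 2.50]

ranking = rank_by_hawkishness(votes, adopted)
```

Recomputing the score over sub-periods would show, in the spirit of the chapter, that a member need not wear the hawk or dove label at all times.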
Finally, while an examination of decisions by MPCs is essential, the authors point out that even more can be learned from the successes and failures of committees by contrasting their activities with those that shadow them by providing a second opinion about the appropriate stance of monetary policy (e.g., see Siklos and Neuenkirch 2015).

The contribution by Masciandaro and Romelli, chapter 3, deals with the issue of CBI, reminding us that it is easy to fall into the trap of assigning too much emphasis to de jure independence, not only because de facto independence is more relevant to an assessment of the success of monetary policy but also because de facto independence changes over time, rising and falling as other macroeconomic and institutional factors put pressure on central bank behavior. The authors also go back to first principles by asking what is the state of the art regarding what makes a central bank autonomous. It is clear that CBI need not be sui generis, as it is often portrayed in the vast literature on the consequences of providing autonomy to a central bank. Ultimately, a central bank must respond to public opinion, directly or indirectly, and this will affect the effort central bankers exert in delivering the monetary policy that society demands.

CBI, if properly used by central banks, appears to be a persistent phenomenon. That is, once CBI is obtained, it is likely to be maintained over time but will also be significantly influenced by real economic outcomes. For example, variables such as the unemployment rate are more significant, in a statistical sense, in influencing CBI than inflation. Polity also plays an important role in enshrining the role of CBI. In particular, democratic institutions foster more central bank autonomy. It is, of course, as critics would point out, always difficult to boil complex relationships down to a coefficient, especially when the variables cannot be measured precisely.
Perhaps more important, while there are both good theoretical and empirical reasons to support some form of central bank independence, the concept ought to be elastic enough to permit the monetary and fiscal authorities to coordinate their policies during crisis times. The events surrounding the GFC made clear that monetary policy should support a fiscal expansion intended to cushion the significant contraction that followed the near collapse of the global financial system in 2007. That said, supporters of CBI would point out that an accommodative policy can, in principle, also provide an incentive for the fiscal authorities to delay necessary structural reforms. Indeed, the most serious critics of quantitative easing (QE) would highlight the failure of fiscal policy to take advantage of extraordinarily loose monetary policies to put in place investments that are likely to boost future productivity.

Beyond these questions are a couple of additional complications raised by an analysis of the kind offered by Masciandaro and Romelli. Since most economies are open, how do globalization and exchange-rate regimes enter the picture when thinking about the role of CBI? Since textbook depictions of the differences between fixed and floating exchange rates may not be accurate in a world where trade flows and financial flows operate simultaneously, there is a need to consider these influences more seriously. Next, policy has waxed and waned between being forward-looking, when inflation was under control and inflation objectives were met on average, and a tendency to be backward-looking, because central banks were hesitant to remove policy accommodation until there was sufficient data to convince them to change course. How these attitudes toward the setting of the stance of monetary policy interact with how we think about the concept of CBI also needs more research.
Both CBI and the manner in which central bank decisions are delivered raise issues about the governance of central banks. There is, of course, a large literature on corporate governance where, for example, the long-term interests of shareholders play a critical role. However, in chapter 4, Capie and Wood ask who the shareholders of central banks are. The answer is government. However, governments change every few years in a democratic society, and there is really no equivalent at the central banking level of an annual shareholders meeting. Moreover, even if, technically speaking, the government has a controlling interest in the shares of the central bank, in spirit it is the public to whom most central banks ultimately feel responsible. Complicating matters, as Capie and Wood remind us, is that many of the original central banks were private institutions. It is only over a considerable period of time that central banks became the public institutions we know them as today. Hence, the authors suggest that it is useful to draw parallels and lessons from corporate governance for the governance of central banks.

The BoE and the Reserve Bank of New Zealand (RBNZ) serve as the case studies. The BoE is one of the oldest institutions of its kind, while the RBNZ is a relatively young institution. More important, perhaps, New Zealand is the archetypical small open economy, while the same is not true of the United Kingdom. Capie and Wood conclude that small open economies, by their nature, need to be flexible since they are subject to numerous external shocks. As a result, in such societies, a premium is placed on protecting the central bank from the vagaries of undue political
interference while demanding that there should be clarity of purpose and an accountable set of objectives.

It is useful, of course, to consider the governance of central banks in relation to models used in the private sector. Nevertheless, whereas private concerns are expected to maximize profits, the objective function of central banks is more complicated and more difficult to observe. It is also the case that legal tradition plays a role. The Anglo-Saxon approach inherent in the histories of the two central banks considered in chapter 4 is not necessarily portable to other parts of the world with different legal traditions. Next, it is commonplace to examine the relationship between the government and the central bank as one where the former is the principal and the latter acts as the agent. However, there are other ways of thinking about these two organizations, for example, through the prism of preferences for inflation versus real economic outcomes, that is, via the belief that political cycles exist. Finally, when private corporations tinker with governance, it is almost exclusively because of an economic imperative. In the realm of central banking, reforms are often prompted by political considerations. Indeed, Capie and Wood recognize that crises play an important role in the evolution of governance structures over time but that how these reforms are implemented is likely also to be influenced by the resilience and transparency of political institutions.
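The difficulty of “boiling complex relationships down to a coefficient” noted earlier is easy to appreciate once one sees how de jure CBI indices are typically built: statutory criteria are scored from the central bank law and aggregated with fixed weights, in the spirit of Cukierman-style indices. The criteria, weights, and scores below are invented for illustration; they only loosely echo real index families.

```python
# Hypothetical sketch of a weighted de jure CBI index. The criterion
# names gesture at well-known index components (governor appointment,
# policy formulation, objectives, limits on lending to government),
# but the weights and scores are illustrative, not any published index.

CRITERIA_WEIGHTS = {
    "governor_appointment": 0.20,  # security of the governor's tenure
    "policy_formulation":   0.15,  # who sets monetary policy
    "objectives":           0.15,  # primacy of price stability in the statute
    "lending_limits":       0.50,  # restrictions on lending to government
}

def cbi_index(scores):
    """Weighted average of per-criterion scores, each in [0, 1];
    higher values indicate more de jure independence."""
    assert set(scores) == set(CRITERIA_WEIGHTS), "score every criterion"
    return sum(CRITERIA_WEIGHTS[c] * s for c, s in scores.items())

example_scores = {
    "governor_appointment": 0.75,
    "policy_formulation":   1.00,
    "objectives":           0.60,
    "lending_limits":       0.50,
}
index = cbi_index(example_scores)
```

The sketch makes the chapter’s caveat tangible: the resulting single number depends entirely on contestable weights and on scoring statutes that de facto practice, which changes over time, may simply ignore.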
1.3 Part II: Central Bank Financing, Balance-Sheet Management, and Strategy

Before the GFC, it was very difficult to get academics to show any interest in central bank balance sheets. Since then, the ballooning of these balance sheets in the central banks of the main Western countries has become one of the most important issues on the agenda. Part II of this book focuses on their study. Chapter 5 by Reis and chapter 6 by Cukierman cover part of this topic.

Cukierman, in particular, explains how the balance-sheet increase emerged initially, as central banks hurried to fill the gap caused by the virtual closure of wholesale markets. Then, as the zero interest bound was reached, central banks expanded their balance sheets further, trying to drive down interest rates farther out along the yield curve and in markets for other financial instruments, as part of a program to try to increase monetary policy’s contribution to the recovery of the real economy and to get bank lending restarted in particular. However, while the history of how central banks got into this territory in the first place is of great interest in its own right, the major current concern is over how this will all develop in the future. How can central banks move to an orderly system where economies are growing, interest rates are back to the levels that were normal in the decades before the GFC, and inflation remains firmly under control? The Bank of Japan (BoJ) and the
European Central Bank (ECB) are still in the expansionary phase, and the Bank of England (BoE) is debating whether it has reached the turnaround point. Of the major central banks that expanded their balance sheets in a significant way, only the Federal Reserve has embarked on the process of the return toward some sort of new normality, and that is still rather hesitant and without a clear long-run plan. It is tempting to argue by analogy that these banks are facing the same sort of problem that homeowners face when they discover rotting timber beams. They must rush around and support the house in the short run, put up scaffolding, remove the rotten timber, determine the causes of the rot, put those right, replace the timber, make other adjustments to strengthen the building, repair the damage, and only then can the supports and the scaffolding be removed in the expectation that the house will survive for the indefinite future, provided it is carefully maintained. However, part of the fear is that the building will never be the same again. Perhaps the supports are still needed; removing the scaffolding may reveal other problems. It is not a simple matter just to reverse the flow of purchases, nor is there any indication that the rate of reabsorption should be the mirror image of the increase. In the first place, the banking system needs to return to proper stability with greater capital and a confidence that the atmosphere that led to the excesses before the crisis will not return. That will provide an environment for markets to operate normally again and for the central bank to bow out of the system, except for its continuing safety-net role. (At least they have had the opportunity to demonstrate the great strength of that net.) In the second place, monetary policy needs to return toward normality. Over the last decade, the fear has been of deflation, and the ECB’s raising of interest rates in 2011 turned out to be premature, to say the least.
In the recovery period, inflation will again be the concern. Although all the main countries may be in reasonably similar parts of the cycle, if they act at different times, this has an effect on the exchange rate, which, outside the United States and to an extent the euro area, also has an impact on the inflation rate, as it is an important channel in the inflationary process. Indeed, monetary policy in China is also part of that particular equation. A falling exchange rate in a period of slack demand does not have its usual inflationary impact and is therefore not such an unattractive policy; indeed, it can lead to competitive devaluations, as was experienced in the 1930s after the 1929 crash. A third complication is the distortions in behavior that the unusual period of low interest rates and quantitative expansion has led to. The most obvious example is the recovery in real estate prices and other asset prices, which are now back to historically high levels and ratios in many countries. Unsustainably high asset prices were an important part of the sharp downturn in the GFC. Central banks are, not surprisingly, cautious about triggering another cycle of financial difficulty just as their economies appear to be exiting from the last one. Indeed, the most critical judgment of the consequences of QE and very low interest rates comes from Schnabl (in chapter 19), who sees them as sowing the seeds for increasing instability, not as solving the problems of the past. For this reason, Cukierman, in chapter 6, puts all of the issues together, including these macroprudential concerns. Central banks and other macroprudential authorities
have sought to drive a wedge between general inflation in the consumer price level and inflation in asset prices, especially real estate, by using tools that impact sectoral lending and borrowing. Such concerns are not unique to central banks that have encountered the zero lower bound (ZLB). Even in Australia and New Zealand, where interest rates have been low but not extreme, house prices have taken off, and by some counts, the Auckland housing market has shown the greatest tension in terms of debt-to-income ratios. In that case, a raft of constraints, limiting investment from overseas, restricting loan-to-value ratios, and increasing capital requirements, does appear to have cooled the market. Experience elsewhere, such as in Hong Kong, suggests, however, that such periods of cooling may prove temporary if the underlying constraints on supply from the availability of building land and the pressure on demand from immigration continue. The balancing act involved in trying to return to normality without causing worse problems is therefore considerable. Indeed, there is a fourth concern that relates more directly to the management of the central bank’s balance sheet, namely, that as interest rates rise, asset prices will fall and central banks could realize losses if they sell assets below purchase cost. In one sense, this can be avoided if assets are held to maturity and central banks manipulate the term structure of their holdings appropriately so that they can still sell enough assets to absorb the required amount of liquidity from the financial system and banks in particular. Reis also looks at the economy-wide concerns in chapter 5 but from a different perspective. He notes that in QE, the central bank is in effect assisting fiscal policy. On the one hand, the state is seeking to expand the economy in the post-GFC downturn by running large deficits and raising substantial new debt to finance them.
The central bank, on the other hand, is buying this debt on secondary markets and giving financial institutions the resources to buy further new debt, when it is issued, with the resulting proceeds. This sounds like an unbelievable money machine, and at some point, the process does have to come to an end before the system explodes. Central banks can become insolvent when they go beyond the point where the state can borrow the money to bail them out. In the meantime, however, as both Reis and Cukierman point out, the central bank could, in effect, issue helicopter money, as in a depressed environment the inflationary consequences are not apparent. However, here, too, there is a limit to how much the printing press can be used before confidence is lost and hyperinflation ensues. In any case, giving money directly to people would be a highly politicized decision and not something a central bank would do without the direct encouragement of the government. Reis takes the issues a step further by considering the extent to which the central bank can redistribute resources within a financial area, as the ECB has done since the sovereign debt crisis struck in 2010. Clearly, simply redistributing seigniorage dividends in a manner different from the capital key is not going to be politically possible, but in 2014, the ECB did agree to pay back to Greece the extra revenue it was earning from holding high-interest Greek government debt. It is perhaps more interesting to look at how the ECB has effectively been able to redistribute through emergency liquidity assistance to
the Greek banking system and through allowing major imbalances to build up in the TARGET2 system. The ECB has thus been able to push the envelope quite considerably beyond simply buying securities in secondary markets, and with the option to undertake “outright monetary transactions,” it can do so further. Central banks have not reached the end of what they could do with their balance sheet to assist monetary policy and economic recovery. For example, while forward guidance is normally used simply to indicate what expected economic outcomes would imply for the setting of monetary policy under prevailing policy, it could be used to indicate that central banks will permit higher inflation for a while in the interests of recovery. A more drastic move discussed by Cukierman is to alter the role of cash so that effectively it can also attract a negative interest rate to drive down the short-run base of the entire system. At one extreme, the central bank could simply end the use of notes and coins except for trivial transactions and replace them with a digital equivalent whose value can fall. At the other extreme, notes could be dated so that they have to be exchanged at regular intervals or see their value fall. Either way, this seems to be in the realm of the theoretically possible rather than the politically likely. However, as we discuss later in this chapter, the introduction of digital currency (without any implication that its value might be written down) is a much more reasonable possibility, on which a great deal of practical research is currently being undertaken. Other theoretically possible changes in the role of the central bank are also discussed by Cukierman, such as the introduction of some version of the Chicago Plan where commercial banks have a 100 percent reserve requirement.
But unwinding at least some of the central banks’ unusual balance-sheet position derived from the GFC seems more likely than further steps into more extreme territory. Ultimately, the concern for both Reis and Cukierman is that the deeper the involvement with semifiscal issues, the more likely it is that the independence of the central bank will be compromised. On the one hand, the solvency of the bank may become an issue with its expanded balance sheet, while on the other, the fiscal authority is so stretched that it might feel it needs to constrain the central bank’s shrinkage of its balance sheet in the interests of its own stability. As Cukierman points out, with the expanding role of the central bank in the field of financial stability and macroprudential policy, it increasingly needs to be closely involved with the other main public sector actors, particularly the ministry of finance. Chapter 7, by Thornton, is a longer-term review of monetary policy strategies over the last one hundred years or so and hence is somewhat separate from the others, except insofar as it also obviously covers the balance-sheet expansions of recent years. It provides a very interesting history of the debate about how monetary policy can affect inflation and aggregate output in the economy, where new ideas have emerged as existing theories appeared to be contradicted by actual behavior, with the insights of John Maynard Keynes and William Phillips being interesting examples. A second level of debate has been over which transmission mechanisms are leading to the effect. To some extent, the experience may well have been that a change in policy led to a change in behavior. Goodhart’s law is a helpful case in point; as soon as money targeting became
popular in the 1980s, what appeared to have been a stable money demand relationship evaporated, leading to the adoption of inflation targeting with a much more pragmatic approach to the relationships but with a firm emphasis on looking forward rather than correcting previous errors. As Gerald Bouey, governor of the Bank of Canada, put it in 1982, “we did not abandon a monetary target, it abandoned us” (Crow 2013, 40). The general philosophy behind inflation targeting is very simple: if inflation looks as if it is going to rise above acceptable levels, you should tighten monetary policy, and similarly, if it looks as if it is going to fall, you should loosen. However, there has been considerable debate and very extensive modeling effort to try to make policy more accurate. In chapter 7, Thornton contrasts interest-rate targeting with money targeting and forward guidance, and it is this last that links his work very firmly with the other two chapters in this part of the book. He is critical of forward guidance not simply because it is state contingent, which means that it is still difficult for people to form a view of what will happen in the future, but also because much of its rationale depends on how expectations of the term structure of interest rates are formed. He then goes on to contrast inflation targeting with nominal income targeting and QE, again providing a link with the two earlier chapters. As with forward guidance, Thornton is very critical of QE and memorably remarks that “Bernanke (2014) quipped that ‘the problem with QE is that it works in practice but doesn’t work in theory.’ The problem, of course, is that if it doesn’t work in theory, it won’t work in practice, either.” As a result, Thornton’s conclusion about the appropriate monetary policy strategy is rather negative.
Thus far, all models are too simplistic to explain what the optimal reaction should be, and hence central banks tend to take a rather pragmatic approach. While QE and forward guidance may be the latest policies, it is clear from Thornton’s conclusion that he expects the verdict of time on them to be rather negative as well.
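The general philosophy of inflation targeting described above is often formalized in the literature as a Taylor-type rule. The version below is a stylized illustration only, not a rule taken from any of the chapters:

```latex
\[
i_t = r^{*} + \pi_t + \phi_{\pi}\,(\pi_t - \pi^{*}) + \phi_{y}\,y_t ,
\]
```

where \(i_t\) is the policy rate, \(r^{*}\) the equilibrium real rate, \(\pi_t\) inflation, \(\pi^{*}\) the inflation target, \(y_t\) the output gap, and \(\phi_{\pi}, \phi_{y} > 0\). Because \(\phi_{\pi} > 0\), the nominal rate rises more than one-for-one with inflation above target, which is the “tighten when inflation looks like rising” prescription in operational form.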
1.4 Part III: Central Bank Communication and Expectations Management

Whereas in the 1970s and 1980s it was thought that monetary policy was most effective when it was as opaque as possible, attitudes have certainly changed in recent decades. Nowadays, it appears that central bankers can hardly be too transparent or too open if they are to be successful. Exemplary in this context are two quotes from Alan Greenspan and Ben Bernanke. While speaking to a Senate committee in 1987, Greenspan stated, “Since becoming a central banker, I have learned to mumble with great incoherence. If I seem unduly clear to you, you must have misunderstood what I said.”3 Whereas to Greenspan obfuscation was key, the opposite holds for his successor, Bernanke: “As a general matter, the more guidance the central bank can provide the public about how
policy is likely to evolve (or about the principles on which policy decisions will be based), the greater the chance that market participants will make appropriate inferences—and thus the greater the probability that long-term interest rates will move in a manner consistent with the outlook and objectives of the monetary policy committee.”4 As these quotes indicate, the transition toward more transparent communication set in well before the GFC. Nevertheless, the latter did significantly change the communication policies of most central banks. The three chapters in this part of the book look into today’s role of central bank communication and transparency in the conduct of monetary policy. Chapter 8 gives a survey of the different ways in which central banks communicate nowadays and how successful they have been. Subsequently, chapter 9 provides a more in-depth analysis regarding one particular communication channel of the US Federal Reserve, the Federal Open Market Committee (FOMC) postmeeting statements. Chapter 10 is devoted to measuring and comparing the degree of transparency of central banks around the world. As argued by de Haan and Sturm in chapter 8, central banks communicate with the public along several dimensions. Communication can concern the objectives of monetary policy, strategy and the decision-making process, (upcoming) macroeconomic conditions, or actual or future policy decisions. Openness on each of these dimensions can be translated into degrees of transparency. In line with this, Dincer, Eichengreen, and Geraats distinguish in chapter 10 among political, operational, procedural, economic, and policy transparency.
Although the increased independence of central banks all around the world has been a clear driver of higher accountability standards and thereby increased reporting on and openness of objectives, strategies, and procedural aspects, in practice most of the interest rests on transparency and communication directly related to active and future monetary policy and the underlying economic motivation thereof. As indicated by the above quote from Bernanke, this kind of communication is expected to make monetary policy more effective and has, according to many, turned into a separate instrument of central banks to reach their objectives. Besides almost directly controlling short-term interest rates through, for example, short-term open market operations, communication can influence expectations about future short-term interest rates, thereby affecting long-term interest rates. This instrument gains in value especially when facing the effective lower bound of short-term interest rates, where traditional tools become ineffective. This so-called forward guidance comes in different shapes and forms, and de Haan and Sturm discuss their pros and cons. Although the academically oriented literature suggests that forward guidance would be most effective if central banks were to commit, this is not what is actually observed. Although clear statements about the future policy path are likely to have a stronger impact than more cautious ones, a central bank fears a loss in credibility when changes in economic conditions force it to deviate from such an announced path. Furthermore, history has shown that it is very difficult, if not impossible, to formulate watertight state-contingent guidance. Nevertheless, there is general agreement that forward guidance did and does have a substantial effect on interest-rate expectations.
Furthermore, it plays an important role in the unconventional monetary policy (UMP) instruments that have been introduced during and after the GFC, partly to cope with reaching the effective lower bound. Through the so-called signaling channel, in which the central bank communicates about the size and duration of its asset purchase programs, the effectiveness of these programs is boosted. Without proper communication, the stimulating impact of such programs might be offset by expectations of higher policy rates. A final broad topic discussed in chapter 8 is the management of inflation expectations. Communication helps to anchor inflation expectations, so that changes in nominal short-term interest rates translate into changes in real rates. This allows the economy to return to its long-run path faster. Whereas studies using inflation forecasts in general find that explicit inflation targets do help anchor inflation expectations, studies focusing on the general public’s knowledge of central bank objectives and firms’ and households’ inflation expectations come to more sobering conclusions. Chapter 9, by Davis and Wynne, zooms in on the communication of the US FOMC. Over recent decades, the FOMC has stepped up communication substantially. For instance, in 1994, it started to release statements immediately after a policy meeting in which policy rates were changed. Since 1999, these statements have been issued after every scheduled meeting. In December 2004, the release of the minutes of these meetings was moved forward and now occurs three weeks after the meetings to which they refer. Following other major central banks, the chairman nowadays holds press conferences at regular intervals. Furthermore, members of the Federal Reserve Board of Governors and the individual Federal Reserve Bank presidents now release their economic forecasts four times a year as part of a regular Summary of Economic Projections.
Davis and Wynne concentrate on the FOMC postmeeting statements and document how these have become longer, more detailed, and more complex over time. Whereas in the early years these statements contained only a vague description of the policy actions of the Federal Reserve, they nowadays contain an assessment of the economy, a balance of risks, a forecast, and what sometimes can be interpreted as commitments on future policy actions. In that sense, these statements have turned into a policy instrument affecting expectations and thereby financial markets. To test the latter, the authors use daily financial market data and estimate a daily time series of US monetary policy shocks. They also characterize some of the linguistic features of these statements and show that these features correlate with the identified monetary policy shocks. Especially during the period in which the federal funds rate was at its effective lower bound (December 2008–December 2015), the absolute size of the monetary policy shocks increased on statement days and is a function of the length and the complexity of the FOMC statements. When controlling for the actual policy change, a similar relationship is shown to exist for the period before that as well: the impact of the FOMC’s policy statements has increased as they have become longer and more complex. Chapter 10 broadens the scope again and looks at transparency around the world. Transparency is a commitment device disciplining central banks in their communication. It serves as a mechanism for accountability, a necessary condition for central bank independence. It forces independent central bankers to explain how their actions are
consistent with their mandates. It thereby enhances the credibility of the central bank and increases the effectiveness of the policy decisions made. Dincer, Eichengreen, and Geraats’s main contribution is producing and publishing a new transparency index covering a panel of 112 central banks for the years 1998 through 2015. The new index combines and extends the work of Eijffinger and Geraats (2006) and Dincer and Eichengreen (2008, 2014) by being more granular and thereby more focused while capturing developments in the postcrisis world. As mentioned earlier, it distinguishes among the political, economic, procedural, policy, and operational dimensions of transparency. As the new data show, central banks vary substantially across these different dimensions. From the point of view of the central bank’s ability to effectively pursue its mandate, it is important to understand whether, and in what respects, more transparency along these different dimensions is always and everywhere beneficial. The chapter focuses on procedural and policy transparency. Regarding the first, a key aspect is the release of voting records and minutes without undue delay, and its desirability is discussed. With respect to policy transparency, the chapter analyzes its evolution in the wake of central banks’ postcrisis experiments with unconventional policy measures and forward guidance. Over the eighteen years covered, the index reveals a rise in monetary policy transparency throughout the world, irrespective of the level of economic development of the country and the monetary policy framework of its central bank. This trend has weakened in the wake of the GFC. With policy rates near the effective lower bound, the use of forward guidance, on the other hand, has increased substantially. Central bank communication can be fraught with complications.
This may be part of the explanation for why the trend toward greater transparency has slowed rather than accelerated following the crisis. The chapter presents some case studies illustrating that attempts to increase openness can be taken too far. Although these three chapters cover a lot of relevant material and arguably the core of topics related to central bank communication, gaps remain. They do not deal with discussions about how to translate the language of central bankers appropriately so as to be able to use it in empirical analyses. Text analysis is a rich and vastly expanding research area. More in-depth analyses of other major central banks and/or different forms of communication would also have been a natural way to extend this part of the book. Another natural extension would be to recognize that, since the GFC, central banks have moved away from being institutions concentrating mainly on price or inflation stability toward ones in which financial stability has become another key task. Although the resulting complications for communication strategies are touched on in chapter 8, a more detailed analysis of the (future) consequences, also for policy transparency, would certainly have been insightful. Finally, although all three chapters are ultimately interested in how central banks influence expectations, each takes the expectation formation process for granted. The role of the media in this transmission process and how individuals form their expectations are largely still open questions.
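Text analysis of the kind Davis and Wynne perform on FOMC statements can be illustrated in miniature. The sketch below is purely illustrative and is not their methodology: the sample statement is invented, and the two features computed (word count and average sentence length as a crude complexity proxy) stand in for the much richer linguistic measures used in actual studies.

```python
import re

def statement_features(text: str) -> dict:
    """Compute toy linguistic features of a central bank statement.

    Illustrative proxies only: published work uses richer measures
    such as readability grades and dictionary-based tone scores.
    """
    # Split into sentences on terminal punctuation, dropping empties.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    # Count alphabetic word tokens.
    words = re.findall(r"[A-Za-z']+", text)
    return {
        "n_words": len(words),
        "n_sentences": len(sentences),
        "avg_sentence_length": len(words) / max(len(sentences), 1),
    }

# Hypothetical mini-statement, invented for illustration.
stmt = ("The Committee decided to maintain the target range. "
        "Inflation has moved closer to the objective. "
        "Risks to the outlook appear roughly balanced.")
feats = statement_features(stmt)
```

Relating estimated statement-day policy shocks to features like these is, in essence, the exercise the chapter performs at scale.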
1.5 Part IV: Policy Transmission Mechanisms and Operations

Having instruments to carry out monetary policy requires knowledge of how these instruments affect the working of the economy. Economic theory distinguishes many different transmission channels. This part of the book is not intended as an overview but rather as a discussion of some of these transmission channels or ways in which monetary policy operates in practice—also in light of the GFC. From today’s perspective, it looks very much as if this crisis has also had a lasting impact on the ways monetary policy actions are transmitted into the economy. This part consists of four chapters, highlighting such aspects in different ways and from different angles. Not only does the real estate sector play an important role in most financial and economic crises, but it is also the sector through which most of the commonly distinguished transmission channels work. Arguably, no other sector is more sensitive to changes in interest rates or balance sheets. That said, cycles observed in construction are often not in sync with macroeconomic cycles—a reason it might not always be straightforward for central banks to meet the objectives of monetary and financial stability simultaneously. Although this is increasingly recognized by policymakers, this new stylized fact has not yet been fully digested and incorporated by economic research. Chapter 11, by Sinclair, deals with these and related issues by setting up a small model that links house prices and quantities using stock-flow concepts, and explores how these variables are shaped over time. In that setup, it looks at how monetary and financial variables interact with the construction sector and analyzes what role the government should play to keep the financial system stable. Land and real estate are notoriously immobile, but bubbles created in this market are first and foremost national problems that need national solutions.
Macroprudential instruments, such as ceilings on loan-to-value ratios for lending on real estate, maximum mortgage durations, and refined minimum capital ratios on banks, can all help to prevent bubble formation in real estate markets and thereby financial instability. Allowing flexibility in these instruments might circumvent potential conflicts with monetary stability objectives. Looking at it from this angle, Sinclair recognizes that what is nowadays often labeled the financial cycle might differ substantially from the regular business cycle, thereby strengthening the case for viewing macroprudential policy as providing a necessary set of additional instruments to cope with the ever-increasing complexity of the world. What is not discussed and therefore left for future research is the role of demographics and the impact of secular stagnation on real interest rates and housing prices. Gambacorta and Mizen focus in chapter 12 on the most traditional of all transmission channels, the interest-rate channel. After summarizing its theoretical foundations, the chapter reviews the literature on the pass-through of policy into lending and deposit rates. It is thereby realized that the environment has turned more complex since the GFC and not only that other channels specific to the banking system impinge on
this traditional transmission channel but also that it has become more important to look into the different funding sources in order to understand the cost of bank funding. The so-called bank lending channel and the bank capital channel are discussed from this angle, as is the influence communication has on expectations about policy rates. Although the chapter also considers forward guidance, it does not go into issues related to the effective lower bound and whether there is a difference in this respect between policy and retail rates. Despite concerns about a weakening of monetary transmission, most research still points toward a strong and robust relationship between policy and retail rates. The role played by lending standards in determining the strength or weakness of the interest-rate channel is potentially important but is not touched on in this chapter. Further and future institutional changes, for instance, along this dimension, will likely continue to alter the banking system and thereby influence the transmission process of monetary policy. The chapter argues that this is perhaps most evident in Europe, where the emerging banking union will trigger more cross-border banking competition. Digging deeper into the interest-rate transmission channel, chapter 13, by Fuerst and Mau, deals with the term premium, its variability, and the role monetary policy plays in it. The effective lower bound on short-term policy rates triggered the introduction of new instruments aimed at directly affecting returns on long-term bonds and thereby the term premium. In an environment in which the balance sheets of many major central banks are likely to remain large for a long time, the question emerges to what extent the term premium should remain an input, or even a target, for monetary policymakers.
To answer this, Fuerst and Mau use a dynamic stochastic general equilibrium (DSGE) model in which either Epstein-Zin (1989) preferences separate risk aversion from intertemporal substitution elasticities or—following Carlstrom et al. (2015)—asset markets are segmented such that short and long bonds are priced by different (constrained) agents. Whereas in the first case, the term premium should not directly concern policymakers, in the second one, there is a clear role for monetary policy to smooth fluctuations in the term premium. The latter model can be calibrated such that it matches the empirical mean and variability of the term premium. This does not appear possible when using the first model. The authors therefore conclude that there are significant welfare gains to a central bank smoothing the term premium. Long-term bond yields can be decomposed into average expected future short rates and term premiums. Taking the conclusions of this chapter at face value requires policymakers to distinguish between these different components in order to measure the term premium, which is not directly observable. As with discussions involving potential growth and the non-accelerating inflation rate of unemployment (NAIRU), this is likely to pose policy issues that require further research. In chapter 14, Toporowski looks into the buying and selling of financial assets by the central bank as a way to implement monetary policy. He first takes a historical perspective and documents the use of open market operations as an alternative to interest-rate policy when that policy cannot be used and as a supplement to such policy when it appears to be ineffective. In the modern world, the chapter argues, it is important to
distinguish between reverse purchase (or sale) agreements and “outright” purchases (or sales) of securities. Whereas the former agreements have the attraction that they allow central banks to inject (or withdraw) liquidity over a fixed time horizon, without committing to provide such reserves in the future and thereby potentially removing incentives to sound bank management, they ceased to have enough of an impact after the GFC. This caused a dramatic switch to outright purchases. Whereas prices at or near the bottom of the market have allowed central banks to earn capital gains, they have also—given the thin capital base central banks work with—substantially increased the risks in their balance sheets. It is furthermore argued that the effectiveness of open market operations depends not only on the state of the economy but also on the complexity and liquidity of the financial system.
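The decomposition underlying chapter 13’s policy problem, noted above, can be written out explicitly. As a standard illustrative convention (not a formula taken from the chapter), the yield on an \(n\)-period bond satisfies

```latex
\[
y_t^{(n)} = \frac{1}{n}\sum_{i=0}^{n-1}\mathbb{E}_t\!\left[r_{t+i}\right] + TP_t^{(n)},
\]
```

where \(r_{t+i}\) denotes the future one-period short rate and \(TP_t^{(n)}\) the term premium. Only \(y_t^{(n)}\) is observed directly, so measuring the premium requires an estimate of the expectations term, which is why it poses the same kind of measurement problem as potential growth or the NAIRU.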
1.6 Part V: The New Age of Central Banking: Managing Micro- and Macroprudential Frameworks

Perhaps nothing symbolizes the changes in central banking since 2007 better than the recognition that micro- and macroprudential concerns are not easily separable. The veritable explosion of academic work over the past decade has at least provided the necessary ingredients to equip policymakers with the “known knowns” as well as the “unknown unknowns,” to use the expression made famous by former US defense secretary Donald Rumsfeld. This much becomes clear after reading the four chapters in this part. Nevertheless, several complications arise in the “new era” of central banking that neither policymakers nor the profession has yet fully grasped. First, there is some vague acknowledgment that monetary and financial stability go hand in hand. However, while there used to be widespread agreement about what constitutes good conduct in monetary policy, there was and continues to be a lack of clarity about what constitutes financial stability. About the best that can be said is that when it comes to financial stability, we know it when we see it. Adding to the difficulties is that monetary policy involves, for the most part, monitoring easily observed policy instruments such as an interest rate, as is the case with the main objective of monetary policy, namely, a form of price stability.5 In contrast, measuring the quality and effectiveness of micro- and macroprudential regulations and policies is proving to be exceedingly difficult and subject to a number of different interpretations (e.g., see Barth, Caprio, and Levine 2013; Lombardi and Siklos 2016; and references therein). Second, before the crisis, there was some consensus about the desirability of assigning micro- and macroprudential authority to separate institutions even if some coordinating mechanism would be required to ensure that the objective of financial stability is met.
While the capacity of a particular country to support several institutions is one determinant, the notion that a central bank may be open to a moral
18 Mayes, Siklos, and Sturm hazard type of dilemma by becoming responsible for supervising and regulating banks as well as being accountable for macroprudential objectives led several countries to assign the relevant responsibilities to separate institutions. It was never made clear by the governments that devised such arrangements whether these institutions would be equal or whether the central bank was first among equals, especially since many central banks (e.g., the US Federal Reserve) were born out of a need to maintain some form of financial system stability. Moreover, it was often more of a hope and a prayer that separate micro- and macroprudential regulators would cooperate in, if not coordinate, their responses when crisis conditions emerged. The experience of the former Financial Services Authority (FSA) vis-à-vis the BoE is likely the case study par excellence of the failure of two critical institutions to operate in tandem in a time of need. Davis, in chapter 15, considers the broad sweep of financial regulation and supervision and the role of the central bank over time and across several economies. He concludes that the wheel has turned so that, like the proverbial pendulum, we are “back to the future” (Masciandaro 2012). This means that the maintenance of financial system stability, long regarded as the core function of a central bank but lost during an era when monetary policy was thought capable of ensuring calm financial conditions and deregulation was believed to lead to economic salvation, is now returning to the portfolio of responsibilities that central banks acquired in the aftermath of the GFC. It comes as no surprise, as Orphanides (2013) and others (e.g., Siklos 2017) have pointed out, that central banks risk being overburdened.
While some are willing to see the return of responsibility for financial stability as almost natural, given the historical origins of many central banks, others highlight the increased complexity of financial systems and the growth of government as two factors that ought to make policymakers wary of making central banks even more powerful than they currently are. Even if we accept that central banks should be given more responsibilities, the difficult choice of deciding how much relative weight to put on monetary stability versus financial stability has yet to be addressed. Moreover, if the public is unable to observe how much emphasis a central bank places on one set of responsibilities over another, then much of the progress in central bank transparency and accountability may well be lost. Taylor, Arner, and Gibson begin chapter 16 by arguing that the emphasis in central banking on the maintenance of price stability meant that monetary authorities around the world effectively shied away from worrying about financial stability, which, according to a former central banker, is part of the “genetic code” of central banks. Supported by economic theory, this created conditions that were ripe for a large financial crisis. Focusing on governance arrangements among large systemically important economies (the United States, the United Kingdom, and the euro area), the authors consider how the GFC changed institutional arrangements, leading to a much greater emphasis on the control of systemic risks. Their tour d’horizon tends to find favor with the so-called single-peak arrangement (see Haldane 2009) of the BoE, wherein separate but largely equal bodies are responsible for both monetary and financial stability policies
but are housed under one roof. In contrast, recent reforms in the United States have resisted giving the Fed sole responsibility over financial stability, while the euro area’s response is not only a work in progress but, as this is written, resembles a hybrid of the US and UK responses to the GFC. Nevertheless, it may be somewhat of an exaggeration to conclude, as the authors seem to suggest, that a “new macroprudential consensus” has been reached, as none of the current systems has been tested by a financial crisis. Llewellyn, in chapter 17, makes two interesting observations. First, regardless of one’s assessment of the effectiveness of regulatory changes since the GFC, they amount to substantial changes of an order that we have not seen for decades. Second, policymakers have woken up to a recognition that reducing bank failures is not a sufficient end in itself for financial regulation. It is equally, if not more, important to minimize the social costs of financial instability. Indeed, the chapter goes on to explain how the previous objective of limiting bank failures cannot be met without proper recognition of the social aspects of financial regulation. In other words, regulation cannot be exogenously determined without considering the process by which it is enforced. Indeed, over and above these elements, policymakers, regulators, and supervisors have paid insufficient attention to the “culture” of banks, and this is an aspect that also contributes to the endogeneity of bank regulatory structures observed globally. It is clearly essential to recognize that the scope and structure of regulation of the financial system are not independent of other institutional arrangements in different financial systems.
Nevertheless, beyond corporate culture, there is also political culture to consider, and it is unclear how the nexus between the two complicates recommendations for regulations intended to mitigate the likelihood of future financial crises. Taken together, the problems highlighted by Llewellyn give rise to regulatory arbitrage, and as the chapter suggests, we simply do not have a comprehensive set of tools and policies to mitigate attempts to exploit the financial system that contribute to creating the conditions for the next financial crisis. There is an urgent need to develop a strategy. As in other areas, a successful strategy will have to be calibrated so that it recognizes country-specific factors. Dwyer’s chapter 18 moves away from solely considering the institutional consequences of the shift toward greater emphasis on financial stability through macroprudential regulation to also ask, through the lens of macroeconomic consequences, whether there are good grounds for thinking that existing macroprudential regulations will be successful. The chapter reviews what we know, including several historical lessons, and concludes that emerging regulatory regimes are built on shaky foundations. Dwyer reminds us not only that it is a fallacy to believe that there exists a perfect regulatory structure designed by government that can prevent the worst effects of a financial crisis, something that economists have known for decades, but also that the “time inconsistency” that plagues monetary policy applies in similar fashion to macroprudential policies. In other words, it is not enough to design a macroprudential policy strategy. Instead, a successful policy aimed at maintaining financial system stability must also provide the right incentives so that regulators and policymakers can maximize the likelihood that the best solutions are adopted. Throwing cold water on the ability of existing
macroprudential strategies to prevent the next financial crisis is possibly a valid conclusion. The harder task, left out of the existing literature, is to find concrete solutions to the incentive compatibility problem highlighted in Dwyer’s contribution. Moreover, designing clear rules of conduct for macroprudential regulation is clearly important, but there is an equally critical need to acknowledge and build in escape clauses and directives so that the scope for discretion is understood by all. This is perhaps as important as the design of incentives that Dwyer stresses, as Tucker (2016) has emphasized (see also Siklos 2017).
1.7 Part VI: Central Banking and Crisis Management

Before the GFC, most of the major central banks were largely content with the crisis-management arrangements they had in place. However, once the crisis struck, it became clear that all of the regimes in place had difficulties, many of them catastrophic. There has therefore been a dramatic flurry of activity over the ensuing decade to try to improve such systems, which is yet to be completed. Even without any new measures, the existing changes will not be fully implemented until the mid-2020s. The principles of good crisis management were well known before the GFC, but fortunately, most schemes had not been vigorously tested. The United States, for example, had had to handle many individual bank failures, including a major concentration of them in the savings and loan crisis of 1986–1995, which led to improvements in systems, in particular through the Federal Deposit Insurance Corporation Improvement Act of 1991. But these were failures of more than one thousand small institutions, and even though the total losses were considerable, at $160 billion, they were not sufficient to result in a recession. The Nordic crises of 1989–1993 and the Asian crisis of 1997 were, however, rather more dramatic and resulted in significant changes. Nevertheless, part of the reason for the catastrophic nature of the GFC is that, on the one hand, the Nordic countries actually managed to handle their crises rather well, while on the other, the Asian countries had made themselves far less vulnerable to the problem in the first place. This provided the opportunity for central banks to be at worst complacent and at best overconfident about their ability to handle any new crisis. The story told in the three chapters of this part of the book, therefore, only begins in the 1980s. In chapter 19, Schnabl explains the evolution of central bank crisis management over the period by reference to Austrian business cycle theory.
He shows that the approach has been asymmetric and as a result has increased instability. Mayes, on the other hand, looks in chapter 20 at the lessons learned over the period and how they are being implemented, before making a critical appraisal of how they might work in the future. Honohan, Lombardi, and St. Amand’s chapter 21 takes a more prescriptive approach and
sets out not just what is being done but also what needs to be done in the light of experience to ensure a well-run system. Crisis management is inherently asymmetric. Although planned for, it is only activated when needed or thought likely to be needed. In good times, central banks focus on crisis avoidance, although both the system and individual institutions must be structured so as to make efficient crisis management possible. There is some degree of symmetry in macroprudential measures in that they are strengthened as the economy expands and builds up pressures but are released in the downturn. However, that symmetry is somewhat limited. It is rather like the principles behind the “Greenspan standard,” whereby monetary policy leans against the wind as the economy grows faster but not to the full extent of the inflationary threat, because the authorities can respond very vigorously when the bubble bursts and (it was thought) avoid a recession. The reasons for not intervening fully on the upside were twofold. First, it might very well be that there had been technical or other innovations that permitted a higher noninflationary growth rate, and it would be a really bad idea to nip that growth in the bud and prevent it from emerging. Second, if the central bank pulled the plug, even where warranted, the blame for causing the downside would fall on it. If some other event intervened, then the bank would only be responding to the pressures and hence largely avoiding the responsibility. Schnabl interprets crises entirely in the framework of monetary (mis)management and views the Greenspan standard approach, which he regards as a general characterization of monetary policy over the last thirty to forty years, not just something relating to the United States, as a progressive deviation from a sustainable policy.
It is not so much that the low inflation environment that has prevailed, with its apparent stability, has driven equilibrium real interest rates down but that the asymmetric policy is destabilizing. These outcomes are particularly strong in the post-GFC era but also acted as a fundamental cause of the crisis itself in the early 2000s. The excessively low interest-rate regime is cemented by a perception that sustainable growth rates in the economy have fallen, leading people to believe that they represent a lower equilibrium rather than the conditions for the next destabilizing financial cycle. The cycle Schnabl describes has real components as well as money- and credit-driven ones. Thus, there is both excessive investment and a boom in credit and asset prices generated by the excessively low interest rates. What makes the whole of this process worse is that deregulation of the financial system internally allows the cyclical process to be more dynamic, while removing financial barriers internationally increases its contagion around the world in what Schnabl calls “wandering bubbles.” One country’s low rates cause its exchange rate to fall and give it a competitive advantage, which leads other countries to respond. This cycle can be seen clearly in the 1980s, beginning with the Japanese, both in the run-up to their crisis and in the response to it. Schnabl’s chapter feeds through to the other two chapters in this part of the book when he refers to the behavior of the supervisory authorities once banks get into trouble. The banks themselves will be faced with a buildup of nonperforming loans in the crisis. The obvious response is to try to build a bridge over the initial period of difficulty by
advancing more to the distressed borrowers so they can service what they have already borrowed. This contributes to the asymmetry of the cycle, as does the next step of bailing out the banks to stop them from failing and worsening the credit crunch. Thus, little action is taken in the upturn to moderate the excesses, while vigorous action is taken in the downturn to avoid the consequences of those excesses being realized. It thereby lays the grounds for a progressive amplification of these cycles and at the same time encourages the decline in trend productivity because the inefficient are kept in business. Schnabl’s view of the future is doubly pessimistic. Indeed, the perspective is even worse, as this asymmetry contributes to increasing inequality, favoring those who can acquire assets that benefit from the approach and harming those who suffer from the slower growth and lower and more precarious incomes. The consequences are reflected in higher debt ratios all around, for countries and for households, both of which will push the system closer to unsustainability and collapse, assisted by growing debt service ratios. As Honohan, Lombardi, and St. Amand put it, “Overzealous crisis management can unwittingly sow the seeds of the next crisis.” The other two chapters are generally more optimistic in tone, although neither suggests that it is possible to get to some nirvana without the cycles and crises. They focus in particular on the lessons learned as a result of the GFC. The most important is that the authorities have to be able to handle problems in failing institutions promptly and at low cost without a simple taxpayer bailout. Moreover, that ability to handle the problem—to the detriment of the existing owners, management, and creditors of the institution—needs to be thoroughly credible.
In that way, owners and managers should want to run their businesses more prudently, but if danger threatens, they will want to make sure they can organize a private-sector solution that maximizes the value for them and increases the chances of retaining their jobs. The asymmetry of the process is clear. Action when a bank fails cannot be avoided, but in allowing pressures to build up and in intervening early to head off more serious problems, there is a choice, and the tendency in the past has been toward forbearance. Even with compulsory early intervention and prompt corrective action in the United States, problems have been allowed to mount, and the adverse signals have been disregarded. Indeed, as Mayes points out, crises normally occur because collectively people talk themselves out of the need to act (part of the “this time is different” syndrome highlighted by Reinhart and Rogoff 2009). Honohan, Lombardi, and St. Amand argue cogently against the dangers of “groupthink” that ostracizes those who try to question the general feeling. The system therefore always has to be able to cope with missed opportunities and unexpected shocks. As they put it, “In short, what is needed for good central bank crisis management is preparedness and a willingness to take quick and decisive action.” If anything, there is a tendency to spend too much effort on problem avoidance, because the costs of a crisis are so high that even small chances of reducing their occurrence come out well in cost-benefit analysis. However, the costs of crises are only borne if they occur, but the costs of the avoidance measures are borne all the time—even if there is nothing to avoid. Using macroprudential tools will help in limiting asset price
bubbles and credit expansions. The capital and liquidity buffers currently in the process of implementation are intended to be large enough that the systemically important banks in the global financial system would not become insolvent in the face of shocks of the size experienced in the GFC. As a result, attention has now passed to the process of recapitalization, which is to be achieved by “bailing in” the creditors. Large institutions cannot be allowed to stop working, or they risk bringing the whole of the rest of the financial system down with them because of their degree of interconnection. The resolution method therefore needs to be able to remove the owners and senior management and recapitalize the institution while it continues to function. Although a longtime advocate of bailing in, from before the term had been coined, Mayes is cautious about whether it can be used in all circumstances and, indeed, whether such an ability makes crises more or less likely. If the threat of a bail-in panics holders of such instruments in all banks and not just those in trouble, then it could trigger a market crisis of its own. Much of the debate in practice is going to be over who will be bailed in. As has already been seen in the case of Italy, in 2016, the government preferred to use a preventive bailout of Banca Monte dei Paschi di Siena rather than let retail holders of bonds be bailed in. Similarly, in the cases of Veneto Banca and Banca Popolare di Vicenza, in 2017, where the same technique could not be used because the ECB had determined the banks had failed, the government preferred to inject taxpayer funding into the resolution of these two banks rather than let such bondholders bear the losses. The list of those who can be bailed in without wider consequences more severe than those of a bailout may not be long enough.
However, the main point that Mayes addresses in chapter 20 is that despite the advances being made in resolution tools, coordination among the authorities, and macroprudential preparedness, the tools and responsibilities are primarily national, while the major financial institutions being regulated are international. At worst, a national authority on its own does not have the resources or the powers to handle a major insolvency without a disorderly resolution, as illustrated by Iceland and Cyprus, among others. At best, the problem is that the authorities in the various countries, although willing, are unable to cooperate sufficiently and fast enough to address the problem in time. Hence, the favored international solutions have tended either to put the responsibility on the home country of the institution for solving the entire problem itself (labeled single point of entry) or to make sure that each country is able to solve the problems in its own jurisdiction irrespective of the degree of cooperation from the others (multiple point of entry). Australia and New Zealand have followed the latter route, while the United States and the United Kingdom have chosen the single point of entry. As with all recovery and resolution plans, they are only plans and can only be tested in artificial circumstances. Despite these differences of opinion, Honohan, Lombardi, and St. Amand point out that a considerable “transnational epistemic community” has been established that has agreed on the principles of how the problems should be resolved. The Bank for International Settlements (BIS) and the Financial Stability Board (FSB) have been the major forums for these agreements.
A clear theme that runs through the changes since the GFC broke out is that in general, the role of the central bank has increased. Central banks have frequently become the resolution authority, as in the United Kingdom, and have also become responsible for macroprudential supervision. If the central bank is already the supervisor of individual institutions, then this is a very major concentration of power in the system. This can effectively force the central bank into being a more political body, as demonstrated by the ECB in the case of the Irish and Greek crises explored in both chapter 20 and chapter 21. It certainly propels the central bank into having a much closer relationship with the government, as wider issues may need to be borne in mind when resolving a bank or lending to it when it faces an indistinct combination of liquidity and solvency problems. Honohan, Lombardi, and St. Amand put it even more strongly: “central bankers are inherently political actors.” Thus, the idea of the central bank being able to take a step back from the political pressures and apply a purely technical solution based on rules laid down in advance and clear evidence of the likely outcomes is obviously at variance with reality. The system may therefore change again if crises are perceived to generate conflicts between the central bank and the government—something that becomes more likely when the central bank has a range of objectives, some of which may conflict. It is inherent that problems may occur outside the central bank’s traditional field of direct responsibilities—as was seen with investment banks and the American International Group (AIG) in the United States during the crisis. As the guardian of financial and macroprudential stability, the central bank has to act even if this has repercussions later.
Similarly, when encountering the effective lower bound or the crisis of confidence in the euro area, the central bank has to step outside the traditional box, lend under conditions it would not previously have countenanced, and challenge the limits of its powers. The euro area in particular is operating with a new set of largely untried institutions and, indeed, legislation, with a new Bank Recovery and Resolution Directive, a new Single Resolution Board, and new arrangements for pooling funds with the Single Resolution Fund, in addition to the new supervisory role for the ECB. How well this will work out in practice remains to be seen. Communication plays a critical role in crisis management. Handled badly, it can result in a bank run, as in the case of Northern Rock. The central bank will only be successful in many of its actions if it is credible and if those involved believe the policy will work. Confidence is crucial, and what swings it may be relatively small errors and successes, which may even be due to factors outside the central bank’s control. As Honohan, Lombardi, and St. Amand point out, the central bank and the government need to act in concert for the credibility to hold. But there are strong incentives for each party to try to place the risks involved on the other. Thus, losses incurred by the central bank when a marginal bank fails may look much better to the government than the same losses appearing directly on its own books from the issuing of a guarantee. Without government endorsement, central bank actions can lack legitimacy. Argument from examples is always helpful, and chapter 21 explores the cases of Indonesia in 1997, Argentina in 1989 and 2001, the United Kingdom with Northern Rock
in 2007 and the Royal Bank of Scotland and Lloyds/Halifax Bank of Scotland in 2008, Ireland in 2008–2010, and the euro area in 2010–2012. These illustrate both the mistakes that can be made and the measures that can be successful. With a long enough list of experience, central banks ought to be able to do better at crisis management in the future. As Honohan, Lombardi, and St. Amand conclude, good crisis management requires boldness and decisiveness, but their examples show weakness, delay, insufficiency, and a lack of preparedness. Maybe the next time will be different.
1.8 Part VII: Evolution or Revolution in Policy Modeling?

It should be clear by now that whereas the conduct of central banking rests on a heavy dose of judgment, the success of monetary policy prior to the GFC is also due in no small part to improvements in modeling. One does not have to go back far in time to find dissatisfaction with the large-scale macroeconomic models that central banks and statistical agencies began to construct in the 1960s and 1970s, that is, during the heyday when economists thought that economic policy could safely deliver the economy to a particular point on the Phillips curve. It took Sims’s work (Sims 1980), among others, to bring attention to the “incredible” restrictions built into early large-scale models. Nevertheless, these large models developed decades ago at least had the virtue of making clear that understanding how economies evolve over time is potentially a complex task. Estimating the impact of certain policies precisely is also hazardous and subject to considerable model uncertainty. The first two chapters in this part of the book approach deeper questions about the models that central banks use as inputs into the decision-making process. In chapter 22, Goodhart, Romanidis, Tsomocos, and Shubik build on Shubik’s important contributions on the role that default plays in our understanding of macroeconomic outcomes. It might seem obvious that the failure of financial transactions or the breakdown of certain financial relationships is an ever-present phenomenon in most economies. Hence, this possibility ought to be a concern to central banks. However, in the rush to apply simple models to explain the evolution of key macroeconomic variables, together with the firm belief that financial markets are efficient and do not represent a threat to the real economy, financial frictions were almost completely ignored in the models central banks relied on.
This lacuna continued even as monetary authorities in advanced economies began to develop models based on sound microeconomic principles. These were introduced not because their developers believed that markets were literally frictionless or that heterogeneity in individual behavior was not a fact of life. Rather, such complications seemed to get in the way of understanding how economies respond to shocks from the real economy or from external factors.
The models of the kind described above came to be called DSGE models, and since the GFC, these have often been singled out as one of the culprits in the failure of economics to “see the crisis coming.” The focus of chapter 22’s attack on the DSGE approach is the role of default as the principal form through which financial frictions throw “grease in the wheels” of financial markets. The chapter devotes considerable attention to how badly DSGE models can mislead; policymakers ignore frictions, especially ones related to default, at their peril. Equally important, recognition of these frictions forces the monetary authority, in any formal framework developed to analyze the impact of shocks or policies, to consider the trade-offs between monetary and financial stability, including the much-discussed contention by some that a lesson learned from the GFC is that “leaning against the wind” (LAW) may be a less successful policy than previously believed. As this is written, the debate about when or when not to LAW continues, but it remains largely a battle of ideas at the theoretical level, a point noted at the outset of this chapter. At the more practical level, LAW has fewer proponents, with central banks resorting to “data dependence” to avoid using policy rates to tighten, aided by low inflation and less than impressive real economic growth. The difficulty is not only that data dependence is a potentially overly flexible means by which not to take a stand until it is possibly too late but also that it relegates the economic outlook to becoming far less important to setting the current stance of monetary policy. It is worth recalling that central banking advocates of inflation control used to place heavy emphasis on the economic outlook in deciding how to set policy rates today.
The authors of chapter 22 do not take a stand on the LAW debate, while the next chapter, by Binder, Lieberknecht, Quintana, and Wieland, sees the existing evidence as indicating that LAW policies are not particularly effective. Nevertheless, even these authors would likely admit that the jury remains out on this question. The recognition that financial frictions matter is far from new. Indeed, what we now call financial frictions were well known even during the 1950s. It is just that experience seemed to render them unimportant from a macroeconomic perspective, especially during the period of the Great Moderation. This view, of course, is no longer tenable. Nevertheless, it is also the case that economists in central banks quickly recognized the need to introduce such frictions, together with an acknowledgment that economic agents are heterogeneous. What remains unclear is how best to model financial frictions and even how heterogeneous agents’ expectations should adjust in such an environment or in response to economic shocks. Equally clear, as chapter 22 stresses, is that the dimensionality of more suitable models for monetary policy analysis must increase substantially. Chapter 23 adopts a different strategy vis-à-vis an assessment of the popularity of DSGE modeling in central banks. The authors’ approach centers on the empirical performance of various models. They make a plea for more explicit recognition of model uncertainty and the need for model diversity, since no one model will outperform all others at all times. The authors demonstrate that this kind of diversity is essential not only for providing an estimate of how much model uncertainty exists at any given moment but also because the success or failure of various models over time provides
Central Banking’s Long March over the Decades 27 a mechanism for policymakers to learn how to improve them for policy analysis and forecasting. Although, as shown in c hapter 23, there continues to be, at least empirically, a preference for models that retain a New Keynesian flavor, this is in part because models that attempt to combine what is useful from finance and the recognition that financial frictions are important have not been confronted with data to the same degree as the models that reached a peak in their popularity around the time of the onset of the GFC. Matters become even cloudier when these more sophisticated models that are necessary for a postcrisis world are confronted with the various creative interventions implemented by central banks since 2008, generally referred to as QE. Finally, c hapter 23 confronts the modeling challenges created by the growing acceptance of so-called macroprudential instruments deployed to accomplish for financial stability what the central bank policy rate was able to do to deliver inflation stability. Unfortunately, there is a plethora of macroprudential policies, and together with the profession’s inability to date to agree on how to define financial system stability, research on the appropriate mix of monetary and macroprudential policies is in its infancy (see Lombardi and Siklos 2016 and references therein). The age-old rules-versus-discretion debate, thought suspended for a time when the Taylor rule seemed an adequate depiction of how monetary policy can be carried out, has been revived. Once again, as this is written, we are a long way from achieving any new consensus on these questions. All of the foregoing attempts to model economic activity and its dynamics stem from the need for central banks that aim to control inflation and, now, maintain financial system stability to provide an outlook for the economy. 
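For reference, the Taylor rule just mentioned, in its original 1993 formulation, sets the nominal policy rate as a simple function of inflation and the output gap:

```latex
i_t = r^{*} + \pi_t + 0.5\,(\pi_t - \pi^{*}) + 0.5\,y_t
```

where $\pi_t$ is inflation over the previous four quarters, $\pi^{*}$ the inflation target, $y_t$ the percentage output gap, and $r^{*}$ the equilibrium real rate (Taylor set both $r^{*}$ and $\pi^{*}$ at 2 percent). The revived rules-versus-discretion debate turns in part on how literally any such formula should constrain policy.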
Despite a marked preference for “data dependence” to explain why central banks delay a return to historical norms for interest rates, financial markets, among others, continue to demand and expect central banks to provide regular updates to their forecasts. As Siklos’s chapter 24 points out, just as DSGE models were accused of neglecting important economic features, economists were fond of assuming that expectations could be treated as uniform or, rather, that in explaining the forward-looking nature of monetary policy, one needed only a single expectations proxy. Little effort was devoted to recognizing that there exists considerable diversity in expectations and that disagreement across forecasts can provide critical information about the appropriate conduct of monetary policy. Indeed, it is only comparatively recently that central banks themselves began to publish staff forecasts. Even fewer central banks publish the forecasts of members of their MPCs. Both, however, are critical ingredients for evaluating not only how clearly central banks see the economic outlook but also the extent to which households and professional forecasters might be influenced by such forecasts. Indeed, the importance placed by central banks on the need to “anchor expectations” depends on how the public in general interprets central bank forecasts. Siklos’s empirical evidence for nine advanced economies highlights the need not only to measure and determine the evolution of forecast disagreement but also to understand its determinants over time. Global factors matter greatly, it seems, and the selection of the benchmark against which to evaluate how much forecasters disagree is critical.
Other determinants considered include central bank communication and fluctuations in energy prices. The bottom line, however, is that economics would do well to borrow some aspects of how weather forecasts are generated. If model uncertainty is rife, then there are multiple paths for the economic outlook, depending on the model that performs best at different forecast horizons. Nevertheless—and this is misunderstood by some central bankers—economists are unlikely to be able to emulate weather forecasting entirely, for at least two reasons. First, it is unlikely, in spite of the tremendous growth in the amount of available data, that policymakers will be able to acquire the sheer volume of data employed in forecasting the weather. Second, and perhaps more important, economics cannot count on the kind of physical laws that help narrow the degree of model uncertainty around certain forecasts. It is understandable that, at a theoretical level, the search for a better or a new economic model continues, and many of the chapters in this book point this out in a variety of ways. Nevertheless, at an empirical level, there is no reason to restrict inference to one model. Indeed, developments in empirical modeling are especially helpful because they allow us to harness useful information from a large variety of models. Surely this is a more appealing way of conveying the uncertainty around forecasts than traditional methods that measure uncertainty around the forecasts of a single model. Although many central banks have adopted, or claim to have adopted, such a strategy, either it has not been properly communicated or the benefits of forecasts from diverse models have not yet been fully exploited.
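The model-combination idea can be illustrated with a deliberately simple sketch; the three “models” and the forecast numbers below are hypothetical stand-ins for a central bank’s model suite, not any bank’s actual figures.

```python
import statistics

# Hypothetical one-year-ahead inflation forecasts (percent) from a
# deliberately diverse suite of models; no single model is trusted alone.
forecasts = {
    "new_keynesian_dsge": 1.8,
    "bayesian_var": 2.3,
    "time_series_benchmark": 2.6,
}

# Equal-weight combination gives the ensemble point forecast...
point_forecast = statistics.mean(forecasts.values())

# ...while the dispersion across models is a rough gauge of model
# uncertainty, analogous to the spread of an ensemble weather forecast.
model_disagreement = statistics.stdev(forecasts.values())

print(f"combined forecast: {point_forecast:.2f} percent")
print(f"model disagreement (std. dev.): {model_disagreement:.2f} points")
```

In practice the weights would themselves be updated as the relative forecasting record of each model accumulates, which is precisely the learning mechanism the chapter describes.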
1.9 Postscript: What’s in Central Banking’s Future? Fintech and the Central Bank

Like other financial institutions, central banks have been affected by the development of financial technology, both in their own operations and in their regulation of the financial sector. Payment systems in particular have changed out of all recognition, with the ability to transact electronically in real time replacing batched paper-based transactions at the end of the day. Similarly, the rise of Internet banking and of debit and credit cards has transformed how people behave. Mobile devices are providing a further step forward. The role of the central bank in the system has evolved steadily over the last fifty years in the face of these changes, but the process has been evolution rather than revolution. Viewed in advance, any such innovation may seem likely to make a dramatic impact on financial activity. Indeed, one might want to argue that without the improvements in financial technology, the GFC would have been less extreme. However, it is being argued that some of the ideas being developed at present in what is labeled “fintech” (not a new term) have the potential to disrupt how central banks behave in exercising their roles in the system as
providers of payment systems and currency, as regulators, and even in monetary policy. In other words, this will involve more than evolution. IMF (2017) provides a survey of the issues. Maybe in practice these ideas will not amount to much, but it is worth tracing how their potential might be realized, in case they do have a substantial effect on the economics of central banking in the coming few years. Two that hold particular promise are (1) cryptocurrencies and (2) blockchain and related distributed ledger systems, which between them could not merely transform how people transact but also markedly alter the role of the central bank in the economy. After a brief introduction to the technologies, we explore the implications for a central-bank-issued digital currency, monetary policy, payment and settlement, regulation, the encouragement of innovation, and the protection of consumers.
1.9.1 Cryptocurrencies and the Blockchain These days, the vast majority of money is held in electronic (digital) form, whether as firms’ and households’ deposits in banks and other financial institutions or by those banks in deposits at the central bank. The amount held in physical currency (notes and coins) is trivial by comparison. The same disparity applies to transactions; the vast majority by value are also electronic, whether through credit and debit cards, Internet banking, or interbank transfers. Cash is still used in retail transactions, in part because its use is subsidized but also because nothing else is quite so convenient. It is also used in illegal and black-market transactions because of its anonymity, and this may account for the large number of high-value notes in circulation. Cryptocurrencies offer a different way forward from just letting the use of cash wither away. They offer a means of payment using a digital currency that is highly protected so that users can be convinced that their balances cannot be attacked by others and that the instructions they give will be accurately executed. At present, there are around one thousand of these currencies, but most are very small, and only bitcoin is at all well known and used. Even so, the level of transactions as opposed to the theoretical value of the holdings is very small. People are holding the currency in the hope of making capital gains rather than because of the transactional advantages. These currencies are private, in the sense that they are not issued by governments or central banks, and hence they require an open and verifiable approach to establishing transactions and the ownership of balances. This is where blockchain and other distributed ledger technologies come in. Each set of transactions in bitcoin needs to be verified before the system can move on, and this can be done by anyone with adequate computer power. 
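The verification step can be sketched in a few lines of Python. This is a toy illustration of the general idea—hash-linked blocks plus a proof-of-work search that anyone with computing power can perform—not a description of bitcoin’s actual protocol, and all names and transactions here are invented.

```python
import hashlib

def block_hash(prev_hash: str, transactions: str, nonce: int) -> str:
    """Hash a candidate block: its contents plus the previous block's hash."""
    payload = f"{prev_hash}|{transactions}|{nonce}".encode()
    return hashlib.sha256(payload).hexdigest()

def mine(prev_hash: str, transactions: str, difficulty: int = 2) -> tuple[int, str]:
    """Search for a nonce whose hash meets the difficulty target (leading
    zeros); this is the step requiring 'adequate computer power'."""
    nonce = 0
    while True:
        h = block_hash(prev_hash, transactions, nonce)
        if h.startswith("0" * difficulty):
            return nonce, h
        nonce += 1

# Each block's hash depends on its predecessor's, chaining the history:
_, genesis_hash = mine("0" * 64, "genesis block")
nonce2, hash2 = mine(genesis_hash, "A pays B one unit")

# Tampering with a recorded transaction changes the hash and breaks the
# chain, so independent ledger holders would reject the altered history:
assert block_hash(genesis_hash, "A pays B one unit", nonce2) == hash2
assert block_hash(genesis_hash, "A pays B two units", nonce2) != hash2
```

Real systems raise the difficulty so that verification is costly enough to deter manipulation, which is why the discussion below of who holds these ledgers, and how many must agree, matters.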
These sets of transactions—blocks—are then linked to one another to provide a complete history of each unit of the currency. Since these histories, or ledgers, of transactions can be held anywhere, this is known as a distributed ledger technology. As long as the majority of these ledgers come up with the same blocks, the transactions are verified, and the single path up to the present is confirmed. Traditionally, ledgers have
been held centrally by a trusted counterparty such as a bank or a central bank. Thus, the user trusts the central counterparty rather than requiring multiple verification by independent sites. Potentially, therefore, one or more cryptocurrencies could replace cash, card, and bank-related transactions if people had sufficient confidence in them. This, of course, is a concern to central banks, in that they do not have control over the system, and to the providers of existing transaction services, in that they might lose market share and see their profits fall. Indeed, many think that because of these two concerns, cryptocurrencies will never develop far; either they will be banned by the authorities, or the traditional providers will come up with effective competition, such as the ability to transact in real time at the retail level, which is currently being introduced. Not least among the concerns over bitcoin and related cryptocurrencies is that they are being used for money laundering, and they have a history of illegal involvement (Popper 2015). However, currencies of this form do not have to be private. They could be issued by the central bank (Bech and Garratt 2017). This could be at the interinstitutional level (several central banks, including the Bank of Canada, appear to be considering this at present), which would mean that all financial institutions allowed into the system could transact with one another in central bank money. (Bott and Milkau 2017 offer a survey of the possibilities.) Or it could be at the retail level, which is a much more revolutionary idea. Barrdear and Kumhof (2016) have explored this in some detail, building on earlier work (Ali, Barrdear, Clews, and Southgate 2014a, 2014b). They suggest that the central bank could offer digital currency balances to people in return for government bonds. Thus, the digital currency would be entirely backed by these bonds.
It is debatable how keen people would be to hold much in the way of such balances if they were not remunerated. Any interest paid would clearly be below the bond rate and presumably below the rate offered on deposits by the commercial banks. In countries facing the effective lower bound, this is a somewhat academic debate at present. The government will clearly be a gainer in this arrangement, as the increased demand for bonds will lower the interest rate it has to pay on its debt. There is thus some attraction to the idea beyond the transactional benefits. Whether the banks are losers will depend on the popularity of the system and the extent to which they lose profitable business; standard banking transactions are not where banks make their profits. (Offering subsidized transaction services helps retain customers for other products that are much more profitable.) Barrdear and Kumhof do not suggest that the central bank digital currency system should be the only one permitted. Banks could be expected to defend profitable lines of business, and it seems very unlikely that they would not improve their services in response. Some people have an extra item on their agenda: if the digital currency were to drive out cash (or if cash were to be withdrawn), then it would be possible in theory to impose negative interest rates (Rogoff 2017). However, there is no need to consider circumstances as extreme as those to see digital currency playing a role in monetary
policy. While monetary policy under positive interest rates operates through overnight or other short rates, Barrdear and Kumhof (2016) point out that the digital currency supply could be managed through a form of open market operations. The central bank could offer to buy government bonds from, or sell them to, holders of the central-bank-issued cryptocurrency as necessary and hence affect the money supply through a second route. In the same way that digital or cryptocurrencies do not need to be private or dependent on the blockchain, blockchain or related ledgers could be used for transactions without a digital currency. Such applications can extend beyond immediate financial markets to real estate or, indeed, to what have become known as “smart contracts.” Smart contracts have clauses that can be triggered automatically by an appropriate change in the circumstances of the contracting parties, for example, if a balance falls below a particular level. However, one of the most popular financial market examples is securities (Mainelli and Milne 2016). Indeed, the technology is already being applied to crowdfunding and other start-ups where the costs of a full-scale market launch are prohibitive. Here, the requirements are very similar to those for a cryptocurrency. In the secondary market, shares may be traded in any fraction that makes sense, and on completion of the transaction, the name on the register needs to be transferred from the seller to the buyer. If one were optimistic, all stages in the transaction could be completed together through a similar ledger technology, and the scope here for improvement in both costs and time taken is considerable. Whether one would actually want to go as far as having the whole process take place in real time is more debatable, as it would render market making of the traditional form impossible.
Counterparties would actually have to hold the securities they buy or sell, not merely be able to acquire them before the settlement date. Blockchain-style technologies would work well in markets for less tradable securities or for crowdfunding and related funding schemes, where transaction costs need to be very low and the parcels traded may be very small. Being able to deal directly with counterparties could cut costs and avoid the need to employ brokers. Regulators would have to decide on minimum standards.
1.9.2 A Proviso The potential for an electronic approach to reduce transaction costs is clearly present. The highest costs come in cross-border transactions. Processing a small transfer between New Zealand and Australian banks, for example, costs 25 NZD at the originating end and 26 AUD at the receiving end before exchange-rate fees have been added. Thus, it probably makes more sense to gamble and put a couple of banknotes in the mail. These costs are particularly awkward for remittances where the recipient may not have a bank account. In Kenya, where it is more likely that people will have a phone than a bank account, you can pay in airtime. With bitcoin, there is no need to exchange the currency if the recipient can spend it directly—clearly something that could be achieved if the currency becomes more widely used.
However, it is widely thought that bitcoin will not succeed, first because of its anonymity (while the pseudonym of the owner is known, there is no requirement for the actual name to be revealed) but more seriously because the time it takes to complete a batch of transactions is at present too long to be convenient. Widespread use would slow it down even further. Other cryptocurrencies, such as Ethereum, may be able to get around this, and bitcoin itself has spawned a new, more efficient arm called Bitcoin Cash (Economist 2017). Governance is also an issue. Bitcoin deliberately has open governance, but this makes taking decisions and changing the protocols hugely complicated. Others, Ethereum included, have more concentrated governance, so changes are practicable. There are other disadvantages to bitcoin: at present, it is mainly being hoarded rather than used, by a set of people who are hoping to see its value rise. Not only does this risk a bubble and a stability-threatening burst, but it makes bitcoin difficult to use for transactions if its value is not predictable in real terms. (Bitcoin builds this problem in, as its ultimate supply is capped.) Some authorities, the Federal Reserve included, treat bitcoin and related cryptocurrencies as commodities; their use is thus more like a system of barter. Currently, this has tax advantages, as a swap is not treated the same way as a purchase from the point of view of tax liability. It is therefore important to focus on the generality of how such currencies could be used rather than on the very specific case of bitcoin (Leinonen 2016).
1.9.3 Regulation and Innovation

Fintech innovations face a difficulty in that most of them are initially advanced by start-ups and other small companies. They are also in a very competitive field, so it is difficult to know what will succeed. Many authorities have therefore decided to make the rules for bringing a product to market easier, provided that the overall size of the operation is small and the time period involved is relatively short. The Australian Securities and Investments Commission (ASIC), for example, has set up what is described as a “sandbox,” where such innovations can be developed inside the boundary of official oversight but outside the panoply of full regulation. The idea is to allow fintech companies to get a start in the market. However, a second common route is for banks to buy up any promising ideas and then develop them themselves. Consumers still need to be protected, and it is difficult to make clear which products are subject to consumer claims and which are not because they are in the sandbox phase or outside the system altogether. There are clearly serious problems of redress in systems where the counterparties are anonymous (rather, pseudonymous, as only their pseudonyms are known). It has been possible to track down bitcoin transactors and to bring successful prosecutions for illegal transactions in the United States (Popper 2015). However, the costs of doing that for small claims would be prohibitive. This implies that any viable system is at least going to have to disclose the
names of the merchants so there can be redress and a reasonable opportunity to ensure that other trading regulations are upheld.
1.9.4 Where Do We Go from Here? The direction these new technologies are most likely to take is toward making transactions simpler, easier, less costly, and faster. The format is difficult to judge, but there is a major vested interest for existing service providers to make sure that they dominate any such new system as well. Thus, it is not clear whether encryption or distributed ledger technology will be the key feature. Either way, the central banks will need to be alert to the more complex and less stable environment that these technologies can bring. Of course, if they decide to be the supplier of the cryptocurrency themselves, then the impact on central banks’ actions will be substantial and could change the role of the central bank in retail transactions markedly. Taking a much wider role as a provider would alter the balance of markets and would certainly generate a lot of controversy if banks and other private-sector providers felt their business was being eroded. Nevertheless, if that gave the central bank more policy tools, the change might be supported. Similarly, if a central-bank-issued digital currency provided an opportunity for reducing the costs of government funding, then the chance of introduction would be increased. At some point, governments may feel that the central banks’ responsibility should be reduced if fintech leads to yet another expansion of the central banks’ powers. The digital currency could be issued by another agency if it is to be 100 percent backed by government bonds.
Notes 1. At least in advanced economies. The inflation target can be higher in emerging market economies. Siklos (2017, appendix) has an up-to-date listing of countries that have adopted inflation targets, including the start date and annual changes (if any) in the inflation target levels and ranges. 2. Instead, there may well be a resurgence of interest in price-level targeting. A few years ago, the Bank of Canada was actively pursuing this agenda. The GFC put paid to more serious consideration of this kind of policy regime being implemented in practice. However, former central bankers (e.g., Bernanke 2017) now see a version of price-level targeting as a means to escape the risks associated with the ZLB. 3. Quoted in Guardian Weekly, November 4, 2005. 4. Remarks by Ben S. Bernanke at the National Economists Club, Washington, D.C., December 2, 2004. 5. Arguably, the simplicity of evaluating monetary policy has taken a turn for the worse since QE and the willingness of some policymakers to permit inflation to be too high or too low for a time.
References

Ali, R., J. Barrdear, R. Clews, and J. Southgate. 2014a. “The Economics of Digital Currencies.” Bank of England Quarterly Bulletin 54, no. 3: 276–286.
Ali, R., J. Barrdear, R. Clews, and J. Southgate. 2014b. “Innovations in Payment Technologies and the Emergence of Digital Currencies.” Bank of England Quarterly Bulletin 54, no. 3: 262–275.
Barrdear, J., and M. Kumhof. 2016. “The Macroeconomics of Central Bank Issued Digital Currencies.” http://www.bankofengland.co.uk/research/Documents/workingpapers/2016/swp605.pdf.
Barth, James R., Gerard Caprio Jr., and Ross Levine. 2013. “Bank Regulation and Supervision in 180 Countries from 1999 to 2011.” NBER working paper 18733, January.
Bech, M., and R. Garratt. 2017. “Central Bank Cryptocurrencies.” BIS Quarterly Review (September): 55–70. https://www.bis.org/publ/qtrpdf/r_qt1709f.pdf.
Bernanke, Ben S. 2004. “The Great Moderation.” Meetings of the Eastern Economic Association, Washington, D.C., February 20.
Bernanke, B. 2014. “Bernanke Cracks Wise: The Best QE Joke Ever!” CNBC. https://www.cnbc.com/2014/01/16/bernanke-cracks-wise-the-best-qe-joke-ever.html.
Bernanke, Ben S. 2017. “Monetary Policy in a New Era.” Brookings Institution working paper, October 2.
Bott, J., and U. Milkau. 2017. “Central Bank Money and Blockchain: A Payments Perspective.” Journal of Payments Strategy & Systems 11, no. 2: 145–157.
Carlstrom, Charles T., Timothy S. Fuerst, and Matthias Paustian. 2015. “Inflation and Output in New Keynesian Models with a Transient Interest Rate Peg.” Journal of Monetary Economics 76: 230–243.
Crow, J. 2013. “Practical Experiences in Reducing Inflation: The Case of Canada.” In The Great Inflation: The Rebirth of Modern Central Banking, edited by M. D. Bordo and A. Orphanides, 37–55. Chicago: University of Chicago Press.
Dincer, N. Nergiz, and Barry Eichengreen. 2008. “Central Bank Transparency: Where, Why and with What Effects?” In Central Banks as Economic Institutions.
Cournot Centre for Economic Studies Series, edited by Jean-Philippe Touffut, 105–141. Cheltenham, UK, and Northampton, MA: Elgar.
Dincer, N. Nergiz, and Barry Eichengreen. 2014. “Central Bank Transparency and Independence: Updates and New Measures.” International Journal of Central Banking 10, no. 1: 189–253.
Economist. 2017. “Bitcoin Divides to Rule.” August 5. https://www.economist.com/news/business-and-finance/21725747-crypto-currencys-split-two-versions-may-be-followed-others-bitcoin.
Eijffinger, Sylvester C. W., and Petra M. Geraats. 2006. “How Transparent Are Central Banks?” European Journal of Political Economy 22, no. 1: 1–21.
El-Erian, Mohamed. 2016. The Only Game in Town. New York: Random House.
Epstein, Larry G., and Stanley E. Zin. 1989. “Substitution, Risk Aversion, and the Temporal Behavior of Consumption and Asset Returns: A Theoretical Framework.” Econometrica 57: 937–969.
Filardo, Andrew, and Phurichai Rungcharoenkitkul. 2016. “A Quantitative Case for Leaning against the Wind.” BIS working paper 594, December.
Haldane, Andrew. 2009. “Rethinking the Financial Network.” Speech. Financial Student Association, Amsterdam, April 28. http://www.bankofengland.co.uk/archive/Documents/historicpubs/speeches/2009/speech386.pdf.
IMF. 2017. “Fintech and Financial Services: Initial Considerations.” IMF Staff Discussion Note SDN/17/05. https://www.imf.org/en/Publications/Staff-Discussion-Notes/Issues/2017/06/16/Fintech-and-Financial-Services-Initial-Considerations-44985.
King, Mervyn. 2000. “Monetary Policy: Theory and Practice.” Speech. January 7. http://www.bankofengland.co.uk/archive/Documents/historicpubs/speeches/2000/speech67.pdf.
King, Mervyn. 2016. The End of Alchemy. New York: W. W. Norton.
Leinonen, H. 2016. “Virtual Currencies and Distributed Ledger Technology: What Is New under the Sun and What Is Hyped Repackaging?” Journal of Payments Strategy & Systems 10, no. 2: 132–152.
Lombardi, Domenico, and Pierre L. Siklos. 2016. “Benchmarking Macroprudential Policies: An Initial Assessment.” Journal of Financial Stability 27 (December): 35–49.
Maier, Philipp. 2010. “How Central Banks Take Decisions: An Analysis of Monetary Policy Meetings.” In Challenges in Central Banking, edited by P. L. Siklos, M. T. Bohl, and M. E. Wohar, 320–356. Cambridge: Cambridge University Press.
Mainelli, M., and A. Milne. 2016. “The Impact and Potential of Blockchain on Securities Transactions.” SWIFT Institute working paper 2015-007, May.
Masciandaro, Donato. 2012. “Back to the Future?” European Company and Financial Law Review 9, no. 2: 112–130.
Orphanides, Athanasios. 2013. “Is Monetary Policy Overburdened?” BIS working paper 435, December.
Popper, N. 2015. Digital Gold. London: Allen Lane.
Reinhart, C., and K. Rogoff. 2009. This Time Is Different: Eight Centuries of Financial Folly. Princeton: Princeton University Press.
Rogoff, K. 2017. The Curse of Cash. Princeton: Princeton University Press.
Siklos, Pierre L. 2017.
Central Banks into the Breach: From Triumph to Crisis and the Road Ahead. Oxford: Oxford University Press.
Siklos, Pierre L., and Matthias Neuenkirch. 2015. “How Canadian Monetary Policy Is Made: Two Canadian Tales.” International Journal of Central Banking (January): 225–250.
Sims, Christopher. 1980. “Macroeconomics and Reality.” Econometrica 48 (January): 1–48.
Svensson, Lars E. O. 2016. “Cost-Benefit Analysis of Leaning against the Wind: Are Costs Larger Also with Less Effective Macroprudential Policy?” IMF working paper 16/3, January.
Tucker, Paul. 2016. “The Design and Governance of Financial Stability Regimes.” Essays in International Finance 3 (September). Waterloo, Ontario: Centre for International Governance Innovation.
Part I

Central Bank Governance and Varieties of Independence
Chapter 2

Monetary Policy Committees and Voting Behavior

Sylvester Eijffinger, Ronald Mahieu, and Louis Raes
2.1 Introduction

In many modern economies, the position of the monetary authority bears remarkable similarities to that of the highest-level court (Goodhart 2002). For both bodies, there is a commonly held belief that they should be able to operate independently. What exactly this independence entails differs, for both types of institutions, from one country to another. In the United Kingdom, the highest court is the Appellate Committee of the House of Lords (in short, the Law Lords). There is a consensus among legal scholars that the powers of the Law Lords with respect to the legislature are less wide ranging in the United Kingdom than the powers of the Supreme Court in the United States (Goodhart and Meade 2004, 11). This difference may sound familiar to economists. In economic jargon, one would say that the Supreme Court has goal independence whereas the Law Lords have instrument independence. If we compare the monetary policy committee (MPC) of the Bank of England (BoE) with the Federal Open Market Committee (FOMC) of the Federal Reserve, we notice a similar pattern. The MPC has, owing to a clear inflation target, little goal independence. The FOMC, on the other hand, has multiple objectives, providing some discretion to monetary policymakers. The similarities extend further. Because independence, however defined, is deemed important for both the monetary authority and judicial courts, governments have applied similar recipes to guarantee independence. In the United States, for example, Supreme Court justices have tenure for life. Similarly, board governors, who sit on the FOMC, are appointed for fourteen years. These lengthy appointments, for life in the case
of the Supreme Court and fourteen years in the case of the Board of Governors, serve to insulate justices and governors from political pressure. Various other aspects (decision-making procedures, the role of consensus, the role of the chairman, communications strategy, etc.) also demonstrate remarkable similarities; see Goodhart and Meade (2004) for some examples. At the same time, different countries tend to fill in the details in different ways, and these details may matter. In the case of monetary authorities, most central banks have put a committee in charge of monetary policy. These committees vary substantially on various dimensions across different countries.1 Besides guaranteeing independence, there are two additional concerns underlying the institutional design of a monetary authority or a judicial court. First, there is the need to balance independence with accountability. Delegating important public tasks to individuals who are not elected, and who may not even be directly under the control of elected public servants, requires a system of checks and balances to ensure sufficient accountability. Second, there is the need for sufficient technical skill, because these bodies often deal with complex issues.2 Both monetary policy committees and judicial courts have been the topic of research. Given the similarities described above, there has occasionally been crossover between these literatures. Scholars explaining decision-making in groups and committees often use MPCs or judicial courts as examples. This overlap concerns mainly theoretical work.3 Empirical work on MPCs and judicial courts has evolved more separately. One consequence is that we see different methodologies being used by scholars studying decision-making at central banks and by those studying the deliberations of courts.
In our work, we use methods that are predominantly used in the study of voting in legislative bodies and judicial bodies (e.g., the Supreme Court) to study voting at central banks. We do so because we believe that this methodological crossover may be fertile. We do not claim that one methodology is superior to another; rather, we feel that different approaches can prove to be complementary to each other. Crossing methodological boundaries between disciplines is not always straightforward. One may lack the expertise to use certain methods or lack colleagues to bounce ideas off. In this chapter, we aim to provide a succinct introduction. We first provide a snapshot of the literature on voting at MPCs. We then discuss the spatial voting model in its simplest form. Next, we illustrate the methods discussed with voting data from the Riksbank. We give a brief overview of results in the literature using spatial voting models. We then point to some possible methodological extensions, some of which, to our knowledge, have not yet been explored.
2.2 Related Literature In the introduction, we mentioned that the institutional design of central banks differs across countries. These differences often seem minor from a distance but could matter
Monetary Policy Committees and Voting Behavior 41 a great deal in practice. The implication for research on central bank committees is that one should be cautious in generalizing the results of the study of one central bank. Many studies use the voting records of a particular central bank (most often the Federal Reserve or the BoE) to study a particular feature of the institutional design of that particular bank. As an example, consider research studying regional bias at the FOMC.4 Regional bias refers here to the notion that an FOMC member would systematically favor his or her home region by attaching disproportional weights to regional indicators when contemplating appropriate monetary policy. The study of this topic is motivated by the structure of the FOMC where regional representation is by design an important feature. Jung and Latsos (2015) report that they find evidence of regional bias, but they judge the impact to be fairly small. This finding is relevant for the design of MPCs, but the question remains to what extent one can generalize this to other central banks. Jung and Latsos (2015) suggest conducting similar research on other central banks where a regional bias may play a role as well and are rightly cautious in generalizing the findings to other central banks. However, if the possibility to generalize is limited, then studies of separate central banks should be treated as case studies. Some studies lump central banks together in an effort to gain a cross-country perspective. For example, Adolph (2013) is an impressive effort to study the career concerns of central bankers and monetary policy in a wide range of industrial countries. Such a cross-country study seems more general than the case studies we mentioned before, but this comes at the cost of limited assessment of detail.5 For these reasons, we caution the reader to put the research results on MPCs in perspective. 
While some studies are impressive efforts of data collection and refined statistical analysis, good judgment is required to assess their merits in different contexts. As mentioned before, research on MPCs has focused historically mostly on the FOMC and the BoE. A first topic studied in the context of the FOMC is the aforementioned regional bias. A second branch of the literature has focused on the chairman. The chairman at the FOMC plays an important role, for different reasons. He heads the FOMC and leads the meetings. The chairman tends to lead the communication by the FOMC and as a consequence receives most media attention. Furthermore, the Humphrey-Hawkins Full Employment Act of 1978 requires the chairman to give an oral testimony to the Committee on Banking, Housing, and Urban Affairs of the Senate and the Committee on Financial Services of the House of Representatives. The role of the chairman has been much discussed, and many scholars have written on the influence of the chairman in decision-making at the FOMC; see Chappell, McGregor, and Vermilyea (2007), Gerlach-Kristen and Meade (2010), El-Shagi and Jung (2015), Ball (2016). A third line of research is concerned with political influence. There is the concern that there might be (tacit) political influence on FOMC members. This pressure might come through informal conversations or might arise due to people with a specific partisan background being appointed to the Board of Governors. There is a long tradition in analyzing these matters. Many researchers do believe that there is some steering by means of appointing candidates with a certain ideological profile. The extent to which this really influences decision-making remains debated.6
The aforementioned branches of the literature all boil down to uncovering the preferences of FOMC members and analyzing the extent to which these are influenced by certain determinants (see also Havrilesky and Gildea (1991), Chappell, Havrilesky, and McGregor (2000), Thornton, Wheelock, et al. (2014), Eichler and Lähner (2014)). The literature on the MPC of the BoE also has a strong focus on uncovering the determinants of dissent. One important difference between the MPC and the FOMC is that the MPC is considered to be more individualistic in comparison to the FOMC (see Blinder (2009)).7 A consequence is that dissent is more common at the MPC. The MPC consists of insiders and outsiders, that is, MPC members appointed from within the bank and MPC members with no connection to the bank. This is one particular feature that has drawn considerable attention in the literature investigating dissents at the BoE; see Besley, Meads, and Surico (2008), Harris and Spencer (2009), Gerlach-Kristen (2009), Bhattacharjee and Holly (2010), Harris, Levine, and Spencer (2011). However, there is some disagreement on how these two types of MPC members differ from each other in their preferences and voting behavior (see Eijffinger, Mahieu, and Raes (2013a)). Finally, besides the BoE and the Federal Reserve, other central banks have also been studied, albeit to a lesser extent. Examples are Jung and Kiss (2012), Chappell, McGregor, and Vermilyea (2014), Siklos and Neuenkirch (2015), Horvath, Rusnak, Smidkova, and Zapal (2014). The papers using the type of spatial voting models we advocate follow the line of research we have described in this section. These papers use voting records to estimate the latent preferences of policymakers, which are then studied. Hix, Hoyland, and Vivyan (2010) and Eijffinger, Mahieu, and Raes (2013a) study the MPC of the BoE.
Eijffinger, Mahieu, and Raes (2013b) study four smaller central banks, and Eijffinger, Mahieu, and Raes (2015) focus on the FOMC. In the next sections, we shall explain this approach and provide an illustration.
2.3 Spatial Voting In this section, we discuss the spatial voting model. In political science, spatial voting models belong to the standard toolbox. A key reference is Clinton, Jackman, and Rivers (2004). In economics, this approach is less commonplace, which motivates us to elaborate on the methodology. The discussion in this section is largely based on the corresponding sections in Eijffinger, Mahieu, and Raes (2013a). We start by discussing how the canonical spatial voting model can be motivated from a random utility framework.
Ideal points and a spatial voting model The data we analyze consist of votes expressed during monetary policy deliberations. The votes were cast by MPC members to whom we refer as voters n = 1, …, N. The voters are voting on policy choices t = 1, …, T. Each policy choice presents voters with a choice
between two policy rates, where the lower rate is considered a dovish position \(\psi_t\) and the higher rate a hawkish position \(\zeta_t\). The positions \(\psi_t\) and \(\zeta_t\) are locations in a one-dimensional Euclidean policy space.8 A voter n choosing the hawkish position \(\zeta_t\) on policy choice t is denoted as \(y_{nt} = 1\). If voter n chooses the dovish position \(\psi_t\), we code this as \(y_{nt} = 0\). The choices \(\zeta_t\) and \(\psi_t\) are functions of a policy rate and possibly other variables capturing the contemporaneous economic conditions prevailing at policy choice t. However, the two choices differ only in the policy rate. We assume that voters have quadratic utility functions over the policy space, such that \(U_n(\zeta_t) = -\|x_n - \zeta_t\|^2 + \eta_{nt}\) and \(U_n(\psi_t) = -\|x_n - \psi_t\|^2 + \nu_{nt}\), where \(x_n \in \mathbb{R}\) is the ideal point or the underlying monetary policy preference of voter n, \(\eta_{nt}, \nu_{nt}\) are the stochastic elements of utility, and \(\|\cdot\|\) denotes the Euclidean norm. Utility maximization implies that \(y_{nt} = 1\) if \(U_n(\zeta_t) > U_n(\psi_t)\) and \(y_{nt} = 0\) otherwise. To derive an item response specification, we need to assign a distribution to the errors. Assuming a type 1 extreme value distribution leads to a logit model with unobserved regressors \(x_n\) corresponding to the ideal points of the voters:

\[
\begin{aligned}
P(y_{nt} = 1) &= P\bigl(U_n(\zeta_t) > U_n(\psi_t)\bigr) \\
&= P\bigl(\nu_{nt} - \eta_{nt} < \|x_n - \psi_t\|^2 - \|x_n - \zeta_t\|^2\bigr) \\
&= P\bigl(\nu_{nt} - \eta_{nt} < 2(\zeta_t - \psi_t)x_n + (\psi_t^2 - \zeta_t^2)\bigr) \\
&= \mathrm{logit}^{-1}(\beta_t x_n - \alpha_t). \qquad (1)
\end{aligned}
\]
The last line follows from substituting \(2(\zeta_t - \psi_t)\) with \(\beta_t\) and substituting \((\zeta_t^2 - \psi_t^2)\) with \(\alpha_t\). Assuming (conditional) independence across voters n and meetings t yields the following likelihood (see Clinton, Jackman, and Rivers (2004)):

\[
\mathcal{L}(\beta, \alpha, x \mid Y) = \prod_{n=1}^{N} \prod_{t=1}^{T} \bigl(\mathrm{logit}^{-1}(\beta_t x_n - \alpha_t)\bigr)^{y_{nt}} \bigl(1 - \mathrm{logit}^{-1}(\beta_t x_n - \alpha_t)\bigr)^{1 - y_{nt}}, \qquad (2)
\]

with \(\beta = (\beta_1, \ldots, \beta_T)'\) and \(\alpha = (\alpha_1, \ldots, \alpha_T)'\) vectors of length T, \(x = (x_1, \ldots, x_N)'\) a vector of length N, and Y the \(N \times T\) (observed) vote matrix with entry (n, t) corresponding to \(y_{nt}\). This derivation should be familiar to most economists, as it mimics the discrete choice derivation of a logit model. Model (1) is in this sense just a logit model where the regressor \(x_n\) is not observed in the data but is a latent variable. A key difference, in terms of interpretation, is that the object of interest is precisely the latent regressor \(x_n\) (the ideal points). To understand the intuition behind the parameters, start by considering the situation where \(\beta_t\) equals 1. Then the model reduces to:

\[
P(y_{nt} = 1) = \mathrm{logit}^{-1}(x_n - \alpha_t). \qquad (3)
\]
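To make the mapping from equations (1)-(3) concrete, the model can be sketched in a few lines of code. The function names and the toy numbers below are ours, chosen in the spirit of the two-voter example of figure 2.1; they are not taken from the chapter's data.

```python
import numpy as np

def inv_logit(z):
    """logit^{-1}: the logistic cdf."""
    return 1.0 / (1.0 + np.exp(-z))

def vote_prob(x_n, beta_t, alpha_t):
    """Equation (1): probability that voter n casts the hawkish vote
    (y_nt = 1) in meeting t."""
    return inv_logit(beta_t * x_n - alpha_t)

def log_likelihood(Y, x, beta, alpha):
    """Log of the likelihood in equation (2).
    Y: N x T vote matrix of 0/1 entries; x: N ideal points;
    beta, alpha: T discrimination and vote parameters."""
    P = inv_logit(x[:, None] * beta[None, :] - alpha[None, :])
    return np.sum(Y * np.log(P) + (1 - Y) * np.log(1 - P))

# Toy example: voter 1 near zero, voter 2 a clear hawk; beta_t = 1
# as in equation (3), and alpha_2 equal to voter 1's ideal point.
x = np.array([-0.1, 2.2])
beta = np.ones(2)
alpha = np.array([-1.0, -0.1])
print(vote_prob(x[0], beta[1], alpha[1]))   # 0.5: voter 1 is indifferent
Y = np.array([[1, 0], [1, 1]])
print(log_likelihood(Y, x, beta, alpha))
```

The double product in equation (2) becomes a single `np.sum` over the log of the vote matrix's cell probabilities, which is the quantity a maximum likelihood or MCMC routine would work with.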
Figure 2.1 provides a visual presentation of this simplified model with two voters and two meetings.9 For each meeting t, there is a corresponding vote parameter \(\alpha_t\). This parameter captures the overall inclination to vote dovishly; that is, a higher \(\alpha_t\) increases the probability that each voter will vote for the dovish policy choice. This parameter captures the
Figure 2.1 Ideal Points on a Single Latent Dimension
Notes: This figure illustrates equation (3). On the latent dove-hawk dimension (running from −2 to 2), two ideal points x1, x2 (voters) and two vote parameters α1, α2 (meetings) are shown. If the ideal point of voter n is larger than the vote parameter αt, then it is more likely that voter n votes for the hawkish policy choice. In this example, voter 1 is as likely to vote hawkishly as to vote dovishly on the policy choice represented by α2.
economic environment in which voting takes place. Consider now voter 1. Voter 1 has an ideal point \(x_1\) slightly smaller than zero, whereas voter 2 has an ideal point \(x_2\) larger than 2. The dove-hawk dimension runs from dovish to hawkish, and so \(x_2\) would be a clear hawk here. Both voters have an ideal point larger than the vote parameter \(\alpha_1\) associated with meeting 1. This implies that both voters are more likely to vote for the hawkish policy option in this meeting, since for n = 1, 2 we have that \(\mathrm{logit}^{-1}(x_n - \alpha_1) > 0.5\). However, the ideal point of voter 2 is larger than the ideal point of voter 1, so the predicted probability of voting hawkishly is larger for voter 2. Now consider meeting 2. For this meeting we have \(x_1 = \alpha_2\), so the ideal point of voter 1 and the vote parameter of this meeting are equal. We find that \(\mathrm{logit}^{-1}(x_1 - \alpha_2) = \mathrm{logit}^{-1}(0) = 0.5\). There is an equal probability that voter 1 chooses the hawkish or the dovish option in this meeting. Once again, voter 2 has an ideal point larger than \(\alpha_2\), and so we give this voter a higher probability of voting for the hawkish policy choice. These examples show that the vote parameter \(\alpha_t\) captures meeting characteristics (e.g., state of the economy) and determines how likely it is a priori that voters vote for the dovish or the hawkish policy choice. Now consider the effect of \(\beta_t\), the discrimination parameter. This parameter captures the extent to which preferences in the dove-hawk dimension determine the choice between two competing policy rate proposals. Say we find that for a certain meeting t, \(\beta_t\) equals zero. Then \(\beta_t x_n\) equals zero, and the preferences in the underlying dove-hawk dimension do not have an impact on the choice between competing policy proposals. Analogously, a negative \(\beta_t\) implies that doves (hawks) have a higher probability of choosing the hawkish (dovish) policy choice. A voter n is as likely to make the dovish as the hawkish choice if his ideal point \(x_n\) equals \(\alpha_t / \beta_t\).
This ratio is referred to as the cut point, the point in the policy space where voters are indifferent between two policy choices presented in meeting t.
The model we presented in equation (1) is not identified. For example, for a constant C we have \((\beta_t x_n + C) - (\alpha_t + C) = \beta_t x_n - \alpha_t\). Similarly, for any \(C \neq 0\) we have \((C\beta_t)(x_n / C) = \beta_t x_n\). To identify the parameters, we use the common approach of normalizing the ideal points to have mean zero and a standard deviation of 1. The left-right direction is fixed by specifying one ideal point to be negative and one ideal point to be positive. We do this on the basis of the summary statistics (see below). Voting could depend on a whole range of influences, including personal and group preferences (e.g., through an organizational consensus, varying reputational concerns). Identifying each of these requires considerably more data and/or assumptions. The measures of revealed policy preferences we propose in this chapter, represented by the ideal points, are therefore a mix of these influences on monetary policy voting rather than a literal measure of policy preference.10 In our opinion, these serve as a useful summary of policy preferences and could aid researchers in analyzing monetary policy votes.
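The normalization can be applied as a post-processing step on raw posterior draws. The sketch below assumes the draws arrive as an S × N array (one row per draw, one column per voter); the helper `identify` and its arguments are hypothetical, not part of the chapter.

```python
import numpy as np

def identify(draws, hawk, dove):
    """Resolve the location, scale, and reflection invariances noted above.
    draws: (S, N) array of raw ideal-point draws, one row per posterior draw.
    Each row is standardized to mean 0 and standard deviation 1, and then
    reflected (if needed) so that member `hawk` lies to the right of member
    `dove`. Assumes the two are never exactly tied within a draw."""
    z = (draws - draws.mean(axis=1, keepdims=True)) / draws.std(axis=1, keepdims=True)
    flip = np.sign(z[:, hawk] - z[:, dove])  # -1 where a reflection is needed
    return z * flip[:, None]

rng = np.random.default_rng(0)
raw = 5.0 + 3.0 * rng.normal(size=(1000, 6))  # unidentified draws
z = identify(raw, hawk=0, dove=1)
print(z.mean(axis=1).max())  # ~0: every draw now has mean zero
```

Standardizing within each draw (rather than once, on the posterior means) keeps the sign constraint and the scale fix consistent across the whole posterior, which matters when quantities such as rank probabilities are computed draw by draw.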
2.4 Data In this chapter, we use the Riksbank as a running example.11 Specifically, we use the voting records from January 2000 to October 2009. The Executive Board, the MPC of the Riksbank, consists of six internal members who are appointed for a period of six years. The Executive Board is considered to be an individualistic committee (Apel, Claussen, and Lennartsdotter (2010)), which means that Executive Board members vote according to their beliefs.12 In the case of a split vote, the governor’s vote is decisive. An overview of the Executive Board and the terms of office of its members in the period 1999–2009 is presented in figure 2.2.
Figure 2.2 The Terms of Office of Members of the Executive Board (1999–2009)
Notes: This is a reproduction of figure 1 in Ekici 2009. The time axis runs from 1999 to 2011. The blue shading means that the person was a regular member, the red shading indicates the chairman, and the gray shading indicates the remaining term of office. Members shown: Karolina Ekholm, Lars E. O. Svensson, Barbro Wickman-Parak, Svante Öberg, Stefan Ingves, Irma Rosenberg, Kristina Persson, Urban Bäckström, Lars Heikensten, Eva Srjeber, Villy Bergström, Kerstin Hessius, and Lars Nyberg.
The econometric framework laid out above requires us to recode the data. Unanimous votes are dropped, as these are uninformative for our purposes. The remaining votes are coded as decisions between two alternatives. The lower interest-rate proposal is coded as a zero, whereas the higher interest-rate proposal is coded as a 1. To make this clear, consider the following fictitious example, which is summarized in table 2.1.13 We have a meeting with four voters: Alice, Bob, Cameron, and David. Alice votes for lowering the policy rate by 0.25 percent, whereas the other three vote for no change in the policy rate. Since the vote by Alice represents the dovish choice, we would code Alice's vote as 0, whereas the other three votes would be coded as 1.

Table 2.1 How We Coded Voting Records

Name       Vote cast   Coded as
Alice      −0.25       0
Bob        +0          1
Cameron    +0          1
David      +0          1

Dropping the unanimous meetings leads to thirty-three meetings and thirteen voters (Executive Board members). A summary can be found in table 2.2.

Table 2.2 Summary of the Data

Name                    Total votes   Hawkish votes   Dovish votes   Fraction dovish
Villy Bergström         25            9               16             0.64
Urban Bäckström*        19            6               13             0.68
Karolina Ekholm         3             3               0              0.00
Lars Heikensten*        25            9               16             0.64
Kerstin Hessius         12            6               6              0.50
Stefan Ingves*          7             4               3              0.43
Lars Nyberg             33            13              20             0.61
Barbro Wickman-Parak    6             4               2              0.33
Kristina Persson        14            0               14             1.00
Irma Rosenberg          11            4               7              0.64
Eva Srjeber             25            23              2              0.08
Lars E. O. Svensson     6             2               4              0.67
Svante Öberg            8             8               0              0.00

Note: * Executive Board member who has been governor at some point.

Table 2.2 makes clear that we expect Srjeber to be a hawk, given that she preferred the higher policy choice in twenty-three out of twenty-five votes, whereas we expect Persson to
be a dove, as she always preferred the lower policy rate proposal in the fourteen votes in our sample. We therefore specify that the ideal point of Srjeber should be positive and Persson's should be negative. Together with the normalization (see above), this identifies the model. Figure 2.3 visualizes the same data in two ways. In the top graph, we have plotted the votes across time. The hawkish votes are indicated by a plus sign and the dovish votes by a circle. The time scale looks unwieldy because the frequency of meetings varies over time. This graph shows periods of persistency in votes as well as switches by some committee members. In the bottom graph, we show the same data after having stripped out the majority votes. This makes the dissenters more salient.
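The recoding step described above can be sketched as a small helper. The function below is hypothetical code of our own, applied to the fictitious meeting from table 2.1.

```python
def recode_meeting(votes):
    """Recode one meeting's votes for the ideal-point model.
    votes: dict mapping member name -> proposed rate change.
    Returns None for unanimous meetings (dropped as uninformative);
    otherwise a dict mapping name -> 0 (dovish, lower proposal) or
    1 (hawkish, higher proposal). Assumes at most two distinct
    proposals per meeting, as in the sample analyzed here."""
    proposals = sorted(set(votes.values()))
    if len(proposals) == 1:       # unanimous: no information for the model
        return None
    low, high = proposals         # dovish and hawkish alternatives
    return {name: int(v == high) for name, v in votes.items()}

# The fictitious meeting of table 2.1:
meeting = {"Alice": -0.25, "Bob": 0.0, "Cameron": 0.0, "David": 0.0}
print(recode_meeting(meeting))
# {'Alice': 0, 'Bob': 1, 'Cameron': 1, 'David': 1}
```

Applying this helper meeting by meeting and stacking the results yields the 0/1 vote matrix Y of equation (2), with unanimous meetings already removed.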
Figure 2.3 Visualizations of the Data
Notes: The top graph ("Overview of Nonunanimous Meetings") shows all nonunanimous votes in our data set, member by member, over 1999–2009. Hawkish votes are marked as a plus and dovish votes as a circle. The bottom graph ("Overview of Dissents") shows the same data after removing the majority votes. In the case of an equal split, all votes are shown.
Earlier, we put forward the assumption of independence across votes and voters. These graphs may provide sufficient reason for some to doubt this assumption. For example, Nyberg and Heikensten have often voted in a similar way in nonunanimous meetings. A solution to violations of the conditional independence assumption is to include the (potentially latent) factors in the specification. The advantage of such an approach is that it allows for a quantification of the presumed effect. However, given the limited amount of data we have, it will be hard to obtain reasonably precise estimates of such effects. In the remainder of this study, we maintain the independence assumption.
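One quick, informal check on co-voting of the kind just described is the pairwise agreement rate between two members across the nonunanimous meetings they both attended. The helper and the data below are illustrative, with None marking meetings a member did not attend.

```python
def agreement_rate(votes_a, votes_b):
    """Share of meetings in which two members cast the same coded vote.
    votes_a, votes_b: equal-length sequences of 0/1 coded votes, with
    None for meetings a member did not attend. Returns None if the two
    members share no meetings."""
    pairs = [(a, b) for a, b in zip(votes_a, votes_b)
             if a is not None and b is not None]
    if not pairs:
        return None
    return sum(a == b for a, b in pairs) / len(pairs)

# Two members who agree in 4 of the 5 meetings they both attended:
print(agreement_rate([1, 1, 0, None, 0, 1], [1, 1, 0, 1, 1, 1]))  # 0.8
```

Agreement rates far above what the members' individual hawkishness would imply are the kind of pattern that would motivate adding latent common factors to the specification.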
2.5 Ideal Point Analysis A first summary of the model is given in figure 2.4. This figure provides a historical ranking of Executive Board members on a dove-hawk scale.

Figure 2.4 Overview of All Ideal Points
Notes: The figure ("Revealed Preferences at the Riksbank") plots each member's ideal point on the dove-hawk dimension (from −4 to 4) with a 95% credibility interval, ordered from Srjeber (most hawkish) down to Persson (most dovish).

Because the appointments
of Executive Board members are staggered, the ideal point approach allows us to compare Executive Board members who were not appointed at the same time. The figure shows substantial uncertainty around some of the ideal points. For example, the intervals around Svensson, Öberg, and Ekholm are very wide. This reflects the fact that in our small sample we have only six recorded votes for Svensson, eight for Öberg, and three for Ekholm. We could also zoom in on the different compositions of the Executive Board over our sample period. In figure 2.5, each row displays another composition of the Executive Board in our sample period. Rather than presenting the ideal points, we show here the probabilities that the respective board members occupy a certain rank, with 1 being the most dovish and 6 being the most hawkish. Consider, for example, the first row. This concerns the first Executive Board. This board consisted of Bäckström (governor), Heikensten, Srjeber, Bergström, Hessius, and Nyberg.
Figure 2.5 Rank Probabilities in Different Boards
Notes: This figure shows the rank probabilities of Executive Board members in different compositions of the board. Each row shows another composition; each panel gives the probability that a member occupies each rank from dove to hawk (1–6).
This board composition remained unchanged from January 1, 1999, until December 31, 2000, when Kerstin Hessius left the Executive Board and was replaced by Kristina Persson. We see that in this first Executive Board, Srjeber clearly is the most hawkish member. Hessius also seems more hawkish than the other board members. The ranks of the other four are less clear, although the governor, Bäckström, seems more likely to be ranked in the middle, spots three and four. When Hessius is replaced by Persson, we notice a shift in the ranks. Persson is clearly the most dovish. Now Bergström, to whom we earlier attached the highest probability of being the most dovish, moves a spot to the middle. Going from row four to row five, we see that Persson and Srjeber left the Executive Board and were replaced by Svensson and Wickman-Parak. We see that now Svensson is clearly the most hawkish.14 Of particular interest in the case of the Riksbank is the position of the governor. Apel, Claussen, and Lennartsdotter (2010) note that at least until 2010, the governor had never been on the losing side in a vote. The Executive Board consists of six members, and in the case of a tie, the governor has the decisive vote. Given that in our sample we only have votes over two interest-rate proposals, this means that a governor would be on the losing side if four (or more) Executive Board members favored another interest rate. Apel, Claussen, and Lennartsdotter report that past Executive Board members feel that the governor at the Riksbank tends to support the majority view rather than trying to steer the majority in his direction. With the ideal point estimates in hand, we can try to quantify the probability that the governor does occupy the middle ground. In the left panel of figure 2.6, we show for the six different board compositions the probability that the governor occupies the middle ground.
We find that the chances fluctuate around 50 percent for the different board compositions. The right panel of figure 2.6 shows the ideal points in the different board compositions. We notice a substantial diversity of opinion during the period when Persson and Srjeber were both on the board.

Figure 2.6 Other Quantities of Interest
Notes: The left panel ("P(Governor = Median Voter)") shows, for each of the six board compositions, the probability that the governor is the median voter (rank 3, 4). The right panel ("Evolution of Ideal Points across Boards") shows the distribution of ideal points in the different boards. Only the point estimates of the ideal points are shown.
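With posterior draws of the ideal points in hand, quantities like those in figures 2.5 and 2.6 reduce to counting ranks across draws. The helper names and the toy posterior below are ours, not taken from the Riksbank estimates.

```python
import numpy as np

def rank_probabilities(draws):
    """draws: (S, N) posterior draws of N members' ideal points.
    Returns an N x N matrix R with R[n, r] the probability that member n
    occupies rank r (rank 0 = most dovish, rank N-1 = most hawkish)."""
    S, N = draws.shape
    ranks = draws.argsort(axis=1).argsort(axis=1)  # rank of each member, per draw
    return np.stack([(ranks == r).mean(axis=0) for r in range(N)], axis=1)

def prob_governor_median(draws, governor):
    """Probability that the governor occupies the middle ground. With an
    even number of members, ranks N//2 - 1 and N//2 count as the middle
    (ranks 3 and 4 in the six-member board, counting from 1)."""
    N = draws.shape[1]
    r = draws.argsort(axis=1).argsort(axis=1)[:, governor]
    middle = [N // 2 - 1, N // 2] if N % 2 == 0 else [N // 2]
    return np.isin(r, middle).mean()

# Toy posterior for a six-member board, with member 2 as governor:
rng = np.random.default_rng(1)
means = np.array([-2.0, -1.0, 0.0, 0.2, 1.0, 2.0])
draws = means + 0.5 * rng.normal(size=(2000, 6))
print(prob_governor_median(draws, governor=2))  # likely, but not certainly, median
```

Because each quantity is computed draw by draw, the estimation uncertainty in the ideal points propagates automatically into the rank and median-voter probabilities.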
The point of this simple exercise was to show what ideal point analysis is and what we can do with it. The approach is powerful in the sense that we obtain a joint probability distribution over all latent ideal points. We can easily construct substantive quantities of interest while keeping track of the uncertainty surrounding these quantities. As a final example, say we are interested in knowing whether Bäckström's successor, Heikensten, was more dovish or hawkish. Our model suggests that the probability that Bäckström is more hawkish than Heikensten is about 59 percent. The probability that Heikensten is more dovish than his successor, Ingves, is, on the other hand, about 70 percent.15
2.6 More Traditional Approaches This chapter can be read as a pamphlet promoting the use of ideal point models to complement more traditional approaches such as reaction functions. One key strength of Bayesian ideal point models we emphasize is that we can keep track of uncertainty in the estimates while constructing quantities of interest. Leaving this feature aside, one may ask how estimates of ideal point models compare to traditional estimates. In the case of the Riksbank, there is recent work on estimating preferences using a reaction function framework. Chappell, McGregor, and Vermilyea (2014) and Chappell and McGregor (2015) provide analyses of the preferences at the Riksbank. Chappell, McGregor, and Vermilyea (2014) argue that in the case of the Riksbank, the chairman is highly influential. This stands in contrast to Blinder (2004) and our assumption regarding voting at the Riksbank. However, Chappell and McGregor (2015) follow through on Chappell, McGregor, and Vermilyea (2014) and provide an interesting Monte Carlo study suggesting that we should be careful with the trust we place in preference estimates in a traditional reaction function framework. They provide evidence of substantial bias in the estimates and relate this to the incidental parameters problem. At first sight, the incidental parameters problem should not be of any concern in studies like these, given the nature of the panel data used. For example, in their study, Chappell and McGregor (2015) use 101 meetings and seventeen individuals. This problem seems to arise, however, because of various complications. The data are highly unbalanced, with more than 65 percent of the observations missing. For some voters, we have only a few observations, and on top of that we have a discrete dependent variable.
The authors speculate that the large biases they find in the preference estimates could well be present in other studies of MPC behavior.16 However, Chappell and McGregor (2015) do seem confident that at least the ranking of members' preferences is robust. To see how the ideal point estimates and reaction function estimates compare, we can look at the different graphs provided in figure 2.7. In these graphs we compare our estimates with estimates provided by Chappell and McGregor (2015) (top left graph) and Chappell, McGregor, and Vermilyea (2014) (top right graph). We can see that our ordering is similar to that of Chappell, McGregor, and Vermilyea (2014). The differences with Chappell and McGregor (2015) are larger. For example, we have Persson as the most dovish member in our sample, whereas Chappell and McGregor (2015) put her in the middle.17
Figure 2.7 Comparing Estimates
Notes: The top left graph plots our ideal point estimates against estimates of preferences reported in Chappell and McGregor 2015, table 1, column 1. The top right graph reports our ideal point estimates against estimates of preferences reported in Chappell, McGregor, and Vermilyea 2014, table 7. The bottom left graph reports ideal point estimates against net tightness frequencies as provided by Chappell, McGregor, and Vermilyea 2014, table 7. The bottom right graph reports estimates of preferences reported in Chappell, McGregor, and Vermilyea 2014 versus net tightness frequencies from the same article, table 7.
The bottom row shows graphs where our estimates and the estimates reported in Chappell, McGregor, and Vermilyea (2014) are compared with net tightness frequencies, also taken from Chappell, McGregor, and Vermilyea (2014). This is defined as the number of times a member prefers a rate higher than the committee choice less the number of times a member prefers a lower rate, with the difference expressed as a fraction of total votes. The authors consider this a good validation check. Visual inspection suggests that our estimates seem to do well by this measure. Furthermore, we note that we put Öberg and Srjeber in a different order from Chappell, McGregor, and Vermilyea (2014). Is Srjeber significantly more likely to be hawkish? Our approach allows a fast answer to this question. We find a nearly 97 percent chance of Srjeber being more hawkish than Öberg. Our analysis here was mainly illustrative. A further analysis of the Riksbank can be found in Eijffinger, Mahieu, and Raes (2013b). We are aware of four papers applying this
Monetary Policy Committees and Voting Behavior 53
approach to voting records of central banks: Hix, Hoyland, and Vivyan (2010); Eijffinger, Mahieu, and Raes (2013a); Eijffinger, Mahieu, and Raes (2013b); and Eijffinger, Mahieu, and Raes (2015). Hix, Hoyland, and Vivyan (2010) show that the British government has been able to move the position of the median voter in the MPC over time. Eijffinger, Mahieu, and Raes (2013a) use a similar approach to that of Hix, Hoyland, and Vivyan (2010) over a longer sample period. Their main finding is that external members tend to hold more diverse policy preferences than internal members. Furthermore, MPC members with industrial experience tend to be more hawkish. Eijffinger, Mahieu, and Raes (2013b) present a range of case studies on different central banks. Eijffinger, Mahieu, and Raes (2015) do not use voting records but instead use opinions expressed in FOMC meetings to create hypothetical votes, which are subsequently analyzed with a hierarchical spatial voting model. They find little evidence of presidential influence through appointments but do report that, historically, board governors tend to hold more dovish preferences than bank presidents.
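The two validation quantities used above (net tightness frequencies and the posterior probability that one member is more hawkish than another) are cheap to compute. The following sketch uses made-up vote data and hypothetical Gaussian posterior draws, not the chapter's actual estimates:

```python
import random

def net_tightness(preferred, chosen):
    """Net tightness frequency: the number of votes for a rate higher than
    the committee choice minus the number of votes for a lower rate,
    expressed as a fraction of total votes."""
    higher = sum(p > c for p, c in zip(preferred, chosen))
    lower = sum(p < c for p, c in zip(preferred, chosen))
    return (higher - lower) / len(preferred)

def prob_more_hawkish(draws_a, draws_b):
    """Share of posterior draws in which member A's ideal point exceeds
    member B's, i.e., an estimate of Pr(x_A > x_B | data)."""
    return sum(a > b for a, b in zip(draws_a, draws_b)) / len(draws_a)

# Hypothetical preferred repo rates versus committee decisions.
preferred = [4.0, 3.75, 3.5, 3.75, 4.0]
chosen = [3.75, 3.75, 3.5, 3.5, 3.75]
print(net_tightness(preferred, chosen))  # 0.6: this member leans hawkish

# Hypothetical posterior draws for two members' ideal points.
random.seed(1)
draws_a = [random.gauss(0.8, 0.5) for _ in range(10_000)]
draws_b = [random.gauss(-0.2, 0.5) for _ in range(10_000)]
print(round(prob_more_hawkish(draws_a, draws_b), 2))
```

With draws from the joint posterior, the draw-by-draw comparison propagates estimation uncertainty directly, which is the kind of quantity-of-interest calculation the chapter highlights.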
2.7 Extensions

The basic model we outlined above can be extended in various ways. Several extensions have been proposed in the literature investigating voting behavior in political bodies and judicial courts. One could also look into the vast literature on item response theory (IRT) for interesting extensions, or simply propose one's own. We discuss a few examples here to give an idea of what is possible. First, one could think of alternative link functions. The spatial voting model given by equation 1 can be thought of as belonging to a class of models:
P(y_nt = 1) = F(β_t x_n − α_t),   (4)
where F is a cumulative distribution function. Instead of opting for a logistic distribution, we could also specify F to be the cumulative distribution function (cdf) of the standard normal distribution, which would lead to a probit model. Other choices of link functions might have desirable properties. For example, Liu (2004) proposed using the cdf of a t-distribution with five to eight degrees of freedom. This approximates the logistic link but has heavier tails, which makes it more robust, hence the name robit. Another avenue for further research might be asymmetric link functions. This has been explored in the context of IRT models. Bazán, Branco, and Bolfarine (2006) find that a skew-probit model is more appropriate for modeling mathematical test results of pupils. We are not aware of any explorations of this in the context of votes (neither in political science nor in economics). Eijffinger, Mahieu, and Raes (2013a) work with a modification proposed by Bafumi, Gelman, Park, and Kaplan (2005), who propose adding error parameters ε_0 and ε_1 as follows:
P(y_nt = 1) = ε_0 + (1 − ε_0 − ε_1) logit⁻¹(β_t x_n − α_t).   (5)
54 Eijffinger, Mahieu, and Raes
The parameters ε_0 and ε_1 are introduced to make the model more robust to outliers. Throughout this chapter, we focus on models that are estimated in a Bayesian manner. This implies that another straightforward extension is to include hierarchical predictors. For example, Eijffinger, Mahieu, and Raes (2015) estimate ideal points of FOMC members using the following hierarchical setup:

P(y_nt = 1) = logit⁻¹(β_t x_n − α_t)
x_n ∼ N(μ_x, σ_x²)
μ_x = γ v_n
σ_x ∼ Unif(0, 1)
γ ∼ N(0, 2)
β_t ∼ N(1, 4), truncated at 0
α_t ∼ N(0, 4).

Here the ideal points x_n are modeled as depending on predictors v_n. In their application, these predictors include whether or not the FOMC member is a board governor, a dummy variable for the appointing president, and career experiences before joining the FOMC. Besides the extensions we discuss here, various others are possible. The IRT literature is very rich, and exploring that literature might prove insightful when thinking about modeling votes in committees.
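The alternative link functions in this section are easy to compare numerically. Below is a minimal sketch (the parameter values are hypothetical, not estimates from the chapter) of the vote probability in equation 4 under the logistic and probit links, plus the Bafumi, Gelman, Park, and Kaplan (2005) robust variant of equation 5; the robit link is omitted because the standard library has no t cdf:

```python
import math

def inv_logit(z):
    """Logistic cdf: the baseline link in the spatial voting model."""
    return 1.0 / (1.0 + math.exp(-z))

def probit(z):
    """Standard normal cdf, computed via erf: the probit link."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def robust_logit(z, eps0=0.01, eps1=0.01):
    """Robust variant: mixes in error rates eps0, eps1 so the vote
    probability is bounded away from 0 and 1, limiting the influence
    of single aberrant votes."""
    return eps0 + (1.0 - eps0 - eps1) * inv_logit(z)

def vote_prob(x_n, beta_t, alpha_t, link=inv_logit):
    """P(y_nt = 1) = F(beta_t * x_n - alpha_t) for a chosen link F."""
    return link(beta_t * x_n - alpha_t)

# A fairly hawkish member (x_n = 1.5) at a meeting with discrimination
# beta_t = 1.2 and cut point alpha_t = 0.4, under each link:
for f in (inv_logit, probit, robust_logit):
    print(f.__name__, round(vote_prob(1.5, 1.2, 0.4, link=f), 3))
```

The robit of Liu (2004) would simply replace inv_logit with the cdf of a t-distribution with five to eight degrees of freedom; swapping links changes how strongly a single surprising vote pulls the estimated ideal point.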
2.8 Conclusion

There is a substantial literature using voting records of MPCs to learn about decision-making at central banks. This literature is motivated by the fact that central banks ought to make decisions independently, need to be held accountable for their decision-making, and ideally leverage the pooling of opinions. MPCs, however, come in a wide variety of forms, often reflecting their historical development. Even if one could redesign a central bank from scratch, it is not entirely obvious what an optimal design should look like (Reis 2013). The literature has studied various aspects of committees around the world, including appointment systems, career concerns, the inclusion of outsiders, and size. Some aspects are relatively well understood by now, but generalizing conclusions requires caution because most of these studies are in essence case studies. One way forward is the study of a more diverse set of central banks. To date, however, a number of central banks remain reluctant to share transcripts and voting records, despite pleas for more transparency (Eijffinger 2015). In this chapter, we have discussed an approach to studying voting records (and, by extension, transcripts). The methods we put forward are commonplace in the analysis of legislative bodies and judicial courts but are as yet underutilized in economics.
While we see substantial advantages to this approach, such as the flexibility to create quantities of interest and the careful incorporation of uncertainty, there are certainly drawbacks. One objection to the approach outlined in this chapter is that it is too simplistic. We have heard on more than one occasion that a complex decision-making process cannot and should not be reduced to mapping policymakers onto one simple dimension. This objection is similar to objections made in the context of political science (see Lauderdale and Clark 2014). However, as demonstrated in Eijffinger, Mahieu, and Raes (2013a), even a single latent dimension generates a low prediction error and hence leaves little room for statistical improvement. Furthermore, in the stylized spatial voting model we described here, we neglected to incorporate macroeconomic information that is often deemed crucial in monetary policymaking (economic growth, inflation, etc.). All this information was captured by the meeting-specific parameters α_t. In principle, one can try to model the parameters α_t hierarchically, as in the extension we outlined for the ideal points in the previous section. The extent to which this works remains to be explored. It should be clear that we do not argue that spatial voting is superior to approaches commonly used in the study of MPCs (e.g., estimating reaction functions). However, we find spatial voting models a neat addition to the toolkit of scholars investigating committees. We have also touched on some possible extensions of the spatial voting framework. Furthermore, not only voting records but also transcripts can be used as data. Eijffinger, Mahieu, and Raes (2015) use transcripts to create artificial voting records. Another approach could be to use text analysis (see Baerg and Lowe 2016) in addition to spatial voting models.
Notes
1. See, for example, the survey by Lybek and Morris (2004). The book by Siklos, Bohl, and Wohar (2010) shows the diversity in institutional framework among central banks.
2. On top of purely technical skills, there may be a need for some political skills as well; see Goodhart and Meade (2004), 14–16.
3. See Gerling, Grüner, Kiel, and Schulte (2005) for a survey.
4. Papers studying this topic include Meade and Sheets (2005), Chappell, McGregor, and Vermilyea (2008), Hayo and Neuenkirch (2013), Bennani, Farvaque, and Stanek (2015), and Jung and Latsos (2015).
5. Other cross-country studies are Vaubel (1997), Göhlmann and Vaubel (2007), and Belke and Potrafke (2012).
6. See, for example, Chappell, Havrilesky, and McGregor (1993), Tootell (1996), Chappell, Havrilesky, and McGregor (1995), Havrilesky and Gildea (1995), Chang (2001), Falaschetti (2002), and Adolph (2013).
7. It should be added that the degree of individualism at the FOMC has evolved over time. Many researchers attribute a lower degree of individualism to the influence of the governor; see Blinder (2009).
8. Extending the model to higher dimensions is possible, although identification (see below) becomes more complicated. We limit ourselves here to one-dimensional models as it facilitates the discussion. We return to the issue of one latent dimension later on.
9. This discussion and example are taken from Eijffinger, Mahieu, and Raes (2013a).
10. See also the discussion in Jackman (2009), 458–459.
11. A more detailed analysis of ideal points in the Executive Board of the Riksbank can be found in Eijffinger, Mahieu, and Raes (2013b). An insightful discussion on the history and workings of the Executive Board of the Riksbank is Apel, Claussen, and Lennartsdotter (2010).
12. Apel, Claussen, and Lennartsdotter (2010) provide some survey evidence from Executive Board members indicating that there is some bargaining region in repo decisions. When the majority vote is reasonably close to their own assessment, some executive members occasionally refrain from entering reservations.
13. In our sample period, we have no meetings where more than two policy alternatives were mentioned in the voting record.
14. Two remarks are in order regarding this result. First, this result comes from the restricted sample we use. Second, these ranks are relative statements.
15. It should be understood that a statement like this is conditional on the model being true. This is clearly never the case, so this statement holds strictly speaking only in the model world. See McElreath (2016) for an insightful discussion.
16. To be clear, Bayesian inference is not necessarily immune to this problem, but in some cases we may have good hope that Bayesian approaches have superior large sample properties; see Jackman (2009), 437. In the case of ideal point models, we are not aware of research looking at bias and confidence interval coverage for data of the size we often deal with in the context of MPCs.
17. We should point out that these are not the preferred estimates of Chappell and McGregor (2015). We pick these estimates to illustrate how different preference estimates can be.
References
Adolph, C. 2013. Bankers, Bureaucrats, and Central Bank Politics: The Myth of Neutrality. New York: Cambridge University Press.
Apel, M., C. Claussen, and P. Lennartsdotter. 2010. “The Executive Board of the Riksbank and Its Work on Monetary Policy—Experiences from the First Ten Years.” Economic Review, Riksbank.
Baerg, N. R., and W. Lowe. 2016. “A Textual Taylor Rule: Estimating Central Bank Preferences Combining Topic and Scaling Methods.” Mimeo.
Bafumi, J., A. Gelman, D. K. Park, and N. Kaplan. 2005. “Practical Issues in Implementing and Understanding Bayesian Ideal Point Estimation.” Political Analysis 13: 171–187.
Ball, L. 2016. “Ben Bernanke and the Zero Bound.” Contemporary Economic Policy 34, no. 1: 7–20.
Bazán, J. L., M. D. Branco, and H. Bolfarine. 2006. “A Skew Item Response Model.” Bayesian Analysis 1, no. 4: 861–892.
Belke, A., and N. Potrafke. 2012. “Does Government Ideology Matter in Monetary Policy? A Panel Data Analysis for OECD Countries.” Journal of International Money and Finance 31, no. 5: 1126–1139.
Bennani, H., E. Farvaque, and P. Stanek. 2015. “FOMC Members’ Incentives to Disagree: Regional Motives and Background Influences.” NBP working paper 221, Warsaw.
Besley, T., N. Meads, and P. Surico. 2008. “Insiders versus Outsiders in Monetary Policymaking.” American Economic Review 98, no. 2: 218–223.
Bhattacharjee, A., and S. Holly. 2010. “Rational Partisan Theory, Uncertainty, and Spatial Voting: Evidence for the Bank of England’s MPC.” Economics and Politics 22, no. 2: 151–179.
Blinder, A. 2004. The Quiet Revolution: Central Banking Goes Modern. New Haven: Yale University Press.
Blinder, A. S. 2009. “Making Monetary Policy by Committee.” International Finance 12, no. 2: 171–194.
Chang, K. H. 2001. “The President versus the Senate: Appointments in the American System of Separated Powers and the Federal Reserve.” Journal of Law, Economics, and Organization 17, no. 2: 319–355.
Chappell, H. W., T. M. Havrilesky, and R. R. McGregor. 1995. “Policymakers, Institutions, and Central Bank Decisions.” Journal of Economics and Business 47, no. 2: 113–136.
Chappell, H. W., T. M. Havrilesky, and R. R. McGregor. 2000. “Monetary Policy Preferences of Individual FOMC Members: A Content Analysis of the Memoranda of Discussion.” Review of Economics and Statistics 79, no. 3: 454–460.
Chappell, H. W. J., and R. R. McGregor. 2015. “Central Bank Voting and the Incidental Parameters Problem.” Working paper.
Chappell, H. W., R. R. McGregor, and T. A. Vermilyea. 2007. “The Role of the Bias in Crafting Consensus: FOMC Decision Making in the Greenspan Era.” International Journal of Central Banking 3, no. 2: 39–60.
Chappell, H. W., R. R. McGregor, and T. A. Vermilyea. 2008. “Regional Economic Conditions and Monetary Policy.” European Journal of Political Economy 24, no. 2: 283–293.
Chappell, H. W., R. R. McGregor, and T. A. Vermilyea. 2014. “Power-Sharing in Monetary Policy Committees: Evidence from the United Kingdom and Sweden.” Journal of Money, Credit and Banking 46, no. 4: 665–692.
Chappell, Henry W. J., T. M. Havrilesky, and R. R. McGregor. 1993. “Partisan Monetary Policies: Presidential Influence through the Power of Appointment.” Quarterly Journal of Economics 108, no. 1: 185–218.
Clinton, J., S. Jackman, and D. Rivers. 2004. “The Statistical Analysis of Roll Call Data.” American Political Science Review 98, no. 2: 355–370.
Eichler, S., and T. Lähner. 2014. “Forecast Dispersion, Dissenting Votes, and Monetary Policy Preferences of FOMC Members: The Role of Individual Career Characteristics and Political Aspects.” Public Choice 160, no. 3–4: 429–453.
Eijffinger, S. 2015. “Monetary Dialogue 2009–2014: Looking Backward, Looking Forward.” Kredit und Kapital 48, no. 1: 1–10.
Eijffinger, S., R. Mahieu, and L. Raes. 2018. “Inferring Hawks and Doves from Voting Records.” European Journal of Political Economy 51: 107–120.
Eijffinger, S. C., R. J. Mahieu, and L. Raes. 2013a. “Estimating the Preferences of Central Bankers: An Analysis of Four Voting Records.” CEPR discussion paper DP9602.
Eijffinger, S. C., R. Mahieu, and L. Raes. 2015. “Hawks and Doves at the FOMC.” CEPR discussion paper DP10442.
El-Shagi, M., and A. Jung. 2015. “Does the Greenspan Era Provide Evidence on Leadership in the FOMC?” Journal of Macroeconomics 43: 173–190.
Falaschetti, D. 2002. “Does Partisan Heritage Matter? The Case of the Federal Reserve.” Journal of Law, Economics, and Organization 18, no. 2: 488–510.
Gerlach-Kristen, P. 2009. “Outsiders at the Bank of England’s MPC.” Journal of Money, Credit and Banking 41, no. 6: 1099–1115.
Gerlach-Kristen, P., and E. Meade. 2010. “Is There a Limit on FOMC Dissents? Evidence from the Greenspan Era.” Mimeo.
Gerling, K., H. P. Grüner, A. Kiel, and E. Schulte. 2005. “Information Acquisition and Decision Making in Committees: A Survey.” European Journal of Political Economy 21, no. 3: 563–597.
Göhlmann, S., and R. Vaubel. 2007. “The Educational and Occupational Background of Central Bankers and Its Effect on Inflation: An Empirical Analysis.” European Economic Review 51, no. 4: 925–941.
Goodhart, C., and E. Meade. 2004. “Central Banks and Supreme Courts.” Moneda y Credito 218: 11–42.
Goodhart, C. A. 2002. “The Constitutional Position of an Independent Central Bank.” Government and Opposition 37, no. 2: 190–210.
Harris, M., P. Levine, and C. Spencer. 2011. “A Decade of Dissent: Explaining the Dissent Voting Behavior of Bank of England MPC Members.” Public Choice 146, no. 3: 413–442.
Harris, M. N., and C. Spencer. 2009. “The Policy Choices and Reaction Functions of Bank of England MPC Members.” Southern Economic Journal 76, no. 2: 482–499.
Havrilesky, T., and J. Gildea. 1991. “Screening FOMC Members for Their Biases and Dependability.” Economics & Politics 3, no. 2: 139–149.
Havrilesky, T., and J. Gildea. 1995. “The Biases of Federal Reserve Bank Presidents.” Economic Inquiry 33, no. 2: 274.
Hayo, B., and M. Neuenkirch. 2013. “Do Federal Reserve Presidents Communicate with a Regional Bias?” Journal of Macroeconomics 35: 62–72.
Hix, S., B. Hoyland, and N. Vivyan. 2010. “From Doves to Hawks: A Spatial Analysis of Voting in the Monetary Policy Committee of the Bank of England.” European Journal of Political Research 49, no. 6: 731–758.
Horvath, R., M. Rusnak, K. Smidkova, and J. Zapal. 2014. “The Dissent Voting Behaviour of Central Bankers: What Do We Really Know?” Applied Economics 46, no. 4: 450–461.
Jackman, S. 2009. Bayesian Analysis for the Social Sciences. Chichester: John Wiley.
Jung, A., and G. Kiss. 2012. “Preference Heterogeneity in the CEE Inflation-Targeting Countries.” European Journal of Political Economy 28, no. 4: 445–460.
Jung, A., and S. Latsos. 2015. “Do Federal Reserve Bank Presidents Have a Regional Bias?” European Journal of Political Economy 40: 173–183.
Lauderdale, B. E., and T. S. Clark. 2014. “Scaling Politically Meaningful Dimensions Using Texts and Votes.” American Journal of Political Science 58, no. 3: 754–771.
Liu, C. 2004. “Robit Regression: A Simple Robust Alternative to Logistic and Probit Regression.” In Applied Bayesian Modeling and Causal Inference from an Incomplete-Data Perspective, edited by A. Gelman and X. L. Meng, 227–238. Chichester: John Wiley.
Lybek, T., and J. Morris. 2004. “Central Bank Governance: A Survey of Boards and Management.”
McElreath, R. 2016. Statistical Rethinking: A Bayesian Course with Examples in R and Stan. Boca Raton: CRC Press.
Meade, E. E., and D. N. Sheets. 2005. “Regional Influences on FOMC Voting Patterns.” Journal of Money, Credit and Banking 37, no. 4: 661–677.
Reis, R. 2013. “Central Bank Design.” Journal of Economic Perspectives 27, no. 4: 17–43.
Siklos, P. L., M. T. Bohl, and M. E. Wohar. 2010. Challenges in Central Banking: The Current Institutional Environment and Forces Affecting Monetary Policy. Cambridge: Cambridge University Press.
Siklos, P. L., and M. Neuenkirch. 2015. “How Monetary Policy Is Made: Two Canadian Tales.” International Journal of Central Banking 11, no. 1: 225–250.
Thornton, D. L., and D. C. Wheelock. 2014. “Making Sense of Dissents: A History of FOMC Dissents.” Federal Reserve Bank of St. Louis Review 96, no. 3: 213–227.
Tootell, G. M. 1996. “Appointment Procedures and FOMC Voting Behavior.” Southern Economic Journal 63, no. 1: 191–204.
Vaubel, R. 1997. “The Bureaucratic and Partisan Behavior of Independent Central Banks: German and International Evidence.” European Journal of Political Economy 13, no. 2: 201–224.
chapter 3
Peaks and Troughs
Economics and Political Economy of Central Bank Independence Cycles
Donato Masciandaro and Davide Romelli
3.1 Introduction

In 1824, David Ricardo wrote: It is said that Government could not be safely entrusted with the power of issuing paper money; that it would most certainly abuse it. . . . There would, I confess, be great danger of this if Government—that is to say, the Ministers—were themselves to be entrusted with the power of issuing paper money.
Ricardo’s ideas today describe the functioning of monetary policy institutions more closely than at any previous point in history. Central bankers can implement their policies with a degree of autonomy that their predecessors could only have dreamed of. Yet the history of central banks is rich in modifications to their role and functions (Goodhart 1988; Lastra 1996; Goodhart 2011). In particular, over the past four decades, central banks around the world have seen their mandates progressively narrowed and focused on the goal of price stability. At the same time, this narrowing mandate has been accompanied by changes in their governance arrangements, the main focus of which became an increasing degree of independence of monetary policy authorities from the executive power. This evolution, prompted by Kydland and Prescott’s (1977) well-known time inconsistency problem, has attracted significant interest from the economics profession starting in the 1990s, when the first indices of central bank independence (CBI) were developed. Figure 3.1 displays this growing interest in CBI by showing the number of academic papers and research studies published with a title containing these keywords
60 Masciandaro and Romelli
[Figure 3.1 appears here: a bar chart of the number of publications per year, 1991–2015.]
Figure 3.1 Research and Policy Articles with a “Central Bank Independence” title (1991–2015)
Note: Data obtained from SSRN and JSTOR.
between 1991 and 2015.1 During the period 1991–1998, sixty-four research and policy articles were published, peaking in 1998 with thirty-three publications. The 2000s saw a new wave of research in the field (253 published articles), with a new peak of forty-six published articles in 2008. Recent years have seen a renewed interest in studying CBI, with an increasing trend in the number of publications on the topic. This evolution of research on CBI has closely followed the pace of reforms in central bank institutional design. For example, throughout the 1990s, a large wave of reforms that increased the degree of independence of monetary policy institutions was observed in both developed and developing countries. Similarly, the increased interest in recent years was brought about by the debate on the optimal design of central banks that was sparked by the 2008 global financial crisis (GFC). The GFC has posed new challenges to modern central banking models, in which monetary policy is conducted by an independent central bank that follows an interest-rate rule-based approach to stabilize inflation and output gaps (Goodhart, Osorio, and Tsomocos 2009; Alesina and Stella 2010; Aydin and Volkan 2011; Curdia and Woodford 2011; Giavazzi and Giovannini 2011; Gertler and Karadi 2011; Issing 2012; Woodford 2012; Cohen-Cole and Morse 2013; Cukierman 2013). In the aftermath of the crisis, there have been a large number of important reforms to central bank governance, in particular regarding the involvement of central banks in banking and financial supervision
Peaks and Troughs: Central Bank Independence Cycles 61 (Masciandaro and Romelli 2018). For example, the Dodd-Frank Act of 2010 increased the responsibilities of the Fed as prudential supervisor (Komai and Richardson 2011; Gorton and Metrick 2013).2 In Europe, the European Systemic Risk Board (ESRB), established in 2010, provides macroprudential supervision of the European Union’s financial system under the guidance of the European Central Bank (ECB), while the European Single Supervisory Mechanism (SSM), which started operating in November 2014, assigns banking-sector supervision responsibilities to the ECB together with national supervisory authorities. These reforms indicate a reversal in central bank governance, since granting more supervisory power to the central bank is generally associated with a lower degree of CBI (Masciandaro and Quintyn 2009; Orphanides 2011; Eichengreen and Dincer 2011; Masciandaro 2012a and 2012b; Masciandaro and Quintyn 2015).3 In this chapter, we provide an overview of this evolution in CBI over the past four decades. We investigate the endogenous determination of central bank institutional design from both a theoretical and an empirical perspective. Theoretically, we build a small, stylized political economy model in which citizens delegate to policymakers the optimal design of central bank governance. This toy model is used to highlight some key determinants that can explain the evolution of CBI as a function of macroeconomic shocks and political economy characteristics of countries. We then employ recently developed dynamic indices of CBI to highlight the peaks and troughs of CBI over the period 1972–2014. Using the recomputed Grilli, Masciandaro, and Tabellini (1991) index in Arnone and Romelli (2013) and Romelli (2018), we highlight several interesting trends in central bank design. 
For a sample of sixty-five countries, we show that the increasing trend in CBI over the period 1972–2007 was reversed after the GFC, mainly owing to significant changes in the role of central banks in banking supervision. We then provide a systematic investigation of the political economy and macroeconomic characteristics that are associated with CBI over time. Employing a dynamic index of CBI, we analyze the evolution of CBI across a large sample of countries and over time and highlight some new and interesting determinants of the endogenous evolution of central bank design. We find that legacy matters; that is, past levels of CBI are highly correlated with future ones. We also show that past episodes of high inflation are positively correlated with high levels of CBI in the following periods. This corroborates the findings in Crowe and Meade (2008) and suggests that high inflation aversion does, indeed, lead governments to assign higher degrees of independence to their central banks. However, taking stock of our richer panel data, we show that the effect of inflation aversion vanishes in recent years (2000–2014), when the degree of CBI tends to be more closely related to other types of macroeconomic shocks, such as fiscal and exchange-rate shocks. Section 3.2 of this chapter provides a systematic overview of the literature on CBI over the past decades. Section 3.3 presents a stylized model that can explain the drivers of the optimal level of CBI. Section 3.4 discusses the data employed and some descriptive statistics, while section 3.5 presents the empirical strategy and results, and section 3.6 concludes.
3.2 The Design of CBI: The State of the Art

Starting with the new classical revolution, a large literature has been concerned with the optimal institutional design of monetary policy authorities and how it affects macroeconomic outcomes. The main theoretical argument is that policymakers tend to use monetary tools with a short-sighted perspective, employing an inflation tax to smooth different kinds of macroeconomic shocks in an attempt to exploit the short-term trade-off between real economic gains and nominal (inflationary) costs.4 Moreover, the more efficient markets are, the greater the risk that short-sighted monetary policies merely produce inflationary distortions, as rational agents will anticipate the political incentives to use an inflation tax and will fully adjust their expectations. In this framework, the Friedman-Lucas proposition on monetary policy neutrality holds (Friedman 1968; Lucas 1973). Furthermore, this political inflation bias can generate even greater negative externalities, such as moral hazard among politicians (if the inflation tax is used for public finance accommodation) or bankers (if monetary laxity is motivated by bank bailout needs) (Nolivos and Vuletin 2014). As a result, in the late 1970s and early 1980s, the idea of banning the use of monetary policy for inflation tax purposes gained broad consensus among policymakers and the academic community. Consequently, as soon as this institutional setting gained momentum, the relationship (governance) between the policymakers (responsible for the design of policies) and the central bank (in charge of monetary policy) became crucial in avoiding the inflation bias.
In this context, Rogoff (1985) argues that only an independent central bank is able to implement credible monetary policies that will favor lower inflation rates and thus eliminate the time inconsistency problem of government policies (Kydland and Prescott 1977).5 Walsh (1995a) proposes an alternative way to model CBI using a principal-agent framework that underlines the importance of assigning stronger incentives to central bankers in order to reach the socially optimal policy. For example, the Reserve Bank of New Zealand Act of 1989 establishes a contract between the central bank and the government that is close in spirit to Walsh’s optimal central bank contract (Walsh 1995b). The optimal design of central bank governance is essentially a medal with two sides. On one side, the central banker has to be independent, that is, able to implement policies without any external (political) short-sighted interference. The central banker thereby becomes a veto player against inflationary monetary policies. On the other side, the central banker has to be conservative, where conservatism refers to the importance that he or she assigns to medium-term price stability relative to other macroeconomic objectives. Being conservative is thus a necessary condition to avoid the central banker himself or herself becoming a source of inflation bias, and independence is often considered the premise for conservative monetary policies. Moreover, independent
and conservative central banks are credible if and only if the institutional setting in which they operate guarantees the accountability and transparency of their policies. Given these key characteristics of monetary policy settings, a large literature has developed a set of indices that attempt to capture the institutional features of central banks and gauge their degree of independence, conservatism, and transparency. Seminal works include Bade and Parkin (1982), Grilli, Masciandaro, and Tabellini (1991), Cukierman (1992), and Masciandaro and Spinelli (1994).6 Most of these works develop de jure indices of independence based on central bank charters. One exception is Cukierman (1992), who first distinguishes between legal and de facto indicators of independence. These classical indices of independence have been updated by, among others, Cukierman, Miller, and Neyapti (2002) and Jácome and Vázquez (2008) for the Cukierman index and Arnone et al. (2009) and Arnone and Romelli (2013) for the Grilli, Masciandaro, and Tabellini index. Moreover, several recent works extend previous measures of CBI by looking at other central bank characteristics. For example, Crowe and Meade (2008) and Dincer and Eichengreen (2014) develop measures of CBI and transparency. Vuletin and Zhu (2011) propose a new de facto index of independence, identifying two different mechanisms embedded in the measure of the turnover rate of central bank governors (see also Dreher, Sturm, and de Haan 2008 and 2010). Together with the construction of these indices of central bank independence, a large literature has attempted to determine whether the degree of independence is associated with important macroeconomic indicators such as inflation rates, public debt, and interest rates, as well as income and growth.
The aim was to verify whether the existence of a monetary veto player reduces the intended and unintended effects of the misuse of the inflation tax and produces positive spillovers on other macroeconomic variables. By and large, this literature has produced mixed results (de Haan and Sturm 1992; Alesina and Summers 1993; Alesina and Gatti 1995; Posen 1995; Forder 1996; Campillo and Miron 1997; Sturm and de Haan 2001; Gutierrez 2003; Jácome and Vázquez 2008; Siklos 2008; Ftiti, Aguir, and Smida 2017). For example, Klomp and de Haan (2010b) perform a meta-regression analysis of fifty-nine studies examining the relationship between inflation and CBI and confirm the existence of a negative and significant relation between inflation and CBI in Organization for Economic Cooperation and Development (OECD) countries, although the results are sensitive to the indicator used and the estimation period chosen. More recent studies nonetheless confirm the importance of legal CBI in explaining inflation rates (Cukierman 2008; de Haan, Masciandaro, and Quintyn 2008; Carlstrom and Fuerst 2009; Down 2009; Alpanda and Honig 2009; Alesina and Stella 2010; Klomp and de Haan 2010a; Maslowska 2011; Arnone and Romelli 2013), government deficits (Bodea 2013), and financial stability (Cihak 2007; Klomp and de Haan 2009; Ueda and Valencia 2014).7 Despite this growing consensus, the GFC has once again brought central banks to the core of policy and academic debate by highlighting the need to reconsider, among other things, their role in banking supervision (Masciandaro 2012b) and financial stability. Central banks around the world are now perceived as policy institutions with the goal of promoting both monetary and financial stability, a double mandate that might bring a
new form of time inconsistency problem (Nier 2009; Ingves 2011; Ueda and Valencia 2014). Yet empirical investigations on the interplay between CBI and financial stability have led to confounding results. For instance, Cihak (2007) and Klomp and de Haan (2009) find that CBI fosters financial stability, while Berger and Kißmer's (2013) analysis suggests that more independent central banks are less willing to prevent financial crises. Finally, it is also important to note how, starting from 2008, central banks such as the Federal Reserve and the Bank of England have implemented large-scale quantitative easing (QE) policies. Despite little evidence of significant side effects of these policies, the potential losses that could result from these nonconventional operations might threaten CBI in the future (Goodfriend 2011; Ball et al. 2016).8 The large majority of these empirical studies essentially consider CBI as an exogenous (independent) variable that can be useful in explaining macroeconomic trends. Yet subsequent research has argued that political institutions such as central banks evolve endogenously in response to a set of macroeconomic factors (Farvaque 2002; Aghion, Alesina, and Trebbi 2004; Polillo and Guillén 2005; Brumm 2006 and 2011; Bodea and Hicks 2015; Romelli 2018). The step forward in this line of research is then to consider the degree of CBI as an endogenous (dependent) variable that has to be explained.9 What are the drivers that can motivate the decision of a country to adopt a certain degree of independence of its central bank? Why and how are policymakers forced to implement reforms that reduce their power to use the inflation tax by increasing the degree of independence of the central bank? Various hypotheses have been advanced to explain the genesis of the political process that leads a monetary policy regime to assume a given set of characteristics.
Developments in endogenizing CBI have been the subject of analysis in both economics and political science. One argument relates to the possibility that the degree of CBI depends on the presence of constituencies that are strongly averse to the use of the inflation tax, which drives policymakers to bolster the status of the central bank (the constituency view).10 Other views argue that the aversion to the use of the inflation tax is structurally written into the features of the overall legislative and/or political system, which influence policymakers' decisions on whether to have a more or less independent central bank (the institutional view).11 Yet another view stresses the role of a country's culture and traditions of monetary stability in influencing policymakers' choices (the culture view).12 All three views assign a central role to the preferences of citizens in determining the degree of CBI (Masciandaro and Romelli 2015). In the constituency view, present preferences against the use of the inflation tax are relevant; in the institutional and culture views, past anti-inflationary preferences influence present policymakers' decisions. It is also evident that these preferences might change following periods of economic turmoil, prompting a wave of reforms in the design of central bank governance. Overall, whatever the adopted view in explaining the evolution of CBI, our attention should focus on two crucial elements: social preferences and the incentives and constraints that shape the behavior of the agent responsible for the monetary setting
design, that is, the incumbent policymaker. Motivated by these arguments, the next section of this chapter uses a political economy framework to show how the optimal monetary policy design can evolve as a function of these two elements.
3.3 Citizens, Politicians, and Cycles in CBI

In this section, we study the optimal design of central bank governance using a delegation framework in which a policymaker's choice depends on the economic and institutional environment existing at a given time, which, in turn, determines the political weights put on the pros and cons of assigning the central bank a certain level of independence. Our framework is based on two main assumptions. First, we assume that incumbent politicians weight the gains and losses of adopting a central bank governance setting according to their own preferences. Second, policymakers are politicians, and, as such, they are held accountable at elections for how they have managed to please voters. As a result, policymakers also weight the costs and benefits of setting a level of CBI given the median voter's preferences.13 We consider an economy with rational agents (citizens) who dislike the short-sighted use of the inflation tax and prefer monetary stability. These rational citizens will then fully anticipate the government's incentive to use inflation to address different kinds of macroeconomic shocks.14 Hence, our setting is that of a democracy where citizens dislike political inflationary biases and prefer to have independent and credible central banks as monetary actors. The degree of CBI then becomes a possible institutional device to confront political bias, whereby citizens acknowledge that the definition of an optimal level of CBI involves a trade-off between avoiding the inflationary bias in normal times and using stabilization devices in crisis times. We assume that this concern about the effectiveness of a central bank regime takes the form of a simple utility function in which social welfare is linearly increasing in the level of CBI, as follows:
U(θ) = θ, (1)
where θ denotes the level of CBI. In a democracy, citizens assign to the elected policymaker the task of designing the optimal level of CBI. This policymaker is a politician who aims to please the voter and whose reward is based on how he or she carries out the job, that is, defining and implementing an optimal level of CBI.15 The outcome of policymaking, that is, the level of CBI, θ, is the result of two factors: the policymaker’s effort and his or her ability, as follows:
θ = e + Ω, (2)
where e denotes the policymaker's effort and Ω his or her ability. The delegation framework of the model is depicted in figure 3.2. The following sequence of events is assumed. First, society chooses to delegate to the policymaker the task of implementing the optimal level of CBI. This policymaker chooses a level of effort e before knowing his or her ability Ω in implementing this particular task. After the regime is implemented, the policymaker learns his or her ability; however, citizens only observe the level of CBI, θ, and cannot distinguish between effort and ability. Finally, the policymaker is rewarded for the task based on the observed level of CBI and not the policymaker's effort, since citizens cannot distinguish between the two. The policymaker acts with the goal of maximizing his or her utility function, denoted by Z, defined as follows:
Z(θ, e) = R(U(θ)) − C(e), (3)
where R(U (θ)) is a reward function and C(e) is the cost function. The politician’s reward is a function of the social utility, itself a function of the level of CBI, while the political costs are a function of the effort in implementing the task.16 Political reward is related to the possibility to be reelected, which is an increasing function of the utility of voters, that is, the social welfare function, U(θ) . Hence, in aiming to please voters, the policymaker’s goal becomes aligned with the interest of the citizens. Moreover, it is natural to assume that each delegated task the politician has to fulfill can be more or less convenient from the policymaker’s point of view in terms of political gains. We denote the political value the policymaker assigns to fulfill the specific task of implementing an optimal level of CBI by β . Therefore, the reward function can assume the specific form:
R(U(θ)) = βU(θ), (4)
with β ∈ [0, 1]. The coefficient β can also be interpreted as a measure of the population's aversion to inflation. As a result, the more the median voter values an independent monetary policy's ability to safeguard against inflation, the higher the political benefits to the policymaker from implementing such a policy. The incentives alignment between the policymaker and the citizens is a necessary and sufficient condition to characterize the optimal behavior of the policymaker.
Figure 3.2 Timing of the Delegation Framework. [Timeline: voters delegate policymakers to set CBI → policymaker chooses effort → optimal level of CBI is implemented and ability revealed → voters observe CBI but cannot distinguish between effort and ability.]
Our main interest lies in the factors that might impact the political cost function, C(e). The policymaker knows that if monetary policy is delegated to an independent bureaucracy committed to a monetary stability goal, the policymaker will face certain rigidities in implementing accommodative and stabilizing policies when the economy is hit by certain shocks. In other words, we assume that the political costs are related to a series of macroeconomic events that call for a lax monetary policy that cannot be implemented if the central banker is given a certain degree of independence. We consider five such macroeconomic shocks:
1) Political shocks (Pol)
2) Unemployment shocks (U)
3) Fiscal shocks (F)
4) Financial shocks (Fin)
5) Foreign exchange shocks (FEx)

Given these shocks, we assume a simple form of the political cost function:
C(e) = ce², (5)
where the parameter c = c(Pol, U, F, Fin, FEx) is linearly increasing in the probability that the country will experience a political (Pol), unemployment (U), fiscal (F), financial (Fin), or foreign exchange (FEx) shock.17 The exact relationship between these political costs and macroeconomic conditions will depend on an array of country characteristics, which we will address in the empirical section of this chapter. Given these assumptions, the policymaker's optimal decision can be written as:
max_e Z = βθ − ce²,  subject to  θ = e + Ω.

The first-order condition yields a straightforward solution for the optimal level of effort of the policymaker:

e = β/(2c). (6)
Thus, in this simple framework, the level of effort does not depend on political ability. Once the level of effort is set, the policymaker’s ability is revealed, and the optimal level of central bank independence is set as follows:
θ = β/(2c) + Ω̂, (7)
where Ω̂ is the revealed level of ability of the policymaker. This simple framework gives us an easily testable empirical framework that can explain the different levels of CBI implemented across countries and what can lead to increases or decreases in CBI across countries and time. The main testable implication of the model is summarized in hypothesis 1:

Hypothesis 1: The level of CBI in a given country is likely to increase with a society's aversion to inflation (higher β) and the ability of the policymaker (Ω̂), while it is likely to decrease with the likelihood that the country has experienced a series of shocks that bear political costs (higher c).
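The algebra behind equations (6) and (7) can be verified symbolically. The following is a minimal sketch, assuming sympy is available; symbol names are chosen for readability:

```python
# Symbolic check of the policymaker's problem in equations (1)-(7).
import sympy as sp

beta, c, e, Omega = sp.symbols("beta c e Omega", positive=True)

theta = e + Omega                 # eq. (2): CBI = effort + ability
Z = beta * theta - c * e**2       # eq. (3) with R = beta*U(theta), C = c*e^2

# First-order condition dZ/de = 0 reproduces eq. (6): e* = beta/(2c)
e_star = sp.solve(sp.diff(Z, e), e)[0]

# Eq. (7): implied level of CBI once ability is revealed
theta_star = e_star + Omega

# Comparative statics behind hypothesis 1: CBI rises with inflation
# aversion (beta) and ability (Omega), and falls with political costs (c)
assert sp.diff(theta_star, beta) == 1 / (2 * c)
assert sp.diff(theta_star, Omega) == 1
assert sp.simplify(sp.diff(theta_star, c) + beta / (2 * c**2)) == 0
```

The three assertions correspond directly to the signs claimed in hypothesis 1.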
In the next sections of this chapter, we provide an empirical framework to test this hypothesis using recently updated indices of CBI.
3.4 Peaks and Troughs in CBI: Data

In this section, we discuss the construction of the index of CBI used to document the evolution of CBI over the past four decades, as well as the set of explanatory variables used in our empirical investigation. We focus our analysis on the evolution of CBI in the post–Bretton Woods period. Prior to 1972, countries participating in this system were required to maintain the parity of their national currencies with their currency reserves, limiting the set of actions available to monetary policymakers. The collapse of this international monetary system led to the return to floating exchange rates, reintroducing room for the discretionary conduct of domestic monetary policies. We thus expect a higher number of central bank legislative reforms after 1972, when central banks started to move from being simple public agencies acting on behalf of the government to potential veto players against any inflationary pressure triggered by politicians.
3.4.1 Dependent Variable

The literature on CBI generally uses two different strategies to capture the degree of independence of a central bank: indices based on (a) central bank legislation (de jure) or (b) the turnover rate of the central bank governor (de facto). In this chapter, we focus on a legal measure of CBI for a series of reasons. First, economists argue that the mere adoption of a legal statute guaranteeing CBI dampens inflationary expectations in the economy (Polillo and Guillén 2005). Therefore, the adoption of a new law might have practical implications for the perceived level of independence and the credibility of a country's central bank. Second, these indices are preferred because they focus on specific claims contained in central bank statutes and, for this reason, are less biased by possible subjective judgments.
Third, even if de facto measures might be able to explain how CBI affects macroeconomic variables such as inflation or economic growth, their ability to explain how independence evolves over time is not clear. Finally, indices of legislative independence are favored over de facto measures, which associate the independence of the central bank with the autonomy of its governor, since the latter do not consider the independence of the other members of the board of directors and might thus over- or underestimate the degree of CBI. Indeed, nowadays the large majority of central banks implement their monetary policy using committees (Morris and Lybek 2004; Blinder 2008). Moreover, it is important to note that to obtain a more accurate turnover rate index, it might be necessary to analyze the reasons for the departure, before the end of his or her term, of the central bank governor. The measure of CBI employed in this chapter is the GMT index (from Grilli, Masciandaro, and Tabellini 1991). This index is calculated as the sum of central banks' fulfillment of fifteen different criteria and ranges from 0 (least independent) to 16 (most independent). Despite the wealth of research on CBI indices, the GMT index is still the only measure of de jure CBI that differentiates between political and economic CBI, as well as providing information on the involvement of the central bank in banking supervision. Moreover, the inclusion in the GMT index of nonstatutory factors that influence the degree of de facto independence, such as information on supervision, has been shown to strengthen its explanatory power (Maslowska 2011).18 The political independence index of Grilli, Masciandaro, and Tabellini (1991) is based on a binary code assigned to eight different characteristics that sum up the ability of monetary authorities to independently achieve the final goals of their policy.
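In mechanical terms, the aggregation behind such an index is simply a sum of binary criteria, normalized to the unit interval used in our tables and figures. A minimal sketch follows; the criterion names are illustrative placeholders, not the actual GMT items, which are detailed in appendix 3.1:

```python
# Sketch of a GMT-style de jure score: a sum of binary criteria
# normalized to [0, 1]. Criterion names are illustrative only.
def gmt_score(criteria, max_points=16):
    """Sum binary criteria (coded 0/1) and normalize by the maximum score."""
    if any(v not in (0, 1) for v in criteria.values()):
        raise ValueError("each criterion must be coded 0 or 1")
    return sum(criteria.values()) / max_points

example = {
    "governor_appointed_without_government": 1,
    "governor_term_over_five_years": 1,
    "no_mandatory_government_approval_of_policy": 0,
    "statutory_price_stability_objective": 1,
    "no_automatic_credit_to_government": 1,
    # ...remaining criteria omitted in this sketch
}
print(gmt_score(example))  # 4 of 16 points -> 0.25
```

A full implementation would code each statutory provision from the central bank charter at the date of each legislative reform.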
This index captures three main aspects of monetary policy institutions: the procedure for appointing the members of the central bank governing bodies, the relationship between these bodies and the government, and the formal responsibilities of the central bank. The economic independence index, on the other hand, summarizes the degree of independence of the central bank in choosing the set of instruments consistent with monetary policy. Its three main aspects concern the influence of the government in determining how much it can borrow from the central bank, the nature of the monetary instruments under the control of the central bank, and the degree of central bank involvement in banking supervision (see appendix 3.1 for the detailed structure of the index). We thus use this index to evaluate the evolution of CBI over the past decades. To that end, we need to compute the evolution of the level of CBI over time. Most research that studies CBI simply updates the information of the most commonly used indices at a particular point in time (Acemoglu et al., 2008). For example, the GMT index, first computed to capture the degree of CBI at the end of the 1980s, has been updated by Arnone et al. (2009) with the level of independence as of 2003. However, measuring the degree of CBI only at specific points in time limits our understanding of how central bank design has evolved over time. Since our interest rests in capturing these particular dynamics, we use the data presented by Romelli (2018), who recomputes the GMT index for each year in which a
reform in the legislation of the central bank took place over the period 1972–2014, for a sample of sixty-five countries. The list of countries and information on data availability are presented in appendix table 3.A.1. Figure 3.3 presents the evolution of the average degree of political and economic independence for our sample of countries. Figure 3.3 highlights several important trends. During the 1990s and early 2000s, a clear trend toward an increase in the level of both political and economic CBI is observed. The most striking feature appears in the early 1990s, where, together with a spike in the average inflation rate, we see a significant rise in the level of CBI. This period coincides with the breakup of the Soviet Union, which resulted in the inclusion in the sample of several economies experiencing high inflation. At the same time, these Eastern European and former Soviet countries implemented significant reforms in their monetary policy institutions between 1989 and 1995. This period also coincides with the EU integration process and the creation of the ECB, which required members to implement legislative reforms aimed at assigning their central banks a level of independence similar to that of the Bundesbank, the highest at the time. Further, in this period, the establishment of an independent central bank became an external prerequisite for maintaining or gaining access to the financial markets; that is, the political gains from changing the CBI level increased. It is interesting that these reforms in central bank institutional design took place in the years in which the academic literature began to look at CBI as a relevant research topic, as we have already noted in figure 3.1. From this point on, a transition to a period of low and more stable inflation followed. This, as clearly depicted in figure 3.3, corresponded to a leveling off in the degree of CBI.
This is in line with the theoretical arguments presented in section 3.3, where we argue that periods of low inflation aversion will correspond to a lower level of CBI.

Figure 3.3 The Evolution of the GMT Political and Economic Independence Indices for Sixty-five Analyzed Countries, 1972–2014. [Line chart, 1975–2015: the GMT economic and GMT political indices plotted against the left axis (average degree of CBI, 0–1) and the average inflation rate in percent plotted against the right axis.]
Furthermore, a clear reversal in the level of independence is noticeable following the GFC. This trend is, in fact, captured by the degree of economic independence (GMT economic in figure 3.3), since this index is the one capturing the evolution of central bank involvement in banking supervision, where most of the changes in central bank design took place. Interestingly, during the period 2008–2015, the number of articles and policy papers published on CBI first dropped and then started to increase again. Similar trends are present when looking at the subsamples of OECD (appendix figure 3.A.1) and non-OECD countries (appendix figure 3.A.2). Table 3.1 adds further relevant details on the evolution of these indices of CBI. We present the average level of the GMT index across the last four decades up to the start of the GFC. Looking at the mean of the GMT index (column 4), we clearly observe a systematic increase in its average level in every decade analyzed. This pattern is consistent whether we look separately at the political or economic GMT measures or split the sample of countries into OECD or non-OECD members. Yet this increasing trend is reversed after 2007 for the non-OECD countries for the level of economic independence, which also includes information on the involvement of the central bank in banking supervision.19 Similar evidence can be found when splitting the sample between inflation-targeting and non-inflation-targeting countries, where the trend is reversed after 2007 for the second group of countries (see appendix table 3.A.2).
3.4.2 Explanatory Variables

The choice of the main explanatory variables reflects the theoretical model presented in section 3.3. The empirical implication of this model, summarized in hypothesis 1, relates the optimal level of CBI to the population's aversion to inflation, the ability of the policymakers, and the likelihood that the country has experienced a series of shocks that bear political costs. The empirical proxies of these variables are the following. First, to capture the inflationary pressures present in a country, we consider an indicator variable that signals whether an episode of hyperinflation (inflation rates higher than 40 percent) has occurred in a country in the previous five years. Second, we consider five types of shocks that might influence the policymaker's cost function and hence the level of CBI: political, unemployment, fiscal, financial, and foreign exchange.

(a) Political shocks. We assume that countries with a higher probability of political shocks or that are more unstable politically will tend to be characterized by a lower degree of independence. For example, Cukierman and Webb (1995) show that less independent central banks tend to experience higher turnover rates of the central bank governor just after government changes, indicating, therefore, that de facto independence is lower in less stable political systems. In our case, we measure the level of political instability by looking at the government stability variable proposed by the International Country Risk Guide (ICRG) rating of the Political Risk Services (PRS) group. This variable measures both the government's ability to
Table 3.1 Summary Statistics on the Evolution of the GMT Independence Index

All countries
                No. of     No. of               GMT                       GMT Political                  GMT Economic
Period          countries  observations  Mean    Min.    Max.       Mean    Min.    Max.       Mean    Min.    Max.
1972–1979       40         319           0.377   0.125   0.750      0.371   0.000   0.875      0.382   0.000   1.000
1980–1989       43         425           0.381   0.063   0.750      0.366   0.000   0.875      0.396   0.000   1.000
1990–1999       63         575           0.499   0.063   1.000      0.515   0.000   1.000      0.482   0.000   1.000
2000–2006       65         447           0.659   0.125   1.000      0.672   0.125   1.000      0.645   0.125   1.000
2007            65         65            0.678   0.125   1.000      0.685   0.125   1.000      0.671   0.125   1.000
2008–2014       65         455           0.686   0.125   1.000      0.702   0.125   1.000      0.671   0.125   1.000

OECD countries
1972–1979       23         183           0.432   0.125   0.750      0.433   0.000   0.875      0.430   0.000   1.000
1980–1989       24         237           0.427   0.063   0.750      0.432   0.000   0.875      0.422   0.000   1.000
1990–1999       29         262           0.534   0.063   1.000      0.536   0.000   1.000      0.533   0.000   1.000
2000–2006       30         210           0.751   0.125   1.000      0.776   0.125   1.000      0.726   0.125   1.000
2007            30         30            0.763   0.125   1.000      0.783   0.125   1.000      0.742   0.125   1.000
2008–2014       33         225           0.768   0.125   1.000      0.791   0.125   1.000      0.746   0.125   1.000

Non-OECD countries
1972–1979       18         136           0.303   0.125   0.500      0.289   0.125   0.625      0.318   0.000   1.000
1980–1989       19         188           0.323   0.125   0.688      0.283   0.125   0.625      0.364   0.000   1.000
1990–1999       38         313           0.469   0.125   0.938      0.498   0.125   1.000      0.440   0.125   1.000
2000–2006       35         237           0.578   0.188   0.938      0.581   0.125   1.000      0.574   0.125   1.000
2007            35         35            0.605   0.188   0.938      0.600   0.125   1.000      0.611   0.125   1.000
2008–2014       32         230           0.606   0.188   1.000      0.614   0.125   1.000      0.598   0.125   1.000

Note: This table provides summary statistics for the aggregated GMT index of central bank independence (GMT), as well as for the subindices of political and economic independence. The sample of OECD and non-OECD member countries varies over time, based on the year in which a specific country joined the OECD.
carry out its declared program(s) and its ability to stay in office and is also included among the indicators of the Worldwide Governance Indicators (WGI) project of the World Bank.

(b) Unemployment shocks. Eijffinger and Schaling (1995) study the link between unemployment and CBI and show that a higher natural rate of unemployment is associated with a higher degree of CBI. In this chapter, we are interested in possible unemployment shocks, which we capture by looking at the five-year volatility of the unemployment rate. This reflects any pressure policymakers face to reduce and stabilize unemployment in the country by using the inflation tax.

(c) Fiscal shocks. In the aftermath of the GFC, and particularly in the euro area countries, significant research has been devoted to the link between fiscal shocks and the level of debt to GDP. For example, Nickel and Tudyka (2014) suggest that the cumulative effect of fiscal stimulus on real GDP is positive at moderate debt-to-GDP ratios but turns negative as the ratio increases. Given that a sovereign debt crisis will influence the government's ability to react to fiscal shocks and monetize debt, we use a sovereign debt crisis dummy that signals the occurrence of a systemic debt crisis in the country in the last five years. The dates of the crises come from Laeven and Valencia (2013).

(d) Financial shocks. Concerning financial shocks, we look at the occurrence of past financial crises to assess the potential effect of instability in the financial sector on the level of CBI adopted in a country. We proxy this through a crisis dummy that signals the presence of a systemic banking crisis in the last two years. The dates of the crises come from Laeven and Valencia (2013).

(e) Exchange-rate shocks. We capture the occurrence of exchange-rate shocks by looking at a dummy that signals the presence of a systemic currency crisis in the last two years.
These data are also obtained from Laeven and Valencia (2013). The authors define a currency crisis as a nominal depreciation of the currency vis-à-vis the US dollar of at least 30 percent, which is also at least 10 percentage points higher than the rate of depreciation in the previous year. Third, conceptualizing policymakers' ability is a challenging empirical endeavor. However, it is reasonable to assume that this ability is exogenous to the other set of independent variables, as the policymaker who sets the optimal level of CBI is likely to be different from the one who has experienced the past shocks discussed above. This implies that this exogenous variable will be captured in the error term of the econometric model, with limited concerns about the validity of the specification.20 Finally, we also include, in all estimations, a set of additional control variables. In particular, given the evolution of CBI indices toward higher levels of independence as depicted in figure 3.3, we expect path dependence and control for previous levels of CBI. In addition, we include a dummy variable that takes the value of one for all countries adopting an inflation-targeting regime, as the adoption of an inflation-targeting regime might induce further increases in the level of independence of the central bank from the executive branch. Given that institutional characteristics might also
influence the degree of CBI of a country, we also include a dummy variable that captures the country's legal origin, as well as participation in a monetary union such as the euro area. Finally, we include two measures capturing the degree of a country's openness to trade and its level of development, as measured by its real GDP per capita.
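The shock proxies described above follow simple mechanical rules (crisis windows and rolling volatility). A sketch of their construction on an annual country-year panel might look as follows; the column names, the helper function, and the exact window conventions are assumptions for illustration, while the thresholds follow the definitions cited in the text:

```python
import pandas as pd

# Sketch of the explanatory-variable construction described above.
# Columns (country, year, depreciation, inflation, unemp) are hypothetical.
def build_regressors(df):
    df = df.sort_values(["country", "year"]).copy()

    # Laeven-Valencia-style currency crisis: depreciation vs. the US dollar
    # of at least 30% that is also at least 10 points above last year's rate
    prev = df.groupby("country")["depreciation"].shift(1)
    crisis = ((df["depreciation"] >= 30)
              & (df["depreciation"] - prev >= 10)).astype(int)

    # dummy = 1 if a currency crisis occurred in the last two years
    df["currency_crisis_dummy"] = crisis.groupby(df["country"]).transform(
        lambda s: s.rolling(2, min_periods=1).max()).astype(int)

    # inflation crisis: annual inflation above 40% within the last five years
    high_inf = (df["inflation"] > 40).astype(int)
    df["inflation_crisis_dummy"] = high_inf.groupby(df["country"]).transform(
        lambda s: s.rolling(5, min_periods=1).max()).astype(int)

    # five-year rolling volatility of the unemployment rate
    df["unemp_volatility"] = df.groupby("country")["unemp"].transform(
        lambda s: s.rolling(5, min_periods=2).std())
    return df
```

On a toy panel, a 35 percent depreciation in a year following a 5 percent one flags a currency crisis in that year and keeps the two-year dummy at one in the next.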
3.5 Peaks and Troughs in CBI: Empirics

This section tests empirically the main implication of the stylized model presented in section 3.3 by providing evidence on the factors that can influence the endogenous determination of the level of CBI for a large set of countries. Given the dynamic index of CBI employed, we perform a series of panel as well as cross-sectional estimations. The first set of results is presented in table 3.2. Column 1 presents the panel estimations for the entire sample of sixty-five countries during the period 1972–2014 for which the evolution of the GMT index is computed in Romelli (2018). In columns 2 and 3, we split the full sample by focusing on the sets of OECD and non-OECD countries, respectively. Our results first confirm the high correlation between current and past levels of CBI suggested also in figure 3.3. The coefficient of the lagged value of CBI is always statistically significant at 1 percent, and it ranges from 0.94 to 0.98. This suggests that countries with high levels of CBI in the past are likely to maintain them. Next, the coefficient of the proxy of inflation aversion is significant over the entire sample, particularly for the set of non-OECD countries. The positive sign of this coefficient indicates how countries that experienced periods of high inflation in the past will be characterized by a higher inflation aversion, which will constrain the government to assign a higher degree of independence to its central bank. Turning to the set of variables that aims to capture the different shocks discussed in our stylized model, we highlight several interesting patterns. First, columns 1 and 2 show a positive correlation between government stability and legal CBI. Therefore, more developed and politically stable countries are characterized by a higher degree of CBI.
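To make the structure of such a specification concrete, a stripped-down dynamic regression can be run with ordinary least squares on simulated data. The sketch below is purely illustrative: variable names and coefficient values are assumptions, not the chapter's estimates, and the chapter's actual estimator may differ.

```python
# Stylized dynamic regression in the spirit of table 3.2, on simulated
# data: current CBI on lagged CBI plus shock proxies. All names and
# coefficients are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(0)
n = 500
gmt_lag = rng.uniform(0, 1, n)        # GMT (t-1)
infl_crisis = rng.integers(0, 2, n)   # inflation crises dummy
unemp_vol = rng.uniform(0, 3, n)      # unemployment-rate volatility
debt_crisis = rng.integers(0, 2, n)   # systemic debt crises dummy

# assumed data-generating process for the illustration
y = (0.95 * gmt_lag + 0.02 * infl_crisis + 0.004 * unemp_vol
     - 0.02 * debt_crisis + rng.normal(0, 0.02, n))

# OLS with a constant; country fixed effects and robust standard errors
# would be the natural additions for an actual dynamic panel
X = np.column_stack([np.ones(n), gmt_lag, infl_crisis, unemp_vol, debt_crisis])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(beta_hat, 3))  # close to [0, 0.95, 0.02, 0.004, -0.02]
```

With a persistence coefficient near one, as in table 3.2, the lagged level of CBI dominates the fit, which is exactly the path dependence discussed in the text.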
These results are in line with those of Cukierman and Webb (1995), who look at the turnover rate of the central bank governor and find that the independence of the central bank is lower during political changes. Second, while inflation crises did not have a statistically significant impact on the set of OECD countries, we do find a positive and statistically significant coefficient for the measure of unemployment-rate volatility across all specifications. Thus, similar to previous studies such as Cukierman (1994), these results suggest that governments in countries experiencing more volatile unemployment rates will have a higher incentive to introduce inflationary surprises. Since the public is well aware of this, the benefit of granting a higher degree of CBI will be higher for these countries. Moreover, we find a negative and statistically significant coefficient for the dummy capturing systemic debt crises in the entire sample and in the subsample of non-OECD countries. These results suggest that countries that experienced episodes of government debt crisis are associated with
Table 3.2 Endogenous Central Bank Independence

                                         1972–2014                              2000–2014
                              (1)         (2)         (3)          (4)         (5)         (6)
                              All         OECD        Non-OECD     All         OECD        Non-OECD
GMT (t-1)                     0.9564***   0.9453***   0.9679***    0.9916***   0.9830***   0.9947***
                              (0.012)     (0.017)     (0.018)      (0.006)     (0.011)     (0.008)
Inflation crises dummy        0.0173*     0.0085      0.0276**     0.0194      0.0623***   0.0013
                              (0.010)     (0.006)     (0.014)      (0.015)     (0.010)     (0.009)
Government stability (t-1)    0.0032***   0.0053***   0.0007       0.0023***   0.0024**    0.0023*
                              (0.001)     (0.001)     (0.001)      (0.001)     (0.001)     (0.001)
Unemployment-rate volatility  0.0042***   0.0040*     0.0037*      0.0023*     0.0049**    0.0009
                              (0.001)     (0.002)     (0.002)      (0.001)     (0.002)     (0.002)
Systemic debt crises dummy    −0.0178***  0.0001      −0.0317***   −0.0126     0.0001      −0.0151
                              (0.006)     (0.005)     (0.011)      (0.009)     (0.001)     (0.010)
Financial crises dummy        0.0018      0.0010      0.0096       0.0004      −0.0013     0.0025
                              (0.006)     (0.007)     (0.009)      (0.003)     (0.004)     (0.007)
Currency crises dummy         −0.0055     −0.0204**   0.0042       −0.0151**   −0.0529***  −0.0062
                              (0.008)     (0.009)     (0.012)      (0.007)     (0.016)     (0.004)
Common-law legal system       −0.0170***  −0.0180***  −0.0190*     −0.0096**   −0.0133**   −0.0113*
                              (0.005)     (0.006)     (0.012)      (0.004)     (0.006)     (0.006)
Euro area dummy               0.0289***   0.0320***   0.0057       −0.0034     −0.0041     −0.0035
                              (0.008)     (0.011)     (0.008)      (0.003)     (0.004)     (0.005)
Inflation-targeting regime    0.0057      0.0041      0.0030       0.0040      0.0035      0.0015
                              (0.005)     (0.003)     (0.005)      (0.003)     (0.005)     (0.004)
Real GDP per capita (t-1)     −0.0001     −0.0001     −0.0001      −0.0001     −0.0001     −0.0001
                              (0.001)     (0.001)     (0.001)      (0.001)     (0.001)     (0.001)
Openness to trade (t-1)       0.0001      0.0001      0.0001*      0.0001*     0.0001      0.0001*
                              (0.001)     (0.001)     (0.001)      (0.001)     (0.001)     (0.001)
Number of observations        1,196       768         428          681         397         284
Number of countries           55          28          33           55          33          25

Note: The dependent variable is the level of the GMT index. GMT (t-1) is the lagged value of the GMT index of CBI. The inflation crises, systemic debt crises, financial crises, and currency crises dummies take the value of one if a country has experienced an inflation, fiscal, financial, or currency crisis, respectively, in the previous two years. Government stability is the lagged value of the government stability variable from the International Country Risk Guide (ICRG). Unemployment-rate volatility measures the standard deviation of the unemployment rate in the previous years. Controls include dummies for the country's legal origin, euro area membership, and inflation-targeting countries, as well as the lagged values of real GDP per capita and openness to trade. Decade dummies and constant terms are included but not reported. Columns 1 and 4 provide information for the full set of countries, columns 2 and 5 focus on the subsample of OECD countries, and columns 3 and 6 analyze the set of non-OECD countries. Robust standard errors are in parentheses. * denotes significance at the 10 percent level, ** at the 5 percent level, *** at the 1 percent level.
a lower degree of CBI. Indeed, in these countries the incentive to monetize sovereign debt might be higher; policymakers might therefore assign a lower level of independence to their central banks. Interestingly, past financial crises have no statistically significant impact on current levels of CBI, while currency crises are statistically significant only in the subsample of OECD countries. As already shown in figure 3.3, the period from the 1970s until the late 1990s was characterized by high average levels of inflation. During this period, it is reasonable to assume that society's inflation aversion was the main driver of reforms toward higher levels of CBI. Yet the second half of our sample period was generally characterized by lower and more stable inflation rates, suggesting a weaker effect of this driver of central bank design. This motivates our strategy of focusing on a more recent period. Indeed, when we restrict the sample period to the years 2000–2014, the significance of the inflation crisis dummy vanishes. However, in the absence of inflation aversion, governments might be more concerned about other types of shocks. In columns 4 and 5, the coefficient of unemployment-rate volatility is positive and significant, suggesting that countries characterized by a higher probability of unemployment-rate shocks tend to have more independent central banks. Furthermore, for this restricted time frame, we also find a strongly significant and negative impact of currency crisis episodes. This indicates that governments facing exchange-rate shocks will be more likely to favor a lower level of CBI. Previous research has considered pegged exchange rates as an alternative to CBI (Crowe and Meade 2008).
Therefore, these results suggest that countries experiencing an exchange-rate crisis might have to abandon their fixed exchange-rate regime and, as a consequence, revisit the degree of CBI of the country. In addition to these main results, we also find evidence that countries that are members of the euro area currency union enjoy a greater degree of independence. Indeed, since its creation, the ECB has been characterized by one of the highest degrees of legal independence, and this effect is mainly captured over the full sample period (columns 1 and 2). Furthermore, structural characteristics such as countries' legal origin also seem to matter. The negative relationship between the common-law dummy and CBI suggests that countries in common-law jurisdictions have a lower degree of independence. This result is mainly driven by the fact that common-law countries, such as the United States and the United Kingdom, tend to have central banks that are less independent with respect to their freedom to lend to the government. As a matter of fact, the less binding constraints on lending to the government facilitated the adoption of QE policies immediately after the onset of the GFC. Interestingly, the inflation-targeting indicator is not statistically different from zero. This result might be influenced by the sample of countries included in our study. To investigate this issue in more detail, in table 3.3 we replicate our analysis by focusing on the subsample of inflation-targeting countries (columns 1 and 4) and non-inflation-targeting countries (columns 2 and 5), as well as on a sample of countries not belonging to the European Union. The results presented in table 3.3 show, once again, a positive and
Table 3.3 Endogenous Central Bank Independence (Robustness)

[Table 3.3 reports six robustness specifications: columns 1–3 cover 1972–2014 and columns 4–6 cover 2000–2014. The regressors are GMT (t-1), inflation crises dummy, government stability (t-1), unemployment-rate volatility, systemic debt crises dummy, financial crises dummy, currency crises dummy, common-law legal system, euro area dummy, inflation-targeting regime, real GDP per capita (t-1), and openness to trade (t-1); the number of observations and countries is reported for each column. The coefficient on GMT (t-1) is large and highly significant in every column (estimates range from 0.9282*** to 0.9984***); the remaining cell-by-cell layout of the table could not be recovered from the source extraction.]

Note: The dependent variable is the level of the GMT index. GMT (t-1) is the lagged value of the GMT index of CBI. Inflation crises dummy, systemic debt crises dummy, financial crises dummy, and currency crises dummy are dummy variables that take the value of one if a country has experienced an inflation, fiscal, financial, or currency crisis, respectively, in the previous two years. Government stability is the measure of the lagged value of the government stability variable from the International Country Risk Guide (ICRG). Unemployment-rate volatility measures the standard deviation of the unemployment rate in the previous years. Controls include dummies for the country's legal origin, euro area and inflation-targeting countries, as well as the lagged value of real GDP per capita and openness to trade. Decade dummies and constant terms are included but not reported. Columns 1 and 4 provide information for the full set of countries, columns 2 and 5 focus on the subsample of inflation-targeting countries, and columns 3 and 6 analyze the set of non-inflation-targeting countries. Robust standard errors are in parentheses. * denotes significance at a 10 percent level, ** denotes significance at a 5 percent level, *** denotes significance at a 1 percent level.
statistically significant coefficient for both the previous degree of CBI and the measure of political stability. This confirms the idea that CBI is path-dependent and that more stable governments are associated with higher levels of CBI. Interestingly, the dummy capturing the occurrence of inflation crises in the previous period is positive and statistically significant in columns 3 and 6, that is, the sample of non-European Union countries. For this subsample of countries, we also find that inflation-targeting countries are characterized by a higher degree of CBI. So far, in line with the theoretical arguments in section 3.3, our analysis has focused on the determinants of a country's level of CBI. However, the shocks that might influence the degree of CBI might also drive the magnitude of reforms in central bank institutional design. For example, Crowe and Meade (2008) show that over the period 1990–2003, greater changes in independence occurred in countries originally characterized by lower levels of independence and higher inflation. Similarly, Masciandaro and Romelli (2015) show that changes in CBI during the period 2007–2012 are associated with the occurrence of macroeconomic shocks following the GFC. In table 3.4, we extend Crowe and Meade's (2008) analysis by focusing on the determinants of changes in CBI. In particular, taking advantage of the dynamic index of CBI, we are able to identify the determinants of changes across different decades. To do so, we compute changes in the degree of CBI in a panel of nonoverlapping ten-year-period observations; that is, we compute the change in CBI every ten years, in 1982, 1992, 2002, and 2012. Table 3.4 presents the estimates of the determinants of the change in the GMT index between different decades.
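The nonoverlapping ten-year changes described above are mechanical to compute: keep the index value at each decade endpoint and difference within country. The sketch below illustrates this with an invented two-country panel and made-up GMT scores; the chapter's actual data come from the dynamic GMT index for sixty-five countries.

```python
# Sketch: nonoverlapping ten-year changes in a CBI index, measured at
# 1982, 1992, 2002, and 2012 as in the text. The panel is illustrative.
import pandas as pd

# Toy panel: one row per country-year with a GMT score in [0, 1].
panel = pd.DataFrame({
    "country": ["A"] * 5 + ["B"] * 5,
    "year":    [1972, 1982, 1992, 2002, 2012] * 2,
    "gmt":     [0.30, 0.35, 0.55, 0.70, 0.65,
                0.20, 0.20, 0.40, 0.60, 0.70],
})

decade_ends = [1982, 1992, 2002, 2012]
obs = panel[panel["year"].isin(decade_ends + [1972])].sort_values(["country", "year"])
# Change over each nonoverlapping decade: GMT(t) - GMT(t-10).
obs["d_gmt"] = obs.groupby("country")["gmt"].diff()
changes = obs.dropna(subset=["d_gmt"])[["country", "year", "d_gmt"]]
print(changes)
```

Each country contributes at most four decade observations, matching the panel structure behind table 3.4.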
Similar to Crowe and Meade (2008), we assume that changes in CBI might be explained by the independence of the central bank in the previous decade, as well as by the country's level of inflation, openness, and GDP per capita. Importantly, we also include the different measures of macroeconomic shocks discussed in the theoretical model and in tables 3.2 and 3.3. The results presented in table 3.4 show that, across all specifications, changes in CBI are more likely in countries characterized by lower initial degrees of CBI. Contrary to Crowe and Meade (2008), we do not find evidence that reforms are associated with lower initial levels of the inflation rate (inflation 0). Given the different time horizon of this cross-sectional analysis, the macroeconomic shocks discussed in sections 3.3 and 3.4 are now computed over a ten-year horizon. Therefore, the dummies capturing the different shocks take a value equal to one if an inflation, fiscal, financial, or exchange-rate shock took place in the past ten years. Similarly, unemployment-rate volatility is measured over the same longer horizon, while political shocks are computed by looking at changes in the country's level of democracy over the last decade. We find that the coefficients of inflation, unemployment, fiscal, and exchange-rate shocks are not significantly different from zero, while the coefficients of political and financial crisis shocks are statistically significant across all specifications. In particular, we provide evidence that changes in CBI are associated with improvements in a country's level of democracy. Finally, the negative and statistically significant coefficient of the financial crises dummy suggests that countries experiencing financial crises are more likely to decrease the degree of independence of their central bank. We can therefore conclude that financial crises might not impact the
Table 3.4 Determinants of the Changes in Central Bank Independence, 1972–2012

                                            1972–2012               2000–2012
                                        (1)         (2)         (3)         (4)
GMT 0                              −11.5522*   −11.4878*   −15.0613*   −14.9129*
                                     (6.750)     (6.743)     (8.763)     (8.891)
Inflation 0                          −0.2556      3.4116     −0.7591     −2.9706
                                     (0.956)     (9.486)     (1.361)     (8.269)
Inflation crises dummy               −5.4796     −6.0456     −6.7206     −6.1219
                                    (11.510)    (11.441)    (13.049)    (13.007)
Unemployment-rate volatility          1.3225      1.2772      1.8917      1.9425
                                     (1.230)     (1.266)     (1.162)     (1.209)
Political shock (change in polity)    0.0092*     0.0100*     0.0128**    0.0121*
                                     (0.005)     (0.006)     (0.006)     (0.007)
Systemic debt crises dummy           −3.7055     −3.8407     −9.0799     −9.7496
                                    (10.900)    (10.322)    (10.305)    (10.529)
Financial crises dummy               −6.4894*    −6.6892*    −9.7100*    −9.6002*
                                     (3.707)     (3.640)     (5.532)     (5.416)
Currency crises dummy                −0.9901     −1.4321     −0.9911     −0.5329
                                     (7.300)     (7.450)    (11.374)    (12.292)
Real GDP per capita 0                 0.0002      0.0002*     0.0002      0.0002
                                     (0.000)     (0.000)     (0.000)     (0.000)
Openness to trade 0                  −0.0255     −0.0258     −0.0306     −0.0301
                                     (0.024)     (0.024)     (0.030)     (0.030)
Inflation 0 * Polity 0                           −0.2376                  0.1404
                                                 (0.568)                 (0.458)
Number of observations                   119         119          90          90
Number of countries                       52          52          52          52

Note: The dependent variable is the change in the level of the GMT index in a decade. GMT 0 is the value of the GMT index of CBI at the beginning of the decade. Inflation 0 is the value of the inflation rate at the beginning of the decade. Inflation crises dummy, systemic debt crises dummy, financial crises dummy, and currency crises dummy are dummy variables that take the value of one if a country has experienced an inflation, fiscal, financial, or currency crisis, respectively, in the previous ten years. Political shock captures the change in the country's level of democracy in the decade, measured using the polity index. Unemployment-rate volatility measures the standard deviation of unemployment rates in the previous ten years. Controls include the lagged value of real GDP per capita and openness to trade at the beginning of the decade. Decade dummies and constant terms are included but not reported. Columns 1 and 2 provide information for the period 1972–2012, and columns 3 and 4 focus on the 2000–2012 period. Robust standard errors are in parentheses. * denotes significance at a 10 percent level, ** denotes significance at a 5 percent level.
overall level of CBI, but the occurrence of crises might stimulate changes to the degree of CBI, as happened after the recent crisis.
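The crisis regressors used throughout tables 3.2–3.4 share one mechanical ingredient: an indicator that switches on when a crisis of a given type occurred within a trailing window (two years in tables 3.2 and 3.3, ten years in table 3.4). A minimal sketch of that construction follows; the crisis dates in `crisis_years` are invented purely for illustration.

```python
# Sketch: trailing-window crisis dummies. A value of 1 at year t means a
# crisis occurred in the previous `window` years (t excluded).
import pandas as pd

years = list(range(2000, 2015))
crisis_years = {2003, 2009}  # hypothetical crisis dates for one country

df = pd.DataFrame({"year": years})
df["crisis"] = df["year"].isin(crisis_years).astype(int)

def trailing_dummy(s, window):
    # Shift by one so year t looks only at t-1, ..., t-window.
    return s.shift(1).rolling(window, min_periods=1).max().fillna(0).astype(int)

df["crisis_prev2"] = trailing_dummy(df["crisis"], 2)    # tables 3.2-3.3 style
df["crisis_prev10"] = trailing_dummy(df["crisis"], 10)  # table 3.4 style
print(df)
```

With these dates, a 2003 crisis turns the two-year dummy on in 2004–2005 only, while the ten-year dummy stays on through 2013, which is exactly why the decade-level analysis picks up shocks the annual panel may have already "forgotten."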
3.6 Conclusion

Following the 2008 GFC, central bankers have not only extensively used unconventional monetary policy tools but have also acquired deeper supervisory powers over banking and financial intermediaries. Monetary activism, coupled with a higher degree of involvement in banking supervision, reopened the debate on the optimal degree of CBI and, consequently, the opportunity to reconsider the institutional setting that governs the relationship between incumbent governments and bureaucratic monetary policymakers. Policymakers around the world are considering whether to reshape their central banks' institutional design by amending the degree of independence. A few recent relevant episodes in this debate include the ECB's highly independent monetary policy stance, which has been harshly criticized for its hawkish attitude by some and blamed for its dovishness by others. Similarly, in the United States, an intense political debate originated from the monetary strategy that the Federal Reserve System designed and implemented to address the 2008 financial turmoil and the economic stagnation that followed. Recent debates also include the highly controversial idea of "helicopter money" to finance fiscal stimulus with newly printed money, which undermines the long history of institutionalizing independent monetary policymaking (Galí 2014; Economist 2016). Overall, this crisis has challenged the consolidated setting of central banking. A natural question therefore arises: how can we explain these cycles in CBI, that is, the possibility that peaks and troughs occur in the degree of independence of central banks?
This chapter proposes a political economy framework of delegation in which a policymaker's choice of the optimal central bank design is conditional on the economic and institutional environment existing at a given time, which, in turn, determines the political weights put on the pros and cons of assigning the central bank more independence. Our framework, as already stated, is based on two main assumptions. First, we assume that incumbent politicians weigh the gains and losses of maintaining or reforming a central bank governance setting according to their own preferences. Second, policymakers are politicians and, as such, are held accountable at elections for how well they have managed to please voters. As a result, policymakers also weigh the costs and benefits of setting a level of CBI given the preferences of voters. This simple setup gives us an easily testable empirical framework that can explain the different levels of CBI implemented across countries and over time. We argue that the level of CBI in a given country is likely to be related to a society's aversion to inflation, as well as to a series of macroeconomic shocks that bear political costs for the incumbent policymaker.
We test the main implications of our framework using recently updated indices of CBI for a sample of sixty-five countries over the last four decades. First, we find that legacy matters; that is, high levels of CBI are likely to be maintained. Similarly, a high level of social aversion to the use of an inflation tax reduces the political incentives of the incumbent government to change the central bank regime in order to gain more freedom to address macroeconomic shocks via monetization. At the same time, we show that in periods of low inflation, in particular during 2000–2014, CBI is more closely related to other macroeconomic shocks, such as political, labor market, or currency shocks. In this respect, our analysis can also offer a rule of thumb for evaluating the current debate over the need, or the opportunity, to change central banks' governance. The general suggestion of this chapter is that maintaining the effectiveness of the central bank as a long-sighted monetary veto player against political pressures toward a short-sighted accommodation of macroeconomic shocks can depend on a series of country characteristics and experiences, which can shape the optimal design of central banks over time.
Notes

1. Data were obtained from an SSRN and JSTOR search on papers containing "central bank independence" in their titles. If we further expand our focus to all academic research containing the heading "central bank independence" in the text using Google Scholar, we find around sixteen thousand works between 1991 and 2015.
2. While the Dodd-Frank Act increased the Fed's responsibilities in supervision, it nonetheless kept unchanged the number of regulators in the US system and limited the Fed's freedom to use section 13(3) by requiring it to obtain approval from the Treasury secretary prior to extending emergency credit to financial institutions.
3. Economic theory does not have a clear answer on the optimality of assigning supervisory roles to central banks or other independent institutions. For instance, Masciandaro and Quintyn (2015) discuss two conflicting views regarding the merger of monetary and supervisory functions inside the central bank. The view that favors the integration of the two functions highlights the informational advantages and economies of scale derived from bringing all functions under the authority of the central bank (Peek, Rosengren, and Tootell 1999; Bernanke 2007). Alternatively, a separation argument highlights the higher risk of policy failures if central banks have supervisory responsibilities, as financial stability concerns might impede the implementation of optimal monetary policies (Goodhart and Schoenmaker 1995; Ioannidou 2005). The empirical literature that has investigated the relative merits of assigning banking-sector supervision to central banks is also mixed. As a consequence, throughout this chapter, we adopt a prudential approach by supporting the separation view and assuming that supervisory roles can, at times, be in conflict with the goal of price stability, hence lowering the independence of monetary policy authorities.
This is also in line with the Grilli, Masciandaro, and Tabellini (1991) index of CBI employed in this chapter.
4. See Bernanke 2013a on the gains in having a long-sighted independent central banker versus a short-sighted politician.
5. See Cukierman 1998; Cukierman 2008; Eijffinger and de Haan 1996; Alesina and Stella 2010; and de Haan and Eijffinger 2016 for excellent reviews of the time-inconsistency literature. Furthermore, Goodfriend 2007 reviews how consensus on monetary policy was reached, while Goodfriend 2012 discusses how the concept of CBI emerged first under the gold standard and later with fiat money. Barro and Gordon 1983, Backus and Driffill 1985, and Lohmann 1997 focus on how the "rules of the game" influence the outcomes of overall macroeconomic policy, while Sargent and Wallace 1984, Niemann 2011, Niemann, Pichler, and Sorger 2013, Martin 2015, Burdekin and Laney 2016, and Reis 2016 focus their attention on fiscal policy.
6. See Berger, de Haan, and Eijffinger 2001 and Arnone, Laurens, and Segalotto 2006 for an overview. On the historical alternative models of independent and dependent monetary authorities, see Vicarelli et al. 1988 and Wood 2008. Country studies include Waller 2011; Bernanke 2013b; and Gorton and Metrick 2013 on the US Fed; Goodfriend 2012 on the Bank of England; Issing 2005b and Beyer et al. 2008 on the Bundesbank; and Gaiotti and Secchi 2012 on the Bank of Italy.
7. Other analyses include Eijffinger and Hoeberichts 1998 and 2008, McCallum 1995, and Fischer 1995 on the relationship between CBI and central banker conservativeness; and Niemann 2011 on monetary conservativeness and fiscal policy. Furthermore, Eijffinger and Geraats 2006 and Hughes Hallett and Libich 2006 focus on transparency, and Cukierman and Meltzer 1986, Goodfriend 1986, Issing 2005a, and Blinder et al. 2008 discuss central bank communication. Over time, the relationship between independence and accountability has become the core focus of so-called central bank governance (Briault, Haldane, and King 1996; Morris and Lybek 2004; Frisell, Roszbach, and Spagnolo 2008; Crowe and Meade 2008; Hasan and Mester 2008; Khan 2016).
Central bank governance became the institutional setting for implementing day-by-day monetary policy: given the long-run goal of avoiding the risk of inflation, the modern central banker can also smooth real business cycles using monetary policy rules (Henderson and McKibbin 1993; Persson and Tabellini 1993; Taylor 1993; Bernanke and Gertler 1995; Walsh 1995a; Svensson 1997; Clarida, Galí, and Gertler 1999; Galí and Monacelli 2005). Therefore, monetary policy becomes the final outcome of a complex interaction among three main components: monetary institutions, central banker preferences, and policy rules.
8. Other recent contributions and policy debates that discuss the issue of CBI after the GFC include Cecchetti 2013; Bayoumi et al. 2014; Buiter 2014; Kireyev 2015; Fischer 2015; Williams 2015; Balls, Howat, and Stansbury 2016; Aklin and Kern 2016; den Haan et al. 2017; and de Haan and Eijffinger 2017.
9. Excellent references on how central banks' policies and their institutional settings have changed, as well as on the causes of these changes, are Siklos 2002 and Siklos, Bohl, and Wohar 2010.
10. See Maxfield 1997. Posen 1995, stressing the distributive consequences in the choice of a monetary regime, notes that there is no reason to assume that the adoption of CBI is self-enforcing, that this choice requires political support, and that the financial sector is positioned to provide that support (see also de Haan and van 't Hag 1995). On the relationships between financial sector preferences, low inflation, and CBI, see also Miller 1998, which provides an interest-group theory of CBI.
11. On these issues, see Moser 1999. Vaubel 1997 suggests that central banks, even if formally independent, can be captured, while Sieg 1997 proposes a formal model of a captured independent central bank. Bernhard 1998 claims that information asymmetries of the
monetary policy process can create conflicts between government ministers, their backbench legislators, and, in multiparty governments, their coalition partners; an independent central bank can help overcome these conflicts. See also Alesina and Sachs 1988; Alesina 1989; Goodman 1991; Milesi-Ferretti 1995; Lohmann 1997; Bagheri and Habibi 1998; Banaian and Luksetich 2001; Keefer and Stasavage 2003.
12. See Berger 1997 and Berger, de Haan, and Eijffinger 2001. Hayo 1998 claims that people's preferences with respect to price stability matter in explaining low inflation rates and that CBI is just one aspect of a stability regime, with two competing interpretations of the role of the institutional design: a preference-instrument interpretation versus a historical-feedback interpretation. Franzese 1999 claims that the effectiveness of CBI depends on every variable in the broader political-economic environment. In Eggertsson and Le Borgne 2010, society—with all agents having homogeneous preferences—determines CBI by solving a delegation problem with a trade-off between costs and benefits. Crowe 2008 demonstrated that CBI is more likely to occur in societies where preferences over different policy dimensions, one of which is monetary policy, are heterogeneous. See also Eijffinger and Stadhouders 2003; Quintyn and Gollwitzer 2010; Hielscher and Markwardt 2012; Berggren, Daunfeldt, and Hellström 2014.
13. This model is a simplified version of the model presented in Masciandaro and Romelli 2015.
14.
The government's use of this inflation tax, or political bias, can stem from many different factors, including the advantages brought by an inflationary policy (partisan bias), such as increasing the employment level (employment bias); an incentive to use monetary policy to make the costs of fiscal policies less onerous in economic or political terms (fiscal bias); a temptation to bail out banks through monetization (banking bias); or an incentive to use accommodative monetary policy when facing balance-of-payments imbalances.
15. This implies a helping-hand (Pigou 1920) view of policymaking, in which the policymaker aims to please citizens. The alternative assumption, the grabbing-hand view (Shleifer and Vishny 2002), implies that the policymaker aims to please specific constituencies, that is, the lobbies (see also Masciandaro 2009). This second modeling choice is not of interest in our framework, as inflation developments are a wide concern for the whole population of voters, and it is unlikely that powerful constituencies would systematically advocate the use of an inflationary tax.
16. The idea that the degree of CBI enters the utility function directly is in line with the results of the empirical literature on the effects of CBI. Indeed, this literature shows how CBI can be considered a "free lunch" able to guarantee price stability at no cost for real economic growth (see, among others, Grilli, Masciandaro, and Tabellini 1991; Alesina and Summers 1993; Cukierman 2008).
17. Bodea 2010 brings a first contribution to the small body of research investigating the complementarity between exchange-rate regimes and CBI (see also de Haan, Klaas, and Sturm 1993). Bodea shows that governments will prefer both a fixed, but adjustable, exchange-rate regime and an independent central bank that is not completely transparent.
18.
We are aware that no single definition of CBI is "right" for all countries, but we consider de jure indices, in particular the GMT one, the most appropriate for our analysis, since legislative changes might signal a stronger change in the relationship between the central bank and the government. One drawback of de jure indices is that legislative reforms might require a long legislative process, with a possible temporary decoupling between the degree of de jure independence and the effective level of independence of the central bank. Yet de facto indices suffer a similar drawback in situations where the central bank governor
satisfies all the wants and needs of the government and is not replaced, resulting in a high level of measured independence based on the turnover rate.
19. Given that the number of OECD member countries evolves over time (it increased from eighteen at the creation of the OECD in the 1960s to thirty-five countries in 2014), we group countries inside the OECD subsample starting from the year in which they officially became members of the OECD.
20. Note also that the policymaker's effort, which might be correlated with the other independent variables, does not enter directly into the optimal determination of CBI. The effort is nonetheless implicitly considered, as the policymaker's effort will naturally depend on the presence of shocks in the economy.
References Acemoglu, Daron, Simon Johnson, Pablo Querubin, and James A. Robinson. 2008. When does policy reform work? The case of central bank independence. Brookings Papers on Economic Activity 2008, no. 1: 351–418. Aghion, Philippe, Alberto Alesina, and Francesco Trebbi. 2004. “Endogenous Political Institutions.” Quarterly Journal of Economics 119, no. 2: 565–611. Aklin, Michaël, and Andreas Kern. 2016. “Is Central Bank Independence Always a Good Thing?” June 22. https://ssrn.com/abstract=2799220. Alesina, Alberto. 1989. “Politics and Business Cycles in Industrial Democracies.” Economic Policy 4, no. 8: 55–98. Alesina, Alberto, and Roberta Gatti. 1995. “Independent Central Banks: Low Inflation at No Cost?” American Economic Review 85, no. 2: 196–200. Alesina, Alberto, and Jeffrey Sachs. 1988. “Political Parties and the Business Cycle in the United States, 1948–1984.” Journal of Money, Credit, and Banking 20, no. 1: 63–82. Alesina, Alberto, and Andrea Stella. 2010. “The Politics of Monetary Policy.” Handbook of Monetary Economics 3: 1001–1054. Alesina, Alberto, and Lawrence H. Summers. 1993. “Central Bank Independence and Macroeconomic Performance: Some Comparative Evidence.” Journal of Money, Credit and Banking 25, no. 2: 151–162. Alpanda, Sami, and Adam Honig. 2009. “The Impact of Central Bank Independence on Political Monetary Cycles in Advanced and Developing Nations.” Journal of Money, Credit and Banking 41, no. 7: 1365–1389. Arnone, Marco, Bernard Laurens, and Jean-François Segalotto. 2006. “The Measurement of Central Bank Autonomy: Survey of Models, Indicators, and Empirical Evidence.” IMF working paper 06/227. Arnone, Marco, Bernard J. Laurens, Jean-François Segalotto, and Martin Sommer. 2009. “Central Bank Autonomy: Lessons from Global Trends.” IMF Staff Papers 56, no. 2: 263–296. Arnone, Marco, and Davide Romelli. 2013. “Dynamic Central Bank Independence Indices and Inflation Rate: A New Empirical Exploration.” Journal of Financial Stability 9, no. 
3: 385–398. Aydin, Burcu, and Engin Volkan. 2011. “Incorporating Financial Stability in Inflation Targeting Frameworks.” IMF working paper 11/224. Backus, David, and John Driffill. 1985. “Inflation and Reputation.” American Economic Review 75, no. 3: 530–538. Bade, Robin, and Michael Parkin. 1982. Central Bank Laws and Monetary Policy. London, Ontario: University of Western Ontario Department of Economics.
Peaks and Troughs: Central Bank Independence Cycles 87 Bagheri, Fatholla M., and Nader Habibi. 1998. “Political Institutions and Central Bank Independence: A Cross-Country Analysis.” Public Choice 96, nos. 1–2: 187–204. Ball, Laurence, Joseph Gagnon, Patrick Honohan, and Signe Krogstrup. 2016. What Else Can Central Banks Do? Geneva Reports on the World Economy 18. Geneva: ICMB/CEPR. Balls, Ed, James Howat, and Anna Stansbury. 2016. “Central Bank Independence Revisited: After the Financial Crisis, What Should a Model Central Bank Look Like?” Harvard Kennedy School M-RCBG associate working paper 67. Banaian, King, and William A. Luksetich. 2001. “Central Bank Independence, Economic Freedom, and Inflation Rates.” Economic Inquiry 39, no. 1: 149–161. Barro, Robert J., and David B. Gordon. 1983. “Rules, Discretion and Reputation in a Model of Monetary Policy.” Journal of Monetary Economics 12, no. 1: 101–121. Bayoumi, Tamim, Giovanni Dell’ariccia, Karl Friedrich Habermeier, Tommaso Mancini Griffoli, and Fabian Valencia. 2014. “Monetary Policy in the New Normal.” IMF working paper 14/3. Berger, Helge. 1997. “The Bundesbank’s Path to Independence: Evidence from the 1950s.” Public Choice 93, nos. 3–4: 427–453. Berger, Helge, Jakob de Haan, and Sylvester C. W. Eijffinger. 2001. “Central Bank Independence: An Update of Theory and Evidence.” Journal of Economic Surveys 15, no. 1: 3–40. Berger, Wolfram, and Friedrich Kißmer. 2013. “Central Bank Independence and Financial Stability: A Tale of Perfect Harmony?” European Journal of Political Economy 31: 109–118. Berggren, Niclas, Sven-Olov Daunfeldt, and Jörgen Hellström. 2014. “Social Trust and Central- Bank Independence.” European Journal of Political Economy 34: 425–439. Bernanke, Ben S. 2007. “Central Banking and Bank Supervision in the United States.” Remarks at Allied Social Sciences Association, January 5. Bernanke, Ben S. 2013a. “Celebrating 20 Years of the Bank of Mexico’s Independence.” Speech. 
Central Bank Independence—Progress and Challenges conference, Mexico City, October 14. Bernanke, Ben S. 2013b. “A Century of US Central Banking: Goals, Frameworks, Accountability.” Journal of Economic Perspectives 27, no. 4: 3–16. Bernanke, Ben S., and Mark Gertler. 1995. “Inside the Black Box: The Credit Channel of Monetary Policy Transmission.” Journal of Economic Perspectives 9, no. 4: 27–48. Bernhard, William. 1998. “A Political Explanation of Variations in Central Bank Independence.” American Political Science Review 92, no. 2: 311–327. Beyer, Andreas, Vitor Gaspar, Christina Gerberding, and Otmar Issing. 2008. “Opting Out of the Great Inflation: German Monetary Policy after the Breakdown of Bretton Woods.” NBER working paper 14596. Blinder, Alan S. 2008. The Quiet Revolution: Central Banking Goes Modern. New Haven: Yale University Press. Blinder, Alan S., Michael Ehrmann, Marcel Fratzscher, Jakob de Haan, and David-Jan Jansen. 2008. “Central Bank Communication and Monetary Policy: A Survey of Theory and Evidence.” Journal of Economic Literature 46, no. 4: 910–945. Bodea, Cristina. 2010. “Exchange Rate Regimes and Independent Central Banks: A Correlated Choice of Imperfectly Credible Institutions.” International Organization 64, no. 3: 411–442. Bodea, Cristina. 2013. “Independent Central Banks, Regime Type, and Fiscal Performance: The Case of Post-Communist Countries.” Public Choice 155, nos. 1–2: 81–107. Bodea, Cristina, and Raymond Hicks. 2015. “International Finance and Central Bank Independence: Institutional Diffusion and the Flow and Cost of Capital.” Journal of Politics 77, no. 1: 268–284.
88 Masciandaro and Romelli Briault, Clive, Andrew Haldane, and Mervyn A. King. 1996. “Central Bank Independence and Accountability: Theory and Evidence.” Bank of England Quarterly Bulletin (February): 63–68. Brumm, Harold J. 2006. “The Effect of Central Bank Independence on Inflation in Developing Countries.” Economics Letters 90, no. 2: 189–193. Brumm, Harold J. 2011. “Inflation and Central Bank Independence: Two-Way Causality?” Economics Letters 111, no. 3: 220–222. Buiter, Willem H. 2014. “Central Banks: Powerful, Political and Unaccountable?” Journal of the British Academy 2: 269–303. Burdekin, Richard C. K., and Leroy O. Laney. 2016. “Fiscal Policymaking and the Central Bank Institutional Constraint Una Vez Más.” Public Choice 167, nos. 3–4: 277–289. Campillo, Marta, and Jeffrey A. Miron. 1997. “Why Does Inflation Differ across Countries?” In Reducing Inflation: Motivation and Strategy, edited by Christina D. Romer and David H. Romer, 335–362. Chicago: University of Chicago Press. Carlstrom, Charles T., and Timothy S. Fuerst. 2009. “Central Bank Independence and Inflation: A Note.” Economic Inquiry 47, no. 1: 182–186. Cecchetti, Stephen G. 2013. “Central Bank Independence—A Path Less Clear.” Remarks at Central Bank Independence—Progress and Challenges conference, Mexico City, October 14. Cihak, Martin. 2007. “Central Bank Independence and Financial Stability.” July 5. https://ssrn.com/abstract=998335. Clarida, Richard, Jordi Galí, and Mark Gertler. 1999. “The Science of Monetary Policy: A New Keynesian Perspective.” Journal of Economic Literature 37: 1661–1707. Cohen-Cole, Ethan, and Jonathan Morse. 2013. “Monetary Policy and Capital Regulation in the US and Europe.” International Economics 134: 56–77. Crowe, Christopher. 2008. “Goal Independent Central Banks: Why Politicians Decide to Delegate.” European Journal of Political Economy 24, no. 4: 748–762. Crowe, Christopher, and Ellen E. Meade. 2008.
“Central Bank Independence and Transparency: Evolution and Effectiveness.” European Journal of Political Economy 24, no. 4: 763–777. Cukierman, Alex. 1992. Central Bank Strategy, Credibility, and Independence: Theory and Evidence. Cambridge: MIT Press. Cukierman, Alex. 1994. “Central Bank Independence and Monetary Control.” Economic Journal 104, no. 427: 1437–1448. Cukierman, Alex. 1998. “The Economics of Central Banking.” In Contemporary Economic Issues, edited by H. Wolf, 37–82. London: Palgrave Macmillan. Cukierman, Alex. 2008. “Central Bank Independence and Monetary Policymaking Institutions—Past, Present and Future.” European Journal of Political Economy 24, no. 4: 722–736. Cukierman, Alex. 2013. “Monetary Policy and Institutions before, during, and after the Global Financial Crisis.” Journal of Financial Stability 9, no. 3: 373–384. Cukierman, Alex, and Allan H. Meltzer. 1986. “A Theory of Ambiguity, Credibility, and Inflation under Discretion and Asymmetric Information.” Econometrica 54, no. 5: 1099–1128. Cukierman, Alex, Geoffrey P. Miller, and Bilin Neyapti. 2002. “Central Bank Reform, Liberalization and Inflation in Transition Economies—An International Perspective.” Journal of Monetary Economics 49, no. 2: 237–264. Cukierman, Alex, and Steven B. Webb. 1995. “Political Influence on the Central Bank: International Evidence.” World Bank Economic Review 9, no. 3: 397–423.
Curdia, Vasco, and Michael Woodford. 2011. “The Central-Bank Balance Sheet As an Instrument of Monetary Policy.” Journal of Monetary Economics 58, no. 1: 54–79. De Haan, Jakob, and Sylvester C. W. Eijffinger. 2017. “Central Bank Independence under Threat?” CEPR Policy Insight 87. De Haan, Jakob, and Sylvester Eijffinger. 2016. “The Politics of Central Bank Independence.” EBC discussion paper 2016-004. De Haan, Jakob, Klaas Knot, and Jan-Egbert Sturm. 1993. “On the Reduction of Disinflation Costs: Fixed Exchange Rates or Central Bank Independence?” PSL Quarterly Review 46, no. 187: 429–443. De Haan, Jakob, Donato Masciandaro, and Marc Quintyn. 2008. “Does Central Bank Independence Still Matter?” European Journal of Political Economy 24, no. 4: 717–721. De Haan, Jakob, and Jan-Egbert Sturm. 1992. “The Case for Central Bank Independence.” Banca Nazionale del Lavoro Quarterly Review 45, no. 182: 305–327. De Haan, Jakob, and Gert Jan van ’t Hag. 1995. “Variation in Central Bank Independence across Countries: Some Provisional Empirical Evidence.” Public Choice 85, nos. 3–4: 335–351. Den Haan, Wouter, Martin Ellison, Ethan Ilzetzki, Michael McMahon, and Ricardo Reis. 2017. “The Future of Central Bank Independence: Results of the CFM–CEPR Survey.” VoxEU.org, January 10. https://voxeu.org/article/future-central-bank-independence. Dincer, N. Nergiz, and Barry Eichengreen. 2014. “Central Bank Transparency and Independence: Updates and New Measures.” International Journal of Central Banking 10, no. 1: 189–259. Down, Ian. 2009. “Central Bank Independence, Disinflations and Monetary Policy.” Business and Politics 10, no. 3: 1–22. Dreher, Axel, Jan-Egbert Sturm, and Jakob de Haan. 2008. “Does High Inflation Cause Central Bankers to Lose Their Job? Evidence Based on a New Data Set.” European Journal of Political Economy 24, no. 4: 778–787. Dreher, Axel, Jan-Egbert Sturm, and Jakob de Haan. 2010.
“When Is a Central Bank Governor Replaced? Evidence Based on a New Data Set.” Journal of Macroeconomics 32, no. 3: 766–781. Economist. 2016. “What If the Helicopters Took Off?” March 9. Eggertsson, Gauti B., and Eric Le Borgne. 2010. “A Political Agency Theory of Central Bank Independence.” Journal of Money, Credit and Banking 42, no. 4: 647–677. Eichengreen, Barry, and Nergiz Dincer. 2011. “Who Should Supervise? The Structure of Bank Supervision and the Performance of the Financial System.” NBER working paper 17401. Eijffinger, Sylvester, and Jakob de Haan. 1996. The Political Economy of Central-Bank Independence. Princeton: Princeton University Department of Economics. Eijffinger, Sylvester C. W., and Petra M. Geraats. 2006. “How Transparent Are Central Banks?” European Journal of Political Economy 22, no. 1: 1–21. Eijffinger, Sylvester C. W., and Marco Hoeberichts. 1998. “The Trade Off between Central Bank Independence and Conservativeness.” Oxford Economic Papers 50, no. 3: 397–411. Eijffinger, Sylvester C. W., and Marco M. Hoeberichts. 2008. “The Trade-off between Central Bank Independence and Conservatism in a New Keynesian Framework.” European Journal of Political Economy 24, no. 4: 742–747. Eijffinger, Sylvester C. W., and Eric Schaling. 1995. “The Ultimate Determinants of Central Bank Independence.” Tilburg University discussion paper 1995-5. Eijffinger, Sylvester, and Patrick Stadhouders. 2003. “Monetary Policy and the Rule of Law.” CEPR discussion paper 3698.
Farvaque, Etienne. 2002. “Political Determinants of Central Bank Independence.” Economics Letters 77, no. 1: 131–135. Fischer, Stanley. 1995. “Central-Bank Independence Revisited.” American Economic Review 85, no. 2: 201–206. Fischer, Stanley. 2015. “Central Bank Independence.” Herbert Stein Memorial Lecture, National Economists Club, Washington, D.C. Forder, James. 1996. “On the Assessment and Implementation of ‘Institutional’ Remedies.” Oxford Economic Papers 48, no. 1: 39–51. Franzese, Robert J., Jr. 1999. “Partially Independent Central Banks, Politically Responsive Governments, and Inflation.” American Journal of Political Science 43, no. 3: 681–706. Friedman, Milton. 1968. “The Role of Monetary Policy.” American Economic Review 58, no. 1: 1–17. Frisell, Lars, Kasper Roszbach, and Giancarlo Spagnolo. 2007. “Governing the Governors: A Clinical Study of Central Banks.” Riksbank working paper 221. Ftiti, Zied, Abdelkader Aguir, and Mounir Smida. 2017. “Time-Inconsistency and Expansionary Business Cycle Theories: What Does Matter for the Central Bank Independence-Inflation Relationship?” Economic Modelling 67: 215–227. Gaiotti, Eugenio, and Alessandro Secchi. 2012. “Monetary Policy and Fiscal Dominance in Italy from the Early 1970s to the Adoption of the Euro: A Review.” Bank of Italy occasional paper 141. Galí, Jordi. 2014. “The Effects of a Money-Financed Fiscal Stimulus.” CEPR discussion paper DP10165. Galí, Jordi, and Tommaso Monacelli. 2005. “Monetary Policy and Exchange Rate Volatility in a Small Open Economy.” Review of Economic Studies 72, no. 3: 707–734. Gertler, Mark, and Peter Karadi. 2011. “A Model of Unconventional Monetary Policy.” Journal of Monetary Economics 58, no. 1: 17–34. Giavazzi, Francesco, and Alberto Giovannini. 2011. “Central Banks and the Financial System.” In Handbook of Central Banking, Financial Regulation and Supervision: After the Financial Crisis, edited by Sylvester Eijffinger and Donato Masciandaro, 3–29.
Cheltenham: Edward Elgar. Goodfriend, Marvin. 1986. “Monetary Mystique: Secrecy and Central Banking.” Journal of Monetary Economics 17, no. 1: 63–92. Goodfriend, Marvin. 2007. “How the World Achieved Consensus on Monetary Policy.” Journal of Economic Perspectives 21, no. 4: 47–68. Goodfriend, Marvin. 2011. “Central Banking in the Credit Turmoil: An Assessment of Federal Reserve Practice.” Journal of Monetary Economics 58, no. 1: 1–12. Goodfriend, Marvin. 2012. “The Elusive Promise of Independent Central Banking.” Monetary and Economic Studies 30: 39–54. Goodhart, Charles Albert Eric. 1988. The Evolution of Central Banks. Cambridge: MIT Press. Goodhart, Charles Albert Eric. 2011. “The Changing Role of Central Banks.” Financial History Review 18, no. 2: 135–154. Goodhart, Charles Albert Eric, and Dirk Schoenmaker. 1995. “Should the Functions of Monetary Policy and Banking Supervision Be Separated?” Oxford Economic Papers 47, no. 4: 539–560. Goodhart, Charles Albert Eric, Carolina Osorio, and Dimitrios Tsomocos. 2009. “Analysis of Monetary Policy and Financial Stability: A New Paradigm.” CESifo working paper 2885. Goodman, John B. 1991. “The Politics of Central Bank Independence.” Comparative Politics 23, no. 3: 329–349.
Gorton, Gary, and Andrew Metrick. 2013. “The Federal Reserve and Panic Prevention: The Roles of Financial Regulation and Lender of Last Resort.” Journal of Economic Perspectives 27, no. 4: 45–64. Grilli, Vittorio, Donato Masciandaro, and Guido Tabellini. 1991. “Political and Monetary Institutions and Public Financial Policies in the Industrial Countries.” Economic Policy 6, no. 13: 341–392. Gutierrez, Eva. 2003. “Inflation Performance and Constitutional Central Bank Independence: Evidence from Latin America and the Caribbean.” IMF working paper 03/53. Hasan, Iftekhar, and Loretta J. Mester. 2008. “Central Bank Institutional Structure and Effective Central Banking: Cross-Country Empirical Evidence.” Comparative Economic Studies 50, no. 4: 620–645. Hayo, Bernd. 1998. “Inflation Culture, Central Bank Independence and Price Stability.” European Journal of Political Economy 14, no. 2: 241–263. Henderson, Dale W., and Warwick J. McKibbin. 1993. “A Comparison of Some Basic Monetary Policy Regimes for Open Economies: Implications of Different Degrees of Instrument Adjustment and Wage Persistence.” Carnegie-Rochester Conference Series on Public Policy 39: 221–317. Hielscher, Kai, and Gunther Markwardt. 2012. “The Role of Political Institutions for the Effectiveness of Central Bank Independence.” European Journal of Political Economy 28, no. 3: 286–301. Hughes Hallett, A., and Jan Libich. 2006. “Central Bank Independence, Accountability and Transparency: Complements or Strategic Substitutes?” CEPR discussion paper 5470. Ingves, Stefan. 2011. “Central Bank Governance and Financial Stability.” BIS study group report, May. Ioannidou, Vasso P. 2005. “Does Monetary Policy Affect the Central Bank’s Role in Bank Supervision?” Journal of Financial Intermediation 14, no. 1: 58–85. Issing, Otmar. 2005a. “Communication, Transparency, Accountability: Monetary Policy in the Twenty-First Century.” Federal Reserve Bank of St.
Louis Review (March): 65–83. Issing, Otmar. 2005b. “Why Did the Great Inflation Not Happen in Germany?” Federal Reserve Bank of St. Louis Review (March): 329–336. Issing, Otmar. 2012. “The Mayekawa Lecture: Central Banks—Paradise Lost.” Monetary and Economic Studies 30: 55–74. Jácome, Luis I., and Francisco Vázquez. 2008. “Is There Any Link between Legal Central Bank Independence and Inflation? Evidence from Latin America and the Caribbean.” European Journal of Political Economy 24, no. 4: 788–801. Keefer, Philip, and David Stasavage. 2003. “The Limits of Delegation: Veto Players, Central Bank Independence, and the Credibility of Monetary Policy.” American Political Science Review 97, no. 3: 407–423. Khan, Ashraf. 2016. “Central Bank Governance and the Role of Nonfinancial Risk Management.” IMF working paper 16/34. Kireyev, Alexei. 2015. “How to Improve the Effectiveness of Monetary Policy in the West African Economic and Monetary Union.” IMF working paper 15/99. Klomp, Jeroen, and Jakob de Haan. 2009. “Central Bank Independence and Financial Instability.” Journal of Financial Stability 5, no. 4: 321–338. Klomp, Jeroen, and Jakob de Haan. 2010a. “Central Bank Independence and Inflation Revisited.” Public Choice 144, nos. 3–4: 445–457.
Klomp, Jeroen, and Jakob de Haan. 2010b. “Inflation and Central Bank Independence: A Meta-Regression Analysis.” Journal of Economic Surveys 24, no. 4: 593–621. Komai, Alejandro, and Gary Richardson. 2011. “A Brief History of Regulations regarding Financial Markets in the United States: 1789 to 2009.” NBER working paper 17443. Kydland, Finn E., and Edward C. Prescott. 1977. “Rules rather than Discretion: The Inconsistency of Optimal Plans.” Journal of Political Economy 85, no. 3: 473–491. Laeven, Luc, and Fabian Valencia. 2013. “Systemic Banking Crises Database.” IMF Economic Review 61, no. 2: 225–270. Lastra, Rosa Maria. 1996. Central Banking and Banking Regulation. London: London School of Economics and Political Science. Lohmann, Susanne. 1997. “Partisan Control of the Money Supply and Decentralized Appointment Powers.” European Journal of Political Economy 13, no. 2: 225–246. Lucas, Robert E. 1973. “Some International Evidence on Output-Inflation Tradeoffs.” American Economic Review 63, no. 3: 326–334. Martin, Fernando M. 2015. “Debt, Inflation and Central Bank Independence.” European Economic Review 79: 129–150. Masciandaro, Donato. 2009. “Politicians and Financial Supervision Unification outside the Central Bank: Why Do They Do It?” Journal of Financial Stability 5, no. 2: 124–146. Masciandaro, Donato. 2012a. “Back to the Future? Central Banks As Prudential Supervisors in the Aftermath of the Crisis.” European Company and Financial Law Review 9, no. 2: 112–130. Masciandaro, Donato. 2012b. “Central Banking and Supervision: The Political Cycle.” Paolo Baffi Centre research paper 2012-131. Masciandaro, Donato. 2012c. “Monetary Policy and Banking Supervision: Still at Arm’s Length? A Comparative Analysis.” European Journal of Comparative Economics 9, no. 3: 349. Masciandaro, Donato, and Marc Quintyn. 2009.
“Reforming Financial Supervision and the Role of the Central Banks: A Review of Global Trends, Causes and Effects (1998–2008).” CEPR Policy Insight 30, no. 1: 11. Masciandaro, Donato, and Marc Quintyn. 2015. “The Governance of Financial Supervision: Recent Developments.” Journal of Economic Surveys 30, no. 5: 982–1006. Masciandaro, Donato, and Davide Romelli. 2015. “Ups and Downs of Central Bank Independence from the Great Inflation to the Great Recession: Theory, Institutions and Empirics.” Financial History Review 22, no. 3: 259–289. Masciandaro, Donato, and Davide Romelli. 2018. “Central Bankers As Supervisors: Do Crises Matter?” European Journal of Political Economy 52, no. 3: 120–140. Masciandaro, Donato, and Franco Spinelli. 1994. “Central Banks’ Independence: Institutional Determinants, Rankings and Central Bankers’ Views.” Scottish Journal of Political Economy 41, no. 4: 434–443. Maslowska, Aleksandra A. 2011. “Quest for the Best: How to Measure Central Bank Independence and Show Its Relationship with Inflation.” Czech Economic Review 5, no. 2: 132–161. Maxfield, Sylvia. 1997. Gatekeepers of Growth: The International Political Economy of Central Banking in Developing Countries. Princeton: Princeton University Press. McCallum, Bennett T. 1995. “Two Fallacies concerning Central-Bank Independence.” American Economic Review 85, no. 2: 207–211. Milesi-Ferretti, Gian Maria. 1995. “The Disadvantage of Tying Their Hands: On the Political Economy of Policy Commitments.” Economic Journal 105: 1381–1402.
Miller, Geoffrey P. 1998. “An Interest-Group Theory of Central Bank Independence.” Journal of Legal Studies 27, no. 2: 433–453. Morris, JoAnne, and Tonny Lybek. 2004. “Central Bank Governance: A Survey of Boards and Management.” IMF working paper 04/226. Moser, Peter. 1999. “Checks and Balances, and the Supply of Central Bank Independence.” European Economic Review 43, no. 8: 1569–1593. Nickel, Christiane, and Andreas Tudyka. 2014. “Fiscal Stimulus in Times of High Debt: Reconsidering Multipliers and Twin Deficits.” Journal of Money, Credit and Banking 46, no. 7: 1313–1344. Niemann, Stefan. 2011. “Dynamic Monetary-Fiscal Interactions and the Role of Monetary Conservatism.” Journal of Monetary Economics 58, no. 3: 234–247. Niemann, Stefan, Paul Pichler, and Gerhard Sorger. 2013. “Central Bank Independence and the Monetary Instrument Problem.” International Economic Review 54, no. 3: 1031–1055. Nier, Erlend. 2009. “Financial Stability Frameworks and the Role of Central Banks: Lessons from the Crisis.” IMF working paper 09/70. Nolivos, Roberto Delhy, and Guillermo Vuletin. 2014. “The Role of Central Bank Independence on Optimal Taxation and Seigniorage.” European Journal of Political Economy 34: 440–458. Orphanides, Athanasios. 2011. “Monetary Policy Lessons from the Crisis.” In Handbook of Central Banking, Financial Regulation and Supervision: After the Financial Crisis, edited by Sylvester Eijffinger and Donato Masciandaro, 30–65. Cheltenham: Edward Elgar. Peek, Joe, Eric S. Rosengren, and Geoffrey M. B. Tootell. 1999. “Is Bank Supervision Central to Central Banking?” Quarterly Journal of Economics 114, no. 2: 629–653. Persson, Torsten, and Guido Tabellini. 1993. “Designing Institutions for Monetary Stability.” Carnegie-Rochester Conference Series on Public Policy 39: 53–84. Pigou, Arthur C. 1920. The Economics of Welfare. London: Macmillan. Polillo, Simone, and Mauro F. Guillén. 2005.
“Globalization Pressures and the State: The Worldwide Spread of Central Bank Independence.” American Journal of Sociology 110, no. 6: 1764–1802. Posen, Adam S. 1995. “Declarations Are Not Enough: Financial Sector Sources of Central Bank Independence.” NBER Macroeconomics Annual 10: 253–274. Quintyn, Marc, and Sophia Gollwitzer. 2010. “The Effectiveness of Macroeconomic Commitment in Weak(er) Institutional Environments.” IMF working paper 10/193. Reis, Ricardo. 2016. “Can the Central Bank Alleviate Fiscal Burdens?” NBER working paper 23014. Ricardo, David. 1824. Plan for the Establishment of a National Bank. London: John Murray. Rogoff, Kenneth. 1985. “The Optimal Degree of Commitment to an Intermediate Monetary Target.” Quarterly Journal of Economics 100: 1169–1189. Romelli, Davide. 2018. “The Political Economy of Reforms in Central Bank Design: Evidence from a New Dataset.” Mimeo, Department of Economics, Trinity College Dublin. Sargent, Thomas J., and Neil Wallace. 1984. “Some Unpleasant Monetarist Arithmetic.” In Monetarism in the United Kingdom, edited by Brian Griffiths and Geoffrey Edward Wood, 15–41. London: Palgrave Macmillan. Shleifer, Andrei, and Robert W. Vishny. 2002. The Grabbing Hand: Government Pathologies and Their Cures. Cambridge: Harvard University Press. Sieg, Gernot. 1997. “A Model of Partisan Central Banks and Opportunistic Political Business Cycles.” European Journal of Political Economy 13, no. 3: 503–516. Siklos, Pierre L. 2002. The Changing Face of Central Banking: Evolutionary Trends since World War II. Cambridge: Cambridge University Press.
Siklos, Pierre L. 2008. “No Single Definition of Central Bank Independence Is Right for All Countries.” European Journal of Political Economy 24, no. 4: 802–816. Siklos, Pierre L., Martin T. Bohl, and Mark E. Wohar, eds. 2010. Challenges in Central Banking: The Current Institutional Environment and Forces Affecting Monetary Policy. Cambridge: Cambridge University Press. Sturm, Jan-Egbert, and Jakob de Haan. 2001. “Inflation in Developing Countries: Does Central Bank Independence Matter?” CESifo working paper 511. Svensson, Lars E. O. 1997. “Optimal Inflation Targets, ‘Conservative’ Central Banks, and Linear Inflation Contracts.” American Economic Review 87, no. 1: 98–114. Taylor, John B. 1993. “Discretion versus Policy Rules in Practice.” Carnegie-Rochester Conference Series on Public Policy 39: 195–214. Ueda, Kenichi, and Fabian Valencia. 2014. “Central Bank Independence and Macro-prudential Regulation.” Economics Letters 125, no. 2: 327–330. Vaubel, Roland. 1997. “The Bureaucratic and Partisan Behavior of Independent Central Banks: German and International Evidence.” European Journal of Political Economy 13, no. 2: 201–224. Vicarelli, Fausto, Richard Sylla, Alec Cairncross, Jean Bouvier, Carl-Ludwig Holtfrerich, and Giangiacomo Nardozzi. 1988. Central Banks’ Independence in Historical Perspective. Berlin: Walter de Gruyter. Vuletin, Guillermo, and Ling Zhu. 2011. “Replacing a ‘Disobedient’ Central Bank Governor with a ‘Docile’ One: A Novel Measure of Central Bank Independence and Its Effect on Inflation.” Journal of Money, Credit and Banking 43, no. 6: 1185–1215. Waller, Christopher J. 2011. “Independence + Accountability: Why the Fed Is a Well-Designed Central Bank.” Federal Reserve Bank of St. Louis Review 93, no. 5: 293–301. Walsh, Carl E. 1995a. “Is New Zealand’s Reserve Bank Act of 1989 an Optimal Central Bank Contract?” Journal of Money, Credit and Banking 27, no. 4: 1179–1191. Walsh, Carl E. 1995b.
“Optimal Contracts for Central Bankers.” American Economic Review 85, no. 1: 150–167. Williams, John C. 2015. “Monetary Policy and the Independence Dilemma.” Federal Reserve Bank of San Francisco Economic Letter 15. Wood, John. 2008. “The Meanings and Historical Background of Central Bank Independence.” Paolo Baffi Centre research paper 2008-05. Woodford, Michael. 2012. “Inflation Targeting and Financial Stability.” NBER working paper 17967.
Appendix 3.1 The Grilli, Masciandaro, and Tabellini (GMT) Index of CBI The political index is based on a binary code assigned to eight different characteristics that sum up the ability of monetary authorities to independently achieve the final goals of their policy. This index captures three main aspects of monetary regimes: the procedure
for appointing the members of the central bank governing bodies, the relationship between these bodies and the government, and the formal responsibilities of the central bank. Starting from these three aspects, one point is assigned for each of the following criteria, if satisfied. I. Governor and central bank board appointment:
• The governor is appointed without government involvement. • The governor is appointed for more than five years. • The other members of the board of directors are appointed without government involvement. • The other board members are appointed for more than five years.
II. Relationships with government:
• There is no mandatory participation of government representative(s) on the board. • No government approval is required for formulation of monetary policy.
III. Objectives and responsibilities of the central bank:
• The central bank is legally obliged to pursue monetary stability as one of its primary objectives. • There are legal provisions that strengthen the central bank’s position in the event of a conflict with the government.
The economic index summarizes the degree of independence of central banks in choosing their monetary policy instruments. Its three main aspects concern the influence of the government in determining how much to borrow from the central bank, the nature of the monetary instruments under the control of the central bank, and the degree of central bank involvement in banking supervision. Again, one point is assigned for each of the following satisfied criteria. I. Monetary financing of public deficits:
• There is no automatic procedure for the government to obtain direct credit from the central bank. • When available, direct credit facilities are extended to the government at market interest rates. • Direct credit facilities are temporary. • Direct credit facilities are for a limited amount. • The central bank does not participate in the primary market for public debt.
II. Monetary instruments:

• The central bank is responsible for setting the policy rate.

III. Central bank involvement in banking supervision:

• The central bank has no responsibility for overseeing the banking sector (two points) or shares its responsibility with another institution (one point).
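The scoring scheme just described can be summarized in a short sketch. This is our own illustration, not code from the chapter; the criterion labels are shorthand for the criteria listed above.

```python
# Illustrative sketch of the GMT scoring scheme. Criterion labels are our own
# shorthand for the criteria listed in Appendix 3.1.

# Political index: eight binary criteria, one point each.
POLITICAL_CRITERIA = [
    "governor_appointed_without_government",        # appointments
    "governor_term_over_five_years",
    "board_appointed_without_government",
    "board_term_over_five_years",
    "no_government_representative_on_board",        # relations with government
    "no_government_approval_of_monetary_policy",
    "monetary_stability_a_primary_objective",       # objectives and responsibilities
    "legal_protection_in_conflict_with_government",
]

# Economic index: six binary criteria plus a 0/1/2 banking-supervision score.
ECONOMIC_CRITERIA = [
    "no_automatic_direct_credit_to_government",     # monetary financing of deficits
    "direct_credit_at_market_rates",
    "direct_credit_temporary",
    "direct_credit_limited_in_amount",
    "no_participation_in_primary_market_for_public_debt",
    "central_bank_sets_policy_rate",                # monetary instruments
]

def gmt_scores(satisfied, supervision_points):
    """Return (political, economic) raw GMT scores.

    `satisfied` is the set of criteria met; `supervision_points` is 2 if the
    central bank has no banking-supervision role, 1 if the role is shared with
    another institution, and 0 if it supervises the banking sector alone.
    """
    political = sum(c in satisfied for c in POLITICAL_CRITERIA)
    economic = sum(c in satisfied for c in ECONOMIC_CRITERIA) + supervision_points
    return political, economic

# A bank meeting every criterion and sharing supervision scores (8, 7).
print(gmt_scores(set(POLITICAL_CRITERIA) | set(ECONOMIC_CRITERIA), 1))
```

Each raw score has a maximum of eight points, so dividing by eight yields the [0,1] normalization used in the appendix figures.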
Appendix 3.2 Tables and Figures

[Figure: GMT economic and GMT political indices (left axis, average degree of CBI [0,1]) and the inflation rate (right axis, average inflation rate in %), 1975–2015.]
Appendix Figure 3.A.1 The Evolution of the GMT Political and Economic Independence Indices for OECD Countries, 1972–2014. Notes: Figure 3.A.1 shows the evolution of the average political and economic indices of central bank independence (left vertical axis) and the inflation rate (right vertical axis) for OECD countries over the period 1972–2014.
[Figure: GMT economic and GMT political indices (left axis, average degree of CBI [0,1]) and the inflation rate (right axis, average inflation rate in %), 1975–2015.]
Appendix Figure 3.A.2 The Evolution of the GMT Political and Economic Independence Indices for Non-OECD Countries, 1972–2014. Notes: Figure 3.A.2 shows the evolution of the average political and economic indices of central bank independence (left vertical axis) and the inflation rate (right vertical axis) for non-OECD countries over the period 1972–2014.
Appendix Table 3.A.1 Country and Year of the First Analyzed Legislation

Afghanistan 2003; Albania 1992; Algeria 1972; Argentina 1972; Australia 1972; Austria 1972; Bahrain 1973; Belgium 1972; Bosnia and Herzegovina 1997; Brazil 1972; Bulgaria 1991; Canada 1972; Chile 1972; China 1995; Croatia 1991; Cyprus 1972; Czech Republic 1991; Denmark 1972; Estonia 1993; Finland 1972; France 1972; Germany 1972; Greece 1972; Hungary 1991; Iceland 1972; India 1972; Indonesia 1972; Iran 1972; Ireland 1972; Italy 1972; Japan 1972; Kuwait 1972; Latvia 1992; Lithuania 1994; Luxembourg 1983; Malaysia 1982; Malta 1994; Mexico 1972; Mongolia 1996; Montenegro 2005; Morocco 1972; Netherlands 1972; New Zealand 1972; Norway 1972; Poland 1997; Portugal 1972; Qatar 1993; Romania 1991; Russia 1992; Saudi Arabia 1972; Singapore 1991; Slovakia 1992; Slovenia 1991; South Korea 1972; Spain 1972; Sweden 1972; Switzerland 1972; Thailand 1972; Trinidad and Tobago 1972; Turkey 1972; Ukraine 1991; United Arab Emirates 1980; United Kingdom 1972; United States 1972; Venezuela 1972.
Appendix Table 3.A.2 Summary Statistics on the Evolution of the GMT Independence Index (Inflation- vs. Non-Inflation-Targeting Countries)

Inflation-targeting countries: 1972–1979 (16 countries, 128 observations); 1980–1989 (16 countries, 160 observations); 1990–1999 (21 countries, 198 observations); 2000–2006 (21 countries, 147 observations); 2007 (21 countries, 21 observations); 2008–2014 (21 countries, 147 observations).

Non-inflation-targeting countries: 1972–1979 (24 countries, 191 observations); 1980–1989 (27 countries, 265 observations); 1990–1999 (42 countries, 377 observations); 2000–2006 (44 countries, 300 observations); 2007 (44 countries, 44 observations); 2008–2014 (44 countries, 308 observations).

All countries: 1972–1979 (40 countries, 319 observations); 1980–1989 (43 countries, 425 observations); 1990–1999 (63 countries, 575 observations); 2000–2006 (65 countries, 447 observations); 2007 (65 countries, 65 observations); 2008–2014 (65 countries, 455 observations).

[Table columns: for each group and period, the Mean, Min., and Max. of the aggregated GMT index and of the GMT Political and GMT Economic subindices, each bounded between 0 and 1.]

Note: This table provides summary statistics for the aggregated GMT index of CBI (GMT), as well as for the subindices of political and economic independence. The sample of OECD and non-OECD member countries varies over time, based on the year in which a specific country joined the OECD.
Chapter 4

The Governance of Central Banks: With Some Examples and Some Implications for All

Forrest Capie and Geoffrey Wood
4.1 Introduction

Adam Smith expected that the dominant form of firm would be organized and run by a sole proprietor (1776, bk. 2, chap. 6). He argued that this was the only form by which it could be ensured that the firm was run predominantly in the interests of the owner and not of those who managed it on the owner’s behalf. Another path, where the sole proprietor is rare and generally confined to small firms, has, however, been chosen. This path was cleared by developments that Smith did not foresee in law, accountancy, and finance. But the clearing appears to be only temporary, for every major corporate failure and every major financial crash is followed by a burst of activity in what is called corporate governance. The thickets appear to grow back. At the moment, in the wake of the financial crash of the early twenty-first century and the subsequent recession, corporate governance is a subject of great attention in the United Kingdom, the United States, and most of the Western world, and a subject of Western concern about some other countries, notably China and Japan. This concern has followed the crash because of the widespread belief that the crash was at least in part due to the failure of the owners of firms to control their managements. Governance is, though, a topic whose discussion has been confined mainly to private-sector organizations. To an extent, this is not surprising, for the greater part of the public sector in most countries does not take the form of a corporation.1 But some parts do. The Bank of England (hereafter, the Bank) is a prominent, but certainly not the only, example. As will be argued, however, corporate governance is better viewed not as a separate subject, embracing matters not just economic but also ethical, but as a subdivision of an analytical framework that certainly dates back to Smith. In this chapter,
we show how that framework can embrace corporate governance and also readily be extended to public- as well as private-sector organizations. We argue that the study of the governance of central banks is certainly illuminating, is facilitated by viewing governance as a subdivision of a well-established area of economics, and may be useful for the conduct of policy. The structure of the chapter is as follows. There is first a brief examination of the meaning of the term governance. We then turn to setting out the UK approaches (note the plural) to corporate governance. Then the evolution of governance in the Bank is described, and parallels are noted with the Reserve Bank of New Zealand (hereafter, RBNZ), a central bank with a shorter history, of course, but similar in interesting ways and different in ways still more interesting. The chapter concludes with some general principles of good governance in central banks.
4.2 What Is Governance?

4.2.1 Definition and Applicability

According to the Oxford English Dictionary, governance has six meanings: the action or manner of governing; the state of being governed; the office, function, or power of governing; method of management, system of regulations; mode of living, behaviour, demeanour; wise self-command.
Does any part of this general sense need to be changed for central banks? It appears to us that it does not, for the only feature that differentiates them from the general run of companies for which the term is used is their ownership—with some rare exceptions, they are government-owned. The Bank was, indeed, created by the government although not for many years owned by the state. The first, second, and fourth of these aspects of the definition are particularly relevant here: the first encompasses what is being controlled and how; the second, who controls the central bank; and the fourth, what the bank does internally and externally. The fifth is not really relevant in this context, and the sixth is, of course, taken for granted for central banks. But, as we remarked above, the concept has been used exclusively (to our knowledge) in and for the private sector. Accordingly, we first consider some examples of corporate governance in the private sector before turning to central banking. Doing so facilitates our demonstrating that corporate governance is a subdivision of a well-established area of economics.
4.2.2 Corporate Governance in the United Kingdom

In the United Kingdom, there are several “corporate governance codes.” There is one for the general run of companies; then there is one for a particular corporate form, investment trusts; and then there is one for investing organizations, primarily public-sector bodies such as pension funds but applicable in principle for all such bodies. We start with the code for companies in general and then move on to that for investment trusts, as that is conceived of as a specialized version of the general one. The code for bodies such as pension funds is considered only briefly, as it is essentially an extension of the code for investment trusts. We focus on the United Kingdom because codes across the English-speaking world are all very similar, differing only because of differences in law as between countries.

(i) The first version of the UK corporate governance code was produced in 1992, by the Cadbury Committee.2 This code has endured. All work on UK corporate governance appears to contain the following quotation from that report:

Corporate governance is the system by which companies are directed and controlled. Boards of directors are responsible for the governance of their companies. The shareholders’ role in governance is to appoint the directors and the auditors and to satisfy themselves that an appropriate governance structure is in place. The responsibilities of the board include setting the company’s strategic aims, providing the leadership to put them into effect, supervising the management of the business and reporting to the shareholders on their stewardship. The board’s actions are subject to laws, regulations, and the shareholders in general meeting. (Cadbury Committee 1992, para. 2.5)
The latest version of the UK corporate governance code (Financial Reporting Council 2014) develops this:

Corporate governance is therefore about what the board of a company does and how it sets the values of the company. It is to be distinguished from the day-to-day operational management of the company by full-time executives. . . . It is based on the underlying principles of all good governance: accountability, transparency, probity and focus on the sustainable success of an entity over the longer term.
That sets the basic principles out pretty clearly and already yields obvious comparisons with, for example, the Bank. But it is useful to continue for the moment with the corporate governance of what we may term normal companies. The Financial Reporting Council in its revision of the code highlights two points it thinks are of particular importance: the avoiding of “groupthink” and the provision of information on risks that affect the “longer term viability” of the company.
The Council suggests that “sufficient diversity on the board” will help in avoiding groupthink. Happily, they write that this “includes, but is not limited to, gender and race.”3 The revised code also emphasizes the role of the board in “establishing the culture, values, and ethics of the company. It is important that the board sets the correct ‘tone from the top.’ . . . This will help prevent misconduct, unethical practices, and support the delivery of long term success.” The preface of the report concludes by saying that while the company is “in law primarily responsible to its shareholders . . . companies are encouraged to recognise the contribution made by other providers of capital and to confirm the board’s interest in listening to the views of such providers insofar as these are relevant to the company’s overall approach to governance.” These are the main principles that are fleshed out in the body of the report with suggestions for their implementation. But an important general point remains. It is emphasized that the code is intended to be flexible: The “comply or explain” approach is the trademark of corporate governance in the UK. . . . The Code is not a rigid set of rules. It consists of principles (main and supporting) and provisions. . . . It is recognised that an alternative to following a provision may be justified in particular circumstances if good governance can be achieved by other means. A condition of doing so is that the reasons for it should be explained clearly and carefully to shareholders. (2014, 4)
So much for the basic code, which is intended to apply to all companies.

(ii) The next code we set out is the Association of Investment Companies’ “Code of Corporate Governance: A Framework of Best Practice for Member Companies” (AIC 2015). This was updated following the updating of the Cadbury Code summarized above. Investment companies (at least, those that are members of the AIC) are investment trusts—closed-end funds. They may raise or repay capital (the latter by a purchase of their own shares) from time to time, but they are not obliged to, and generally do not, repurchase their own shares. The AIC writes:

Investment companies have special factors which have an impact on their governance arrangements. These special factors arise principally from two features. First, the customers and shareholders of an investment company are the same, thus simplifying shareholder considerations while magnifying the importance of this group’s concerns. Second, an investment company typically has no employees, and the roles of CEO (unofficially), portfolio management, administration, accounting and company secretarial tend to be provided by a third party fund manager (or delegated to it by others) who may have created the company at the outset and have an important voice in the original appointments to the board. (2015, 4)
They go on, highlighting the crucial difference between investment companies and others: “These factors mean that the fund manager is a more important stakeholder than a typical supplier.” The bulk of this code therefore deals with “board independence, and the review of management and other third party contracts.” The code goes into detail on what the AIC sees as the key factors to be dealt with in view of that general guidance: what shareholders want, the role of boards in achieving these goals, and finally four “Fundamentals behind the AIC Code.” These are that directors must put the interests of shareholders first, that directors must treat all shareholders “fairly,” that directors must be prepared to resign or otherwise lose office in the interests of long-term shareholder value, and that directors must address all relevant issues and communicate the results of their discussions clearly (AIC 2015, 7–8). In effect, then, it is clear that these two codes, one derived from the other, are concerned primarily to advance the long-term interests of the shareholders and to ensure that companies make clear what they are doing in order to pursue that goal. The details, of course, differ substantially because of the different kinds of companies involved. (iii) The third code we look at is one to which groups of investors can choose to conform. Such groups include, for example, pension funds and companies that manage investments on their behalf. This code is concerned not really with the behavior of these organizations but rather with the behavior of the firms in which they invest. It is a concern that such firms have good governance, conforming in outline with that of Cadbury and local best practice. They are expected to pay heed to the long-term consequences of their actions for the environment and to the health and general working conditions of their employees.
This code is fundamentally different from the other two we have discussed and is noted here only for completeness. It plainly reflects contemporary popular concerns but does not deal with how the investors themselves behave and is of no direct relevance to governance in central banks.
4.3 Governance Generalized?

These governance codes are, in fact, all special cases of solutions to a problem established for centuries in economic discussion, the problem of contract design. Contract design has received a good deal of attention over the years from economists. It is usually handled under the heading of “The Principal-Agent Problem,” the nomenclature invented by Steve Ross (1973). The “principal” is the person who wants something done; the “agent” is the person employed to do it. The “problem,” as we shall see, can arise for a variety of reasons. A little background from the history of economics is helpful in showing both the difficulties and the solutions.
Adam Smith touched on the subject in some detail at several points. For example: “the work done by slaves, though it appears to cost only their maintenance, is in the end the dearest of any. A person who can acquire no property, can have no interest but to eat as much, and labour as little as possible” (1776, bk. 1, chap. 8). But his most detailed, and best-known, discussion arises in his explanation for the decline of agriculture in continental Europe after the fall of the Roman Empire. The system had become primarily one where the land was worked by “metayers”: “The Landlord furnished them with the seed, cattle, and instruments of husbandry. The produce was divided equally between the proprietor and the farmer” (bk. 3, chap. 2). Smith identified two problems with this relationship—there would be underinvestment, and there would be misuse of resources:

It could never, however, be the interest even of this last species of cultivators [the metayers] to lay out, in the further improvement of the land, any part of the little stock they might save from their own share of the produce, because the lord, who laid out nothing, was to get one-half of whatever it produced. . . . It might be the interest of the metayer to make the land produce as much as could be brought out of it by means of the stock furnished by the proprietor; but it could never be in his interest to mix any part of his own with it. In France . . . the proprietors complain that their metayers take every opportunity of employing the master’s cattle rather in carriage than in cultivation; because in the one case they get the whole profits for themselves, in the other they share them with their landlords. (bk. 3, chap. 2, 367)
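Smith’s underinvestment argument can be put in simple arithmetic. The sketch below (the numbers are invented purely for illustration; they do not come from Smith) shows why an improvement that pays for an owner-cultivator, who keeps the whole return, does not pay for a metayer, who bears the whole cost but keeps only half the return:

```python
def net_gain_from_improvement(share, cost, extra_produce):
    """Cultivator's net gain from laying out his own stock on the land.

    share: fraction of the produce the cultivator keeps
    cost: what the improvement costs him (he bears all of it)
    extra_produce: the additional produce the improvement yields
    """
    return share * extra_produce - cost

# An improvement costing 100 that raises produce by 150:
owner = net_gain_from_improvement(1.0, 100, 150)    # owner-cultivator keeps it all
metayer = net_gain_from_improvement(0.5, 100, 150)  # metayer keeps only half

print(owner)    # 50.0  -> the improvement pays, so the owner invests
print(metayer)  # -25.0 -> the improvement loses money, so the metayer does not
```

Any improvement whose extra produce falls short of cost divided by the cultivator’s share is forgone, which is exactly Smith’s point: the lord, “who laid out nothing,” takes half of whatever the metayer’s own stock produces.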
Contract design has appeared in various contexts over the years, both before that famous discussion and after it. Hume seems to have been the first to state the free-rider problem: Two neighbours may agree to drain a meadow, which they possess in common; because it is easy for them to know each other’s mind; and each must perceive that the immediate consequence of his failing in his part, is the abandoning the whole project. But it is very difficult, and indeed impossible, that a thousand persons shou’d agree in any such action; it being difficult for them to concert so complicated a design, and still more difficult for them to execute it; while each seeks a pretext to free himself of the trouble and expence, and wou’d lay the whole burden on others. (Hume 1740, 538)
This was followed up in nineteenth-century discussions of public finance, on the “benefit approach” and the “ability to pay approach” to taxation. Various writers, notably Pantaleoni (1882) and de Viti de Marco (1934), advocated the benefit approach.4 Wicksell pointed out what became known subsequently as the free-rider problem with this approach:

If the individual is to spend his money for private and public uses so that his satisfaction is maximized he will obviously pay nothing whatsoever for public purposes. . . .
Whether he pays much or little will affect the scope of public service so slightly, that for all practical purposes, he himself will not notice it at all. Of course, if everyone were to do the same, the State will soon cease to function. (Wicksell 1896, 81)
In other words, it can be advantageous to the individual to conceal preferences. That, too, is a problem for contract design. This “voting” problem has been tackled by various authors; Borda, Bowen, and Vickrey are the best known. Borda (1781) showed that he was aware of the problem, but, as he acknowledged, his “solution” was a purely academic one: “My scheme is intended only for honest men.” Bowen (1943) proposed a sequential voting procedure but one that depended on myopia. Each voter voted according to his time preferences at each stage, but only because the voter did not look beyond that stage. Vickrey (1960) was led to the conjecture that there is no way of aggregating individual preferences by voting that cannot be distorted by strategic behavior. (A formal proof of this was provided by Gibbard in 1973.) The problem of monopoly regulation has also been recognized as essentially one of contract design. This was the approach of Loeb and Magat (1979), who recognized that (as with the voting problem above) the difficulty is lack of information. The regulator, in advance of setting the regulation, knows less about costs (and possibly demand) than does the regulated firm. Concealment (or nondisclosure) also lies behind the problem of moral hazard that has been recognized for centuries in insurance. A number of economists have grappled with how to design contracts for this; see, for example, Arrow (1963), Pauly (1974), and Grossman and Hart (1983). The same problem inhibits price discrimination. This was discussed in the context of infrastructure provision by Dupuit:

The best of all tariffs would be the one which would make pay those which use a way of communication a price proportional to the utility they derive from using this service. . . .
I do not have to say that I do not believe in the possible application of this voluntary tariff; it would meet an insurmountable obstacle in the universal dishonesty of passants, but it is the kind of tariff one must try to approach by a compulsory tariff. (Dupuit 1844, 223)
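The strategic distortion Vickrey conjectured no voting rule could escape is easy to exhibit in a small numerical sketch (the ballots below are invented for illustration). Under a Borda count, a candidate scores n − 1 points for each first place on a ballot of n candidates, n − 2 for each second place, and so on; here a single voter who prefers B changes the winner from A to B by insincerely burying A at the bottom of his ballot:

```python
def borda_winner(ballots):
    """Borda count: on a ballot of n candidates, the candidate in position i
    (0-based) scores n - 1 - i points. Ties are broken alphabetically."""
    scores = {}
    for ranking in ballots:
        n = len(ranking)
        for i, cand in enumerate(ranking):
            scores[cand] = scores.get(cand, 0) + (n - 1 - i)
    return max(sorted(scores), key=lambda c: scores[c]), scores

sincere = [
    ["B", "A", "C", "D"],  # voter 1 honestly prefers B, then A
    ["A", "B", "D", "C"],  # voter 2
    ["C", "A", "B", "D"],  # voter 3
]
print(borda_winner(sincere))      # A wins, 7 points to B's 6

# Voter 1 misreports, ranking the sincere winner A last:
manipulated = [["B", "C", "D", "A"]] + sincere[1:]
print(borda_winner(manipulated))  # now B wins, with 6 points
```

Voter 1 genuinely prefers B to A, so the misreport pays; concealing (indeed misrepresenting) preferences is individually advantageous, just as in Wicksell’s tax problem above.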
It arises, too, in economic planning. Hayek and Robbins had both many times pointed out the difficulty of such an activity: a market was needed before the information a planner needed would be disclosed. As late as 1967, Lange did not acknowledge this point: “Were I to rewrite my essay today my task would be much simpler. My answer to Hayek and Robbins would be: so what’s the trouble? Let us put the simultaneous equations on an electronic computer and we shall obtain the solution in less than a second. The market process with its cumbersome tatonnements appears old fashioned” (Lange 1967, 158). This was probably not the first but certainly an early and notable example of the fallacy that computers can solve all our problems. This fallacy was recently stressed by Easterly (2015).
But if we now stand back to see all the problems of contract design that economists have grappled with since Smith and Hume, what do we find? The extensive and often complex literature that has examined contract design problems shows that attempts to design contracts face difficulties and sometimes reach impasses because of three main types of information problems: adverse selection, moral hazard, and nonverifiability. The first is when the agent has private knowledge (about cost or valuation) that is not available to the principal. The second is when the agent can take action (of a type the principal would not wish) unknown to the principal. The third is when both principal and agent have the same information after the event, but no third party can observe it, so even after the event, the information cannot be used for contract enforcement in, for example, a court of law. None of these problems seems likely to interfere with a central bank contract. Hence, the principal-agent framework would appear a useful way to consider governance in central banks.5 It is useful at this point to reiterate three points. Governance describes behavior that it is hoped will lead to desired outcomes. Contract design tries to produce contracts that embody incentives that will lead to desired outcomes. And both, for good governance and for good contract design, require a clear view of what the desired outcomes are.
4.4 An Essential Interruption

Before turning to the two central banks we examine, we must consider legal systems. The reason is that there are differences, important for our purposes and in reaching our conclusions, between common and civil (or Roman) law systems. The first difference is that common law is permissive. Anything that is not forbidden is allowed. This is illustrated by the following quotation, from the summing up for the defense by Jeremy Hutchinson QC before the jury deliberated on the verdict in R. v. Bunton, a case that took place at Britain’s central criminal court (the Old Bailey) in 1965: “Nothing in this country is a crime, unless it is expressly forbidden by law, however inconvenient and unfortunate that may be.” That was of the greatest importance for the verdict and had implications for a future act of Parliament. Second, common law works on precedent, whereas Roman-type codes depend strictly on what has been written in the statutes. Hence, in a common-law country, an existing law can be interpreted so as to provide a legal framework for an activity not even dreamed of when the law was written. Third, a good number of countries with pure common-law systems do not have constitutional courts that can overrule governments. (Some common-law countries, of which the United States is the best known, do, of course, have such courts.) In Britain, for example, the Supreme Court can eventually decide that a law was written so that it did not produce the result the government intended; the only thing the government can then do is change the law.6 But the Court cannot say, “This law is inconsistent with the constitution as written,” simply because there is not one. These features of common-law systems were extremely important for the way Bank governance evolved, and they help
in understanding why governance of the RBNZ, so similar to the Bank at its foundation and also in its current state, diverged so substantially from it for some years.
4.5 The Bank of England and Its Evolving Governance

After a naval defeat by France in 1690, the government of William III wished to build a strong navy and needed to borrow to do so. The government’s credit, however, was not good. To encourage taking up the loan, all who did so became members of the Bank (formally known as the Governor and Company of the Bank of England), following a scheme devised by William Paterson and carried out by Charles Montagu, first Earl of Halifax, in 1694. In addition to simply lending the money and being paid interest (at 8 percent per annum), the subscribers thus incorporated received certain privileges—in particular, they became the government’s banker and were the only limited-liability corporation allowed to issue banknotes. A very limited amount is known about the governance of the Bank in its early years. There was a Court of Directors and a Court of Proprietors. But for most questions on the working of the Bank, there are poor answers. How, for example, were governors appointed? Even in the late twentieth century, this was far from clear. A former deputy governor wrote, “the procedures for identifying a new Pope, or a new Dalai Lama are less opaque than those which precede the appointment . . . of a Governor of the Bank of England” (Howard Davies, quoted in Capie 2010, 369). And what were the rules on the appointment of directors? At what point did government start to play a part in these appointments? It was before nationalization, but when exactly? Lack of clarity on all these matters characterized the Bank’s history. Horsefield (1953, 50) wrote of the early nineteenth century, “We hear a great deal, for example, about what the Bank of England ought to have done but relatively little about what it actually did.” On the appointment of governors and directors, a story developed that no retail bankers should be allowed to be directors, the explanation being that they had a conflict of interest.
Bagehot (1873) gave the story support. He wrote, “By old usage, the directors of the Bank of England cannot be themselves by trade bankers” (212). Later in the paragraph, that has become “the rule forbidding bankers to be [directors]” (213). And within a few lines, “the rule is absolute” (214). He continues, “Not only no private banker is a director of the Bank of England, but no director of any joint stock bank would be allowed to become such” (214). But when did such a rule appear? And then when did the “rigid” rule disappear? For it did. At the founding of the Bank in 1694, £1.2 million in capital was raised for the purpose of making a loan to government. There were 1,272 shareholders. A cap of £10,000 was placed on any single subscription. There seems to have been no lower limit, although £100 appears to be the smallest subscription. A minimum of £500 was required
108 Capie and Wood for voting privileges, £2,000 was required to become a director, and £4,000 to become Governor. But we can’t discover when these amounts changed. There was only a modest number of shareholders in the category of less than £500. The bulk of the shareholders fell into the £500–£3,999 grouping. A small number held more than £4,000 and less than £10,000, and only twelve held the maximum of £10,000. These figures are all in nominal terms, and it should be remembered they were huge at the time of the Bank’s founding. The total number of shareholders increased from that initial 1,272 to 17,134 in 1945 at the point of nationalization (Anson and Capie 2014). The 1700s were early days for joint stock companies. There were few, if any, regulations on information provided or records kept on any corporate business. We know the names of all the subscribers. But the records of the meetings that voters were entitled to attend did not record who attended, let alone who voted or, indeed, what the issues were. It does seem, though, that nothing of significance was decided at these meetings. The Bank’s capital was increased at several points between 1694 and 1816 by means of new issues, thus also increasing the number of shareholders. In 1946, the Bank was nationalized. Shareholders were given government stock in exchange for their shareholding to an extent that left them no worse off in monetary terms. Nothing changed in terms of the organization of the Bank or what it did. The court remained in place, and decisions continued to be made in the same way. The bank paid a flat-rate “dividend” to the Treasury and retained profits beyond that. Incidentally, these profits rose to “embarrassing” levels at various points over the next forty or so years. The main point we wish to make here is that governance across a long period was essentially unchanged. Government made it known to the Bank what it wanted. 
The Bank complied and in return retained or improved its privileged position. Secrecy covered all operations. Very little was written down. At the moment, we simply note without comment these early governance arrangements of the Bank. We return to them later, as they seem well worth discussion.
4.5.1 The Bank’s Early Objectives and Functioning

A bimetallic standard obtained at the time of the Bank’s founding. But by the early years of the eighteenth century, there was a de facto gold standard.7 The standard in the early part of the period was not strictly defined. That had to await the 1844 act, at which point a fiduciary issue was specified and further note issue was tied in a one-to-one relationship to gold. This standard maintained price stability by linking the note issue to the stock of gold, which, while it could and did change, could change only slowly. There were fluctuations in the price level, but as long as Britain was on the gold standard, these fluctuations were modest and were followed by reversion to the trend.8 The only departures from that state of affairs were when Britain temporarily left the gold standard—suspended it, to use the technical term.
How was policy to defend the gold standard decided? Who decided what to do, and who acted on the decisions? As the Bank grew steadily to the point where it dominated the financial system, a prime responsibility was to maintain the standard, which it did by maintaining the convertibility of its own notes into gold. Other banks had to follow suit. One could therefore say that from an early date, the Bank slowly, almost imperceptibly, acquired one of the functions of a modern central bank, the maintenance of monetary stability. In the first stage, that was through preservation of a metallic standard. What is striking and important from the governance point of view was that the Bank can be said to have drifted into the role, and the 1844 act simply accepted that. The act did not impose a responsibility on the Bank so much as accept the responsibility it had acquired. It is also of interest from the point of view of an investigation into central bank governance how the Bank acquired its second objective, known nowadays as the preservation of financial stability. The banks that made up the banking system when the Bank was founded had changed little since banks emerged from being mediaeval goldsmiths. Whatever other activities they engaged in, their key business was borrowing and lending. When engaged in that business, they are crucial in two ways. They supply a part—in modern economies, by far the greater part—of the stock of money. And they transfer funds from lenders to borrowers; they act as financial intermediaries. When engaged in borrowing and lending, banks such as those considered here need both capital and liquidity. Capital consists of funds the bank actually owns. It can have been provided by the bank’s shareholders or, depending on the corporate form, the partners in the bank or even its sole owner.
Such funds are needed because however well run the bank is, and regardless of how well it treats its customers and the extent to which it is aware of its responsibilities to them, now and again it will lose money on a loan. Some or all of what it has lent will not be paid back. That is, though, no excuse (in either morality or law) for not repaying the people who have lent the bank money; so the bank needs some funds of its own to make up what is needed to repay depositors. But although such capital is necessary, it was, at the time we are currently writing of, no concern of the central bank; it was a matter for a bank’s owners and managers.9 Liquidity is in some ways a trickier concept. It can first of all most easily be thought of as cash that the bank keeps in its own vaults. Some cash is needed because while most of the time receipts match withdrawals, sometimes they fall short. Again, the bank is obliged to pay out what its customers demand, so to avoid default and consequent closure, it needs some cash in hand. This system is known as fractional reserve banking, because in it a bank’s cash reserves are less than the total of the bank’s liabilities that are repayable on demand. There are two principal views as to how such a system can prosper. One is that an entirely free system—“free banking,” with each bank in charge of its own note issue and there being no central bank—would solve any problems that arose, deal with any shock to the system. The alternative view is that the system requires a central bank. In England, a
central bank emerged. Bagehot, for one, would have preferred free banking but accepted that the system should remain as it had evolved. There was certainly fractional reserve banking in England before there was a central bank, although cash reserves were a large fraction of deposits. The argument that the system requires what is now called a central bank before it can grow was first articulated by Francis Baring in 1797. In 1793, war had been declared between France and Britain: That dreadful calamity is usually preceded by some indication which enables the commercial and monied men to make preparation. On this occasion the short notice rendered the least degree of general preparation impossible. The foreign market was either shut, or rendered more difficult of access to the merchant. Of course he would not purchase from the manufacturers; . . . the manufacturers in their distress applied to the Bankers in the country for relief; but as the want of money became general, and that want increased gradually by a general alarm, the country Banks required the payment of old debts. . . . In this predicament the country at large could have no other resource but London; and after having exhausted the bankers, that resource finally terminated in the Bank of England. In such cases the Bank are not an intermediary body, or power; there is no resource on their refusal, for they are the dernier resort. (Baring 1797, 19–23)10
The Bank was, in other words, the only bank that could still lend; small banks had turned to bigger banks, these to the big London banks, and these in turn to their banker, the Bank of England.11 Very soon after Baring’s 1797 coining of the term dernier resort, Henry Thornton provided a statement of what the lender of last resort was, why it was necessary, and how it should operate: “If any bank fails, a general run upon the neighbouring banks is apt to take place, which if not checked in the beginning by a pouring into the circulation of a very large quantity of gold, leads to very extensive mischief” (Thornton 1802, 182). And who was to pour in this gold? The Bank of England: “if the Bank of England, in future seasons of alarm, should be disposed to extend its discounts in a greater degree than heretofore, then the threatened calamity may be averted” (188). This was, Thornton emphasized, not incompatible with allowing some individual institutions to fail. Concern should be with the system as a whole. And the reason a “pouring into the circulation” would stop a panic and thus protect the system was described with great clarity by Bagehot: “What is wanted and what is necessary to stop a panic is to diffuse the impression that though money may be dear, still money is to be had. If people could really be convinced that they would have money . . . most likely they would cease to run in such a herd-like way for money” (Bagehot 1873, 64–65). In the kind of banking system that Britain had by the mid- to late nineteenth century, a system based on gold but with the central bank the monopoly supplier of notes, the responsibility for diffusing “the impression that . . . money is to be had” clearly rested with the central bank. Because the central bank was the monopoly note issuer, it was the ultimate source of cash. If it does not, by acting as lender of last resort, supply that cash in a panic, the panic
The Governance of Central Banks 111

will continue, it will get worse, and a widespread banking collapse will ensue, bringing along with it a sharp monetary contraction. Nineteenth-century practice gradually converged to following this advice, as the Bank in a series of crises moved closer and closer to accepting the “lender of last resort” role. In 1825, in 1847, and again in 1857, it supplied liquidity to end a panic. Then, in 1866, a really important step was taken with the Overend Gurney crisis. Overend, Gurney, and Co. was a very large firm; its annual turnover of bills of exchange was in value equal to about half the national debt, and its balance sheet was some ten times the size of that of the next-largest bank. It was floated during the stock-market boom of 1865. By early 1866, the boom had ended. A good number of firms were failing. Bank rate had been raised from 3 percent in July 1865 to 7 percent in January 1866. After February, Bank rate started to ease, but on May 11, Gurney’s was declared insolvent. To quote Bankers Magazine (in what would now be called its editorial) for June 1866, “a terror and anxiety took possession of men’s minds for the remainder of that and the whole following day.” The Bank for a brief time made matters worse by hesitating to lend even on government debt. The Bank Charter Act (which, among other things, restricted the note issue to the extent of the gold reserve plus a small fiduciary issue) was then suspended, and the panic gradually subsided. The event was important in itself, for a fully fledged banking crisis was possible but did not happen; it was also important in that it set a precedent for future action, and was said at the time to do so. For a full account of the crisis, see Clapham (1944) and Capie (2002). The failure in 1878 of the City of Glasgow Bank was much less dramatic. It had started respectably, was managed fraudulently, and failed.
There was fear that the Bank Charter Act would have to be suspended again (see Pressnell 1986), but no major problems appeared: “There was no run, or any semblance of a run; there was no local discredit” (Gregory 1929). Other Scottish banks took up all the notes of the bank; Gregory conjectures that they acted in that way to preserve confidence in their own note issues. Next in England came the (first) Baring crisis of 1890. Barings was a large bank of great reputation. It nevertheless became involved in a financial crisis in Argentina. Barings had lent heavily to Argentina, and on November 8, it revealed the resulting difficulties to the Bank. A hurried inspection of Barings suggested that the situation could be saved but that £10 million was needed to finance current and imminent obligations. A consortium was organized, initially with £17 million of capital. But there was no major panic and no run on London or on sterling. The impact on financial markets was small. Barings was liquidated and refloated as a limited company with additional capital and new management. Why the great difference between the first, second, and third of these episodes? Why the absence of general panic? The Bank had both learned to act as lender of last resort (LoLR) and had made clear that it stood ready so to act. (Note that while panic in the 1890 episode was prevented by knowledge that the LoLR stood ready, Barings itself was saved by what has become known as a bailout.12 This bailout was carried out by private-sector actors risking their own money. There was no forced levy on the general body of taxpayers.)
To summarize, we can now see the “core responsibilities” of a central bank, the two tasks that no other organization can carry out and that are essential for the functioning of a modern banking system. These two tasks are the maintenance of monetary stability in the sense of stable prices somehow defined and acting as LoLR when required.13 The Bank had the first task from an early date. By the third quarter of the nineteenth century, it had, through the combination of events and the pressure and arguments of individuals, adopted the second.14 It was able to develop these responsibilities because it was operating in the permissive framework of common law; as it was not within the prescriptive Roman-law framework, it did not have to wait for instructions. But it had not adopted the responsibility of being the LoLR regardless of what was going on in the banking system with which it was concerned. There were changes in the banking sector and in regulation; it was these, in combination with the Bank’s evolution, that led to Britain being almost completely free of banking crises from just after the middle of the nineteenth century to the beginning of the twenty-first. Details of these developments can be found in Capie and Wood (2011). But in summary, the essential points are as follows. First, banking was progressively deregulated, so that banks had the freedom to manage their own affairs, choosing both their capital ratio and their liquidity ratio in the light of their own individual experience. And second, banks managed their affairs cautiously. They used their freedom to be prudent.
4.5.2 Developments after World War II

In the aftermath of World War II, the international order changed substantially. The United States, which had not sustained significant economic damage in that war, leaped to being the largest and most influential economy. World trade started to take place to a significant extent in the US dollar (see Carse, Williamson, and Wood 1980 for a discussion of this and for further references), and the Bretton Woods system (extensively discussed in Bordo and Eichengreen 1992 and Steil 2013) of “fixed but adjustable exchange rates,” with its associated international financial institutions, was put in place. Despite all these changes, indeed in some respects turmoil, the British banking system remained stable. It would not be unfair to say that its stability was taken for granted, and there was a considerable degree of complacency about its operations. This complacency, which had ramifications right up to the 2007 crisis, revealed itself in the behavior and treatment of bank capital. Capital had always been important to British banks. Indeed, it was considered so important that the government and the Bank accepted the public interest argument that allowed the concealment of true profits and capital until as recently as 1970. Banks have always experienced a tension between having too much capital and having too little. Strong capital positions are designed to give depositors confidence; indeed, in the nineteenth century, they were occasionally used as a competitive weapon to attract deposits. But the greater the capital, the lower the return on capital, and so there is a trade-off between depositor confidence and shareholder satisfaction. And of course,
the quality of the assets, the quality of what the bank has lent against or bought, is key to any calculation. It is important to emphasize, in view of later developments, that right up to well after World War II, the amount of capital held by banks was entirely their own decision. Nonetheless, despite this freedom, by the beginning of the last quarter of the nineteenth century, the published capital ratios had settled at around 15 percent, with little variation among banks, and by the end of the century, that figure had slipped to around 10 to 12 percent. In the inflationary conditions of World War I, the ratios fell further, as much of the bank lending that led to the expansion of bank balance sheets was secured on government debt, then believed to be completely secure. In the years between the two world wars, there continued to be remarkable stability in the banking sector, and, no doubt in consequence, the ratios slipped slightly further. In the 1920s and 1930s, they had settled at around 7 percent. During World War II, the banks’ capital ratios fell further and fell sharply, to around 3 percent. The banks’ balance sheets expanded with government debt, while private lending fell away. But as the ratios fell, so, too, did the risk, since the bulk of the balance sheet was made up of gilts. The ratios reached their all-time lows in the 1950s, when they were down to between 2 and 3 percent. Raising capital after the war was not easy, with restrictions placed by the Capital Issues Committee (which restricted access to capital markets by private-sector borrowers to ensure there was always ready finance for the government). This particular restriction on the banks began to be troublesome, and bank chairmen spent a lot of time in the 1950s lobbying the Bank for support in allowing them to raise new capital.
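The trade-off between depositor confidence and shareholder satisfaction described above is simple arithmetic: for a given return on a bank’s assets, the thicker the equity cushion, the thinner the return to shareholders. A minimal sketch of that relation follows; the 1 percent return on assets is a hypothetical illustration, not a figure from the chapter, though the capital ratios loosely echo the historical ones cited in the text.

```python
# For a fixed return on assets, a higher capital ratio mechanically lowers the
# return on equity. This is the tension the text describes: more capital
# reassures depositors, less capital flatters shareholder returns.

def return_on_equity(return_on_assets: float, capital_ratio: float) -> float:
    """Return on equity implied by a return on assets and a capital/assets ratio."""
    return return_on_assets / capital_ratio

# A hypothetical bank earning 1 percent on its assets, at roughly the
# capital ratios the chapter cites for successive eras:
for ratio in (0.15, 0.10, 0.07, 0.03):
    print(f"capital ratio {ratio:.0%} -> return on equity "
          f"{return_on_equity(0.01, ratio):.1%}")
```

Halving the capital ratio doubles the return on equity for the same asset return, which helps explain why, absent regulation, the ratio settled wherever depositor confidence and shareholder pressure balanced, and why it could drift so low when the balance sheet was dominated by gilts.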
A note for the chief cashier made the problem clear: “it will be seen that the capital structure of the Clearing Banks is far from sound. . . . At present it is clear that in times of trouble they must either put footnotes in their balance sheets—which we deplore—or lean on us for financial aid which would be disastrous. . . . The banks, [if] freed from restriction, should pursue energetically the implementation of a programme which, for good reasons, is long overdue” (quoted in Billings and Capie 2007, 145). Observe that the banks themselves were keen to hold more capital. It was government regulation that restricted the issuance of more capital. And the Bank was a spectator. The Bank’s lack of involvement in deciding or supervising capital reflected the view that no bank was, in twenty-first-century terms, “too big to fail.” This had been demonstrated in the nineteenth century by the Bank’s actions in the Overend Gurney crisis, and as long as no bank really was too big to fail, and there was an LoLR to preserve the system in the face even of a large individual failure, preventing failure was a matter for each bank’s own good management. However, there was a crisis in the secondary banking sector in the mid-1970s, and that led to legislation in 1979. The Banking Act passed that year placed limits on individual exposures to ensure appropriate diversification. Exposures exceeding 25 percent of capital required prior approval of the Bank. That marked the beginning of interference in bank operations. And soon after that, in the 1980s, the rules of Basel took over. Capital regulation had started in Britain and rapidly expanded in complexity under the tutelage
of the Basel Committee.15 This was not supplemented by an increased emphasis on the banks’ being responsible for their own actions. How, indeed, could it be? The banks were being instructed on how they should behave. There was a natural consequence. It was one that ultimately led to substantial changes in the Bank’s responsibilities.
4.5.3 Drift to Inflation Targeting

Meanwhile, changes had also been taking place in the Bank’s other core responsibility. When these changes started is hard to state precisely, but it is reasonable to argue that they started in the 1970s. Throughout a good part of its history, certainly since the Bank Charter Act of 1844 (as modified in 1847), the Bank had been charged not with price stability but with adherence to a rule, that of keeping sterling on gold. This rule had to change when the standard was suspended, and the consequence had been inflation. But gold was returned to after every suspension, including that of World War I. Between the end of the war and 1925, the Bank had been conducting monetary policy with the aim of returning to gold at the old parity. That was government policy and was an objective shared by almost everyone.16 But the return of 1925 was achieved only briefly. Shortly after Britain left gold in 1931, the United States followed, and in the run-up to World War II, the Bank operated by maintaining the external value of sterling steady against the US dollar. While the departure from gold was a government decision, there seemed to be a feeling thereafter that maintaining stability against the dollar was a sensible thing to do. As is so often the case, it is not clear who decided. After World War II, a new international monetary system, the Bretton Woods system, was established. While this retained, at least as far as central banks and governments were concerned, a residual link to gold, its practical basis was the US dollar. Under this system, the Bank’s monetary “instruction” was to keep sterling within bounds around an exchange rate for the US dollar. The rate was a matter for the government, and the Bank’s role in the choice of it was at most advisory. From the point of view of maintaining price stability, this system proved inferior to the gold standard. But how big a change was it for the Bank?
It was still following a rule for holding sterling pegged to an external target. The difference was that while the gold peg could have been changed by government, it was not; in contrast, the exchange-rate peg was changed by government, and more than once, albeit never willingly. The really big changes to the “monetary stability” part of the Bank’s task came with the floating of sterling in 1972, after the 1971 breakdown of the Bretton Woods system. There was no longer a clear external target, no longer an obvious descendant of the gold standard rule, and sterling initially depreciated modestly after the float. This lack of a clear goal was soon followed by sharply rising inflation, peaking at around 27 percent in 1975; the dangers of the lack of an anchor had been exposed by the 1973 oil crisis. In 1976, the government went to the International Monetary Fund (IMF) for a loan, one
condition of this being a squeeze on public spending and another being limits on domestic credit expansion.17 This represented one of the earliest monetary aggregate targets for monetary policy in the Bank’s history. The Callaghan government fell in the election of 1979 and was replaced by a Conservative government, with Margaret Thatcher as prime minister. This government adopted targets for the growth of a monetary aggregate, sterling M3.18 This target was forced on a reluctant Bank by a not particularly enthusiastic Treasury. Accounts of this episode can be found in Griffiths and Wood (1984) and Lawson (1992). For a variety of reasons, sterling soared, and it eventually became clear that this first attempt at a domestically chosen domestic target had not been a success. In 1988, there was a return to an external target; monetary policy was now guided by what was known as “shadowing the Deutschmark.” That is to say, sterling was kept within an informal band around the Deutschmark. But this external target caused problems in turn, first leading to interest rates so low that inflation accelerated and then, after German reunification in 1990, to interest rates too high for the state of the British economy at the time. In that year, the government decided that sterling would join the European Exchange Rate Mechanism (ERM). This system was one of pegged exchange rates within the European Community, with the Deutschmark as the de facto anchor; Britain joined at an exchange rate of 2.95 Deutschmarks to the pound. Maintaining that then became the Bank’s monetary policy objective. But maintaining it proved impossible, at any rate over a politically acceptable time horizon. British inflation did fall as intended, but the squeeze, because of the monetary policy of Germany having to deal with an economy in a different cyclical phase from that of Britain, remained tight well beyond the time when British inflation had fallen.
Other countries in the ERM experienced similar problems, and in September 1992, Britain left the ERM. This led to the return of a domestic target for monetary policy. In October 1992, the Chancellor of the Exchequer wrote to the Treasury Committee of the House of Commons: “I believe we should set ourselves the specific aim of bringing underlying inflation in the UK, measured by the change in retail prices excluding mortgage interest payments, down to levels that match the best in Europe. To achieve this, I believe we need to aim at a rate for inflation in the long term of 2% or less.” The details involved gradually falling inflation targets until less than 2 percent was achieved. (This last aspiration, it should be noted, was soon given up.) So the target was chosen by the government, and interest-rate decisions were made by the Chancellor in the light of advice given by the Governor at their monthly meetings. This system continued until 1997, with some changes in the intervening years. These changes led, perhaps unintentionally, to more power accruing to the Bank over what should be done to achieve the target. In February 1993, the Bank started to publish an “Inflation Report,” which contained inter alia the Bank’s inflation forecasts. Also from 1993, the Bank was given a little discretion over when to make any interest-rate change—the constraint was that the change had to take place before the next Chancellor-Governor meeting. Then, in April 1994, the minutes of the Chancellor-Governor meetings started to be published. These were to be published two weeks after the meeting following the one to which the minutes related. In 1995, there were further modest changes to
the target—called a “restatement” by then-Chancellor Kenneth Clarke.19 Interest-rate decisions remained a matter for the Chancellor, and the target remained a matter entirely for the government.20 After this account of a period of what might reasonably be called turmoil, it is useful to pause and reflect on a matter that has been implicit throughout. It is not always straightforward to establish the source of an objective. It might come from the state. It appears that most, perhaps all, monetary stability objectives have arisen there. This does not, however, mean that the Bank objected to the choice, nor does it mean that the Bank had no input in the choice. Indeed, it may have been suggested by the Bank. There is simply no information on that, for many such discussions were, and no doubt still are, informal. It is, however, true that the actual decision lay with government. In contrast, concern over financial stability seems more often to have originated in the Bank and/or with outside commentators (such as Thornton, Joplin, and so on in correspondence and in numerous pamphlets) who addressed their concerns and recommendations to the Bank. Equally, where cautious behavior evolved in the financial institutions, it was then on occasion formalized by the Bank. The liquid assets ratio and the cash asset ratio that the commercial banks moved to in the late nineteenth century and used in the twentieth century are illustrations of this point. After World War II, the Bank formally agreed with them that certain ratios should be adhered to. Later, the Bank took it upon itself to introduce new arrangements in 1971. To summarize, up to almost the late 1990s, the Bank had been in charge of the financial stability objective, including how to achieve it.
The monetary stability objective, in contrast, was formally always the decision of government, although it seems likely that the Bank was involved in deciding it some of the time and always was in the attempts to achieve it.21 Until very late in the Bank’s history, no conscious thought, with one notable exception, seems to have been given to corporate governance. Objectives emerged, and were sometimes, though not always, subsequently refined or clarified by government, but the Bank’s internal management seemed to change of its own accord and for no clear reason as time went on. Indeed, the only clear example of deliberate alignment of interests occurred at the very founding of the Bank, when the continuation of its privileges depended on satisfactory performance of the task given to it by its founder, the government. But this early alignment, together with the common-law framework within which the Bank operated, explains why governance was never really thought about through most of the Bank’s history. The Bank had been established with certain “public” responsibilities, and with the freedom it had, self-interest allowed it to evolve so as to fulfill them without any further instructions or changes in governance.22
4.5.4 Conscious Change

Very soon after the election of a Labour government in May 1997, there were some fundamental and consciously decided changes. The Bank was given responsibility for
setting interest rates; to decide on rates, a monetary policy committee (MPC) was established, made up of both Bank staff and others appointed from outside the Bank. These outside members were appointed by the Chancellor but formed a minority on the committee. What had happened, then, was that responsibility for the conduct of policy had shifted entirely to the Bank, while the choice of objective remained entirely with the government. From the constitutional point of view, it was very close to the situation under the gold standard. At the same time, the government interfered substantially in the Bank’s financial stability responsibilities. Prior to 1997, the Bank had gradually assumed responsibility for financial stability. The Banking Act of 1997 gave the Bank formal responsibility for both monetary and financial stability. Thereafter, the Bank spoke of having these two core purposes. But while monetary stability was easily stated and a simple numerical target for inflation was laid down, no such definition was available for financial stability. And with the creation of the Financial Services Authority (FSA) at the same time and the formal removal of supervision from the Bank, confusion prevailed over where responsibility for financial stability lay, as well as over what the term itself meant. The absence of the financial stability task at the least complicates the achievement of monetary stability; a central bank without that task may not be as well prepared for shocks originating in the banking sector. Hence, it is not self-evident that the tasks should be given to separate agencies. The benefits and dangers of the separation have been the subject of extensive study. (See, for example, Goodhart and Schoenmaker 1995.) That in the United Kingdom it did not contribute to good governance and the achievement of both objectives is suggested by subsequent developments.
Before considering the important consequences of this attention to the matter of governance, which had been ignored essentially since the Bank’s inception, we turn next to the RBNZ, a bank initially created on the model of the Bank and one that, in turn, later became something of a model for it.
4.6 The Establishment and Development of the RBNZ

The RBNZ was set up in 1934, one of several established at the prompting of Montagu Norman and under the guidance of Otto Niemeyer, then an adviser and subsequently a director of the Bank. Not surprisingly, the RBNZ was on the same model as the Bank. It was incorporated as a legal entity with private shareholders, and it had as its primary duty “to exercise control . . . over monetary circulation in New Zealand, to the end that the economic welfare of the dominion may be promoted and maintained” (Reserve Bank of New Zealand Act 1933, s. 12). That situation did not last long. Indeed, a change that affected the Bank some 250 years after its establishment overtook the RBNZ after two years.
In 1935, New Zealand elected its first Labour government, and in 1936 the RBNZ was nationalized. At the same time, its objective was changed, and it was given a “general function . . . to give effect to . . . the monetary policy of the Government as communicated to it from time to time by the Minister of Finance” (Reserve Bank of New Zealand Act 1933, as substituted by the Reserve Bank of New Zealand Amendment Act 1936, s. 10). This act was changed again in 1950, in 1960, and in 1973. On every one of these occasions, the objectives (note the plural) were changed and added to. The final list of objectives produced by these changes contains four main paragraphs, the first of which itself contains four subheadings. In outline, monetary policy is to be as the government says, and it is to be aimed at “the maintenance and promotion of economic and social welfare in New Zealand”; the bank is to regulate monetary, credit, and foreign-exchange transactions and control interest rates; and it is to make loans to the government as the minister of finance instructs. These changes plainly fail to satisfy the crucial requirement for good governance, one that emerges even more clearly from the principal-agent framework. The objective that the governance of the organization was supposed to achieve was not clear, and, indeed, it had some mutually incompatible components. It should be no surprise that not even an approximation of price stability resulted, and given the micromanagement approach implied there, detailed interference in economic life generally took place. The outcome was deterioration in economic circumstances that was large even after allowing for an at times very unfavorable economic environment. (See Wood 1994 for more details.) It became generally accepted in New Zealand that the country was in a state of crisis, and the Labour government elected in the mid-1980s essentially reversed the economic policies of every government since 1936.
The Reserve Bank Act of 1989, apart from not restoring the RBNZ to private hands, took it back to its situation of 1934–1936, the remaining difference being primarily in how the price stability objective was specified; the objective itself was in effect the same as that in the RBNZ’s first two years. Passed in 1989 with bipartisan support, the act became effective in February 1990. There was a clear statutory objective: “The primary function of the Bank is to formulate and implement monetary policy directed to the economic objective of achieving and maintaining stability in the general level of prices” (Reserve Bank Act 1989, s. 8). That was the only macroeconomic objective, as well as the primary objective. The other objectives were related to regulatory and supervisory responsibilities. The inflation objective is made measurable (and thus its attainment capable of being judged) in a Policy Targets Agreement, which the Governor and the minister of finance have to negotiate and publish. The inflation rate has to be consistent with the Bank’s general statutory duty and is to be measured by the consumer price index (CPI) (on the grounds that the CPI measure of inflation was the most generally understood one).23 This act made the RBNZ the first central bank to have an inflation target. The RBNZ retained responsibility for banking-sector supervision and regulation, but it could choose how to do it and soon chose to do it mainly by insisting on disclosure of key facts about the banks to the general public and a signed (and publicly displayed)
declaration by each bank director that he or she knew, understood, and was happy with whatever his or her bank was doing.24 Two points clearly require exploration. Did the contract work in the sense of making inflation lower and closer to target than it would have been? And did it affect the process of policymaking? An episode speaks to the former: in 1990, policy was tightened in the run-up to an election, an action without precedent in the history of the RBNZ; and inflation, which had been rising, was contained within target. The second is answered by a quotation: “The clear target and the announced downward path for inflation have provided a structure for internal discussions and debates within the Reserve Bank about the appropriate stance of policy.” There was “a behavioural change within the policy making machine” (Nicholl 1993, 13).25 We now have two central banks, whose governance we can contrast and thus perhaps see whether and how central bank governance matters as much as governance in the private sector. That leads us to a general reflection and conclusion on central bank governance. But it is necessary first to deal a little more with the Bank.
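The force of making the inflation objective measurable, so that its attainment can be judged, is easy to see once the target is expressed as a CPI inflation band. The sketch below is a hypothetical illustration only: the index figures and the 0–2 percent band are assumptions for the example, not numbers taken from the agreement discussed above.

```python
# Year-on-year inflation from two CPI index levels, checked against a target
# band of the kind negotiated between the Governor and the minister of finance.
# All figures here are hypothetical illustrations.

def cpi_inflation(cpi_now: float, cpi_year_ago: float) -> float:
    """Annual inflation rate implied by two CPI index levels."""
    return cpi_now / cpi_year_ago - 1.0

def within_target(inflation: float, low: float, high: float) -> bool:
    """Is measured inflation inside the agreed band?"""
    return low <= inflation <= high

rate = cpi_inflation(101.5, 100.0)  # index rises from 100.0 to 101.5 in a year
print(f"inflation {rate:.1%}, inside an assumed 0-2% band: "
      f"{within_target(rate, 0.0, 0.02)}")
```

The point of such a published, numerical criterion is precisely that anyone can perform this check, which is what made the Governor’s performance judgeable in a way the earlier multi-objective mandates never allowed.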
4.7 Crisis and Postcrisis

Just as lack of clarity about the objective of the central bank led to problems in New Zealand, so lack of clarity about responsibilities led to problems in the United Kingdom. It did so when Northern Rock ran into difficulties in 2007. The Bank no longer supervised banks either formally or informally, so unless someone told it, which the FSA did not, it had no way of knowing the peculiar nature and high risk of Northern Rock’s business model. In addition, the Bank’s eye was off the financial stability ball. Inflation had been the primary concern for some time, and there had been no financial stability problems for many years. And finally, even had other banks been willing to support Northern Rock (which their behavior in markets had clearly indicated they would not be enthusiastic about), events happened so quickly that there was no time to ask, and the call had to be on the government. The Bank had to go to its owner and ask for funds, and this had to be done in an ad hoc manner and in haste.26 As the report of the Treasury Select Committee of the House of Commons made clear, there was plenty of scope for things going wrong and a serious banking run starting at many stages in the process. Indeed, some have argued that the behavior of the Bank made this much more likely (Congdon 2009). The crisis thus not only made it necessary for the Bank to call on government support. It also brought to light inadequacies in the previous set of instructions laid down by the government. It became necessary to reconsider central bank mandates.27 Nothing was changed in the United Kingdom as far as the inflation mandate was concerned. It was, however, recognized after the crisis that it was necessary to ensure that both market information and financial stability receive due weight. This can follow in part from the Bank’s mandate, but internal structure and the system in which the
Bank operates also matter. When the Bank, for example, dealt in the money markets, it did so through the discount office, so called because it dealt with the discount market, which had developed both as an intermediary between commercial banks and as an intermediary between them and the Bank. These discount houses were small and very highly geared. One precaution they took against failure was to know their customers well. The Bank, in turn, gained from them information about their customers through regular meetings between the discount houses and the discount office of the Bank. The crisis exposed weaknesses in the mandate given to the Bank, and in addition, there were defects in how the Bank (and the FSA) responded to the crisis. This inevitably required not only action from the government to deal with the crisis but also changes in the mandate. The changes have concentrated authority in the Bank. At the same time, regulation of the banking sector became (still more) detailed, with still higher capital ratios and instructions on the risk weights that should attach to various assets. One point and one implication leap out of that survey of the years after 1997. That was the first period in which the government had interfered in the Bank’s financial stability responsibilities, and it was the first time since 1866 that something had gone seriously wrong in that area of responsibility. Indeed, the changes of 1997 marked the first time there had been interference in the Bank’s governance since its foundation in 1694. There had occasionally been inquiries into the working of the monetary and financial system. In the twentieth century, there were the Macmillan, Radcliffe, and Wilson inquiries. But none of these dealt specifically with the governance of the Bank. So, for example, while Norman had been given an uncomfortable time by Keynes at the Macmillan Committee, it was not about the organization or running of the Bank.
4.8 Comparisons, Constitutions, and Conclusions

Comparison shows that certain aspects of desirable governance apply very clearly across both the private sector and the public sector. Objectives should be clearly defined, and it should be clear who is responsible for what. Further, those responsible should have the information and the tools they need. All of these points are easy to agree on, indeed obviously sensible. But they do not always require conscious decision as long as the objective is clear, along with accountability. That is a notable implication of the Bank's history. It is also notable that a conscious consideration of what went wrong, as happened in the case of the RBNZ, is capable of producing a clear and sensible governance framework. It should be remarked, though, that it took things going wrong rather badly before that consideration took place. As was observed by Lindsey Knight, deputy governor of the RBNZ in the years immediately after its reform, when he was asked to explain the thorough nature of these changes, "Maybe it is not essential to have a crisis, but it helps" (Knight 1991).
The Governance of Central Banks 121
An obvious implication for other central banks is that they must consider whether they have achievable objectives and the information necessary to achieve them. There is, for example, clear danger in separating responsibility for stability of the banking system from day-to-day contact with that system. And as far as monetary stability goes, the objective needs to be clear and defined in a way that is observable and that the central bank has the instruments to attain.28 This leaves unconsidered a rather important matter, with which we conclude. Normal corporate governance seeks to ensure that, subject of course to the law, the management of the firm pursues the long-term interests of the shareholders. Who are the shareholders of the central bank? The legal (maybe actually legalistic) answer is, in almost every case where the constitution of the bank allows the question, that the government owns the shares. But does it really own them? It owns them because it has been elected to govern; it has been given delegated authority by the citizens. It is thus closer perhaps to someone who holds a proxy at a company's annual general meeting. That person can exercise the voting rights of the owner of the shares but does not own the shares. The proxy may have been given specific instructions or just told to do what seems best. This right is usually given annually. But there is nothing to prevent its being given for, say, five years. That is exactly the situation a government is in. Is an appropriate governance framework, then, sufficient to ensure that the government, through its delegated authority, serves the long-term interests of the shareholders? It is usually accepted that low and stable inflation is an acceptable approximation to price stability, and that should be the target for a central bank.
It also appears to be widely accepted that central bank independence is sufficient to achieve that, although one can imagine circumstances where it is not necessary. Here we come again to the importance of law. Unless that independence is somehow protected, it can be tinkered with by government, and not necessarily for electoral advantage but rather just because of the pressure of events. It is argued in Capie and Wood (2015) that just this happened in the United Kingdom and some other countries in the wake of the financial crisis, and there are precedents also in the United States, where the Federal Reserve has been tinkered with (most recently, its capital raided to pay for road building), and in New Zealand before the reforming act. Can that independence somehow be protected? By law, not at all, unless the country has a constitution that somehow accords inviolable status to the central bank. Such constitutions are hard to find. The evidence is that the only reliable form of protection is societal protection, where the central bank is embedded in such a society that tinkering with its basic position is unthinkable. Examples of such a setting are considered, and shown to be effective, in Capie, Wood, and Castaneda (2016), where it is argued, and then tested against the evidence, that small open economies provide just such a setting. Not all countries can choose to be small and open. What can they do? The lesson of this chapter is that good governance, close to that set out in the United Kingdom's Cadbury Code of good governance for private-sector companies, is essential, but it is not enough. Maybe a crisis is sufficient. It did the job for New Zealand and helped
entrench Germany's concern for price stability. But a crisis is a high price to pay. More work on the boundary of law and economics is necessary before we can have hope of long-term monetary and financial stability, and it can be no more than a hope that such work will have a satisfactory outcome.
Notes

1. These noncorporate parts are sometimes analyzed in a public choice framework. That framework of analysis has, albeit rarely, been applied to central banks. Vaubel (e.g., 2003) has used this approach.
2. The chairman of the committee was Adrian Cadbury.
3. We resist the temptation to discuss these two supposedly important differences and, indeed, their meaning. An interesting discussion of the importance of gender in central bank board decision-making can be found in Masciandaro, Profeta, and Romelli 2016.
4. Both these contributions are discussed in Buchanan 1966 and in Richard Musgrave's classic 1959 text.
5. An examination of the application to normal company governance of the principal-agent tools can be found in Wood 2013. We observe, too, that the usual discussion is concerned with ensuring that agents act in the interests of shareholders. Who the shareholders of the central bank are is an interesting matter, discussed later.
6. This, too, is illustrated by R. v. Bunton. After the verdict in that case, the Theft Act of 1968 contained an elaborate definition of "theft" and "steal," which would have precluded acquittal had it been in place before R. v. Bunton. It should be remarked also that the situation described here is continually evolving. New Zealand has acquired a Supreme Court, and in the United Kingdom, the powers of the Supreme Court, which until comparatively recently did not even have that name, are changed with developments in case law.
7. This had come about through change in the price of gold relative to silver, driving silver out of use as money.
8. The fluctuations were the result of factors such as large changes in the price of grain and changes in the behavior of the cash holdings of the public and of banks other than the Bank.
9. The Bank took an interest in what the banks in general were doing to the extent that it paid heed to the quality of securities it received, directly and indirectly, for rediscounting.
That was, however, a matter of self-preservation as much as anything, for the Bank was still a private organization. For discussion of the Bank's concern with the quality of bills, see Sissoko 2015.
10. Baring, besides importing the term, used it in a new, metaphorical way. In France, it referred to the final court of appeal.
11. That the Bank acted as banker to banks makes clear why some central banks, that of New Zealand, for example, go by the title Reserve Bank. Other banks use them as bankers and, like anyone else, hold the bulk of their cash reserves with their banker.
12. Much has been written about the LoLR. The activity purportedly covers everything from supplying liquidity to the market to recapitalizing or otherwise rescuing nonbank firms. It seems to us that precise definition and careful use of the term are important. The key is in the phrasing. It is the lender when there is no other source of funds. And that points to the provider of the notes and cash on which the system runs—commonly the central bank but
not necessarily or invariably so. The liquidity should be supplied to the market as a whole and not to individual banks in difficulties.
13. We have, it will be observed, taken for granted that the maintenance of price stability is desirable. We have done so because, in combination with the gold standard, that was a function imposed on the Bank of England ab initio. A good explanation of its desirability can be found in Friedman (1960).
14. Some have suggested that this was, in fact, simply an extension, albeit a substantial one, of any bank's normal practice of accommodating its customers in times of temporary financial distress.
15. See Bank for International Settlements 2015.
16. There was, of course, some dissent from this view, most famously that of Keynes. See Keynes 1925 and Capie, Mills, and Wood 1986.
17. This is a concept developed primarily at the IMF and included only the domestic components of money supply growth. See Foot 1981 for details.
18. This was a broad measure of the money supply and included, in addition to notes, coin, and bank deposits at the Bank, mainly current and deposit accounts at the clearing banks.
19. Underlying inflation was to be kept at 2.5 percent or less, and the Chancellor suggested that this would mean that inflation was in the range of 1 to 4 percent most of the time.
20. How free the Chancellor actually was to decide on interest rates when the advice from the Governor started to be published is not at all clear. It depended in part on personalities and in part on the conditions of the time.
21. The informality of all these arrangements matches aspects of the Bank's internal affairs. The Bank had a board of directors, the court, but how much it was consulted and regarding what did not vary in a systematic manner; and the importance of the court for much of the Bank's history was greatly affected by the Governor's personality.
22.
Whether the 1844 act, which redefined the relationship between the note issue and gold, should be viewed as a change in Bank governance or a simple formalization of an existing situation is a subject for a different paper.
23. If the two do not agree on an inflation range that they can accept both as consistent with the price stability mandate and as a suitable operational target for the RBNZ, this has to be resolved in public, in Parliament. There is thus explicit provision for public disagreement, which has occasionally occurred. A notable aspect of the inflation part of the contract is what is not there. At one stage in the drafting of the Act, it was proposed that the Governor of the RBNZ's pay would depend, at least in part, on how close the inflation outcome was to the inflation target. This proposal was rejected, on the argument that under some circumstances, it would be possible for the governor to be rewarded for raising unemployment and that this might destabilize the framework. Contracts for central bank governors, including length of tenure and whether the appointment should be renewable and, if so, how often, are now occasionally discussed. In England in the nineteenth century, the practice developed of a two-year term for governor. However, Norman in the early twentieth century stayed for twenty-four years. Thereafter, without formality and with some flexibility, a five-year term became the norm.
24. It is sometimes claimed that this was only possible because a large part of the banking system was foreign owned. This is not persuasive, as all foreign-owned banks in New Zealand had to take the form of separately capitalized subsidiaries, sufficiently free-standing as to be able to withstand the collapse of their parent. It certainly concentrated the minds of bank directors, as they were now open to lawsuits should their banks fail.
25. New Zealand's Fiscal Responsibility Act of 1994 was consciously seen as a complement to the earlier Reserve Bank Act. To quote Ruth Richardson, the minister of finance who steered the act through Parliament, "The two monetary and fiscal statutes together constitute a legislative framework that promotes sound, credible policy and that acts as a bulwark against policy stances that would compromise the credibility of the policy setting."
26. Why it was decided to support Northern Rock rather than follow the nineteenth-century course and let it fail, and then provide liquidity to the market as needed to prevent a contagious run, is examined in detail in Milne and Wood 2009.
27. The implications for the conduct of monetary policy with regard to inflation are obvious. Central banks should not rely exclusively or primarily on measures of the output gap and of inflation expectations in making their forecasts and the subsequent policy decisions. How to get them to do so is a different matter.
28. This is actually a generalized version of Milton Friedman's (1962) criticism of inflation targeting, which he observed was not satisfactory because it did not satisfy these conditions.
References

AIC. 2015. Code of Corporate Governance: A Framework of Best Practice for Member Companies. London: Association of Investment Companies.
Anson, Mike, and Forrest Capie. "Ownership and Governance of the Bank of England, 1694–1979." Manuscript.
Arrow, K. 1963. "Research in Management Controls: A Critical Synthesis." In Management Controls: New Directions in Basic Research, edited by C. Bonini, R. Jaedicke, and H. Wagner, 317–327. New York: McGraw-Hill.
Bagehot, Walter. 1873. Lombard Street. London: Henry King.
Bank for International Settlements. 2015. Annual Report, Introduction, 3–26. Basle.
Bankers Magazine. 1866. Editorial, June.
Baring, Francis. 1797. Observations on the Establishment of the Bank of England and on the Paper Circulation of the Country. London: Minerva.
Billings, Mark, and Forrest Capie. 2007. "Capital in British Banking, 1920–1970." Business History 49, no. 2: 139–162.
Borda, J. C. de. 1781. Mémoire sur les élections au scrutin. Histoire de l'Académie Royale des Sciences. Paris: Imprimerie Royale.
Bordo, Michael, and Barry Eichengreen. 1992. A Retrospective on the Bretton Woods System: Lessons for International Monetary Reform. Chicago: University of Chicago Press.
Bowen, H. 1943. "The Interpretation of Voting in the Allocation of Economic Resources." Quarterly Journal of Economics 58: 27–48.
Buchanan, James M. 1966. Fiscal Theory and Political Economy. Chapel Hill: University of North Carolina Press.
Cadbury Committee. 1992. Report by a Committee on the Financial Aspects of Corporate Governance. Kingston upon Thames: Gee.
Capie, Forrest. 2002. "The Emergence of the Bank of England as a Mature Central Bank." In The Political Economy of British Historical Experience, 1688–1914, edited by Donald Winch and Patrick O'Brien, 295–315. Oxford: Oxford University Press.
Capie, Forrest. 2010. The Bank of England: 1950s to 1979. Cambridge and New York: Cambridge University Press.
Capie, F. H., T. C. Mills, and G. E. Wood. 1986. "What Happened in 1931?" In Financial Crises and the World Banking System, edited by Forrest Capie and Geoffrey Wood, 13–47. Basingstoke and London: Macmillan.
Capie, Forrest, and Geoffrey Wood. 2011. "Financial Crises from 1803 to 2009: A Crescendo of Moral Hazard." In The Financial Crisis and the Regulation of Finance, edited by D. Green, E. Pentecost, and T. Weyman-Jones, 134–154. Cheltenham: Edward Elgar.
Capie, Forrest, and Geoffrey Wood. 2015. "Central Bank Independence: Can It Survive a Crisis?" In Current Federal Reserve Policy under the Lens of History: Essays to Commemorate the Federal Reserve's Centennial, edited by Owen F. Humpage, 126–150. New York: Cambridge University Press.
Capie, Forrest, Geoffrey Wood, and Juan Castaneda. 2016. "Central Bank Independence in Small Open Economies." In Central Banks at a Crossroads: What Can We Learn from History? edited by Michael Bordo, Oyvind Eitrheim, Marc Flandreau, and Jan Qvigstad, 195–238. Cambridge and London: Cambridge University Press.
Carse, Stephen, John Williamson, and Geoffrey Wood. 1980. The Financing Procedures of British Foreign Trade. Cambridge and New York: Cambridge University Press.
Clapham, Sir John. 1944. The Bank of England: A History 1694–1814. Cambridge: Cambridge University Press.
Congdon, Tim. 2009. "Banking Regulation and the Lender-of-Last-Resort Role of the Central Bank." In Verdict on the Crash: Causes and Policy Implications, edited by P. Booth, 95–100. London: IEA.
Congdon, Tim. 2015. "How Mervyn King Got Northern Rock Wrong." Standpoint, November, 39–41.
De Viti de Marco, Antonio. 1934. Principii di economia finanziaria. Turin: Giulio Einaudi.
Dupuit, J. 1844. "De la mesure de l'utilité des travaux publics." In Annales des ponts et chaussées. Paris: Scientifiques et Medicales Elsevier.
Easterly, William. 2015. "Foreign Aid versus Freedom." IEA Hayek Memorial Lecture, London.
Financial Reporting Council. 2014. The UK Corporate Governance Code. London: Financial Reporting Council.
Foot, Michael D. K. 1981. "Monetary Targets: Their Nature and Record in the Major Economies." In Monetary Targets, edited by Brian Griffiths and Geoffrey Wood, 13–47. London and Basingstoke: Macmillan.
Friedman, Milton. 1962. "Should There Be an Independent Monetary Authority?" In In Search of a Monetary Constitution, edited by Leland B. Yeager, 173–195. Cambridge, Mass.: Harvard University Press.
Gibbard, Alan. 1973. "The Manipulation of Voting Schemes: A General Result." Econometrica 41, no. 4: 587–610.
Goodhart, Charles, and Dirk Schoenmaker. 1995. "Should the Functions of Monetary Policy and Banking Supervision Be Separated?" Oxford Economic Papers, new series, 47, no. 4: 539–560.
Gregory, T. E. 1929. Selected Statistics, Documents, and Reports relating to British Banking 1832–1928. Oxford: Oxford University Press.
Griffiths, Brian, and Geoffrey Wood, eds. 1984. Monetarism in the United Kingdom. London: Macmillan.
Grossman, S., and O. Hart. 1983. "An Analysis of the Principal-Agent Problem." Econometrica 51: 7–45.
Horsefield, J. K. 1953. "The Bank and Its Treasure." In Papers in English Monetary History, edited by T. S. Ashton and R. S. Sayers, 109–125. Oxford: Clarendon Press.
Hume, D. 1740. A Treatise of Human Nature. Oxford: Oxford University Press.
Joplin, Thomas. 1832. An Analysis and History of the Currency Question: Together with an Account of the Origin and Growth of Joint Stock Banking in England, Comprised in the Writer's Connexion with These Subjects. Toronto: University of Toronto Libraries.
Keynes, J. M. 1925. The Economic Consequences of Mr. Churchill. London: Hogarth Press.
Knight, R. L. 1991. "Central Bank Independence in New Zealand." Mimeo.
Lange, O. 1967. "The Computer and the Market." In Socialism, Capitalism, and Economic Growth: Essays Presented to M. Dobb, edited by C. H. Feinstein, 76–114. Cambridge: Cambridge University Press.
Lawson, Nigel. 1992. The View from Number Eleven: Memoirs of a Tory Radical. London: Bantam.
Loeb, M., and W. Magat. 1979. "A Decentralised Method of Utility Regulation." Journal of Law and Economics 22: 399–404.
Masciandaro, D., P. Profeta, and D. Romelli. 2016. "Gender and Monetary Policymaking: Trends and Drivers." Baffi CAREFIN Center research paper 2015-12.
Milne, Alistair, and Geoffrey Wood. 2009. "'Oh What a Fall Was There': Northern Rock in 2007." In Financial Institutions and Markets: 2007–2008—The Year of Crisis, edited by Robert R. Bliss and George G. Kaufman, 3–25. New York: Palgrave-Macmillan.
Musgrave, Richard. 1959. The Theory of Public Finance. New York: McGraw-Hill.
Nicholl, Peter. 1993. "New Zealand's Monetary Policy Experiment." University of Western Ontario Department of Economics Political Economy Research Group paper 31.
Pantaleoni, Maffeo. 1882. Teoria della traslazione dei tributi. Bari: Gios, Laterza.
Pauly, M. V. 1974. "Over Insurance and Public Provision of Insurance: The Roles of Moral Hazard and Adverse Selection." Quarterly Journal of Economics 88: 44–62.
Pressnell, Leslie. 1986.
"The Avoidance of Catastrophe: Two 19th Century Banking Crises: Comment." In Financial Crises and the World Banking System, edited by Forrest Capie and Geoffrey Wood, 74–76. Basingstoke and London: Macmillan.
Reserve Bank of New Zealand Act. 1933. Wellington: Government Printer.
Reserve Bank of New Zealand Act. 1989. Wellington: Government Printer.
Ross, S. 1973. "The Economic Theory of Agency: The Principal's Problem." American Economic Review 63, no. 2: 134–139.
Sissoko, Carolyn. 2015. "Money Is Debt." Mimeo.
Smith, A. 1776. The Wealth of Nations. New York: Modern Library.
Steil, Benn. 2013. The Battle of Bretton Woods: John Maynard Keynes, Harry Dexter White, and the Making of a New World Order. Princeton: Princeton University Press.
Thornton, Henry. 1802. An Enquiry into the Nature and Effects of the Paper Credit of Great Britain. Reprinted in 1978, with an introduction by F. A. Hayek. Fairfield: Augustus Kelley.
Vaubel, Roland. 2003. "The Future of the Euro: A Public Choice Perspective." In Monetary Unions: Theory, History, Public Choice, edited by Forrest Capie and Geoffrey Wood, 146–182. London and New York: Routledge.
Vickrey, W. 1960. "Utility, Strategy and Social Decision Rules." Quarterly Journal of Economics 74: 507–535.
Wicksell, K. 1958 [1896]. Finanztheoretische Untersuchungen. Jena: Gustav Fischer. English translation: A New Principle of Just Taxation, in Classics in the Theory of Public Finance, edited by R. Musgrave and A. T. Peacock. New York: St. Martin's Press.
Wood, Geoffrey. 1994. "A Pioneer Bank in a Pioneers' Country." Central Banking 5, no. 1: 59–77.
Wood, Geoffrey. 2013. "Efficiency, Stability, and Integrity in the Financial Sector: The Role of Governance and Regulation." In Reforming the Governance of the Financial Sector, edited by David G. Mayes and Geoffrey Wood, 166–182. Abingdon and New York: Routledge.
Part II
CENTRAL BANK FINANCING, BALANCE-SHEET MANAGEMENT, AND STRATEGY
Chapter 5
Can the Central Bank Alleviate Fiscal Burdens?
Ricardo Reis
5.1 Introduction

Crises spur the imagination of academics. The US financial crisis followed by the European crisis led to original proposals on how fiscal stimulus can be delivered, how safe assets can be created in a monetary union without joint liabilities, and how macroprudential regulation can overcome problems in the financial sector, among many others. At the same time, the balance sheets of the four major central banks (European Central Bank, Federal Reserve, Bank of England, and Bank of Japan) look quite different from how they did just ten years ago. As figure 5.1 shows, these central banks have increased the size of their balance sheets through direct purchases of government securities (European Central Bank), issued liabilities in the form of bank reserves that pay market interest rates (Federal Reserve), raised the maturity of bond holdings through special purpose vehicles (Bank of England), and bought large amounts of private securities (Bank of Japan). It is therefore not a surprise that some of the most daring ideas coming from research on macroeconomic policy involve the central bank in some way. Common to many discussions surrounding the central bank is its supposed ability to alleviate the fiscal burden faced by a country. On both sides of the Atlantic, public debt is at its highest level since World War II; on the European side, fiscal crises in Greece, Ireland, and Portugal have already left their marks. With its mystical ability to print money and
[Figure 5.1: Balance Sheets of Four Major Central Banks, 2005–2015. Four panels (Federal Reserve System, Bank of Japan, Bank of England, and Eurosystem) plot assets and liabilities, with vertical axes in percent of domestic GDP. Asset categories include gold and foreign assets, short-term and long-term government bonds (and, depending on the central bank, ABS, short-term repos, MRO and LTRO lending, and securities held), and other assets; liability categories include currency, bank reserves, other liabilities, capital, and, for the Eurosystem, a revaluation account. Reproduced from Reis 2016.]
its frequent purchases of government bonds, it is tempting to look at the central bank as a source of solace and respite. This perception of the central bank as a relevant fiscal agent takes many forms and surfaces in common statements. Some say that a country with its own central bank can never have a sovereign debt crisis because it can print money to pay any bills. Others argue that if a country raised its inflation target, it could wipe out the real value of its public debt and alleviate its fiscal burden. Others still make the point that central banks can never go legally insolvent, so when fiscal authorities are on the verge of sovereign insolvency, it is time for the central bank to step up. Finally, in the context of the Eurosystem, some proposed that the European Central Bank (ECB) could buy and then forgive the debt of the periphery countries, becoming a vehicle for fiscal transfers within the currency union. Each of these suggestions involves the central bank as a fiscal actor. Of course, any student of monetary economics understands that it is the interaction of monetary and fiscal policy that determines equilibrium outcomes for inflation and real activity. There is a vast literature discussing the fiscal implications of central bank actions and the required coordination between monetary and fiscal authorities. However, all the suggestions in the previous paragraph were more specific than this general interaction. They involved the central bank creating real resources and somehow transferring them to some other authorities. They involved the central bank using its balance sheet to have a direct effect on the balance sheet of the fiscal authority. While a survey of central banks one decade ago could barely mention these fiscal roles for the central bank (Blinder, 2004), they play a central role in more recent surveys (Reis, 2013) and yet still leave much to be discussed. This is the topic of this chapter.
The closest papers to it in motivation are Stella and Lonnberg (2008) and Archer and Moser-Boehm (2013), but the approach and goal here are quite different. This chapter works through the resource constraint facing the central bank with the goal of understanding whether the central bank can generate fiscal revenues and in doing so relax (or tighten) the resource constraint facing the fiscal authorities. In spite of this narrow focus, there is still significant ground to cover. Section 5.2 starts with the fiscal intertemporal resource constraint so that we can precisely define what the fiscal burden is. This section catalogs five different possible channels through which a central bank could feasibly lower this burden. Each is analyzed in turn in the succeeding sections. I also provide a minimal definition of a central bank, setting the basics for each succeeding section to progressively expand the actions taken by the central bank, and consider new channels for real transfers to the fiscal authorities. Throughout, these are spelled out using resource constraints. Section 5.3 starts by considering the central bank's ability to take deposits and make loans to banks. This gives it control over nominal interest rates and inflation and also gives rise to the first channel through which it can lower the fiscal burden: inflating away
the debt. While the idea is simple, this section discusses the difficulties with estimating the size of this channel, with implementing it effectively, and with making it quantitatively relevant. Section 5.4 considers the central bank's provision of liquidity services to the economy, for which it collects a payment commonly known as seigniorage. This source of relaxation of fiscal burdens is common during hyperinflations. When inflation is in the single digits, though, the associated revenues are relatively small, and they are not too sensitive to inflation. Next, I introduce voluntary interest-paying reserves that banks can hold at the central bank. These central bank liabilities are quite special assets in the economy, and they play a central role in the remainder of this chapter. Section 5.5 starts by describing the unique properties of reserves and by answering one question: should these central bank liabilities be counted as part of the public debt? Taking the other side, this section studies to what extent issuing reserves provides a third channel for the alleviation of fiscal burdens. Reserves lead to a definition of the meaning of central bank insolvency, together with its tight link to central bank independence (CBI). They point to the fourth channel that this chapter studies: the sources of income risk facing the central bank and the possibility that these lead to either sources of revenue to the government or further fiscal burdens in the form of central bank insolvency. Sections 5.6 to 5.9 study four sources of income risk, and four associated dangers to solvency. Section 5.6 focuses on the risk from holding foreign currency. This leads to a discussion of the role of central bank income and net worth and clarifies why across the world there are special rules involving central bank holdings of foreign reserves. Section 5.7 considers purchases of short-term private bonds by the central bank and their associated default risk.
The rules that determine the central bank's dividend to the fiscal authorities and the ability to provision play a key role in determining whether the fiscal authorities will support the central bank. Sections 5.8 and 5.9 turn to government bonds. First, I focus on sovereign default and on the transfers between different economic agents that it entails. Second, I discuss quantitative easing (QE), the purchase of long-term government bonds in exchange for bank reserves. I show that QE does not lead on average to fiscal transfers, but it can put to the test the fiscal capacity of the central bank because of duration risk. In section 5.10, the analysis switches from the resources that the central bank can distribute to the redistributions across fiscal entities that it can trigger. Within a currency union, the central bank can be used to relax the fiscal constraints of some regions, while increasing the fiscal burden of others. This is the fifth and final channel considered in this chapter. Using the Eurosystem as an example, I describe the many vehicles that could be used to undertake these redistributions and how the Eurosystem's institutions prevent them. Finally, section 5.11 concludes by revisiting some commonly heard statements about what central banks can achieve fiscally. I show that while they are partly correct, they are mostly misleading. This provides a final application of why the lessons in this chapter can be useful in economic policy discussions.
5.2 The Fiscal Burden and the Central Bank

In discussing fiscal resources and central banks, it is important to stay away from two common fallacies. The first of these is money illusion. Printing banknotes may generate nominal resources, but because this invariably comes with higher inflation, it creates fewer real resources. People are only willing to lend to the government expecting a real return, and almost all government spending programs require providing or paying for real services or goods. Fiscal burdens are real, and it is an illusion to think that "printing money" makes them magically disappear. To safeguard against this illusion, I will write constraints in real terms; whenever a variable is in nominal units, it will be written using capitals, to make this clear. The second fallacy is the free lunch. If central banks could magically create something out of nothing without bounds, then the whole of society could use monetary policy to solve any scarcity problem. Free lunches don't usually exist, at least not at a macroeconomic scale where there are finite resources. Even if central banks and monetary policy work toward raising welfare, it is dangerous and almost always wrong to believe that they can create real resources out of nothing. When one identifies what seems like a free lunch, it is crucial to make sure that all constraints have been spelled out. I will do so by insisting on looking at resource constraints throughout. At the same time, the analysis will be very general by discussing what central banks can do given those constraints, rather than what they ought to do. There will be many lessons, even though the objectives of the central bank, or its actual choices, are never stated. The goal is to figure out what is possible, leaving for later a discussion of what is desirable.
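The money-illusion point can be made concrete with a quantity-theoretic sketch. The assumption that the price level moves in proportion to the money stock is mine, for illustration only, and the numbers are hypothetical; the point is that nominal revenue from printing is unbounded while the real revenue it buys is not.

```python
# Money illusion, one-shot sketch (hypothetical numbers).
# Illustrative assumption: the price level responds in proportion to the
# money stock, p_new = p * (M + dM) / M.

def real_revenue(M, p, dM):
    """Real resources raised by printing dM, valued at the post-printing price level."""
    p_new = p * (M + dM) / M
    return dM / p_new

M, p = 1000.0, 1.0                    # initial money stock and price level
for dM in [100.0, 1000.0, 1e6]:
    print(f"print {dM:>9.0f} -> real revenue {real_revenue(M, p, dM):6.1f}")

# Nominal revenue dM grows without bound, but real revenue dM / p_new
# is capped by initial real balances M / p = 1000: no free lunch.
```

Under this proportional-price assumption, printing a million units of currency yields barely more real revenue than printing a thousand, because the price level rises almost in proportion.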
5.2.1 The Government's Fiscal Burden

Time is discrete, and I will take the perspective of an arbitrary date t. The fiscal burden of the government is equal to the commitments that it has to honor to its creditors:
$$\Phi_t \equiv \sum_{j=0}^{\infty} \frac{Q_t^j B_t^j}{p_t}. \qquad (1)$$
On the right-hand side are the nominal promises outstanding at date t for a payment in j periods' time: B_t^j. Note that this expression assumes that all government bonds have a zero coupon—a weak assumption, since if they do not, they can be rewritten in their zero-coupon equivalent form. Bonds have a price today Q_t^j, where Q_t^0 = δ_t is the repayment rate on the bonds, such that δ_t = 1 if there is no sovereign default. Since we care about the real fiscal burden, we divide by the price level p_t. Governments also issue real debt, and this could be included, but it would not change the analysis in any interesting way.

The reason this is a burden is that people would not be willing to lend to a government that never intends to repay. The government cannot run a Ponzi scheme on its bonds, and this constraint can be written as a resource constraint facing the fiscal authorities:
$$\Phi_t \leq \mathbb{E}_t \sum_{j=0}^{\infty} m_{t,t+j}\left(f_{t+j} + d_{t+j}\right). \qquad (2)$$
On the right-hand side are three new variables, m, f, and d. Starting with the third, d_t are real transfers from the central bank to the fiscal authority. We will discuss where they come from throughout the chapter, but for now keep in mind that they could well be negative. In that case, the fiscal authority might be said to be recapitalizing the central bank.

Moving on to f_t, these are the fiscal surpluses, the difference between revenues and spending. One might argue that some forms of committed spending, like social security payments, should be moved to the left-hand side so they are included in the fiscal burden. This could make a significant difference for quantitative assessments, but it is hard to gauge how easy it is to revert those commitments and what fraction of them is nominal versus real. I will treat them as exogenous and real, and as such they can be included in f_t.

Finally, m_{t,t+j} is the stochastic discount factor that converts future real resources into present-day values. An important assumption is that there are no arbitrage opportunities facing the government, including the central bank. If they existed, then it would be hard to escape the free-lunch fallacy: why wouldn't the government exploit them without end and use the free funds to provide valuable government services? There may well be many financial-market imperfections, together with arbitrage opportunities across private individual securities that investors can take advantage of. The assumption here is instead that in the large and liquid market for government liabilities of different maturities, there are no such opportunities that the government can systematically exploit in a quantitatively significant way. Mathematically, this translates into assuming that the following condition holds:
$$\mathbb{E}_t\left[\frac{m_{t,t+j}\,\delta_{t+j}\,p_t}{Q_t^j\,p_{t+j}}\right] = 1, \qquad (3)$$
so that the expectation of the product of the stochastic discount factor with the return on any government security is always equal to 1.

It is important to note, because it is often forgotten, that the price of government bonds is driven both by default and by future inflation. If the bond is expected to default and pay little (a small δ_{t+j}) or if inflation is expected to be high until the bond comes due (a high p_{t+j}/p_t), both of these depress the price of the bond in the same way or, equivalently, raise the yields on the public debt. From the perspective of the bondholder, whether the country uses inflation to prevent default or is stuck with low inflation and is forced to default, the payoff is reduced. Using this condition, we can write the fiscal burden in an alternative, useful way:
$$\Phi_t = \mathbb{E}_t \sum_{j=0}^{\infty} m_{t,t+j}\,\frac{\delta_{t+j}\,B_t^j}{p_{t+j}}. \qquad (4)$$
The fiscal burden is the expected present discounted value of the payments promised by the government, taking into account the possibility of sovereign default.
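To see how the no-arbitrage condition in equation 3 ties the market-value form of the fiscal burden in equation 1 to the payoff form in equation 4, consider a minimal two-state numerical sketch. All numbers here are invented for illustration, and Python is used only as a calculator:

```python
# Two-state toy: price a one-period nominal bond by no-arbitrage (eq. 3),
# then check that the market-value burden (eq. 1) equals the expected
# discounted payoff (eq. 4). All numbers are hypothetical.

probs  = [0.5, 0.5]     # physical probabilities of tomorrow's two states
m      = [0.99, 0.93]   # stochastic discount factor m_{t,t+1} by state
delta  = [1.0, 0.6]     # repayment rate: partial default in state 2
p_next = [1.02, 1.05]   # price level tomorrow by state
p_t    = 1.0            # price level today
B      = 100.0          # nominal face value due next period

# Eq. (3) rearranged: Q_t^1 = E_t[m_{t,t+1} * delta_{t+1} * p_t / p_{t+1}]
Q = sum(pr * sdf * d * p_t / pn
        for pr, sdf, d, pn in zip(probs, m, delta, p_next))

burden_mkt = Q * B / p_t  # eq. (1): market value of the promise
burden_pay = sum(pr * sdf * d * B / pn
                 for pr, sdf, d, pn in zip(probs, m, delta, p_next))  # eq. (4)

assert abs(burden_mkt - burden_pay) < 1e-12
```

Both default risk (δ < 1 in the bad state) and expected inflation depress the bond price Q, and hence the burden, in exactly the symmetric way the text describes.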
5.2.2 What Is a Central Bank?

Central banks have a long and rich history, and they perform many functions in modern advanced economies (Reis, 2013). Each section of this chapter will gradually expand the policies controlled by the central bank so that by the end, we end up with a satisfactory model caricature of its real-life counterpart. To start, consider the minimal version of what a central bank is.

All central banks in advanced countries perform a crucial task, and some (the Federal Reserve) were created from the start to satisfy this function. Banks issue checks and debit cards that are used to make payments by their customers to other banks' customers. This is only possible because periodically the banks that end up as net creditors can settle their claims on the other banks that end up as net debtors. Making these settlements requires a clearinghouse that the banks can join, and some form of payment within the clearinghouse, in the sense of a claim on the house itself that can be used to settle the interbank claims. In the nineteenth-century United States, many private clearinghouses tried to perform this role, but most failed because of the double moral hazard problem that each bank wants to accumulate a large debt to the house and then default, and the house managers want to secretly issue the house's means of payment to enrich themselves. The central bank is a public institution that has a strong and restrictive legal framework governing its operations and broad powers to regulate, supervise, and close down the banks that take deposits and issue these means of payment. This allows the central bank to ensure that banks almost always satisfy their claims to other banks.

A central bank can then be defined as the sole institution in a country with the power to issue an interbank claim, called bank reserves at the central bank, that is used to settle
interbank claims. Since the central bank is the monopoly issuer of these reserves, it can freely choose what interest to pay on them. Insofar as the aggregate balance of these reserves is zero, this comes with no fiscal transfer on aggregate from the banking sector to the government sector. But the central bank is special among government agencies in being able to issue this type of nominal government liability independently of the fiscal authority. Moreover, this liability is quite special because it is the unit of account in the economy: its units define what the price level p_t is. From another perspective, the central bank can pay the interest on reserves with more reserves, and this fully satisfies its commitment to the banks, since reserves are the unit for nominal transactions in the economy. Its abilities to issue reserves, to choose their interest rate, and in doing so to affect their real value, which is by definition the inverse of the price level, are the minimal powers of a central bank.
5.2.3 The Channels from the Central Bank to the Fiscal Burden

Combining equations 2 and 4 gives the determinants of the fiscal burden in terms of an inequality:
$$\Phi_t = \mathbb{E}_t \sum_{j=0}^{\infty} m_{t,t+j}\,\frac{\delta_{t+j}\,B_t^j}{p_{t+j}} \;\leq\; \mathbb{E}_t \sum_{j=0}^{\infty} m_{t,t+j}\left(f_{t+j} + d_{t+j}\right). \qquad (5)$$
At this very general level, we can see the five channels through which the central bank can affect the fiscal burden.

First, the central bank can affect the real value of reserves and in doing so exert significant control over the price level p_{t+j}. For a fixed level of nominal commitments by the fiscal authority, higher prices lower the real burden of honoring those commitments. In short, the central bank can try to inflate away the debt. Section 5.3 is devoted to this channel.

Second, the central bank makes real transfers to the government d_{t+j}. Any savings that the central bank achieves in its operations is an extra source of revenue for the fiscal authority. More important, if in its activities the central bank can generate revenue to disburse that is not fully offset by lowering the revenues of the fiscal authority, so that f_t + d_t rises, this will lower the fiscal burden. The unique revenues of the central bank come from selling banknotes, which society finds useful for undertaking payments; these are studied in section 5.4.

Third, some of the nominal commitments of the government may have their source in the central bank. That is, Φ_t may include the liabilities issued by the central bank, reserves, or perhaps even banknotes. Section 5.5 spells out the resource constraint of the central bank and connects it to the fiscal burden to understand what is part of Φ_t and what is not.

The fourth channel through which central banks affect the fiscal burden also works through d_{t+j} but in the opposite direction. If the central bank takes on income risk in
its asset holdings, then it may incur losses which must be met by the fiscal authority through a negative d_{t+j}. In the extreme, if the central bank is insolvent, then a large flow of funds may be necessary to recapitalize it. Sections 5.6 to 5.9 describe the different sources of income for central banks as a result of holding different assets and how these affect the flow of dividends to the fiscal authority and the solvency of the central bank.

The fifth and final channel studied in this chapter is across fiscal authorities. For a given total fiscal burden for an economic union, the central bank may be able to redistribute it across regions. Section 5.10 discusses how the central bank can perform these redistributions using all of the previous channels.

There are three further, indirect channels through which the central bank can lighten the fiscal burden, which this chapter will not cover. They occur through three equilibrium variables that the central bank is far from precisely controlling but which it probably somewhat affects: m_{t,t+j}, f_{t+j}, and δ_{t+j}. If there is monetary nonneutrality, then the central bank's actions will affect the real interest rate. If, moreover, the fiscal authority adjusts its plans in response to the central bank's actions, then fiscal revenues are also affected. Finally, if private investors perceive that the attractiveness of sovereign default depends on inflation and on the size of reserves, then the central bank will also have some effect on the recovery rates on public debt. Each of these effects may be important. But they really pertain to discussions of the Phillips curve, of monetary-fiscal interactions, and of equilibrium sovereign default, respectively.
These are much-studied topics that have been well covered in many other places (e.g., Mankiw and Reis, 2010; Leeper and Leith, 2016; Aguiar and Amador, 2014) and which would require specifying the actions of many more agents than just the central bank. They are left aside in this chapter.
5.3 Inflating the Debt

The central bank can choose the nominal interest rate that remunerates bank reserves. Because there is no counterparty nominal risk in lending to the central bank—which will always repay and can always do so—this interest rate should put a floor, or lower bound, on other nominal interest rates in the economy. At the same time, in its role as the clearinghouse, the central bank could also announce that it will make loans to banks (negative individual reserves) at an announced interest rate. This, in turn, puts a ceiling on interest rates, since, being able to borrow at any time from the central bank, no bank would pay more to borrow from anyone else. Both floor and ceiling combined imply that the central bank is able to pin down, by the sheer force of its announcements, the market nominal interest rate on one-period investments, which I denote by i_t.

Outside the fictional model description in the previous paragraph, in reality, all advanced-country central banks are able to control the overnight nominal interest rate in their economies. They may not do so through the simple borrowing and lending of reserves as in the previous paragraph but via direct purchases or repurchase agreements. Moreover, the institutional details of interbank markets in each country may lead to
deviations from the floor and ceiling described above. Further, the central bank may allow for a gap between its deposit and lending rates, or what is called a "corridor system," while using the quantity of reserves outstanding to pin down a desired rate. In spite of all this diversity, decades of experience have confirmed that, through different implementation methods, the central bank has almost perfect control of the overnight safe nominal interest rate in advanced financial systems.
5.3.1 The Central Bank's Control over Inflation

The no-arbitrage condition applied to reserves implies that:
$$\mathbb{E}_t\left[m_{t,t+1}\,(1 + i_t)\,\frac{p_t}{p_{t+1}}\right] = 1. \qquad (6)$$
Because the central bank chooses i_t and, under the classical dichotomy, m_{t,t+1} is not affected by these choices, this Fisher relation provides an equation with which to solve for inflation. At the same time, it is clear that this is only one equation for more than one unknown; one has to pin down both the initial price level p_t and the future price level p_{t+1} in each of the many possible future states of the world. Therefore, simply choosing one exogenous interest rate will leave the price level indeterminate.

These problems are relatively well understood, and the topic of determining the price level has produced a large literature as well as recent surveys (Reis, 2015b), filled with ways in which the central bank can pick the price level to control inflation. The most popular strategy is perhaps the Taylor (1993) rule, while the simplest and most robust one is perhaps the payment-on-reserves rule of Hall and Reis (2016). In the latter case, the central bank makes its payment of interest at date t + 1 indexed to the price level, so that it promises a real payment x_t to the banks holding the reserves. In that case, since 1 + i_t = (1 + x_t) p_{t+1}, plugging this into the no-arbitrage condition and rearranging, one gets a unique global solution for the price level:
$$p_t = \frac{1}{(1 + x_t)\,\mathbb{E}_t(m_{t,t+1})}, \qquad (7)$$
as a function of the policy choice x_t and the stochastic discount factor m_{t,t+1}. Whether through this route or some other, and with larger or smaller errors coming from fluctuations in the stochastic discount factor, the central bank can control inflation. In picking a path for {p_{t+j}}_{j=0}^∞, it can then affect the real value of nominal bonds and so the fiscal burden of the government.

While this control of the price level is achieved via the power to choose the remuneration of reserves, it does not require a large central bank balance sheet or risks to the dividends it pays. Given its total issuance of reserves and credits on aggregate to the whole banking sector, v_t, the law of motion for reserves net of credit by the central bank is
$$v_t = (1 + r_t)\,v_{t-1} + d_t, \qquad (8)$$
where 1 + r_t = (1 + i_{t-1}) p_{t-1}/p_t is the ex post real return paid on the reserves. The central bank can issue more reserves to pay for the transfers it sends to the fiscal authority and for its past commitments to remunerate reserves.

For the control of the price level, it is not necessary to have net reserves outstanding; arbitrage suffices. The size of the balance sheet of the central bank has no direct link to inflation (Reis, 2016). The central bank can choose v_t = 0 at all dates, and still the price level will be under control simply through the promised interest rate on reserves that could be issued on demand. Under this scenario, dividends to the fiscal authority are always zero, and the central bank is always solvent. Its effect on the fiscal burden works solely through inflation.
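The comparative static in equation 7 can be checked with a one-line numerical sketch. The discount factor below is an assumed value, not an estimate:

```python
# Numerical sketch of the payment-on-reserves rule in eq. (7), with an
# assumed one-period discount factor; all numbers are illustrative.

def price_level(x_t, Em):
    """p_t = 1 / ((1 + x_t) * E_t[m_{t,t+1}])."""
    return 1.0 / ((1.0 + x_t) * Em)

Em = 0.98                      # assumed E_t[m_{t,t+1}]

p_base   = price_level(0.00, Em)
p_higher = price_level(0.02, Em)

# Promising a higher real payment on reserves lowers today's price level.
assert p_higher < p_base
```

No balance-sheet quantity appears anywhere in the calculation, which is the point of the passage: the announced remuneration alone pins down p_t.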
5.3.2 Surprises and Persistence

How much the central bank can lower the fiscal burden depends not just on the amount but also on the type of inflation that it engineers and how it interacts with the maturity of the outstanding debt. To see this clearly, consider two special and extreme cases. First, imagine that there is a single J-period government bond outstanding that always repays its holders in full. In that case:
$$\Phi_t = B_t^J\,\mathbb{E}_t\!\left[\frac{m_{t,t+J}}{p_{t+J}}\right]. \qquad (9)$$
Imagine also that because of delays in setting policy or in its transmission to the real economy, the central bank's interest-rate policy only has an effect on inflation in T periods' time. In that case, if T ≤ J, then the central bank can still affect p_{t+J} and lower the fiscal burden by as much as it wants to. But if T > J, then the central bank can set inflation to whatever it wants, and yet this will be neutral with respect to the fiscal burden. The more general lesson is that only if the central bank engineers a surprise increase in the price level before most of the debt is due will it be able to inflate away a significant amount of it.

Hilscher, Raviv, and Reis (2014) show that this extreme case is not far-fetched. They construct the distribution of privately held debt in the United States for 2012 and find that its duration has fallen dramatically relative to just a decade before. The average duration is only 3.7 years. In turn, using market expectations of inflation, they also find that the probability that the price level will jump significantly in the space of just one or two years is quite low. Inflation is inertial, and the Federal Reserve would find it difficult to suddenly and unexpectedly shift policy by enough to change the price level by much in only a few years. As a result, the T > J case above is not so far off as an approximation to the US situation in 2012. Hilscher, Raviv, and Reis (2014) conclude that the probability that the United States could inflate away even as little as 5 percent of GDP of its debt is well below 1 percent.
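The T-versus-J logic can be made concrete with a small sketch. The maturity profile and the size of the price-level surprise are hypothetical, and for transparency discounting is switched off by taking m = 1:

```python
# Sketch of the T-versus-J logic with a hypothetical maturity profile.
# With m = 1 and full repayment, the burden is just sum_j B_j / p_{t+j}.

def burden(faces, price_path):
    return sum(b / p for b, p in zip(faces, price_path))

faces = [100.0, 100.0, 100.0]      # debt due in j = 1, 2, 3 periods

def surprise_path(T, jump=1.10):
    """Price level jumps by 10% from period T onward (T counted from 1)."""
    return [jump if j >= T else 1.0 for j in range(1, len(faces) + 1)]

base  = burden(faces, [1.0] * 3)          # no inflation surprise
early = burden(faces, surprise_path(T=1)) # surprise before any debt is due
late  = burden(faces, surprise_path(T=4)) # surprise after all debt is due

assert early < base   # T <= J: the debt is inflated away
assert late == base   # T > J: inflation is neutral for the burden
```

Shortening the maturity of `faces` (as happened to the US debt distribution) shrinks the window during which the surprise can arrive in time.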
For the second case, consider a different set of special assumptions: (i) the stochastic discount factor across one period is a constant m ∈ (0,1), so that m_{t,t+j} = m^j; (ii) there are only two types of bonds outstanding entering period t, a one-period bond that pays B^s right away and a consol that pays B^c every period in perpetuity; (iii) the central bank can choose p_t freely right away, but after that, the price level evolves according to p_{t+j} = ρ_p^j p_t. With these restrictions (and assuming ρ_p > m so that the sum converges), the fiscal burden is:

$$\Phi_t = \frac{1}{p_t}\left(B_t^s + \frac{B_t^c}{1 - m/\rho_p}\right). \qquad (10)$$
The more persistent the increase in the price level, the lower the real value of the debt will be. Moreover, more persistence of the price level implies that the fiscal burden falls by more per unit of higher prices today. Intuitively, higher persistence implies that the price level will be higher in the future, eating away at the real value of the consol. This formula also makes clear how this effect depends on the interaction with the maturity of the government's debt: the larger the ratio B^c/B^s, the larger the effect that higher persistence has on inflating away the debt.

To conclude, being able to inflate more quickly and with more persistence increases the reduction in the fiscal burden that can be achieved. But both of these properties interact with the maturity of the debt. With more complicated maturity structures than the two special cases in this section, figuring out this interaction is not easy, and it can make a large difference for the calculations.
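The comparative static in equation 10 can be verified numerically. All parameter values below are made up for the example:

```python
# Comparative statics of eq. (10) with made-up parameters: holding p_t
# fixed, more persistent inflation (higher rho_p) lowers the burden by
# eroding the consol; the effect scales with B^c relative to B^s.

def burden(B_s, B_c, p_t, m, rho_p):
    assert rho_p > m  # needed for the consol sum to converge
    return (B_s + B_c / (1.0 - m / rho_p)) / p_t

m, p_t = 0.95, 1.0
low_persistence  = burden(B_s=100.0, B_c=10.0, p_t=p_t, m=m, rho_p=1.00)
high_persistence = burden(B_s=100.0, B_c=10.0, p_t=p_t, m=m, rho_p=1.05)

assert high_persistence < low_persistence
```

Rerunning the same comparison with a larger B^c and a smaller B^s widens the gap between the two burdens, illustrating the maturity interaction in the text.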
5.3.3 An Options-Based Approach

In back-of-the-envelope calculations, it is common to assume that inflation can be increased instantly and permanently and that the distribution of the maturity of government debt is exponential. However, Hilscher, Raviv, and Reis (2014) show that these seemingly innocent assumptions dramatically overstate the effect of higher inflation on the public debt. Moreover, while so far we have assumed for analytical transparency that higher inflation does not affect real interest rates, this Phillips curve effect can be substantial in the data. Is there a relatively simple way to provide reliable estimates given the large error in these and other approximations?

The answer is in the formula for the debt burden. It is a result in financial economics that if f(π_{t,t+j}) is the probability density for inflation, π_{t,t+j} ≡ p_{t+j}/p_t, and r^f_{t,t+j} denotes the risk-free rate between dates t and t + j, defined as (1 + r^f_{t,t+j})^{-1} ≡ 𝔼_t(m_{t,t+j}), then we can define the risk-adjusted density for inflation as f^Q(π_{t,t+j}) ≡ (1 + r^f_{t,t+j}) f(π_{t,t+j}) m_{t,t+j}(π_{t,t+j}). The fiscal burden without default is then equal to:

$$\Phi_t = \sum_{j=0}^{\infty} \left(1 + r^f_{t,t+j}\right)^{-1} \mathbb{E}_t^{Q}\!\left[\frac{B_t^j}{p_t\,\pi_{t,t+j}}\right], \qquad (11)$$
where 𝔼^Q is the expectations operator under the risk-adjusted measure. This result shows that the harmonic mean of inflation under the risk-adjusted measure at different horizons is the sufficient statistic from inflation that multiplies the maturity distribution of the debt to give the fiscal burden. This formula can either be used probabilistically to provide "value-at-risk" probabilities for whether inflation will lower the real value of the debt by a certain threshold or be used to easily calculate counterfactuals of the effects of higher inflation. Until recently, this formula could not be implemented because there were no good estimates of the f^Q(·) probability density. But Hilscher, Raviv, and Reis (2014) use the recently active markets in inflation options to obtain these marginal densities for inflation and propose an econometric estimator to convert them into joint distributions for inflation.
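A discretized sketch of equation 11 shows how the risk-adjusted expectation of 1/π acts as the sufficient statistic. The maturity profile, discount factors, and risk-adjusted probabilities below are invented for illustration, not estimates from option prices:

```python
# Discretized sketch of eq. (11). All inputs are hypothetical.

def burden(faces, disc, pi_grid, fQ, p_t=1.0):
    """faces[j]: nominal payment B_t^j at horizon j+1; disc[j]: the
    risk-free discount factor (1 + r^f)^{-1} to that horizon; fQ[j]:
    risk-adjusted probabilities over the cumulative-inflation grid."""
    phi = 0.0
    for j, B in enumerate(faces):
        # the sufficient statistic: E^Q[1/pi] at horizon j
        EQ_inv_pi = sum(w / pi for w, pi in zip(fQ[j], pi_grid))
        phi += disc[j] * B * EQ_inv_pi / p_t
    return phi

pi_grid = [0.98, 1.02, 1.06]                  # cumulative inflation outcomes
fQ      = [[0.2, 0.6, 0.2], [0.1, 0.5, 0.4]]  # weights at horizons 1 and 2
disc    = [0.98, 0.96]                        # risk-free discount factors
faces   = [100.0, 100.0]

base = burden(faces, disc, pi_grid, fQ)

# a counterfactual shift of risk-adjusted mass toward high inflation
fQ_high = [[0.0, 0.4, 0.6], [0.0, 0.3, 0.7]]
assert burden(faces, disc, pi_grid, fQ_high) < base  # burden falls
```

The counterfactual at the end is exactly the kind of calculation the formula is meant to make easy: only the risk-adjusted weights change, while the maturity distribution and discount factors stay fixed.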
5.4 Dividends from Seigniorage

A part of financial regulation in most advanced economies consists of forcing banks to hold a required amount of reserves at the central bank, usually against their deposits. These required reserves often pay zero interest, so they are distinct from the voluntary reserves v_t that were discussed already. We denote them by H_t instead (recalling that capitals denote variables in nominal units). Still taking the case v_t = 0, without loss of generality, the resource constraint of the central bank becomes:
$$p_t\,d_t = H_t - H_{t-1}. \qquad (12)$$
By forcing banks to hold more of these required reserves, the central bank generates resources that it can then rebate to the fiscal authorities. Ultimately, this regulation is a form of taxation of the banks. As such, it is subject to the usual trade-offs: on the one hand, it generates fiscal revenues; on the other hand, it also introduces distortions on the optimal size of banks and on the extent of deposit financing. Closely related is the ability of the central bank to issue banknotes. By definition, they pay zero interest. This section studies this central bank product in more detail.
5.4.1 The Central Bank as a Seller of Payment Services

Central banks all over the world issue banknotes. Currency is the sum of banknotes and coins, and while the latter are sometimes issued by the treasury, this is usually under strict rules on the ratio of coins to banknotes, so that the central bank effectively has almost perfect control over total currency. Reserves and currency are the dual bedrock of the monetary system. Banknotes come with the promise that they can be exchanged for reserves and vice versa, at any time, on simple request, at a fixed exchange rate of one to one (which is why currency and reserves can be interchangeably referred to as the unit of account).
For some reason, people find currency useful for making payments. There is an enormous literature on why this is so (e.g., Nosal and Rocheteau, 2011). For our purposes, it only matters that people happen to demand these services, and the central bank is their monopoly supplier. Because banknotes are durable, the monopolist's sales are the first difference of the stock of currency; because of the one-to-one exchange rate with reserves, their real price is 1/p_t. Therefore, with only a slight abuse of notation, we can denote the sum of required reserves and currency by H_t, so the revenue from providing these payment services is:
$$s_t = \frac{H_t - H_{t-1}}{p_t}. \qquad (13)$$
This is typically called seigniorage. In the most standard model of monetary economics, due to Sidrauski (1967), the central bank rebates this revenue to households every period. An alternative story, due to Friedman (1969), imagines the central bank dropping the banknotes from a helicopter, or effectively giving away the seigniorage revenue by distributing currency for free. In either of these two cases, the fiscal effect is neutral. The central bank pays zero dividends to the fiscal authorities, and the analysis of the previous section applies, so the central bank can only affect fiscal burdens via debasement of the nominal debt. But this is not how central banks supply currency.
5.4.2 Fiscalist Inflation

Closer to reality, central banks collect the seigniorage revenue and pay it to the fiscal authority every period, so the resource constraint of the central bank is simply:
$$d_t = s_t. \qquad (14)$$
In this case, the central bank rebates the revenue from its monopoly supply of a publicly provided good to the fiscal authorities. From a different perspective, the central bank is a collector of the inflation tax on the holders of currency. Either way, the central bank is not all that different from the parking tickets office or the issuer of permits for boats: it collects a revenue and uses it to provide direct fiscal transfers that lower the fiscal burden of the government. It is tempting to say that by printing more banknotes, the central bank can just increase this revenue. While this is often taught in introductory economics classes, it is quite far removed from the reality of central banking. No modern central bank chooses H t exogenously. Rather, they choose the interest on reserves and in doing so give rise to some equilibrium inflation. Then they simply stand ready to accommodate whatever demand for currency there is. The amount of banknotes in circulation is not chosen by the central bank, as it satisfies any demand that there is for this means of payment.
How is the amount of currency in circulation then endogenously determined? Because currency pays no interest, and households and firms could instead hold their savings in a bond that pays the nominal interest rate, the opportunity cost of holding money is approximately equal to the short-term interest rate i_t. Almost all models of the demand for currency then give rise to a declining function of this opportunity cost, L(·): ℝ_+ → ℝ_+. This gives the demand:

$$\frac{H_t}{p_t\,y_t} = L(1 + i_t). \qquad (15)$$
L(·) is sometimes called the inverse velocity of currency. The central bank therefore chooses an interest rate, which pins down inflation, and this determines the demand for currency. Printing currency is not an option independent of changing interest rates and determining inflation.

Sargent and Wallace (1981) clarified this point in a forceful way. Imagine that the central bank were solely focused on generating a fiscal revenue for the fiscal authorities, so d_t/y_t was imposed on it exogenously. This would then pin down an inflation path, since combining equations 13, 14, and 15, given an exogenous d_t/y_t, one gets a stochastic difference equation for inflation:
$$\frac{d_t}{y_t} = L\!\left(\frac{1}{\mathbb{E}_t\!\left[m_{t,t+1}\,p_t/p_{t+1}\right]}\right) - L\!\left(\frac{1}{\mathbb{E}_{t-1}\!\left[m_{t-1,t}\,p_{t-1}/p_t\right]}\right)\frac{y_{t-1}\,p_{t-1}}{y_t\,p_t}. \qquad (16)$$
Inflation would therefore be pinned down by the fiscal demands placed on the central bank. The general lesson is that "printing money" to generate revenue is not a choice independent of setting interest rates and determining inflation. The central bank is the monopoly supplier of a durable good. It cannot just "sell more" to get more revenue. Rather, it must change the price of its good. In the case of the central bank's output, banknotes, the only way to raise its revenue, seigniorage, is to let inflation rise.
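This logic can be illustrated in a steady state: fix the revenue the fiscal authority demands, and the inflation rate is backed out residually. The isoelastic money-demand form and every parameter value below are assumptions for illustration, not estimates:

```python
# Steady-state sketch of the Sargent-Wallace point with a hypothetical
# calibration. Given an exogenous seigniorage target d/y, inflation is
# backed out from s/y = L((1+pi)/m) * (1 - 1/((1+g)(1+pi))).

def seigniorage_share(pi, m=0.98, g=0.03, L0=0.02, eta=0.5):
    i = (1.0 + pi) / m - 1.0            # nominal rate from the Fisher relation
    L = L0 * (i / (1.0 + i)) ** (-eta)  # assumed isoelastic inverse velocity
    return L * (1.0 - 1.0 / ((1.0 + g) * (1.0 + pi)))

def inflation_for_target(target, lo=0.001, hi=5.0, tol=1e-10):
    """Bisect for the inflation rate that delivers the revenue target."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if seigniorage_share(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

pi_star = inflation_for_target(0.005)   # demand 0.5% of GDP in seigniorage
assert abs(seigniorage_share(pi_star) - 0.005) < 1e-6
```

The central bank in this sketch has no free choice left: once d/y is imposed, the inflation rate follows, which is exactly the "fiscalist" reversal of roles.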
5.4.3 How Large Is It?

Because printing banknotes can be done quickly, seigniorage is an appealing source of revenue for the government, and it is an effective way for the central bank to lower the fiscal burden, albeit at the cost of higher inflation. A long literature has discussed the use of this source of revenue (Cagan, 1956; Fischer et al., 2002). A common conclusion is that it is relatively small.
To understand why this is so, start by considering seigniorage in a steady state where 1 + g = y_t/y_{t-1}, 1 + π = p_t/p_{t-1}, and m = 𝔼_t(m_{t,t+1}). Seigniorage then is:

$$\frac{s}{y} = L\!\left(\frac{1+\pi}{m}\right)\left[1 - \frac{1}{(1+g)(1+\pi)}\right]. \qquad (17)$$
Inverse velocity for the non-interest-paying monetary base in the United States between 1960 and 2015 was 6.11 percent on average, while the average nominal income growth rate was 6.32 percent. Seigniorage has therefore been approximately 0.36 percent of GDP. By comparison, in the data, after taking away the costs of running its operations, the Federal Reserve's average remittances to the US Treasury have been 0.31 percent of GDP.

Increasing π would raise the last term in brackets on the right-hand side, but it would also lower inverse velocity. This is the typical taxation effect: higher taxes directly raise tax revenue but lower it by reducing the tax base. A standard specification of the demand for currency writes inverse velocity as an isoelastic function L(1 + i_t) ≡ L_0[i_t/(1 + i_t)]^{-η}. L_0 is a constant, while Lucas (2000) suggests that η = 0.5 is roughly consistent with the US data. Under this particular specification, increasing inflation from its 1960-2015 average of 3.31 percent by one percentage point would raise seigniorage by a mere 0.02 percent of GDP. Even raising inflation by a full 10 percentage points would increase seigniorage by only 0.17 percent of GDP.

It is not easy to estimate the peak of the Laffer curve for seigniorage from banknotes. Historically, Fischer et al. (2002) suggest from looking at a cross section of countries that the peak happens at 174 percent inflation, with seigniorage of about 6 percent of GDP. They also document that even in deep fiscal crises, when fiscal balances run at a 10 percent deficit, inflation is usually only on the order of 20 percent. Outside of steady state, Hilscher et al. (2016) estimate the present value of seigniorage for the United States from a variety of different perspectives using market inflation expectations. Even under their most generous estimates, the present value of seigniorage is never above 30 percent of GDP, and a more accurate estimate is slightly above 20 percent.
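The 0.36 percent figure can be checked directly from the steady-state formula, using only the two averages quoted in the text:

```python
# Back-of-the-envelope check of the seigniorage number in the text,
# using the steady-state formula s/y = L * (1 - 1/((1+g)(1+pi))).

L = 0.0611               # average inverse velocity of the monetary base
nominal_growth = 0.0632  # (1+g)(1+pi) - 1: average nominal income growth

s_over_y = L * (1.0 - 1.0 / (1.0 + nominal_growth))
print(round(100.0 * s_over_y, 2))  # prints 0.36 (percent of GDP)
```

The calculation makes the smallness transparent: seigniorage is the product of two small numbers, the share of the monetary base in GDP and the rate at which nominal growth erodes it.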
The amount of banknotes in circulation is simply not large enough for steady-state seigniorage to be all that fiscally significant.

Another source of seigniorage revenue comes from requiring banks to hold some reserves. Measuring the limits of this implicit tax is harder. Directly, this tax on collecting deposits should reduce savings in the banking sector, thereby shrinking the tax base. Perhaps more important, though, is the indirect effect. As banks face a higher marginal cost of financing, this should lead to fewer projects being financed by bank credit, which in turn should lower economic activity, the amount of wealth, and thus the amount held as deposits. Estimating the full general-equilibrium Laffer curve is therefore not easy. Insofar as a sudden increase in required reserves can precipitate a financial crisis, trying this policy option can be very costly (Hoggarth et al., 2002).

To conclude, seigniorage may be useful to smooth small business-cycle fluctuations in government revenue while keeping tax rates fixed (Chari et al., 1991), so it may be a source of fiscal smoothing. But on average, its role in overall government revenue is small.
5.5 Reserves and Central Bank Insolvency

So far, we have set outstanding reserves to zero at all dates. Being a liability of the central bank, reserves are a government liability. This section asks whether reserves (and banknotes) should be counted as part of the public debt.
5.5.1 The Intertemporal Resource Constraint of the Central Bank

With both seigniorage and reserves present, the equality between sources and uses of funds for the central bank gives:
s_t + v_t = (1 + r_t) v_{t-1} + d_t.  (18)
Iterating forward, this gives:
(1 + r_t) v_{t-1} = E_t \sum_{j=0}^{\infty} m_{t,t+j} (s_{t+j} - d_{t+j}) + \lim_{T \to \infty} E_t (m_{t,T} v_T).  (19)
Reserves are just another form of government liability. Banks must voluntarily choose to hold them, and, like any other private agent, they will not be willing to hold an asset that amounts to a Ponzi scheme. Of course, the central bank can always issue new reserves to pay the interest on old reserves. But so can the standard consumer of textbook microeconomics, who could likewise issue new debt to pay off old debt. The problem is not the ability to issue pieces of paper that stand for liabilities but rather whether these can have a positive value, in the sense that private agents are willing to give up real resources in return. Therefore, reserves cannot be a Ponzi scheme:
\lim_{T \to \infty} E_t (m_{t,T} v_T) \le 0.  (20)
Equivalently, the central bank faces an upper bound on its dividends:
E_t \sum_{j=0}^{\infty} m_{t,t+j} d_{t+j} \le E_t \sum_{j=0}^{\infty} m_{t,t+j} s_{t+j} - (1 + r_t) v_{t-1}.  (21)
This is the intertemporal resource constraint facing the central bank. Together with the constraint facing the fiscal authority in equation 5 of section 5.2, it jointly defines the overall government fiscal limit.
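The forward iteration from the flow constraint in equation 18 to the present-value relation in equation 19 can be checked numerically in the deterministic case, where the discount factors are simply m_{t,t+j} = (1+r)^{-j}. The seigniorage and dividend paths below are arbitrary illustrative numbers:

```python
# Deterministic check of the iteration from the flow constraint (18),
# s_t + v_t = (1+r) v_{t-1} + d_t, to the present-value relation (19).
# With a constant real rate, m_{t,t+j} = (1+r)**(-j); s and d are arbitrary.
import random

random.seed(0)
r = 0.04
T = 200
s = [random.uniform(0.0, 0.02) for _ in range(T)]  # seigniorage path
d = [random.uniform(0.0, 0.02) for _ in range(T)]  # dividend path

# Flow constraint rearranged: v_t = (1+r) v_{t-1} + d_t - s_t.
v_prev = 1.0  # reserves outstanding at the start, v_{t-1}
v = v_prev
path = []
for j in range(T):
    v = (1 + r) * v + d[j] - s[j]
    path.append(v)

# Equation (19): (1+r) v_{t-1} equals the present value of s - d
# plus the discounted terminal stock of reserves.
lhs = (1 + r) * v_prev
pv_flows = sum((1 + r) ** (-j) * (s[j] - d[j]) for j in range(T))
tail = (1 + r) ** (-(T - 1)) * path[-1]
rhs = pv_flows + tail
print(lhs, rhs)
```

The two sides agree to floating-point precision, and the no-Ponzi-scheme condition in equation 20 is precisely the requirement that the `tail` term cannot stay positive as T grows without bound.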
5.5.2 Reserves and the Public Debt

If there is no constraint on the dividends paid to the fiscal authorities, then the two resource constraints can be combined by replacing out for dividends. That is, if the central bank can always count on the fiscal authority to set a sequence of \{d_{t+j}\}_{j=0}^{\infty} that satisfies the central bank's resource constraint, then, of course, this constraint is slack, and only the combined intertemporal resource constraint for the government as a whole matters. This may include negative dividends every year, implying that the central bank is recapitalized annually, but this is effectively what happens in many government bodies, like the department of public works, which has many expenses and almost no revenues. Insofar as its liabilities are backed by the overall fiscal authorities, a government agency cannot be insolvent separately from the solvency of the overall government (Sims, 2003). In this case, replacing out for the present value of dividends in the government budget constraint in equation 5 and using the resource constraint of the central bank in equation 21 gives the joint constraint:
(1 + r_t) v_{t-1} + \Phi_t \le E_t \sum_{j=0}^{\infty} m_{t,t+j} (f_{t+j} + s_{t+j}).  (22)
In the previous section, where reserves were set to zero, the same equation held. This equation makes clear that reserves are just another government liability. They count toward the fiscal burden, and they could be included in the public debt. However, counting them as public debt would require making two more adjustments to public accounting. Once these are done, at least for the four countries in figure 5.1, including reserves would actually lower the calculated public debt.

First, adding reserves to the left-hand side comes with adding seigniorage to the right-hand side as a source of funds. As discussed in section 5.4, the present value of US seigniorage is somewhere between 20 percent and 30 percent of GDP, while, as displayed in figure 5.1, outstanding reserves peaked at 15 percent of GDP. Second, the issuance of reserves often happens to fund the purchase of public debt. When this happens, it leaves the left-hand side of the equation unchanged, since the higher v_{t-1} is immediately offset by a lower \Phi_t. Section 5.8 will further explore this Modigliani-Miller neutrality, whereby exchanging one form of public liability for another makes no difference to the overall fiscal burden. Currently, for the countries in figure 5.1, the value of the central bank's bond portfolio exceeds the value of outstanding reserves. Net of assets and seigniorage, including reserves would then actually lower the public debt.

Moreover, starting from an initial situation where reserves were zero, the future path of reserves from then onward would make no difference to the fiscal burden. The ability to issue reserves does not per se alleviate the fiscal burden of the government; it is the present value of seigniorage that matters. Intuitively, no net transfers result from issuing reserves because, in equilibrium, no arbitrage ensures that no matter what nominal interest rate is chosen, the price level will adjust, and reserves will end up paying the
market real return. Issuing reserves therefore boils down to issuing another government liability, with no creation of new resources, so reserves do not alleviate the fiscal burden.

Finally, banknotes in circulation do not enter the resource constraint above. They are not a liability of the government or of the central bank. Rather, they are a durable good that the central bank produces and sells. Therefore, currency affects the fiscal burden only via the seigniorage revenue discussed in the previous section.
5.5.3 Central Bank Independence and Insolvency

The analysis of the previous subsection implies that there is no independent resource constraint on the actions of the central bank. Whatever the losses or gains, the fiscal authority would back the central bank, and as such, the only binding constraint is the integrated budget constraint of the fiscal and monetary authorities. As Hall and Reis (2015) clarified, the institution of central bank independence (CBI) breaks this situation. A minimal condition for independence of choices and actions of the central bank is its ability to finance its operations without needing approval from the treasury (Amtenbrink, 2011, chapter 3 in this volume). This requires the central bank having its own budget, independent of the fiscal authorities, and not relying on the treasury to approve recapitalizations of the central bank, to which conditions on monetary policy could be attached. Formally, central bank independence means a constraint on the sequence \{d_{t+j}\}_{j=0}^{\infty}. It is the joint combination of a constraint on the flow of dividends, \{d_{t+j}\}_{j=0}^{\infty}, and the no-Ponzi-scheme condition in equation 20 that leads to a meaningful intertemporal resource constraint for the central bank in equation 21.

There is no bankruptcy or insolvency procedure for a central bank. That is, there is no legal framework or institution that will judge whether a central bank's liabilities exceed its assets and force it to reorganize or be liquidated. If any private agent wants to settle a claim on the central bank, or is willing to sell the central bank a good or asset, then as long as the claim or price is denominated in units of currency, the central bank can always pay by issuing more reserves, standing ready to exchange them for banknotes on request. But if the central bank cannot legally go insolvent, how can it face a resource constraint, as we just derived?
The standard model of intertemporal consumer behavior taught in microeconomics textbooks also has no legal insolvency proceedings, yet it puts a strict constraint on the consumer. That comes from the no-Ponzi constraint, so that insolvency in a rational-expectations model typically ends up being an off-equilibrium event, which is not observed but is still relevant. Likewise, for the central bank in the standard economic model described so far, the no-Ponzi-scheme condition is the constraint that gives meaning to central bank insolvency. If the central bank tries to let reserves shoot off to infinity, as in a Ponzi scheme, then the real value of reserves to a private agent will be zero. Since, by definition, the value of reserves is 1/p_t, when no one wants to hold them, then p_t = \infty. This is what is special about central banks: their insolvency is synonymous with hyperinflation or
currency reform. It is not impossible, or even rare, for central banks to go insolvent; it happens all the time around the world. Insolvency happens when people no longer accept holding the central bank's liabilities and the country has to redenominate its currency, an event countries frequently go through.

Still, one may wonder: how is it that a central bank that can print currency is unable to pay all its nominal reserve debt? Since banknotes are legal tender to settle any debt, isn't reverting from interest-rate setting to exogenously choosing the supply of currency the answer to prevent insolvency? To formally evaluate this intuition, consider a pseudo-final date, date t, when the central bank must pay a very large nominal dividend. After that, no more payments are made, but also no more reserves or banknotes are issued. The flow of funds, now written in nominal terms, is:
H_t - H_{t-1} = (1 + i_{t-1}) V_{t-1} + D_t,  (23)
where V_{t-1} and D_t are reserves and dividends, respectively, in nominal terms. From date t+1 onward, since there are no dividends or reserves, the central bank can close its doors, and the currency supply stays fixed at H_t forever. This equation formalizes the intuition behind the question above because, no matter how large V_{t-1} or D_t is, the central bank could always choose H_t large enough to make the equation hold. Moreover, since reserves would be zero from date t onward, the no-Ponzi-scheme condition would clearly be satisfied. It might seem, therefore, that a central bank that switches from the current policy of setting interest rates to exogenously printing banknotes could never be insolvent.

This thinking is incorrect, because it keeps inflation and V_{t-1} fixed, when in fact the act of printing banknotes would change them. As already noted, given a stable demand function L(\cdot), raising the supply of banknotes must come with higher inflation. If this inflation was unexpected, then the ex post real payment on the reserves:
1 + r_t = (1 + i_{t-1}) p_{t-1} / p_t  (24)
will be unexpectedly low. Therefore, at this final date, holders of reserves receive a much lower return than expected. The extra inflation is equivalent to the decline in the recovery rate of a government bond during a sovereign insolvency. One may not call it insolvency, but in effect it is just that: the payment on reserves is well below what was promised and expected.

Alternatively, if the inflation was expected, then agents would have demanded a much larger i_{t-1} the period before. But this leads to a larger required payment on reserves this period, which in turn requires even more printing of banknotes in the current period, and the process continues. As we saw in the previous section, there is ultimately an upper bound on the real resources that can be generated by seigniorage, such that the central bank simply cannot pay the real amount required by markets on its reserves at date t-1. But in that case, banks will not want to accept reserves at date t-1, so their real value would be zero, and p_{t-1} would be infinite.
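A minimal numerical sketch of this argument, assuming for illustration a quantity-theory demand for banknotes (so the price level is proportional to the supply of currency), shows how printing enough banknotes to retire a large stock of reserves destroys the ex post real return on those reserves. All the numbers are hypothetical:

```python
# Sketch of the pseudo-final-date argument in equations (23)-(24):
# retiring reserves V_{t-1} plus a dividend D_t by printing banknotes
# raises H_t, and with a stable currency demand (H/p = k, an assumed
# quantity-theory specification) the price level rises in proportion,
# so the ex post real return on reserves collapses.

k = 1.0            # real demand for banknotes, assumed constant
H_prev = 1.0       # banknotes outstanding at t-1
p_prev = H_prev / k
i_prev = 0.05      # nominal rate promised on reserves at t-1
V_prev = 10.0      # large stock of outstanding reserves
D = 2.0            # large final nominal dividend

# Equation (23): printing needed to retire reserves and pay the dividend.
H = H_prev + (1 + i_prev) * V_prev + D
p = H / k          # stable currency demand pins down the price level

# Equation (24): ex post real return on reserves.
real_return = (1 + i_prev) * p_prev / p - 1
expected_real_return = i_prev  # what holders expected had p stayed flat

print(f"price level jumps from {p_prev:.2f} to {p:.2f}; "
      f"realized real return {real_return:.1%} vs expected "
      f"{expected_real_return:.1%}")
```

The realized real return is deeply negative even though every nominal promise is technically honored, which is the sense in which the surprise inflation acts like a haircut on defaulting debt.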
The argument that the ability to print banknotes without bound rules out insolvency is therefore incorrect. Ex ante or ex post, the ability to print banknotes does not prevent the two manifestations of insolvency: the ex ante unwillingness of lenders to roll over the debt, and the ex post large haircuts on defaulting debt. An alternative way to state this is that being solvent can be interpreted as being able to keep one's promises. A central bank that promised a nominal interest rate on reserves and promised to hit a certain inflation target has promised a real return on its reserves. The central bank is insolvent when the actual return r_t falls well below what was expected. This happens when inflation is well above what was expected.

To sum up, it is the combination of central bank independence and central bank insolvency that puts a fiscal resource constraint on the actions of the central bank. Central bank design (Reis, 2013) must spell out precisely what constraints are imposed on \{d_{t+j}\}_{j=0}^{\infty} in the relation between the central bank and the fiscal authorities. The next four sections consider four separate constraints on dividends that match different central banks and discuss their implications for central bank insolvency and the resulting fiscal burden of monetary policy actions.
5.6 Exchange-Rate Income Risk and Period Solvency

Most central banks around the world hold large reserves of foreign assets. These come with what is perhaps the major source of income risk to the central bank: the risk associated with fluctuations in exchange rates. This section studies the implications for the dividend flow to the fiscal authorities and how this risk interacts with the constraints that determine central bank solvency.
5.6.1 Period Solvency

The charters of the ECB make no explicit allowance for automatic recapitalizations. For the ECB to receive a transfer from the fiscal authorities (a negative d_t), there must be an explicit agreement by all the member nations of the Eurosystem. In most circumstances, this is politically difficult. Therefore, the ECB can be interpreted as having the weakest of fiscal support from the European fiscal authorities, with negative dividends ruled out: \{d_{t+j}\}_{j=0}^{\infty} \ge 0. At the same time, the charters of the ECB state that it must rebate its net income to the national central banks of the Eurosystem every year, and many of these are then required by national law to send it as dividends to their respective fiscal authorities. Therefore, to a first approximation, the dividend rule of the ECB can be written as:
d_t = \max\{s_t - r_t v_{t-1}, 0\}.  (25)
For our simple central bank, which so far only issues reserves and prints banknotes, net income equals the seigniorage from printing banknotes minus the interest paid on reserves. When the interest paid exceeds seigniorage, net income is negative, and the central bank does not receive any recapitalization from the fiscal authorities. In this extreme reading of the charters of the ECB, every time the central bank records a positive net income, it must rebate it to governments, but when net income is negative, it can expect no transfer from the fiscal authority.

Over the past century, across most of the developed world, annual seigniorage has usually been positive. Nominal income growth has been positive in most years, and the velocity of currency seems to be stationary, so s_t is positive by the end of the year. However, consider what would happen to our hypothetical central bank if, in one year, seigniorage turned out to be negative. Starting from zero reserves and zero assets, that year the central bank would have to borrow to offset the fall in income. As banks come to deposit some of their currency at the central bank, they receive reserves in return. Interest must now be paid on these reserves forever, since positive net income is sent to the fiscal authorities and cannot be used to retire reserves. Every time net income is negative in the future, which happens more often when reserves are higher, reserves grow again. At some point, the total interest on reserves exceeds seigniorage at all dates, and the central bank must issue reserves every year just to pay the interest. The central bank is running a Ponzi scheme; it is insolvent.

More formally, combining the dividend rule with the resource constraint in equation 18 leads to the law of motion for reserves:
v_t = v_{t-1} + \max\{0, r_t v_{t-1} - s_t\}.  (26)
Therefore, v_t \ge v_{t-1} almost surely. Moreover, once v_t is large, so that s_t is negligible relative to it, v_t grows exponentially at the real interest rate. At this point, the expected discounted value of reserves is strictly positive, violating the no-Ponzi-scheme condition in equation 20. Reis (2015a) called this period insolvency. Under the asymmetric dividend rule that does not allow for any negative dividends, as long as there is a positive probability of negative seigniorage in one period, the central bank eventually becomes insolvent. Hall and Reis (2015) characterize the circumstances where d_t < 0 for the United States and the ECB by measuring the recapitalizations necessary to cover the possible losses. Understanding the period solvency of the central bank then reduces to inspecting its net income and gauging the circumstances under which it can be negative, since these define the cases where the central bank becomes a fiscal burden.
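The dynamics implied by the dividend rule in equation 25 and the law of motion in equation 26 can be simulated directly. The distribution of seigniorage shocks and the real rate below are purely illustrative assumptions:

```python
# Simulation of the asymmetric dividend rule (25) and the law of motion (26):
# v_t = v_{t-1} + max(0, r v_{t-1} - s_t). Seigniorage draws are illustrative
# and occasionally negative. Reserves can never fall: every period of
# negative net income permanently raises the interest bill, the ratchet
# behind period insolvency.
import random

random.seed(1)
r = 0.03
v = 0.0
path = [v]
divs = []
for t in range(300):
    s = random.gauss(0.004, 0.01)     # seigniorage draw, sometimes < 0
    divs.append(max(s - r * v, 0.0))  # dividend rule (25): never negative
    v = v + max(0.0, r * v - s)       # law of motion (26)
    path.append(v)

monotone = all(b >= a for a, b in zip(path, path[1:]))
print(monotone, round(path[-1], 3), min(divs))
```

In the simulation reserves are weakly increasing period by period; once they are large enough that interest due exceeds seigniorage in every state, they compound at the real rate, which is exactly the Ponzi path that violates equation 20.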
5.6.2 Risky Assets and Central Bank Net Worth

Let the central bank now buy risky assets in the amount z_t. For simplicity, imagine that this asset pays no dividend but has a nonzero real price e_t, as in the case of foreign currency, although one could extend the analysis to a risky dividend with no significant changes to the conclusions. In this case, e_t stands for the real exchange rate, such that when it rises, the foreign currency appreciates relative to the domestic currency. No arbitrage with respect to this asset imposes that:

E_t [m_{t,t+1} e_{t+1} / e_t] = 1.  (27)
The resource constraint of this central bank now is:
v_t = (1 + r_t) v_{t-1} - s_t + d_t + e_t (z_t - z_{t-1}).  (28)
The new term reflects the fact that new purchases of the risky asset must be funded by issuing reserves. The risk arises because, while z_{t-1} was chosen last period, its real payoff today depends on e_t, so reserves today may have to be issued to cover losses. The no-Ponzi-scheme condition on reserves now is:
\lim_{T \to \infty} E_t [m_{t,T} (v_T - e_T z_T)] \le 0,  (29)
reflecting the central bank’s assets that can be used to pay the reserves. Having introduced assets on the central bank balance sheet, there is also now a definition of central bank net worth as the difference between the value of assets and the value of liabilities. In real terms:
n_t = e_t z_t - v_t.  (30)
By itself, this definition does not mean much. Unlike private corporations, central banks cannot be shut down by creditors demanding a sale of the assets to pay for the liabilities, so whether net worth is positive or negative has, in itself, no economic implications. However, when combined with the no-Ponzi-scheme condition, note that if net worth is constant, then as long as m_{t,T} declines with T, which is true in almost all models, the central bank will always be solvent. Requiring constancy of net worth is sufficient, though not necessary, to ensure solvency.

The pervasiveness of the concept of net worth has led to it creeping into the rules for the operations of central banks and for the distribution of dividends to the fiscal authority. For instance, net income is often calculated following the accounting convention that it equals the first difference of net worth in the balance sheet. In other words, net income is such that, if it were paid out as a dividend, net worth would stay constant. In this case, net income equals:
x_t = s_t - r_t v_{t-1} + (e_t - e_{t-1}) z_{t-1}.  (31)
The first two terms are familiar. They are the income from seigniorage and the expenses from paying interest on reserves. Both are expressed in real terms using market prices. The last term measures the marked-to-market capital gain from owning foreign
reserves. Because net worth is then constant, the central bank is solvent as long as negative net income comes with a recapitalization by the fiscal authorities.
5.6.3 Foreign Exchange and the Fiscal Burden

Replacing out reserves in the expression for net income using the law of motion for reserves gives a reduced-form expression for dividends:
d_t = s_t + r_t n_{t-1} + [e_t / e_{t-1} - (1 + r_t)] e_{t-1} z_{t-1}.  (32)
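As an arithmetic check, equation 32 is just net income in equation 31 with the definition of net worth from equation 30 substituted in, as a few lines of code confirm for arbitrary numbers:

```python
# Numeric check that the reduced-form dividend (32),
# d_t = s_t + r_t n_{t-1} + (e_t/e_{t-1} - (1+r_t)) e_{t-1} z_{t-1},
# coincides with net income (31),
# x_t = s_t - r_t v_{t-1} + (e_t - e_{t-1}) z_{t-1},
# once n_{t-1} = e_{t-1} z_{t-1} - v_{t-1} (definition (30)) is plugged in.
# All numbers are arbitrary.

s_t, r_t = 0.004, 0.02
v_prev, z_prev = 0.15, 0.10
e_prev, e_t = 1.00, 1.08   # the foreign currency appreciates

n_prev = e_prev * z_prev - v_prev  # net worth definition (30)

x_t = s_t - r_t * v_prev + (e_t - e_prev) * z_prev            # eq (31)
d_t = (s_t + r_t * n_prev
       + (e_t / e_prev - (1 + r_t)) * e_prev * z_prev)        # eq (32)

print(x_t, d_t)
```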
When the dividend in equation 32 is negative, either the central bank is period insolvent or it must be recapitalized, increasing the fiscal burden. The first term in the expression is the same as in the previous subsection: by selling banknotes, the central bank earns seigniorage. The second term is new and equals the interest earned on net worth. Insofar as the central bank started with assets exceeding reserves, it will earn a return on this endowment. This can be used as a buffer against fluctuations in the other components of income to prevent period insolvency. Even though this term is small for most central banks, it can be large enough to offset some of the rare instances where seigniorage is negative. Central bank net worth can therefore eliminate the insolvency discussed in the previous subsection and, together with periodic recapitalizations, account for why the Fed, the ECB, and the BoE have not experienced insolvency in their history.

The last term is zero in risk-adjusted expectation; to see this, just multiply by the stochastic discount factor and take expectations using the no-arbitrage pricing condition. On average, the central bank does not earn excess returns on its holdings of risky assets. Still, given fluctuations in the price of those assets, if the portfolio is large, the central bank could earn a large positive or negative income in any given year. In the case of foreign currency, losses happen if the domestic real exchange rate appreciates more than expected. With the very large reserves accumulated by central banks in many emerging countries so far this century, even small fluctuations in the exchange rate can generate losses and even leave the central bank with negative net worth.

The situation becomes more extreme when there are large fluctuations in exchange rates during crises. Countries often accumulate large foreign reserves in order to defend their currency pegs.
When markets expect the currency to depreciate and the central bank successfully defends the peg, the exchange rate depreciates by less than expected, so the central bank accumulates a loss on its large foreign holdings. When the peg is finally let go, the surprise depreciation should give the central bank a large gain. But by then z_t is typically already very low, so the gain is small and not enough to offset the losses accumulated over the months spent defending the peg. By the end of the year, the central bank of a country that went through a currency crisis often records very
large losses. Ironically, if the central bank is successful at defending the peg, the losses are even larger.

In the other direction, the Swiss National Bank in 2013–2015 tried to keep the Swiss franc from appreciating relative to the euro by buying a large amount of euro reserves. Toward the end of 2014, it became clear that the exchange-rate peg would eventually have to be let go, especially as the real exchange rate of the Swiss franc was appreciating because the euro was depreciating relative to the dollar. Moreover, the holdings of foreign reserves were so large relative to Switzerland's GDP that there was no hope of a government recapitalization when this happened. The announcement of a further round of QE by the ECB, which was expected to require a further rise in z_t to defend the peg, led the Swiss National Bank to abandon the peg and realize the losses rather than risk its own solvency.

It is therefore not surprising that in many countries the central bank has stronger fiscal support when it comes to its foreign reserve holdings. Often, charters commit the fiscal authority to recapitalize the central bank in case of losses on foreign reserves. In other countries, the central bank is an agent holding the foreign currency in the name of the fiscal authority, which ultimately owns it and bears any associated losses or gains. The central banks of countries with large foreign reserve holdings and exchange-rate pegs can sometimes exclude the income risk from these holdings from their net income, transferring losses automatically to the fiscal authorities. Because foreign reserves expose the central bank to such large income risk, they are brought back into the accounts of the fiscal authorities. In this case, the central bank's actions in managing its portfolio of foreign exchange can add to or diminish the overall fiscal burden.
5.7 Private Default Risk and Rule Solvency

In normal times, the central banks of advanced economies stay away from holding assets that are claims on the private sector. Holding them is seen as potentially interfering with the allocation of credit across sectors in a way that goes beyond the mandate of the central bank. Yet many central banks in emerging countries routinely do so, and from 2013 onward the Bank of Japan started buying large amounts of corporate bonds as part of its quantitative and qualitative easing policies. This section considers the risk that buying bonds that may default brings to the solvency of the central bank, and thus to its ability to generate fiscal resources for the government.
5.7.1 Bonds and Default Risk

I assume that private bonds are denominated in real terms, or alternatively that they are indexed to inflation. This is clearly unrealistic but plays only a pedagogical role.
I already partly considered the effect of inflation on the value of bonds in section 5.3, and section 5.8 will have more to say about government bonds, which are assumed to be nominal. Inflation would not affect private bonds in any interestingly different way from the way it affects government bonds.

A bond promises to pay one real unit tomorrow in exchange for a real price q_t today. This promise may go unfulfilled if the bond defaults and instead pays a fraction of what was promised. With a slight abuse of notation, let \delta_{t+1} be the repayment rate, as we used for government bonds. It equals 1 in most states of the world but is between 0 and 1 in some of them. No arbitrage, as usual, dictates that the risk-adjusted expected gross return on this bond is 1:

E_t [m_{t,t+1} \delta_{t+1}] = q_t.  (33)
As before, the central bank issues reserves v_t, promises to pay a nominal interest rate on them that after inflation delivers a real return r_t, collects seigniorage s_t from selling banknotes, and pays a dividend d_t to the government. I drop the risky foreign assets; they would just add an extra additive term to net income, which was already dissected in the previous section. The novelty is that the central bank buys b_t bonds in real terms and collects the repayment on the bonds it bought in the previous period. Therefore, the law of motion for reserves is:
v_t = (1 + r_t) v_{t-1} - s_t + d_t + q_t b_t - \delta_t b_{t-1}.  (34)
Finally, the new no-Ponzi-scheme condition on reserves again just states that reserves minus the value of assets cannot be rolled over forever:
\lim_{T \to \infty} E_t [m_{t,T} (v_T - q_T b_T)] \le 0.  (35)
5.7.2 Rule Solvency and Accounting Net Worth

Hall and Reis (2015) first combined the no-Ponzi-scheme condition on reserves with the link to central bank independence to come up with a workable definition of central bank solvency. They interpreted central bank independence in terms of a rule for dividends, leading to a definition of rule solvency: a central bank is solvent as long as it can stay committed to the rule for dividends in its charters without reserves exploding. This definition is explicitly grounded in the key role of fiscal support, emphasizing how the rule for dividends fiscally connects the central bank and the treasury. Period solvency, studied in the previous section, is the special case of rule solvency in which the rule is that the central bank dividend can never be negative. But rule solvency is more general because it can accommodate other cases. The first rule considered by Hall and Reis (2015) came from an economic reading of most central bank charters, that
dividends should be paid as net income. In that case, as the previous section showed, net worth would be constant. Reserves would equal:
v_t = q_t b_t - n_0  (36)
at all dates. The central bank is solvent, and reserves track the purchases of bonds.

However, while for most central banks the rule is to pay out net income, this is interpreted through the accounting rules used to calculate net income, and most central banks' accounting standards differ from a proper economic measure of net worth. First, they count their output of banknotes as a liability. Second, they measure net worth in nominal rather than real terms. Therefore, accounting nominal net worth is measured as:
W_t = p_t (q_t b_t - v_t) - H_t.  (37)
As a result, the rule that tells the central bank to keep its net worth constant over time (W_t = W_{t-1}) leads to a dividend payment equal to net income calculated as:
d_t = [i_{t-1} p_{t-1} / p_t] n_{t-1} + [\delta_t / q_{t-1} - (1 + r_t)] q_{t-1} b_{t-1}.  (38)
The first difference relative to the earlier case is that seigniorage disappears. Because banknotes are recorded as a liability, printing and selling them is not counted as revenue. Of course, this accounting definition does not change the reality that there is seigniorage revenue. It simply allows the central bank to keep this revenue instead of distributing it to the fiscal authorities.

The second difference is that it is now the nominal return on net worth that raises dividends. Because the nominal interest rate is always positive, this return is always positive. The reason is that, by calculating net income in terms of nominal accounting net worth, the calculation suffers from a version of money illusion: for a fixed real interest rate, higher inflation raises the real dividend that gets paid.

With these accounting rules, if the central bank holds no risky assets, then its dividend is sure to be always positive, so it is never period insolvent. With bond holdings, or other risky assets, income might be negative. The last term in the expression is zero in risk-adjusted expectation on average, which means that in the states where the bond pays in full, the central bank earns a small positive revenue on its assets. But when the bond defaults, there is a loss, which is larger the smaller the repayment rate and the larger the price of the bond (for instance, because default was seen as less likely). If the rule for fiscal transfers is still the max rule, then declines in central bank net worth measure the extent of the recapitalization that the central bank needs. Private default therefore ends up affecting central bank solvency, and the impact of central bank actions on the fiscal burden, in a similar way to exchange-rate risk: both can lead to fluctuations in net income and in the regular remittances to the fiscal authorities which, if large enough, can trigger central bank insolvency.
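The money-illusion effect in the first term of equation 38 can be illustrated numerically: holding the real rate fixed and using the Fisher equation, higher steady inflation mechanically raises the real dividend. All the numbers are illustrative, and bond holdings are set to zero to isolate the effect:

```python
# Sketch of the money-illusion effect in the accounting rule (38): for a
# fixed real rate, the term i_{t-1} (p_{t-1}/p_t) n_{t-1} rises with
# inflation, so higher inflation raises the real dividend paid out.
# Numbers are illustrative; b_{t-1} = 0 drops the default-risk term.

REAL_RATE = 0.02
n_prev = 0.05   # net worth carried into the period, in real terms

def real_dividend(pi):
    """First term of equation (38) under steady inflation pi."""
    i_prev = (1 + REAL_RATE) * (1 + pi) - 1  # Fisher equation
    deflator = 1 / (1 + pi)                  # p_{t-1}/p_t
    return i_prev * deflator * n_prev

low, high = real_dividend(0.02), real_dividend(0.10)
print(low, high)
```

Even though nothing real has changed, the dividend paid out is larger under 10 percent inflation than under 2 percent, which is exactly the illusion created by measuring net worth in nominal terms.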
5.7.3 Provisioning and the Fiscal Burden

Many central banks have the ability to retain earnings as provisions against future losses on risky assets. This consists of holding on to positive net income, instead of distributing it as a dividend, so that it can be offset against future negative net income. Provisioning raises the net worth of the central bank, so that when losses happen, net worth can fall without leading to insolvency of the central bank. For instance, the ECB statute allows it to build a “risk provision” and a “general reserve” by retaining profits up to 100 percent of its statutory capital.

By focusing on net worth, provisioning may at first seem peculiar, since net worth should not be a relevant concept for a public agency. Yet for an independent public body that issues its own liabilities, like the central bank, provisioning is a sensible way to accommodate future income losses from investments in risky private assets without needing future recapitalizations from the fiscal authority. That is, provisioning is a form of relaxing the institutional independence constraint on \{d_{t+j}\} and, in doing so, preventing central bank insolvency.

Returning to the fiscal burden, including both foreign reserves and bond holdings, the present value of dividends from the central bank to the fiscal authority is still just equal to the net liabilities of the central bank. Therefore, combining the resource constraints gives:
\[
\Phi_t \le \mathbb{E}_t \sum_{j=0}^{\infty} m_{t,t+j}\,\left(f_{t+j} + s_{t+j}\right) + e_t z_{t-1} + \delta_t b_{t-1} - (1+r_t)\,v_{t-1}. \tag{39}
\]
The current value of the assets with which the central bank enters this period reduces the overall debt burden, while the promised payments on reserves increase it. At period $t$, the central bank does not choose any of the variables on the right-hand side, so none of its present actions this period affect the debt burden today. As for past actions, we can rewrite:
\[
e_t z_{t-1} + \delta_t b_{t-1} - (1+r_t)\,v_{t-1} = n_{t-1} + (e_t - e_{t-1})\,z_{t-1} + (\delta_t - q_{t-1})\,b_{t-1} - r_t v_{t-1}. \tag{40}
\]
By issuing liabilities (reserves) to hold assets (foreign and private), the central bank has a net worth of $n_t = q_t b_t + e_t z_t - v_t$. These are net assets that are ultimately owned by the government and which it can use to pay down some of its debt. Therefore, a higher $n_{t-1}$ reduces the fiscal burden. Yet the discussion in this section and the previous one emphasized two properties of net worth that severely limit its ability to relax the fiscal burden. First, in theory, for a central bank that follows a net income rule to pay its dividends, net worth is constant over time. Thus, $n_t = n_0$, and since the initial net worth was given to the central bank by the fiscal authority, it will have added to the fiscal burden then. The intertemporal effect is therefore neutral. Second, in practice, the net worth of most central banks is quite small at most dates, and two orders of magnitude smaller than the public debt, so this effect is for all purposes quantitatively irrelevant.
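Substituting the definition of net worth, $n_{t-1} = q_{t-1}b_{t-1} + e_{t-1}z_{t-1} - v_{t-1}$, confirms term by term that the decomposition in equation 40 is an identity:

```latex
\begin{aligned}
n_{t-1} &+ (e_t - e_{t-1})z_{t-1} + (\delta_t - q_{t-1})b_{t-1} - r_t v_{t-1} \\
  &= \bigl(q_{t-1}b_{t-1} + e_{t-1}z_{t-1} - v_{t-1}\bigr)
     + (e_t - e_{t-1})z_{t-1} + (\delta_t - q_{t-1})b_{t-1} - r_t v_{t-1} \\
  &= e_t z_{t-1} + \delta_t b_{t-1} - (1+r_t)v_{t-1}.
\end{aligned}
```

The cross terms in $e_{t-1}z_{t-1}$ and $q_{t-1}b_{t-1}$ cancel, leaving exactly the left-hand side of the decomposition.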
The remaining terms on the right-hand side of the equation above are all zero in risk-adjusted expectation. This is a more general lesson that applies to any risky investment by the central bank: the central bank does not earn excess returns. Therefore, holdings of private assets by the central bank do not lead to a reduction of the fiscal burden of the fiscal authorities. Some periods the central bank will return a positive income beyond seigniorage, some periods it will require a recapitalization. On average, these will be zero, and if the fiscal authority only receives the positive income and refuses to proceed with the recapitalizations, then the central bank will become insolvent. The balance sheet of the central bank is only important through the income risk it generates and how this interacts with central bank independence.
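The retained-earnings mechanics of provisioning can be sketched in a few lines; the cap and the income path below are invented for illustration, and the function simply splits each period's net income into a dividend, a retained provision, and any recapitalization that would still be needed.

```python
# Sketch of a provisioning rule: positive net income is retained up to
# a cap instead of being paid out; later losses are charged against the
# buffer before any fiscal support is required. Hypothetical numbers.

CAP = 3.0  # maximum provision (e.g. tied to statutory capital)

def step(provision, net_income):
    """One period. Returns (dividend, recapitalization needed, new provision)."""
    if net_income >= 0:
        retained = min(net_income, CAP - provision)
        return net_income - retained, 0.0, provision + retained
    absorbed = min(-net_income, provision)
    return 0.0, -net_income - absorbed, provision - absorbed

prov = 0.0
d, recap, prov = step(prov, 2.0)    # profit year: all of it is retained
d, recap, prov = step(prov, -1.5)   # loss year: the buffer absorbs it fully
```

Once the cap is reached, further profits flow through as dividends; once the buffer is exhausted, further losses would again require fiscal support.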
5.8 Sovereign Risk: Public Bonds

Continuing to consider default, I now turn attention to public nominal bonds. These are interesting and distinct from the previous discussion for two reasons. First, the central bank is a branch of the government, so issuing new public liabilities (reserves) to buy outstanding public liabilities (government bonds) is a purely intragovernmental transaction. Second, sovereign default leads to a transfer from one branch of the government to the other. This section introduces central bank holdings of one-period government bonds, which I will denote by $B_t^{1c}$ out of the total $B_t^1$ outstanding.
5.8.1 Wallace Neutrality with No Default

The resource constraint for a central bank that holds these public bonds as its only asset is:
\[
v_t = (1+r_t)\,v_{t-1} - s_t + d_t + \frac{Q_t^1 B_t^{1c} - \delta_t B_{t-1}^{1c}}{p_t}. \tag{41}
\]
From the sole perspective of the central bank, this is almost the same as its resource constraint when it held private bonds. The only difference is that the bonds are nominal, and so their return has inflation risk. This nominal-versus-real distinction slightly changes the capital gains calculation in net income. Under a proper constant real net worth rule for calculating net income, dividends would equal:
\[
d_t = s_t + r_t n_{t-1} + \left(\frac{\delta_t}{Q_{t-1}^1} - (1+i_{t-1})\right)\frac{Q_{t-1}^1 B_{t-1}^{1c}}{p_t}. \tag{42}
\]
The difference in the last term relative to when bonds were real (compare with equation 38) is that capital gains now accrue in nominal terms.
If there is never any default—so $\delta_t = 1$ in all states of the world—then because no arbitrage implies that $1/Q_{t-1}^1 = 1+i_{t-1}$, the last term in the expression above is always equal to zero. The intuition is that in this case, reserves and government bonds have identical payoffs in the model: they are both denominated in nominal terms, and they both pay one nominal unit next period. In this case, the size of the balance sheet of the central bank is irrelevant to its own net income. Liabilities and assets are perfectly matched in payoffs and maturity, so that any returns earned from the government bonds are immediately paid to the reserve holders. This case roughly matches the experience of the US Federal Reserve over many decades before the 2007–2008 financial crisis. This neutrality extends beyond the central bank balance sheet and applies to all variables in most monetary equilibrium models (Wallace, 1981; Benigno and Nistico, 2015). When the central bank issues reserves and buys government bonds, it is substituting one type of government liability for another. Because the two have exactly the same payoff, they make no difference to any choice or constraint of anyone in the economy. Conventional open market operations by central banks, where they issue reserves to buy short-term government bonds, are neutral because they simply change the composition of the overall government liabilities between two identical elements. In reality, there are two important distinctions between reserves and short-term government bonds that lead to nonneutralities. First, reserves tend to be overnight, whereas the more liquid government bonds traded in most countries are typically of maturities of at least a few months.
Therefore, there is some, even if short, maturity mismatch in the central bank's balance sheet, which on the one hand makes the central bank affect real outcomes by engaging in maturity transformation, but on the other hand exposes it to interest-rate risk (Greenwood et al., 2016). This will be the topic of the next section. Second, reserves can only be held by banks, whereas government bonds are widely traded. This segmentation of the two markets may break the no-arbitrage condition linking their returns, especially during times of financial crisis when, by holding reserves, banks may be able to relax liquidity and collateral constraints (Reis, 2017).
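The exact offset between bond income and interest on reserves in the no-default case can be checked numerically; the rates below are hypothetical:

```python
# With delta = 1 and the no-arbitrage price Q = 1/(1+i), the income
# term (delta/Q - (1+i)) * Q * B / p is zero up to floating point:
# matched payoffs between reserves and one-period bonds mean no net income.

i_prev = 0.03               # nominal rate set last period
Q_prev = 1 / (1 + i_prev)   # one-period bond price under no arbitrage
B = 100.0                   # nominal bond holdings
p = 1.0                     # price level

delta = 1.0                 # full repayment, no default
term = (delta / Q_prev - (1 + i_prev)) * Q_prev * B / p
```

Whatever the size of the balance sheet `B`, the term stays at zero, which is the income side of Wallace neutrality.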
5.8.2 Transfers to the Government with Sovereign Default

Combining the central bank's resource constraint with the government's, and assuming that the institutional restrictions on dividends do not bind, gives the consolidated constraint:
\[
\frac{(1+i_{t-1})\,V_{t-1} + \delta_t\left(B_{t-1}^1 - B_{t-1}^{1c}\right)}{p_t} \le \mathbb{E}_t \sum_{j=0}^{\infty} m_{t,t+j}\,\left(f_{t+j} + s_{t+j}\right). \tag{43}
\]
Buying government bonds changes only the left-hand side. Wallace neutrality can also be seen here: with no default, any expansion in reserves comes from buying government bonds, which therefore reduces the bonds being held by the private sector. The
left-hand side is completely unchanged by this change, and thus, the fiscal burden is unchanged as well. With sovereign default, though, there is a clear difference. The reason is that reserves are default free. Because they are the unit of account, they are always nominally paid. When the central bank buys more sovereign bonds, there is a shift in the composition of the overall public debt away from risky defaultable bonds and toward default-free reserves. Insofar as the government intended to use default as a way to lower its fiscal burden, then by holding government bonds, the central bank makes this harder. To achieve the same real reduction in the debt burden, the government's default must be greater in the sense that the recovery rate on the bonds $\delta_t$ is smaller. This all applies to fundamental default; if instead the sovereign default is driven by multiplicity of equilibrium, then central bank purchases can instead make default less likely (Corsetti and Dedola, 2016). In terms of central bank net income, as equation 42 makes clear, after the sovereign default, the central bank makes a loss. Its loss is the gain of the fiscal authority, and it is exactly matched by a lower stream of dividends from the former to the latter. This leads to no net reduction in the fiscal burden. Another redistribution that occurs as a result of default and central bank intervention is from the holders of the public debt toward banks (Reis, 2017). Because only banks can hold reserves, only they benefit from being immune to sovereign default. Of course, this redistribution toward banks is perhaps more than offset by the extensive financial repression that governments often adopt when they find themselves in a fiscal crisis.
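The arithmetic behind why the default must be greater can be made concrete with a stylized calculation (all numbers hypothetical): since default on centrally held bonds is an intragovernmental wash, a target reduction in the real burden must come entirely from the haircut on the privately held share.

```python
# If the central bank holds a fraction `theta` of the bonds and reserves
# cannot be defaulted on, a reduction of the total burden by fraction
# `target` requires a haircut h on private holders with
# h * (1 - theta) = target, i.e. a recovery rate
# delta = 1 - target / (1 - theta). Stylized illustration only.

def required_recovery(target, theta):
    return 1 - target / (1 - theta)

no_cb = required_recovery(0.20, 0.0)    # no CB holdings: 20% haircut
half_cb = required_recovery(0.20, 0.5)  # CB holds half: 40% haircut needed
```

The more bonds the central bank holds, the lower the recovery rate on the remaining bonds must be for the same real debt relief.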
5.9 Interest-Rate Risk and QE

The Federal Reserve holds almost entirely domestic assets, issued or backed by the US federal government, and this has been the case for many decades (Toporowski, 2017). It is exposed to almost no exchange-rate or private default risk. If the Federal Reserve were to hold only short-term Treasuries (as it did until 2008), then the interest paid on reserves would be almost always precisely offset by the interest earned on the Treasuries bought with these reserves. This would result in no change in the fiscal burden of the government and no risk to the Fed's solvency. However, through its QE policies, the Fed increased the average maturity of the Treasuries it held by buying longer-term securities. This section studies the risk that this brings by including longer-term government bonds.
5.9.1 Longer-Term Bonds and Interest-Rate Risk

I assume that aside from default-free one-period government bonds, there are also two-period bonds. Extending the analysis to a full array of bonds of arbitrary
maturities would just involve keeping track of more involved notation, with little gain in insight. A two-period bond sells for price $Q_t^2$ today and promises to pay one nominal unit in two periods. Next period, this bond is identical to a one-period bond issued tomorrow, so its price is $Q_{t+1}^1$ then. Assuming no default, these two prices are linked by the no-arbitrage conditions:
\[
\mathbb{E}_t\left[\frac{m_{t,t+1}\,p_t}{Q_t^1\,p_{t+1}}\right] = 1, \qquad
\mathbb{E}_t\left[\frac{m_{t,t+1}\,Q_{t+1}^1\,p_t}{Q_t^2\,p_{t+1}}\right] = 1. \tag{44}
\]
The central bank’s resource constraint now is:
\[
v_t = (1+r_t)\,v_{t-1} - s_t + d_t + \frac{1}{p_t}\left(Q_t^1 B_t^{1c} + Q_t^2 B_t^{2c} - \delta_t B_{t-1}^{1c} - Q_t^1 B_{t-1}^{2c}\right). \tag{45}
\]
The no-Ponzi-scheme condition still applies to reserves but can now be offset by the market value of bond holdings:
\[
\lim_{T\to\infty} \mathbb{E}_t\left[m_{t,T}\left(v_T - \frac{Q_T^1 B_T^{1c} + Q_T^2 B_T^{2c}}{p_T}\right)\right] \le 0. \tag{46}
\]
The central bank issues one-period liabilities and uses some of them to buy two-period assets. In doing so, it engages in maturity transformation. In fact, since in reality post-QE, the central banks of the major advanced economies have borrowed very short (reserves are overnight) and have invested in government bonds of maturities of several years, the balance sheet of the central bank resembles that of an investment fund that borrows short and lends long. With this activity come risk and fiscal consequences.
5.9.2 Yield Curve Risk

Using again the constant net worth rule that rebates net income to the fiscal authority every period, and assuming no risk of default, the dividends are:
\[
d_t = s_t + r_t n_{t-1} + \left(\frac{Q_t^1}{Q_{t-1}^2} - (1+i_{t-1})\right)\frac{Q_{t-1}^2 B_{t-1}^{2c}}{p_t}. \tag{47}
\]
As before, there is income risk from seigniorage that is partly buffered by the return on net worth. Because I assumed there is no default risk, the return on one-period bonds is exactly the same as the interest paid on reserves, so they generate no income. The new features are the capital gains on the long-term bonds as a result of changes in their price. The arbitrage conditions in equation 44 dictate that the risk-adjusted expected real value of this term is zero. There are no expected returns from maturity transformation
in this simple model because there is no term premium to reward this activity. However, ex post, this term may not be zero depending on how interest rates and inflation evolve. To see this clearly, note that the arbitrage conditions dictate that this term is equal to:
\[
\left(\frac{1}{1+i_t} - (1+i_{t-1})\,\mathbb{E}_{t-1}\left[\frac{m_{t-1,t}\,p_{t-1}}{(1+i_t)\,p_t}\right]\right)\frac{B_{t-1}^{2c}}{p_t}. \tag{48}
\]
It is surprises about inflation and the evolution of interest rates in the future that lead this term to be nonzero. A sudden unexpected steepening of the yield curve imposes a loss on the central bank (Bassetto and Messer, 2013; Hall and Reis, 2015).
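A stylized example of this loss (hypothetical rates): the central bank funds a two-period bond with overnight reserves, and the short rate then rises unexpectedly, marking the bond down while the interest owed on reserves for the period is unchanged.

```python
# Interest-rate risk from maturity transformation, in the spirit of the
# capital-gains term above. All rates are hypothetical.

i0 = 0.02                  # short rate when the bond is bought
Q2 = 1 / (1 + i0) ** 2     # two-period bond price under a flat curve
B2 = 100.0                 # face value bought, funded by reserves

i1 = 0.05                  # short rate jumps unexpectedly next period
Q1_next = 1 / (1 + i1)     # the bond is now a one-period bond

# Realized one-period bond return minus interest owed on reserves:
income = (Q1_next / Q2 - (1 + i0)) * Q2 * B2   # negative: a loss

# If instead the short rate had stayed at i0, the term is exactly zero:
no_surprise = (1 / (1 + i0) / Q2 - (1 + i0)) * Q2 * B2
```

The loss scales with the size of the position `B2`, which is why a large post-QE balance sheet magnifies this source of income risk.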
5.9.3 Intertemporal Solvency and the Solvency Upper Bound

The Federal Reserve has a particular way to deal with these losses. The Federal Reserve Act (section 7, 12 USC 290) allows it, when net income is negative, to record a “negative liability” in the sense of a balance in a deferred account. Then, when in a future period net income is positive, the balance of the deferred account is first brought to zero before any disbursements are made to the Treasury. This rule has never been tested by a large balance in the deferred account, and it is unclear to what extent the Federal Reserve could keep to it if there were strong political opposition. It has the virtue, though, of providing fiscal backup by the fiscal authorities without requiring actions by them. They no longer need to approve a recapitalization of the central bank or to transfer funds to it. Instead, it is the fiscal authority's future claim on the central bank's net income that is used to provide this fiscal backup. To see this mathematically, let $k_t$ be the balance on the deferred account and recall that $x_t$ denotes net income. The adjusted rule for dividends to the fiscal authority then is:
\[
d_t = \max\left\{x_t - (1+r_t)\,k_{t-1},\, 0\right\}. \tag{49}
\]
Net income is first used to pay off the balance in the deferred account. Only after that is the remainder distributed as dividends. The law of motion for the account balance then is:
\[
k_t = \max\left\{0,\, \min\left\{\bar{k},\, (1+r_t)\,k_{t-1} - \max(x_t - d_t,\, 0) + \max(-x_t,\, 0)\right\}\right\}. \tag{50}
\]
If there is a limit to the size of the deferred account, $k_t \le \bar{k}$, either explicit in the rules of the central bank or implicit in terms of how far the fiscal authority will allow it, then $\bar{k} - k_t$ provides a measure of the solvency of the central bank. Once it is close to zero, then any future loss will imply insolvency in precisely the same way as before. Because the size of $\bar{k} - k_t$ may correlate with the accounting net worth of the central bank, accounting net worth being negative is not a problem per se but may be indicative of solvency risks on the horizon.
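Equations 49 and 50 can be simulated directly; the interest rate, the cap, and the income path below are hypothetical:

```python
# Direct simulation of the deferred-account rule in equations 49-50.
# Hypothetical parameter values.

r = 0.02       # real rate accruing on the deferred balance
K_BAR = 10.0   # limit on the size of the deferred account

def step(k_prev, x):
    """One period with net income x. Returns (dividend, new balance)."""
    d = max(x - (1 + r) * k_prev, 0.0)
    k = max(0.0, min(K_BAR,
                     (1 + r) * k_prev - max(x - d, 0.0) + max(-x, 0.0)))
    return d, k

k = 0.0
d1, k = step(k, -4.0)  # loss: no dividend, the account balance builds up
d2, k = step(k, 6.0)   # profit: the account is paid down first,
                       # and only the remainder goes to the Treasury
```

The loss year produces no dividend and a positive balance; the profit year first extinguishes the balance (with interest) and remits only the remainder.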
Because the central bank will keep on generating seigniorage from printing banknotes, a natural economic limit for $\bar{k}$ is the present value of this seigniorage. In that case, summing up the central bank resource constraint and using the no-Ponzi-scheme condition gives the intertemporal constraint:
\[
\mathbb{E}_t \sum_{j=0}^{\infty} m_{t,t+j}\, d_{t+j} \le \frac{\delta_t B_{t-1}^{1c} + Q_t^1 B_{t-1}^{2c}}{p_t} - (1+r_t)\,v_{t-1} + \mathbb{E}_t \sum_{j=0}^{\infty} m_{t,t+j}\, s_{t+j}. \tag{51}
\]
The right-hand side is the sum of the accounting value of the central bank and the present value of seigniorage. This is the fiscal capacity of the central bank, the upper limit on the dividends that it can achieve. Reis (2015a) and Benigno and Nistico (2015) took this constraint to define the intertemporal solvency of the central bank. As long as the right-hand side is positive, the central bank is solvent. This is a weaker requirement than period or rule solvency. Even if the central bank has periods of negative net income, and even if it deviates from some rule, as long as it can use a deferred account, then it can smooth out these income shocks by driving its reserves up and down. As long as the right-hand side of the expression above is positive, it can ensure that it will always pay zero or positive dividends to the fiscal authority, although this may require at times very large balances in the deferred account.
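As a back-of-the-envelope version of this constraint, the fiscal capacity can be computed as the accounting value of the balance sheet plus the present value of a constant seigniorage flow; all figures below are hypothetical units, not a calibration.

```python
# Fiscal capacity in the spirit of the intertemporal constraint above:
# assets minus reserves plus the present value of seigniorage.
# Hypothetical numbers, constant discount rate and seigniorage flow.

r = 0.03        # real discount rate
s = 0.2         # seigniorage per period
assets = 5.0    # current market value of bond holdings
reserves = 9.0  # reserves owed this period, interest included

# sum_{j=0}^infty s / (1+r)^j = s * (1+r) / r
pv_seigniorage = s * (1 + r) / r

capacity = assets - reserves + pv_seigniorage
```

In this example the accounting position alone is negative, yet the present value of seigniorage makes the capacity positive, so the central bank is intertemporally solvent.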
5.10 Fiscal Redistribution

The fifth and final channel through which central banks can affect fiscal burdens is by redistributing them across different fiscal authorities. In public finance, the ability of fiscal authorities to generate resources from taxation and then spend them on different public projects is as studied as the government's ability to redistribute a fixed amount of resources to different groups. Likewise, a study of the fiscal role of the central bank is not complete without also discussing the fiscal redistributions that the central bank can engender. At the same time, such a discussion is limited by two opposing forces. On the one hand, unlike fiscal authorities, central banks have no redistribution mandate and in fact are often explicitly disallowed from actively pursuing redistributive policies. One of the main reasons central banks are dissuaded from buying risky assets, as in sections 5.6 and 5.7, is the redistribution that they may lead to across the private issuers of these securities. On the other hand, any central bank action that has some effect on inflation or real activity will likely have some redistributive effects. Households and firms are differentially exposed to inflation risk, through the multiple contracts they sign or the portfolios they choose, and likewise they are hurt differently during recessions (Auclert, 2017). As a result of these two forces, a more institution-centered discussion of fiscal redistribution by central banks might rule it out from the start, while a more consequentially driven one would require a whole chapter in itself. The compromise solution in
this section is to focus on only one source of risk—public default—and to consider only redistribution across regions within a currency union.
5.10.1 Redistribution via Dividends and Income

In a currency union, the central bank faces many fiscal authorities. On top of whether it alleviates the fiscal burden of the union as a whole, the central bank might or might not be able to transfer fiscal burdens from one region to another. This section extends the model of the central bank to a currency union to investigate this possibility. For concreteness, I consider only the case of a short-term bond with default risk, although the other sources of risk discussed so far could be included. Consider a world with two regions, labeled $a$ and $b$. With only one-period real bonds with default risk, and assuming for concreteness that the central bank follows a net-income dividend rule, total dividends paid are:
\[
d_t^a + d_t^b = s_t + r_t n_{t-1} + \left(\frac{\delta_t^a}{q_{t-1}^a} - (1+r_t)\right)q_{t-1}^a b_{t-1}^a + \left(\frac{\delta_t^b}{q_{t-1}^b} - (1+r_t)\right)q_{t-1}^b b_{t-1}^b. \tag{52}
\]
With no restrictions on how to choose the dividends to each region, the central bank could promote unlimited redistribution across areas. It would just have to set $d_t^a$ to a very large number and $d_t^b$ to a correspondingly negative amount. Under the no-recapitalization constraint discussed in section 5.6, the constraint that $d_t^b \ge 0$ would limit this redistribution. But this would still leave significant room for a sizable transfer. Most central banks in advanced countries do not have this discretion. They must return their dividends to a single authority at the national level rather than to multiple regions. An exception is the Eurosystem. There is no single union-wide fiscal authority to send dividends to in the Eurozone, and they are instead distributed to the member central banks, and so effectively to regions. Each country's central bank then has its own rules of distribution to the fiscal authorities. Yet the Eurosystem itself has close to zero discretion in choosing how to distribute these dividends to each member central bank and region. There is a strict formula in the founding treaty stipulating a fixed rule such that each country's share of the dividends must equal the average of the country's population share and GDP share in the Eurozone, averaged over the last three years. Therefore, there is no scope for this discretionary redistribution. There is still discretion, though, in how the overall $d_t$ is calculated, that is, on how net income is calculated, which can be subtly manipulated to lead to redistribution. Imagine that the expected repayment on country $a$'s bonds is quite low, perhaps because the probability of default is quite high. As a result, the price of its bonds $q_t^a$ is low as well. Yet ex post, default may not materialize. In this case, the return on the bond $\delta_t^a/q_{t-1}^a$ will turn out to be well above $1+r_t$. Even if on average the central bank earns no excess returns on these bonds, for some states of the world the returns may be high.
In this situation, it may turn out to be politically difficult to justify the high returns earned by the central bank at the expense of a country going through a fiscal crisis and paying a high interest rate on its bonds. The ECB in 2014 agreed to raise the dividends to Greece above their stipulated key, by including in the Greek dividend all the gains in holding Greek bonds. This is a form of redistribution. If default materializes instead, the redistribution goes in the other direction. Country $a$ reaps the benefits from not paying its bonds, but the fall in net income lowers the dividends to both country $a$ and country $b$. Ex post, this is a transfer from country $b$ to country $a$. The only way to fully prevent redistributions through net income would be to replace equation 52 with a separate dividend rule for each region. One way to achieve this in the Eurozone is to have every national central bank hold the bonds of its country, so that any losses and gains affect the net income and dividends of that country alone. The ECB moved partly in this direction from 2015 onward, when it instituted as a rule for its government bond purchase program that 92 percent of net profits would stay at the national central banks. Whether in case of a loss there would be provisioning or deferred accounts at the national or at the Eurozone level is less clear.
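The fixed distribution key described above is mechanical and can be sketched in a few lines; the country labels and the population and GDP figures below are made up for illustration:

```python
# Sketch of a Eurosystem-style fixed key: each country's dividend share
# is the average of its population share and its GDP share.
# Country labels and figures are hypothetical.

countries = {            # name: (population, GDP), hypothetical units
    "a": (10.0, 300.0),
    "b": (30.0, 900.0),
    "c": (60.0, 1800.0),
}

tot_pop = sum(pop for pop, _ in countries.values())
tot_gdp = sum(gdp for _, gdp in countries.values())

key = {name: 0.5 * (pop / tot_pop) + 0.5 * (gdp / tot_gdp)
       for name, (pop, gdp) in countries.items()}
# The shares sum to one, leaving no discretion in splitting dividends.
```

Because the key depends only on population and GDP, any redistribution must instead work through the calculation of the overall net income, as the text explains.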
5.10.2 Redistribution via Assets and Reserves

Even if the net income rules completely sever the two regions apart, they still share a resource constraint via the central bank. Its no-Ponzi-scheme version is:
\[
\lim_{T\to\infty} \mathbb{E}_t\left[m_{t,T}\left(v_T^a + v_T^b - q_T^a b_T^a - q_T^b b_T^b\right)\right] \le 0. \tag{53}
\]
This allows for one final form of redistribution. If the central bank holds a larger share of country $a$'s bonds than its share of reserves, and does so forever, then this amounts to letting country $a$ run a Ponzi scheme at the expense of country $b$. Country $a$ is on net borrowing from the central bank, and country $b$ is on net lending to it, with this debt rolled over forever. The ECB's charter forbids this redistribution on the side of assets. The ECB until 2012 did not choose which bonds to hold. Rather, it conducted its monetary policy operations by accepting in repurchase agreements any collateral from a wide list. Therefore, it did not effectively control the composition of its portfolio across regions. Once the ECB started its securities market program (SMP), it did buy the debt of countries in trouble. The German and European constitutional courts assessed whether these bond holdings would lead to a redistribution contrary to the charter of the ECB. Equation 53 shows clearly that the answer to that question depends on a single criterion: whether the SMP was a temporary or permanent program. In the former case, no redistribution results from it. In the latter, redistribution happens. On the side of reserves, the ECB was also set up to prevent this redistribution. In the Eurosystem, when one financial institution moves its deposits from the central bank of one country to another, the liabilities to the private sector of the first central bank
fall, while the liabilities of the second central bank rise. Via the TARGET2 system, this transaction is recorded in the Eurosystem as a liability of the first country to the second. The total liabilities and reserves of each country are unchanged. The crisis again led to a deviation from this principle. The ECB allowed the national central banks to give emergency liquidity assistance around this system. Banks can get reserves from their national central bank, which prints euros with the ECB's authorization. Again, if this program is temporary, it implies no redistribution. The no-Ponzi-scheme condition has real economic bite. The bottom line is that there are many channels through which a common central bank in a fragmented fiscal union can redistribute resources. Because the central bank is a fiscal actor, with a resource constraint shared by all the regions, it can be used to voluntarily redistribute resources across regions. Even if it is not used this way, it will still lead to ex post involuntary transfers. This is the inevitable consequence of sharing a fiscal actor with a common resource constraint. Many of the ECB's rules were designed to prevent this redistribution on average, yet its unconventional policies put these rules to the test. Using the principles laid out in this chapter, one can measure and assess this redistribution in a disciplined way.
5.11 Conclusion

The central bank produces and supplies a public good (banknotes) for which it collects a revenue (seigniorage), it affects the path of a variable that works as a tax (inflation), it issues a public liability (reserves), it holds other public liabilities (government bonds), it provides a stream of dividends to the fiscal authorities that can fluctuate significantly with income (remittances), and its policies interact with those of fiscal authorities to determine equilibrium outcomes that affect tax revenue and public spending. From more than one perspective, the central bank is a fiscal agent. In a broad sense, any study of central banks and monetary policy must therefore consider their interaction with fiscal policies. There are several surveys on how the interaction between monetary and fiscal policy determines inflation (Sims, 2013; Leeper and Leith, 2016). This chapter reviewed a different set of results, from a more recent literature, which has explored the monetary-fiscal interactions from the perspective of the resource flows from the central bank to the fiscal authorities. As important as highlighting that the central bank is a fiscal agent is to understand that it is also subject to a fiscal limit. This clarifies a few dangerous fantasies that are commonly stated because they are partly correct. The first one is that central banks can never go insolvent. This is partly correct in that there is no legal insolvency procedure for a central bank. The danger associated with it is that it leads to claims that central banks can always pay for any public spending, and even if that raises inflation a little, this may be desirable when inflation is below target anyway. As this chapter clarified, central banks can and do become insolvent, not in a
legal sense but in an economic sense. This happens whenever the central bank does not have support from the fiscal authorities to cover its losses, and the reserves outstanding are no longer fully covered by its assets plus the stream of income that it can retain following remittances to the fiscal authority. When this happens, banks no longer wish to roll over reserves at the central bank, their value falls to zero, and hyperinflation and currency reform result. The central bank is insolvent in the economic sense that no one accepts its liabilities, and their value must be written down. This chapter discussed how the extent of fiscal support conceptually leads to different forms of insolvency (period, rule, and intertemporal) and how in practice different asset holdings come with different forms of income risk that may put solvency in question. The second fantasy is that a country that prints its own money can never have a sovereign debt crisis. This is partly correct because the central bank can issue default-free reserves or banknotes to buy government bonds. But this statement is dangerous and not quite right for many reasons linked to different conclusions in this chapter. First, as we saw in section 5.5, printing currency does not prevent central bank insolvency. Second, the actual real revenues from printing currency are quite small, as discussed in section 5.4. Third, as emphasized in section 5.2, printing money raises inflation, which increases the interest rate that nominal government bonds must pay. A country with a central bank may be able to commit to never default, but if this comes with expected high inflation, the interest rates on its government bonds will be just as large. Fourth, we saw that it is not enough to generate inflation to devalue the debt; in a fiscal crisis where the maturity of the debt usually becomes very low, this must be done so suddenly as to make it almost infeasible without hyperinflation, as discussed in section 5.3.
Finally, as section 5.8 discussed, if to prevent sovereign default the central bank buys most of the government bonds, this will just lead the default to happen through central bank insolvency or currency reform. The third claim is that the central bank can freely redistribute across regions. The ECB is then blamed for not doing more to help Greece during its debt crisis post-2010. Section 5.10 shows that indeed the central bank can redistribute across regions in a currency union. But the institutional design of the ECB (and the Federal Reserve or the BoE) makes these transfers extremely difficult without breaking the legal mandate of the central bank. The overall point of this chapter is not to say that central banks cannot play a role in a fiscal crisis. Neither is it to argue that monetary policy cannot constrain fiscal policy in many ways. Rather, it was to analyze the resource flows between the central bank and the fiscal authorities. From that analysis comes the conclusion that these resource flows are typically small and do not come for free.
References

Aguiar, M., and M. Amador. 2014. “Sovereign Default.” In Handbook of International Economics, edited by G. Gopinath, E. Helpman, and K. Rogoff, 647–687. Amsterdam: Elsevier North Holland.
Amtenbrink, F. 2011. “Securing Financial Independence in the Legal Basis of a Central Bank.” In The Capital Needs of Central Banks, edited by S. Milton and P. Sinclair, 83–95. New York: Routledge. Archer, D., and P. Moser-Boehm. 2013. “Central Bank Finances.” BIS paper 71. Auclert, A. 2017. “Monetary Policy and the Redistribution Channel.” NBER working paper 23451. Bassetto, M., and T. Messer. 2013. “Fiscal Consequences of Paying Interest on Reserves.” Fiscal Studies 34: 413–436. Benigno, P., and S. Nistico. 2015. “Non-neutrality of Open Market Operations.” CEPR discussion paper 10594. Blinder, A. S. 2004. The Quiet Revolution: Central Banking Goes Modern. Arthur M. Okun Memorial Lecture Series. New Haven: Yale University Press. Cagan, P. 1956. “The Monetary Dynamics of Hyperinflations.” In Studies in the Quantity Theory of Money, edited by M. Friedman, 25–117. Chicago: University of Chicago Press. Chari, V. V., L. J. Christiano, and P. J. Kehoe. 1991. “Optimal Fiscal and Monetary Policy: Some Recent Results.” Journal of Money, Credit and Banking 23, no. 3: 519–539. Corsetti, G., and L. Dedola. 2016. “The Mystery of the Printing Press: Self-fulfilling Debt Crises and Monetary Sovereignty.” Journal of the European Economic Association 14, no. 6: 1329–1371. Fischer, S., R. Sahay, and C. A. Végh. 2002. “Modern Hyper-and High Inflations.” Journal of Economic Literature 40, no. 3: 837–880. Friedman, M. 1969. The Optimum Quantity of Money and Other Essays. Chicago: Aldine. Greenwood, R., S. G. Hanson, and J. C. Stein. 2016. “The Federal Reserve’s Balance Sheet As a Financial-Stability Tool.” Jackson Hole Symposium, Federal Reserve Bank of Kansas City. Hall, R. E., and R. Reis. 2015. “Maintaining Central-Bank Solvency under New-Style Central Banking.” NBER working paper 21173. Hall, R. E., and R. Reis. 2016. “Achieving Price Stability by Manipulating the Central Bank’s Payment on Reserves.” NBER working paper 22761.
Hilscher, J., A. Raviv, and R. Reis. 2014. “Inflating Away the Public Debt? An Empirical Assessment.” NBER working paper 20339. Hilscher, J., A. Raviv, and R. Reis. 2016. “Measuring the Market Value of Central Bank Capital.” LSE manuscript. Hoggarth, G., V. Saporta, and R. Reis. 2002. “Costs of Banking System Instability: Some Empirical Evidence.” Journal of Banking and Finance 26, no. 5: 825–855. Leeper, E., and C. Leith. 2016. “Understanding Inflation As a Joint Monetary- Fiscal Phenomenon.” In Handbook of Macroeconomics, vol. 2, edited by J. Taylor and H. Uhlig, 2305–2415. Amsterdam: Elsevier North Holland. Lucas, R. E. 2000. “Inflation and Welfare.” Econometrica 68, no. 2: 247–274. Mankiw, N. G., and R. Reis. 2010. “Imperfect Information and Aggregate Supply.” In Handbook of Monetary Economics, vol. 3A, edited by B. Friedman and M. Woodford, 183–230. Amsterdam: Elsevier North Holland. Nosal, E., and G. Rocheteau. 2011. Money, Payments, and Liquidity. Cambridge: MIT Press. Reis, R. 2013. “Central Bank Design.” Journal of Economic Perspectives 27, no. 4: 17–44. Reis, R. 2015a. “Different Types of Central Bank Insolvency and the Central Role of Seignorage.” Journal of Monetary Economics 73: 20–25. Reis, R. 2015b. “How Do Central Banks Control Inflation? A Guide to the Perplexed.” Lecture notes for the AEA continuing education program.
170 Reis Reis, R. 2016. “Funding Quantitative Easing to Target Inflation.” Jackson Hole Symposium, Federal Reserve Bank of Kansas City. Reis, R. 2017. “QE in the Future: The Central Bank’s Balance Sheet in a Fiscal Crisis.” IMF Economic Review 65, no. 1: 1–42. Sargent, T. J., and N. Wallace. 1981. “Some Unpleasant Monetarist Arithmetic.” Quarterly Review Fall issue: 1–17. Sidrauski, M. 1967. “Rational Choices and Patterns of Growth in a Monetary Economy.” American Economic Review, Papers and Proceedings 57: 534–544. Sims, C. A. 2003. “Fiscal Aspects of Central Bank Independence.” Unpublished Princeton University. Sims, C. A. 2013. “Paper Money.” American Economics Review Papers and Proceedings 103, no. 3: 563–584. Stella, P., and A. Lonnberg. 2008. “Issues in Central Bank Finance and Independence.” IMG working paper 08-3. Taylor, J. B. 1993. “Discretion versus Policy Rules in Practice.” Carnegie-Rochester Conference Series on Public Policy 39, no 1: 195–214. Wallace, N. 1981. “A Modigliani-Miller Theorem for Open-Market Operations.” American Economic Review 71, no. 3: 267–274.
This project has received funding from the European Union's Horizon 2020 research and innovation programme (INFL) under grant agreement No. 682288.
Chapter 6

The Impact of the Global Financial Crisis on Central Banking

Alex Cukierman
6.1 Introduction

The global financial crisis (GFC) led to numerous changes in the policies, relative emphasis on alternative objectives, policy instruments, and tasks of major central banks. It moved the financial stability motive to the forefront of central banks' policy concerns and revived explicit recognition of the central bank's lender of last resort (LOLR) function in the face of shocks to the financial system. Although the financial stability motive was (at least implicitly) present alongside the price stability motive prior to the crisis, the crisis highlighted the critical importance of the LOLR and regulatory functions of central banks and other regulators.1 Since 2008, the financial stability motive has been a main mover of monetary policy in the United States and subsequently in Europe and some emerging markets. By inducing major central banks to reach the zero lower bound (ZLB) quickly, the GFC opened the door for substantial quantitative easing (QE) operations and triggered a still ongoing process of regulatory reforms. US policy rates reached the ZLB as early as late 2008 and were supplemented by huge QE programs. During the first quarter of 2009, the Bank of England (BoE) base rate was reduced to 0.5 percent and has been maintained at this level to this day. Although it did not react with the same immediacy and amplitude, the more conservative European Central Bank (ECB) embarked on a similar path after a while and as of 2015 has moved its policy rate into negative territory. The crisis led to the development and utilization of nonconventional policy instruments on a large scale. The most prominent among these are QE, direct liquidity injections into the private banking system, negative interest rates, and forex interventions. It intensified both practical and academic research on the interactions
between traditional monetary policy aimed at price stability and monetary, financial, and regulatory policies aimed at preserving the stability of the financial system. In countries directly hit by the crisis (such as the United States in the face of the subprime crisis or the euro area in the face of the Greek sovereign debt crisis), central banks had two distinct functions. The more immediate function was to inject large quantities of liquidity quickly in order to prevent a total arrest of financial intermediation. But after some measure of financial stability had been restored, the longer-term issue confronting policymakers and reformers in the affected countries was how to reform regulation, both within and outside central banks, so as to reduce the likelihood of future crises. Until the eruption of the subprime crisis, most Western central banks practiced standard inflation targeting. More precisely, given inflationary expectations, they set their short-term interest rates so as to minimize a linear combination of the output and inflation gaps over the medium term. This changed dramatically following the downfall of Lehman Brothers in September 2008. In spite of huge liquidity injections and policy rates in the vicinity of the ZLB, inflation above target ceased to be an issue and remains subdued in both the United States and Europe. In fact, during the last couple of years, the central banks of those big players became concerned with negative inflation gaps (inflation below target and even in the negative range). The expansionary policies of the big blocks led both directly and indirectly to a new trade-off, this time between limiting the impact of the financial crisis on the real economy and the formation of bubbles in real estate and financial assets.
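As a stylized sketch of the precrisis inflation-targeting problem mentioned earlier (the notation is illustrative and not from the text), the central bank chooses a path of short-term rates to minimize expected squared deviations of inflation from target and of output from potential:

```latex
% Stylized inflation-targeting loss function (illustrative notation):
%   \pi_t : inflation,  \pi^* : inflation target,  y_t : output gap,
%   \lambda : relative weight on the output gap,  \beta : discount factor
\min_{\{i_{t+j}\}} \; E_t \sum_{j=0}^{\infty} \beta^{j}
  \Bigl[ \bigl(\pi_{t+j} - \pi^{*}\bigr)^{2} + \lambda\, y_{t+j}^{2} \Bigr]
```

Once the policy rate is pinned at the ZLB, this single-instrument formulation no longer describes policy, which is what opened the door to the unconventional instruments the chapter turns to next.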
In particular, the sustained expansionary policies of the Fed and the ECB induced central banks in many smaller countries to follow suit, leading to investment booms in the real estate and financial markets of those and other countries. Prior to the crisis, the extent to which monetary policy should take the risk of bubbles in those markets into account was controversial. Following the experience of the last several years, many central banks now do give some weight to those considerations. The GFC led to a far-reaching, still ongoing process of regulatory reform aimed at identifying systemic risk early on, closing precrisis regulatory loopholes, and, particularly in the European case, devising regulatory institutions able to deal effectively with the international dimensions of systemic risk. Major pieces of legislation/resolutions in this context are the July 2010 Dodd-Frank Act in the United States and the creation of a European banking union by the heads of state and government of the European Union in June 2012. In both cases, the respective central banks (the Fed in the United States and the ECB in the euro area) were charged with the supervision and regulation of systemic risks and, in parallel, given more authority than in the precrisis era. Similarly, in the United Kingdom, an independent Financial Policy Committee (FPC) charged with monitoring and reducing systemic risks was created in April 2013 within the BoE. In this chapter, section 6.2 describes the main unconventional monetary policy instruments that emerged during the crisis. Section 6.3 explores the interactions between QE and the ZLB and provides some evidence on the equivalence between normal interest-rate policy and QE. Consequences of the relative passivity of fiscal
policies for monetary policies during the GFC are explored in section 6.4, along with the old-new idea of “helicopter money” as an efficient device against deflation. The GFC triggered a persistent process of regulatory reforms in which central banks play an important role, particularly in the area of macroprudential regulation. Section 6.5 briefly describes the still ongoing regulatory reforms in the United States, the euro area, and the United Kingdom, with particular emphasis on the growing role of central banks in the macroprudential area. The ZLB constraint is particularly problematic when, as some recent evidence shows, the natural rate of interest is deep in negative territory. To overcome this problem, some economists have proposed raising the inflation target. Others have proposed taxing cash in various ways in order to prevent a flight to cash when the short-term rate is negative. Section 6.6 describes those proposals and discusses their limitations. The section also evaluates the old-new idea of 100 percent reserve requirements on banks and speculates about which of the unconventional monetary policy instruments used during the GFC are most likely to persist into the future. Section 6.7 documents a persistent slowdown in net new credit formation during the GFC, discusses the reasons for this slowdown, and outlines implications for exit strategies.
6.2 The Emergence of Unconventional Policy Instruments

The GFC led to the emergence of unconventional monetary policy instruments, the most important of which are quantitative easing, forex interventions, negative policy rates, and forward guidance.
6.2.1 Quantitative Easing

Also labeled large-scale asset purchases by the Fed, QE is not new. Buying or selling assets by the central bank is a normal byproduct of conventional interest-rate policy even during normal times, since maintenance of the policy rate at the level desired by the central bank has to be supported by the injection or removal of liquidity through the buying or selling of assets. Several factors distinguish asset purchases during the crisis from their normal-times counterparts. First, the bulk of such purchases were made once the policy rate reached the ZLB. Second, particularly in the United States, the range of assets purchased was much wider than during normal times. Rather than limiting itself to government debt, the Fed bought mortgage-backed securities and other types of commercial debt. During the immediate aftermath of Lehman's fall, the Fed also bought banking stocks in an attempt to strengthen the capital position of the banking system.2
Finally, the exceptionally large magnitude of US QE operations distinguishes them from asset purchases during normal times. In reaction to the panic that engulfed financial markets in the immediate aftermath of Lehman's collapse, QE was used mainly to inject liquidity into the capital market. But as the recession caused by the financial crisis persisted, QE gradually became a device for circumventing the ZLB by directly lowering long-term interest rates in order to stimulate the economy.3 Recent literature considers QE a substitute for lowering the policy interest rate when the latter is stuck at the ZLB and attempts to measure the direct impact of QE on medium- and long-term interest rates.4 Although it engaged in some limited asset purchases as early as 2008–2009, the ECB started to engage in large-scale asset purchases only at the beginning of 2015. Prior to that, the bulk of its liquidity injection operations were done through limited-term advances to the banking system.5
6.2.2 Forex Market Interventions

These are not new. They were used by central banks of emerging markets as a device for ironing out “excessive fluctuations” in exchange rates long before the GFC. For several decades prior to the crisis, China pegged its currency to the US dollar in order to maintain the competitiveness of its exports. But the zero (and recently even negative) interest-rate policies of the United States and, more recently, of the ECB added a new dimension to such interventions. To preserve their competitiveness, small open economies such as Switzerland, Israel, the Czech Republic, and others reacted to the expansionary monetary policies of the big blocks by lowering their policy rates, often in conjunction with forex interventions designed to stem the appreciation of their currencies. The potential importance of such interventions gradually increased as their policy rates approached the ZLB. In small open economies, an important transmission channel of monetary policy operates through the exchange rate. Once the ZLB is reached, the relative importance of forex purchases as a device for moderating appreciation rises. Some countries, such as Israel, initially used this instrument to build up forex reserves and occasionally to iron out excessive appreciation tendencies. As of September 2011, the Swiss National Bank (SNB) introduced a one-sided peg on the euro/Swiss franc exchange rate at 1.2 and effectively defended it until January 2015, when the ECB announced a large QE program. This policy led to a substantial increase in Switzerland's forex reserves, which, as of late 2015, are at a level similar to the country's GDP. The SNB's extreme reliance on forex interventions is directly related to the GFC. Being a safe haven currency, the Swiss franc tended to appreciate during the frequent panics experienced during the GFC, eroding the competitiveness of Swiss exports.
Having reached the ZLB relatively early, the SNB found it expedient to rely heavily on forex interventions. It is notable that once the ZLB is reached, there is
no need to sterilize such interventions, implying that they become similar to the QE operations practiced by the Fed and the ECB.6
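In stylized balance-sheet terms (the symbols here are mine, not the chapter's), a purchase of forex reserves $\Delta F$ expands the monetary base $M$ one-for-one unless it is sterilized by a sale $\Delta B^{s}$ of domestic bonds:

```latex
% Stylized central bank balance-sheet identity:
%   \Delta M : change in monetary base,  \Delta F : forex purchases,
%   \Delta B^{s} : sterilizing sales of domestic bonds
\Delta M \;=\; \Delta F \;-\; \Delta B^{s}
% Fully sterilized:   \Delta B^{s} = \Delta F  =>  \Delta M = 0
% Unsterilized (ZLB): \Delta B^{s} = 0         =>  \Delta M = \Delta F
```

At the ZLB there is no positive policy rate to defend, so the unsterilized case applies: the base expands against asset purchases, which is exactly the QE-like character noted above.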
6.2.3 Negative Interest Rates

Recent negative inflation rates, along with economic activity below potential, induced a number of central banks to experiment with negative policy rates. Sweden, Denmark, and Switzerland set their deposit rates at negative levels to preserve competitiveness and to induce banks to lend. During the first quarter of 2015, the ECB adopted a negative deposit rate, mainly in order to stimulate banks' lending. A similar policy was enacted in early 2016 by the Bank of Japan. Negative rates on banking reserves mean that banks have to pay for keeping deposits at the central bank. Most banks do not pass this cost on to small and medium-size depositors but do charge large depositors. At first blush, those developments appear to imply that the ZLB is not as binding as was believed in the past. A more sensible interpretation is that the bound may be pushed somewhat below zero but that, as long as the returns on cash and on deposits differ, institutional reasons make it impossible to reduce the policy rate substantially below zero. This may be a serious problem if the natural rate is expected to be substantially below zero for an extended period of time (Cúrdia 2015; Cúrdia et al. 2015). This issue is taken up in the next section.
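A simple arbitrage condition makes the point (the storage-cost parameter $s$ is an illustrative assumption, not a figure from the text): holding deposits at a negative rate $i$ is tolerable only as long as it beats holding cash, whose net return after storage, insurance, and handling costs is $-s$:

```latex
% Effective lower bound with a cash storage cost s > 0 (illustrative):
i \;\geq\; -\,s
% If i < -s, a flight from deposits into cash becomes profitable,
% so the effective bound sits at -s: somewhat below zero, but not far.
```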
6.2.4 Forward Guidance

Like QE and forex interventions, forward guidance is not a new monetary policy instrument. However, the ZLB constraint, along with widespread use of QE in the United States and more recently in the euro area, made it an effective complementary policy tool. Obviously, for forward guidance to be effective, preannouncements of future policy plans have to be subsequently delivered. This may be a problem if unexpected future developments call for deviations from previously announced policies.7 Following experimentation with date-based statements about the target policy rate through 2012, the Fed started to handle this difficulty by using contingent forward guidance. In 2013, the Fed announced numerical thresholds for the rates of unemployment and inflation. In particular, it stated that the target policy rate would stay low as long as unemployment was above 6.5 percent and inflation remained at or below 2.5 percent. In his memoirs, Ben Bernanke (2015, 532) stresses that the figure for unemployment was a threshold rather than a trigger, in the sense that the Federal Open Market Committee (FOMC) would not even consider lifting the policy rate as long as unemployment was above 6.5 percent. This numerical statement was repeatedly backed up by the more general statement that the Fed would do “whatever it takes” to reduce the rate of unemployment. In the face of mounting doubts about the viability of the euro area, President
Mario Draghi of the ECB used similar language to signal that the ECB would do whatever it could to ensure the viability of the euro area.8
6.3 Interactions between the ZLB, QE, and Optimal Monetary Policy

The GFC led to a very substantial and persistent decrease in net new credit formation by banks as well as through the capital market. This process started in the United States in parallel with the rescue of Bear Stearns in March 2008 and intensified after the collapse of Lehman Brothers. It subsequently spread to the Eurozone in parallel with the intensification of the European sovereign debt crisis. (Section 6.7 documents the magnitude of the credit arrest.) Because the main channel through which monetary policy influences economic activity is the cost and availability of credit, this credit shrinkage was the major challenge confronting the central banks of countries affected by the GFC. In normal times, the central bank affects the whole range of interest rates by changing the short-term rate. Conventional wisdom holds that through substitution along the yield curve, a decrease in the short-term rate induces a decrease in longer-term rates on both banking and capital market credit. It is widely agreed that the stimulatory impact of monetary policy on the real economy operates mainly through such longer-term rates as well as by increasing the availability of credit to credit-constrained firms and individuals. In normal times, the desired short-term rate is achieved by open market operations in government securities. But once the ZLB is reached, an essential link in the transmission of stimulatory monetary policy is disconnected, since the short-term rate is stuck at the ZLB. However, even in this case, sufficiently long-term rates on government debt remain positive, and so do the rates on (normally riskier) corporate bonds. With the onset of the subprime crisis, this statement was a fortiori true for mortgage-backed securities.
Being stuck at the ZLB as early as the end of 2008, the Fed was the first major central bank to introduce large-scale QE operations as a substitute for further lowering of the short-term policy rate. The short-term link broken by the ZLB was bypassed by directly buying longer-term government securities, corporate bonds, and mortgage-backed securities. In other words, instead of trying to influence the (largely positive) longer-term rates through a short-term policy rate, the Fed's policy aimed at directly lowering long-term rates. In 2012, the Fed supplemented those operations with a “maturity extension program” under which it decided to swap $400 billion of Treasury securities it owned with maturities of less than three years for Treasury securities with maturities ranging from six to thirty years. As with QE, the explicit aim was to lower long-term rates and to release credit constraints. However, unlike QE, this operation did not require an expansion of the Fed's balance sheet, since the purchase of long-term government debt was financed by unloading short-term government debt (Bernanke 2015, 520).9
The ECB started to use QE on a grand scale only at the beginning of 2015. This is due to the fact that the ratio between credit flows through the capital market and credit flows through the banking system is smaller in the euro area than in the United States, in conjunction with the stronger institutional focus of the ECB on price stability.10 The recent extended period of negative output and inflation gaps in the Eurozone, along with the recent better performance of the United States, convinced the ECB to engage in QE operations in spite of its institutional leaning toward conservatism. Empirical work by Swanson and Williams (2014) for the United States suggests that in spite of the fact that the policy rate had been at the ZLB since December 2008, interest rates with a year or more to maturity were responsive to news during 2008–2010, suggesting that the ZLB on the short end of the yield curve did not constrain movements in longer-term rates. The authors note that until late 2011, financial markets consistently expected the federal funds rate to lift off within four quarters. They attribute this finding to the combined effect of QE and forward guidance. Only beginning in late 2011 did the sensitivity of intermediate-maturity Treasury yields to news fall closer to zero. In spite of this, they conclude that the Fed's large-scale asset purchases (QE) along with forward guidance continued to move medium- and long-term rates. Krishnamurthy and Vissing-Jorgensen (2011) estimate that QE1 and QE2 reduced the rates on Treasury bills and on agency and highly rated corporate bonds by 100 and 20 basis points, respectively. But the impact on lower-grade bonds was negligible unless they were directly targeted within a QE program. In particular, the impact of QE on mortgage-backed securities was significant only when QE involved the direct purchase of such securities.
The policy rates of the United States, the United Kingdom, and Japan have been either at the ZLB or within a short distance of it for quite some time. Even after the first liftoff of the US federal funds rate at the end of 2015, the Fed continued to postpone the next increase and signaled that further increases were going to be very gradual. Although a latecomer to the vicinity of the ZLB, the ECB currently has a negative policy rate, which is expected to remain in this range for quite some time. Partly because of those policies, long-term riskless rates are also low. But, as documented in Bean et al. (2015), long-term riskless rates had been going down for quite some time prior to the GFC because of long-term structural reasons, such as an abundance of savings due to aging, secular stagnation of investment, and increased demand for safe assets. Some researchers have interpreted this state of affairs to imply that the Wicksellian real natural rate of interest is currently deep in negative territory for an extended period of time (Cúrdia 2015; Cúrdia et al. 2015). By contrast, since policy rates are very low and actual inflation is zero or even negative, the actual short-term riskless rate is either zero or marginally positive. In either case, it is, on this view, substantially higher than the natural rate. The modern formulation of the Wicksellian approach as embedded in the New Keynesian framework implies that in the face of shocks, optimal monetary policy should keep the actual real rate as close as possible to the natural real rate. This implies that the closeness of current riskless rates to the ZLB interferes with the
implementation of optimal monetary policy. (Section 6.6 discusses recently proposed institutional solutions to this problem.)
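In the New Keynesian notation alluded to above (standard symbols, not reproduced from the text), the constraint can be summarized as follows. With the nominal policy rate bounded at $i_t \geq 0$, the actual short-term real rate is bounded below by minus expected inflation:

```latex
% Fisher relation plus the ZLB (standard notation):
%   r_t : actual real rate,  i_t : nominal policy rate,
%   E_t \pi_{t+1} : expected inflation,  r_t^{n} : natural real rate
r_t \;=\; i_t - E_t \pi_{t+1} \;\geq\; -\,E_t \pi_{t+1}
% Optimal policy prescribes r_t \approx r_t^{n}, which is infeasible
% when r_t^{n} is deeply negative and expected inflation is near zero.
```

For example, with $E_t\pi_{t+1} = 0$, the real rate cannot be pushed below zero no matter how far the natural rate falls, which is the problem described in the text.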
6.4 Consequences of the Relative Passivity of Fiscal Policies for Monetary Policies during the GFC and “Helicopter Money”

Traditional macroeconomic theory implies that when the interest rate is stuck at the ZLB, fiscal policy is particularly potent in stimulating aggregate demand. The reason is that expansionary fiscal policies are unlikely to trigger increases in interest rates at the ZLB. However, excluding the American Recovery and Reinvestment Act of 2009, fiscal policies in both the United States and the Eurozone were relatively passive during the bulk of the GFC, for several reasons. In the United States, the main reason was the reluctance of a relatively conservative Congress to approve deficit financing, along with increased polarization between President Barack Obama and a conservative Congress. In the euro area, the vicious circle between sovereign debt defaults and banking defaults, along with the decentralization of fiscal decisions, prevented large-scale discretionary fiscal expansions. As a consequence of this fiscal immobility, monetary policies in both the United States and the Eurozone had to take a larger share of the anticyclical effort required to dampen the worldwide recession triggered by the GFC. The Fed responded vigorously and swiftly. Due mainly to its strong price stability mandate, the response of the ECB was slower and not as vigorous as that of the United States. But, as described earlier, with both the output and inflation gaps in the Eurozone substantially in negative territory between 2014 and 2016, the anticyclical response of the ECB also became more vigorous. The highly expansionary monetary policies of those big blocks (along with similar policies in the United Kingdom and Japan) induced central banks of smaller economies also to reduce their interest rates. Although the stimulatory effects of such policies are beneficial for world aggregate demand, they have several undesirable side effects.
First, as more and more countries reduce their policy rates, the stimulatory impact of the real exchange rate via foreign trade on each individual country's aggregate demand is reduced. This is somewhat reminiscent of “beggar thy neighbor” policies during the Great Depression, with the following important difference: during the Great Depression, the instruments used to implement such policies were tariffs and quantitative restrictions on trade. During the GFC, the somewhat less blunt instruments are worldwide expansionary monetary policies that drive short-term policy rates down toward the ZLB. In its attempt to maintain external competitiveness, the central bank of each country reduces
its policy rate, thus partially neutralizing the impact of the low rates in other countries on their competitiveness. The popular press refers to this as a “currency war.” However, unlike tariffs during the Great Depression, the combined impact of worldwide monetary expansions on worldwide aggregate demand is still positive. The second undesirable effect is that universally low interest rates encourage the formation of bubbles in real estate and financial markets. Although the charters of most central banks do not state that prevention of bubbles is an objective of the central bank, in practice, central banks with sufficient regulatory authority have used capital adequacy ratios, liquidity ratios, and payment-to-income ratios on mortgages during the GFC in order to partially offset the impact of low rates on the formation of asset bubbles. The third undesirable aspect of low rates is that they reduce the pensions of savers who are reluctant to invest a substantial fraction of their savings in risky assets. Caballero (2010) has suggested the following solution to this conundrum: decrease sales taxes without decreasing government expenditures, and finance the shortfall by means of a money gift from the Fed to the US Treasury. This is one way of implementing a “helicopter money drop.” In order to ensure that such a step does not revive inflation once the economy returns to full employment, Caballero proposes that the Treasury commit to returning the Fed's “gift” once the economy is at its potential level. Although back in 2010 Caballero was mainly concerned with the US economy, as of the writing of this chapter, this proposal is obviously more relevant for the euro area than for the United States.
A comparison by Cukierman (2017) of the inflationary consequences of comparable high-powered monetary expansions in the United States during the GFC and in Germany during the post-World War I inflation shows that inflation was much higher in Germany than in the United States. The main difference between the two episodes is that in Germany, the increase in money was immediately translated into government purchases, much as Caballero proposed in 2010. I believe this historical comparison strongly supports the view that a “helicopter drop” is very effective in quickly raising even a negative inflation rate toward the inflation target. It is ironic that although this idea is diametrically opposed to one of the basic tenets of the ECB charter, it is probably the swiftest way to avoid a prolonged period of negative inflation such as is currently the case in the euro area.
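In terms of the consolidated government budget constraint (a stylized formulation; the notation is mine, not the chapter's), a helicopter drop amounts to financing a tax cut with base money rather than with new bonds:

```latex
% Consolidated government budget constraint (stylized):
%   P_t : price level,  G_t : purchases,  T_t : taxes,
%   B_t : bonds,  M_t : monetary base,  i_{t-1} : interest on past debt
P_t\,(G_t - T_t) + i_{t-1} B_{t-1} \;=\; \Delta B_t + \Delta M_t
% Helicopter drop: cut T_t with \Delta B_t = 0, so the entire shortfall
% is covered by \Delta M_t > 0.  Caballero's safeguard is a commitment
% to reverse \Delta M_t once output returns to potential.
```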
6.5 The Role of Central Banks in Postcrisis Regulatory Reform

The GFC triggered a process of regulatory reform, particularly in the countries that were seriously affected by the crisis. Although details of those reforms differ across countries, there appears to be a consensus that detection, regulation, and prevention of systemic risk should be under the authority of the central bank, usually in cooperation
with additional micro-oriented regulatory bodies. The postcrisis regulatory reforms in the United States, the euro area, and the United Kingdom are all built on the view that the best institutional location for macroprudential regulation is the central bank. In addition, recent legislation aims at providing regulatory frameworks in which cooperation and information exchange between different regulators are tighter than in the precrisis era.
6.5.1 US Regulatory Reform

The July 2010 US Dodd-Frank Act created a Financial Stability Oversight Council (FSOC) of about ten regulators, chaired by the Treasury secretary, in which the Fed plays a central role. Those regulators are expected to meet on a regular basis and to issue specific regulations designed to achieve the broad goals of the act. This process has been going on since the passing of the act and is still ongoing.11 One of the main precrisis problems of the US regulatory system was the fractionalization of regulatory functions across different regulators, often leaving patches of the financial system uncovered by regulation. Important objectives of the Dodd-Frank Act are to eliminate such patches and to increase coordination and information exchange among the different regulators within the FSOC. The council is charged with identifying and responding to emerging risks throughout the financial system. For this purpose, the council is directed and authorized to perform the following functions:12

(i) Make recommendations to the Federal Reserve for increasingly strict rules for capital, leverage, liquidity, risk management, and other requirements as companies grow in size and complexity, with significant requirements on companies that pose risks to the financial system.

(ii) Be authorized to require, with a two-thirds vote and vote of the chair, that a nonbank financial company be regulated by the Federal Reserve if the council believes there would be negative effects on the financial system if the company failed, or if its activities would pose a risk to the financial stability of the United States.

(iii) Be able to approve, with a two-thirds vote and vote of the chair, a Federal Reserve decision to require a large, complex company to divest some of its holdings if it poses a grave threat to the financial stability of the United States—but only as a last resort.
(iv) Establish a floor for capital that cannot be lower than the standards in effect today, and authorize the council to impose a ceiling of a 15-to-1 leverage-to-capital ratio at a company, if necessary, to mitigate a grave threat to the financial system.

(v) Require large, complex financial companies to periodically submit plans for their rapid and orderly shutdown, should the company go under—also known as funeral plans or living wills.
The Impact of the Global Financial Crisis 181

(vi) Create an orderly liquidation mechanism for the Federal Deposit Insurance Corporation (FDIC) to unwind failing systemically significant financial companies. Shareholders and unsecured creditors bear losses, and management and culpable directors will be removed.

(vii) Require that Treasury, the FDIC, and the Federal Reserve all agree to put a company into the orderly liquidation process to mitigate serious adverse effects on financial stability, with an up-front judicial review.

(viii) Require that any emergency lending by the Fed be approved by the secretary of the Treasury, and that loans not be made to insolvent firms.13

The legislation embraces the Thornton-Bagehot principle that emergency liquidity should be provided by the Fed—but only to solvent institutions. Interestingly, such emergency liquidity has to be approved by the secretary of the Treasury. This feature reflects the view that in a democratic society, a financial crisis that requires large liquidity injections cannot be left only to the discretion of the central bank.14
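The 15-to-1 ceiling in item (iv) above is a simple ratio test: a firm breaches it when its assets exceed fifteen times its capital. As a minimal sketch (the dollar figures and the helper function are hypothetical illustrations, not part of the act), the check looks like this:

```python
# Hypothetical illustration of the Dodd-Frank 15-to-1 leverage-to-capital ceiling.
# All numbers are invented for the example; the act specifies only the ratio itself.

def exceeds_leverage_ceiling(total_assets: float, capital: float,
                             ceiling: float = 15.0) -> bool:
    """Return True if total assets exceed `ceiling` times capital."""
    return total_assets > ceiling * capital

# A firm with $600bn in assets funded by $30bn of capital is levered 20-to-1,
# above the ceiling; with $50bn of capital it is levered 12-to-1, below it.
print(exceeds_leverage_ceiling(600, 30))   # prints True  (600 > 15 * 30 = 450)
print(exceeds_leverage_ceiling(600, 50))   # prints False (600 < 15 * 50 = 750)
```

The ceiling binds only "if necessary, to mitigate a grave threat to the financial system," so in practice the council's judgment, not the arithmetic, is the operative constraint.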
6.5.2 Euro Area Regulatory Reform

The main precrisis problem of the euro area was, and still is, the variety of regulatory systems across different countries. Although some variety is probably inevitable in view of differences in countries' financial structures, the globalization of banking business within the euro area calls for a greater degree of common supervision and regulation. To address this challenge, a Single Supervisory Mechanism (SSM) for Europe was created in November 2014. It is made up of the ECB and the national supervisory authorities of the participating countries. Its main objectives are to ensure the safety and soundness of the European banking system, increase financial integration and stability, and ensure consistent supervision. The SSM is one of the two pillars of the EU banking union, along with the Single Resolution Mechanism (SRM). The European banking union integrates supervisory and resolution mechanisms for banks within the European Union and, as of November 2014, makes the ECB directly responsible for the supervision of 129 significant banks within the European Union. These banks hold almost 82 percent of banking assets in the euro area. Banks that are not considered significant are known as "less significant" institutions. They continue to be supervised by their national supervisors, in close cooperation with the ECB. At any time, the ECB can decide to directly supervise any one of these banks to ensure that high supervisory standards are applied consistently. All euro area countries participate automatically in the SSM. Other EU countries that do not have the euro as their currency can choose to participate. To do so, their national supervisors enter into "close cooperation" with the ECB. The SRM became effective in December 2015. Its main purpose is to ensure the efficient resolution of failing banks with minimal costs to taxpayers and to the real economy.
182 Cukierman

A Single Resolution Board ensures swift decision-making procedures, allowing a bank to be resolved over a weekend. As a supervisor, the ECB has an important role in deciding whether a bank is failing or likely to fail. A Single Resolution Fund, financed by contributions from banks, is available to pay for resolution measures.15
6.5.3 UK Regulatory Reform

The United Kingdom's Financial Policy Committee (FPC), formed in early 2011, is the country's systemic risk regulator, charged with overseeing the overall stability of the financial system. The committee has tough powers to tame systemic risk by clamping down on loose credit, overheated sectors, and previously unregulated parts of the financial system. In parallel, the Prudential Regulation Authority (PRA), a firm-specific, microprudential, operationally independent subsidiary of the BoE, was established to manage significant risks on the balance sheets of individual financial institutions. The affiliation of this authority with the BoE should facilitate cooperation and information exchange between the macro (FPC) and micro (PRA) regulators.16
6.5.4 Common Elements in US, UK, and Euro Area Regulatory Reforms and Their Rationale

Regulatory reforms in all three areas are based on the notion that the central bank is the institution of choice for macroprudential supervision and regulation. This consensus is supported by two main considerations. First, the central bank is the public-sector institution likely to have the widest view of the overall state of the financial system. Second, the central bank is also the institution that acts as LOLR in case of crisis. It therefore makes sense to charge the central bank with systemic stability and to endow it with the additional policy instruments needed to perform this task. Consequently, the central bank is well positioned to evaluate the trade-offs between taking preventive measures against a systemic crisis and injecting public funds to limit the adverse effects of such a crisis if it materializes.17 However, this enlargement of the central bank's authority moves it closer to the political arena and in particular to fiscal decisions. This is problematic on two counts. First, involvement of the central bank in semifiscal decisions may endanger its independence. Second, as the magnitude of such liquidity injections rises, they come closer to fiscal policy in that they involve a redistribution of wealth. This violates the principle that in democratic societies, distributional policies should be determined by elected officials rather than by unelected bureaucrats. The regulatory reforms of both the United States and the United Kingdom handle this problem by involving the Treasury in the regulatory process. As we saw earlier, in the United States, the FSOC is chaired by the Treasury secretary, and in the United Kingdom, both the FPC and the PRA are accountable to HM Treasury.
The situation is somewhat different in the euro area. Because the ECB operates within an area composed of nineteen different fiscal authorities and has a stronger price stability mandate than its US or UK counterparts, it is not directly accountable to a single fiscal authority. This implies that in case of a crisis requiring substantial liquidity injections, the tension between the monetary and fiscal aspects of LOLR operations must be resolved by negotiations between the ECB and the euro area governments. This raises a thorny question about the relative desirability of those two institutional arrangements. The answer most likely depends on the relative risks to price stability versus financial stability. When risks to financial stability dominate, the involvement of fiscal authorities is desirable even at the cost of some central bank independence (CBI).18 When the converse is true, this conclusion is reversed. Be that as it may, the enhanced role of central banks in macroprudential regulation in all three regimes implies that central banks are likely to play a more important role than in the past in internalizing the trade-offs between price and financial stability.
6.6 Open Questions and Some Old-New Ideas

6.6.1 How to Reconcile a Negative Natural Interest Rate and the ZLB?

The long period of low world interest rates experienced during the last decade is unique. The persistent maintenance of short-term policy rates in the vicinity of the ZLB by major central banks after Lehman's collapse is obviously an important reason for low short-term interest rates since that time. Those policies also partially contributed to the relatively low level of long-term risk-free rates. However, as shown by Bean et al. (2015), both nominal and real risk-free rates were already low prior to the GFC. This fact led some economists to argue that this is a result of secular stagnation driven by a fall in investment opportunities due to a slowdown in the rate of population growth and a decline in the rate of innovation (Gordon 2012; Gordon 2014; Summers 2014). Bean et al. (2015) argue convincingly that the main reason for the decline in the risk-free long-term real rate is an increase in world savings induced by an aging population due to substantial increases in life expectancy without similar increases in retirement age. This still leaves an open question about the persistence of low interest rates in the future. Although they believe that the world savings rate is a slow-moving variable, Bean et al. argue that at some distant point, world savings are likely to reverse course. At that point, real rates will start to rise. Until that time, real rates are likely to be low. In particular, the short-term real rate, also known as the Wicksellian natural rate of
interest, is likely to be low and, according to calculations made by some economists, even negative.19 During periods in which the policy rate is at (or slightly below) the ZLB and inflation is far below the 2 percent target or even negative, this configuration produces a positive real short-term policy rate that is obviously above the (negative) natural rate of interest. The current state of affairs in the euro area conforms to this description. At least conceptually, it implies that the ZLB imposes an effective constraint on modern Wicksellian optimal monetary policy, since it makes it impossible to reduce the real policy rate to the level of the negative natural rate. Two solutions have been proposed to this conundrum. One is to raise the inflation target to, say, 4 percent. The other is to adjust monetary institutions so as to eliminate the ZLB constraint. The idea of raising the inflation target in order to bypass the ZLB constraint on monetary policy has recently been revived by Blanchard, Dell'Ariccia, and Mauro (2010) and Ball (2014).20 By sufficiently raising the inflation target, it is obviously possible to hit any negative natural rate in spite of the ZLB. Obviously, this strategy will work only if the higher target is credible and is therefore able to raise inflationary expectations and reduce the ex ante real rate. Although the logic of this argument is appealing, it has two potential limitations. First, if the higher inflation target achieves the objective of lifting actual output to its potential level, the higher-than-normal inflation target becomes time inconsistent. This may undermine the credibility of the higher target in the first place (Tabellini 2014).
Second, at a more practical level, after a hard-won battle against inflation, most central bankers are generally reluctant to tamper with the inflation target out of fear that when more normal times return, this will upset the stabilizing effect of a permanently fixed target of 2 percent on long-run inflationary expectations. This argument carries more weight the lower the number of times the policy rate is expected to hit the ZLB in the future.21 The other way to neutralize the constraint imposed by the ZLB on optimal monetary policy is to adjust monetary institutions so as to neutralize the incentive to move funds into cash when deposit rates become negative. Buiter and Rahbari (2015) explore three possible ways to achieve this. One is to remove the fixed exchange rate between zero-interest-rate cash currency and central bank reserves/deposits denominated in a virtual currency. This can be done by dating cash and by credibly committing to reduce its value periodically at a rate identical to the rate at which reserves depreciate due to the negative interest they carry.22 Although the logic of the idea is impeccable, it is hard to imagine that such a drastic (and inconvenient) institutional change is likely to be adopted anytime soon merely because some calibrated New Keynesian models, like those of Cúrdia (2015) and Cúrdia et al. (2015), estimate that the US natural rate of interest is likely to be in the neighborhood of minus 4 percent for several years. Beyond this impressionistic remark, it is important to remember that the interest rates that matter for economic activity are medium- and long-term rates and that, due to the positive slope of the yield curve, they are not necessarily constrained by the ZLB. This view is partially supported by evidence showing that, at least until 2010, the impact of the short-term ZLB on US one- and two-year interest rates was negligible (Swanson and Williams 2014).
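The ZLB arithmetic underlying this section can be made concrete with a small numerical sketch. By the Fisher relation, the ex ante real policy rate equals the nominal policy rate minus expected inflation; the figures below are illustrative assumptions (not estimates from the literature), chosen to mimic a euro-area-style configuration:

```python
# Illustrative ZLB arithmetic -- all numbers are assumed, not estimated.
# Fisher relation: ex ante real policy rate = nominal policy rate - expected inflation.

nominal_policy_rate = 0.0    # policy rate stuck at the ZLB
expected_inflation = -0.005  # mild expected deflation, as in the euro area episode
natural_rate = -0.02         # an assumed negative Wicksellian natural rate

real_policy_rate = nominal_policy_rate - expected_inflation  # = +0.005

# With the nominal rate floored at zero and deflation expected, the real rate
# (+0.5 percent) sits above the negative natural rate (-2 percent), so policy
# is effectively tight even though the nominal rate cannot be cut further.
print(real_policy_rate > natural_rate)  # prints True
```

Raising the inflation target works in this arithmetic by raising `expected_inflation`, which pushes `real_policy_rate` down toward the natural rate; the dated-cash proposal works instead by letting `nominal_policy_rate` go meaningfully negative.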
6.6.2 Should the Reserve Requirement on Banks Be Raised to 100 Percent?

Following the Great Depression, Irving Fisher became a strong supporter of the "Chicago plan," which advocated a 100 percent reserve requirement on banks. Both Fisher and the University of Chicago economists behind the proposal believed that by increasing the central bank's control of credit, this institutional change would greatly reduce the likelihood of financial crises. But, as is well known, the idea was never implemented. Following the GFC, a modern reincarnation of the idea, adapted to today's much deeper financial markets, was proposed by Cochrane (2014), who advocates that (i) banks should be 100 percent funded by equity, (ii) fixed-value debt should be 100 percent backed by Treasury or Fed securities, and (iii) capital ratios should be replaced by Pigouvian taxes. During 2015, a Swiss group collected the one hundred thousand signatures necessary to require a national referendum on requiring banks to hold 100 percent reserves. Besides the obvious political opposition of banks to such a change, any serious evaluation of this matter should keep in mind the following considerations. First, by strengthening the link between high-powered money and credit creation, such a move would transfer a good chunk of decisions about the allocation of credit to the central bank. This would require central banks to develop the microeconomic skills necessary for evaluating different credit risks. It is likely to increase the politicization of central banks and endanger their independence by inducing a stronger involvement of governmental politics in the allocation of credit.
Furthermore, Diamond and Dybvig (1986) argue that the proposal has additional drawbacks: it would damage the economy by decreasing the overall amount of liquidity, and it would necessitate the creation of new regulatory structures to control the institutions that would enter the vacuum left when banks can no longer create liquidity. This is likely to be particularly detrimental for small and medium-size firms that have limited access to the capital market. In summary, although a 100 percent reserve requirement is likely to enhance the stability of the financial system, the jury is still out on the desirability of the idea.
6.7 The Evolution of Credit over the GFC: Implications for Exit Policies

Since 2008, and in spite of huge injections of high-powered money by the Fed after Lehman's collapse, new credit allocation through the US banking system and capital market experienced deep and sustained decreases. Somewhat less extreme but broadly similar processes occurred in the euro area, mainly after the public admission by Greek Prime Minister George Papandreou in November 2009 that Greek deficits and debt
were substantially higher than previously available public figures.23 Figures 6.1 and 6.2 illustrate the dimensions of the arrest of total new credit in the United States and in the euro area, respectively, from the onset of the GFC through 2015. In both the United States and the euro area, new total credit growth through banks and the capital market was seriously depressed for about five years. In about half of those years, total US credit actually shrank. There are signs of some pickup in total new credit in the United States during 2014 and 2015. However, the credit increments in those years pale in comparison to their counterparts prior to 2008. Except for the fact that it starts about one to two years later, a similar phenomenon characterizes the behavior of changes in total credit in the euro area. There is little doubt that the deep and relatively long recession, first in the United States and later in Europe, is related to the arrest in credit growth. It is therefore important to understand the reasons for the deep and sustained arrest in the allocation of new credit. In the United States, the decision not to bail out Lehman Brothers after previously bailing out other systemically important financial institutions (SIFIs) constitutes an important watershed. By demonstrating to financial market participants that not all SIFIs will always be bailed out, this decision generated an immediate panic. The ensuing flight to safety, along with substantial decreases in the market value of assets, severely reduced the willingness of banks to grant credit and of institutional investors to continue to lend to banks. Essentially, financial market participants became aware that bailout probabilities were lower than they had previously believed.24
[Figure 6.1 plot: annual data, 2000–2015, with vertical markers at the Bear Stearns rescue and Lehman's collapse; series: total net new issuance of bonds (excluding Treasury) and total net new bank credit.]
Figure 6.1 Total Net New Bank Credit and Total Net New Issuance of Bonds in the United States, excluding Treasury Bills ($ Billions)
Sources: bank credit, Bloomberg ALCBBKCR Index; bond issues, Securities Industry and Financial Markets Association (SIFMA).
[Figure 6.2 plot: annual data, 2000–2015, with vertical markers at Lehman's collapse and Papandreou's announcement; series: total net new bond issuance (excluding government) and total net new bank credit.]
Figure 6.2 Total Net New Bank Credit and Bond Issuance in the Euro Area, excluding Government (€ Billions)
Source: ECB Statistical Data Warehouse, http://sdw.ecb.europa.eu.
Although the mechanism above provides a reasonable explanation for the sudden, deep credit arrest immediately following Lehman's collapse, the persistence of this arrest involves additional longer-term factors. One is the well-known financial accelerator mechanism (Bernanke and Gertler 1995; Bernanke, Gertler, and Gilchrist 1996). By reducing the value of real estate and financial assets, the initial trauma reduced the value of available collateral along with banks' equity capital. Both factors reinforced the unwillingness of banks to extend credit for some time after the initial panic.25 As recounted in section 6.5, the GFC also triggered long-term regulatory reforms in major Western economies. The emerging postcrisis regulatory frameworks impose restrictions on automatic bailouts, require SIFIs to provide "living wills" and to perform stress tests, and impose more stringent capital, leverage, and liquidity ratios. Although some of those requirements are phased in gradually, their impact on new credit formation is felt in advance as banks attempt to improve their capital and leverage ratios. The upshot of the previous discussion is that the reduction in the value of collateral, tougher regulatory demands on banks, and the lingering traumas in the aftermath of Lehman's collapse and the Greek sovereign crisis have been a drag on credit growth and are likely to continue to slow it in the foreseeable future. One implication of this point of view is that exits from the extraordinarily expansionary monetary policies followed during the crisis, first by the Fed and subsequently by the ECB, should be phased in rather carefully, with an eye to what such exits are doing to credit expansion and the real economy. This statement is particularly relevant for the euro area, in which economic activity is substantially below potential and the rate of inflation is negative.
6.8 Conclusion

This chapter describes the main changes in the conduct of monetary policy and in monetary policymaking institutions triggered by the GFC. In reaction to the crisis, the policy rates of the United States and the United Kingdom quickly reached the ZLB. After a while, the more conservative ECB reached this range as well, and it currently features a negative policy rate. Policy rates in the vicinity of the ZLB, along with the relative passivity of fiscal policies, induced heavier reliance on monetary policy instruments that had been considered unconventional during the Great Moderation. The most important among those are QE, forex market interventions, negative interest rates, and forward guidance. The crisis triggered a wide-scale process of regulatory reform in the United States, the euro area, the United Kingdom, and other countries. In practically all cases, the main responsibility for macroprudential regulation is now vested in the central bank. Although such broadening of central bank authority likely reduces the likelihood of systemic crises, it may increase the politicization of central banks in the future. In the euro area, the crisis triggered a substantial move toward unification of regulation under the umbrella of a banking union that rests on three pillars: the SSM, the SRM, and the harmonization of regulation. Persistently low rates of inflation and interest may preclude the achievement of optimal monetary policy due to the ZLB if the Wicksellian natural rate is deep in negative territory, as some recent research for the United States suggests. The chapter describes and evaluates various proposals, such as raising the inflation target and taxing cash, designed to neutralize the constraining impact of the ZLB on monetary policy. It also describes and evaluates the old-new idea of a 100 percent reserve requirement on banks as a device to reduce the likelihood of financial crises.
In the context of the relative passivity of fiscal policy, the chapter argues that historical evidence from the post-World War I German hyperinflation supports the view that a money-financed decrease in taxes ("helicopter money") can be very effective in swiftly offsetting deflation. The chapter documents the substantial and persistent slowdown in the growth of total new credit (via both the banking system and the capital market) in the United States and the euro area and argues that, due to regulatory reform, the financial accelerator mechanism, and an increase in financial markets' awareness of less generous bailout policies in the future, this slowdown is likely to persist. One implication of this view is that exit from the extraordinarily expansionary monetary policies followed during the crisis should be rather gradual. In summary, and looking ahead, the GFC encouraged the development of unconventional monetary policy instruments and led to substantial regulatory reforms and to the expansion of central banks' authority over macroprudential regulation. How many of those changes are likely to persist into the future when normal times return? My judgment is that, at least to some extent, all those changes are likely to persist.
The saddling of central banks with macroprudential regulation and the regulatory and related institutional reforms in the United States and Europe are probably the most persistent changes. Cooperation between different US regulators is likely to be permanently tighter and banking regulation across countries in the euro area permanently more uniform. Unconventional monetary policy instruments such as QE and forex interventions are likely to be used more frequently than prior to the crisis when the ZLB is reached or some other circumstances require the use of such instruments. Furthermore, in spite of the fact that many central bank charters do not directly saddle the bank with the responsibility to avert bubbles in real estate and financial markets, most central banks are likely to pay more attention to this issue within the framework of their postcrisis wider financial stability mandate.26
Notes

1. Chapter 16 in this volume, by Taylor, Arner, and Gibson, contains a historical account of the changes that occurred over time in the relative emphasis given by central banks to financial versus price stability. Mayes, in chapter 20 of this volume, notes that the recent crisis led to an expansion of the authority and tasks of central banks.
2. With hindsight, this proved to be a profitable investment. Relatedly, He and Krishnamurthy (2013) find that injections of equity capital by the Fed were particularly effective in stimulating the economy.
3. During the six years following the Lehman Brothers collapse, the US monetary base more than quadrupled (Cukierman 2016, fig. 1).
4. Examples are Krishnamurthy and Vissing-Jorgensen 2011 and Swanson and Williams 2014. Further details appear in the next section.
5. Details appear in Cukierman 2014.
6. In normal times, sterilization is used to avoid conflicts between the interest-rate policy of the central bank and its intervention policy. However, once the policy rate becomes stuck at the ZLB, this conflict disappears.
7. An extensive survey of recent literature on central bank communications appears in Moessner, Jansen, and de Haan forthcoming.
8. The verbatim statement: "Within our mandate, the ECB is ready to do whatever it takes to preserve the euro. And believe me, it will be enough" (Draghi 2012).
9. This operation is reminiscent of "Operation Twist," undertaken under McChesney Martin in the 1960s. Details of this episode appear in Modigliani and Sutch 1966.
10. As a consequence, high-powered money in the Eurozone expanded much less than in the United States. Details appear in Cukierman 2014.
11. A progress report appears in Dodd-Frank Act Implementation 2013.
12. The act is quite comprehensive and covers many areas, including consumer protection. The text focuses only on the parts of the act that are relevant for financial stability.
13.
Obviously, unless insolvency proceedings have been opened, it is always a guess whether realized liabilities will be greater than assets, and if emergency lending is possible, other creditors will be very keen to avoid triggering insolvency.
14. Further discussion of this issue is postponed to the end of this section.
15. European Central Bank 2015.
16. Responsibility for business regulation has been transferred to a third authority, the Financial Conduct Authority (FCA), which has responsibility for conduct issues across the entire spectrum of financial services. It is hoped that coordination among those three bodies will be reinforced by the fact that all three are either directly or indirectly accountable to the UK Treasury (HM Treasury 2011).
17. Using a sample of as many as 124 countries during the 2007–2012 period, Melecky and Podpiera (2016) find that in countries with deep financial markets, or countries undergoing rapid financial deepening, locating microprudential regulation in the central bank along with macroprudential regulation reduces the likelihood of financial crises.
18. In addition, to be effective, supervisors and regulators should be independent from political influence and interest groups. Attainment of such independence is likely to be even more difficult than preservation of CBI. Further discussion of those issues appears in Cukierman 2013.
19. A recent example based on a New Keynesian framework appears in Cúrdia et al. 2015.
20. Early advocates of such a strategy are Krugman (1998) and Svensson (2003).
21. Using a calibrated New Keynesian model, Coibion, Gorodnichenko, and Wieland (2012) find that a 3 percent (or higher) inflation target becomes optimal only when the ZLB binds at least every seven or eight years. Most central bankers are unlikely to consider raising the target to avoid such relatively infrequent events.
22. The other two ways are to abolish currency or to tax it.
23. Figs. 2a, 2b, and 4 in Cukierman 2014 document the exceptional increases in banking reserves and high-powered money in the United States and the euro area during the GFC.
24.
Cukierman and Izhakian (2015) formulate this change in perceptions as an increase in (Knightian) bailout uncertainty and show, within a general equilibrium framework of the financial system, that such a change leads to credit arrest, to a flight to safety, and to shrinkage of activity in the real sector. A data-connected, intuitive exposition of those ideas appears in Cukierman 2016.
25. The reduction in financial asset values was particularly dramatic in mortgage-backed and other asset-backed securities. Details appear in fig. 5 of Cukierman 2016.
26. Based on two surveys—one of central bank governors and the other of academic specialists—Blinder, Ehrmann, and de Haan (2016) conclude that the consensus among governors and academics is similar but that the winds of change are stronger in countries that were directly affected by the GFC.
References

Ball, L. 2014. "The Case for a Long-Run Inflation Target of Four Percent." IMF working paper 14/92.
Bean, C., C. Broda, T. Ito, and R. Kroszner. 2015. Low for Long? Causes and Consequences of Persistently Low Interest Rates. Geneva Reports on the World Economy 17. Geneva: ICMB/CEPR.
Bernanke, B. 2015. The Courage to Act: A Memoir of a Crisis and Its Aftermath. New York: W. W. Norton.
Bernanke, B., and M. Gertler. 1995. "Inside the Black Box: The Credit Channel of Monetary Policy Transmission." Journal of Economic Perspectives 9, no. 4: 27–48.
Bernanke, B., M. Gertler, and S. Gilchrist. 1996. "The Financial Accelerator and the Flight to Quality." Review of Economics and Statistics 78, no. 1: 1–15.
Blanchard, O., G. Dell'Ariccia, and P. Mauro. 2010. "Rethinking Macroeconomic Policy." IMF staff position note 10/03.
Blinder, A., M. Ehrmann, and J. de Haan. 2016. "Necessity As the Mother of Invention: Monetary Policy after the Crisis." NBER working paper 22735.
Buiter, W., and E. Rahbari. 2015. "High Time to Get Low: Getting Rid of the Lower Bound on Nominal Interest Rates." Global Economics View, Citi Research, April.
Caballero, R. 2010. "A Helicopter Drop for the US Treasury." VoxEU.org, August 30. https://voxeu.org/article/helicopter-drop-us-treasury.
Cochrane, J. 2014. "Toward a Run Free Financial System." Manuscript. University of Chicago School of Business.
Coibion, O., Y. Gorodnichenko, and J. Wieland. 2012. "The Optimal Inflation Rate in New Keynesian Models: Should Central Banks Raise Their Inflation Targets in Light of the Zero Lower Bound?" Review of Economic Studies 79, no. 4: 1371–1406.
Cukierman, A. 2013. "Regulatory Reforms and the Independence of Central Banks and Financial Supervisors." SUERF Studies on States, Banks, and the Financing of the Economy 3: 121–134.
Cukierman, A. 2014. "Euro-Area and US Bank Behavior, and ECB-Fed Monetary Policies during the Global Financial Crisis: A Comparison." CEPR discussion paper 10289.
Cukierman, A. 2016. "The Political Economy of US Bailouts, Unconventional Monetary Policy, Credit Arrest and Inflation during the Financial Crisis." CEPR discussion paper 10349.
Cukierman, A. 2017. "Money Growth and Inflation: Policy Lessons from a Comparison of the US since 2008 with Hyperinflationary Germany in the 1920's." Economics Letters 154: 109–112.
Cukierman, A., and Y. Izhakian. 2015. "Bailout Uncertainty in a Microfounded General Equilibrium Model of the Financial System." Journal of Banking and Finance 52: 160–179.
Cúrdia, V. 2015. "Why So Slow? A Gradual Return for Interest Rates." Federal Reserve Bank of San Francisco Economic Letter 2015-32 (October 12).
Cúrdia, V., A. Ferrero, G. Ng, and A. Tambalotti. 2015. "Has U.S. Monetary Policy Tracked the Efficient Interest Rate?" Journal of Monetary Economics 70: 72–83.
Diamond, D. W., and P. H. Dybvig. 1986. "Banking Theory, Deposit Insurance, and Bank Regulation." Journal of Business 59, no. 1: 55–68.
Dodd-Frank Act Implementation: Navigating the Road Ahead. 2013. Washington, D.C.: Morrison & Foerster.
Draghi, M. 2012. "Speech at the Global Investment Conference, London." July 26. https://www.ecb.europa.eu/press/key/date/2012/html/sp120726.en.html.
European Central Bank. 2015. "Single Supervisory Mechanism." https://www.bankingsupervision.europa.eu/about/thessm/html/index.en.html.
Gordon, R. J. 2012. "Is U.S. Economic Growth Over? Faltering Innovation Confronts the Six Headwinds." NBER working paper 18315.
Gordon, R. J. 2014. "The Demise of U.S. Economic Growth: Restatement, Rebuttal and Reflections." NBER working paper 19895.
He, Z., and A. Krishnamurthy. 2013. "Intermediary Asset Pricing." American Economic Review 103, no. 2: 732–770.
192 Cukierman HM Treasury. 2011. “A New Approach to Financial Regulation: A Blueprint for Reform.” June https://www.gov.uk/government/consultations/a-new-approach-to-financial-regulationthe-blueprint-for-reform Krishnamurthy, A., and A. Vissing-Jorgensen. 2011. “The Effects of Quantitative Easing on Interest Rates.” Manuscript. Kellogg School of Management, Northwestern University. Krugman, P. 1998. “It’s Baaack: Japan’s Slump and the Return of the Liquidity Trap.” Brookings Papers on Economic Activity 29, no. 2: 137–206. Melecky, M., and A. M. Podpiera. 2016. “Placing Bank Supervision in the Central Bank: Implications for Financial Stability Based on Evidence from the Global Financial Crisis.” Manuscript. IMF. Modigliani, F., and R. Sutch. 1966. “Innovations in Interest Rate Policy.” American Economic Review 56: 178–197. Moessner, R., D. Jansen, and J. de Haan. Forthcoming. “Communication about Future Policy Rates in Theory and Practice: A Survey.” Journal of Economic Surveys, forthcoming. Summers, L. 2014. “U.S. Economic Prospects: Secular Stagnation, Hysteresis, and the Zero Lower Bound.” Business Economics 49, no. 2: 65–73. Svensson, Lars E. O. 2003. “Escaping from a Liquidity Trap and Deflation: The Foolproof Way and Others.” Journal of Economic Perspectives 17, no. 4: 145–166. Swanson, E., and J. Williams. 2014. “Measuring the Effect of the Zero Lower Bound on Medium-and Longer-Term Interest Rates.” American Economic Review 104, no. 10: 3154–3185. Tabellini, G. 2014. “Inflation targets reconsidered: Comments on Paul Krugman.” IGIER working paper 525, Universita Bocconi.
Chapter 7
Strategies for Conducting Monetary Policy: A Critical Appraisal
D. L. Thornton
7.1 Introduction

This chapter discusses strategies for conducting monetary policy and related issues. Much of the published literature on the subject comes from model economies. But model economies are highly stylized, overly simple, and populated by economic agents whose behavior is more rational than that of most citizens. Moreover, models cannot come close to capturing the heterogeneity and complexity of real-world economies. While some of these models produce interesting and useful results, they fall short of showing how various monetary policy strategies are likely to work in real-world policymaking. This chapter discusses monetary strategies from both theoretical and practical (real-world) perspectives. However, the analysis is not based on a specific model or class of models. The structure is that imposed by basic economic theory, not by a particular model. I cite some of the important model-based work in the literature. But the discussion is based on my analysis of how these strategies are likely to work in the real world of policymaking. My purpose is (1) to show the reader the most prominent ideas that monetary/macroeconomic economists have advanced as strategies for conducting monetary policy and (2) to provide the perspective necessary for a critical evaluation of the usefulness and viability of these ideas for real-world policymaking. The chapter is divided into five sections. Section 7.2 discusses the two main channels through which monetary policy is believed to affect inflation and output: the money channel and the interest-rate channel. Section 7.3 discusses three strategies directly tied
to either of these channels. Section 7.4 discusses monetary policy strategies that are not necessarily tied to a specific channel of policy. Section 7.5 discusses the debate over rules versus discretion in the conduct of monetary policy. While not a specific strategy for conducting monetary policy, rule-based policy is a tactic for how several of these strategies—interest-rate targeting, money supply targeting, nominal GDP targeting, and so on—could be implemented. Moreover, there is an ongoing debate in the profession about the use of rules versus discretion in the conduct of policy, so any analysis of the practical usefulness of rule-based policymaking is important for a complete discussion of monetary policy strategies. Section 7.6 is a conclusion.
7.2 The Money and Interest-Rate Channels of Monetary Policy

As the name implies, monetary policy was originally about controlling the supply of money. In economies in which nearly all transactions are carried out with cash—coins and government-issued currency—increases in the supply of money (cash) would result in higher prices. From a theoretical perspective, this result is sensible. The amount of cash that people want to hold depends on their level of real income and the level of prices (the price level). The higher the level of prices or the higher the level of real income, the more money people would need to hold. The demand for money is the demand for real money balances (purchasing power), nominal money (M) divided by the price level (P). Economists represented the aggregate demand for money by the simple equation
(M/P)^d = f(y),  (1)
where f(·) denotes an unspecified function (but linear in most empirical models) and y denotes the level of output—real GDP. The equation shows that if the money supply were doubled, the price level would double if output didn't increase. The rationale is simple. If the central bank increases the money supply and, therefore, real money balances, beyond the amount that people want to hold, people will attempt to reduce their excess money balances by spending them. As everyone attempts to do this, prices start to rise. As a practical matter, prices won't increase instantly. Output may also increase but only temporarily. The increase in output is temporary, because output is determined by real factors: labor, physical capital, and technology. An increase in the nominal money supply cannot have a permanent effect on output. If it did, output could be increased continuously by simply increasing the nominal money supply. Consequently, doubling the nominal money supply must eventually result in a doubling of prices, with output unchanged. This is called monetary policy neutrality.
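The doubling argument can be made concrete with a back-of-the-envelope calculation. The sketch below is illustrative only: the linear money-demand function f(y) = k*y and all numbers are hypothetical, not taken from the chapter.

```python
# Monetary neutrality in the money channel: with money-market equilibrium
# M/P = f(y) and real output y fixed by labor, capital, and technology,
# the price level P is proportional to the money supply M.
# The functional form f(y) = k*y and all numbers are hypothetical.

def equilibrium_price_level(money_supply, real_output, k=0.25):
    """Solve M/P = k*y for P, given a linear money-demand function."""
    real_money_demand = k * real_output   # desired real balances
    return money_supply / real_money_demand

y = 1000.0                                 # real GDP, fixed by real factors
p0 = equilibrium_price_level(500.0, y)     # initial price level
p1 = equilibrium_price_level(1000.0, y)    # money supply doubled

print(p0, p1)  # 2.0 4.0 -- doubling M doubles P, output unchanged
```

Only the price level adjusts here; if f or y shifted at the same time, the proportionality between M and P would hold only after those real changes were absorbed.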
The interest-rate channel of monetary policy stemmed from the idea that the demand for money also depends on the nominal interest rate. That is, the demand for money is given by
(M/P)^d = f(y, i),  (2)
where i denotes the nominal interest rate. The demand for money depends on the nominal interest rate because money yields a zero nominal rate of return. Therefore, individuals have an incentive to economize on their money holdings. The basic model is attributed to William Baumol (1952) and James Tobin (1956). They argued that higher interest rates would give individuals an incentive to hold less money and more securities or other assets that yield a positive nominal rate of return. Hence, the demand for money is negatively related to the nominal interest rate: the higher the interest rate, the less money individuals want to hold. This view of the demand for money led economists to believe that monetary policy could be implemented through the so-called interest-rate channel of monetary policy. The idea is that the central bank can reduce the interest rate by increasing the money supply. The decline in the interest rate would then cause businesses to invest more and citizens to spend more, which would raise the level of demand for goods and services. The increased spending would cause prices to rise. However, if prices are “sticky” (that is, slow to change), the increased demand for goods would initially be reflected in output. But, as with the money channel, this effect would be temporary; monetary policy neutrality also holds if monetary policy works through the interest-rate channel. To understand why, note that investment and consumer spending depend on the real interest rate (the nominal interest rate less the expected rate of inflation). If an increase in the supply of money causes the nominal interest rate to fall but prices are sticky, the real rate will decline as the nominal interest rate declines relative to inflation. However, the equilibrium real interest rate is thought to be determined by the Fisher equation, which says the equilibrium real interest rate is equal to the nominal interest rate minus the expected rate of inflation.
It is the “expected” rate of inflation because investors are necessarily forward-looking. However, the expected and actual rates of inflation must be the same in equilibrium. People will understand that the lower interest rates will ultimately produce higher prices. Hence, inflation expectations could increase quickly when policymakers increase the money supply. If expected inflation increased immediately, the real interest rate would not change. In such an instance, nominal interest rates would rise—not fall—and monetary policy implemented through the interest-rate channel would be totally ineffective. Policymakers would not produce even a temporary increase in output through the interest-rate channel. Regardless of whether inflation expectations increase immediately, the effect of monetary policy on output will be temporary because, like real output, the real interest rate is determined solely by real factors. In abstract theoretical models, these factors are the marginal efficiency of capital (the rate of return to the last unit of the stock of capital) and economic agents’ rate of time preference (the interest rate that equates the marginal
[Figure 7.1. The Original Data for the Phillips Curve for the United Kingdom. A scatter plot of the rate of change of money wage rates (percent per year) against unemployment (percent), with a curve fitted to the 1861–1913 data.]
utility of consumption today with the marginal utility of consumption “tomorrow”). Consequently, the actual rate of inflation and inflation expectations ultimately will converge, so the real rate will be unchanged. Any effect of monetary policy on output through the interest-rate channel must be temporary. The idea that monetary policy could have an effect on output was given a boost with the publication of a paper in 1958 by New Zealand economist A. W. H. Phillips. Phillips presented a scatter plot of the annual growth rate of nominal wages and the unemployment rate in the United Kingdom over the period 1861–1913. Phillips’ graph is reproduced in figure 7.1. The figure shows that the rate of wage inflation rose as the unemployment rate declined. When the unemployment rate was at very low levels, wage inflation was very high. To many economists, Phillips’ data suggested that inflation would accelerate at very low rates of unemployment; inflation would get higher and higher as the unemployment rate got closer to zero (that is, as the economy approached full use of its economic resources). This relationship became known as the Phillips curve. The impression from Phillips’ data that inflation accelerates as the unemployment rate falls gave rise to the idea that economists call the full-employment level of output. This idea underlies most textbook descriptions of aggregate demand and aggregate supply. Figure 7.2 shows a typical textbook representation of aggregate demand (AD) and aggregate supply (AS). The price level is on the vertical axis, and real output is on the horizontal axis. AD is defined as the total demand for final goods and services (real GDP) at all possible price levels over a period of time. AS is defined as the total supply of final goods and services produced at all possible price levels over a period of time.
The shape of the AS curve reflects the idea, suggested by Phillips’ data, that as the economy’s resources become more fully employed, prices will rise more. When all resources are fully employed—when the AS curve becomes vertical—further increases in AD have no effect on output; the only effect is on the price level.
[Figure 7.2. Typical Textbook Representation of Aggregate Demand and Aggregate Supply. The price level is on the vertical axis and real national income on the horizontal axis; an upward-sloping AS curve, which becomes vertical at full employment, is intersected by a family of AD curves (AD1 through AD4). Note that the shape of AS is consistent with the shape of the A. W. Phillips curve.]
The shape of the AS curve stems from Phillips’ data and a particular view of how the economy works. That is to say, the demand and supply of all goods and services work similarly to the standard textbook supply-and-demand model for any particular good or service.1 Subsequent work tended to find a negative relationship between wages and the unemployment rate, but the relationship varied in strength and across time. Indeed, Boldrin (2016) notes that the Phillips curve model has performed poorly despite the fact that its proponents are constantly refining and reestimating it. In this regard, it is interesting to note that the impression that the Phillips curve becomes vertical at low rates of unemployment suggested by figure 7.1 is due entirely to the three years when wages increased by 5 percent or more. If these three observations are eliminated, the remaining forty-nine years suggest a negative relationship that is most likely linear and not particularly strong, something close to the line I added to figure 7.1. In any event, this simple empirical relationship led some economists to argue that the trade-off between inflation and output was permanent; a permanent increase in the rate of inflation could permanently increase output and reduce unemployment. Of course, this cannot be true. If higher rates of inflation led to reductions in the unemployment rate, policymakers could eliminate unemployment by producing a very high rate of inflation. But it is clear from the Great Inflation of the 1970s and early 1980s, the post-World War I hyperinflation in Germany, and high inflation periods in a variety of other countries that very high rates of inflation have bad and even devastating consequences.
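The point about the three outlying observations can be illustrated with a simple least-squares exercise. The data below are hypothetical, constructed only to mimic the general shape of Phillips’ scatter; they are not Phillips’ actual 1861–1913 observations.

```python
# Fit wage growth on unemployment by ordinary least squares, with and
# without the few high-wage-growth "outlier" years. Data are hypothetical.

def ols_slope(xs, ys):
    """Slope of the least-squares regression of ys on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

unemployment = [1.0, 1.5, 2.0, 2.5, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0]
wage_growth  = [9.0, 8.0, 6.5, 3.0, 2.5, 2.0, 1.5, 1.0, 0.5, 0.0, -0.5, -1.0]

full_slope = ols_slope(unemployment, wage_growth)

# Drop the "outlier" years with wage growth of 5 percent or more:
kept = [(u, w) for u, w in zip(unemployment, wage_growth) if w < 5.0]
trimmed_slope = ols_slope([u for u, _ in kept], [w for _, w in kept])

print(full_slope)     # steeply negative: dominated by the high-wage-growth years
print(trimmed_slope)  # still negative, but much flatter and roughly linear
```

The same mechanism is at work in the original scatter: a few extreme low-unemployment, high-wage-growth years create the impression of a curve that turns vertical, while the bulk of the sample traces a flat, roughly linear relationship.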
Despite these obvious examples, the view of a permanent trade-off between inflation and output persisted until Milton Friedman (1968) and Edmund Phelps (1967) showed that from a strictly theoretical perspective, the long-run Phillips curve must be vertical; output cannot be increased permanently by increasing the rate of inflation in the economy. Specifically, Friedman and Phelps argued that for employment to be higher (unemployment lower), the real wage would have to fall. But this could happen only if wage inflation lagged actual inflation. If workers expected inflation to rise at the rate of actual inflation, nominal wages and the price level would rise at the same rate, so the real
wage would be unchanged. Consequently, there would be no change in employment; the unemployment rate would be unchanged. Friedman and Phelps acknowledged that in the short run (i.e., before workers’ expectations of inflation caught up with actual inflation), the real wage would fall, and firms would hire more workers. But ultimately, workers’ expectations of inflation would converge to the actual rate of inflation. The conclusion: the Phillips curve could be negatively sloped in the short run, but it must be vertical in the long run; there can be no permanent trade-off between inflation and output or the unemployment rate. Nevertheless, as discussed later, many, if not most, economists believe that a little inflation is good for economic growth. During most of the 1960s and 1970s, economists debated (1) whether monetary policy worked through the money channel or the interest-rate channel and (2) whether monetary policy could affect output and by how much and for how long. While Keynesian economists believed that monetary policy worked through the interest-rate channel, they argued that monetary policy was ineffective. Their conclusion was based on the fact that the empirical evidence showed that the relationship between interest rates and spending was quite weak. Moreover, surveys of firms’ investment behavior suggested that interest rates were relatively unimportant for making investment decisions.2 Consequently, Keynesians argued that it would take a very large change in interest rates to generate a change in spending large enough to have an important effect on output. Because Keynesians believed that inflation was a consequence of too much spending—they believed in the Phillips curve—they believed that monetary policy could have essentially no effect on inflation, either. The Keynesians’ conclusion: monetary policy was ineffective for affecting either output or inflation.
I call this the Keynesian monetary policy ineffectiveness proposition (Thornton 2010b). Monetarists argued that monetary policy primarily worked by changing the supply of money. They believed that central banks could control inflation by controlling the growth rate of the money supply. Monetarists also believed that prices were pretty flexible, so the effect of monetary policy on output would be necessarily small as well as transitory. The monetarists’ conclusion: monetary policy should focus on controlling inflation by controlling the growth rate of the money supply. The monetarist/Keynesian debate intensified during the 1970s as the rate of inflation in the United States and most European countries rose to higher and higher levels. Keynesians found it increasingly difficult to argue that inflation was caused simply by too much spending, because the inflation rate was high and accelerating while the economies went into recession. This outcome was called stagflation—high inflation and slow output growth. Monetarists argued that inflation was accelerating because the money supply was increasing too rapidly. They accused policymakers of confusing high nominal interest rates with tight monetary policy; high nominal interest rates were the consequence of high inflation, not tight monetary policy. The monetarists’ conclusion: monetary policy was too loose, not too tight. Their prescription: slow the rate of growth of the money supply.
The debate was all but settled when Paul Volcker became chairman of the Federal Reserve in August 1979. Faced with double-digit inflation, the Volcker Fed changed the approach to conducting monetary policy and paid more attention to slowing the growth rate of the money supply. The inflation rate in the United States peaked in mid-August 1980 at about 14 percent and declined to about 3.5 percent by late 1983. Volcker’s success forced Keynesians to abandon their monetary policy ineffectiveness proposition. They were forced to accept that central banks could control inflation. This was a massive change from the Keynesian view of the 1960s and 1970s.3 However, they did not abandon the idea that inflation was caused by an excess of spending and not by excessive growth of the money supply. Both Keynesians and monetarists abandoned their views of the effect of monetary policy on output because the Volcker Fed’s policy was accompanied by back-to-back recessions, in 1980 and 1981–1982, that were cumulatively long and severe. Keynesians concluded that monetary policy was far more effective for controlling inflation than they had previously thought. Keynesians and monetarists concluded that monetary policy had a much larger effect on output than either had thought. Monetarists won the debate about whether monetary policy could reduce inflation. However, the debate over whether monetary policy worked through the money channel or the interest-rate channel continued. The monetarists’ view that monetary policy worked through the money supply was dealt a serious blow in the early 1980s, when the demand for money, which had been relatively stable, suddenly became unstable. There were a number of hypotheses about why this happened, but none found strong empirical support.4 Economists attempted to find monetary or credit aggregates that were stably related to output or inflation but to no avail.
7.3 Strategies Connected to a Specific Channel of Policy

In this section, I discuss three specific strategies for conducting monetary policy that are directly linked to either the money or the interest-rate channel of policy: money supply targeting, interest-rate targeting, and forward guidance. The fact that these strategies are linked to a specific channel of policy means that if monetary policy cannot work through that channel, the strategy cannot work, either.
7.3.1 Money Supply Targeting

If inflation were solely the consequence of excess money growth, it could be controlled by increasing the money supply at the same rate as the demand for money. For example, if at the current price level the demand for money was increasing at a 3 percent
rate, policymakers could stabilize the price level—there would be no inflation—by increasing the supply of money at a 3 percent rate. It is difficult, if not impossible, to know the rate at which the demand for money will increase in the future. For this and other reasons, Friedman (1960) advocated increasing the money supply at a fixed rate annually. Friedman noted that the effect of a policy action taken today on the economy wouldn’t be seen for a number of months. Moreover, he suggested that the number of months could vary from a few to a year or longer. Hence, he proposed that policymakers implement a money growth rule—a constant rate of growth of the money supply. He suggested that the rule would be easy to implement and would achieve the desired stability of the price level on average over time. An alternative approach is simply to adjust the rate of money growth with inflation. If inflation begins to accelerate, policymakers could simply throttle down the growth rate of the money supply; if prices started to fall, policymakers could increase the growth rate of money. Friedman and others argued that the problem with this approach was that if policymakers didn’t move fast enough, inflation expectations could increase. This would make quelling inflation more difficult and costly in terms of lost output. Moreover, Friedman argued that because the lag in the effect of the change in the money supply on the economy was highly variable, implementing such a procedure could cause the economy and inflation to become less stable, not more stable. While economists debated how monetary policy worked and how best to implement policy, the German Bundesbank and the Swiss National Bank provided some empirical evidence that money growth is important for stabilizing prices. Both central banks made controlling inflation their primary, if not sole, policy objective, and both emphasized the importance of controlling the growth rate of money.
The evidence that their approach worked is reflected in the fact that the German and Swiss inflation rates were the lowest during the Great Inflation of the 1970s and early 1980s.5 However, while the Bundesbank set specific money growth targets, it missed those targets about half the time. Hence, it is safe to say that it did not use Friedman’s money growth rule to implement monetary policy. While the monetarists’ idea that inflation was caused by “too much money chasing too few goods” is unassailable in a world where all money is currency and people are just handed more money, real-world monetary policy isn’t that simple. This is not how the money supply is increased. Central banks increase the money supply by making loans to financial institutions (banks) and by purchasing securities. These actions increase bank reserves, which, in turn, enable banks to make more loans. When banks make loans, they create checkable deposits, which are the core component of the M1 money measure (currency plus checkable deposits). But banks cannot force people to borrow. Moreover, banks finance their lending in other ways. Since the 1960s, US banks have financed a large part of their lending by issuing large-denomination negotiable certificates of deposit (CDs). Hence, when the Fed increases reserves by making loans or purchasing securities, banks simply finance more of their lending with reserves and less with CDs. Consequently, while the money supply would increase, the amount of lending might not. In any event, economists found it difficult to explain why such an increase in the
supply of money would increase spending. The loans would have been made either way. The inability of monetarists to provide the step-by-step procedure from excessive central-bank-induced money growth to higher inflation led many economists to be skeptical of the monetary theory of inflation—skeptical of the transmission mechanism from money to prices. This is one of the reasons Keynesians doubted that inflation was caused by excessive money growth. A second problem for the monetary theory of inflation was the inability of economists to agree on a particular measure of money. There was widespread agreement that from a purely abstract, theoretical perspective, money was something that was used to facilitate trade; money was the thing people used to buy things, pay wages, pay taxes, and so on. The measure that most closely resembled the abstract notion of money is M1.6 However, while the correlation between the growth rate of M1 and inflation was positive, it was not particularly high. Moreover, Gurley and Shaw (1960) suggested that spending was linked to a broader array of liquid assets not included in M1. This led to the idea that the appropriate measure of money should include a broader array of liquid assets. Some economists, including Friedman, suggested that assets that could be converted quickly to currency or checkable deposits at little cost should be included in the measure of money. This gave rise to the widespread use of the money measures that included an increasing number of liquid assets that couldn’t be directly used in exchange (i.e., assets that were slower and more costly to convert to currency or checkable deposits). Economists continued to include more and more assets in their definitions of money, with little success. The inability of the profession to coalesce around a single measure of money made using a money growth rule impractical.
If policymakers cannot agree on a particular measure of money, they cannot possibly agree on a specific growth rate for it. Moreover, there was a competing theory of inflation called cost-push inflation. Some argued that inflation was caused by increases in production costs. If these costs increase because labor is becoming increasingly scarce, or for any other reason, producers will increase prices. The cost-push theory of inflation was supported by the fact that the Great Inflation was accompanied by large increases in the price of oil caused by the Organization of Petroleum Exporting Countries (OPEC), a cartel formed to control the price of oil by controlling the global supply of oil. There was uncertainty among economists about exactly what caused inflation. The Phillips curve theory was challenged by the fact that the Great Inflation occurred during a period when there was a global recession, economic growth was weak, and the unemployment rate was high. Despite its apparent success during the Great Inflation, the cost-push theory of inflation is seldom mentioned in modern discussions of monetary policy. Nearly all of the focus is on the Phillips curve. Indeed, the so-called expectations-augmented Phillips curve (i.e., the idea that inflation is determined by inflation expectations and the gap between actual and potential output) is a central equation in the New Keynesian model—the model that arguably drives the Fed’s monetary policy and that of most central banks. Nevertheless, some policymakers speak of the danger of excessive money growth for inflation. The reason is simple: the price level is the money price of goods and services; therefore, the price level is directly linked to money.
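Friedman’s variable-lag argument from earlier in this section can be illustrated with a deliberately stylized, deterministic simulation. The two-period transmission lag, the one-for-one feedback coefficient, and all numbers below are hypothetical; the point is only that feedback plus a lag can generate oscillation where a constant-growth rule settles down.

```python
# Toy model: inflation equals money growth two periods earlier.
# A "feedback" policymaker leans against today's inflation one-for-one;
# a "rule" policymaker holds money growth constant at the target.
# All parameters are hypothetical illustrations.

TARGET = 2.0   # target inflation / money growth, percent
LAG = 2        # periods between a money-growth change and its effect on inflation

def simulate(feedback, periods=20):
    money = [4.0] * LAG            # inherited history: money growth too high
    inflation = []
    for t in range(periods):
        pi = money[t]              # inflation reflects money growth LAG periods ago
        inflation.append(pi)
        if feedback:
            money.append(TARGET - (pi - TARGET))   # lean against current inflation
        else:
            money.append(TARGET)                   # constant money growth rule
    return inflation

rule_path = simulate(feedback=False)
feedback_path = simulate(feedback=True)

print(rule_path[LAG:])      # settles permanently at the 2 percent target
print(feedback_path[LAG:])  # oscillates between 0 and 4 and never settles
```

With the lag set to zero, the feedback rule would hit the target immediately; it is the combination of reaction and delay that destabilizes, which is the core of Friedman’s case for a simple growth rule.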
7.3.2 Interest-Rate Targeting and Forward Guidance

As the name implies, interest-rate targeting requires monetary policy to work through the interest-rate channel. The interest-rate channel is based on the simple model of money demand given by equation 2. Economists argued that if the central bank increased the nominal money supply, the interest rate would fall, because inflation and real output would respond more slowly than the interest rate to an increase in the money supply. Consequently, the initial effect of an increase in the money supply would be reflected in lower nominal interest rates. However, this is only the initial effect. Increases in output or inflation will increase the quantity of money demanded, which would cause the nominal interest rate to rise. Indeed, as noted previously, if inflation expectations increased immediately and sufficiently following an increase in the money supply, the nominal rate would rise, not fall. In any event, if monetary policy neutrality holds, the real rate must return to the level determined by economic fundamentals, so ultimately, the nominal interest rate will be higher than before by the amount of the increase in inflation expectations. Some economists are skeptical of the simple money demand story of how central banks can affect interest rates because the interest rate is the price of credit, not the price of money. The price of money is the price level. They argue that to affect the interest rate, central banks have to change the supply of credit, not the supply of money. The credit market is very large. Hence, central banks would need to produce a large change in the supply of credit to have a significant effect on interest rates. However, central bank actions have typically had a relatively small effect on the supply of credit.
So the relevant question is, how could central banks’ actions (making loans to banks or purchasing securities), which have been historically small relative to the size of the credit market, have a significant effect on interest rates? Basic economics suggests that a small change in the supply of credit should result in a correspondingly small change in interest rates. Proponents of the interest-rate channel suggested that it could happen because, though small, central bank actions have a relatively large effect on the supply of funds in the interbank market and, hence, the interbank lending rate—the rate one bank pays another for short-term loans (typically overnight loans).7 But there are a myriad of credit market instruments and, therefore, a myriad of interest rates. How can the large effect on one interest rate—the interbank rate—affect other interest rates? The answer from those who believe in the interest-rate channel is that the effect of a change in the interbank rate is translated to other interest rates because long-term rates are determined by the expectations hypothesis (EH) of the term structure of interest rates. The EH posits that longer-term interest rates are determined by market participants’ expectation of the short-term rate over the holding period of the longer-term asset plus a constant term premium (aka risk premium). The term premium is the extra return investors require for holding longer-term securities that have more market risk (aka duration risk; duration is a measure of the sensitivity of a bond’s price to changes in the interest rate) than an otherwise equivalent short-term security.
As used in analyses of monetary policy and tested in the finance literature, the EH asserts that the long-term rate is equal to the average of today’s expectation of the short-term rate over the maturity of the long-term bond plus a constant, maturity-specific term premium. That is,
i_t^n = (1/h) Σ_{j=0}^{h−1} E_t i_{t+mj}^m + π_{n,m},  (3)
where n and m are integers denoting the maturities of the long- and short-term bonds, respectively; n > m; and h = n/m. E_t denotes market participants’ expectation at time t, and π_{n,m} denotes the constant, maturity-specific term premium. The EH holds for all combinations of n and m where h is an integer. If rates are determined by the EH, central banks can have a relatively large effect on a wide range of interest rates across the term structure simply by controlling the short-term rate (i.e., the overnight interbank rate). Hence, interest-rate targeting is focused on controlling the interbank rate. The best-known interest-rate targeting procedure is an interest-rate rule suggested by John Taylor (1993) called the Taylor rule. Taylor-style rules are popular in theoretical and empirical discussions of monetary policy. The rule takes the general form
$$ i_t^p = r + \pi^T + \lambda(\pi_t - \pi^T) + \beta(y_t - y^p), \qquad (4) $$
where $i_t^p$ denotes the policy rate, the interest rate that policymakers use to implement monetary policy; r denotes the equilibrium real rate of interest; $\pi_t$ and $\pi^T$ denote the actual and targeted rates of inflation, respectively; and $y_t$ and $y^p$ denote actual and potential output, respectively. The coefficients λ and β are both positive and reflect the amount by which policymakers adjust the policy rate in response to deviations of inflation from the target and output from potential output, respectively. If inflation is at the target level and output equals potential output, policymakers would set the nominal policy rate equal to the real rate of interest plus the targeted rate of inflation; monetary policy neutrality is implicit in this rule. In the New Keynesian economic model, inflation is determined by the so-called expectations-augmented Phillips curve. Hence, from the perspective of the New Keynesian model, the coefficient β is determined by two things: the coefficient on the output gap in the expectations-augmented Phillips curve and the policymaker’s preference for stabilizing inflation around the inflation target relative to stabilizing output around potential output. In the context of the New Keynesian model, policymakers would adjust the policy rate in response to deviations in the output gap even if they cared only about stabilizing inflation.8 In this case, the Taylor rule coefficient on the output gap would be the coefficient on the output gap in the Phillips curve equation. The EH also forms the basis for another strategy known as forward guidance (FG). FG is an outgrowth of Woodford’s (1999) work on monetary policy inertia. Like most economists, Woodford acknowledges that business and consumer spending decisions
204 Thornton depend on long-term rates and notes that “the effectiveness of changes in central-bank targets for overnight rates in affecting spending decisions (and, hence, ultimately pricing and employment decisions) is wholly dependent upon the impact of such actions upon other financial-market prices, such as longer-term interest rates, equity prices, and exchange rates” (Woodford 2012, 308). Most economists believe that the link between the interest rate that central banks influence and long-term rates is weak. For example, in his July 20, 1993, congressional testimony, Alan Greenspan observed that “currently short-term interest rates, most directly affected by the Federal Reserve, are not far from zero: longer-term rates, set primarily by the markets, are appreciably higher.” Woodford suggested that policymakers could have a larger effect on longer-term yields if they could commit to keep their policy rate at a given level for a longer period of time than the market would otherwise expect (the so-called Woodford period). Woodford’s suggestion implies that the poor performance of the EH historically is due to the fact that short-term rates have been very difficult to predict. FG is essentially a strategy to make short-term rates more predictable. Many central banks began implementing monetary policy using the interbank rate in the late 1980s or later. Several central banks, including the Fed, experimented with FG well in advance of the 2007 financial crisis.9 However, the Fed, the European Central Bank (ECB), and several other central banks became more explicit and aggressive in their use of FG after Lehman Brothers announced its bankruptcy on September 15, 2008. Here is how FG is supposed to work. Assume the policy rate is 4 percent and policymakers reduce it to 3 percent. The effect on the long-term rate through the EH is determined by how long the market expects the policy rate to be at 3 percent. 
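The arithmetic of this example can be sketched under the EH. The numbers below are invented: a five-year bond measured in quarters, a cut from 4 to 3 percent, two assumed durations for the cut, and a term premium set to zero for simplicity.

```python
# A sketch (assuming the EH holds) of the forward-guidance logic: the policy
# rate is cut from 4 to 3 percent, and the EH-implied long-term rate depends
# on how long the market expects the cut to last. All numbers are invented.

def eh_rate(path, term_premium=0.0):
    """EH-implied long rate: average of the expected policy-rate path."""
    return sum(path) / len(path) + term_premium

def expected_path(low_rate, quarters_low, normal_rate, horizon):
    """Rate expected at low_rate for quarters_low quarters, then back to normal."""
    return [low_rate] * quarters_low + [normal_rate] * (horizon - quarters_low)

horizon = 20  # a five-year bond, in quarters
no_guidance = eh_rate(expected_path(3.0, 4, 4.0, horizon))     # cut expected to last 1 year
with_guidance = eh_rate(expected_path(3.0, 12, 4.0, horizon))  # cut expected to last 3 years
print(round(no_guidance, 2), round(with_guidance, 2))
```

Lengthening the expected duration of the cut from one year to three lowers the EH-implied five-year rate from 3.8 to 3.4 percent in this sketch, which is precisely the channel FG is meant to exploit.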
FG suggests that the effect on the long-term rate will be larger if policymakers commit to keep the rate at 3 percent longer than they normally would (i.e., longer than market participants would normally expect). The EH is widely featured in modern monetary policy discussions because it forms the basis for modern central bank interest-rate targeting and FG. But this is a relatively recent phenomenon. The classical theory of interest arguably dominated economists’ thinking about how longer-term interest rates are determined well into the 1990s and, for many economists, still does today. The classical theory of interest is a theory of the real long-term rate. Classical economists believed that the real long-term rate was determined by real factors: individuals’ rate of time preference and the marginal efficiency of capital. The rate of time preference sets a lower bound for the long-term rate. The expected return on the marginal unit of capital determines the long-term real rate, subject to this lower bound. Short-term interest rates were thought to be affected by current economic and financial market conditions but linked to the long-term rate via arbitrage. During periods of economic expansion, when the expected return to investment in real capital was high, the long-term rate would rise, causing the entire structure of rates to shift up. The supply of credit was thought to be relatively inelastic, so the long-term rate would tend to rise relative to the short-term rate. During a period of weak investment opportunities, the long-term rate would decline, pushing the entire rate structure lower. Because
of the relative inelasticity of credit supply, the long-term rate would fall relative to the short-term rate. Long-term rates were most often higher than rates on comparable short-term securities because it was assumed the marginal investor was risk-averse. Hence, the term premium was positive. Risk aversion accounted for the fact that the yield curve was most often upward-sloping. But if investment opportunities were particularly low (such as during a recession), the yield curve could invert; short-term yields could be higher than long-term rates. The reason was that everyone would believe that interest rates were going to decline, so the term premium would be negative even for risk-averse investors. While short-term rates could deviate from long-term rates for a variety of reasons, arbitrage would keep short-term rates from deviating too far from the long-term equilibrium rate. For example, if short-term rates fell too far below the long-term rate, investors would lend long and borrow short. Such arbitrage would cause short-term rates to rise and long-term rates to fall. Likewise, if short-term rates went too far above the long-term rate, investors would borrow long and lend short, causing short-term rates to fall and long-term rates to rise. The relationship between long-term and short-term rates could vary considerably over time and over the business cycle. But the relative behavior of long- and short-term rates was constrained by economic fundamentals, current conditions in financial markets, and market participants’ tolerance for interest-rate risk. The classical theory of interest and the EH are diametrically opposed. The EH asserts that long-term rates are determined solely by market participants’ expectations of the future path of the short-term rate.
In contrast, the classical theory asserts that long-term rates are determined by economic fundamentals and that short-term rates are anchored to the long-term rate by arbitrage. That the classical theory of interest still plays an important role in economists’ thinking is evidenced by the fact that the real rate, r, which is assumed to be independent of central banks’ actions, is explicit in the Taylor rule. However, in such rules, the real rate is assumed to be constant. But there is no reason it should be. Indeed, it should change over the business cycle and for other reasons, such as during periods of technological innovation and strong economic growth. The effectiveness of interest-rate targeting and FG depends on whether long-term rates are determined by the EH. In this regard, it is important to note that the EH has scant empirical support. It has been rejected using different tests and over different sample periods and monetary policy regimes (Campbell and Shiller 1991; Thornton 2005; Sarno, Thornton, and Valente 2007; Della Corte, Sarno, and Thornton 2008).10 Indeed, Woodford’s suggestion that changes in the policy rate could have a larger effect on long-term rates if central bankers could make the short-term rate more predictable is a confirmation of the ineffectiveness of the EH historically. It also suggests that Woodford believes that the failure of the EH is due to the well-known fact that interest rates are notoriously difficult to predict.11 Indeed, nearly all interest rates are closely approximated by the simple autoregressive model,
$$ i_{t+1} = \theta\, i_t + \varepsilon_{t+1}, \qquad (5) $$
where ε denotes a zero-mean, constant-variance stochastic shock (i.e., $\varepsilon_{t+1} \sim f(0, \sigma^2)$). For most interest rates, estimates of θ are close to and not statistically significantly different from 1. That is, the rate in the next period is equal to the rate in this period plus the response of the interest rate to the information received between t and t + 1, which is the source of the unforecastable random shock $\varepsilon_{t+1}$ in equation 5. Market participants know this. Consequently, their expectation of the future short-term rate should be very close to the current short-term rate. For example, if the EH determined the six-month rate, it should be equal to the rate on an otherwise equivalent three-month bond plus a constant term premium. Since the duration risk of a six-month bond is only slightly larger than that of a three-month bond, the constant difference should be small. Hence, if the six-month rate is determined by the EH, the spread between six- and three-month default-risk-free bonds should be relatively small and stable. But this is grossly at odds with the monthly spread between the six-month and three-month Treasury bill rates from March 1960 to October 2016 (see figure 7.3). The figure also shows the date of the beginning of the Federal Open Market Committee’s (FOMC) post-financial-crisis FG. The spread varies considerably, is often very large, and is sometimes negative. Interpreted from the point of view of the EH, this means that the term premium is not constant but rather highly variable. But if the term premium is variable, it is impossible to know whether the EH is correct without knowing detailed information about how and why the term premium varies as it does. Absent such knowledge, saying the term premium is variable is equivalent to saying the EH does not hold. Consistent with the EH, the spread became small and relatively stable during the early part of the FOMC’s
[Figure 7.3 The Spread between the Six-Month and Three-Month T-Bill Rates. A vertical marker indicates the start of the FOMC’s FG.]
FG, but that ended in late 2014. If the EH doesn’t hold for short-term interest rates whose maturities differ by only three months, it is hard to believe that it holds for longer-term rates whose maturities differ by many years. Despite this and its empirical failure, many, if not most, economists believe that long-term rates are determined by the EH. This belief likely reflects the lack of an alternative theory of the term structure. Moreover, by their very nature, financial markets are forward-looking, and the EH is consistent with this fact. Because of the EH’s dominance in economics and finance, it is important to realize that the EH is not really a theory; it’s an intertemporal arbitrage condition. This is illustrated by a simple example: A person can purchase a two-year bond or, alternatively, purchase a one-year bond today and then purchase another one-year bond one year from today. The current rate on a one-year bond, $i_t^{(1)}$, is known. If the rate on a one-year bond next year were also known with certainty, the interest rate on an otherwise equivalent two-year bond would have to equal the average of the rates on the two successive one-year bonds plus a premium for the fact that the two-year bond has more interest-rate risk. If this were not the case, one could make a profit by arbitrage. Of course, next year’s rate on a one-year bond, $i_{t+1}^{(1)}$, is not known. Assume each market participant has an expectation of it. Let the kth market participant’s expectation for next year’s rate be given by $E_t^k(i_{t+1}^{(1)})$, where $E_t^k$ denotes today’s expectation of the kth market participant.
According to the EH, the kth market participant would purchase a two-year bond today if and only if its yield is greater than or equal to $i_t^{(2)} = 0.5[i_t^{(1)} + E_t^k(i_{t+1}^{(1)})] + \pi^{2,1}_k$, where $\pi^{n,m}_k$ is determined by the kth market participant’s risk aversion and the interest-rate risk (i.e., the duration) of the n-period bond relative to that of the m-period bond. This is the intertemporal arbitrage requirement for the kth market participant. But this intertemporal arbitrage requirement must hold for all market participants and for all possible combinations of long-term and short-term rates, that is, for all possible combinations of n and m. The observed rates are determined by the actions of all market participants. Hence, if the EH holds, the long-term rate would be determined by
$$ i_t^n = f\!\left(\frac{1}{h}\sum_{j=0}^{h-1} E_t^k\, i_{t+mj}^{m} + \pi^{n,m}_k\right), \qquad (6) $$
where $f(\cdot)$ is an unknown and unspecified aggregator function. The EH is not a theory, because it provides no information about the aggregator function. Rather, it implicitly assumes that all market participants have the identical expectation for the future short-term rate and have the identical term premium requirement. Without specific information about the aggregator function, it is impossible to know the market-determined value of $i_t^n$ or the market-determined term premium. Indeed, if the pool of market participants changes over time, there is no reason to suspect that the market-determined term premium is constant. It would be constantly changing, which, of course, means that the EH cannot hold. The determination of the long-term rate is even more complicated than equation 6 suggests, because the extent to which any market participant’s actions affect the
long-term rate depends on the relative size of each participant’s investment. Hence, the expectations of some market participants are more important than those of others for determining $i_t^n$. There is no doubt that investors are forward-looking. But this does not mean the long-term rate is determined by an amorphous expectation of the short-term rate or that it should be useful for guiding monetary policy. Furthermore, it is well known that interest rates are well represented by equation 5. Hence, it is reasonable to assume that market participants would use equation 5 to formulate their expectation of the future short-term rate. If so, $E_t^k(i_{t+mj}^{m}) \approx i_t^m$ for all k. If all market participants based their expectation of the short-term rate on equation 5, the term structure would be relatively flat. Nearly all of its shape would be due to the term premium. That it is unlikely that long-term rates are determined by the EH is not the only problem with FG. Woodford (2012) emphasizes that for FG to be effective, market participants must believe that the central bank will not renege on its FG commitment. Two types of commitment strategies have been discussed in the literature. One is called time-contingent FG: promising that the rate will remain at its new level until a specific future date. Of course, the date must be farther in the future than market participants would otherwise expect. The other is called state-contingent FG: for example, promising to keep the rate at the new level until economic growth, inflation, or the unemployment rate reaches some specific level.12 In models where it is possible to determine the optimal monetary policy, state-contingent FG is found to be more effective because it provides a more credible commitment.
This conclusion arises because, in the context of such models, effective FG not only requires policymakers to commit to keeping the policy rate lower than it would have been otherwise, but they must also commit to allowing inflation to be higher than it would have been otherwise. Without the latter commitment, policymakers would find it difficult to resist the temptation to raise rates earlier than promised to avoid a higher rate of inflation.13 While the necessity of committing to allow inflation to be higher is critical in such models, it’s not clear that it is essential in real-world policymaking. By almost any metric, central banks have kept their policy rates low for a long time, yet inflation appears to be well contained. If anything, it seems that many central banks have had difficulty getting inflation up to the target level. In the case of state-contingent FG, it is important that the state-contingency criterion be something that policymakers’ actions significantly affect. The importance of this aspect of state-contingent FG was demonstrated when the FOMC announced at its December 2012 meeting that it would keep its policy rate at zero until the unemployment rate went below 6.5 percent. Not only is the unemployment rate not directly affected by policy actions, but it is also determined by factors that policy actions don’t affect. I noted this problem in early 2013, pointing out that much of the decline in the unemployment rate that had occurred already was due to an unprecedented decline in the labor force participation rate and not to a large increase in employment.14 The labor force participation rate continued to decline. Consequently, the unemployment rate approached the critical 6.5 percent level much sooner than the FOMC had anticipated.
The FOMC removed this contingency from its FG language in the policy statement at its March 2014 meeting, saying that in deciding when to raise its policy rate, the Committee will take into account “a wide range of information, including measures of labor market conditions, indicators of inflation pressures and inflation expectations, and readings on financial developments.” In effect, the FOMC replaced its state-contingent FG with no FG commitment. It is worth noting that, as a practical matter, it is not clear that it matters whether FG is time- or state-contingent. Policymakers understand that the effectiveness of FG depends solely on how long the rate is expected to be at its new level and that the period must be longer than market participants would otherwise expect. Consequently, they will choose a time-contingency or state-contingency criterion that they believe will achieve this objective. Finally, there is a potential problem: FG might be unstable. To illustrate this potential problem, assume that policymakers have traditionally raised the policy rate when output reached its previous peak, so they pledge to keep the policy rate low until output is 125 percent of its previous peak. Now assume that the economy heats up and policymakers become concerned about inflation. They cannot renege on their commitment; if they do, FG will not be effective in the future. If, by the time output reaches 125 percent of its previous peak, inflation is higher than policymakers’ objective, they will raise the policy rate. But for the policy to be effective, policymakers will have to commit to keeping the policy rate higher longer than they normally would. But doing so could cause a larger-than-desired fall in output. If this were to happen, policymakers would have to reduce the policy rate, again committing to keep the policy rate lower than they normally would. But the new normal is 125 percent of the previous peak.
Consequently, FG could be destabilizing. In closing this section, I discuss two more points related to using the interest rate as a policy instrument. The first is the fact that credit is difficult to price. Unlike physical goods, whose production costs are relatively high and provide a lower bound to the price, the production costs of credit are minimal. Consequently, the market looks to other information to price credit. Given the widespread belief that central banks can control interbank policy rates and that their actions have a significant effect on other rates, it is not surprising that the interest-rate policies of central banks have a larger effect on interest rates than the effect of their actions on the supply of credit would suggest. The difficulty in pricing credit also explains why many rates are tied to other rates, for example, why many interest rates were tied to the London interbank offered rate (LIBOR), a rate that is based on a survey and is not market-determined.15 Second, while it is widely believed that central banks control interbank rates, there is much less agreement on how they achieve this. Some central banks use a corridor system: they lend freely at one rate and borrow freely at a lower rate. However, the corridor is often wide and is adjusted frequently. Moreover, it is difficult to determine whether adjustments to the corridor are due solely to policy actions or whether policymakers are simply adjusting the corridor because market rates are changing.
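The corridor mechanism just described can be sketched as follows; the floor, ceiling, and market rates below are invented for illustration.

```python
# A minimal sketch of a corridor system: the central bank lends freely at a
# ceiling rate and accepts deposits (borrows) freely at a lower floor rate,
# so the interbank rate is confined to the corridor. Rates are illustrative.

def interbank_rate(market_clearing_rate, floor, ceiling):
    """No bank lends below the floor or borrows above the ceiling, so the
    traded rate is the market-clearing rate clamped to the corridor."""
    return max(floor, min(ceiling, market_clearing_rate))

floor, ceiling = 1.0, 2.0
print(interbank_rate(1.4, floor, ceiling))  # inside the corridor
print(interbank_rate(2.7, floor, ceiling))  # capped at the ceiling
print(interbank_rate(0.3, floor, ceiling))  # supported at the floor
```

Because banks can always borrow at the ceiling and deposit at the floor, no interbank trade should occur outside the corridor; where the rate settles within the corridor, however, still depends on market conditions, which is why a wide corridor leaves the question of rate control open.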
[Figure 7.4 The Fed’s Funds Rate Target and the Effective Funds Rate, showing the effective federal funds rate and the FOMC’s funds rate target.]
Since the late 1980s, the FOMC has implemented monetary policy by targeting the effective federal funds rate. Figure 7.4 shows the funds rate and the FOMC’s funds rate target daily from May 1, 1988, to December 15, 2008, the date when the FOMC reduced the target to a range between 0 and 25 basis points. The funds rate stayed very close to the target over the period. During the early part of the period, the FOMC would signal its target for the funds rate. At its February 4, 1994, meeting, the FOMC began the practice of announcing when it took a policy action. The relationship between the funds rate and the FOMC’s target tightened somewhat, despite the fact that the FOMC did not mention the funds rate explicitly in these statements. Gradually, the FOMC’s statements became increasingly specific about the funds rate target. However, the FOMC did not explicitly say it was targeting the funds rate until its June 30, 1999, meeting. The relationship between the funds rate and the target became noticeably tighter after this date (denoted by the vertical line in figure 7.4). Some economists and even policymakers believe the Fed kept the funds rate close to the target by conducting daily open market operations. However, figure 7.4 shows that after June 1999, not only did the funds rate track the target more closely, but it also moved to the new target level the day the announcement was made; open market operations weren’t conducted until the next day. Controlling the funds rate without open market operations was dubbed open mouth operations by Guthrie and Wright (2000), who found that the Reserve Bank of New Zealand (RBNZ) was able to control its policy rate simply by announcing the target; open market operations were not required.
The evidence that the FOMC controlled the funds rate through open mouth operations rather than open market operations, and that the Fed never controlled the funds rate through open market operations, is supported by research. Specifically, I show that daily open market operations had essentially no effect on the federal funds rate (Thornton 2007). This conclusion is also supported by the fact that there is no evidence of a daily liquidity effect (see Hamilton 1997 and Thornton 2001b). Evidence of a liquidity effect has been found using nonborrowed reserves, the only monetary aggregate found to produce a liquidity effect (Christiano and Eichenbaum 1992; Christiano, Eichenbaum, and Evans 1996a and 1996b; Strongin 1995; Pagan and Robertson 1995). However, I show that these authors’ finding was due to the fact that the Fed sterilized the effect of an increase in discount window borrowing on total reserves by selling an amount of securities equal to the amount of discount window lending (Thornton 2001a). When this fact is accounted for, the liquidity effect vanishes.16 One possible reason open market operations did not have a significant effect on the funds rate is that the daily open market operations were too small. For example, the average of the absolute value of the biweekly change in total reserves over the period was just $1.2 billion, hardly enough to produce a significant move in the funds rate. This conclusion is also supported by the fact that the funds rate went to zero well in advance of the FOMC reducing the target to zero on December 15, 2008, because of the Fed’s massive lending following Lehman’s bankruptcy announcement (see Thornton 2010a). It took a massive increase in total reserves to push the funds rate to zero.
Total reserves increased by $780 billion between the two weeks ending September 10, 2008, and the two weeks ending December 17, 2008, well in advance of the FOMC reducing the target to zero.17 Hence, it is unlikely that the small changes in total reserves that occurred prior to Lehman’s announcement could have had any significant effect on the funds rate. Another reason it is unlikely that open market operations had any significant effect on the funds rate is the fact that beginning in the late 1960s, large banks began using the federal funds market to finance some of their lending (see Meulendyke 1998, 38). Hence, if the FOMC attempted to increase the funds rate by reducing the amount of loanable funds available in the federal funds market, banks would respond by financing more of their lending with CDs and less with federal funds. The decline in the availability of federal funds would be accompanied by a reduction in the demand for federal funds. Indeed, I have shown why it would be difficult, if not impossible, to control the funds rate through open market operations (Thornton 2014b and 2015b). The reason is that CDs and federal funds are alternative ways large banks finance their lending. Hence, arbitrage between the CD rate and the federal funds rate and between the CD rate and other short-term rates would require the Fed to engage in massive open market operations to change the funds rate. Historically, the Fed’s open market operations have been nowhere near the size required to change the equilibrium funds rate.
7.4 Strategies Not Tied to a Specific Channel of Policy

This section discusses three strategies that are not directly tied to a specific channel of monetary policy: inflation targeting, nominal GDP targeting, and quantitative easing (QE). In theory, these strategies could be implemented with either the money supply or the interest rate. Indeed, they could be implemented through the exchange-rate, credit, or wealth channels of policy, though these channels are not discussed here. The analysis in this section is done in terms of the interest-rate channel of policy, because nearly all central banks currently implement monetary policy through this channel.
7.4.1 Inflation Targeting

A problem that can arise with interest-rate targeting is that policymakers can confuse high nominal interest rates with “tight” monetary policy or low nominal rates with “easy” policy. Indeed, Friedman (1968) suggested that nominal interest-rate targeting would be dynamically unstable, and in some models (e.g., Sargent and Wallace 1975), nominal interest-rate targeting leads to price-level indeterminacy. However, McCallum (1981) showed that price-level indeterminacy goes away if policymakers have controlling inflation as a policy objective. McCallum’s result and monetary policy neutrality led to the strategy of inflation targeting (IT)—that is, conducting monetary policy through the interest-rate channel but specifying a specific objective for inflation. (IT is just as relevant if central banks are implementing monetary policy by controlling the supply of money.) The first central bank to officially implement IT was the RBNZ in 1990. As of 2016, sixty-one central banks had adopted formal inflation targets. Some target a specific inflation rate, others target a range for inflation, and still others, such as the ECB, set an upper limit for inflation (e.g., less than 2 percent). Proponents of IT argue that in addition to stabilizing inflation, it has the side benefit of producing a higher and more stable level of output. Output is higher because resources are used for production and not for mitigating the effects of high and variable inflation and inflation uncertainty. The belief that low and stable inflation is beneficial stems both from economic theory and from experiences such as the Great Inflation, when high and variable inflation was accompanied by relatively slow output growth, and the so-called Great Moderation (the period from about 1984 to the start of the most recent financial crisis in August 2007), when relatively high and stable output growth was accompanied by relatively low and stable inflation.
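The three ways of specifying an inflation target mentioned above (a point target, a target range, and an upper limit) can be sketched as simple checks; the regimes and the 2.4 percent inflation reading below are invented for illustration.

```python
# Three common ways of specifying an inflation target, sketched as simple
# checks on a realized inflation rate. All numbers are illustrative.

def on_target(infl, regime):
    kind, *params = regime
    if kind == "point":      # e.g., 2 percent, with a small tolerance band
        target, tol = params
        return abs(infl - target) <= tol
    if kind == "range":      # e.g., 1-3 percent
        lo, hi = params
        return lo <= infl <= hi
    if kind == "ceiling":    # e.g., "below 2 percent"
        (limit,) = params
        return infl < limit
    raise ValueError(kind)

print(on_target(2.4, ("point", 2.0, 0.5)))   # within the tolerance band
print(on_target(2.4, ("range", 1.0, 3.0)))   # inside the range
print(on_target(2.4, ("ceiling", 2.0)))      # above the ceiling
```

The same inflation outcome can count as a success under one specification and a miss under another, which is one reason cross-country comparisons of IT performance require care.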
Some proponents of stable prices advocate price-level targeting (PLT) over IT. They argue that central banks could do better if they targeted the price level rather than the inflation rate. Of course, a 2 percent inflation target is equivalent to targeting the price
level to increase by 2 percent over the same time period. Proponents of PLT acknowledge this but argue that PLT avoids base drift, the upward drift of the price level that would occur with IT if the central bank tended to miss its inflation target on the high side. Moreover, they argue that removing uncertainty about the future price level is important, because the price level is what people are concerned about when planning for the future (e.g., making investment decisions, financial planning, retirement planning). In short, PLT is more likely to produce the objective of “price stability” than is IT. Others suggest that PLT could lead to greater instability. This could happen because unexpected increases in prices (not associated with monetary policy) would have to be offset by a central-bank-induced deflation to stabilize the price level. In any event, no central bank has adopted PLT. This may be because IT appears to have been relatively successful, so central banks are reluctant to change to PLT. It could also be because the case for PLT has been advanced in the context of economic models where economic agents are fully rational and policymakers behave optimally.18
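The base-drift argument can be sketched with invented inflation outcomes: under IT a one-time overshoot is never reversed, so the price level ends up above the path a price-level targeter would be obliged to restore.

```python
# Sketch of "base drift": under inflation targeting (IT), a bygone overshoot
# is bygone, so the price level drifts up; under price-level targeting (PLT),
# the target path itself must be restored. Inflation outcomes are invented.

target = 2.0                      # percent per year
realized = [2.0, 3.0, 2.0, 2.0]  # one 1-point overshoot in year 2

# IT: the price level compounds whatever inflation actually occurred.
p_it = 100.0
for pi in realized:
    p_it *= 1 + pi / 100

# PLT: the target path after four years of 2 percent growth.
p_target = 100.0 * (1 + target / 100) ** len(realized)

print(round(p_it, 2), round(p_target, 2))  # IT level ends above the PLT path
```

In this sketch, a single 3 percent year leaves the IT price level about one point above the PLT path permanently, which is exactly the drift PLT proponents want to avoid.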
7.4.2 Nominal GDP Targeting

Some economists believe that price stability and economic stability could be achieved if central banks targeted the growth rate of nominal income (nominal GDP), a strategy known as nominal income targeting (NIT).19 The idea is that central banks adopt a target growth rate for nominal GDP that reflects both the desired long-term rate of inflation and the growth rate of potential output. For example, if the desired rate of inflation is 2 percent and potential output growth is 3 percent, policymakers would set the nominal growth target at 5 percent. Policymakers would then adjust the policy instrument (the money supply or the interest rate) in such a way as to keep the growth rate of nominal GDP in a narrow range around 5 percent. NIT would enable policymakers to stabilize inflation and output simultaneously—that is, achieve the so-called dual mandate. Proponents of NIT argue that it would produce better outcomes than either IT or PLT. The reason is that policymakers would respond exactly the same way to unexpected changes in aggregate demand as they would under PLT or IT: both prices and output would be rising, so under any of the three strategies, policymakers would increase the policy rate or reduce the money supply. However, policymakers would respond differently under NIT than under either PLT or IT to unexpected changes in aggregate supply (i.e., instances when prices and output move in opposite directions). In such instances, whether nominal GDP increased or decreased would depend on whether prices increased at a faster or slower rate than the rate of decline in output. If inflation increased faster, policymakers would increase the policy rate under NIT; if it increased slower, policymakers would reduce the policy rate under NIT. Under PLT or IT, policymakers would increase the rate regardless of what was happening to output growth.
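The differing responses to a supply shock can be sketched as follows; the targets and the shock numbers are invented, and the decision rules are deliberately crude (raise or cut only).

```python
# Sketch of how the policy-rate response differs under nominal-GDP targeting
# (NIT) and inflation targeting (IT) after a supply shock, when inflation and
# output growth move in opposite directions. Numbers are made up.

def nit_response(infl, growth, ngdp_target):
    """Raise the rate only if nominal GDP growth (infl + growth) exceeds target."""
    return "raise" if infl + growth > ngdp_target else "cut"

def it_response(infl, infl_target):
    """Raise the rate whenever inflation is above target, regardless of output."""
    return "raise" if infl > infl_target else "cut"

# Adverse supply shock: inflation 4 percent, output growth -1 percent;
# targets are 5 percent nominal GDP growth and 2 percent inflation.
print(nit_response(4.0, -1.0, 5.0))  # nominal GDP grows 3 percent, below target
print(it_response(4.0, 2.0))         # inflation is above target
```

In this sketch, nominal GDP grows 3 percent, below the 5 percent target, so the NIT policymaker cuts while the IT policymaker raises, matching the contrast drawn above.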
Proponents of NIT argue that wages are sticky: during bad times, unemployment tends to rise because firms lay off workers rather than reduce nominal wages, so that
output will fall faster than prices will rise.20 Under NIT, policymakers would reduce the policy rate because the growth rate of nominal GDP would be below the target level. Under IT, policymakers would raise the policy rate if inflation went above the target. Some economists suggest that it would be better to target a specific path for nominal GDP rather than the growth rate. This would eliminate the problem of base drift if inflation was more often than not above the target. In this case, policymakers would respond to deviations from the nominal output path. While NIT works well in models where inflation, output, employment, and other variables are mechanically linked and where potential output is well specified, there are a number of practical problems with implementing NIT in the real world. For one thing, potential output is not a well-defined physical measure like temperature. It’s a concept that is defined and estimated in different ways. Furthermore, regardless of how it’s estimated, estimates of potential output tend to track historical real GDP. For example, figure 7.5 shows the Congressional Budget Office’s (CBO) estimate of potential output for the period since 1949. The figure also shows real GDP and a trend line estimated over the period 1949–1994 and extrapolated to 2025.21 The CBO began ratcheting up its estimate of potential output when actual output grew above its previous trend and has been ratcheting its estimate down since 2008 because output has been consistently below its former estimates. In many advanced economies, output growth has trended down, so estimates of potential output necessarily are revised down as well.22 Consequently, central banks would have to continuously revise their nominal GDP targets with persistent changes in output growth, assuming they can recognize the change.
[Figure 7.5 The CBO’s Estimate of Potential Output. The figure plots quarterly data from 1949 to 2025: the CBO’s estimate of potential output, real GDP, and the trend line.]
I believe one of the biggest weaknesses of NIT is what is commonly cited as its major advantage, namely, that policymakers do not have to decompose the observed change in nominal GDP into inflation and the change in output. NIT implicitly assumes policymakers are indifferent between output growth and inflation. But this seems unlikely. It’s more likely that policymakers’ preferences with respect to inflation and output are lexicographic. That is, policymakers are likely to accept a high level of output (or a high rate of output growth) as long as inflation is within the target range. Indeed, the lack of indifference between output and inflation is central to Kydland and Prescott’s (1977) model of time inconsistency (discussed later, in section 7.5). Moreover, output and the growth rate of output cycle around estimates of potential output or its growth rate. Policymakers would like to truncate cyclical swings in output below potential but would be much less willing to dampen cyclical swings above potential if inflation is not too far from their objective. The reluctance to take actions to limit the rate of output growth when inflation is within or not too far from the desired level is reinforced by the fact that high levels or growth rates of output are generally accompanied by low or declining rates of unemployment. Policymakers would be reluctant to take actions that would be seen as increasing the unemployment rate as long as inflation is within or close to the desired level. There is also the problem that NIT assumes that there is a nice, regular, positive relationship between output or output growth and inflation consistent with the output gap/Phillips curve theory of inflation. This assumption is inconsistent with the inability of the Phillips curve model to explain, let alone predict, inflation. This is why NIT tends to work well in models and why it is not likely to work well in real-world policymaking.
In any event, to my knowledge, no central bank has adopted the NIT strategy.
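The lexicographic preference described above can be written as a small decision rule (purely illustrative; the 2 percent target and half-point tolerance band are my assumptions, not anyone's stated reaction function):

```python
def lexicographic_stance(inflation, output_gap, target=0.02, band=0.005):
    """Stylized lexicographic policymaker, per the argument in the text.

    Inflation comes first: act whenever it is outside the tolerance
    band. Only when inflation is acceptable does the output gap
    matter, and then only a shortfall below potential prompts action;
    cyclical swings above potential are tolerated.
    """
    if inflation > target + band:
        return "tighten"
    if inflation < target - band:
        return "ease"
    if output_gap < 0:
        return "ease"
    return "hold"  # output at or above potential, inflation acceptable

# With inflation near target, a boom is tolerated but a slump is not.
print(lexicographic_stance(0.021, 0.03))   # output 3% above potential
print(lexicographic_stance(0.021, -0.02))  # output 2% below potential
```

Such a policymaker is plainly not indifferent between the inflation and output components of nominal GDP, which is the asymmetry NIT assumes away.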
7.4.3 Quantitative Easing

The strategy known as QE garnered worldwide attention when the Bernanke Fed adopted it following Lehman’s bankruptcy announcement. However, its origins are much earlier. At the urging of some US economists, the Bank of Japan (BoJ) implemented QE in 2001. Specifically, the BoJ purchased a large quantity of Japanese government bonds in an attempt to increase the money supply and, thereby, jump-start the Japanese economy, which was mired in a prolonged period of slow growth and low inflation. The BoJ’s QE program was ineffective. Banks accumulated much of the funds supplied by the BoJ’s bond-buying program as excess reserves. While M1 increased significantly, there was no dramatic increase in either output or inflation. The BoJ dropped the policy and returned its balance sheet to a more normal level in 2006. The Federal Reserve, under Chairman Ben Bernanke, began using QE on November 25, 2008, when the Board of Governors announced the Fed would purchase $600 billion of securities: $100 billion of agency debt and $500 billion of agency-backed mortgage-backed securities (MBS). The Board announced that this action was taken to reduce the cost and increase the availability of funds for purchasing homes and improve credit
market conditions generally. This action garnered relatively little attention. QE didn’t gain prominence until March 18, 2009, when the FOMC announced that the Fed would purchase an additional $1.15 trillion of securities, composed of agency-backed MBS ($750 billion), agency debt ($100 billion), and long-term Treasury securities ($300 billion), for a grand total of $1.75 trillion. This action was called QE1. Bernanke was well aware of the BoJ’s QE experience. But he argued that the Fed’s QE program was different. He noted that the BoJ’s program was concerned only with the size of the balance sheet: The BoJ increased its balance sheet with the intent of increasing the money supply. In contrast, Bernanke indicated that the Fed’s QE program was focused on the composition of the balance sheet, not its size. The FOMC’s QE policy was focused on providing credit to specific segments of the financial market and not the financial market generally. The intent was to reduce long-term interest rates, not to increase the supply of money. Indeed, the Emergency Economic Stabilization Act of 2008, passed on October 3, 2008, allowed the Fed to pay interest on excess reserve balances. The Fed began paying banks 25 basis points on all reserves in an effort to entice banks to hold the additional reserves created by QE as excess reserves rather than lend them and, thereby, increase the supply of money. Bernanke (2009) suggested that the Fed’s purchases of long-term debt could reduce long-term interest rates “when markets are illiquid and private arbitrage is impaired,” noting that in such a circumstance, “one dollar of longer-term securities purchases is unlikely to have the same impact on financial markets and the economy” as lending to specific segments of the market.
The theoretical justification for how QE could reduce long-term yields when markets were not illiquid evolved over time.23 Realizing that interest-rate arbitrage along the maturity structure could significantly weaken, if not eliminate, the effect of QE on long-term bond yields, Bernanke and other proponents of QE argued that the market was segmented by term to maturity. Specifically, they argued that holders of long-term securities have a preference for investing in longer-term debt beyond that assumed by conventional finance theory (i.e., beyond that determined by the relative durations of long- and short-term securities and the market participant’s aversion to risk). Bernanke argued that because of market segmentation, the Fed’s purchase of long-term securities would reduce rates on the securities purchased. This effect would spread to other long-term securities via what Bernanke called the portfolio balance channel. Specifically, investors seeking higher long-term yields would sell long-term Treasury securities and purchase corporate and other long-term bonds. This would reduce the price of long-term Treasuries and, hence, raise their yield, and would raise the price of corporate bonds and lower their yield. In this way, QE would achieve the objective of reducing yields on a wide array of long-term securities. The portfolio balance channel is nothing more than basic arbitrage. When investors sell Treasuries, MBS, or agency debt and purchase other long-term bonds, this not only reduces rates on the bonds purchased but also raises rates on Treasuries, MBS, and agencies. How much the former decrease relative to how much the latter increase depends to a large extent on the relative sizes of the respective markets and the size of the
purchase. A $1.75 trillion purchase is large, but the market for long-term bonds is enormous, especially when one realizes that financial markets are highly integrated internationally. Indeed, Bernanke (2013) presented a chart showing the ten-year government bond yields of the United States, Canada, Germany, and the United Kingdom monthly from 2000 to 2013. These yields moved closely together over the entire period but especially so after 2008. Hence, one would have to believe that the Fed’s QE actions had a large effect on Canadian, German, and UK bond yields, too. But this seems unlikely; the purchases were simply too small. It is more likely that other events, not QE, were responsible for the simultaneous decline in sovereign bond yields, including the low interest-rate policies pursued by the Fed, the Bank of England, and the ECB.24 Moreover, several commentators (e.g., Bauer and Rudebusch 2014; Thornton 2017) have noted that the assumption that the long-term markets are severely segmented by maturity is at odds with decades of finance theory and with a considerable volume of empirical evidence. Indeed, there is strong evidence that financial markets are very efficient; that is, markets are segmented only to the extent of the risk characteristics of alternative securities and the risk aversion of market participants. Consequently, there is no reason to believe that arbitrage would be limited solely to long-term securities, as QE’s effectiveness requires. If the long-term bond market is not segmented as Bernanke and others claim, it is unlikely that QE had a significant effect on bond yields generally, let alone long-term bond yields. The Fed’s purchases were small relative to the size of the long-term debt market and tiny relative to the size of the entire market for debt.
These facts may have motivated Bernanke and others to suggest that the effect on long-term yields was most likely due to QE reducing term premiums on longer-term securities. Bernanke’s conclusion was based on his observation (Bernanke 2013) that most of the observed decline in long-term Treasury yields is due to a decline in Board staff estimates of the term premium on ten-year Treasuries. He suggested that “as the securities purchased by the Fed become scarcer, they should become more valuable. Consequently, their yields should fall as investors demand a smaller term premium for holding them.” Bernanke’s statement echoes Gagnon et al.’s (2011) argument that QE reduced term premiums in the Treasury market because the Fed’s purchases removed a large amount of duration risk from the market; that is, QE reduced term premiums (and, thereby, long-term yields) by removing a large quantity of securities with high duration risk. Elsewhere, I pointed out that basic finance theory suggests it is unlikely QE had any effect on term premiums (Thornton 2015b). The reason is that the term premium on any security relative to another security is determined by just two things—the relative durations of the two securities and the degree of risk aversion of market participants—neither of which is determined or affected by the amount of the particular security in the market. Consequently, the Fed’s purchase of $600 billion of ten-year Treasuries cannot affect the term premium on the remaining ten-year Treasuries relative to, say, a one-year Treasury. The remaining ten-year Treasuries have the same duration as before and, hence, the same duration risk. The term premium could decline only if the Fed’s
purchase caused the most risk-averse investors to leave the Treasury market while the least risk-averse investors remained. This seems unlikely. It is more likely that the most risk-averse investors would remain in the default-risk-free Treasury market, while the least risk-averse investors would seek higher yields by purchasing securities with more default risk. In any event, the effect of QE on long-term bond yields depends on something that the Fed can neither control nor predict: who stays in the market and who leaves. Not only is the idea that the market is segmented across the term structure highly questionable but, as I noted elsewhere, if markets were segmented, the FOMC’s FG strategy couldn’t work (Thornton 2010c). The reason is that FG is based on the EH, which requires a high degree of arbitrage across the entire term structure; if markets are segmented across the term structure, FG couldn’t possibly work. Perhaps this is why Bernanke (2014) quipped that “the problem with QE is that it works in practice but doesn’t work in theory.” The problem, of course, is that if it doesn’t work in theory, it won’t work in practice, either.
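The duration point made above can be checked with a back-of-the-envelope calculation. Macaulay duration is a function of a bond's own cash flows and yield alone (the coupon and yield values below are illustrative), so a purchase that shrinks the outstanding stock leaves the duration, and hence the duration risk, of the remaining bonds unchanged:

```python
def macaulay_duration(coupon_rate, yield_rate, years, face=100.0):
    """Macaulay duration of an annual-pay bond.

    Duration depends only on the bond's cash flows and the discount
    rate; it does not depend on how many such bonds are outstanding,
    which is the crux of the argument in the text.
    """
    cashflows = [coupon_rate * face] * years
    cashflows[-1] += face  # principal repaid at maturity
    pv = [cf / (1 + yield_rate) ** t
          for t, cf in enumerate(cashflows, start=1)]
    price = sum(pv)
    return sum(t * v for t, v in enumerate(pv, start=1)) / price

# A ten-year 3% coupon bond at a 3% yield versus a one-year bond
# (illustrative parameters, not market data):
d10 = macaulay_duration(0.03, 0.03, 10)
d1 = macaulay_duration(0.03, 0.03, 1)
print(f"10-year duration: {d10:.2f} years; 1-year duration: {d1:.2f}")
# A Fed purchase of $600 billion of ten-year Treasuries changes
# neither number: the remaining bonds' cash flows, and so their
# duration risk, are exactly what they were before the purchase.
```

Nothing in the calculation references the quantity outstanding, which is why, on this view, removing duration from the market cannot by itself change the term premium on the bonds that remain.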
7.5 Rules versus Discretion in the Conduct of Monetary Policy

Many of the strategies discussed previously are in some way linked to a more general strategy for conducting monetary policy called rule-based monetary policy. Hence, a discussion of the use of rules to conduct monetary policy is important for a complete discussion of strategies for conducting monetary policy. Rule-based policy also has implications for other ideas, such as monetary policy transparency.25 The idea of rule-based monetary policy dates back to Simons (1936), who was the first to mention the usefulness of rules in the conduct of monetary policy. However, the context was much different from today’s. The modern debate over rules versus discretion in the conduct of monetary policy is largely due to Kydland and Prescott’s (1977) article on time inconsistency. Time-inconsistent behavior arises from the fact that decisions that are optimal at one point in time may not be optimal at another. For example, monetary policymakers know that it is socially optimal to keep inflation low, so they will commit to a policy of low inflation. But if they believe that their efforts to reduce inflation will reduce output and increase unemployment, they will have an incentive to renege on their commitment at some future date. Of course, rational economic agents will expect policymakers to renege. In a world where monetary policy neutrality holds, the result will be a higher long-run rate of inflation with no gains in either output or employment. Hence, the optimal policy outcome can be achieved only by adopting a monetary policy rule. Conclusion: rules are preferred to discretion. However, there are many reasons for time inconsistency in real-world policymaking that don’t necessarily imply that rules are preferred to discretion. In Kydland and Prescott’s (1977) paper, time inconsistency arises in a model where the basic structure
of the economy does not change over time. But time inconsistency is more likely to arise in real-world economies, where the economy is always changing: new technologies and products drive out old, and wars affect economies in a variety of ways that are often persistent—verging on permanent. Unexpected economic events such as stock-market crashes and financial crises have persistent effects as well. Furthermore, for good or ill, economists change their views about the economy, how policy works, and the effectiveness of policy. In real-world policymaking, a rule that seemed optimal at one time might not seem optimal at another. Recall that many economists argued that controlling the money supply was important, if not essential, for controlling inflation. While some economists still believe that money is important for inflation, few argue that controlling money is essential for controlling inflation. Those who do may not agree on the measure of money to use or how to control it. Moreover, while it is easy to determine the optimal policy rule in models like Kydland and Prescott’s, it is difficult—if not impossible—to know what the optimal policy rule would be in real-world policymaking. For example, in the case of inflation, Friedman (1969) argued that the optimal rate of inflation is equal to the negative of the equilibrium real interest rate, so that the nominal interest rate would be zero, the idea being that a zero nominal interest rate would eliminate the incentive to economize on money balances and would result in the optimal quantity of money. While Friedman’s suggestion has been shown to work well in economic models, no central bank has chosen to target a negative rate of inflation. Some economists believe that zero is the optimal inflation rate.
Indeed, when asked by Board Governor Janet Yellen at the July 1996 FOMC meeting what inflation rate he considered ideal, Fed Chairman Greenspan responded, “I would say the number is zero, if inflation is properly measured.” Many economists agree. However, at least one economist suggested that even though zero is the optimal inflation rate, once inflation is above zero, society is better off if it tolerates a little inflation rather than bearing the cost necessary to achieve price stability.26 Still others argue that “moderate” inflation is optimal. Despite the fact that the theoretical case for a positive long-term rate of inflation is extraordinarily weak, every inflation-targeting central bank has a positive inflation target.27 Furthermore, rule-based policymaking implies that policymakers would know a specific, implementable rule that would achieve the desired, if not optimal, policy result. For example, what monetary policy rule will produce a long-run inflation rate of zero or 2 percent? As noted previously, economists have two theories of inflation: (1) inflation is the result of excessive money growth; (2) inflation is a function of the gap between actual and potential output, the Phillips curve theory. However, proponents of these theories have been unable to produce measures of money or the output gap that are closely, persistently, or predictably linked to inflation. Policymakers will find it difficult to specify a rule that will achieve their desired inflation outcome. Furthermore, policymakers will find it difficult to implement a particular policy rule even if they could agree on one. This problem is illustrated using the Taylor rule given by equation 4. To illustrate why discretion is needed, assume that policymakers have agreed to use this rule and that they know r and y^p and the optimal values of λ and β,
or they agree on values to use in the rule. Now, assume that y decreases unexpectedly. If policymakers follow the rule, they will reduce the policy rate by β times the change in the output gap. However, if the decline in output was due to a one-time increase in the world price of oil, policymakers may not want to reduce the policy rate. It might be better to do nothing. Conversely, if an unexpected increase in output was due to an unexpected increase in productivity, policymakers might not want to raise the rate. In real-world policymaking, policymakers need to evaluate the sources of unexpected changes in output or inflation and whether such changes are permanent or temporary. The usefulness of policy rules for implementing policy becomes even more doubtful when one realizes that there is considerable uncertainty about the magnitudes of r, λ, β, and y^p that will enable policymakers to achieve their inflation objective. Such uncertainty suggests that the interest-rate rule should be written as
i_t^p = r + π^T + λ(π_t − π^T) + β(y_t − y^p) + ζ_t,  (7)
where ζ_t denotes a random variable that reflects the cumulative uncertainty about λ, β, r, and y^p and, hence, the uncertainty about where to set the policy rate. Uncertainty about such magnitudes will make it difficult, if not impossible, for policymakers to agree on a specific rule. There is also the fact that policymakers seem to care about things that cannot be easily quantified or put into a simple, implementable rule: financial stability, conditions in the labor market, credit market conditions, and so on. The more policymakers care about such things, the less likely they are to adopt a policy rule. All of these reasons explain why no central bank has chosen to implement policy using a policy rule. It is important to note, however, that this does not imply that rules such as the Taylor rule are useless. Indeed, some policymakers use specific forms of a Taylor-style rule to guide their policy recommendations. However, it is doubtful that their policy recommendations are based solely on such rules. Of course, other policymakers use other, non-rule-based frameworks to make their policy recommendations. Policy rules are not the only byproduct of time inconsistency. Time inconsistency also led to the related idea of monetary policy transparency. The idea behind monetary policy transparency and policy-objective targeting is that policymakers will be less likely to renege if they are open and transparent about the objective of policy. They will be even less likely to renege if they formally announce a target for that objective (e.g., IT, NIT).28 Finally, the conclusion that optimal policy outcomes can be achieved only by adopting a monetary policy rule ignores the real-world fact that policymakers can modify or change the rule whenever they like. Hence, the suggestion that adopting a policy rule would make monetary policy more predictable and thus protect central bank independence is questionable.
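A small Monte Carlo exercise makes concrete how the uncertainty summarized by ζ_t in equation (7) translates into uncertainty about where to set the rate (all parameter ranges below are hypothetical assumptions, not estimates):

```python
import random

def taylor_rate(r, pi, pi_target, lam, beta, gap):
    """Policy rate from the Taylor rule of equation (7), with the
    uncertainty pushed into the randomly drawn inputs rather than
    an explicit zeta term."""
    return r + pi_target + lam * (pi - pi_target) + beta * gap

random.seed(0)

pi, pi_target = 0.03, 0.02  # observed inflation, 2% target
measured_gap = 0.01         # measured output gap, y - y^p

# Hypothetical ranges for the quantities policymakers cannot know
# precisely: the equilibrium real rate r, the response coefficients
# lam and beta, and the true output gap (potential is estimated).
rates = []
for _ in range(10_000):
    r = random.uniform(0.005, 0.025)
    lam = random.uniform(1.0, 2.0)
    beta = random.uniform(0.25, 1.0)
    gap = measured_gap + random.uniform(-0.01, 0.01)
    rates.append(taylor_rate(r, pi, pi_target, lam, beta, gap))

rates.sort()
lo, hi = rates[len(rates) // 20], rates[-len(rates) // 20]
print(f"Roughly 90% of implied policy rates lie between "
      f"{lo:.2%} and {hi:.2%}")
```

Even with full agreement on the rule's functional form, the implied setting of the rate spans more than a percentage point under these assumed ranges, which is the practical content of the ζ_t term.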
I don’t believe, for example, that using a specific interest-rate rule would have prevented the Fed from reducing its funds rate target to zero and keeping it there for six and a half years into the economic expansion. Nor do I believe it would have prevented the Fed and other central banks from adopting QE or FG.
I believe that a better way to prevent the Fed and other central banks from taking such extreme actions is to adopt what I call economic reality-based monetary policy (ERMP). ERMP would require the central bank to specify a set of fundamental economic realities and commit to conducting monetary policy within the limits implied by these realities. To see how ERMP works, assume the central bank is committed to conducting monetary policy in accordance with the following economic realities:

Reality 1: Credit is most efficiently and effectively allocated by the market and, hence, by economic fundamentals.

Reality 2: Interest rates determine the allocation of credit. Hence, interest rates are best determined by the market.

Actions that the FOMC takes to affect interest rates necessarily distort interest rates and the allocation of credit and economic resources. Indeed, the purpose of the central bank’s interest-rate policy is to distort interest rates and the allocation of credit; the problem arises when central banks pursue that policy too aggressively or for too long. Together, these realities imply that policy actions taken to affect interest rates should be limited in both magnitude and duration. For example, if the FOMC had agreed to conduct monetary policy consistent with these realities, it might not have reduced its funds rate target to 1 percent in June 2003. Even if it had, it would have been reluctant to keep it there for a year. Moreover, the FOMC would have been more reluctant to engage in QE for the express purpose of allocating credit to specific markets. It almost certainly wouldn’t have kept its funds rate target near zero for six and a half years after the recession ended. Nor would it have engaged in its FG policy (i.e., committing to keep interest rates low for an extended period for the purpose of reducing long-term yields).
ERMP wouldn’t prevent central banks from temporarily engaging in aggressive credit allocation in times of crisis; however, they would have to provide a strong case that financial markets are significantly impaired. And it almost certainly would prevent the Fed and other central banks from engaging in such actions years after markets had stabilized. I believe that ERMP would preserve central bank independence while simultaneously enhancing central bank accountability. Because policymakers’ actions are constrained by economics, there is less need for direct governmental oversight. If the central bank took actions that appeared to be inconsistent with the stated economic realities, it would have to explain those actions. Oversight and accountability are achieved without creating a governmental bureaucracy. ERMP also has the advantage that it neither requires nor restricts how the central bank implements monetary policy. Policy could be implemented with a specific policy rule, or policymakers could continue to rely on meeting-to-meeting discretion. Policymakers could target inflation, the price level, or nominal GDP. ERMP constrains how aggressively policy is pursued; it does not constrain how policy is conducted.
I believe the list of economic realities should include the two I mentioned. However, it’s likely that agreement could be reached on a somewhat longer list. I believe my proposal would produce better monetary policy, and better monetary policy outcomes, because it constrains policymakers’ actions to be consistent with economic realities. ERMP would make monetary policy more predictable, make central banks more accountable, and protect central bank independence.
7.6 Conclusion

The purpose of this chapter has been to review, from a critical, real-world perspective, alternative strategies that have been discussed or used in the conduct of monetary policy. Most of these strategies have been developed and/or discussed using highly stylized and abstract models that make strong assumptions about the behavior and knowledge of policymakers and the economic agents in the models. While it is useful to consider alternative strategies in such models, doing so has limited usefulness for real-world policymaking. I hope that the analysis provided here gives the reader a broader basis for considering the relative effectiveness of alternative strategies for conducting monetary policy. Viewed from this real-world perspective, many of these strategies, such as money supply targeting, interest-rate targeting, NIT, price-level targeting, and policy rules, seem much less feasible. This may explain why no central bank has adopted these strategies or used a specific rule to implement policy. However, FG and QE also seem unlikely to produce the desired outcomes. Nevertheless, several central banks have chosen to implement these strategies and continue to pursue them despite the fact that experience thus far suggests they have not been very successful. Finally, I have presented a new approach to monetary policy that I believe will make monetary policy more effective and more predictable. It will also make central banks more accountable while protecting their independence.
Notes

1. It is important to note that AS and AD are concepts. They don’t exist. That is, they are not measurable. To understand why, see Thornton 2015a.
2. A recent survey (Sharpe and Suarez 2014) confirmed previous survey findings.
3. See Thornton 2010b for a more complete discussion.
4. See Stone and Thornton 1987.
5. See Thornton 2010b, 96–97.
6. For a discussion of the theoretical view of money, see Thornton 2000.
7. For example, see Friedman 1999.
8. See Thornton 2003 for more details.
9. For a more complete description of central banks’ use of FG and evidence of their effectiveness, see Kool and Thornton 2015 and references cited therein.
10. Mankiw and Miron (1986) found that the EH worked better prior to the founding of the Fed and suggested that the failure of the EH occurred because the Fed “smoothed interest rates”; McCallum (2005) advanced a similar hypothesis without providing evidence. However, Kool and Thornton (2004) have shown that Mankiw and Miron’s finding is due to the fact that the test they employ tends to generate results that are more favorable to the EH when the short-term rate is more variable than the long-term rate. Specifically, Kool and Thornton showed that the finding that the EH works better before the Fed’s founding is due to three extreme observations of the short-term rate during the financial panic of 1907. When these observations are accounted for, the EH performs no better before the Fed’s founding than after.
11. See Guidolin and Thornton 2018 and the references cited therein.
12. FG is also categorized as Odyssean or Delphic, depending on whether the FG statement is a public commitment to take an action or merely a forecast of an economic event that is likely to prompt a particular action. For example, see Campbell et al. 2012.
13. See Williams 2011 and the references cited therein.
14. See Thornton 2013 for the details.
15. See Thornton 2015c.
16. Also see Taylor 2001; Friedman 1999; Thornton 2014b and 2018.
17. See Thornton 2010a and 2015b.
18. See Khan 2009 for a nice survey of the PLT literature. See Cobham et al. 2010 for a survey of the effectiveness of IT.
19. See Sumner 2014; Woodford 2012.
20. For example, see Sumner 2014.
21. For a discussion of how the CBO estimates potential output, see Thornton 2016.
22. See Martin, Munyan, and Wilson 2015.
23. See Thornton 2015b.
24.
See Thornton 2017 for a discussion of why the Fed’s low interest-rate policy caused long-term rates to be lower than they would have been otherwise.
25. See Thornton 2003 for a discussion of monetary transparency and its limitations.
26. See Howitt 1990. However, Thornton 1996 shows that if the welfare of future generations is considered, it is difficult to believe that Howitt’s conclusion is valid, even if the welfare gains of future generations are discounted at a very low rate. The argument is weakened still further if inflation produces even a tiny reduction in the long-run rate of output growth. See also the Boskin Commission report 1996 for a discussion and estimates of the positive measurement bias in inflation measures.
27. See Marty and Thornton 1995 for a critique of five arguments for positive inflation.
28. For a critical discussion of monetary policy transparency, see Thornton 2003.
References

Bauer, Michael D., and Glenn D. Rudebusch. 2014. “The Signaling Channel for Federal Reserve Bond Purchases.” International Journal of Central Banking 11, no. 2: 233–289.
Baumol, William J. 1952. “The Transactions Demand for Cash: An Inventory Theoretic Approach.” Quarterly Journal of Economics 66, no. 4: 545–556.
Bernanke, Ben S. 2009. “The Crisis and the Policy Response.” Stamp Lecture, London School of Economics, January 13.
Bernanke, Ben S. 2013. “Long-Term Interest Rates.” Speech. Annual Monetary/Macroeconomics Conference: “The Past and Future of Monetary Policy.” Federal Reserve Bank of San Francisco, March 1.
Bernanke, Ben S. 2014. “A Discussion with Federal Reserve Chairman Ben Bernanke on the Fed’s 100th Anniversary.” Brookings Institution, Washington, D.C., January 16. https://www.c-span.org/video/?317242-3/federal-reserve-chair-ben-bernanke-monetary-policy
Boldrin, Michele. 2016. “Comment on ‘A Wedge in the Dual Mandate’: Monetary Policy and Long-Term Unemployment.” Journal of Macroeconomics 47, part A: 26–32.
Boskin Commission. 1996. “Toward a More Accurate Measure of the Cost of Living.” Final Report to the Senate Finance Committee, December 4.
Campbell, J. R., C. L. Evans, J. D. M. Fisher, and A. Justiniano. 2012. “Macroeconomic Effects of FOMC Forward Guidance.” Brookings Papers on Economic Activity (Spring): 1–80.
Campbell, John, and Robert Shiller. 1991. “Yield Spreads and Interest Rate Movements: A Bird’s Eye View.” Review of Economic Studies 58: 495–514.
Christiano, Lawrence J., and Martin Eichenbaum. 1992. “Identification and the Liquidity Effect of a Monetary Policy Shock.” In Political Economy, Growth, and Business Cycles, edited by A. Cukierman, Z. Hercowitz, and L. Leiderman, 335–370. Cambridge: MIT Press.
Christiano, Lawrence J., Martin Eichenbaum, and Charles L. Evans. 1996a. “The Effects of Monetary Policy Shocks: Evidence from the Flow of Funds.” Review of Economics and Statistics 78, no. 1: 16–34.
Christiano, Lawrence J., Martin Eichenbaum, and Charles L. Evans. 1996b.
“Identification and the Effects of Monetary Policy Shocks.” In Financial Factors in Economic Stabilization and Growth, edited by M. Blejer, Z. Eckstein, Z. Hercowitz, and L. Leiderman, L., 36–79. Cambridge: Cambridge University Press. Cobham, David, Øyvind Eitrheim, Stefan Gerlach, and Jan Qvigstad. 2010. Twenty Years of Inflation Targeting: Lessons Learned and Future Prospects. Cambridge: Cambridge University Press. Della Corte, Pasquale, Lucio Sarno, and Daniel L. Thornton. 2008. “The Expectations Hypothesis of the Term Structure of Very Short-Term Rates: Statistical Tests and Economic Value.” Journal of Financial Economics 89: 158–174. Friedman, Benjamin M. 1999. “The Future of Monetary Policy: The Central Bank As an Army with Only a Signal Corps?” International Finance 2, no. 3: 321–338. Friedman, Milton. A Program for Monetary Stability. New York: Fordham University Press. Friedman, Milton. 1968. “The Role of Monetary Policy.” American Economic Review 68, no. 1: 1–17. Friedman, Milton. 1969. “The Optimum Quantity of Money.” In The Optimum Quantity of Money and Other Essays, 1–50. Chicago: Aldine Publishing Company. Gagnon, Joseph, Matthew Raskin, Julie Remache, and Brian Sack. 2011. “The Financial Market Effects of the Federal Reserve’s Large-Scale Asset Purchases.” International Journal of Central Banking, 7, no. 1: 3–43. Guidolin, Massimo, and Daniel L. Thornton. 2018. “Predictions of Short-Term Rates and the Expectations Hypothesis.” International Journal of Forecasting 34, no. 4: 636–664. Gurley, John G., and Edward S. Shaw. 1960. Money in a Theory of Finance. Washington, D.C.: Brookings Institution.
Strategies for Conducting Monetary Policy 225
Guthrie, Graeme, and Julian Wright. 2000. "Open Mouth Operations." Journal of Monetary Economics 46, no. 2: 489–516.
Hamilton, James D. 1997. "Measuring the Liquidity Effect." American Economic Review 87, no. 1: 80–97.
Howitt, Peter. 1990. "Zero Inflation As a Long-Term Target for Monetary Policy." In Zero Inflation: The Goal of Price Stability, edited by R. G. Lipsey, 66–108. Toronto: C. D. Howe Institute.
Kahn, George A. 2009. "Beyond Inflation Targeting: Should Central Banks Target the Price Level?" Federal Reserve Bank of Kansas City Economic Review (Third Quarter): 35–64.
Kool, Clemens J. M., and Daniel L. Thornton. 2004. "A Note on the Expectations Hypothesis at the Founding of the Fed." Journal of Banking and Finance 28: 3055–3068.
Kool, Clemens J. M., and Daniel L. Thornton. 2015. "How Effective Is Central Bank Forward Guidance?" Federal Reserve Bank of St. Louis Review 97, no. 4: 303–322.
Kydland, Finn E., and Edward C. Prescott. 1977. "Rules Rather Than Discretion: The Inconsistency of Optimal Plans." Journal of Political Economy 85, no. 3: 473–492.
Mankiw, N. Gregory, and Jeffrey A. Miron. 1986. "The Changing Behavior of the Term Structure of Interest Rates." Quarterly Journal of Economics 101: 211–228.
Martin, R., T. Munyan, and B. A. Wilson. 2015. "Potential Output and Recessions: Are We Fooling Ourselves?" Federal Reserve Board International Finance Discussion Paper 1145.
Marty, Alvin L., and Daniel L. Thornton. 1995. "Is There a Case for Moderate Inflation?" Federal Reserve Bank of St. Louis Review 77, no. 4: 27–37.
McCallum, Bennett T. 1981. "Price Level Determinacy with an Interest Rate Policy Rule and Rational Expectations." Journal of Monetary Economics 8, no. 3: 319–329.
McCallum, Bennett T. 2005. "Monetary Policy and the Term Structure of Interest Rates." Federal Reserve Bank of Richmond Economic Quarterly 91, no. 4: 1–21.
Meulendyke, Ann-Marie. 1998. U.S. Monetary Policy & Financial Markets. New York: Federal Reserve Bank of New York.
Pagan, Adrian R., and John C. Robertson. 1995. "Resolving the Liquidity Effect." Federal Reserve Bank of St. Louis Review 77, no. 3: 33–54.
Phelps, Edmund S. 1967. "Phillips Curves, Expectations of Inflation and Optimal Unemployment over Time." Economica 34, no. 135: 254–281.
Phillips, A. W. 1958. "The Relation Between Unemployment and the Rate of Change of Money Wage Rates in the United Kingdom, 1861–1957." Economica 25, no. 100: 283–299.
Sargent, Thomas J., and Neal Wallace. 1975. "Rational Expectations, the Optimal Monetary Instrument, and the Optimal Money Supply Rule." Journal of Political Economy 83: 241–254.
Sarno, Lucio, Daniel L. Thornton, and Giorgio Valente. 2007. "The Empirical Failure of the Expectations Hypothesis of the Term Structure of Bond Yields." Journal of Financial and Quantitative Analysis 42: 81–100.
Sharpe, S. A., and G. A. Suarez. 2014. "The Insensitivity of Investment to Interest Rates: Evidence from a Survey of CFOs." Federal Reserve Board Finance and Economics Discussion Series 2014-02.
Simons, Henry C. 1936. "Rules versus Authorities in Monetary Policy." Journal of Political Economy 44, no. 1: 1–30.
Stone, Courtenay C., and Daniel L. Thornton. 1987. "Solving the 1980s' Velocity Puzzle: A Progress Report." Federal Reserve Bank of St. Louis Review (August/September): 5–23.
Strongin, Stephen. 1995. "The Identification of Monetary Policy Disturbances: Explaining the Liquidity Puzzle." Journal of Monetary Economics 35, no. 3: 463–497.
Sumner, Scott B. 2014. "Nominal GDP Targeting: A Simple Rule to Improve Fed Performance." Cato Journal 34, no. 2: 315–337.
Taylor, John B. 1993. "Discretion versus Policy Rules in Practice." Carnegie-Rochester Conference Series on Public Policy 39: 195–214.
Taylor, John B. 2001. "Expectations, Open Market Operations, and Changes in the Federal Funds Rate." Federal Reserve Bank of St. Louis Review 83, no. 4: 33–48.
Thornton, Daniel L. 1996. "The Costs and Benefits of Price Stability: An Assessment of Howitt's Rule." Federal Reserve Bank of St. Louis Review 78, no. 2: 23–38.
Thornton, Daniel L. 2000. "Money in a Theory of Exchange." Federal Reserve Bank of St. Louis Review 82, no. 1: 35–62.
Thornton, Daniel L. 2001a. "The Federal Reserve's Operating Procedure, Nonborrowed Reserves, Borrowed Reserves and the Liquidity Effect." Journal of Banking and Finance 25, no. 9: 1717–1739.
Thornton, Daniel L. 2001b. "Identifying the Liquidity Effect at the Daily Frequency." Federal Reserve Bank of St. Louis Review 83, no. 4: 59–78.
Thornton, Daniel L. 2003. "Monetary Policy Transparency: Transparent about What?" Manchester School 71, no. 5: 478–497.
Thornton, Daniel L. 2005. "Test of the Expectations Hypothesis: Resolving the Anomalies When the Short-Term Rate Is the Federal Funds Rate." Journal of Banking and Finance 29, no. 10: 2541–2556.
Thornton, Daniel L. 2007. "Open Market Operations and the Federal Funds Rate." Federal Reserve Bank of St. Louis Review 89, no. 6: 549–570.
Thornton, Daniel L. 2010a. "Can the FOMC Increase the Funds Rate without Reducing Reserves?" Federal Reserve Bank of St. Louis Economic Synopses 28.
Thornton, Daniel L. 2010b. "How Did We Get to Inflation Targeting and Where Do We Go Now? A Perspective from the U.S. Experience." In Twenty Years of Inflation Targeting: Lessons Learned and Future Prospects, edited by David Cobham, Øyvind Eitrheim, Stefan Gerlach, and Jan Qvigstad, 90–110. Cambridge: Cambridge University Press.
Thornton, Daniel L. 2010c. "Monetary Policy and Longer-Term Rates: An Opportunity for Greater Transparency." Federal Reserve Bank of St. Louis Economic Synopses 36.
Thornton, Daniel L. 2013. "Is the FOMC's Unemployment Rate Threshold a Good Idea?" Federal Reserve Bank of St. Louis Economic Synopses 1.
Thornton, Daniel L. 2014a. "The Federal Reserve's Response to the Financial Crisis: What It Did and What It Should Have Done." In Developments in Macro-Finance Yield Curve Modelling, edited by Jagjit S. Chadha, Alain C. J. Durre, Michael A. S. Joyce, and Lucio Sarno, 90–121. Cambridge: Cambridge University Press.
Thornton, Daniel L. 2014b. "Monetary Policy: Why Money Matters, and Interest Rates Don't." Journal of Macroeconomics 40: 202–213.
Thornton, Daniel L. 2015a. "Does Aggregate Supply Exist? Should We Care?" D. L. Thornton Economics, Common Sense Economics 1, no. 2.
Thornton, Daniel L. 2015b. "Requiem for QE." Policy Analysis 783.
Thornton, Daniel L. 2015c. "What's the Connection between the 'Rain Man' and LIBOR Rates, and the Fed's Ability to Control Interest Rates?" D. L. Thornton Economics, Common Sense Economics Perspective 9.
Thornton, Daniel L. 2016. "Why the Fed's Zero Interest Rate Policy Failed." D. L. Thornton Economics, Common Sense Economics Perspective 4.
Thornton, Daniel L. 2017. "Federal Reserve Mischief and the Credit Trap." Cato Journal 37, no. 2: 263–285.
Thornton, Daniel L. 2018. "Greenspan's Conundrum and the Fed's Ability to Affect Long-Term Yields." Journal of Money, Credit and Banking 50, no. 2–3: 513–543.
Tobin, James. 1956. "The Interest Elasticity of the Transactions Demand for Cash." Review of Economics and Statistics 38, no. 3: 241–247.
Williams, John C. 2011. "Unconventional Monetary Policy: Lessons from the Past Three Years." Federal Reserve Bank of San Francisco Economic Letter, October 3.
Woodford, M. 1999. "Optimal Monetary Policy Inertia." Manchester School 67, supp.: 1–35.
Woodford, M. 2012. "Methods of Policy Accommodation at the Interest-Rate Lower Bound." In The Changing Policy Landscape, Jackson Hole Symposium, 185–228. Kansas City, Mo.: Federal Reserve Bank of Kansas City.
Part III
CENTRAL BANK COMMUNICATION AND EXPECTATIONS MANAGEMENT
Chapter 8
Central Bank Communication: How to Manage Expectations?

Jakob de Haan and Jan-Egbert Sturm
Note: The views expressed in this chapter do not necessarily reflect the position of De Nederlandsche Bank. We would like to thank Regina Pleninger for excellent research assistance and an anonymous referee and David Mayes for their feedback on a previous version of the chapter.

8.1 Introduction

In a remarkable 180-degree turn from their long-standing tradition of secrecy, many central banks nowadays consider communication an important policy instrument.1 Blinder et al. (2008) define central bank communication as the provision of information by the central bank to the general public on the objectives of monetary policy, the monetary policy strategy, the economic outlook, and the (outlook for future) policy decisions.

Communication can make monetary policy more effective. Even though central banks have control only over short-term interest rates, they can use communication to influence expectations about future short-term interest rates, thereby affecting long-term interest rates. Long-term interest rates, reflecting expected future short-term interest rates, affect saving and investment decisions by households and firms. The public's perception of future policy rates is therefore critical for the effectiveness of monetary policy (Blinder et al. 2008).

In addition, central bank communication may affect inflation expectations. Inflation expectations are important because they feed into actual inflation: very simply, if economic agents expect an inflation rate of, say, 2 percent and behave accordingly, actual inflation will move toward this rate. Anchoring inflation expectations is also important because it prevents a fall in nominal short-term interest rates from being associated with a medium-term weakening of the economic situation and thus with a decline in inflation expectations. For these reasons, central banks aim to "anchor" inflation expectations. This means that
inflation expectations are in line with the central bank's inflation objective. With expectations firmly anchored, the consequences of a shock are mitigated, and the economy returns to its long-run path faster than would otherwise be the case. Moreover, when the short-term interest rate approaches its effective lower bound (ELB),2 leaving less and less room for further policy rate reductions, the development of inflation expectations becomes ever more crucial in determining the real interest rate. The current emphasis on transparency is based on the insight that monetary policy is to a very large extent "management of expectations" (Svensson 2006).

Although the increase in central bank transparency had already set in earlier, central bank communication has changed profoundly since the Great Financial Crisis and the ensuing Great Recession, during which monetary policy rates hit the ELB. This survey will discuss these changes in some detail, addressing the following issues.3

Forward guidance regarding policy rates. Modern central banks frequently use communication about their future policy rates to influence expectations (Svensson 2015). Forward guidance (FG) has been argued to make monetary policy effective even at the ELB (Eggertsson and Woodford 2003). In practice, central banks apply different forms of FG. Issues that we will address include: Why do different central banks use different types of FG? What are the pitfalls of FG? How much impact does FG have on financial markets?

Communication about asset purchase programs. Asset purchases financed by central bank money may affect asset prices through several channels. One is the portfolio balance channel, which reflects the direct impact on prices when investors rebalance their portfolios. Another one—relevant from the perspective of this chapter—is the signaling channel, which refers to what the public learns from announcements about such operations.
Several studies show that communication about unconventional monetary policies has indeed strongly affected financial markets. This literature will be surveyed.

Managing inflation expectations. Whereas most of the recent literature on central bank communication focuses on managing financial-market expectations, some recent studies have examined to what extent central bank communication can affect inflation expectations. Especially when central banks try to ward off deflationary pressures, this is what matters in the end. We survey this line of literature. One important element of communication is the central bank's announced inflation target. Issues that will be addressed include: Does communicating about the inflation target help anchor inflation expectations? Will communicating a higher inflation target increase inflation expectations without endangering the central bank's credibility?

Although this chapter is about influencing expectations, we will not deal with the formation of expectations. Still, it may be useful to point out the discrepancy between the way expectations are treated in macroeconomic models on the one hand and in economic psychology research and most of the literature discussed in this chapter on the other. Nowadays, most macroeconomic models are based on the presumption of rational expectations, although several alternative approaches have been suggested, such as information delay (due to sticky information), rational inattention, rationally heterogeneous expectations (all discussed in Maag 2010), and econometric learning (Evans and Honkapohja 2009).4
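The connection between learning and the anchoring of expectations can be illustrated with a small sketch of constant-gain adaptive learning, one simple form of the econometric learning just mentioned. All numbers below (initial expectation, target, gain) are hypothetical and chosen purely for illustration.

```python
# Stylized constant-gain learning: agents revise their inflation expectation
# toward observed inflation by a fixed fraction (the "gain") each period.
# If the central bank credibly keeps inflation at its target, learned
# expectations converge toward ("anchor" at) that target.

def update_expectation(expected, observed, gain=0.2):
    """One step of constant-gain adaptive learning (all rates in percent)."""
    return expected + gain * (observed - expected)

expected = 0.0   # agents start out expecting 0% inflation (hypothetical)
target = 2.0     # hypothetical 2% inflation target

for _ in range(30):  # inflation is held at the target for 30 periods
    expected = update_expectation(expected, target)

print(f"learned expectation after 30 periods: {expected:.3f}%")  # close to 2%
```

With a gain of 0.2, the gap to the target shrinks by 20 percent per period, so expectations approach 2 percent geometrically; a larger gain speeds up anchoring but also makes expectations more sensitive to transitory inflation surprises.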
An important line of research in economic psychology addresses the question of how agents form their expectations. Ranyard et al. (2008) provide an excellent survey of this literature as far as it deals with inflation expectations. They conclude that "while consumers may have a limited ability to store and recall specific prices, and even succumb to a number of biases in the way in which they form perceptions and expectations of global price changes, they do seem to have some feel for, and ability to judge and forecast, inflation. How they achieve this, however, is still an open question" (378).

The literature discussed in this chapter does not deal with how expectations are formed but takes them for granted. Studies either use data on expectations, derived from surveys (among professional forecasters, households, or firms) or from financial-market prices (see de Haan et al. 2016 for a discussion). Alternatively, they start from the presumption that financial-market prices reflect all available information (see Lim and Brooks 2011 for a survey) and that these prices will change if the central bank communicates new information.
8.2 Forward Guidance Regarding Policy Rates

Nowadays, most central banks in advanced economies communicate about their future policy rates.5 However, the underlying rationale for this policy differs across institutions. A number of inflation-targeting central banks had started communicating about their future policy rates even before the financial crisis—for example, the Reserve Bank of New Zealand (RBNZ) started publishing expected paths of future policy rates in the late 1990s.6 More recently, many others have followed. After they got close to the ELB, the Federal Reserve, the European Central Bank (ECB), and the Bank of England (BoE) started providing FG about future policy rates in various forms.7

FG has been argued to make monetary policy effective even at the ELB. By committing to future levels of the policy rate, the central bank can circumvent the ELB by promising monetary accommodation once the ELB is no longer a constraint (Eggertsson and Woodford 2003; Gürkaynak et al. 2005). Drawing on Moessner et al. (2017), the intuition can be explained as follows. Even if short-term policy rates are at the ELB, the central bank can still influence macroeconomic outcomes by steering expectations of the future policy rate. The central bank can reduce long-term interest rates by promising to keep the policy rate "lower for longer," that is, to keep future policy rates below the levels consistent with its normal reaction function once policy rates are no longer constrained by the ELB. If the monetary policy authority can do so credibly, nominal long-term rates (reflecting expected future short-term rates) will fall already in the current period. In addition, the (implicit) promise to surpass the inflation target in the future can lead to an increase in inflation expectations, leading to
a reduction in today's real interest rates. Through these long-term rates and inflation expectations, the central bank can provide a monetary stimulus even when the policy rate is constrained by the ELB. However, this policy is time-inconsistent: since the costs of higher inflation arise only later, the central bank has an incentive to renege on its promise in the future. The effectiveness of this policy therefore depends on the central bank's ability to commit.8

Campbell et al. (2012) classify communication about future policy rates into "Delphic" and "Odyssean" communication. Delphic communication means forecasting macroeconomic performance and likely monetary policy actions; the central bank acts in a similar way to the Delphic oracle. The central bank's forecasts could affect private-sector expectations if the central bank is perceived to have superior forecasting ability or better knowledge about its own policy intentions (Blinder et al. 2008). Odyssean communication means that the central bank commits itself to future monetary policy actions. Like Odysseus, the central bank ties itself to the mast in order to withstand the call of the sirens. Moessner et al.
(2017) show that central banks that provide FG—be it as part of their inflation targeting (IT) strategy or in order to circumvent the ELB—generally do not commit.9 So there is a huge discrepancy between the theoretical literature, which focuses mostly on Odyssean FG, and the central bank practice of Delphic FG.10

Broadly speaking, central banks utilize three forms of FG (Filardo and Hofmann 2014): qualitative (or open-ended) guidance, wherein the central bank provides information neither about the envisaged time frame nor about the conditions under which policy rates will be changed; calendar-based (or time-dependent) guidance, wherein the central bank offers a specified time horizon; and data-based (or state-contingent) guidance, wherein the central bank links future rates to incoming data.

Central banks may switch from one type of FG to another. For instance, in December 2008, the Federal Open Market Committee (FOMC) of the Federal Reserve announced that the federal funds rate target was likely to remain unchanged "for some time." This is an example of open-ended FG. Subsequently, this language was strengthened to refer to "an extended period" in March 2009. Then, in August 2011, the FOMC switched to time-dependent FG, communicating that it expected the federal funds rate to remain exceptionally low until at least mid-2013, a horizon extended to late 2014 in January 2012 and to mid-2015 in September 2012. In December 2012, the Fed switched to state-contingent FG, stating that the exceptionally low level of the federal funds rate would "be appropriate at least as long as the unemployment rate remains above 6-1/2%, inflation between one and two years ahead is projected to be no more than a half percentage point above the Committee's 2% longer-run goal, and longer-term inflation expectations continue to be well anchored." In December 2013, the FOMC added that it "now anticipates . . .
that it likely will be appropriate to maintain the current target range for the federal funds rate well past the time that the unemployment rate declines below 6-1/2%, especially if projected inflation continues to run below the Committee’s 2% longer-run goal.” In March 2014, communication became more open-ended again; the FOMC stated that it “continues to anticipate . . . that it likely will be appropriate to maintain the current target range for the federal funds rate for a considerable time after the asset purchase
program ends, especially if projected inflation continues to run below the Committee's 2% longer-run goal, and provided that longer-term inflation expectations remain well anchored. . . . The Committee currently anticipates that, even after employment and inflation are near mandate-consistent levels, economic conditions may, for some time, warrant keeping the target federal funds rate below levels the Committee views as normal in the longer run." In January 2015, it was communicated that "the Committee judges that it can be patient in beginning to normalize the stance of monetary policy." In March 2015, communication became slightly more specific: "the Committee judges that an increase in the target range for the federal funds rate remains unlikely at the April FOMC meeting. The Committee anticipates that it will be appropriate to raise the target range for the federal funds rate when it has seen further improvement in the labor market and is reasonably confident that inflation will move back to its 2% objective over the medium term."

As pointed out by Moessner et al. (2017), in choosing a particular type of FG, central banks face a trade-off between informing the private sector and avoiding the impression that they are committing. On the one hand, changes in economic conditions may make it necessary for the central bank to deviate from previously announced paths. Such deviations may surprise private-sector participants and damage central bank credibility, suggesting that more cautious statements may be preferable. On the other hand, qualitative FG may have less impact than more specific types of guidance.
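The "lower for longer" mechanism sketched at the start of this section can be made concrete with a back-of-the-envelope calculation. Under the expectations hypothesis, an n-period yield is approximately the average of expected one-period policy rates over the next n periods, so a credible promise to delay liftoff lowers long-term rates today. The rate paths below are entirely hypothetical and serve only to illustrate the arithmetic.

```python
# Stylized expectations-hypothesis calculation (illustrative numbers only):
# the long rate is approximated by the average of expected short rates.

def long_rate(expected_short_rates):
    """n-period yield (%) approximated as the average of expected one-period rates."""
    return sum(expected_short_rates) / len(expected_short_rates)

# Hypothetical 8-quarter policy rate paths (percent per year).
# Baseline: liftoff from the ELB after two quarters.
baseline = [0.0, 0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
# Forward guidance: a credible promise shifting liftoff back by two quarters.
guidance = [0.0, 0.0, 0.0, 0.0, 0.5, 1.0, 1.5, 2.0]

print(f"two-year yield, baseline: {long_rate(baseline):.4f}%")  # 1.3125%
print(f"two-year yield, guidance: {long_rate(guidance):.4f}%")  # 0.6250%
```

Even though the current policy rate is identical in both paths, the promise of later liftoff lowers the two-year yield already today; this is the channel through which Odyssean FG is meant to work at the ELB.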
One advantage of state-contingent FG over open-ended and time-dependent FG is that it allows the public to distinguish whether changes in FG are due to changes in expectations about the economic outlook or to changes in monetary policy preferences; the other two forms do not allow such a distinction (Shirai 2013). Having said that, state-contingent FG is not without risks, as the recent experience of the BoE illustrates. In August 2013, the BoE's Monetary Policy Committee (MPC) announced that it "agreed its intention not to raise Bank Rate from its current level of 0.5% at least until . . . the unemployment rate has fallen to a 'threshold' of 7%." However, unemployment fell much faster than anticipated and dropped below this threshold very quickly. In February 2014, the MPC communicated that "Despite the sharp fall in unemployment, there remains scope to absorb spare capacity further before raising Bank Rate." David Miles, then a member of the BoE's MPC, describes the first announcement as follows: "It was absolutely not an unconditional commitment. What we said was that we would not raise interest rates at least until unemployment fell to 7%, provided the inflation outlook remained benign. . . . And neither did we commit to raise interest rates when unemployment fell to 7% (which it subsequently did fairly quickly)" (Miles 2014, 3).

This is not how the media assessed the BoE's FG. For instance, Sky News wrote:

Mr Carney's problem is that every time he makes a big forecast he seems to get it wrong. When he came into office, the Governor brought with him a whizzy new framework for setting UK interest rates. Under "forward guidance," he would provide clarity about borrowing costs. He promised [our emphasis], based on the Bank's forecasts, that he and the Monetary Policy Committee would start to consider
lifting them only when unemployment dropped beneath 7%. Suddenly, within a few months, the jobless rate, hitherto stuck stubbornly above that level, started to come down. Within a year, forward guidance had to be dropped, replaced with a far more vague set of promises some nicknamed "fuzzy guidance."11
Also, some well-informed observers criticize the BoE's FG. For instance, Andrew Sentance, a former member of the BoE's MPC, concludes: "The concept of forward guidance has not delivered. It seems to have been used to support the view that interest rates will not rise, rather than preparing the public and business for inevitable hikes."12 Similarly, the Federal Reserve initially put a lot of emphasis on the unemployment rate in its FG. While the FOMC had said it would not even consider raising rates until unemployment fell to 6.5 percent, markets (mistakenly) viewed 6.5 percent as a trigger for rate hikes. When unemployment dropped below 6.5 percent due to an unexpectedly large decrease in the labor force participation rate, the Fed judged that liftoff was not yet advisable (Blinder et al. 2017). Our take is that these examples show how difficult it is to describe the state-contingency conditions and what the risks are if the forecasts of the central bank, in this case about unemployment, turn out to be wrong. It also becomes clear that despite the efforts of the BoE and the Fed to explain that their FG should not be interpreted as a commitment, this message was not always understood.

According to Woodford, "central bank statements about future policy can, at least under certain circumstances, affect financial markets—and more specifically, that they can affect markets in ways that reflect a shift in beliefs about the future path of interest rates toward the one announced by the central bank" (2012, 216). He concludes that the Federal Reserve's use of FG has had a substantial effect on interest-rate expectations. Gürkaynak et al. (2005) find that 75 to 90 percent of the explainable variation in five- and ten-year Treasury yields in response to monetary policy announcements by the Fed is due to the path factor (which is associated with Fed statements) rather than to changes in the federal funds rate target. Moessner et al.
(2017) summarize empirical research on the impact of FG on financial-market prices in detail. Appendix table 8.A.1, which is taken from this study and extended, summarizes several studies. The evidence presented in this table is broadly in line with Woodford's (2012) assessment.

An issue that has received scant attention in the literature is whether different types of FG have been equally effective. There is some evidence for the Fed suggesting that its different forms of FG affected markets differently. Moessner (2015b) presents evidence from event-study regressions that open-ended and time-dependent FG led to a significant reduction in forward US Treasury yields at a wide range of horizons, with the largest reduction occurring at the five-year-ahead horizon; by contrast, state-contingent FG led to a significant increase in forward US Treasury yields for horizons of three to seven years ahead. These results are broadly consistent with the findings of Femia et al. (2013), Raskin (2013), and the International Monetary Fund (IMF 2014). Using primary dealer survey data, Femia et al. (2013) report that time-dependent FG coincided with a perceived shift to a more accommodative monetary stance. Raskin (2013) shows that interest-rate expectations became significantly less sensitive to macroeconomic
surprises after the introduction of time-dependent FG. The IMF (2014) reports that the initial introduction of open-ended FG in December 2008 and of time-dependent FG in August 2011 had sizable negative impacts on policy rate expectations, but the shift to state-contingent FG in December 2012 had virtually no impact on policy rate expectations. The shift back to open-ended FG in March 2014 coincided with an increase in policy rate expectations, although this shift may also reflect an upward revision in FOMC members' projections for policy interest rates that were released at the same time.13

Another open issue is how FG interacts with large-scale asset purchases, which are discussed in more detail in the next section. Only a few studies have addressed this issue so far. Cúrdia and Ferrero (2013) suggest that forward policy rate guidance is essential for asset purchase programs (or quantitative easing, QE) to be effective. The reason is that if QE stimulates the economy, financial-market participants may expect higher policy rates, thereby offsetting the stimulus. Model simulations of QE2 in the United States suggest that if the FOMC had not communicated its intention to keep the interest rate near zero, the median effect of QE2 would have been only 0.04 percentage point on economic growth and 0.02 percentage point on inflation. With communication to keep the federal funds rate near zero, however, the effect turned out to be 0.22 percentage point on GDP growth and 0.05 percentage point on inflation.

Wu (2014) also examines the interaction between the Fed's FG and QE. He argues that communication about the QE program may indicate changes in the FOMC's assessment of the underlying economic conditions and thus may also generate changes in the market's expectations of the future short-term policy rate.
Likewise, changes in the FOMC's FG, such as an extension of the period during which the federal funds rate is kept around zero, may also affect expectations of the size and persistence of the QE program. Wu (2014) constructs an indicator of market expectations of the size and duration of the asset purchase program, based on real-time policy announcements as well as market surveys. Subsequently, he analyzes the impact of FG and QE on term premiums and expected future short-term rates, controlling for changes in the underlying macroeconomic and financial-market fundamentals. His results suggest that the Federal Reserve's FG and QE policies have effectively lowered long-term bond yields, reducing the ten-year term premium by more than 100 basis points. He also finds strong evidence of two spillover effects. First, FG leads to a gradual extension of the market's projected holding period of the assets purchased by the Fed, thereby keeping term premiums low. Second, continuing QE helps to enhance the credibility of FG and to guide the market's expectations of future short-term interest rates.

Greenwood et al. (2015) build a no-arbitrage model of the yield curve that allows them to analyze the effects of FG and of communication on QE on short and long rates. In their model, FG works through the expectations hypothesis, while communication on QE works through expected future bond-risk premiums. The authors show that the cumulative effect of all expansionary announcements up to 2013 was hump-shaped, with a maximum effect at the ten-year maturity for the yield curve and the seven-year maturity for the forward-rate curve. This evidence is consistent with changing expectations
238 De Haan and Sturm about QE rather than with changing expectations about policy rates. This conclusion is consistent with the findings of Swanson (2015), who decomposes the effect of FOMC announcements from 2009 to 2015 into a component that reflects news about FG and a component that reflects news about QE. The author finds that both QE-related and FG-related announcements have hump-shaped effects on the yield curve. Moreover, QE announcements have their largest impact at around the ten-year maturity, while FG announcements have their largest impact at two to five years. Finally, should central banks keep using FG once more normal situations set in? Perhaps not. Tellingly, the Bank of Canada, which introduced FG before many other central banks, decided to change course. As Poloz (2015) explains, statements to explicitly guide market expectations about the future path of interest rates are best reserved for extraordinary times, such as when policy rates are at the effective lower bound, or during periods of market stress. Keeping forward guidance in reserve ensures that it will have a greater impact when it is deployed. Otherwise, it becomes addictive and loses some of its impact—in short, forward guidance is not a free lunch, it comes with costs . . . in normal circumstances, it’s natural that there would be volatility as economic surprises occur. But, we think it best to have this volatility reflected in prices in informed, well-functioning financial markets, rather than be artificially suppressed by forward guidance.
8.3 Communication about Asset Purchase Programs Large-scale asset purchases (LSAPs), also often referred to as quantitative easing (QE), were first deployed in 2001 by the BoJ and then used more widely since the financial crisis by the Federal Reserve, the Riksbank, the BoE, and the ECB.14 When a central bank announces an asset purchase program, it can communicate about two key policy parameters: the size of purchases and for how long the program will run. A central bank may choose to communicate about both parameters at the start of the program or communicate on an ongoing basis. As pointed out by Wu (2014), like other central banks, the Federal Reserve has never explicitly specified the length of the holding period of the assets purchased, that is, when the Fed will begin to unload these assets from its balance sheet or how long this process may take, leaving much of it for market speculation. The Fed did not even always announce the amount of assets to be purchased. The ECB has been more specific. For instance, in January 2015, it announced: Combined monthly purchases will amount to €60 billion. They are intended to be carried out until at least September 2016 and in any case until the Governing Council sees a sustained adjustment in the path of inflation that is consistent with its aim of achieving inflation rates below, but close to, 2% over the medium term. The ECB will buy bonds issued by euro area central governments, agencies and
European institutions in the secondary market against central bank money, which the institutions that sold the securities can use to buy other assets and extend credit to the real economy.
In March 2016, the ECB announced that monthly purchases would be increased to €80 billion and that bonds issued by non-financial corporations would also be bought. These purchases would be carried out until the end of March 2017. Between April 2017 and December 2017 monthly purchases amounted to €60 billion. Monthly purchases were reduced to €30 billion in January 2018. On June 14, 2018, the ECB Governing Council announced that it “anticipates that, after September 2018, subject to incoming data confirming the Governing Council’s medium-term inflation outlook, the monthly pace of the net asset purchases will be reduced to €15 billion until the end of December 2018 and that net purchases will then end.” Not only communication about the start but also communication about the phasing out of an asset purchase program can have a major effect on financial markets. Probably the best-known example is the Fed’s tapering. On May 22, 2013, then Federal Reserve Chairman Bernanke announced in testimony before Congress that the Fed would slow or “taper” its QE program if the economy showed signs of improving. Within a week, yields of ten-year government bonds had increased by 21 basis points (Greenwood et al. 2015). Several studies have examined the impact of large-scale purchases of assets by central banks on financial markets. The IMF (2013) and Reza et al. (2015) provide comprehensive surveys. Appendix table 8.A.2 summarizes several studies that consider the impact of central bank communication about asset purchase programs on financial markets. Although most studies find significant financial-market reactions, the estimated magnitude of the effects differs greatly. Reza et al. (2015) find that the impact of the Fed’s QE1 on US Treasury yields as reported by eight studies ranges between 13 and 107 basis points. Likewise, the impact of QE2 reported by three studies ranges between 15 and 33 basis points.
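The ECB purchase paces quoted earlier in this section imply a cumulative program size that is easy to tally. The sketch below does so under two assumptions that the text does not state: purchases under the January 2015 announcement began in March 2015, and the €80 billion pace applied from April 2016.

```python
# Cumulative ECB net asset purchases implied by the announced monthly paces.
# The March 2015 start and the April 2016 switch to the EUR 80bn pace are
# assumptions; the paces themselves follow the announcements quoted above.
segments = [
    ("Mar 2015 - Mar 2016", 13, 60),  # (label, months, EUR bn per month)
    ("Apr 2016 - Mar 2017", 12, 80),
    ("Apr 2017 - Dec 2017",  9, 60),
    ("Jan 2018 - Sep 2018",  9, 30),
    ("Oct 2018 - Dec 2018",  3, 15),
]

total = sum(months * pace for _, months, pace in segments)
print(f"implied cumulative net purchases: EUR {total} bn")  # EUR 2595 bn
```

Under these assumptions the schedule sums to roughly €2.6 trillion, which illustrates why even small changes to the announced pace or end date carry large stock implications.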
Most studies on the impact of unconventional policies on financial markets use the event-studies approach (IMF 2013).15 Under this approach, changes in financial-market prices in a very narrow time window—typically one day—are measured around an official announcement related to asset purchases. There are two main assumptions underlying this approach: (i) on the day it is made, the announcement of the central bank dominates all other shocks to financial markets; and (ii) asset prices are forward-looking and react immediately and accurately to anticipated future purchases (no overshooting). Event studies have limitations (IMF 2013). First, they may underestimate the effect of purchases if they fail to control for the surprise element of announcements. In many cases, announcements are at least in part expected. Second, responses to the content of announcements can be undercut by the simultaneous downbeat economic assessment motivating the announcement. Indeed, Hosono and Isobe (2014) conclude that stock markets in the euro area reacted negatively to ECB unconventional monetary policy surprises.16 These authors argue that expansionary policies in a crisis may lead markets to believe that economic conditions are worse than market participants realized. Third,
it is assumed that the monetary policy shock is fully captured by some ad hoc window size around the chosen event. If this assumption does not hold, the results may be biased (Rigobon and Sack 2004). Too narrow a window may miss part of the reactions to the monetary policy news, but too wide a window may contaminate the monetary policy surprise with other news. That is why several papers (including Rogers et al. 2014; Unalmis and Unalmis 2015; and Haitsma et al. 2016) also apply the heteroscedasticity-based identification approach of Rigobon and Sack (2004). This approach is robust to endogeneity and omitted variables problems and therefore relies on much weaker assumptions than the event-study approach. The latter basically compares asset prices immediately after monetary policy announcements with those immediately before and attributes the changes to monetary policy surprises. It also implicitly assumes that, in the limit, the variance of the policy shock becomes infinitely large relative to the variances of other shocks on policy dates. The Rigobon-Sack method only requires a rise in the variance of the policy shock when the monetary policy decision is announced, while the variances of other shocks remain constant (Unalmis and Unalmis 2015). Rosa (2011) provides evidence that the event-study estimates of the response of asset prices to monetary policy contain a significant bias. But he also concludes that “this bias is fairly small and the OLS approach tends to outperform in an expected squared error sense the heteroscedasticity-based estimator for both small and large sample sizes. Hence in general the event-study methodology should be preferred” (Rosa 2011, 430). A crucial issue in this line of literature is how to measure unexpected monetary policies. Some studies use survey data from professional forecasters (for instance, Joyce et al. 
2011), while Rosa (2012) measures expectations based on newspaper articles judging whether actual Fed and BoE QE policy measures were more expansionary or restrictive than prior articles expected. However, most studies summarized in appendix table 8.A.2 measure policy surprises using asset prices. A good example of this line of research is the study by Altavilla et al. (2015), who use an event-study methodology to assess the impact of the ECB’s QE. Because the January 2015 ECB announcement was expected by financial markets, the authors consider a broad set of events involving the ECB’s official announcements that, starting from September 2014, could have affected market expectations about the program. The authors conclude that the program has significantly lowered yields for a broad set of market segments, with effects that generally rise with maturity and riskiness of assets. For instance, for long-term sovereign bonds, yields declined by about 30 to 50 basis points at the ten-year maturity and by roughly twice as much in higher-yield member countries such as Italy and Spain. The authors argue that the low degree of financial stress prevailing at the announcement of the program has facilitated spillovers to non-targeted assets. For instance, spreads relative to risk-free rates have declined by about 20 basis points for both euro area financial and non-financial corporations. Although there is strong evidence that central bank asset purchases affect financial markets,17 the channels through which such policies may affect long rates remain unclear (Bowdler and Radia 2012). Most research focuses on two possible channels for central bank asset purchases to affect financial-market prices. The first is the portfolio balance channel due to market segmentation (Joyce et al. 2011) or duration removal
(Gagnon et al. 2011) through which central bank purchases of specific assets might affect the risk pricing and term premiums of a wide range of securities.18 The second is the signaling channel: central bank asset purchases may signal to market participants that the central bank has changed either its views on the economic outlook or its policy preference, and investors therefore change their expectations of the future path of the policy rate accordingly, thereby lowering long-term bond yields (Bauer and Rudebusch 2014). Yet the relative effectiveness of these channels remains “a matter of considerable debate” (Woodford 2012). For instance, Woodford notes that “a comparison of the timing of the increases in the Fed’s holdings of long-term bonds with the timing of the declines in the 10-year yield does not obviously support a portfolio-balance interpretation of the overall decline in long-term interest rates” (2012, 71), whereas Gagnon et al. (2011) are skeptical about the signaling effect and argue that the portfolio balance channel explains most of the observed declines in long-term bond yields. There is some evidence suggesting that the impact of asset purchase programs may change over time. For instance, Krishnamurthy and Vissing-Jorgensen (2013) find that the first round of the Fed’s asset purchases reduced ten-year Treasury yields by more than 6 basis points per $100 billion worth of purchases of MBS and Treasury securities. In contrast, they find that QE2 reduced ten-year Treasury yields by only 3 basis points per $100 billion. According to the IMF (2013), this trend of diminished impact of asset purchase programs seems to have continued with later programs. In contrast, Michaelis and Watzka (2016) find, for the case of Japan, a stronger and longer-lasting effect of QE on real GDP and CPI, especially since 2013. Their evidence is based on a time-varying parameter vector autoregression (TVP-VAR) with sign restrictions. 
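The two identification strategies contrasted in this section—the event-study approach and Rigobon and Sack’s (2004) identification through heteroscedasticity—can be illustrated with a stylized simulation. This is a sketch on synthetic data, not code from any study cited here: the true yield response and the size of the variance shift on event days are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n_days, policy_var, beta=-4.0):
    """Daily changes in a policy-rate proxy (di) and a bond yield (ds).
    On event days the policy-shock variance rises; other shocks stay constant."""
    policy = rng.normal(0.0, np.sqrt(policy_var), n_days)  # monetary policy shock
    other = rng.normal(0.0, 1.0, n_days)                   # common non-policy shock
    di = policy + 0.5 * other
    ds = beta * policy + other
    return di, ds

def rigobon_sack(di_e, ds_e, di_n, ds_n):
    """Heteroscedasticity-based estimator: the response coefficient is the
    shift in covariance across event (e) and non-event (n) samples, divided
    by the shift in the variance of the policy indicator."""
    num = np.cov(di_e, ds_e)[0, 1] - np.cov(di_n, ds_n)[0, 1]
    den = np.var(di_e, ddof=1) - np.var(di_n, ddof=1)
    return num / den

# Assumed setup: policy-shock variance is 10x higher on event days.
di_e, ds_e = simulate(2000, policy_var=10.0)
di_n, ds_n = simulate(2000, policy_var=1.0)

# Event-study OLS on event days only, versus the Rigobon-Sack estimate.
ols_beta = np.cov(di_e, ds_e)[0, 1] / np.var(di_e, ddof=1)
rs_beta = rigobon_sack(di_e, ds_e, di_n, ds_n)
print(f"OLS on event days: {ols_beta:.2f}, Rigobon-Sack: {rs_beta:.2f} (true -4.00)")
```

The simulation mirrors the trade-off described above: the event-study OLS estimate is slightly contaminated by the non-policy shock, while the heteroscedasticity-based estimator only requires that the policy-shock variance rise on announcement days.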
Finally, the exit of LSAPs has received scant attention in the literature. However, as pointed out by Krishnamurthy and Vissing-Jorgensen (2013), asset prices react today to new information about future LSAPs. Since the prices of long-term assets are much more sensitive to expectations about future policy than those of short-term assets, controlling these expectations is of central importance in the transmission mechanism of QE. Therefore, how an exit is communicated to investors matters greatly. Previous communication about LSAPs matters, too: “If the Fed had fully described the state dependence of its rule for LSAPs, then asset prices would respond only to news regarding the state and the implied policy change and there would be no policy shocks leading to additional asset price volatility. But, the Fed has . . . . chosen not to fully describe the states that drive LSAP policy decisions or the specific dependence of policy on these states” (Krishnamurthy and Vissing-Jorgensen 2013, 95).
8.4 Managing Inflation Expectations Although most evidence suggests that financial markets were affected in the intended direction by FG and asset purchase programs, this does not imply that these unconventional policies have been able to increase short-term inflation (expectations).19 This issue is important, as most central banks now face the problem that inflation is below
target (see below). Those studies that do examine the impact of unconventional monetary policies on output and inflation (summarized in app. tab. 3 of IMF 2013) come to very different quantitative conclusions.20 This is not surprising in view of the difficulty of coming up with a counterfactual, that is, what would have happened absent policy action. This is especially difficult as unconventional policy measures were mostly introduced during periods with high financial stress and/or severe macroeconomic risks. Most models—whether traditional time series models or New Keynesian ones—are notoriously poor at capturing crises (IMF 2013). We will summarize some recent studies that explicitly examine the impact of unconventional monetary policy on inflation (expectations) using other methodologies. Using expected inflation rates derived from inflation swap rates and Treasury inflation-protected securities (TIPS), Krishnamurthy and Vissing-Jorgensen (2011) find that expected inflation increased in response to the Federal Reserve’s first two QE programs (QE1 and QE2). These results are based on an event-study methodology. This approach is also used by Guidolin and Neely (2010), who conclude that inflation expectations appear to react modestly to LSAP announcements. Using a structural VAR to identify the effects of monetary policy shocks for the period November 2008 to December 2010, Wright (2011) finds evidence that monetary policy shocks led to a rotation in the spread between a nominal bond and a TIPS bond (known as the break-even rate or inflation compensation), with short-term spreads rising and long-term spreads falling.21 Hofmann and Zhu (2013) examine whether central bank LSAP announcements have led to higher inflation expectations in the United States and the United Kingdom. They conclude that central bank asset purchases were probably not the main driver of the shifts in inflation expectations on the days of program announcements. 
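The break-even rate referred to above is simply the spread between a nominal Treasury yield and the TIPS yield at the same maturity, and the rotation Wright (2011) reports raises short-maturity break-evens while lowering long ones. A minimal sketch, using made-up illustrative yields rather than actual market data:

```python
# Break-even inflation ("inflation compensation") is the spread between a
# nominal Treasury yield and the TIPS (real) yield at the same maturity.
# All yields below are illustrative numbers, not actual market data.
nominal = {2: 0.90, 5: 1.60, 10: 2.50}   # percent, by maturity in years
tips    = {2: 0.10, 5: 0.40, 10: 0.70}   # percent

breakeven = {m: round(nominal[m] - tips[m], 2) for m in nominal}
print(breakeven)  # {2: 0.8, 5: 1.2, 10: 1.8}

# A "rotation" of the kind Wright (2011) describes: short-maturity
# break-evens rise while long-maturity break-evens fall.
shock = {2: +0.10, 5: 0.00, 10: -0.05}
post = {m: round(breakeven[m] + shock[m], 2) for m in breakeven}
assert post[2] > breakeven[2] and post[10] < breakeven[10]
```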
Recently, Wieladek and Garcia Pascual (2016) also examined the effects of the ECB’s QE, adopting the methodology previously used by Weale and Wieladek (2016) to study the effects of asset purchase programs of the BoE and the Federal Reserve. The authors assess the macroeconomic impact of the ECB’s QE by comparing data outturns to a counterfactual where it is not announced. Using monthly data from June 2012 to April 2016, the authors conclude that in absence of the first round of ECB QE, real GDP and core CPI would have been 1.3 percentage points and 0.9 percentage point lower, respectively. The effect is roughly two-thirds smaller than those of asset purchase programs in the United Kingdom and the United States as reported by Weale and Wieladek (2016). Impulse response analysis suggests that the policy is transmitted via the portfolio rebalancing, the signaling, credit easing, and exchange-rate channels. An important device for central banks to affect inflation expectations is communication about their inflation objective. Apart from influencing short-term inflation expectations (see below), central banks aim to anchor inflation expectations through communicating about their inflation objective. Bernanke (2007) defines anchoring as “relatively insensitive to incoming data.” Ball and Mazumder (2011) call this “shock anchoring” (transitory shocks to inflation are not passed into expectations or into future inflation), differentiating it from “level anchoring,” which means that expectations are tied to a particular level of inflation, such as 2 percent. If expectations are anchored, they
no longer respond strongly to past inflation. Van der Cruijsen and Demertzis (2011) find that in countries with low-transparency central banks, a significant positive link exists between current inflation and inflation expectations (no anchoring), while this relationship is absent in countries with highly transparent central banks. Studies using forecasts of financial-market participants or professional forecasters generally find that explicit numerical inflation targets help anchor inflation expectations. These studies are based on the idea that if expectations are well anchored, long-run inflation expectations should be stable in response to macroeconomic news, policy announcements, and changes in short-horizon expectations (Gürkaynak et al. 2007; Ball and Mazumder 2011; Beechey et al. 2011; Galati et al. 2011). However, studies on knowledge about the central bank’s objective and inflation expectations of households and firms come to much more sobering conclusions. For instance, van der Cruijsen et al. (2015) studied the general public’s knowledge in the Netherlands, using a survey of Dutch households. They presented participants with eleven statements about the ECB’s objective. Four of these statements were based on the ECB’s specification of its objective, while the remaining seven were false statements. The authors found respondents’ knowledge about the ECB’s policy objectives was far from perfect: the average number of correct answers was less than five. They also reported that respondents’ knowledge about the ECB’s monetary policy objective was negatively related to their absolute inflation forecast errors.22 Likewise, Binder (2017) finds that Americans are generally unable to identify recent inflation dynamics with any degree of precision. Barely half of consumers expect long-run inflation to be near the Fed’s 2 percent target. This evidence is based on data from the Michigan Survey of Consumers. 
In interpreting her findings, Binder distinguishes between informedness and credibility, where the former refers to the Fed’s ability to capture the attention of households to convey basic information about its policies, while the latter refers to the Fed’s success in convincing households that it is committed to its goals. Binder’s results suggest that both low credibility and low informedness are major barriers to well-anchored inflation expectations among the general public. Finally, Kumar et al. (2015) performed a survey of managers of firms in New Zealand, asking a wide range of questions about their inflation expectations, their individual and firm’s characteristics, as well as their knowledge and understanding of monetary policy. Despite twenty-five years of inflation targeting in New Zealand, the survey suggests that managers of firms have been forecasting much higher levels of inflation than actually occurred, at both short-run horizons and very long-run horizons. Their average perception of recent inflation is also systematically much higher than actual inflation, which is in line with the findings of the literature discussed in Ranyard et al. (2008). Furthermore, there is tremendous disagreement in the forecasts of firms, at all horizons, as well as disagreement about recent inflation dynamics. Firms also express far more uncertainty in their inflation forecasts than do professional forecasters. The authors also find that managers commonly report large revisions in their forecasts, suggesting that expectations are not well anchored.
These findings are also highly relevant in view of the debate in the literature about raising the inflation objective of central banks. The (primary) objective of most central banks in advanced countries is still price stability, which is most often defined as an inflation rate around 2 percent. For instance, the ECB aims for an inflation rate “below, but close to, 2%,” where inflation is defined as a year-on-year increase in the Harmonized Index of Consumer Prices (HICP) for the euro area. Likewise, the chancellor of the Exchequer in the United Kingdom has confirmed that for the BoE, the “operational target for monetary policy remains an inflation rate of 2%, measured by the 12-month increase in the Consumer Prices Index. The inflation target of 2% applies at all times. This reflects the primacy of price stability and the inflation target in the UK monetary policy framework.”23 The most important argument for raising the inflation target is the problem of the ELB. Ball (2014, 16), for instance, argues in favor of a target of 4 percent, as it “would ease the constraints on monetary policy arising from the zero bound on interest rates, with the result that economic downturns would be less severe. This benefit would come at minimal cost, because four percent inflation does not harm an economy significantly.” However, raising the inflation objective to prevent the ELB from becoming binding is not the same as raising inflation while being at the ELB. Using a quantified stochastic equilibrium model in which the economy may alternate between a targeted inflation and a deflation regime, Aruoba and Schorfheide (2015) discuss the implications of a historical counterfactual where the Federal Reserve adopted a 4 percent inflation target in 1984. In this scenario, there could be some improvements in welfare, especially in the case where the Federal Reserve acts even more aggressively in cutting policy rates. 
In addition, the authors examine the effects if the Federal Reserve changed its target in 2014, that is, at the ELB. The authors’ findings suggest that this policy change does not generate clear short- to medium-run benefits, while the long-run benefits strongly depend on the likelihood of adverse shocks that push the economy to the ELB again. In addition, in real life, it may be hard to raise inflation expectations. Indeed, Ehrmann (2015) shows that under persistently low inflation, inflation expectations are not as well anchored as when inflation is around target: inflation expectations are more dependent on lagged inflation, forecasters tend to disagree more, and inflation expectations get revised down in response to lower-than-expected inflation but do not respond to higher-than-expected inflation. An important open issue is to what extent short-term inflation expectations are affected if the central bank announces a higher inflation target. That this remains open is, of course, due to the fact that central banks hardly ever raise their inflation objective. The experience of Japan and the United States (which at some point introduced an explicit inflation target) and New Zealand (which has changed its inflation target a couple of times, although these changes were generally rather small) may shed some light on this issue. The study by Nakazono (2016) on Japan does not come to very optimistic conclusions. He finds that survey data indicate that economic agents’ long-term forecasts of Japanese inflation are not in line with those of the BoJ, despite the adoption of a 2 percent inflation target in January 2013 and the introduction of quantitative and qualitative monetary easing (QQE) in April 2013.24 Using a VAR model, De Michelis and Iacoviello
(2016) examine how macroeconomic variables respond to an identified inflation target shock. The authors apply these findings to calibrate the effect of a shock to the inflation target in two New Keynesian DSGE models of the Japanese economy, with imperfect credibility. An important feature of the models is that agents update their estimates of the inflation target only gradually over time. The main findings of the analysis are that increasing an inflation target can have powerful effects on activity and inflation, especially when the economy is in a liquidity trap, but these effects can be smaller if the policy is not fully credible. Using several survey measures and the Nelson-Siegel model25 to construct a term structure of inflation expectations, Aruoba (2016) finds that the announcement of a formal inflation target of 2 percent (based on annual changes in the personal consumption expenditures price index) on January 25, 2012, by the Federal Reserve did not affect inflation expectations (or real interest rates) in any significant way. But this may also reflect the fact that the announcement did not provide any new information to financial-market participants. Finally, having changed its inflation target a number of times since 1990, New Zealand provides a natural laboratory to understand the macroeconomic consequences of changing an inflation target. Lewis and McDermott (2015) apply the Nelson-Siegel model to inflation expectations data in New Zealand to generate inflation expectations curves fitted over various time horizons. Examining such curves enables them to assess how expectations have shifted in response to changes in the inflation target. The results from the Nelson-Siegel model suggest that changes to the inflation target change inflation expectations significantly. 
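The Nelson-Siegel model used in these studies represents the curve at horizon τ as y(τ) = β₀ + β₁(1 − e^(−λτ))/(λτ) + β₂[(1 − e^(−λτ))/(λτ) − e^(−λτ)], with the level factor β₀ capturing the long-horizon limit. The sketch below fits the three factors by least squares for a fixed decay parameter; the decay value and the expectations data are made up for illustration, not taken from Aruoba (2016) or Lewis and McDermott (2015).

```python
import numpy as np

def ns_loadings(tau, lam=0.5):
    """Nelson-Siegel factor loadings (level, slope, curvature) at horizons tau."""
    x = lam * tau
    slope = (1 - np.exp(-x)) / x
    return np.column_stack([np.ones_like(tau), slope, slope - np.exp(-x)])

def fit_ns(tau, y, lam=0.5):
    """Fit the three factors by OLS for a fixed decay parameter lam."""
    X = ns_loadings(tau, lam)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Illustrative term structure of inflation expectations: horizons in years,
# expectations in percent. These numbers are made up for the sketch.
tau = np.array([0.25, 0.5, 1.0, 2.0, 5.0, 10.0])
y = np.array([0.8, 1.0, 1.3, 1.6, 1.9, 2.0])

beta = fit_ns(tau, y)
print(f"fitted level factor (long-run expectation): {beta[0]:.2f}%")
```

Shifts in the fitted level factor across estimation dates are what allow such studies to track whether long-run expectations move when the target changes.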
Particularly striking is the estimated 0.45 percentage point increase in inflation expectations when the target midpoint was increased by 0.5 percentage point in 2002. The most important worry among central bankers about raising the inflation objective is that it may be “the first step on the slippery slope of inflation creep,” causing inflation expectations to get deanchored and undermining the credibility of central banks (de Haan et al. 2016). However, there is only limited empirical evidence for this. The results of Lewis and McDermott (2015, 25) suggest that the “periodic ‘fine-tuning’ of the inflation target has steadily introduced more flexibility into the New Zealand regime. This flexibility has been gained while still keeping inflation expectations anchored.” Unfortunately, this is the only study that we are aware of that examines the risk of deanchoring if the inflation target is increased. Two other recent studies on the credibility of the ECB (Moessner 2015b; Galati et al. 2016) cannot examine this issue, as the ECB has not raised its inflation target. Still, these papers are relevant, as they assess the credibility of central banks. Although very different in their methodology, the papers have in common that they assess credibility by examining whether certain events (the introduction of an asset purchase program, higher oil prices) affect long-term inflation expectations. If so, that would suggest that the credibility of the central bank had been reduced. Moessner (2015b) examines whether ECB balance-sheet policy announcements affect long-term inflation expectations measured using inflation swap rates of maturities up to ten years. She finds that these announcements only led to a slight increase in long-term
inflation expectations. Galati et al. (2016) investigate the impact of changes in oil prices on deflation risk, using a novel data set on market participants’ perceptions of short- to long-term deflation risk implied by year-on-year options on forward inflation swaps. A key advantage of this data set is that it allows the authors to extract expected probabilities of deflation from options on forward inflation rates, as opposed to zero-coupon options on spot inflation rates, which have been mainly used in the literature so far. If inflation expectations are anchored at the central bank’s inflation target, deflation probabilities at the short- and medium-term horizon may react to changes in oil prices, but at the long-term horizon they should not. The authors find that short-term deflation risk in the euro area (two years ahead) reacts significantly and most strongly to oil prices. Medium-term deflation risk (three, five, and seven years ahead) also reacts significantly to oil prices but less strongly than at shorter horizons, while long-term deflation risk (ten years ahead) does not react to oil prices.
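The anchoring test underlying these studies can be sketched as a horizon-by-horizon regression of changes in a deflation-risk (or inflation-expectations) measure on oil-price changes: anchoring implies a coefficient near zero at long horizons. The sketch below uses synthetic data with assumed sensitivities, not the data set of Galati et al. (2016).

```python
import numpy as np

rng = np.random.default_rng(2)
n = 750  # number of (synthetic) trading days

d_oil = rng.normal(0.0, 2.0, n)  # daily oil-price changes, percent

# Assumed true sensitivities of deflation risk to oil prices by horizon:
# strongest at short horizons, zero at ten years if expectations are anchored.
true_beta = {2: -0.50, 5: -0.20, 10: 0.00}

est = {}
for horizon, b in true_beta.items():
    d_risk = b * d_oil + rng.normal(0.0, 1.0, n)  # change in deflation risk
    X = np.column_stack([np.ones(n), d_oil])
    coef, *_ = np.linalg.lstsq(X, d_risk, rcond=None)
    est[horizon] = coef[1]
    print(f"{horizon:>2}y horizon: estimated sensitivity {coef[1]:+.2f}")
```

The pattern the regressions recover—a sizable negative short-horizon coefficient fading to roughly zero at ten years—is the signature of anchored expectations described in the text.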
8.5 Conclusion The Great Financial Crisis and the ensuing Great Recession have changed communication policies of most central banks in important ways. This chapter has discussed the literature on three of these changes: FG regarding policy rates, communication regarding asset purchase programs, and inflation expectation management.26 Many central banks nowadays communicate—albeit in different ways—about their future policy rates. Although the theoretical literature suggests that Odyssean FG can be used to overcome the problem of the ELB, central banks in practice do not commit to keeping policy rates lower for longer (Moessner et al. 2017). Instead, they in general provide (different forms of) Delphic FG as an answer to the trade-off between informing the private sector and avoiding the impression that they commit. Empirical research suggests that communication about future policy rates affects interest-rate expectations. However, it is still an open question which type of FG (open- ended, time-dependent, or state-contingent) is most effective. Changing economic conditions might make it necessary for central banks to deviate from previous state- contingent announcements, which might negatively affect their reputation and credibility as the cases of the BoE and the Federal Reserve have illustrated. But further research is needed before more definite conclusions can be drawn. Likewise, the interaction of FG and communication about asset purchase programs deserves further attention (Blinder et al. 2017). The limited research on this does suggest that forward policy rate guidance is essential for these asset purchase programs to be effective. Without it, the stimulus impact of these programs might well be offset by expectations of a higher policy rate. Communication with respect to asset purchase programs often concerns two dimensions: the size and the duration of these purchases. 
The Fed’s tapering has shown that the phasing out of such programs also can have a clear impact on markets. As prices
on long-term assets are more sensitive to changes in expectations than those on short-term assets, careful guidance regarding such asset purchase programs appears crucial to avoid high market volatility. Most empirical studies on communication regarding asset purchase programs find significant financial-market reactions; the estimated effects, however, differ greatly and might not be stable over time. Whereas in the United States, the impact appears to have diminished over time, for Japan, the latest program has a stronger and longer-lasting impact than earlier ones. The main transmission channels through which the impact occurs are the portfolio balance channel and the signaling channel. Whereas under the first, purchases affect risk pricing and term premiums, under the second, purchases signal a change in the central bank’s economic outlook or underlying preferences, which leads investors to change their expectations of future policy rates. It is still an open question which of these two channels is more important in practice. To what extent FG and asset purchase programs have affected short-term inflation expectations is unclear. It is, for instance, difficult to come up with a counterfactual, as most unconventional policy measures have been introduced during crisis periods, which are notoriously difficult to capture with existing macroeconomic models. An important way in which central banks around the world are trying to manage inflation expectations is by communicating about their inflation objectives. Indeed, many studies using financial-market or professional forecasts do find that explicit numerical inflation targets help anchor inflation expectations. On the other hand, studies focusing on inflation expectations of households and firms and their knowledge about the central bank’s objective in general suggest large error margins and limited knowledge. 
Nevertheless, there is an interesting ongoing discussion about whether central banks, in the current environment in which the ELB on policy rates appears to have been reached, should raise their inflation targets. This might not only raise inflation expectations but also reduce the likelihood that the ELB becomes binding again in the future. The potential risks of raising inflation targets are a deanchoring of inflation expectations and a loss of central bank credibility, but there is hardly any evidence on how large these risks are. Whether increasing inflation targets would be effective in raising inflation expectations is also not clear. For Japan and the United States, which recently introduced more explicit inflation objectives, no clear impact on inflation expectations has been observed, whereas for New Zealand, a country in which the inflation target has been changed a few times (although these changes were small), inflation expectations do seem to reflect these changes while at the same time remaining anchored.
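Whether expectations remain anchored is commonly assessed by regressing changes in long-horizon inflation expectations on inflation surprises: a pass-through near zero indicates anchoring. A minimal sketch with synthetic data (the pass-through coefficients and noise levels are illustrative assumptions, not estimates from any study cited here):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic monthly data: inflation surprises (actual minus consensus, in
# percentage points) and same-month changes in long-horizon inflation
# expectations under two hypothetical regimes.
n = 240
surprise = rng.normal(0.0, 0.2, n)
d_exp_anchored = 0.02 * surprise + rng.normal(0.0, 0.05, n)    # near-zero pass-through
d_exp_unanchored = 0.60 * surprise + rng.normal(0.0, 0.05, n)  # strong pass-through

def passthrough(surprise, d_exp):
    """OLS slope of expectation changes on surprises: ~0 means anchored."""
    X = np.column_stack([np.ones_like(surprise), surprise])
    beta, *_ = np.linalg.lstsq(X, d_exp, rcond=None)
    return beta[1]

print("anchored pass-through:  ", round(passthrough(surprise, d_exp_anchored), 2))
print("unanchored pass-through:", round(passthrough(surprise, d_exp_unanchored), 2))
```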
Notes
1. See Blinder et al. 2008 for a survey of the (mostly empirical) literature on central bank communication. For some recent research, see Conrad and Lamla 2010; Hayo and Neuenkirch 2010 and 2011; Lamla and Lein 2011; Siklos and Sturm 2013; Neuenkirch 2014;
248 De Haan and Sturm
Hayo et al. 2015. See also Mishkin 2011 and Blinder et al. 2017 for a general discussion of monetary policy after the crisis.
2. For a while, it was thought that policy rates could not drop below zero. That is why the term zero lower bound (ZLB) is often used. As policy rates in some countries have become negative, we prefer the term effective lower bound (ELB).
3. Several other important issues will not be dealt with in this survey, including the effect of central bank communication on financial markets in other countries (e.g., Hayo et al. 2012; Fratzscher et al. 2014) and the relationship between central bank communication and credibility (e.g., Hayo and Neuenkirch 2015a).
4. Drawing on Milani 2012, rational expectations can be explained as follows. Under rational expectations, agents in the model form expectations that correspond to the mathematical conditional expectation implied by the model that the researcher is using, so that expectations are model-consistent. In other words, agents in the model form expectations that correspond to the true expectations generated by the model itself. Economic agents are thus assumed to know the form and solution of the model; they know the parameters describing preferences, technology, constraints, and policy behavior; and they know the distributions of exogenous shocks and the relevant parameters describing those distributions. The only source of uncertainty for agents is the realization of the random exogenous shocks, which cannot be forecast.
5. This section draws heavily on Moessner et al. (2017).
6. As David Mayes pointed out to us, in 1995 the RBNZ forecast an inflation rate outside the target. The bank concluded that interest rates would have to rise under its current projections and built that into the forecast. By mid-1997, the RBNZ had implemented a new econometric model, which used model-consistent expectations.
From then on, a complete track of policy rate forecasts was set out rather than just the necessary direction of change. It has been argued that the central bank’s own projection of the policy interest rate path is “the only appropriate and logically consistent choice” (Mishkin 2004, 9) for inflation-targeting central banks and “provides the private sector with the best aggregate information for making individual decisions” (Svensson 2006, 185). Some central banks, including the central banks of New Zealand, Norway, and Sweden, follow this practice. See Moessner et al. (2017) for a survey.
7. The Bank of Japan (BoJ) had already faced this problem in the late 1990s, when a reduction of the BoJ’s target for the call rate (the overnight rate that had been its operating target until then) to zero was insufficient to halt deflation in Japan (Woodford 2012). The BoJ introduced forward guidance in 1999 (Shirai 2013).
8. A recent theoretical literature examines how a central bank can credibly commit. For example, Gersbach et al. (2015) show in a New Keynesian framework that forward guidance contracts can function as a flexible commitment device and thereby improve economic performance when the economy is stuck in a liquidity trap.
9. Former BoE deputy governor for monetary policy Charles Bean (2013) describes the state-contingent forward guidance introduced by the BoE’s Monetary Policy Committee (MPC) in August 2013 as follows: “This guidance is intended primarily to clarify our reaction function and thus make policy more effective, rather than to inject additional stimulus by pre-committing to a time-inconsistent lower for longer policy path in the manner of Woodford (2012).”
10. Also, Woodford (2012) concludes that the policies of the Federal Reserve and the Bank of Canada are not in line with the New Keynesian prescription of committing to overshoot
inflation in the future. Rather, they can be thought of as simply guidance about how policy would be set in the future by a forward-looking central bank. In other words, they did not incorporate a promise to be “irresponsible” in the future, although the FOMC’s policy announcement on September 13, 2012, goes some way toward doing this. The statement stresses that “a highly accommodative stance of monetary policy will remain appropriate for a considerable time after the economic recovery strengthens.” Still, it remains some way from a commitment to temporarily accept inflation higher than its target (Bowdler and Radia 2012).
11. http://news.sky.com/story/1635957/governors-forecasts-prove-wide-of-the-mark as released on February 4, 2016.
12. http://www.cityam.com/1407961668/sorry-tale-forward-guidance.
13. The IMF (2014) has also examined the impact of different types of FG on policy uncertainty. The report concludes that time-dependent FG is associated with lower policy rate uncertainty on average, as it successfully pushed the liftoff date farther out. State-contingent and open-ended FG are also associated with lower policy uncertainty. See Moessner et al. (2017) for a further discussion of the impact of FG on the predictability of monetary policy.
14. This section draws on International Monetary Fund (2013) and Haitsma et al. (2016).
15. Some studies employ an impulse-response analysis based on VAR models (e.g., Kontonikas and Kostakis 2013).
16. However, Hosono and Isobe (2014) use the changes in daily prices of ten-year German government bond futures traded on the Eurex Exchange. As pointed out by Rogers et al. (2014), several unconventional policies of the ECB during the crisis were aimed at reducing intra-euro-area sovereign spreads, not at lowering German interest rates.
In fact, actions that succeeded in lowering sovereign spreads in countries under stress (such as Italy and Spain) tended to drive German yields up. Thus, measuring monetary policy using German yields alone would result in the perverse conclusion that these policies represented an attempt by the ECB to tighten financial conditions. That is why Rogers et al. (2014) and Haitsma et al. (2016) identify unconventional monetary policy surprises using the yield spread between German and Italian ten-year government bonds on the day of an ECB policy announcement.
17. Altavilla et al. (2015) summarize the literature on the impact of central banks’ asset purchase programs as follows. First, the impact of programs carried out in the aftermath of the collapse of Lehman is generally found to be stronger than that exerted by subsequent programs. Second, “narrow channels” of transmission are generally more important than “broad channels”—channels are defined as narrow when the impact is concentrated on the assets targeted by the program, with little spillover to other market segments. Third, the bulk of the impact of purchase programs is found to arise at announcement.
18. Alternative channels are the capital constraints channel (Cúrdia and Woodford 2011; Gertler and Karadi 2011; He and Krishnamurthy 2013), the scarcity channel (Krishnamurthy and Vissing-Jorgensen 2013), and the exchange-rate channel. According to the first, central bank purchases of assets whose prices are low because of distress in the financial intermediary sector have beneficial effects. Regarding the second, central bank purchases of new issuance of, for example, MBS have led to a scarcity premium on the production coupon MBS, driving spreads on MBS down. The scarcity generates incentives for banks to originate more loans. Finally, if markets expect that, due to asset purchase programs, interest rates
will be low for longer, they will shift their portfolios toward regions with higher yields. This will lead to a depreciation of the currency (the exchange-rate channel).
19. Still, as pointed out by Bayoumi et al. (2014), these results draw primarily on periods of severe financial distress, and unconventional policies may be less effective if an economy reaches the ELB in the absence of a major financial disruption. Furthermore, there is a worry that unconventional policies may have unintended consequences (Group of Thirty 2015), although there is hardly any research on this yet. Concerns are that low interest rates favor loan evergreening, reduce incentives for financial restructuring, and distort an efficient allocation of credit.
20. For instance, based on a dynamic stochastic general equilibrium (DSGE) model with segmented markets, Chen et al. (2012) report that asset purchase programs in the United States had a very small impact on inflation (a rise of 0.03 percentage point). In contrast, Chung et al. (2012), using simulations of the Federal Reserve’s FRB/US model, report that these programs raised inflation by 0.4–1.0 percentage point. Using simulations from a large Bayesian VAR model, Churm et al. (2015) conclude that the second round of purchases by the BoE increased inflation by, at most, 0.6 percentage point.
21. These spreads are influenced by expected inflation, the inflation risk premium, and the TIPS liquidity premium.
22. Several recent studies (e.g., Lamla and Maag 2012; Hayo and Neuenkirch 2012 and 2015b; Lamla and Lein 2014 and 2015; Dräger 2015) have examined the role of the media in (the public’s) inflation expectations.
23. http://www.bankofengland.co.uk/monetarypolicy/Documents/pdf/chancellorletter180315.pdf.
24. The BoJ released a statement on April 4, 2013, with the introduction of QQE, stating that the bank “will achieve the price stability target of 2% . . .
at the earliest possible time, with a time horizon of about two years,” and the bank “will continue with QQE, aiming to achieve the price stability target of 2%, as long as it is necessary for maintaining that target in a stable manner. It will examine both upside and downside risks to economic activity and prices, and make adjustments as appropriate” (Shirai 2013).
25. The Nelson-Siegel model can be thought of as a dynamic factor model with prespecified factors that describe the shape of the curve. With only three factors determining the level, slope, and curvature of the curve, the Nelson-Siegel model is a parsimonious way to obtain a curve from expectation surveys (Lewis and McDermott 2015).
26. Levin (2014) presents a set of principles regarding the joint design of monetary policy strategy and central bank communication.
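The three-factor structure described in note 25 can be sketched numerically. The standard Nelson-Siegel yield at maturity τ is β₀ + β₁(1−e^(−λτ))/(λτ) + β₂[(1−e^(−λτ))/(λτ) − e^(−λτ)]; the parameter values below are purely illustrative, not taken from any survey or study cited here:

```python
import numpy as np

def nelson_siegel(tau, beta0, beta1, beta2, lam):
    """Nelson-Siegel yield at maturity tau (years): a level factor (beta0),
    a slope factor (beta1), and a curvature factor (beta2), with prespecified
    loadings governed by the decay parameter lam."""
    x = lam * tau
    slope_loading = (1.0 - np.exp(-x)) / x
    curvature_loading = slope_loading - np.exp(-x)
    return beta0 + beta1 * slope_loading + beta2 * curvature_loading

# Illustrative parameters: long-run level of 3%, an upward-sloping curve
# (beta1 = -2), a mild hump (beta2 = 1), and decay lam = 0.5.
maturities = np.array([0.25, 1.0, 2.0, 5.0, 10.0])
curve = nelson_siegel(maturities, beta0=3.0, beta1=-2.0, beta2=1.0, lam=0.5)
print(np.round(curve, 2))
```

The slope loading tends to one at short maturities and to zero at long maturities, so the curve rises from roughly β₀ + β₁ at the short end toward the level β₀, which is what makes the model a parsimonious way to fit a whole curve from a handful of survey points.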
References
Altavilla, C., G. Carboni, and R. Motto. 2015. “Asset Purchase Programmes and Financial Markets: Lessons from the Euro Area.” ECB working paper 1864.
Aruoba, S. B. 2016. “Term Structures of Inflation Expectations and Real Interest Rates.” Mimeo.
Aruoba, S. B., and F. Schorfheide. 2015. “Inflation during and after the Zero Lower Bound.” Mimeo.
Ball, L. 2014. “A Case for a Long-Run Inflation Target of Four Percent.” IMF working paper 14/92.
Ball, L., and S. Mazumder. 2011. “Inflation Dynamics and the Great Recession.” Brookings Papers on Economic Activity (Spring): 337–405.
Bauer, M. D., and G. D. Rudebusch. 2014. “The Signaling Channel for Federal Reserve Bond Purchases.” International Journal of Central Banking 10, no. 3: 233–289.
Bayoumi, T., G. Dell’Ariccia, K. Habermeier, T. Mancini-Griffoli, and F. Valencia. 2014. “Monetary Policy in the New Normal.” IMF staff discussion note 14/3.
Bean, C. 2013. “Global Aspects of Unconventional Monetary Policies.” Speech. https://www.bankofengland.co.uk/speech/2013/global-aspects-of-unconventional-monetary-policies.
Beechey, M., B. Johannsen, and A. Levin. 2011. “Are Long-Run Inflation Expectations Anchored More Firmly in the Euro Area Than in the United States?” American Economic Journal: Macroeconomics 3: 104–129.
Bernanke, B. 2007. “Inflation Expectations and Inflation Forecasting.” NBER Summer Institute, Monetary Economics Workshop. https://www.federalreserve.gov/newsevents/speech/bernanke20070710a.htm.
Binder, C. 2017. “Fed Speak on Main Street.” Journal of Macroeconomics 52: 238–251.
Blinder, A., M. Ehrmann, J. de Haan, and D.-J. Jansen. 2017. “Necessity As the Mother of Invention: Monetary Policy after the Crisis.” Economic Policy 32, no. 92: 707–755.
Blinder, A., M. Ehrmann, M. Fratzscher, J. de Haan, and D.-J. Jansen. 2008. “Central Bank Communication and Monetary Policy: A Survey of Theory and Evidence.” Journal of Economic Literature 46, no. 4: 910–945.
Bowdler, C., and A. Radia. 2012. “Unconventional Monetary Policy: The Assessment.” Oxford Review of Economic Policy 28, no. 4: 603–621.
Campbell, J., C. Evans, J. Fisher, and A. Justiniano. 2012. “Macroeconomic Effects of FOMC Forward Guidance.” Brookings Papers on Economic Activity (Spring): 1–54.
Chen, H., V. Cúrdia, and A. Ferrero. 2012. “The Macroeconomic Effects of Large-Scale Asset Purchase Programmes.” Economic Journal 122, no.
564: F289–F315.
Christensen, J. H. E., and G. D. Rudebusch. 2012. “The Response of Interest Rates to U.S. and U.K. Quantitative Easing.” Economic Journal 122, no. 564: 385–414.
Chung, H., J. Laforte, D. Reifschneider, and J. C. Williams. 2012. “Have We Underestimated the Likelihood and Severity of Zero Lower Bound Events?” Journal of Money, Credit and Banking 44, supp.: 47–82.
Churm, R., M. Joyce, G. Kapetanios, and K. Theodoridis. 2015. “Unconventional Monetary Policies and the Macroeconomy: The Impact of the United Kingdom’s QE2 and Funding for Lending Scheme.” Bank of England staff working paper 542.
Conrad, C., and M. J. Lamla. 2010. “The High-Frequency Response of the EUR-USD Exchange Rate to ECB Communication.” Journal of Money, Credit and Banking 42, no. 7: 1391–1417.
Cúrdia, V., and A. Ferrero. 2013. “How Stimulatory Are Large-Scale Asset Purchases?” Federal Reserve Bank of San Francisco Economic Letter 2013-22, August.
Cúrdia, V., and M. Woodford. 2011. “The Central-Bank Balance Sheet as an Instrument of Monetary Policy.” Journal of Monetary Economics 58, no. 1: 54–79.
De Haan, J., M. Hoeberichts, R. Maas, and F. Teppa. 2016. “Inflation in the Euro Area and Why It Matters.” DNB occasional study no. 3, 2016.
De Michelis, A., and M. Iacoviello. 2016. “Raising an Inflation Target: The Japanese Experience with Abenomics.” European Economic Review 88: 67–87.
Dräger, L. 2015. “Inflation Perceptions and Expectations in Sweden—Are Media Reports the Missing Link?” Oxford Bulletin of Economics and Statistics 77, no. 5: 681–700.
Eggertsson, G., and M. Woodford. 2003. “The Zero Bound on Interest Rates and Optimal Monetary Policy.” Brookings Papers on Economic Activity 1: 139–211.
Ehrmann, M. 2015. “Targeting Inflation from Below: How Do Inflation Expectations Behave?” International Journal of Central Banking 11, no. 4: 213–249.
Evans, G. W., and S. Honkapohja. 2009. “Learning and Macroeconomics.” Annual Review of Economics 1 (September): 421–449.
Femia, K., S. Friedman, and B. Sack. 2013. “The Effects of Policy Guidance on Perceptions of the Fed’s Reaction Function.” Federal Reserve Bank of New York staff report no. 652.
Filardo, A., and B. Hofmann. 2014. “Forward Guidance at the Zero Lower Bound.” BIS Quarterly Review (March): 37–53.
Fratzscher, M., M. Lo Duca, and R. Straub. 2014. “ECB Unconventional Monetary Policy Actions: Market Impact, International Spillovers and Transmission Channels.” International Monetary Fund 15th Jacques Polak Annual Research Conference, November 13–14.
Gagnon, J., M. Raskin, J. Remache, and B. Sack. 2011. “The Financial Market Effects of the Federal Reserve’s Large-Scale Asset Purchases.” International Journal of Central Banking 7, no. 1: 3–43.
Galati, G., Z. Gorgi, R. Moessner, and C. Zhou. 2016. “Deflation Risk in the Euro Area and Central Bank Credibility.” DNB working paper 509.
Galati, G., S. Poelhekke, and C. Zhou. 2011. “Did the Crisis Affect Inflation Expectations?” International Journal of Central Banking 7, no. 1: 167–207.
Gersbach, H., V. Hahn, and Y. Liu. 2015. “Forward Guidance Contracts.” CESifo working paper 5375.
Gertler, M., and P. Karadi. 2011. “A Model of Unconventional Monetary Policy.” Journal of Monetary Economics 58, no. 1: 17–34.
Greenwood, R., S. Hanson, and D. Vayanos. 2015. “Forward Guidance in the Yield Curve: Short Rates versus Bond Supply.” NBER working paper 21750.
Group of Thirty. 2015. Fundamentals of Central Banking: Lessons from the Crisis. Washington, D.C.: Group of Thirty.
Guidolin, M., and C. Neely.
2010. “The Effects of Large-Scale Asset Purchases on TIPS Inflation Expectations.” Federal Reserve Bank of St. Louis Economic Synopses 26.
Gürkaynak, R. S., B. P. Sack, and E. T. Swanson. 2005. “Do Actions Speak Louder Than Words? The Response of Asset Prices to Monetary Policy Actions and Statements.” International Journal of Central Banking 1, no. 1: 55–93.
Gürkaynak, R. S., A. Levin, A. Marder, and E. Swanson. 2007. “Inflation Targeting and the Anchoring of Inflation Expectations in the Western Hemisphere.” Federal Reserve Bank of San Francisco Economic Review: 25–47.
Haitsma, R., D. Unalmis, and J. de Haan. 2016. “The Impact of the ECB’s Conventional and Unconventional Monetary Policies on Stock Markets.” Journal of Macroeconomics 48: 101–116.
Hamilton, J. D., and J. C. Wu. 2012. “The Effectiveness of Alternative Monetary Policy Tools in a Zero Lower Bound Environment.” Journal of Money, Credit, and Banking 44, supp.: 3–46.
Hancock, D., and W. Passmore. 2011. “Did the Federal Reserve’s MBS Purchase Program Lower Mortgage Rates?” Journal of Monetary Economics 58, no. 5: 498–514.
Hansen, S., and M. McMahon. 2015. “Shocking Language: Understanding the Macroeconomic Effects of Central Bank Communication.” CEPR discussion paper 11018.
Hanson, S. G., and J. C. Stein. 2015. “Monetary Policy and Long-Term Real Rates.” Journal of Financial Economics 115, no. 3: 429–448.
Hattori, M., A. Schrimpf, and V. Sushko. 2016. “The Response of Tail Risk Perceptions to Unconventional Monetary Policy.” American Economic Journal: Macroeconomics 8, no. 2: 111–136.
Hayo, B., A. M. Kutan, and M. Neuenkirch. 2012. “Federal Reserve Communications and Emerging Equity Markets.” Southern Economic Journal 78, no. 3: 1041–1056.
Hayo, B., A. M. Kutan, and M. Neuenkirch. 2015. “Financial Market Reaction to Federal Reserve Communications: Does the Global Financial Crisis Make a Difference?” Empirica 42, no. 1: 185–203.
Hayo, B., and M. Neuenkirch. 2010. “Do Federal Reserve Communications Help Predict Federal Funds Target Rate Decisions?” Journal of Macroeconomics 32, no. 4: 1014–1024.
Hayo, B., and M. Neuenkirch. 2011. “Canadian Interest Rate Setting: The Information Content of Canadian and U.S. Central Bank Communication.” Southern Economic Journal 78, no. 1: 131–148.
Hayo, B., and M. Neuenkirch. 2012. “Bank of Canada Communication, Media Coverage, and Financial Market Reactions.” Economics Letters 115, no. 3: 369–372.
Hayo, B., and M. Neuenkirch. 2015a. “Central Bank Communication in the Financial Crisis: Evidence from a Survey of Financial Market Participants.” Journal of International Money and Finance 59: 166–181.
Hayo, B., and M. Neuenkirch. 2015b. “Self-monitoring or Reliance on Media Reporting: How Do Financial Market Participants Process Central Bank News?” Journal of Banking & Finance 59: 27–37.
He, Z., and A. Krishnamurthy. 2013. “Intermediary Asset Pricing.” American Economic Review 103, no. 2: 732–770.
He, Z. 2010. “Evaluating the Effect of the Bank of Canada’s Conditional Commitment Policy.” Bank of Canada discussion paper 2010-11.
Hofmann, B., and F. Zhu. 2013. “Central Bank Asset Purchases and Inflation Expectations.” BIS Quarterly Review (March): 23–35.
Hosono, K., and S. Isobe. 2014.
“The Financial Market Impact of Unconventional Monetary Policies in the U.S., the U.K., the Eurozone, and Japan.” PRI discussion paper no. 14A-05.
International Monetary Fund. 2013. “Unconventional Monetary Policies—Recent Experience and Prospects.” April 18.
International Monetary Fund. 2014. IMF country report 14/222.
Joyce, M. A. S., A. Lasaosa, I. Stevens, and M. Tong. 2011. “The Financial Market Impact of Quantitative Easing in the United Kingdom.” International Journal of Central Banking 7: 113–161.
Kontonikas, A., and A. Kostakis. 2013. “On Monetary Policy and Stock Market Anomalies.” Journal of Business Finance & Accounting 40, nos. 7/8: 1009–1042.
Kool, C. J. M., and D. L. Thornton. 2012. “How Effective Is Central Bank Forward Guidance?” Federal Reserve Bank of St. Louis working paper 2012-063A.
Krishnamurthy, A., and A. Vissing-Jorgensen. 2011. “The Effects of Quantitative Easing on Interest Rates.” Brookings Papers on Economic Activity 43, no. 2: 215–287.
Krishnamurthy, A., and A. Vissing-Jorgensen. 2013. “The Ins and Outs of LSAPs.” Jackson Hole Symposium, Federal Reserve Bank of Kansas City.
Kumar, S., H. Afrouzi, O. Coibion, and Y. Gorodnichenko. 2015. “Inflation Targeting Does Not Anchor Inflation Expectations: Evidence from Firms in New Zealand.” BPEA conference draft, September 10–11.
Lam, W. R. 2011. “Bank of Japan’s Monetary Easing Measures: Are They Powerful and Comprehensive?” IMF working paper 11/264.
Lamla, M. J., and S. M. Lein. 2011. “What Matters When? The Impact of ECB Communication on Financial Market Expectations.” Applied Economics 43, no. 28: 4289–4309.
Lamla, M. J., and S. M. Lein. 2014. “The Role of Media for Consumers’ Inflation Expectation Formation.” Journal of Economic Behavior & Organization 106 C: 62–77.
Lamla, M. J., and S. M. Lein. 2015. “Information Rigidities, Inflation Perceptions, and the Media: Lessons from the Euro Cash Changeover.” Economic Inquiry 53, no. 1: 9–22.
Lamla, M. J., and T. Maag. 2012. “The Role of Media for Inflation Forecast Disagreement of Households and Professional Forecasters.” Journal of Money, Credit and Banking 44, no. 7: 1325–1350.
Levin, A. T. 2014. “The Design and Communication of Systematic Monetary Policy Strategies.” Journal of Economic Dynamics & Control 49: 52–69.
Lewis, M., and C. J. McDermott. 2015. “New Zealand’s Experience with Changing Its Inflation Target and the Impact on Inflation Expectations.” Mimeo.
Lim, K.-P., and R. Brooks. 2011. “The Evolution of Stock Market Efficiency over Time: A Survey of the Empirical Literature.” Journal of Economic Surveys 25, no. 1: 69–108.
Maag, T. 2010. “Essays on Inflation Expectation Formation.” Dissertation, ETH Zurich.
Michaelis, H., and S. Watzka. 2016. “Are There Differences in the Effectiveness of Quantitative Easing in Japan over Time?” Munich discussion paper 2014-35.
Milani, F. 2012. “The Modeling of Expectations in Empirical DSGE Models: A Survey.” In DSGE Models in Macroeconomics: Estimation, Evaluation, and New Developments, edited by N. Balke, F. Canova, F. Milani, and M. A. Wynne, 3–38. Bingley: Emerald Group.
Miles, D. 2014. “What Is the Right Amount of Guidance? The Experience of the Bank of England with Forward Guidance.” Speech. De Nederlandsche Bank Annual Research Conference, Amsterdam, November 13.
Mishkin, F. 2004. “Can Central Bank Transparency Go Too Far?” In The Future of Inflation Targeting, edited by C. Kent and S.
Guttmann, 48–65. RBA Annual Conference Volume. Sydney: Reserve Bank of Australia.
Mishkin, F. S. 2011. “Monetary Policy Strategy: Lessons from the Crisis.” NBER working paper 16755.
Moessner, R. 2013. “Effects of Explicit FOMC Policy Rate Guidance on Interest Rate Expectations.” Economics Letters 121, no. 2: 170–173.
Moessner, R. 2014. “Effects of Explicit FOMC Policy Rate Guidance on Equities and Risk Measures.” Applied Economics 46, no. 18: 2139–2153.
Moessner, R. 2015a. “Reactions of Real Yields and Inflation Expectations to Forward Guidance in the United States.” Applied Economics 47, no. 26: 2671–2682.
Moessner, R. 2015b. “Reactions of US Government Bond Yields to Explicit FOMC Forward Guidance.” North American Journal of Economics and Finance 33: 217–233.
Moessner, R., D. Jansen, and J. de Haan. 2017. “Communication about Future Policy Rates in Theory and Practice: A Survey.” Journal of Economic Surveys 31, no. 3: 678–711.
Moessner, R., and W. Nelson. 2008. “Central Bank Policy Rate Guidance and Financial Market Functioning.” International Journal of Central Banking 4, no. 4: 193–226.
Nakazono, Y. 2016. “Inflation Expectations and Monetary Policy under Disagreements.” Bank of Japan working paper 16-E-1.
Neuenkirch, M. 2014. “Federal Reserve Communications and Newswire Coverage.” Applied Economics 46, no. 25: 3119–3129.
Okina, K., and S. Shiratsuka. 2004. “Policy Commitment and Expectation Formation: Japan’s Experience under Zero Interest Rates.” North American Journal of Economics and Finance 15: 75–100.
Poloz, S. S. 2015. “Central Bank Credibility and Policy Normalization.” Remarks at Canada-United Kingdom Chamber of Commerce, London, March 26.
Ranyard, R., F. Del Missier, N. Bonini, D. Duxbury, and B. Summers. 2008. “Perceptions and Expectations of Price Changes and Inflation: A Review and Conceptual Framework.” Journal of Economic Psychology 29: 378–400.
Raskin, M. 2013. “The Effects of the Federal Reserve’s Date-Based Forward Guidance.” Federal Reserve Board Finance and Economics Discussion Series 2013-37.
Reza, A., E. Santor, and L. Suchanek. 2015. “Quantitative Easing As a Policy Tool under the Effective Lower Bound.” Bank of Canada staff discussion paper 2015-14.
Rigobon, R., and B. Sack. 2004. “The Impact of Monetary Policy on Asset Prices.” Journal of Monetary Economics 51: 1553–1575.
Rogers, J. H., C. Scotti, and J. H. Wright. 2014. “Evaluating Asset-Market Effects of Unconventional Monetary Policy: A Multi-Country Review.” Economic Policy 29, no. 80: 749–799.
Rosa, C. 2011. “The Validity of the Event-Study Approach: Evidence from the Impact of the Fed’s Monetary Policy on US and Foreign Asset Prices.” Economica 78: 429–439.
Rosa, C. 2012. “How ‘Unconventional’ Are Large-Scale Asset Purchases? The Impact of Monetary Policy on Asset Prices.” Federal Reserve Bank of New York staff report 566.
Shirai, S. 2013. “Monetary Policy and Forward Guidance in Japan.” Speech. International Monetary Fund (September 19) and Board of Governors of the Federal Reserve System (September 20), Washington, D.C.
Siklos, P. L., and J.-E. Sturm, eds. 2013. Central Bank Communication, Decision Making, and Governance. Cambridge: MIT Press.
Sinha, A. 2015. “FOMC Forward Guidance and Investor Beliefs.” American Economic Review 105, no.
5: 656–661.
Smith, A., and T. Becker. 2015. “Has Forward Guidance Been Effective?” Federal Reserve Bank of Kansas City Economic Review (Third Quarter): 57–78.
Svensson, L. E. O. 2006. “The Instrument-Rate Projection under Inflation Targeting: The Norwegian Example.” In Stability and Economic Growth: The Role of Central Banks, 175–198. Mexico: Banco de Mexico.
Svensson, L. E. O. 2015. “Forward Guidance.” International Journal of Central Banking 11, supp. 1: 19–64.
Swanson, E. 2015. “Measuring the Effects of Unconventional Monetary Policy on Asset Prices.” NBER working paper 21816.
Swanson, E., and J. Williams. 2014. “Measuring the Effect of the Zero Lower Bound on Medium- and Longer-Term Interest Rates.” American Economic Review 104, no. 10: 3154–3185.
Ueda, K. 2012. “The Effectiveness of Non-traditional Monetary Policy Measures: The Case of the Bank of Japan.” Japanese Economic Review 63, no. 1: 1–22.
Unalmis, D., and I. Unalmis. 2015. “The Effects of Conventional and Unconventional Monetary Policy Surprises on Asset Markets in the United States.” MPRA paper 62585. http://mpra.ub.uni-muenchen.de/62585.
Van der Cruijsen, C., D. Jansen, and J. de Haan. 2015. “How Much Does the Public Know about the ECB’s Monetary Policy? Evidence from a Survey of Dutch Households.” International Journal of Central Banking 11, no. 4: 169–218.
Van der Cruijsen, C., and M. Demertzis. 2011. “How Anchored Are Inflation Expectations in EMU Countries?” Economic Modelling 28, nos. 1–2: 281–298.
Weale, M., and T. Wieladek. 2016. “What Are the Macroeconomic Effects of Asset Purchases?” Journal of Monetary Economics 79: 81–93.
Wieladek, T., and A. Garcia Pascual. 2016. “The European Central Bank’s QE: A New Hope.” CESifo working paper 5946.
Woodford, M. 2012. “Methods of Policy Accommodation at the Interest Rate Lower Bound.” Jackson Hole Symposium, Federal Reserve Bank of Kansas City.
Wright, J. 2011. “What Does Monetary Policy Do to Long-term Interest Rates at the Zero Lower Bound?” NBER working paper 17154.
Wu, T. 2014. “Unconventional Monetary Policy and Long-Term Interest Rates.” IMF working paper 14/189.
Appendix

Appendix Table 8.A.1 Summary of Empirical Research on Market Reactions to Forward Guidance (FG)

Okina and Shiratsuka 2004
Central bank: Bank of Japan
Period: March 1998–February 2003
Method: Case-study analysis of the short-term impact of the introduction of FG, using indicators for the shape of the yield curve derived from an estimated forward-rate curve.
Conclusion: FG was effective in stabilizing market expectations for the path of short-term interest rates, reducing longer-term interest rates and flattening the yield curve, but did not manage to reverse deflationary expectations.

Moessner and Nelson 2008
Central bank: Federal Reserve
Period: June 1998–August 2007, March 1994–July 2007
Method: Studies changes in the sensitivity of interest rates and implied volatilities to macroeconomic news.
Conclusion: Pre-crisis FG significantly increased the sensitivity of interest rates and implied volatilities to macroeconomic news.

He 2010
Central bank: Bank of Canada
Period: January 1991–March 2010
Method: VAR of monthly interest rates, unemployment, and inflation. Predictions of the VAR estimated on data until the introduction of FG (April 2009) are compared to actual data to examine whether model parameters have changed.
Conclusion: Canadian one-year Treasury bill rates and one-year-forward three-month rates have generally been lower than their model-implied values since April 2009. Canadian longer-term interest rates are also lower than their model-implied values, though the difference diminishes as the maturities become longer.

Campbell et al. 2012
Central bank: Federal Reserve
Period: February 1990–June 2007
Method: Regressions of daily changes in asset prices on two derived factors: a target factor and a path factor.
Conclusion: Significant effect of the path factor on US government bond yields and corporate bond yields.

Kool and Thornton 2012
Central bank: RBNZ, Norges Bank, Riksbank, Federal Reserve
Period: December 1994–August 2011
Method: Regressing the difference between consensus forecast errors and benchmark forecast errors regarding short- and long-run interest rates on a dummy indicating FG regimes.
Conclusion: FG improved market participants’ ability to forecast short-term rates over relatively short forecast horizons, but only for Norway and Sweden. A significant reduction in the cross-sectional variance of survey participants’ forecasts followed the adoption of forward guidance by the RBNZ, the Norges Bank, and the Riksbank, but not for the Federal Reserve.

Moessner 2013
Central bank: Federal Reserve
Method: Event-study methodology using daily data examining the impact of FG announcements on near- to medium-term interest-rate futures implied by Eurodollar contracts.
Conclusion: FG announcements significantly reduced implied interest rates at horizons of one to five years ahead, with the largest effect at the intermediate horizon of three years. This effect was not just due to associated asset purchase announcements. FG led to a significant reduction in the term spread, that is, to a flattening of the yield curve.

Raskin 2013
Central bank: Federal Reserve
Method: Studies changes in the sensitivity of short-term interest-rate expectations to economic news, using probability distributions of interest-rate expectations derived from interest-rate options.
Conclusion: The introduction of the FOMC’s time-dependent FG in August 2011 led to a significant reduction in the sensitivity of the risk-neutral percentiles six months to three years ahead to economic surprises.

Moessner 2014
Central bank: Federal Reserve
Method: Event-study methodology using daily data examining the impact of FG announcements on equity prices and several risk indicators.
Conclusion: Significant increase in equity prices and reduction in credit spreads and risk indicators such as the volatility index for US government bonds and US equity index risk reversals.

Swanson and Williams 2014
Central bank: Federal Reserve
Conclusion: Sensitivity to macroeconomic news of yields with maturities greater than one year was high from 2008 to 2010 but fell close to zero from late 2011, which may be partly due to the FOMC’s FG, in addition to the ELB.

Moessner 2015a
Central bank: Federal Reserve
Period: June 2004–February 2013
Conclusion: FG announcements led to a significant reduction in real yields 2–5 years ahead. By contrast, long-term break-even inflation rates were barely affected, suggesting that inflation expectations have remained well anchored.

Moessner 2015b
Central bank: Federal Reserve
Period: June 2004–June 2014
Conclusion
Event study methodology using daily data Open-ended and time-dependent FG announcements led examining impact of FG announcements to a significant reduction in forward US Treasury yields on US government bond yield curves. at a wide range of horizons, with the largest reduction occurring at the five-year-ahead horizon. By contrast, FG announcements containing state contingency led to a significant increase in forward US Treasury yields for horizons of three to seven years ahead.
Event study methodology using daily data examining impact of FG announcements on real interest rates and break-even US Treasury yield curves.
January 1990–December 2012 Study effect of the ELB on sensitivity of interest rates to macroeconomic news using daily data.
June 2004–February 2013
January 200–December 2012
June 2004–February 2013
Period
Appendix Table 8.A.1 Continued
Federal Reserve
Federal Reserve
Federal Reserve
Federal Reserve
Federal Reserve
Smith and Becker 2015
Sinha 2015
Hansen and McMahon 2015
Hanson and Stein 2015
Hattori et al. 2016
VAR model including federal funds futures and macroeconomic variables using monthly data.
Regressions of daily changes in asset prices on two derived factors, interpreted as forward guidance and LSAP factors.
January 2008–May 2014
January 1999–February 2012
Event study regressions using daily data derived from equity index options and interest-rate swaptions.
Event study regressions using changes in the two-year nominal yield on FOMC announcement days as a proxy for changes in expectations regarding the path of the federal funds rate.
January 1998–December 2014 FAVAR model to study effects of FOMC FG quantified using tools from computational linguistics on market prices.
January 2012–December 2013 Uses event study regressions to examine the effects of FOMC FG on higher moments of the risk-neutral probability densities derived from options on US Treasury yields.
December 2008–December 2014
January 2009–June 2015
Source: Moessner, Jansen, and de Haan 2017.
Federal Reserve
Swanson 2015
Unconventional monetary policy announcements substantially reduced option-implied equity market and interest-rate tail risks. Most of the impact was due to FG rather than asset purchase announcements.
A 100-basis-points increase in the two-year nominal yield on an FOMC announcement day is associated with a 42- basis-points increase in the ten-year-forward real rate.
Shocks to FG are more important than the FOMC communication of current economic conditions in terms of their effects on market and real variables, but neither communication has strong effects on real economic variables.
FOMC FG affected higher moments of the risk-neutral probability densities derived from options on US Treasury yields.
Effects of an FG shock on the expected federal funds rate twelve months ahead are significant for the first twenty months; FG shocks that imply a lower expected path of the federal funds rate decrease the slope of the expected funds rate curve.
FG has relatively small effects on the longest-maturity Treasury yields and essentially no effect on corporate bond yields. It has significant effects on medium-term Treasury yields, stock prices, and exchange rates.
Appendix Table 8.A.2 Studies on Effects of Communication of Unconventional Monetary Policies on Financial Markets

Gagnon et al. 2011. Central bank and programs: Fed LSAP. Method: Time-series and event-study methodologies. Impact on: Long-term interest rates. Period: December 1991–end of 2009. Conclusion: The LSAP program lowered long-term interest rates and, hence, reduced the long-term premium. The result suggests that monetary policy is effective even after the zero bound is reached.

Joyce et al. 2011. Central bank and programs: BoE QE. Method: Event-study methods and time-series econometric methods (portfolio choice model, multivariate GARCH-in-mean model). Impact on: UK asset prices. Period: 1999–2007. Conclusion: QE purchases led to significantly lower medium- to long-term government bond yields. The estimates suggest that the effect came mainly through a portfolio balance effect.

Bauer and Rudebusch 2014. Central bank and programs: Fed LSAP. Method: Model-free analysis and dynamic term structure models. Impact on: Expected future short-term interest rates. Period: July 2000–August 2010. Conclusion: The results suggest that bond purchases have important signaling effects that lower expected future short-term interest rates.

Lam 2011. Central bank and programs: BoJ comprehensive monetary easing. Method: Event-study approach. Impact on: Financial markets. Period: January 2005–mid-2011. Conclusion: Monetary easing measures had a statistically significant impact on lowering bond yields and improving equity prices but no notable impact on inflation expectations.

Rosa 2011. Central bank and programs: Fed monetary policy. Method: Event-study approach. Impact on: Asset prices. Period: December 1990–June 2007. Conclusion: US monetary policy shocks affect both American and foreign three-month and one-year interest rates and stock prices, while the impact is less profound on exchange rates. Further, the response patterns across countries differ in both magnitude and statistical significance.

Hancock and Passmore 2011. Central bank and programs: Fed MBS purchase. Method: Empirical pricing models using time-series regression. Impact on: Mortgage rates. Period: November 2008–March 2010. Conclusion: The program stabilized the mortgage market, creating downward pressure on mortgage rates. Also, the results suggest that half of the declines in mortgage rates were associated with improved market functioning and clearer government backing, the other half with portfolio rebalancing.

Krishnamurthy and Vissing-Jorgensen 2011. Central bank and programs: Fed QE1 in 2008–2009, QE2 in 2010–2011. Method: Event-study approach. Impact on: Interest rates. Period: 2008–2011. Conclusion: QE significantly lowered nominal interest rates on Treasuries, agencies, corporate bonds, and MBSs but at differing magnitudes across bond types and maturities as well as across QE1 and QE2.

Christensen and Rudebusch 2012. Central bank and programs: Fed, BoE QE. Method: Dynamic term structure models. Impact on: Government bond yields. Period: January 1998–December 2010. Conclusion: In the United States, lower expectations of future short-term interest rates led to the declines in US yields. However, in the United Kingdom, yields appeared to reflect reduced term premiums. Hence, the effect of QE depends on market institutional structures and central bank communication policies.

Rosa 2012. Central bank and programs: Fed LSAP announcements. Method: Event-study approach. Impact on: US asset prices. Period: August 2007–June 2011. Conclusion: LSAP news has economically large and highly significant effects on asset prices. Further, the effects are not statistically different from those of an unanticipated cut in the Fed funds target rate.

Ueda 2012. Central bank and programs: BoJ and other G7 central banks, nontraditional monetary policy. Method: Event-study approach. Impact on: Asset prices. Period: March 1999–March 2011. Conclusion: Many measures of nontraditional monetary policy have an impact on asset prices in the expected directions. However, the measures have failed to stop the deflationary trend of the Japanese economy.

Hamilton and Wu 2012. Central bank and programs: Fed, altering maturity structure of Treasury debt. Method: Affine term structure model (ATSM). Impact on: Term structures of interest rates. Period: January 1990–January 2011. Conclusion: The maturity structure of Treasury debt held by the public is significantly related to the behavior of US interest rates. The relation is dependent on whether the economy is at the ELB or in a normal environment.

Hosono and Isobe 2014. Central bank and programs: Fed, BoE, ECB, BoJ unconventional monetary policy. Method: Event-study methodology. Impact on: Returns on assets. Period: January 2001–April 2013. Conclusion: Findings suggest that unconventional monetary policies lower long-term government bond yields and the exchange rate of the home currency. Further influences are found on corporate bond spreads, interbank loan spreads, and stock prices for some economies.

Wu 2014. Central bank and programs: Fed QE1, QE2, Operation Twist, QE3, FG. Method: Model for the term structure in which real-time expectations about Fed holdings are included. Impact on: Bond yields. Period: January 1992–September 2013. Conclusion: Purchases have a significantly negative effect through the portfolio rebalance effect, and this effect is reinforced by FG. Furthermore, LSAP enhanced FG through a signaling effect.

Rogers, Scotti, and Wright 2014. Central bank and programs: Fed, BoE, ECB, BoJ unconventional monetary policy. Method: Event-study approach and Rigobon-Sack approach. Impact on: Bond yields, stock prices, exchange rates. Period: January 2000–April 2014. Conclusion: Policies are effective in easing financial conditions when policy rates are at the ZLB.

Unalmis and Unalmis 2015. Central bank and programs: Fed, conventional and unconventional monetary policy surprises. Method: Heteroscedasticity-based GMM. Impact on: Asset markets. Period: January 1994–June 2014. Conclusion: Monetary policy surprises have statistically significant effects on asset markets. However, the magnitudes of responses differ notably between the unconventional and conventional periods.

Altavilla, Carboni, and Motto 2015. Central bank and programs: ECB QE. Method: Event-study approach. Impact on: Asset markets. Period: September 2014–March 2015. Conclusion: The ECB asset purchase program has significantly lowered yields for a broad set of market segments, with effects that generally rise with the maturity and riskiness of assets. Although low financial distress weakened the local supply channel, it did reinforce the duration and credit channels because of its interplay with the asset composition of the program.

Haitsma, Unalmis, and de Haan 2016. Central bank and programs: ECB, all unconventional and conventional policies. Method: Linear regression analysis. Impact on: Stock prices and portfolios of stocks. Period: January 4, 1999–February 27, 2015. Conclusion: Especially unconventional monetary policy increased the EURO STOXX 50 index. There is evidence that stock prices of sectors that depend on bank credit are more strongly affected.
Chapter 9

Central Bank Communications: A Case Study

J. Scott Davis and Mark A. Wynne
9.1 Introduction

In 1981, Karl Brunner wrote that "Central Banking . . . thrives on a pervasive impression that [it] is an esoteric art. Access to this art and its proper execution is confined to the initiated elite. The esoteric nature of the art is moreover revealed by an inherent impossibility to articulate its insights in explicit and intelligible words and sentences" (Brunner 1981, cited in Goodfriend 1986). It is probably fair to say that this view characterized the conventional wisdom within the Federal Reserve System and most other central banks well into the 1980s. But over the past two decades, there have been major changes in central bank communications with financial markets and the general public. The Federal Reserve System has not always been a leader in adopting new communications practices, but since the onset of the global financial crisis of 2007–2008, and especially since conventional interest-rate policy ran up against the zero lower bound (ZLB) in December 2008, communications about monetary policy have taken on added importance. This chapter focuses on just one aspect of Federal Reserve communications about monetary policy, namely, the postmeeting statement released by the Federal Open Market Committee (FOMC) after each meeting. We document how this statement has evolved under the chairmanships of Alan Greenspan and Ben Bernanke. It was not until 1994 that the FOMC released a statement immediately after a policy meeting. Prior to 1994, analysts and the general public had to wait until after the subsequent meeting of the FOMC to know what the policy directive was. Originally, statements were issued only when policy (as measured by the level of the federal funds rate) changed. Since 1999, the FOMC has issued a statement after every scheduled meeting, regardless of whether the stance of policy has changed. We will show how the statement has grown longer and more detailed over time, especially after the federal funds rate was lowered to its effective lower bound (ELB) in December 2008, where it remained until December 2015. We contrast the evolution of the FOMC's postmeeting statement with the evolution of the press releases that the Bank of Canada started issuing in 1997 to announce changes in policy. The trend toward longer statements is less pronounced in the case of Canada. We also show that the Bank of Canada's policy statements are easier to understand and speculate that this may be because the Bank of Canada spent less time at the ELB for its main policy rate than the FOMC and thus engaged less in the way of unconventional monetary policy. We then go on to show how, as the FOMC's policy statement has gotten longer and more complex, its impact has increased. We estimate a daily series of US monetary policy shocks using daily financial-market data and a VAR with sign restrictions. With this, we can identify the monetary policy shock on the day a statement was released. We show how the absolute size of these monetary policy shocks on statement days has increased in the ZLB period, as the length and information content of the statement have increased.

We thank the editors and anonymous referees for helpful comments on the previous draft. We also thank Valerie Grossman, Adrienne Mack, and Kelvin Virdi for research assistance with this project at various stages and Christopher Koch and María Teresa Martínez-García for helpful conversations. The views in this chapter are those of the authors and do not necessarily reflect the views of the Federal Reserve Bank of Dallas or the Federal Reserve System.
Furthermore, we show that after controlling for the actual policy change announced in the statement, the length and complexity of the statement have an effect on the financial-market impact of a given Fed policy change. Thus, over time, as Fed statements have gotten longer and more complex, their market-moving impact has increased. This chapter charts the evolution of the postmeeting Fed statement from the first one issued in 1994 to the end of Bernanke's tenure as chairman. As noted above, the statements have gotten longer and more complex over time as the amount of information contained in the statement has increased and the FOMC wrestled with unconventional monetary policy in the form of large-scale asset purchases (LSAPs) and forward guidance (FG). Originally, the statement contained just a vague description of Fed policy actions, but now it contains an assessment of the Fed's outlook for the economy, the balance of risks, and a forecast, if not an outright commitment, on future policy actions. Thus, over the last twenty years, and especially during the ZLB period, the Fed has begun to use the postmeeting statement itself as an instrument with which it can influence and move markets.
9.2 Background

There is a rich literature on the economic theory of why central banks should be more transparent and communicate with financial markets and the general public. One of
the seminal articles was by Goodfriend (1986), who critically evaluated some of the arguments in defense of secrecy in the conduct of monetary policy put forward by the Federal Reserve in response to a Freedom of Information Act suit in 1975. Goodfriend concluded that there were indeed conditions under which central bank secrecy might be desirable but that the theoretical arguments in defense of secrecy were at best inconclusive. Along with the presumption that government secrecy was anathema to the healthy functioning of democracy, Goodfriend argued that "further work is required to demonstrate that central bank secrecy is socially optimal" (1986, 90). By the 1990s, inflation targeting was becoming a popular framework for the conduct of monetary policy, and Bernanke et al. (1999) highlighted the importance of communication as part of an inflation-targeting strategy. Guthrie and Wright (2000) argued that in the case of the Reserve Bank of New Zealand—which had pioneered the use of an inflation-targeting strategy in 1990—its credibility was such that by simply communicating where it wished short-term interest rates to be, it could elicit a market response without actually undertaking any open market operations to back up its words. Blinder et al. (2001) reviewed existing practice on central bank communications around the turn of the century. They take as their point of departure the principle that when it comes to central banking communication, "full transparency ought to be the presumption, with a few conspicuous exceptions" (xvii), with the case for transparency resting on arguments for democratic accountability and policy effectiveness. They review the communications practices of a number of central banks, focusing on such things as the postmeeting statement, publication of meeting minutes and transcripts, publication of forecasts, and so on.
Amato, Morris, and Shin (2002) showed that it is possible for a central bank to be too transparent, especially when public information serves the dual role of conveying fundamental information and serving as a focal point for better coordination, reminding us that the optimal amount of transparency (to the extent that it can be quantified) is somewhere between zero and 100 percent.1 Friedman (2003) critically reviews the language used by advocates of inflation targeting and argues, “By forcing participants in the monetary policy debate to conduct the discussion in a vocabulary pertaining solely to inflation, inflation targeting fosters over time the atrophication of concerns for real outcomes” (122). He sees this as yet another example of the familiar principle that “the language in which . . . debate takes place exerts a powerful influence over the substance of what participants say, and eventually even over what they think” (114). Blinder et al. (2008) surveyed the by then large literature on central bank communication. Noting the wide variation on communication practices across the world’s central banks, they concluded that “a consensus has yet to emerge on what constitutes an optimal communication strategy” (910). Ehrmann and Fratzscher (2005; 2007; 2013) seek to quantify various aspects of how central banks communicate. They have examined communications via speeches by policy committee members and postmeeting press conferences and document how certain features of such communications—such as the extent to which committee members agree on the economic outlook in their speeches or provide similar outlooks for monetary policy—affect the predictability of monetary policy
decisions. Berger, Ehrmann, and Fratzscher (2011) examine how communications by the European Central Bank are filtered through the media and document how various factors, such as the inflation environment or the extent to which a decision is unanticipated, impact the tone of media coverage. Finally, Myatt and Wallace (2014) show that under certain circumstances, a central bank may choose to limit the clarity of its communications with the general public so as to better achieve an output stabilization objective. This chapter approaches the question of central bank communication from a somewhat different angle. Our interest is in attempting to quantify the information content in the texts of one specific communication tool—specifically, the postmeeting statement—used by the FOMC to explain its thinking to the general public. There is a small literature that has attempted similar exercises in the past. Seminal work done at the Federal Reserve Bank of Dallas many years ago attempted to evaluate the information content of the Beige Book published by the Federal Reserve System about two weeks prior to each FOMC meeting. Balke and Petersen (2002) quantified the qualitative information in each Beige Book by assigning numerical scores to comments about developments in real activity and inflation and examined the ability of the Beige Book to predict real economic activity. They coded the qualitative information by reading each Beige Book and assigning scores ranging from -2 to 2 to various words and phrases. For example, text describing "moderate" or "normal" economic growth was typically scored a 0.5, while text describing "strong" economic activity might rate a score of 1.0 or 1.5. Armesto et al.
(2009) built on that work by using software developed by linguists to automate the quantification of the qualitative information in the Beige Book, specifically to generate "systematic measures of the levels of optimism and pessimism contained in the narrative of the Beige Book" (41). They essentially confirmed the earlier findings of Balke and Petersen using more sophisticated econometric techniques. Fulmer (2013) performs a similar exercise using slightly different linguistic software (the Stanford Parser) and reaches similar conclusions (the Beige Book has some predictive power for future economic activity but is best suited as a proxy for current-quarter economic activity). And finally, Bligh and Hess (2007) examined how the language in Fed Chairman Greenspan's speeches and testimonies evolved in response to changing economic conditions using computerized content analysis (specifically, the Diction 5.0 program of Hart 2000 that is also used in Armesto et al. 2009).
9.3 The Postmeeting Statement

FOMC communications come in many forms, from the postmeeting statement to meeting minutes and transcripts, postmeeting press conferences, the survey of economic projections, congressional testimony by the chair, and speeches and interviews by the various reserve bank presidents and members of the Board of Governors. In this
chapter, we focus only on the postmeeting statement. As noted in the introduction, it was not until February 1994 that the FOMC started issuing a statement after each regularly scheduled FOMC meeting. The first statement was just ninety-nine words long, in marked contrast to what was to come later. There was no reference to a specific level of interest rates, to specific indicators of economic activity, or to where policy might be headed in the future. A specific value for the federal funds rate was not mentioned in the initial statements. Rather, the language was cast in terms of pressure on reserve positions or in terms of seeing increases in the discount rate fully reflected in interest rates in the federal funds market. In February 1995, the committee announced that all changes in the stance of monetary policy would be announced after the meeting. It was not until July 1995, when the committee had shifted to easing the stance of monetary policy, that mention was made of a specific level of the federal funds rate, when a decision to ease monetary policy was expected to be "reflected in a 25 basis point decline in the federal funds rate from about 6% to about 5¾%." Through 1996, 1997, and 1998, the FOMC referred to target levels for the funds rate of "around" or "about" a particular value and only issued statements following decisions to change the funds rate. Starting with the May 1999 meeting, the FOMC announced that it had made no change in policy and initiated the practice of releasing a statement after every meeting regardless of whether the stance of policy had changed. Starting in June 1999, it began to refer to specific "target" levels for the funds rate. Also starting in 1999, the committee began to issue FG of a sort in the form of an assessment of the perceived risks going forward.
By 2003, the committee was worried about inflation getting too low and lowered the federal funds rate to the then-unprecedented level of 1 percent. Starting with the August 2003 meeting, the committee again began issuing FG about the stance of policy, noting that it believed that “policy accommodation can be maintained for a considerable period.” In January 2004, the committee changed this language to state that it believed that it could “be patient in removing its policy accommodation.” In May 2004, it altered the language once again to state that “policy accommodation can be removed at a pace that is likely to be measured” and then raised the federal funds rate by 25 basis points at its June meeting. The committee raised the funds rate by 25 basis points at each of the subsequent sixteen meetings. In response to deteriorating economic conditions as a result of the financial crisis, on December 16, 2008, the FOMC lowered the target for the federal funds rate to a range of 0 to 0.25 percent and noted that “weak economic conditions are likely to warrant exceptionally low levels of the federal funds rate for some time.” This language was subsequently altered to read “exceptionally low levels of the federal funds rate for an extended period.” Thus began the period of unconventional monetary policy. The expectation that the federal funds rate would remain at an exceptionally low level for an extended period was included in every statement through August 2011, when the language was altered to “at least through mid-2013.” By January 2012, this was changed to “at least through late 2014,” and by September, it had been pushed back to “at least through mid-2015.” But at the December 2012 meeting, the committee made a radical change in how it communicated its intentions. The committee adopted language stating
that it anticipates that "this exceptionally low range for the federal funds rate will be appropriate at least as long as the unemployment rate remains above 6½ percent, inflation between one and two years ahead is projected to be no more than a half-percentage point above the Committee's 2 percent longer-run goal, and longer-term inflation expectations continue to be well anchored." Thus, over the course of two decades, the committee has gone from vague language about "pressure on reserve positions" to concrete statements about its target range for the federal funds rate and the specific economic developments that will prompt a change in that target in the future.
9.4 Text Analysis

For this project, we examined the texts of the 131 FOMC postmeeting statements issued by the FOMC under chairs Greenspan and Bernanke. Figure 9.1 plots the evolution of the length of the postmeeting statement (as measured by the number of words in the statement) under Greenspan and Bernanke. The first vertical line in the figure marks the end of Greenspan's term as chairman in January 2006 and the transition to Bernanke's term. The second vertical line marks the start of the ZLB period in December 2008 that continued through the end of Bernanke's term in January 2014. As noted, the statement issued on February 4, 1994, was just ninety-nine words long. The statement issued on January 29, 2014, was 841 words long, only slightly more concise than the record 878 words in the statement released on December 18, 2013.

Figure 9.1 Word Count of FOMC Statements

Figure 9.1 shows a clear trend toward wordier statements, with the trend intensifying not with the change of chairman in 2006 but rather when the FOMC ran up against the ZLB on interest rates in December 2008, when it lowered the target level of the federal funds rate to the 0-to-25-basis-point range, where it stayed until December 2015. For the period from December 2008 through the end of Bernanke's tenure in January 2014, the FOMC has relied on unconventional monetary policy—specifically, LSAPs and FG—to support economic activity. There are a number of different approaches to assessing the ease of understanding a text. The (relatively) well-known Flesch-Kincaid indexes (see Flesch 1948 and Kincaid et al. 1975) were developed to quantify the difficulty of understanding a text written in contemporary English. The Flesch reading ease score is based on the number of syllables, words, and sentences in a text. The higher the score (which typically falls in a range of 0 to 100), the greater the reading ease. The Flesch-Kincaid grade-level formula simply translates the Flesch score into a US grade-level equivalent.2 The grade-level equivalent measure of each FOMC statement through Greenspan and Bernanke's tenures is presented in figure 9.2. Again, the vertical line in the figure represents the transition from Greenspan to Bernanke in 2006, and the shaded area denotes the ZLB era that began in 2008. There was considerable variation in the grade level of the statements during the Greenspan Fed, with some statements at the reading level of a college sophomore and some at the level of a Ph.D. student.3 Statements from the Bernanke Fed started at the level of a college sophomore or junior, but beginning with the financial crisis and then when the Fed hit the ZLB, there was a sharp and nearly monotonic increase in the grade level of the statement.
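The Flesch-Kincaid measures are simple functions of sentence, word, and syllable counts. A minimal sketch of the standard formulas, using a naive vowel-group syllable counter in place of the pronunciation dictionary a production implementation would use, might look like:

```python
import re

def count_syllables(word: str) -> int:
    """Crude vowel-group syllable counter (an approximation; a real
    implementation would consult a pronunciation dictionary)."""
    word = word.lower()
    n = len(re.findall(r"[aeiouy]+", word))
    # Drop a silent final 'e' (e.g., "because"), keeping at least one syllable.
    if word.endswith("e") and n > 1 and not word.endswith(("le", "ee")):
        n -= 1
    return max(n, 1)

def flesch_kincaid(text: str) -> tuple[float, float]:
    """Return (reading ease, grade level) using the Flesch (1948) and
    Kincaid et al. (1975) formulas cited in the text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / len(sentences)      # average words per sentence
    spw = syllables / len(words)           # average syllables per word
    ease = 206.835 - 1.015 * wps - 84.6 * spw
    grade = 0.39 * wps + 11.8 * spw - 15.59
    return ease, grade
```

Applied to a very simple sentence such as "The cat sat on the mat.", the sketch yields a reading ease above 100 and a grade level below zero; long sentences with many polysyllabic words, as in the later FOMC statements, push the grade level up.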
The grade-level equivalent measure topped out at 21, the level of 22 21 20 19 18 17 16 15 14 13 12 1994
1996
1998
2000
2002
2004
2006
2008
Figure 9.2 Flesch-Kincaid Grade Level of FOMC Statement
2010
2012
2014
270 Davis and Wynne Table 9.1 Descriptive Statistics of FOMC Statements Word count
Flesch-Kincaid grade level
Greenspan (pre-May 1999)
122
15.0
Greenspan (post-May 1999)
217
16.5
Bernanke (pre-ZLB)
246
14.7
Bernanke (ZLB)
529
17.7
a fifth-year Ph.D. student, in September 2013, when the Fed surprised markets by not starting to taper the quantitative easing (QE) program. Table 9.1 reports the average word length and grade-level equivalent measure of the statement during various subperiods. As in the figures, we distinguish between the statements issued under Greenspan and Bernanke. We distinguish between the statements issued under Greenspan before May 1999 and those issued after May 1999. Recall that it was at the May 1999 meeting that the FOMC decided to issue a statement after every meeting regardless of whether the stance of policy had changed, and in 1999, the Fed began including its assessment of the balance of risks in the statement. We also distinguish between the statements issued under Bernanke in the period before the FOMC ran up against the ZLB and those in the period since (i.e., since December 2008), when it has been engaged in unconventional monetary policy. The average statement has gone from 120 words long in the pre-1999 era to 530 words long in the ZLB era. In addition, the grade-level equivalent measure shows that the statement has advanced from the reading level of an upper-level undergraduate before the ZLB period to that of a master’s degree student after the ZLB was reached. Table 9.2 reports some simple tests for differences of means to show that the differences are statistically significant. The cells above the diagonal report the mean difference in word length of the postmeeting statement between the period indicated in the column and that indicated in the row. The tests show that most differences in word count and grade-level measures across the subperiods are statistically significant. This is especially true for the statements in the ZLB period, which are clearly longer and far more technical and complex than Fed statements in the pre-ZLB period.4 In short, the Fed statements are getting longer and more complex. 
As we will now argue, this is a sign that the Fed statement contains more information. The evolution of the Fed statement over the last twenty years shows that it has gone from a vague description of past Fed policy actions to a clear description of the Fed's outlook for the economy and a forecast of, if not an outright commitment to, future policy actions. In the next section, we show how this trend toward releasing more information in the statement is associated with greater financial-market responses on the day the statement is released.
Central Bank Communications: A Case Study 271

Table 9.2 Differences in Statistics Describing FOMC Statements

                            Greenspan        Greenspan         Bernanke    Bernanke
                            (pre-May 1999)   (post-May 1999)   (pre-ZLB)   (ZLB)
Greenspan (pre-May 1999)                     95***             124***      407***
Greenspan (post-May 1999)   −1.5***                            29**        312***
Bernanke (pre-ZLB)          0.3              1.9***                        282***
Bernanke (ZLB)              −2.7***          −1.2***           −3.1***

Numbers in cells above the diagonal are the difference in mean word count between the period indicated in the column and the period indicated in the row. For example, the average FOMC postmeeting statement issued during the early part of Greenspan's tenure was 407 words shorter than the average FOMC postmeeting statement issued during Bernanke's tenure after December 2008 (the ZLB period). Numbers in cells below the diagonal are the mean difference in Flesch-Kincaid grade-level readability. For example, the readability of the average FOMC postmeeting statement issued during the ZLB period during Bernanke's tenure was 2.7 grade levels above the grade-level readability of the average FOMC postmeeting statement issued during Greenspan's tenure prior to May 1999 (calculated as 15 − 17.7 = −2.7). * denotes significance at the 10 percent level; ** denotes significance at the 5 percent level; *** denotes significance at the 1 percent level.
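The stars in table 9.2 come from two-sample tests for differences of subperiod means. The chapter does not specify the exact test statistic, so the NumPy sketch below uses Welch's unequal-variance t statistic as one plausible choice; the function name is ours.

```python
import numpy as np

def welch_t(a, b):
    """Welch two-sample t statistic and Satterthwaite degrees of freedom
    for a difference of means (one plausible version of the tests behind
    table 9.2; the chapter does not say whether equal variances were
    assumed)."""
    ma, mb = np.mean(a), np.mean(b)
    # Per-sample variance of the mean.
    va = np.var(a, ddof=1) / len(a)
    vb = np.var(b, ddof=1) / len(b)
    t = (ma - mb) / np.sqrt(va + vb)
    # Welch-Satterthwaite approximation to the degrees of freedom.
    df = (va + vb) ** 2 / (va ** 2 / (len(a) - 1) + vb ** 2 / (len(b) - 1))
    return t, df
```

Fed to the subperiod word counts or grade levels, large |t| values relative to the t distribution with df degrees of freedom would generate the significance stars shown above.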
9.5 An International Perspective

Before proceeding to the detailed econometric analysis, it is worth pausing to get some comparative perspective on the characteristics of monetary policy statements. Each central bank has its own unique way of formulating and communicating monetary policy decisions. Mandates and decision-making processes differ across central banks, as do the means of communicating decisions. We decided to look at the recent record of one other central bank in the English-speaking world, the Bank of Canada (BoC), to see how its policy statement (or, rather, the press release announcing its policy decisions) has evolved over nearly two decades. The BoC started issuing press releases announcing monetary policy decisions in June 1997. The first statement was 128 words long and had a Flesch-Kincaid grade-level readability of 12.4. The first press release was issued at the beginning of a policy-tightening cycle (as was the case with the first postmeeting statement by the FOMC). Figure 9.3 shows how the word count of the press release has evolved over time through the end of 2015.5 The figure also includes three vertical reference lines demarcating the tenures of the four governors of the BoC over this period. The tenure of Governor Carney includes the BoC's short experience with the ZLB. We date that episode as beginning
272 Davis and Wynne

in April 2009, when the target for the overnight rate was lowered to 0.25 percent, which the bank judged to be the ELB for that rate. The overnight rate was maintained at 0.25 percent until June 2010, when it was raised to 0.5 percent. The tightening cycle that began in June 2010 lasted until September 2010, by which time the rate had been raised to 1 percent, where it remained until January 2015, when it was lowered to 0.75 percent in response to the sharp drop in oil prices. Subsequent policy action took the rate to 0.5 percent, where it remained through the end of 2015.

Note that, as with the FOMC, the BoC initially only issued a press release when it changed the stance of monetary policy. The first press release announcing no change in monetary policy was issued in December 2000. Note also that prior to 2000, the timing of policy decisions and announcements was not set in advance. In September 2000, the BoC announced that it would make policy decisions and announcements eight times a year on a prespecified schedule, similar to that followed by the FOMC.

Comparing the pattern in figure 9.3 with that in figure 9.1, we see a general tendency toward longer policy statements (as with the FOMC), but the tendency is not as pronounced. The longest statement in our sample was 717 words long and issued in October 2011. The longest statement issued by the FOMC was the 878-word-long statement issued in December 2013. Note also that the statements are not notably longer during the period when the BoC was up against the ELB on its policy rate. This period lasted only from April 2009 through June 2010.
At the time the bank lowered its policy rate to its ELB, it also adopted (date-based) FG, noting, "Conditional on the outlook for inflation, the target overnight rate can be expected to remain at its current level until the end of the second quarter of 2010 in order to achieve the inflation target." This language was included in seven subsequent press releases (through the March 2010 release) and was

[Figure 9.3 Word Count of Bank of Canada Statements, 1997-2015. Vertical reference lines mark the tenures of Governors Thiessen, Dodge, Carney, and Poloz.]
not included in the April 2010 release. In June 2010, the bank raised its policy rate to 0.5 percent. Unlike the Fed, the BoC never engaged in QE when it hit the ELB on its policy rate, relying exclusively on FG to provide additional stimulus.6 Also, the BoC was up against the ELB on its policy rate for a much shorter period of time than was the Fed.

Of perhaps greater interest is the fact that the BoC's policy statement did not seem to become more complex over time, even as it engaged in unconventional monetary policy. The Flesch-Kincaid grade-level readability of the text of the policy press release is shown in figure 9.4 and averaged 14.6 over the sample (with a standard deviation of 1.2 grade levels), around the equivalent of a college junior. Contrast this with the pattern shown in figure 9.2, where the Flesch-Kincaid readability of the postmeeting FOMC statement exceeded 20 by 2013.

Tables 9.3 and 9.4 provide some simple statistics on average word count and grade-level readability under the four governors who ran the BoC during our sample period. In addition to providing statistics on Mark Carney's full term as governor, we also distinguish the period during his governorship when the bank was up against the ELB on its policy rate. Two things stand out in table 9.4. First, the most statistically significant changes in the grade-level readability of the statement occurred under Stephen Poloz: the readability of the policy press release is more than one grade level lower (i.e., more readable) during his tenure than under the tenures of his three predecessors. Second, the most statistically significant changes in the word count occur between the governorships of Gordon Thiessen and David Dodge and their successors.
[Figure 9.4 Flesch-Kincaid Grade Level of Bank of Canada Statements, 1997-2015. Vertical reference lines mark the tenures of Governors Thiessen, Dodge, Carney, and Poloz.]
Table 9.3 Descriptive Statistics of Bank of Canada Policy Statements

                    Word count   Flesch-Kincaid grade level
Thiessen            207          14.6
Dodge               298          14.6
Carney              451          15.1
Carney (pre-ZLB)    461          15.1
Carney (ZLB)        420          15.0
Poloz               422          13.5
Note: Statistics for Poloz are through December 2015 only.
Table 9.4 Differences in Statistics Describing Bank of Canada Policy Statements

                   Thiessen   Dodge     Carney    Carney      Carney    Poloz
                                                  (pre-ZLB)   (ZLB)
Thiessen                      91***     244***    254***      213***    215***
Dodge              0.0                  153***    163***      122***    124***
Carney             −0.5       −0.5**              10          −31       −29
Carney (pre-ZLB)   −0.5       −0.5**    0.0                   −41       −39
Carney (ZLB)       −0.4       −0.4      0.1       0.1                   2
Poloz              1.1***     1.1***    1.6***    1.6***      1.5***

Note: Numbers in cells above the diagonal are the difference in mean word count between the period indicated in the column and the period indicated in the row. For example, the average policy statement issued during the entirety of Carney's tenure was 244 words longer than the average policy statement issued during Thiessen's tenure. Numbers in cells below the diagonal are the mean difference in Flesch-Kincaid grade-level readability between the period indicated in the column and the period indicated in the row. For example, the readability of the average policy statement issued during the ZLB period under Carney was 0.4 grade levels above the grade-level readability of the average policy statement issued during Thiessen's tenure. * denotes significance at the 10 percent level; ** denotes significance at the 5 percent level; *** denotes significance at the 1 percent level.
9.6 Textual Complexity and Monetary Policy Shocks

While textual analysis of the FOMC postmeeting statements is interesting, it is worth going further to see how these textual differences might matter (if at all) for how financial markets respond to the statement. Using daily financial-market data, it is possible
to estimate a daily time series of monetary policy shocks. With this, we can identify the shock on the exact day that a statement was released and thus isolate the market-moving impact of the statement. To estimate these shocks, we use a VAR estimated with daily financial-market data and identified with sign restrictions, as in Antolín-Díaz and Rubio-Ramírez (2015). We include five variables of daily financial-market data in the VAR: the ten-year Treasury inflation-protected securities (TIPS) interest rate (R), ten-year inflation expectations (π^e, measured as the difference between the ten-year nominal rate and the ten-year TIPS rate), the nominal effective exchange rate (FX), the value of the S&P 500 (SP), and the VIX (VIX). For the first two variables, the observation is the daily percentage-point change; for the other three, the observation is the daily percentage change. We use the VAR to identify the following four shocks: a monetary shock (M), a risk shock (σ), a demand shock (D), and a supply shock (Sup). To estimate this VAR and identify these four shocks using the method of sign restrictions, begin by estimating a reduced-form VAR(1) using the five financial-market variables:

X_t = A X_{t−1} + u_t

where X_t is a five-by-one vector with the five variables in the estimation and u_t is the vector of reduced-form residuals. Then we identify the impact matrix S, where u_t = S e_t and the vector e_t is a five-by-one vector with the four exogenous shocks and one measurement error.7 The impact matrix is the matrix S where SS′ = E[u_t u_t′], the covariance matrix of the reduced-form residuals. If we were identifying shocks through a set of ordering restrictions, we would assume that S is a lower triangular matrix and is thus the output of a Cholesky decomposition of the covariance matrix. Since the data are daily financial-market data, such an ordering restriction would be inappropriate, so we instead assume that S satisfies certain sign restrictions. The sign restrictions that identify these four shocks (and one measurement error) are reported in the top half of table 9.5. Our identification strategy assumes that on the day of the shock, a monetary shock leads to an increase in the real interest rate, a decrease in expected inflation, US dollar appreciation, and a decline in the S&P 500. A risk shock leads to a fall in the real interest rate (through a risk-induced flight to safety into US Treasuries), an appreciation of the dollar (through the same flight to safety), a fall in the S&P 500, and an increase in the VIX. An aggregate demand shock leads to an increase in the real interest rate, an increase in expected inflation, and dollar appreciation. An aggregate supply shock leads to an increase in the real interest rate, a fall in expected inflation, dollar appreciation, and an increase in the S&P 500. We run a random search to identify the possible S matrices that satisfy both these sign restrictions and the criterion that SS′ = E[u_t u_t′].8 There are many possible candidate matrices that satisfy both these criteria. For our purposes, we will identify and calculate results from one thousand possible S matrices. The average impact matrix, after averaging
over these one thousand possible S matrices, is presented in the bottom half of table 9.5. This is the effect of each of the four shocks in the model on the daily change in the five financial-market variables. We estimate that a one-standard-deviation contractionary monetary policy shock, M = 1, leads to a 1.9-percentage-point increase in the real interest rate, a 1.5-percentage-point decrease in the expected inflation rate, a 0.1 percent increase in the nominal effective exchange rate, a 0.63 percent decline in the S&P 500, and a 2.3 percent increase in the VIX.
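The random search described above (and spelled out in note 8) takes the Cholesky factor C of the residual covariance, draws a random matrix, orthonormalizes it by QR decomposition, and keeps S = CQ when the sign restrictions hold. The NumPy sketch below is a minimal version of that algorithm; the function and parameter names are ours, not the authors'.

```python
import numpy as np

def draw_sign_restricted_S(sigma_u, sign_pattern, n_keep=1000,
                           seed=0, max_tries=200_000):
    """Random search for impact matrices S with S S' = sigma_u that satisfy
    a sign pattern (+1, -1, or 0 for an unrestricted entry), following the
    algorithm sketched in the chapter's note 8."""
    rng = np.random.default_rng(seed)
    C = np.linalg.cholesky(sigma_u)
    restricted = sign_pattern != 0
    kept = []
    for _ in range(max_tries):
        # Draw a random orthonormal Q via QR, sign-normalized on diag(R).
        Q, R = np.linalg.qr(rng.standard_normal(sigma_u.shape))
        Q = Q @ np.diag(np.sign(np.diag(R)))
        S = C @ Q
        # Keep the draw if every restricted entry has the required sign.
        if np.all(np.sign(S)[restricted] == sign_pattern[restricted]):
            kept.append(S)
            if len(kept) == n_keep:
                break
    return kept
```

Each kept S satisfies SS′ equal to the residual covariance by construction (any orthonormal Q preserves it), so the sign restrictions are the only thing distinguishing admissible draws from rejected ones.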
9.7 Backing Out a Sequence of Shocks

After finding a possible candidate for this matrix S, call it S_i, it is possible to back out the shocks from the reduced-form residuals u_t. Since u_t = S_i e_{i,t} and S_i is invertible, e_{i,t} = S_i^{−1} u_t. A separate sequence of shocks, e_{i,t}, is associated with each possible matrix S_i. Since we calculate one thousand possible S_i matrices and thus one thousand possible shock sequences e_{i,t}, a simple average can be taken across these one thousand possible sequences. The average shock sequence for each of our four shocks is plotted in figure 9.5.

From these time-series graphs of the four shocks in the model, it appears that, other than a spike during the acute part of the crisis, the magnitudes of the demand and supply shocks are fairly constant between the pre- and post-ZLB periods. The same is not true, however, for the monetary shocks and the risk shocks, which appear to have higher magnitude in the post-ZLB period than in the pre-Lehman period.

We can calculate the variance of each of the four shocks in the pre-Lehman and post-ZLB periods.9 By definition, the variance of each shock over the entire sample is equal to one. This simply arises from the fact that the shocks are identified with u_t = S e_t and the identification matrix must satisfy SS′ = E[u_t u_t′], so the covariance matrix of the vector of shocks, E[e_t e_t′], is simply the identity matrix. But the variance of the estimated shocks within the two subsamples is reported in table 9.6. The table shows that the variances of the supply and demand shocks are roughly the same across the two subperiods, but there is a statistically significant increase in the variance of the risk and monetary shocks in the post-ZLB period.10 Since these monetary shocks are measured with daily data, it is possible to connect this increased variance to specific Fed actions. To do this, we begin by connecting these daily monetary shocks to specific Fed meetings and statements.
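The inversion-and-averaging step just described, e_{i,t} = S_i^{−1} u_t for each admissible draw and then an average across draws, can be sketched as follows (assuming NumPy, with the residuals stored as a T × n array; names are ours):

```python
import numpy as np

def average_shock_sequence(residuals, S_list):
    """Back out structural shocks e_t = S^{-1} u_t for each candidate impact
    matrix S, then average across candidates (the chapter's approach of
    averaging over 1,000 admissible S draws).

    residuals: (T, n) array of reduced-form VAR residuals u_t
    S_list:    list of (n, n) admissible impact matrices
    """
    # For row-stacked residuals u, e = u @ inv(S).T gives e_t = S^{-1} u_t.
    shocks = [residuals @ np.linalg.inv(S).T for S in S_list]
    return np.mean(shocks, axis=0)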
It is interesting to note that the statement from May 1999, the first meeting when a statement was released without a policy change, says: “While the FOMC did not take action today to alter the stance of monetary policy, the Committee was concerned about the potential for a buildup of inflationary imbalances that could undermine the favorable performance of the economy and therefore adopted a directive that is tilted toward
Table 9.5 Sign Restrictions Used to Identify Shocks

Sign restrictions in the matrix S
                                        Monetary   Risk      Demand    Supply
                                        shock M    shock σ   shock D   shock Sup
10-year TIPS interest rate (R)          +          −         +         +
10-year expected inflation rate (Ex)    −                    +         −
Nominal effective exchange rate (FX)    +          +         +         +
S&P 500 (SP)                            −          −                   +
VIX (VIX)                                          +

Average of 1,000 estimated S matrices: average impact matrix
(columns: M, σ, D, Sup, measurement error)
10-year TIPS interest rate (R)           1.89      −2.06      1.75      1.51     0.03
10-year expected inflation rate (Ex)    −1.45      −0.44      1.92     −1.49     0.07
Nominal effective exchange rate (FX)     0.10       0.15      0.10      0.10     0.00
S&P 500 (SP)                            −0.63      −0.51      0.41      0.44     0.02
VIX (VIX)                                2.27       2.66     −1.61     −1.54    −0.13

Note: A plus sign indicates that we are restricting our search to S matrices where that particular element is positive. A minus sign indicates that the particular element must be negative. No sign indicates that the entry in the S matrix can have any sign.
the possibility of a firming in the stance of monetary policy." The estimated monetary shock is contractionary (but not large) on the day that statement was released, May 18, 1999 (recall that we define a positive monetary shock as a contractionary monetary shock or a tightening of monetary policy).

Figure 9.6 plots the estimated monetary shock on the final day of each scheduled FOMC meeting since May 1999. A few notable dates are highlighted in the chart:

1. December 2008. ZLB reached as the FOMC cuts the federal funds rate from a target level of 1 percent to a target range of 0 to 25 basis points.
2. March 2009. FOMC announcement of an increase in the size of the LSAP program or QE1.
3. September 2010. QE2 hinted at (by adding the phrase "provide additional accommodation if needed" to the postmeeting statement); QE2 announced at the subsequent FOMC meeting in November.11
[Figure 9.5 Time Series of Shocks Estimated from Sign Restrictions VAR. Four panels plot the estimated monetary, risk, demand, and supply shocks at a daily frequency, 1999-2015.]
Table 9.6 Sample Variances of the Estimated Shocks in the Model

             Monetary    Risk       Demand     Supply
             shock M     shock σ    shock D    shock Sup
Pre-Lehman   0.72        0.72       0.92       0.89
Post-ZLB     1.08        1.18       0.95       0.95
4. August 2011. Introduction of (date-based) FG in the form of an observation that economic conditions "are likely to warrant exceptionally low levels for the federal funds rate at least through mid-2013."
5. September 2012. FG extended until mid-2015.
6. May/June 2013. Tapering first hinted at: "The committee is prepared to increase or reduce the pace of its purchases to maintain appropriate policy accommodation."
7. September 2013. Expected tapering did not start: "the Committee decided to continue purchasing additional agency mortgage-backed securities at a pace of $40 billion per month and longer-term Treasury securities at a pace of $45 billion per month."
8. March 2015. "Patient" removed from statement, but Fed is clear that liftoff is not imminent: "The Committee anticipates that it will be appropriate to raise the
[Figure 9.6 Monetary Shocks on FOMC Statement Release Days, 1999-2015. Annotated dates: 12/08, 3/09, 9/10, 8/11, 9/12, 6/13, 9/13, and 3/15.]
Table 9.7 Sample Variance of the Monetary Shock on FOMC Statement Days and Nonstatement Days

             Statement day   Nonstatement day
Pre-Lehman   0.86 (75)       0.71 (2,340)
Post-ZLB     2.26 (56)       1.04 (1,665)

Note: Number of observations in each category in parentheses.
target range for the federal funds rate when it has seen further improvement in the labor market and is reasonably confident that inflation will move back to its 2% objective over the medium term. This change in the forward guidance does not indicate that the Committee has decided on the timing of the initial increase in the target range."

Table 9.7 shows the variances of the estimated monetary shock on FOMC statement release days and ordinary days during the pre-Lehman and post-ZLB subperiods. Recall that by construction, the variance of the monetary shock over the entire 1999-2015 period is one. And in table 9.6, we reported that the variance is 0.72 in the pre-Lehman period and 1.08 in the post-ZLB period. By analyzing shocks on statement days separately
from nonstatement days, we find that the variance of the shock is slightly higher on statement days than on nonstatement days during the pre-Lehman period, but this increase is not statistically significant. However, in the post-ZLB period, the estimated monetary shock is more than twice as volatile on statement days as on nonstatement days.12
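Notes 10 and 12 compare these subsample variances with an F-test. As a self-contained alternative that needs no F-distribution table, the sketch below uses a one-sided permutation test of the variance ratio; this is our stand-in for the authors' procedure, not a reproduction of it, and the names are ours.

```python
import numpy as np

def variance_ratio_test(a, b, n_perm=10_000, seed=0):
    """One-sided permutation test that var(a) > var(b): shuffle the pooled
    sample and see how often the permuted variance ratio meets or exceeds
    the observed one (a distribution-free analogue of the F-test used in
    the chapter's notes 10 and 12)."""
    rng = np.random.default_rng(seed)
    obs = np.var(a, ddof=1) / np.var(b, ddof=1)
    pooled = np.concatenate([a, b])
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        ratio = (np.var(perm[:len(a)], ddof=1)
                 / np.var(perm[len(a):], ddof=1))
        if ratio >= obs:
            count += 1
    # Add-one correction keeps the p-value away from exactly zero.
    return (count + 1) / (n_perm + 1)
```

Applied to the statement-day versus nonstatement-day monetary shocks, a small p-value would correspond to the significant post-ZLB variance gap reported in table 9.7.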
9.8 Effect of the Statement on the Size of Monetary Shocks

We thus observe two facts about FOMC statements and the monetary shocks on the days the statements are released. First, the statements have been getting longer and more complex. Second, monetary shocks on FOMC statement days are getting bigger. The question is how the two are related. Are longer and more complex statements increasing the amount of information passed in each statement and leading to a larger monetary shock?

There is, of course, an empirical problem here. We want to say that as the information content of statements has increased, that has led to larger monetary shocks (i.e., more bang for the statement buck). But the causality could run both ways. It could be that a longer and more detailed statement leads to a bigger impact, but it also could be that on a big announcement day (such as announcing new FG or a new QE program), the committee puts out a longer and more detailed statement. One solution to this empirical problem is to turn back to the pre-ZLB period and look at the statements from 1999 to 2008, when a change (or no change) in the federal funds rate was announced in each statement. We run a simple multivariate regression of the size of the monetary policy shock on the day the FOMC issues a statement on the change in the federal funds rate and some proxy for the information content of the statement. This is a simple way of addressing the question of whether the information content of the statement has an effect on the size of the monetary shock associated with a decision to either change or leave unaltered the federal funds rate, after controlling for the actual monetary actions announced in the statement. In table 9.8, we present the results from the following regression:
M_t = α · Surprise_t + β · Statement_t + γ · (Statement_t × Surprise_t) + ε_t
where M_t is the value of the monetary shock estimated from the sign-restrictions VAR, Surprise_t is the difference between the federal funds rate at the end of the day the statement was released and the rate implied by the nearby federal funds futures contract the day before, Statement_t is some characteristic of the FOMC statement from that meeting (either the word count or the Flesch-Kincaid grade level), and ε_t is an error term. The policy surprise has a positive and significant effect on the size of the monetary shock, as would be expected. A change in the federal funds rate above that which was expected is associated with a positive (contractionary) monetary policy shock.
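This specification can be estimated by ordinary least squares. The NumPy sketch below is our minimal version (the chapter does not report its estimation code, and we add an intercept):

```python
import numpy as np

def interaction_regression(shock, surprise, statement):
    """OLS for M_t = a*Surprise_t + b*Statement_t
                     + c*(Statement_t * Surprise_t) + e_t,
    estimated with an intercept via least squares.
    Returns [const, alpha, beta, gamma]."""
    X = np.column_stack([
        np.ones_like(surprise),   # intercept
        surprise,                 # policy surprise
        statement,                # statement characteristic (words or grade)
        surprise * statement,     # interaction term
    ])
    beta, *_ = np.linalg.lstsq(X, shock, rcond=None)
    return beta
```

The interaction coefficient γ is the object of interest: a positive γ means that, for the same rate surprise, a longer (or higher-grade-level) statement produces a larger estimated monetary shock.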
Table 9.8 Results from a Regression of the Estimated Monetary Shock on the FOMC Policy Surprise and Statement Characteristics

Dependent variable: monetary shock

Without time trend             (1)        (2)        (3)        (4)        (5)        (6)
Value of policy rate surprise  2.76**     2.89**     −6.59      2.84**     −12.99     −38.27***
                               (1.14)     (1.13)     (5.39)     (1.15)     (11.42)    (14.39)
Word count of statement                   0.00*      0.00                             0.00
                                          (0.00)     (0.00)                           (0.00)
Surprise × word count                                0.04*                            0.05**
                                                     (0.02)                           (0.02)
Flesch-Kincaid grade level                                      0.07       0.04       0.00
                                                                (0.06)     (0.07)     (0.07)
Surprise × grade level                                                     1.01       1.74**
                                                                           (0.73)     (0.74)
R²                             0.08       0.11       0.14       0.08       0.09       0.18
No. of observations            78         78         78         78         78         78

With time trend                (1)        (2)        (3)        (4)        (5)        (6)
Time trend                     −0.01**    −0.01      −0.01      −0.01**    −0.01**    −0.01
                               (0.00)     (0.01)     (0.01)     (0.01)     (0.01)     (0.01)
Value of policy rate surprise  3.13***    3.11***    −5.46      3.14***    −16.07     −37.62***
                               (1.13)     (1.13)     (5.44)     (1.13)     (11.20)    (14.34)
Word count of statement                   0.00       0.00                             0.00
                                          (0.00)     (0.00)                           (0.00)
Surprise × word count                                0.03                             0.05**
                                                     (0.02)                           (0.02)
Flesch-Kincaid grade level                                      −0.03      −0.07      −0.06
                                                                (0.08)     (0.08)     (0.08)
Surprise × grade level                                                     1.23*      1.79**
                                                                           (0.71)     (0.74)
R²                             0.13       0.12       0.14       0.12       0.14       0.19
No. of observations            78         78         78         78         78         78

Note: Sample period: 1999-2008. Dependent variable: monetary policy shock as estimated using sign restrictions in a five-variable VAR. "Surprise" denotes the surprise component of monetary policy as estimated by the difference between the value of the federal funds rate at the end of the day of an FOMC meeting and the rate implied by the nearby federal funds futures contract the day before the FOMC meeting. Standard errors in parentheses. ***/**/* denote significance at the 1/5/10 percent level.
The point estimate of the coefficient shows that a policy surprise of 10 basis points leads to a contractionary monetary policy shock of 0.27 (recall from the first column of the impact matrix in table 9.5 that a monetary shock M = 1 leads to a 1.9-percentage-point increase in the ten-year real interest rate, a 1.5-percentage-point decrease in the expected inflation rate, a 0.1 percent increase in the nominal exchange rate, a 0.63 percent decline in the S&P 500, and a 2.3 percent increase in the VIX). The statement variable on its own usually does not have a significant effect, but the interaction between the Fed action and the statement variable does. For instance, the results in the third column of table 9.8 say that every one-word increase in the word count of the statement raises the effect of the same surprise change in the federal funds rate on the magnitude of the resulting monetary shock by 0.04, and every increase of 1 in the grade level increases the effect of the same federal funds rate change on the resulting monetary shock by 1.12.

The interaction term between the policy surprise and the statement characteristic variable is positive and significant, implying that as the statement gets longer or is written at a higher level, the monetary policy shock associated with a given policy change is larger. This suggests that longer and more complex statements that convey more information are able to deliver a larger monetary shock and thus help the Fed move markets in its intended direction. Earlier, we reported that the average statement from the post-1999 Greenspan Fed was 217 words long and the average statement from the pre-ZLB Bernanke Fed was 246 words long. The regression results imply that because the Bernanke statements were longer, the 10-basis-point policy surprise from the Bernanke Fed would have led to a contractionary monetary policy shock that was larger by 0.1.
One needs to exercise caution before extrapolating these regression results from pre-ZLB statements to the ZLB era, and in the next section, we will discuss some ways this analysis can be extended to the ZLB period. But using a simple back-of-the-envelope calculation and extrapolating these regression results to the ZLB period, the statements in the post-ZLB Bernanke Fed were on average 280 words longer than the statements from the pre-ZLB Bernanke Fed. The 10-basis-point policy surprise from the post-ZLB Bernanke Fed would have led to a contractionary monetary policy shock that was larger by one standard deviation. In addition, the post-ZLB Bernanke Fed statements were on average three grade levels higher than those of the pre-ZLB Bernanke Fed. The 10-basis-point policy surprise from the post-ZLB Bernanke Fed would have led to a contractionary monetary policy shock that was larger by 0.3 standard deviation.
9.9 Conclusion

As the FOMC has had to deal with the greatest financial crisis since the Great Depression and exhausted conventional monetary options, unconventional monetary policy has
played a greater role, and within the class of unconventional monetary policies, FG (that is, communication about the likely future course of policy conditional on economic developments) has taken on greater importance. In this chapter, we have examined just one dimension of the FOMC's communications, the postmeeting statement, and documented how it has evolved over time. We have sought to characterize some of the linguistic features of the statement that may have made it easier or more difficult to understand and then shown how these characteristics correlate with estimates of monetary policy shocks.

While this analysis certainly suggests that the increased length and complexity of the statements during the ZLB period have been responsible for statements having such a large impact, the regression analysis performed in the last section relied on pre-ZLB observations, where the primary policy instrument was a change in the federal funds rate. To formally extend such analysis to the ZLB period, one would need to quantify the effect of unconventional policy announcements, such as the announcement of a new QE program. One possibility is to consider changes in something like Eurodollar rates, but this is left as a direction for further research.

Of course, the postmeeting statement is just one of the ways the FOMC communicates with the public. It has always issued minutes after its meetings. Through 2004, the minutes were released two days after the subsequent meeting, which limited their usefulness in conveying the thinking behind the policy decision made at the meeting to which they referred. In December 2004, the decision was made to bring forward the release of the minutes, and they now appear three weeks after the meeting to which they refer, providing additional insights into the thinking behind policy decisions. Three other major communications innovations have occurred in recent years.
First, following the long-standing practices of other major central banks, the chairman now holds a press conference four times a year. Second, the members of the Board of Governors and the individual reserve bank presidents now release their economic forecasts four times a year as part of a regular Summary of Economic Projections. Previously, these forecasts had been reported only twice a year as part of the regular monetary policy report to Congress (previously known as the Humphrey-Hawkins testimony). And third, all of the governors and reserve bank presidents give regular speeches and interviews. A more comprehensive analysis of FOMC communications would examine how information about monetary policy communicated through these channels interacts with measures of policy shocks. That is left for future work.

We briefly compared some of the characteristics of the FOMC's postmeeting statement with the press release issued by the BoC. It would be interesting to follow up with a broader comparison covering more central banks in the English-speaking world to see how the complexity of the communications of the different central banks evolved in the postcrisis period. We could also replicate our econometric analysis for these countries, a project in which we are currently engaged.
Notes

1. A similar point is made by Dale, Orphanides, and Österholm (2011), who argue that central banks should limit themselves to communicating the information they know most about.
2. One caveat to keep in mind when looking at the Flesch-Kincaid grade-level score is that ideally, a text should have at least 200 words before this scoring can be used (see Graesser et al. 2004, 7).
3. Assuming that the Flesch-Kincaid grade levels 13 through 16 correspond to college-level comprehension and grade levels 17 and above correspond to postgraduate level.
4. Based on the trend toward ever-longer statements apparent in figure 9.1, especially in the last full year of Bernanke's tenure, one might well question the validity of tests for differences of means in the word counts. The pattern in figure 9.1 shows a step up in the average length of the postmeeting statement after the FOMC started engaging in unconventional monetary policy in December 2008, remaining fairly stable in the 400-to-500-word range, with the word length becoming a bit unanchored over the course of 2013. The issue of potential nonstationarity is not as apparent in the index of Flesch-Kincaid readability, although some of the statements issued toward the end of Bernanke's tenure score at a twentieth-grade readability level.
5. Note that there were two press releases issued in October 2008, one on October 8 and the second on October 21. The October 8 press release was issued as part of the coordinated action by the Federal Reserve, the ECB, the Bank of England, the Swiss National Bank, and Sveriges Riksbank to stabilize economic activity in the wake of the failure of Lehman Brothers. The BoC cut its policy rate by 0.5 percentage point at that time. On October 21, 2008, the BoC announced that it was cutting its policy rate by a further 0.25 percentage point, to 2.25%. Figure 9.3 just shows the word count for the October 8 press release.
6. The BoC's experience with unconventional monetary policy is reviewed in Poloz 2015. The BoC's balance sheet did expand between October 2008 and June 2010 for other reasons.
7. Identification of shocks requires the matrix S to be invertible and thus square.
8. Identify the lower triangular matrix C, where C is a Cholesky decomposition of the covariance matrix of the reduced-form residuals. Next, denote X a square matrix (the same size as C) of random elements. Perform a QR decomposition of X, where QQ′ = I and R is an upper triangular matrix. The trial S matrix is S = CQ. If S satisfies the sign restrictions, keep S; if not, discard it and draw a new random matrix X. Stop the process after finding 1,000 potential matrices S that satisfy the sign restrictions.
9. The pre-Lehman period is from the start of the sample in January 1999 to September 12, 2008. The post-ZLB period is from December 17, 2008, to the end of the sample in November 2015.
10. The ratio of two variances follows an F-distribution. The ratio of the variance of the monetary or risk shock in the post-ZLB period to the same shock in the pre-Lehman period is greater than 1, with p-value equal to 0. The corresponding ratios for the demand and supply shocks are greater than 1, with p-values 0.29 and 0.06, respectively.
11. Note that Bernanke had also hinted at launching an additional LSAP program in his remarks at the Jackson Hole conference on August 27.
12. The ratio of the variance of the monetary shock on statement days to that on ordinary days in the pre-Lehman period is greater than 1, with p-value equal to 0.11. The ratio of the
Central Bank Communications: A Case Study 285 variance of the monetary shock on statement days to that on ordinary days in the post- ZLB period is greater than 1, with p-value equal to 0.
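The rejection-sampling procedure described in note 8 can be sketched in a few lines. This is a minimal NumPy illustration under an assumed sign restriction, not the authors' code; the restriction used in the example (the first shock raises both variables on impact) is invented for illustration.

```python
import numpy as np

def draw_sign_restricted(sigma_u, check_signs, n_keep=1000, rng=None):
    """Rejection sampler for impact matrices S = CQ satisfying sign restrictions.

    sigma_u     : covariance matrix of the reduced-form residuals (uu')
    check_signs : callable returning True if a trial S meets the restrictions
    """
    rng = rng or np.random.default_rng(0)
    C = np.linalg.cholesky(sigma_u)        # lower triangular, C C' = sigma_u
    kept = []
    while len(kept) < n_keep:
        X = rng.standard_normal(C.shape)   # square matrix of random elements
        Q, R = np.linalg.qr(X)             # Q orthogonal: Q Q' = I
        S = C @ Q                          # trial impact matrix; S S' = sigma_u
        if check_signs(S):                 # keep S only if the signs hold
            kept.append(S)
    return kept

# Hypothetical 2x2 example: require the first shock to move both variables up.
sigma = np.array([[1.0, 0.3], [0.3, 1.0]])
draws = draw_sign_restricted(sigma, lambda S: np.all(S[:, 0] > 0), n_keep=5)
```

Because Q is orthogonal, every accepted draw reproduces the reduced-form covariance exactly (SS′ = CQQ′C′ = Σ), so the sign restrictions select among observationally equivalent rotations rather than refitting the model.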
References

Amato, Jeffrey D., Stephen Morris, and Hyun Song Shin. 2002. "Communication and Monetary Policy." Oxford Review of Economic Policy 18: 495–503.

Antolín-Díaz, Juan, and Juan F. Rubio-Ramírez. 2015. "Understanding the Turbulence in Financial Markets of August 2015: Can Modern Macroeconometrics Help?" Manuscript.

Armesto, Michelle T., Rubén Hernández-Murillo, Michael T. Owyang, and Jeremy Piger. 2009. "Measuring the Information Content of the Beige Book: A Mixed Data Sampling Approach." Journal of Money, Credit and Banking 41, no. 1: 35–55.

Balke, Nathan S., and D'Ann Petersen. 2002. "How Well Does the Beige Book Reflect Economic Activity?" Journal of Money, Credit and Banking 34: 114–136.

Berger, Helge, Michael Ehrmann, and Marcel Fratzscher. 2011. "Monetary Policy in the Media." Journal of Money, Credit and Banking 43: 689–709.

Bernanke, Ben S., Thomas Laubach, Frederic S. Mishkin, and Adam S. Posen. 1999. Inflation Targeting: Lessons from the International Experience. Princeton: Princeton University Press.

Bligh, Michelle C., and Gregory D. Hess. 2007. "The Power of Leading Subtly: Alan Greenspan, Rhetorical Leadership, and Monetary Policy." Leadership Quarterly 18: 87–104.

Blinder, Alan S., Michael Ehrmann, Marcel Fratzscher, Jakob de Haan, and David-Jan Jansen. 2008. "Central Bank Communication and Monetary Policy: A Survey of Theory and Evidence." Journal of Economic Literature 46: 910–945.

Blinder, Alan S., Charles Goodhart, Philipp Hildebrand, David Lipton, and Charles Wyplosz. 2001. How Do Central Banks Talk? Geneva Reports on the World Economy 3. London: Centre for Economic Policy Research.

Brunner, Karl. 1981. "The Art of Central Banking." University of Rochester Center for Research in Government Policy and Business working paper GPB 81-6.

Dale, Spencer, Athanasios Orphanides, and Pär Österholm. 2011. "Imperfect Central Bank Communication: Information versus Distraction." International Journal of Central Banking 7, no. 2: 3–39.

Ehrmann, Michael, and Marcel Fratzscher. 2005. "How Should Central Banks Communicate?" European Central Bank working paper 557.

Ehrmann, Michael, and Marcel Fratzscher. 2007. "Communication by Central Bank Committee Members: Different Strategies, Same Effectiveness?" Journal of Money, Credit and Banking 39, nos. 2–3: 509–541.

Ehrmann, Michael, and Marcel Fratzscher. 2013. "Dispersed Communication by Central Bank Committees and the Predictability of Monetary Policy Decisions." Public Choice 157: 223–244.

Flesch, Rudolph. 1948. "A New Readability Yardstick." Journal of Applied Psychology 32: 221–233.

Friedman, Benjamin M. 2003. "The Use and Meaning of Words in Central Banking: Inflation Targeting, Credibility and Transparency." In Essays in Honour of Charles Goodhart, Vol. 1: Central Banking, Monetary Theory and Practice, edited by Paul Mizen, 111. Cheltenham, UK, and Northampton, MA: Elgar.

Fulmer, Michael. 2013. "A Text Analytics Analysis of the Federal Reserve Beige Book Corpus." Ph.D. thesis, Department of Economics, Southern Methodist University.

Goodfriend, Marvin. 1986. "Monetary Mystique: Secrecy and Central Banking." Journal of Monetary Economics 17: 63–92.

Graesser, Arthur C., Danielle S. McNamara, Max M. Louwerse, and Zhiqiang Cai. 2004. "Coh-Metrix: Analysis of Text on Cohesion and Language." Behavior Research Methods, Instruments and Computers 36: 193–202.

Guthrie, Graeme, and Julian Wright. 2000. "Open Mouth Operations." Journal of Monetary Economics 46: 489–516.

Hart, Roderick P. 2000. Diction 5.0: The Text-Analysis Program. Austin, Texas: Digitext, Inc.

Kincaid, J. Peter, Robert P. Fishburne Jr., Richard L. Rogers, and Brad S. Chissom. 1975. "Derivation of New Readability Formulas (Automated Readability Index, Fog Count and Flesch Reading Ease Formula) for Navy Enlisted Personnel." Research Branch report 8-75, Naval Technical Training, US Naval Air Station, Memphis.

Myatt, David P., and Chris Wallace. 2014. "Central Bank Communication Design in a Lucas-Phelps Economy." Journal of Monetary Economics 63: 64–79.

Poloz, Stephen S. 2015. "Prudent Preparation: The Evolution of Unconventional Monetary Policies." Remarks at the Empire Club of Canada, December 8.
chapter 10

Transparency of Monetary Policy in the Postcrisis World

Nergiz Dincer, Barry Eichengreen, and Petra Geraats
10.1 Introduction

Transparency is a touchstone of monetary policy. There is broad consensus in scholarly and practical policymaking circles on its desirability and utility. At the same time, however, there is uncertainty about how best to implement and pursue it. Transparency is a desirable characteristic of a central bank insofar as it enhances the effectiveness of monetary policy. It involves a commitment to communication that makes it easier to understand the central bank's policy decisions and how they are reached, thereby enhancing the credibility of monetary policy. Transparency imposes discipline on monetary policymakers, since unsubstantiated policy changes have reputational costs. It can render policy more predictable, which prevents changes in the central bank's policy stance from destabilizing financial markets. Since a predictable policy is more rapidly incorporated into financial variables, its impact on the economy is enhanced. In addition, transparency can help sustain support for the independence of the central bank. Independence is a way of solving time-inconsistency problems, avoiding short-termism, and insulating decision-making from political pressures that would otherwise bedevil monetary policy. Independent central bankers are free to choose their tactics while being obliged to advance a broader strategy or mandate determined by society through its political representatives. Transparency, in this light, is a way for those independent central bankers to explain how their actions are consistent with that mandate. Independence must be paired with accountability in order to be socially acceptable, and transparency ensures that the central bank can be held more effectively accountable in the court of public opinion.
But there is disagreement about how central banks should best communicate their intentions and actions so as to enhance the effectiveness of their policies and ensure their adequate accountability. One source of disagreement derives from the failure of central bankers and their critics to adequately distinguish different dimensions of transparency. Following Geraats (2002), scholars distinguish political transparency, or openness about policy objectives; economic transparency, or providing data, models, and forecasts used for policymaking; operational transparency, or openness about the implementation of policy decisions and the extent to which the central bank's main policy targets have been achieved; policy transparency, or communicating policy decisions and inclinations; and procedural transparency, or providing systematic information about the monetary policy decision-making process. The last two of these are most relevant to this chapter. The problem is that there is less than full agreement about the relative importance of these different facets of central bank transparency, and wide variation among central banks in the extent to which they are embraced. This chapter analyzes whether and to what extent central banks have continued to become more transparent about monetary policy during the postcrisis period. It focuses on two aspects of how monetary policy transparency is practiced. First, we analyze best practice in terms of procedural transparency, where a key aspect is the release of voting records and minutes without undue delay. We discuss the desirability of recent developments in this respect. Second, we analyze the evolution of policy transparency in the wake of central banks' postcrisis experiments with unconventional policy measures and forward guidance (FG).
In particular, we discuss transparency issues related to large-scale liquidity and asset purchase programs and how makers of monetary policy have developed FG about the policy rate to provide further stimulus with interest rates at record low levels. A number of surveys of the literature on these issues already exist. For example, Geraats (2002; 2006; 2009; 2014) discusses theory, practice, trends, and recent developments in central bank transparency. Blinder et al. (2008) focus on central bank communication, a closely related issue. Blinder et al. (2001) present and analyze case studies of the transparency of five major central banks, while a number of other authors describe and analyze transparency practices for individual central banks (see, e.g., Bjørnland et al. 2004 on the Norwegian central bank; Geraats, Giavazzi, and Wyplosz 2008 on the European Central Bank). Fry et al. (2000) construct an index of "policy explanations" based on a survey of ninety-four central banks, while Eijffinger and Geraats (2006) present a more systematic central bank transparency index that distinguishes the political, economic, procedural, policy, and operational aspects mentioned above for nine major central banks from 1998 to 2002. Subsequently, Dincer and Eichengreen (2008; 2010; 2014) compiled this index for a much larger sample of more than one hundred central banks and used it to analyze the determinants and effects of transparency, finding, for example, that greater transparency is associated with less variable inflation. In this chapter, we synthesize these analyses and present a new transparency index that has been updated to capture developments in the postcrisis world. Section 10.2 provides an overview of key arguments and a discussion of postcrisis challenges. Section 10.3 then presents and analyzes a new index of monetary policy transparency that captures its political, economic, procedural, policy, and operational aspects for more than one hundred central banks. This index differs from those in our own previous work and that of other scholars in being better focused and more granular, in particular with respect to procedural and policy transparency. Section 10.4 is a set of case studies of how several prominent central banks are dealing with postcrisis monetary policy challenges. Section 10.5 summarizes.
10.2 Overview of Issues

Most central banks consider transparency to be an important component of their monetary policy framework.1 The European Central Bank (ECB), for example, maintains a web page headed "Transparency," which provides a window onto how central banks conceive of the concept.2 It defines transparency as "[providing] the general public and the markets with all relevant information on [the central bank's] strategy, assessments and policy decisions as well as its procedures in an open, clear and timely fashion." Transparency has three benefits, the web page explains: self-discipline, credibility, and predictability. It is useful to discuss these three aspects a bit further. Transparency imposes self-discipline because it requires central bankers to provide consistent explanations for their monetary policy decisions. In order for decisions to make sense to the public, policymakers have to explain why they make sense to themselves. By publishing their economic models and forecasts, policymakers are further encouraged to make decisions that are consistent with their mandate, because the public can more easily detect any deviations. As a result, policymakers' reputations quickly suffer from inconsistent behavior. Transparency (especially economic and operational transparency) thus improves policymakers' incentives and mitigates "inflation bias" problems (see, e.g., Faust and Svensson 2001; Geraats 2001 and 2005). All of this heightens the likelihood, and strengthens the public's belief, that makers of monetary policy will act in a manner consistent with their mandates. Thus, transparency enhances credibility, which means that inflation expectations are well anchored and in line with the central bank's inflation objective.3 This anchoring also occurs because there is clarity about the central bank's mandate to maintain, inter alia, a low and stable rate of inflation (there is political transparency, in other words).
And when inflation expectations are well anchored, it becomes easier for the central bank to achieve price stability. Consequently, transparency also enhances the predictability of policy outcomes. Monetary policy actions become more predictable when the central bank provides a prompt explanation of policy decisions and an indication of likely future policy moves (i.e., policy transparency). This means makers of monetary policy are unlikely to catch financial-market participants and others off guard, causing fewer monetary policy surprises that could result in destabilizing disturbances to financial markets. Predictability of monetary policy actions is further enhanced by publishing the monetary policy strategy and releasing the votes and minutes of monetary policy meetings without undue delay (i.e., procedural transparency). This enables the private sector to learn how monetary policy responds systematically to economic developments and disturbances, leading to the formation of more accurate expectations of future policy. As a result, asset prices are likely to move in the direction desired by the central bank even in advance of policy action. This in turn accelerates the speed with which monetary policy affects financial variables and thereby influences consumption and investment decisions, reducing monetary transmission lags and making monetary policy more effective. Furthermore, publishing individual voting records facilitates the accountability of central bankers and allows the government to reappoint those who act according to their mandate (e.g., Gersbach and Hahn 2004), thus further fostering the self-discipline, credibility, and predictability of monetary policy. Such accountability also helps to prevent a democratic deficit, which is important for the legitimacy of independent central banks. However compelling these rationales, it can also be argued that transparency can go too far (see, e.g., Mishkin 2004). Too much information, or the release of too much information too quickly, may be difficult for observers to absorb and may only confuse market participants. In this case, the result of greater transparency could be more volatility, not less. The simultaneous release of different sets of information (both the monetary policy announcement and the inflation report, for example) may make it hard for officials to identify the market's separate reaction to each and so prevent policymakers from ascertaining the effectiveness of each communication tool.
Finally, detailed information about data, models, forecasts, and decisions may create a false sense of confidence and precision about future monetary policy. Market participants may underestimate the uncertainty surrounding future policy and take on excessive risk on that basis (Barwell and Chadha 2014). Investors may herd in response to public monetary signals in an environment where higher-order beliefs matter, causing greater volatility when the signals are noisy (Morris and Shin 2002). Public communications may dilute the information content of market expectations (Morris and Shin 2005) and reduce private-sector forecasting efforts (Tong 2007), rendering monetary policy less predictable (Kool, Middeldorp, and Rosenkranz 2011). If the economy is thrown off course by these or other developments, the central bank will then face the unenviable choice of either modifying or abandoning its prior guidance or sticking to its previous (now suboptimal) policies, either way damaging its credibility. These dilemmas are especially acute in a postcrisis world where interest rates have fallen close to zero and central banks are forced to turn to unconventional monetary policies. A central bank that engages in quantitative easing will have to explain how its large-scale purchases of publicly traded securities will affect financial markets and, thereby, the economy. This is likely to be more difficult to communicate than in the case of conventional monetary policy actions; large-scale purchases of (different classes of) securities may have different effects from policy rate changes, and they may not even be formally represented in the central bank's model. Moreover, large-scale asset purchases (LSAPs) by a central bank distort asset prices and therefore the market expectations inferred from them, making it even more challenging for central banks to navigate uncharted waters. Insofar as unconventional policy operates via expectations, policy may depend even more than in other circumstances on FG—on the efforts of the central bank to influence expectations of future policy actions through its communications—which in turn creates all the dangers described above about the private sector relying on central bank information. A central bank operating via FG with its policy rate close to zero may only be able to influence price expectations by committing to keep interest rates low for longer than would otherwise be optimal.4 The responsible thing for the central bank, in effect, would be to act irresponsibly (Woodford 2012), which is an uncomfortable position for makers of monetary policy. These issues become more complex and difficult when the central bank has a mandate and operational responsibility for maintaining not just price stability but also financial stability, as is the case for a number of central banks in the postcrisis world. If the central bank has separate policy instruments for each objective, it can achieve a dual mandate of both price and financial stability (e.g., Geraats 2010).
In practice, however, instruments to maintain financial stability are still in their infancy, and their effectiveness remains untested, so trade-offs between price and financial stability may arise.5 In addition, the mapping from tactics to policy objectives is more complex and difficult when there are multiple policy instruments and the mandate has multiple dimensions. Compared to information about inflation, information about financial stability, for example, can be harder to summarize in compact and cogent ways. Some information about threats to financial stability (e.g., financial institutions burdened by “toxic assets” or liquidity problems) may be destabilizing if communicated before corrective action is taken. This is another specific instance of the general argument that at some point, efforts to enhance central bank transparency may become counterproductive. Central banks have pursued a variety of organizational initiatives to address these challenges. They have sought to develop multiple policy instruments (macroprudential as well as monetary policies) suitable for pursuing multiple objectives. Some central banks have created separate monetary policy and financial stability committees to avoid creating conflicts between these different aspects of their mandates, while allowing their members to focus narrowly on their respective charge; the central bank governor typically chairs both committees to ensure information sharing and a minimal degree of coordination. They have issued separate “inflation reports” and “financial stability reports” as vehicles for communicating information about these respective instruments. Time will tell whether these initiatives will suffice to resolve the dilemmas posed by multidimensional mandates.
10.3 Measures of Monetary Policy Transparency

Building on our own previous work (Eijffinger and Geraats 2006; Dincer and Eichengreen 2008, 2010, and 2014), we present new indices of monetary policy transparency for 112 central banks covering about 150 countries.6 This new index is constructed with the intent of measuring the extent to which central banks disclose the information they use in the process of making monetary policy.7 The index analyzed here still consists of five subindices capturing (1) political, (2) economic, (3) procedural, (4) policy, and (5) operational transparency, with each subindex consisting of three items that receive a score of 0, ½, or 1. The overall index equals the sum of the scores across all items, ranging from 0 to a maximum overall score of 15, and is based on information that is freely available in English, the language of international financial markets.8 In these respects, the new index is comparable to our previous work (cited above), which involved a shorter sample period and a more limited range of countries. The new transparency index, which is detailed in appendix 10.1, differs from its predecessors in three respects. First, the indices are updated to cover the period 1998–2015. Including the postcrisis period allows us to analyze how central bank transparency responded to the global financial crisis. In particular, it allows us to identify a number of interesting cases where central bank transparency declined subsequent to the crisis, contrary to the dominant trend. Second, the new transparency index explicitly focuses on monetary policy, as opposed to other central bank objectives.
This refinement, which mostly affects the political dimension pertaining to policy objectives and institutional arrangements, has become increasingly relevant in the postcrisis world, since many central banks no longer confine themselves to monetary policy but have also actively pursued macroprudential policies to achieve financial stability objectives. Analysis of the transparency of the latter will be an interesting topic for future research, but it is a distinct aspect of central bank transparency that should be clearly distinguished. Third, the new index provides a more granular description of transparency, in particular for procedural and policy transparency. Specifically, the fifteen items distinguished in our coding now identify twenty-seven information disclosure practices (again, see appendix 10.1), a wider range of alternatives than identified in previous work. Regarding procedural transparency, we have tightened the criteria for receiving full marks, because the financial crisis demonstrated the importance of timely information, especially in times of heightened uncertainty. In particular, it is useful to know the voting record at the time of the policy decision, because a unanimous vote sends a different signal from a bare majority. In addition, in rapidly changing circumstances, the minutes of policy meetings are less useful (and potentially confusing) if they are released after a prolonged delay (say, after the next policy meeting).
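The aggregation behind the index — five subindices of three items each, every item scored 0, ½, or 1, summed to an overall score out of 15 — can be sketched as follows. This is a hypothetical illustration; the item scores shown are invented, not actual country data.

```python
# Minimal sketch of the transparency index aggregation: five subindices
# (political, economic, procedural, policy, operational), each the sum of
# three items scored 0, 0.5, or 1. The scores below are invented examples.
scores = {
    "political":   [1, 1, 0.5],
    "economic":    [1, 0.5, 0.5],
    "procedural":  [0.5, 1, 0],
    "policy":      [1, 1, 0.5],
    "operational": [1, 0.5, 0],
}

# Each item must take one of the three permitted values.
assert all(s in (0, 0.5, 1) for items in scores.values() for s in items)

subindices = {dim: sum(items) for dim, items in scores.items()}
overall = sum(subindices.values())       # ranges from 0 to a maximum of 15
print(subindices["political"], overall)  # 2.5 10.0
```

Because every item is capped at 1, each subindex lies in [0, 3] and the overall index in [0, 15], matching the range reported for the country scores in table 10.1.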
Consequently, when scoring a central bank's account of its monetary policy deliberations, we now require comprehensive minutes (or explanations in the case of a single central banker) to be published within three weeks (previously eight weeks) in order for a central bank to get full marks for criterion 3(b), while also giving some credit for summary minutes or for more comprehensive minutes that are published with a significant delay (of no more than eight weeks). With respect to voting records, full marks for criterion 3(c) now require that individual voting records are released on the day of the policy announcement (rather than within eight weeks) or else that the decision is taken by a single central banker (in which case the "voting record" is fully revealed by the policy decision). Partial credit is awarded when individual voting records are published within eight weeks or else when nonattributed voting records are released within three weeks. In the case of policy transparency, the index now distinguishes different forms of forward policy guidance. We define FG as communication of the likely timing, direction, size, and/or pace of future monetary policy actions. Partial credit is given to central banks that provide only a qualitative indication. To receive full marks for criterion 4(c), we now require that the central bank provide quantitative FG, because the greater precision is more informative and reduces ambiguity. Such FG could be time-dependent (including explicit calendar-based guidance or publication of the projected policy path) or state-contingent (for instance, numerical thresholds for inflation or the unemployment rate). The levels of the monetary policy transparency index for the 112 central banks in our sample from 1998 to 2015 are presented in table 10.1, listed by geographical region. Table 10.1 reveals substantial heterogeneity in transparency levels across countries.
The variation within and across regions is visually depicted in the world map in figure 10.1, which shows transparency index levels in 2015. Europe has on average the most transparent central banks, Africa the least transparent. However, there is also considerable variation in monetary policy transparency within each continent. For instance, the transparency index for South Africa exceeds the average for Europe throughout the sample period. The central banks with the highest monetary policy transparency scores in 2015 are listed in table 10.2. The Swedish Riksbank ranks first, getting full marks in nearly every respect, followed by the Czech National Bank and, tied for third place, the central banks of Hungary, Iceland, the Bank of England, and the US Federal Reserve.9 The Bank of Israel ranks seventh, while tied for eighth are the ECB, the Bank of Japan, the Reserve Bank of New Zealand, and the central bank of Norway. All of these central banks are from high-income economies.10 Nearly all of them are inflation targeters,11 although the Federal Reserve and the ECB show that this is not a necessary condition to achieve a high level of monetary policy transparency. The scores in table 10.1 reveal that monetary policy transparency has changed significantly over time. Overall, there has been a trend toward greater transparency. The average transparency score across all central banks has risen substantially from 3.39 in 1998 to 6.35 in 2015. However, there is tremendous heterogeneity across countries. Figure 10.2 plots the transparency index level in 2015 against the 1998 level for the 112 central banks in our sample. It shows that the trend toward greater monetary policy transparency
[Table 10.1. Monetary Policy Transparency Index by Country and Region (Unweighted): annual index scores, 1998–2015, for the 112 central banks in the sample, grouped by region.]
3.5
5
4
4.0
9
6
9.5
6
1
8
3
6.1
6.5
5
4.5
3
8
4
3.5
4.9
9
4
5
8
5
3.5
5.5
8.5
4.6
9
6
9.5
5
1
9
3
6.1
6.5
5.5
4.5
3
9.5
4
4.5
5.4
9
4
5
8
6
4
5.5
8.5
4.8
9
6
9
5.5
1
9
3
6.1
6.5
5.5
4.5
3
9.5
4
4.5
5.4
9
4
5
8
7.5
4
5.5
8.5
5.0
9
6
9
6.5
1
9
3
6.2
6.5
5.5
4.5
3
8
4
4.5
5.1
9.5
5
5.5
10
9
4
5.5
9.5
5.2
9
6
9
6.5
1
9.5
3
6.3
6.5
6
4.5
3
9.5
4
4.5
5.4
10
5.5
5.5
10.5
9.5
4
5.5
9.5
5.3
9
6
10
7
1
9.5
3
6.5
6.5
6
4.5
3
9.5
5.5
4.5
5.6
10
5.5
5.5
11
9.5
4
5.5
10
5.4
10.5
6
10
7
1
9.5
3
6.7
6.5
6
5
3
10
6
4.5
5.9
10
7
5
11
9
4
5.5
10
5.3
10.5
6
10
7
1
9.5
3
6.7
6.5
6
5
3
10
6
4.5
5.9
10
7
5
11
9
4
5.5
10
5.3
10.5
6
10
7
1
9.5
3
6.7
6.5
6
5
3
10
6
4.5
5.9
10
7
5
11.5
9.5
4
5.5
10
5.3
10.5
6
10
7
1
9.5
3
6.7
6.5
6
5
3
9.5
6
4.5
5.8
10
7
5
11.5
(Continued)
9.5
4
6.5
10
5.5
10.5
6
10
7
1
9.5
3
6.7
6.5
6
5
3
10.5
6
4.5
5.9
10
7
5
11.5
5.4
4.4
3
2
1.5
3
2.5
3.5
2.5
4
2.5
1
4.9
5
Kuwait
Lebanon
Oman
Qatar
Saudi Arabia
Turkey
United Arab Emirates
Yemen
Europe
Eastern Europe 3.9
1.5
Jordan
Belarus
Bulgaria
4.5
4.5
3.5
5
2.5
3
2.5
Hungary
Poland
Moldova
Romania
Russian Federation
Ukraine
2.5
3
3
5
5.5
8.5
Czech Republic 7.5
5
1
2.5
4
2.5
3.5
2.5
3
1.5
2
9
7
2
2
Israel
1999
Iraq
1998
Table 10.1 Continued
2.5
3
3
5
6
5
9
5
3.5
4.7
5.8
2
2.5
4.5
2.5
3.5
2.5
3
1.5
2
9.5
2
2000
3
3
3.5
6.5
7.5
6.5
9
5
3.5
5.3
6.4
2
2.5
5.5
2.5
3.5
2.5
3
1.5
2
10.5
2
2001
4.5
3
5.5
7
7
9
9.5
5
3.5
6.0
7.0
2
2.5
7
2.5
3.5
2.5
3
1.5
2
10.5
2
2002
4.5
3
6
7
7.5
9
10.5
5
4
6.3
7.3
2
2.5
7.5
2.5
3.5
2.5
3
2.5
2
10.5
2
2003
5
3.5
8.5
7
9
9
11
5.5
4
6.9
7.8
2
2.5
7.5
2.5
3.5
2.5
3
2.5
2
10.5
2
2004
5
4.5
8.5
7
10
10.5
11
6.5
4
7.4
8.0
2
2.5
7.5
2.5
3.5
2.5
3
4
3
10.5
2
2005
5
4.5
9.5
7
10.5
10.5
11
6.5
4
7.6
8.2
2
2.5
11
2.5
3.5
2.5
3
4
3
11
2
2006
5
4.5
9.5
7
11
11.5
12.5
6.5
4
7.9
8.5
2
3.5
11
2.5
3.5
2.5
3
3
4
11.5
2
2007
5
4.5
9.5
7.5
10
12.5
14
6.5
4
8.2
8.6
2
3.5
11
3.5
3.5
2.5
3
3
4
12
2
2008
5
4.5
9.5
7.5
10
12.5
14
6.5
4
8.2
8.8
2
3.5
11
3.5
3.5
2.5
3
3
4
12
2
2009
5
4.5
9.5
9
10
12.5
14
6.5
4
8.3
8.9
2
3.5
11
3.5
3.5
3
3
3
3.5
12.5
2
2010
5
6.5
9.5
9
11
12.5
14
6.5
4
8.7
9.1
2
3.5
11
3.5
3.5
3
3
3
4.5
12.5
2
2011
5
6.5
9.5
9
11.5
12.5
14
7.5
4.5
8.9
9.2
2
3.5
11
3.5
3.5
3
3
3
4.5
12
2
2012
7.5
6.5
9.5
9
11.5
12.5
14
7.5
5.5
9.3
9.5
2
3.5
11
3.5
3.5
3
3
3
4.5
12
2
2013
7
8.5
9.5
9
11.5
12.5
14
7.5
5.5
9.4
9.6
2
3.5
11
3.5
3.5
3
3
3
4.5
12
2
2014
8
8.5
9.5
9
11
12.5
14
7.5
5
9.4
9.6
2
4.5
11
3.5
3.5
3
3
3
4.5
12
2
2015
7.0
6
5.5
6
7.5
10
3.8
4
4
3
4
6.8
5.5
8
3.9
9.3
8
10.5
2.0
3
2
Northern Europe
Denmark
Iceland
Norway
Sweden
United Kingdom
Southern Europe
Albania
Bosnia and Herzegovina
Croatia
Macedonia
Western Europe
Switzerland
Euro Area
Oceania
Australia and New Zealand
Australia
New Zealand
Melanesia
Fiji
Papua New Guinea
2
3
2.0
13
8
10.5
4.2
8
6
7.0
4
3
4
4.5
3.9
11.5
9
6
5.5
6
7.6
2
4.5
2.4
13
8
10.5
4.4
8
6.5
7.3
5.5
3
4
5
4.4
12
11.5
6
7
6
8.5
3.5
5.5
3.1
13.5
8
10.8
4.8
9.5
7
8.3
6
3
5
5
4.8
12
11.5
7
8
6
8.9
4
5.5
3.3
13.5
9
11.3
5.0
10
7
8.5
7.5
3
6
5
5.4
12
13.5
7
8.5
6
9.4
4
5.5
3.1
13.5
9
11.3
4.9
10
8
9.0
8
3
6
5.5
5.6
12
13.5
7
8.5
7
9.6
4
5.5
3.3
13.5
9
11.3
5.3
10.5
9
9.8
8.5
3
6
6
5.9
12
13.5
9
8.5
7
10.0
4
5.5
3.1
13.5
9
11.3
5.2
10.5
9
9.8
8.5
3
6
6
5.9
12
13.5
9.5
8.5
7
10.1
4
5.5
3.3
13.5
9
11.3
5.3
10.5
9
9.8
8.5
3
6
6
5.9
12
13.5
10.5
9
7
10.4
4
5.5
3.4
13.5
9
11.3
5.4
10.5
9
9.8
8.5
3
6
7.5
6.3
12
14
10.5
9
7.5
10.6
4
5.5
3.4
13.5
11
12.3
5.6
10.5
9
9.8
8.5
3
6
7.5
6.3
12
14
10.5
9
8
10.7
4
5.5
3.8
13.5
11
12.3
5.8
10.5
9
9.8
8.5
3
6
8
6.4
12
14.5
10.5
12.5
8
11.5
4
5.5
3.8
13.5
11
12.3
5.8
10.5
9
9.8
8.5
3.5
6
8.5
6.6
12
14.5
10.5
12.5
8
11.5
4
5.5
3.9
13.5
11
12.3
6.1
10.5
9
9.8
9.5
3.5
6
8.5
6.9
12
14.5
10.5
12.5
8
11.5
4
5.5
3.9
13.5
11
12.3
6.1
10.5
9
9.8
9.5
3.5
6
8.5
6.9
12
14.5
10.5
12.5
8
11.5
4
5.5
3.9
11.5
11
11.3
5.8
11.5
9
10.3
9.5
3.5
6
9
7.0
12
14.5
11.5
12.5
8
11.7
4
5.5
3.9
11.5
11
11.3
6.1
11
9
10.0
9.5
3.5
6
9
7.0
12.5
14.5
11.5
12.5
8
11.8
(Continued)
4
5.5
3.9
11.5
11
11.3
6.1
11.5
9
10.3
9.5
3.5
6
9
7.0
12.5
14.5
11.5
12.5
8
11.8
2
3.60
6.88
2.3
2.5
2
3.39
6.38
Polynesia
Samoa
Tonga
World
World Economy
7.09
3.91
2
2.5
2.3
2
1
2000
7.50
4.20
2
2.5
2.3
2
1.5
2001
7.82
4.54
2
2.5
2.3
2
1.5
2002
7.90
4.70
2
2.5
2.3
2
1
2003
8.24
5.02
2
4.5
3.3
2
1.5
2004
8.55
5.25
2
4.5
3.3
2
1
2005
8.61
5.45
2
5
3.5
2
1.5
2006
8.62
5.61
2
5
3.5
2
2
2007
8.99
5.75
2
5
3.5
2
2
2008
9.12
5.90
2
5
3.5
3.5
2
2009
9.21
5.95
2
5
3.5
3.5
2
2010
9.28
6.12
3.5
5
4.3
4
2
2011
6.24
3.5
4.5
4.0
4
2
2013
6.30
5.5
5
5.3
4
2
2014
6.35
5.5
5
5.3
4
2
2015
10.14 10.42 10.37 10.50
6.17
3.5
5
4.3
4
2
2012
*
The central banks for WAEMU and CEMAC only maintained a limited or no website in English during the sample period.
Note: Transparency index for each region (in italics) is the unweighted average across central banks in that region, using UN region classifications. The index for the World Economy is constructed as the GDP-weighted average across central banks, using fixed weights based on GDP in 2006 in US dollars at market prices, using GDP data from the World Bank’s World Development Indicators, with Curaçao excluded due to lack of GDP data. ECCU: Eastern Caribbean Currency Union. CEMAC: Central African Economic and Monetary Community. WAEMU: West African Economic and Monetary Union.
2.5
2.3
2
2
Vanuatu
1
1
1999
Solomon Islands
1998
Table 10.1 Continued
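As the note to table 10.1 explains, each regional aggregate is the unweighted average of the scores of the central banks in that region. A minimal sketch of that aggregation, using hypothetical scores rather than the published data:

```python
# Unweighted regional average of transparency scores, as described in the
# note to Table 10.1. The scores below are hypothetical illustrations, not
# the published values; the actual data set covers 112 central banks.
from collections import defaultdict

scores = {
    "Ghana": ("Western Africa", 5.5),
    "Nigeria": ("Western Africa", 6.0),
    "Canada": ("Northern America", 10.5),
    "United States of America": ("Northern America", 12.5),
}

by_region = defaultdict(list)
for bank, (region, score) in scores.items():
    by_region[region].append(score)

# Each region's index is the plain mean of its members' scores.
regional_avg = {r: sum(v) / len(v) for r, v in by_region.items()}
print(regional_avg)
```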
Transparency of Monetary Policy Postcrisis 301
Figure 10.1 Monetary Policy Transparency in the World in 2015
The rise in monetary policy transparency since the late 1990s is widespread. Many central banks experienced a significant rise in the transparency index from 1998 to 2015, several even by more than 6 points. The strongest increases, which are listed in table 10.3, have been in Hungary, Poland, and Thailand, followed by Iceland, Romania, Sweden, and Turkey, and then Brazil, the Czech Republic, and Georgia. These countries have in common that they all adopted inflation targeting. Although half of these "top risers" are currently high-income countries, upper-middle-income countries (Brazil, Romania, Thailand, and Turkey) are also well represented, and one lower-middle-income country (Georgia) is included. At the same time, some central banks in the sample did not show a (net) increase in transparency, but none experienced a net decline in the transparency index during our sample period from 1998 to 2015. There was also a trend toward greater monetary policy transparency in the second half of the sample period (2007–2015), the period dominated by the global financial crisis.
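The ranking of "top risers" listed in table 10.3 follows directly from the 1998 and 2015 scores. A short sketch that reproduces the increases for a few of the listed countries (scores as published in that table):

```python
# Increase in the transparency index between 1998 and 2015 for some of the
# "top risers" of table 10.3, using the (1998, 2015) scores published there.
scores = {
    "Hungary": (4.5, 12.5),
    "Poland": (3.5, 11.0),
    "Thailand": (3.0, 10.5),
    "Sweden": (7.5, 14.5),
    "Brazil": (2.0, 8.5),
}

increases = {c: s2015 - s1998 for c, (s1998, s2015) in scores.items()}

# Rank countries by the size of the increase, largest first.
top = sorted(increases.items(), key=lambda kv: kv[1], reverse=True)
print(top)
```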
Figure 10.3 shows that only a handful of central banks experienced a net decline in overall transparency in this subperiod.12 In a few cases, these declines in transparency scores reflected fundamental changes in central bank statutes (Argentina in 2012, Kenya in 2010), which removed prioritization and quantification of monetary policy objectives, and (in the case of Argentina) limited the independence of the central bank.13 In other cases, transparency declines reflected the decisions of central banks to discontinue publication of policy statements (Barbados), inflation reports (Eastern Caribbean Currency Union), or other documents in English, although in some cases (such as Guatemala), they continued to be published in the local language, so the lower index scores need not imply a reduction in transparency for a domestic audience.14 The largest net drop in transparency scores over the 2007–2015 period has, perhaps surprisingly, been for the Reserve Bank of New Zealand (RBNZ), which in 2013
Table 10.2 Monetary Policy Transparency Index and Its Components in 2015 for the Top 11 Countries

Country            Transparency index
Sweden                   14.5
Czech Republic           14
Hungary                  12.5
Iceland                  12.5
United Kingdom           12.5
United States            12.5
Israel                   12
Euro Area                11.5
Japan                    11.5
New Zealand              11.5
Norway                   11.5

[The scores for the individual index components 1(a) through 5(c) could not be reliably recovered from the source text.]
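The overall index is simply the sum of the fifteen item scores, 1(a) through 5(c), three items per component, each scored from 0 to 1 in half-point steps. A minimal sketch of that scoring (the item values below are hypothetical, not any country's published scores):

```python
# The transparency index sums fifteen items, three per component, each
# scored 0, 0.5, or 1, for a maximum of 15. Item scores are hypothetical.
components = ["political", "economic", "procedural", "policy", "operational"]

item_scores = {
    "political":   [1.0, 1.0, 1.0],   # items 1(a), 1(b), 1(c)
    "economic":    [1.0, 0.5, 1.0],   # items 2(a), 2(b), 2(c)
    "procedural":  [1.0, 0.5, 0.0],   # items 3(a), 3(b), 3(c)
    "policy":      [1.0, 1.0, 0.5],   # items 4(a), 4(b), 4(c)
    "operational": [1.0, 1.0, 1.0],   # items 5(a), 5(b), 5(c)
}

component_scores = {c: sum(item_scores[c]) for c in components}
total = sum(component_scores.values())  # overall index, maximum 15
print(component_scores, total)
```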
Figure 10.2 Comparison of Monetary Policy Transparency Index in 1998 and 2015
Table 10.3 Monetary Policy Transparency Index of Top Increasing Countries between 1998 and 2015

Country            1998    2015    Increase
Hungary            4.5     12.5    8
Poland             3.5     11      7.5
Thailand           3       10.5    7.5
Iceland            5.5     12.5    7
Romania            2.5     9.5     7
Sweden             7.5     14.5    7
Turkey             4       11      7
Brazil             2       8.5     6.5
Czech Republic     7.5     14      6.5
Georgia            3       9.5     6.5
established a Governing Committee responsible for setting monetary policy, something that was previously done by the governor. Although committee decision-making has many merits, it increases the information disclosure requirements to achieve procedural transparency. The existence of a single central bank decision-maker naturally leads to full marks for voting transparency, item 3(c) of our index, because the "voting record" is completely revealed by the policy decision, whereas this is not the case for a committee. In addition, although a single central banker needs to provide a comprehensive explanation of the arguments behind the decision to achieve procedural transparency, in particular for item 3(b) of our index, for a committee it is also important to have minutes or a similar account of the discussions at the policy meeting, because this will shed light on the arguments and policy options that were considered and differences in views among the committee members. The decline in transparency score for the RBNZ thus reflects its decision not to release the minutes and voting records of the monetary policy decisions made by the Governing Committee following the transition from a single-decision-maker structure.15

304 Dincer, Eichengreen, and Geraats

Figure 10.3 Comparison of Monetary Policy Transparency Index in 2007 and 2015 Note: The transparency index declined from 2007 to 2015 for Argentina, Barbados, Eastern Caribbean Currency Union, Guatemala, Kenya, and New Zealand.

Despite a few declines in overall transparency scores in the postcrisis period, figure 10.4 shows that monetary policy transparency increased in much of the world between 2007 and 2015. Interestingly, there was a sizable rise in transparency in several large economies, including Russia and Japan, which both adopted inflation targeting, Brazil, an inflation targeter which resumed transparency increases after a temporary drop during 2006–2009 (when its inflation reports were only released in English with prolonged delays), and the United States, which improved information disclosure within its own "dual mandate" framework (see the case study in section 10.4.2). Overall, however, the trend toward greater transparency appears to have slowed in the postcrisis period relative to earlier years. Table 10.1 shows that the average transparency score across all central banks rapidly rose from 3.4 in 1998 to 5.6 in 2007 but then inched up to 6.35 in 2015. That said, there is little evidence of a slowdown during the postcrisis period when taking into account the economic weight of central banks by constructing a transparency index for the world economy using a GDP-weighted average.
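The GDP-weighted construction can be sketched as follows. The GDP figures and scores below are made up for illustration; as the note to table 10.1 explains, the published measure uses fixed weights based on 2006 GDP in US dollars:

```python
# GDP-weighted average transparency index with fixed weights, in the spirit
# of the world-economy measure. GDP values and scores are hypothetical.
gdp_2006 = {"A": 13_000, "B": 4_400, "C": 2_600}   # billions of US dollars
scores_2015 = {"A": 12.5, "B": 11.5, "C": 6.5}

total_gdp = sum(gdp_2006.values())
weights = {c: g / total_gdp for c, g in gdp_2006.items()}  # fixed weights

world_index = sum(weights[c] * scores_2015[c] for c in scores_2015)
print(round(world_index, 2))
```

Because the weights are fixed at their 2006 values, later movements in the index reflect changes in transparency scores, not shifts in relative GDP caused by the financial crisis.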
We use fixed weights to show the development of transparency, with the weights based on GDP (in US dollars at market exchange rates) in 2006 in order to avoid distortions due to the financial crisis. Based on this measure, our transparency index for the world economy rose from 6.4 in 1998 to 8.6 in 2007 and then rose further to 10.5 in 2015.16

Figure 10.4 Change in Monetary Policy Transparency in the World for 2007–2015

To better understand the development of transparency over time, figure 10.5 shows the (unweighted) average of the monetary policy transparency index from 1998 to 2015, where countries are grouped by their levels of economic development, using the World Bank income classification (for fiscal year 2016).17 Figure 10.5 suggests that transparency has increased for each of the four World Bank income categories, although there are differences across the income groups. Monetary policy transparency tends to be positively related to the level of economic development. The average transparency score for high-income countries is about twice the average for low-income countries. The low-income group experienced the smallest rise in transparency between 1998 and 2015 (nearly 2.5, compared to 3.5 for lower-middle-income countries, and around 3 for the other income groups), thus widening the gap with others. In addition, time profiles differ by levels of economic development, with the upper-middle-income group displaying a sharp increase in transparency starting in 2000 driven by the adoption of inflation targeting in Thailand (whose score jumped from 3 to 8.5 in 2000), Brazil, South Africa, and then Romania and Turkey, among others. The lower-middle-income group experienced a similar advance in transparency a few years later with the introduction of inflation targeting in the Philippines (which jumped from 4.5 to 9.5 in 2002), Ghana, Guatemala, Indonesia, and Armenia, among others. These rapid rises in transparency for the upper- and lower-middle-income groups then slowed in 2006 and 2007. The low-income group experienced a slowdown in transparency
Figure 10.5 Trends in Monetary Policy Transparency by Level of Economic Development Notes: Unweighted average transparency index across central banks grouped by World Bank income classification (for fiscal year 2016). ECCU (Eastern Caribbean Currency Union), CEMAC (Central African Economic and Monetary Community), and WAEMU (West African Economic and Monetary Union) were classified by using GNI in US dollars and population data of each country to compute GNI per capita for the region.
rises starting in 2008, and the high-income group after 2009. However, the increase in average transparency has not been monotonic for each income group, with a temporary decline for the low-income group in 2010 and for the lower-middle-income group in 2012.18

Figure 10.6 shows trends in transparency by monetary policy framework over the 1998–2015 sample period. It shows the (unweighted) average transparency index across central banks grouped by their monetary policy framework, using the IMF's de facto classification in 2015, which distinguishes inflation targeters, exchange rate targeters, monetary aggregate targeters, and others.19 Transparency tends to be much higher for inflation targeters than for central banks with other monetary frameworks. Inflation targeting is arguably the most information-intensive of the three targeting regimes. In this sense, it is intuitive that central banks adopting this regime should rely most heavily on transparency as part of their monetary policy strategy. Exchange rate targeters tend to disclose much less information, while monetary targeters, which include China and more than a dozen mostly developing countries, were most opaque on average, until they caught up with exchange rate targeters in 2013. The "other" category is made up of a heterogeneous group of central banks, the most prominent of which are the ECB, the Federal Reserve, and the Swiss National Bank. It is not surprising that the inflation-targeting group (around thirty central banks in our sample) has experienced the strongest increase in transparency scores (of +5). It is important, however, as figure 10.6 reveals, that the rise in transparency has not been
Figure 10.6 Transparency Trends by Monetary Policy Framework Notes: Average monetary policy transparency index by IMF de facto monetary policy framework (2015). Bermuda, Cayman Islands, Cuba, and Macao, which are not represented in IMF classification, are excluded.
confined to inflation targeters. The "other" category also displayed a substantial increase in transparency (around +3). Furthermore, monetary targeters showed a significant rise (of +2.5), while the increase in transparency for exchange rate targeters was a bit smaller (+1.5). There is a noticeable slowdown in the rise of average transparency for exchange rate targeters (after 2005) and inflation targeters (after 2011). In addition, the increase in average transparency has not been monotonic, with small dips for monetary targeters in 2008 and 2010.20

We can also compare the transparency of monetary policy for central banks with a single formal objective or multiple objectives with explicit prioritization, versus central banks with multiple nonprioritized objectives. Dincer and Eichengreen (2016) classify central banks according to this distinction, dropping central banks whose formal monetary policy objectives were not specified (in English).21 Table 10.4 shows the average (unweighted) transparency score in 2015 for central banks with a single or prioritized objective for monetary policy versus central banks with multiple nonprioritized objectives, after adjusting the score of the former by subtracting 0.5 to correct for the fact that they get a higher score for item 1(a). Clearly, central banks with a single or prioritized objective are more transparent in other respects as well, compared to central banks with multiple, nonprioritized objectives. The difference in average scores is 2.35, which is highly statistically significant.

This result may be distorted by the presence of many (highly transparent) inflation targeters in the group with a single or prioritized objective(s). With this in mind, table 10.4 shows the comparison between the two groups separately for inflation targeters and for other central banks.22 It turns out that for the latter, central banks with a single or prioritized objective(s) tend to be more transparent in other respects as well, with a
Table 10.4 Comparison of Average Transparency Scores with Single/Prioritized versus Multiple/Nonprioritized Monetary Policy Objectives in 2015 for Inflation-Targeting versus Non-Inflation-Targeting Central Banks

Average 2015 transparency score [No. of central banks]

                                    Single/prioritized   Multiple, nonprioritized   p-value of t-test,
                                    (SP) objectives      (NP) objectives            SP vs. NP
All                                 7.49 [51]            5.14 [59]                  0.0001
Inflation-targeting (IT)            9.71 [26]            9.86 [7]                   0.8756
Non-inflation-targeting (non-IT)    5.18 [25]            4.50 [52]                  0.2120
p-value of t-test, IT vs. non-IT    0.0000               0.0000

Note: Using IMF de facto classification of monetary policy frameworks in 2015 and p-values for t-test of equality of average transparency scores. Transparency scores for central banks with single/prioritized objectives have been adjusted by -0.5 to control for the difference due to 1(a). The sample excludes CEMAC and WAEMU, for which no formal monetary policy objectives are available (in English).
difference in average (adjusted) score of 0.68, although this is not statistically significant, whereas for inflation targeters, there is hardly any difference. Conceivably, central banks with multiple objectives may find accountability more difficult and therefore seek to be more transparent. The present comparisons, strikingly, are not consistent with this conjecture.

We can further analyze the rise in transparency of central banks throughout the world by decomposing the index into its five components: political, economic, procedural, policy, and operational transparency. Figure 10.7 shows how the transparency decomposition for the (unweighted) average over all central banks has evolved during our sample period. The average central bank experienced an increase for all five components from 1998 to 2015. Clearly, economic and policy transparency has become a much more prominent feature of monetary policy, especially during the first half of our sample period. Our GDP-weighted transparency index for the world economy, which is shown in figure 10.8, similarly displays a large increase in these dimensions, while procedural transparency has also gained much greater prominence for this measure. This suggests that (at least some) large economies have put more emphasis on procedural aspects, which provide information on how monetary policy decisions are made (e.g., by publishing a monetary policy strategy, minutes, and voting records). Perhaps they find this more important because of their stronger influence on world financial markets.

While the (unweighted) average of transparency in figure 10.7 shows a steady increase over time, with some slowing in the postcrisis period, the (GDP-weighted) index for the world economy in figure 10.8 features a big jump upward in 2012 (by nearly 1 point), involving a noticeable increase in political, economic, procedural, and policy
Figure 10.7 Decomposition of Transparency Trend across Central Banks Note: Unweighted average transparency index for all 112 central banks in the sample.
Figure 10.8 Decomposition of Transparency Trend for the World Economy Notes: The transparency index for the world economy is constructed as the weighted average of the index across all central banks, using as weights their 2006 GDP shares in aggregate GDP in our sample, where GDP is in US dollars and taken from the World Development Indicators of the World Bank. Due to unavailability of GDP for Curaçao, we excluded it from the sample.
transparency. This is driven by a large change in the transparency index for the United States (from 9.5 to 12.5; see also the case study in section 10.4.2).

To understand how the increases in different transparency components have developed, table 10.5 provides further detail on the specific ways in which central banks have enhanced their information disclosure. It shows the number of central banks in our sample of 112 that provided specific information in 1998, 2006, and 2015, thus permitting comparisons of the trend in both the pre- and postcrisis periods. It is based on the scores for the fifteen individual items that constitute our transparency index.
Table 10.5 Information Disclosure by Central Banks Over Time

Number of central banks disclosing information          1998   2006   2015

Political transparency
  Formal primary objective(s with prioritization)         36     47     51
  Quantified main monetary policy objective(s)            21     35     43
  Explicit instrument independence                        40     53     58
Economic transparency
  Macroeconomic policy model(s)                            5     21     28
  Numeric macroeconomic forecasts                          9     49     66
  Quarterly medium-term inflation and output forecasts     4     15     31
Procedural transparency
  Explicit monetary policy strategy                       51     79     89
  Minutes (within eight weeks)                             6     15     25
  Comprehensive, timely minutes                            2     11     18
  Voting balance/records (within three/eight weeks)        9     11     16
  Prompt individual voting records                         5      6      6
Policy transparency
  Prompt announcement policy adjustments                  16     51     56
  Explanation of policy adjustments                       13     43     54
  Always explanation of policy decision                    3     17     34
  Qualitative FG                                           0      4      9
  Quantitative FG                                          0      1      5
Operational transparency
  Monetary transmission disturbances                      14     47     58
  Evaluation monetary policy outcomes                     31     61     69

Note: Information disclosure based on scores for individual components of transparency index for full sample of 112 central banks.
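The counts in table 10.5 translate directly into shares of the 112 central banks in the sample, which is how the percentage comparisons in the surrounding discussion are obtained. A small sketch of that computation using a few of the table's rows:

```python
# Shares of the 112 sampled central banks disclosing particular information,
# computed from the counts reported in table 10.5.
N = 112  # central banks in the sample

counts = {  # item: counts in (1998, 2006, 2015)
    "explicit instrument independence": (40, 53, 58),
    "numeric macroeconomic forecasts": (9, 49, 66),
    "explicit monetary policy strategy": (51, 79, 89),
}

shares = {item: tuple(round(c / N, 2) for c in yrs)
          for item, yrs in counts.items()}
print(shares)
```

For instance, the share publishing numeric macroeconomic forecasts goes from under 10 percent in 1998 to close to 60 percent in 2015, matching the text.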
Clearly, there has been a broad rise in political transparency. While in 1998, thirty-six central banks formally had a single monetary policy objective or multiple objectives with prioritization, this number rose to forty-seven in 2006 and fifty-one in 2015. An even larger increase is apparent in the number of central banks that have quantified their primary or main monetary policy objective(s), for instance by publishing an explicit inflation target. Furthermore, the prevalence of explicit instrument independence has risen from around one-third to more than one-half of central banks in our sample.

Regarding economic transparency, the number of central banks that disclose the formal macroeconomic model(s) they use for policy analysis rose from just five in 1998 to twenty-eight in 2015. The share of central banks that publish numeric macroeconomic forecasts rose from less than 10 percent in 1998 to close to 60 percent in 2015, with most of the increase taking place in the first half of our sample period. However, the release of quarterly, medium-term forecasts for both inflation and output is not as common, although it increased substantially from only four central banks in 1998 to thirty-one in 2015.

Concerning procedural transparency, the publication of an explicit monetary policy strategy has become widespread, with fifty-one central banks doing so in 1998 but fully eighty-nine following this practice in 2015. The release of minutes (within eight weeks) is substantially less common, but there was a considerable rise in this practice over the sample period, from only six central banks in 1998 to fifteen in 2006 and twenty-five in 2015, although not all of them provide what we classify as comprehensive, timely minutes.
The number of central banks disclosing their voting balance/records within three/eight weeks rose from nine in 1998 to sixteen in 2015, with nearly all of them releasing individual voting records and most of the increase occurring in the postcrisis period.23 The prompt release of individual voting records, however, has remained rare, with only around five central banks adopting this practice.24

In terms of policy transparency, the largest increase has been in the prompt announcement of policy adjustments, a practice that spread from sixteen central banks in 1998 to fifty-six in 2015. The prevalence of the inclusion of an explanation in the announcement of policy adjustments similarly increased from around 10 percent to 50 percent of central banks. However, considerably fewer central banks provide a prompt explanation for every policy decision when policy settings remain unchanged, although the adoption of this best practice picked up pace from just three in 1998 to seventeen in 2006 and thirty-four in 2015.

Relatively few central banks regularly use forward policy guidance, although it has been deployed more frequently in the postcrisis period. In addition, some central banks occasionally used forward policy guidance, which is not picked up by the index scores. Qualitative FG (e.g., a statement that policymakers expect to keep the policy rate low for an "extended period") was regularly used by nine central banks in 2015,25 while five central banks used quantitative FG. The latter was sometimes calendar-based (as adopted by Canada in 2009). In a few cases, it was state-contingent, in that it took the form of a numeric threshold for inflation (as used by Japan in 2001) or unemployment (deployed by the Federal Reserve in 2012 and the Bank of England in 2013). Most commonly, quantitative FG has been provided through the publication of the projected path for
the policy rate (as is being done at the time of this writing by the Czech Republic, New Zealand, Norway, Sweden, and the United States), a practice that provides a comprehensive form of time-dependent quantitative FG.

Finally, operational transparency increased substantially. The number of central banks providing regular information on monetary transmission disturbances rose from fourteen in 1998 to fifty-eight in 2015. The majority of central banks in our sample provided an evaluation of monetary policy outcomes in 2015, compared to fewer than a third in 1998. In both instances, the rise in transparency occurred predominantly during the precrisis period.

In all, table 10.5 provides further evidence of a broad increase in information disclosure by central banks over the period since 1998. The rise was stronger in the first half of the sample period, especially for political, economic, and operational transparency, and slowed significantly after the global financial crisis, although not in all respects. In particular, no postcrisis slowdown was evident for the publication of minutes, voting balance, explanations of monetary policy decisions, and regular qualitative forward guidance. Furthermore, the use of regular quantitative FG, though still uncommon, and the publication of quarterly medium-term macroeconomic forecasts have both seen a noticeable increase in take-up since the financial crisis. Evidently, central banks have found these communication tools useful to cope with heightened uncertainty in the postcrisis world.
10.4 Case Studies

This section takes a closer look at some of the transparency challenges facing prominent central banks in the postcrisis world: the European Central Bank (ECB), the US Federal Reserve (Fed), and the Bank of England (BoE).
10.4.1 European Central Bank

The primary objective of the ECB, as stipulated by Article 105(1) of the treaty establishing the European Community as amended by the 1992 Maastricht Treaty on European Union, is to maintain price stability and, subject to that, to support the general economic policies of the European Union. The ECB has defined price stability as 0 to 2 percent HICP inflation for the euro area but clarified in 2003 that it aims to maintain inflation "below but close to" 2 percent over the medium term. The operational independence of the ECB is enshrined in the Maastricht Treaty, which arguably makes it the most independent central bank in the world.

As the central bank of a new monetary union, the ECB made a concerted effort to provide detailed information on economic and financial developments in the euro area. It used to publish an extensive Monthly Bulletin, which was replaced in 2015 by the Economic Bulletin, which is released eight times a year. In December 2000, the
ECB started publishing semiannual macroeconomic projections for the euro area by Eurosystem staff, including for HICP inflation and real GDP growth. Since September 2004, it has also released ECB staff projections in the intervening quarters. For a long time, the projections were only for annual data with a horizon of one to two calendar years ahead. They were presented in a table as ranges, to reflect their uncertainty. But in 2013, the presentation became more detailed as the macroeconomic projections were also shown in a graph for a horizon of eight to ten quarters ahead, with a band around the central projection illustrating the underlying uncertainty. The projections are based on the ECB’s euro-area-wide macroeconometric model, which it has published since 2001, thus providing a high degree of economic transparency. In January 1999, the ECB published its “two-pillar strategy” for monetary policy, which was based on a prominent role for money in addition to an assessment of the outlook for price developments and risks to price stability. This strategy was modified in 2003 by restructuring the pillars such that economic analysis is used to identify short- to medium-term risks to price stability, and monetary analysis is used to assess medium- to long-term trends as a cross-check. The ECB has provided policy transparency by explaining its interest-rate decisions at a press conference following its monetary policy meeting. For a long time, it stayed completely mum about its policy inclination, declaring it would never precommit. In response to turmoil in money markets starting on August 9, 2007, the ECB introduced supplementary longer-term refinancing operations (LTROs) with a maturity of three months. At the end of March 2008, it announced the introduction of six-month LTROs.
The ECB has often provided considerable advance notice of these supplementary LTROs, thereby helping to stabilize money-market conditions for longer maturities.26 After the collapse of Lehman Brothers on September 15, 2008, the ECB took further steps to stabilize euro money markets and achieve its “objective to keep short term rates close to the interest rate on the main refinancing operation.”27 Its weekly main refinancing operations (MROs) changed from a variable-rate tender procedure with a minimum bid rate equal to the ECB’s main refinancing rate (or “refi rate”) to a fixed-rate tender procedure with full allotment at the refi rate. The three- and six-month LTROs increased in frequency and were also changed to fixed-rate tender procedures with full allotment.28 These steps had a dramatic effect on the euro area overnight interbank rate, EONIA, which had previously fluctuated around the refi rate. Since the full-allotment provision of liquidity started on October 15, 2008, EONIA has mostly remained significantly below the refi rate, which is the key policy rate meant to signal the ECB’s monetary policy stance. The ample provision of liquidity on demand has driven EONIA very close to the rate on the ECB deposit facility (typically hovering only around 10 basis points above it). As a result, during the postcrisis period, the main refi rate has no longer been a good indication of the ECB’s de facto monetary policy stance, hampering the central bank’s policy transparency (see also Geraats 2011).
Figure 10.9 Eurozone Money Market since the Financial Crisis
Note: EONIA and ECB main refinancing rate, deposit rate, and lending rate, January 2, 2007–December 30, 2016. The chart marks Aug. 9 (2007), Sept. 15 (2008), the one-year and three-year LTROs, and QE.
Source: ECB.
This is illustrated in figure 10.9, which shows EONIA and the ECB’s main refinancing rate, together with the interest rates on its marginal lending facility and deposit facility, from January 2, 2007, to December 30, 2016. Money-market turmoil starting in August 2007 clearly led to larger fluctuations in EONIA around the ECB’s main refi rate. In the post-Lehman period, when the refi rate was cut from 4.25 percent to 1 percent, EONIA remained volatile but dropped well below the refi rate. The introduction of one-year LTROs at the end of June 2009, with full allotment and a fixed rate equal to the refi rate, brought greater stability but also imposed a systematic distortion on euro area money markets. For more than a year, EONIA was roughly 65 basis points below the refi rate and just 10 basis points above the deposit rate, with a standard deviation of only 6 basis points.29 The euro area sovereign debt crisis again led to greater volatility, although EONIA mostly remained well below the refi rate. The introduction of three-year LTROs in late 2011 was followed by nearly two years of remarkable stability in EONIA at levels very close to the deposit rate. Although EONIA started moving closer to the refi rate between early 2014 and early 2015, after the start of LSAPs by the ECB on March 9, 2015 (initially at a rate of €60 billion per month), EONIA fell to levels closer to the deposit rate again. From mid-2015 until the end of 2016, EONIA was generally only around 6 basis points above the deposit rate, dropping to –0.34 percent after the main refi rate had been cut to 0 and the deposit rate to –0.40 percent on March 16, 2016. As a result, the main refi rate has been an inaccurate signal of the ECB’s monetary policy stance in the postcrisis period, insofar as very short-term market interest rates in the euro area have been generally significantly lower (Geraats 2011).
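The spread statistics cited above (EONIA roughly 65 basis points below the refi rate and about 10 basis points above the deposit rate) can be computed directly from a daily rate series. A minimal sketch in Python, using hypothetical rate values chosen for illustration rather than actual ECB data:

```python
import statistics

# Hypothetical daily EONIA fixings (percent) during a full-allotment period,
# together with the ECB policy rates; illustrative values, not actual ECB data.
refi_rate = 1.00
deposit_rate = 0.25
eonia = [0.34, 0.36, 0.33, 0.35, 0.41, 0.36, 0.35, 0.29, 0.36, 0.35]

# Spreads in basis points: how far EONIA sits below the refi rate and
# above the deposit rate, as discussed in the text.
below_refi = [(refi_rate - r) * 100 for r in eonia]
above_depo = [(r - deposit_rate) * 100 for r in eonia]

print(f"mean spread below refi rate: {statistics.mean(below_refi):.1f} bp")
print(f"mean spread above deposit rate: {statistics.mean(above_depo):.1f} bp")
print(f"EONIA standard deviation: {statistics.pstdev(eonia) * 100:.1f} bp")
```

With a series like this, the mean spreads reproduce the stylized pattern in the text: EONIA pinned well below the refi rate and hugging the deposit rate, which is why the refi rate lost its signaling role.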
Among its nonstandard measures adopted in response to the financial crisis, the ECB introduced several programs involving significant asset purchases, with various degrees of transparency. For instance, the first covered bond purchase program was preannounced on May 7, 2009, with operational details (including the amount of €60 billion and the period from July 2009 to July 2010) released on June 4 and the first purchases starting on July 6, 2009. An example of an even lengthier introduction was the asset-backed securities (ABS) purchase program. On June 5, 2014, the ECB stated at its press conference that it had decided to “intensify preparatory work related to outright purchases in the ABS market.” On September 4, 2014, it announced an ABS purchase program but left the “detailed modalities” to be announced on October 2, 2014. Even then, it still did not specify the size or duration of the ABS purchase program in question. As late as November 6, 2014, the ECB only indicated that its asset purchases would have a “sizable impact” on its balance sheet. Clearly, policy transparency has been rather limited for this asset purchase program. The preannouncement of asset purchase programs under development could still be helpful for stabilizing asset prices, even if the full details of the program are not announced concurrently. For example, at the height of the euro area sovereign debt crisis, when fears arose of a breakup of the monetary union, ECB President Mario Draghi declared on July 26, 2012, that the ECB was “ready to do whatever it takes” to preserve the euro as the single currency.
After its meeting of August 2, 2012, the ECB stated that it “may undertake outright open market operations of a size adequate to reach its objective.” On September 6, 2012, the ECB announced its program of “outright monetary transactions” (OMT), involving potentially unlimited sterilized purchases of euro area sovereign debt with maturities of one to three years on the secondary market, with further details to be published later. This announcement proved so effective at reducing euro area periphery sovereign debt yields that even five years later (in 2018), the ECB had not needed to make any actual purchases under the provisions of this program. This example vividly illustrates the power of central bank communication. Perhaps this positive experience helps to explain why the ECB also started to communicate its policy rate prospects through qualitative FG. Since July 2013, the ECB has stated that it expects its policy rates to remain low for an “extended period of time.” The ECB took another step toward greater transparency in January 2015, when it started to publish an “account” of its monetary policy meeting with a delay of four weeks. It provides the monetary policy considerations and policy options that were presented at the meeting, as well as the ensuing discussion by the Governing Council. Although it gives a useful description of the arguments that were discussed, it is not very informative about the strength of any dissenting views. Moreover, the ECB still does not release voting records, not even the balance of votes in unattributed form. As a result, there remains considerable scope for further improvements in ECB transparency.
10.4.2 Federal Reserve System

The US Federal Reserve System has multiple nonprioritized objectives and no explicit instrument independence for its decision-making Federal Open Market Committee (FOMC). According to the Federal Reserve Act (sec. 225a), the Fed shall maintain long-run money and credit growth commensurate with long-run growth in potential output to promote “maximum employment, stable prices and moderate long-term interest rates.” Since the last goal (moderate long-term interest rates) is implied by price stability in the long run, the Fed’s objectives are commonly referred to as a dual mandate. In January 2012, the Fed further clarified its interpretation of this mandate by announcing that the inflation rate consistent with its mandate is 2 percent in terms of the personal consumption expenditure (PCE) price index and by publishing FOMC participants’ estimates of the longer-run normal rate of unemployment.30 Thus, it effectively provided a quantification of its main monetary policy objectives of “maximum employment” and price stability. At the same time, it provided an explicit description of its monetary policy strategy based on its dual mandate. The Fed is held accountable for monetary policy through semiannual congressional testimony, accompanied by its Monetary Policy Report to the Congress, which since 1979 has included the range and central tendency of FOMC participants’ economic projections. In October 2007, the Fed started publishing FOMC participants’ projections for real GDP growth, the unemployment rate, and (core) PCE inflation for two to three calendar years ahead at quarterly frequency in the FOMC minutes, including a “box-and-whisker chart” illustrating the range and central tendency of the projections across FOMC participants.
This gradually evolved into a fan-type chart for the projections for approximately three years ahead and since June 2011 also for the “longer run.” In April 2011, the Fed introduced a quarterly press conference by the Fed chairman after the FOMC meeting, at which the FOMC projections are presented. The Fed has long published comprehensive minutes of FOMC meetings, including individual voting records.31 After congressional pressure for greater openness, it agreed in November 1993 to also release the transcripts of FOMC meetings with a five-year lag. In December 2004, it expedited the release of FOMC minutes from a five-to-eight-week lag (which meant shortly after the next FOMC meeting) to a three-week delay, so that the minutes now pertain to the most recent policy decision and help provide a better understanding of the policy considerations for upcoming decisions. A prompt statement announcing the monetary policy decision, together with an explanation, has been released after every FOMC meeting since May 1999. The voting record of FOMC members has been included in this policy statement since March 2002. Since 1999, the policy statement has also often given an indication of the FOMC’s policy inclination. Initially, this was in the form of an explicit policy “bias,” but from 2000 until early 2003, it was implicitly conveyed through a phrase concerning the “balance of risks” being toward “heightened inflation pressures” or “economic weakness.”
After reducing the target for the federal funds rate to an unprecedentedly low level of 1 percent, the Fed introduced explicit qualitative FG about the duration of its policy accommodation in August 2003, stating that it could be maintained for a “considerable period.” In January 2004, this was changed into the intention of being “patient” in removing policy accommodation. In May 2004, the Fed started providing qualitative FG about removing policy accommodation “at a pace that is likely to be measured,” a stance that was modified in December 2005 to “some further (measured) policy firming.” Following the collapse of Lehman Brothers in September 2008, the Fed embarked on an $800 billion asset purchase program involving mostly agency mortgage-backed securities. After reducing the federal funds rate target to a historic low of 0 to 0.25 percent in December 2008, the Fed resumed explicit FG, indicating that an “exceptionally low” federal funds rate was likely warranted “for some time.” In March 2009, its first round of LSAPs (QE1) was expanded to $1,750 billion, and its qualitative FG on the federal funds rate was strengthened to an “extended period,” a position that was maintained until mid-2011. Following the eruption of the European sovereign debt crisis, the Fed then announced an additional LSAP program (QE2) of $600 billion of US Treasury bonds in November 2010. On August 9, 2011, it introduced quantitative FG that an “exceptionally low” federal funds rate was likely to be warranted “at least through mid-2013.” This calendar-based clarification of its qualitative “extended period” guidance had a strong effect on private-sector expectations and asset prices.
The expected duration of a very low federal funds rate increased from four quarters to at least seven quarters, according to the Blue Chip survey of forecasters, while the two-year US Treasury yield dropped by around 10 basis points, and five- and ten-year yields even by more than 20 basis points (Swanson and Williams 2014). This shows that quantitative FG can be much more effective than more ambiguous qualitative statements. The Fed then extended the horizon of its calendar-based guidance to “late 2014” in January 2012 and to “mid 2015” in September 2012, when it announced an indefinite LSAP program (QE3) of $85 billion a month of US Treasury bonds and agency mortgage-backed securities, until the labor market outlook would improve “substantially . . . in a context of price stability.” A further innovation was its replacement of calendar-based guidance for the federal funds rate by quantitative state-contingent FG on December 12, 2012, when the Fed announced that an “exceptionally low” federal funds rate was expected to be appropriate at least as long as the unemployment rate remained above 6.5 percent, conditional on medium-term inflation projections being below 2.5 percent and long-term inflation expectations remaining well anchored. Such state-contingent guidance has the advantage that it is more flexible as the expected horizon for low interest rates adjusts in line with economic prospects, with the horizon extending when the economy unexpectedly deteriorates, thereby reducing long-term interest rates and acting as an automatic stabilizer. But the greater flexibility of threshold-based guidance comes at the cost of greater complexity and more uncertainty about the likely duration of low interest rates.
In the meantime, in January 2012, the Fed had started publishing FOMC participants’ projections of the path of the federal funds rate target for the next two to three years, thus providing information about the likely timing, direction, and pace of policy rate changes. These projections are depicted in a “dot plot” that shows the anonymous projections of all FOMC participants, including nonvoting members.32 Since there is usually considerable variation in FOMC participants’ projections, it is not clear whether the median projection actually corresponds to the median FOMC member voting that year. So the dot plot leads to a cloud of uncertainty about which dots actually matter for the policy decision in each year. The Federal Reserve thus provides an interesting example of a central bank that made significant transparency improvements following the financial crisis, including a clarification of its monetary policy strategy with a quantification of its main objectives, the introduction of a quarterly press conference, and the movement from qualitative to quantitative FG, which developed from calendar-based guidance to state-contingent guidance with an unemployment rate threshold, and more comprehensive time-dependent FG through its projections of the future path of the federal funds rate.
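The way observers summarize a dot plot, and the ambiguity noted above, can be made concrete with a small sketch. The projections below are hypothetical values for illustration, not an actual FOMC dot plot:

```python
import statistics

# Hypothetical end-of-year federal funds rate projections (percent) for the
# 17 FOMC participants; illustrative values only, not actual dot-plot data.
dots = [0.625, 0.625, 0.875, 0.875, 0.875, 0.875, 0.875, 0.875, 0.875,
        1.125, 1.125, 1.125, 1.375, 1.375, 1.375, 1.625, 1.875]

median_dot = statistics.median(dots)
print(f"median projection: {median_dot:.3f}% "
      f"(range {min(dots):.3f}%-{max(dots):.3f}%)")

# The caveat noted in the text: dots are anonymous and include nonvoters, so
# the median dot need not be the projection of any voting member that year.
```

The median and range are what commentators typically extract, but because the dots are unattributed, the median dot carries no guarantee about how the voting majority will actually lean.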
10.4.3 Bank of England

The BoE adopted inflation targeting after it exited the Exchange Rate Mechanism of the European Monetary System in September 1992.33 It then gained operational independence for the conduct of monetary policy in May 1997. Its monetary policy objective, stipulated by the Bank of England Act of 1998 (chap. 11, part II, sec. 11), is to maintain price stability and, subject to that, to support the UK government’s economic policy, including its objectives for growth and employment. The inflation target, currently an annual rate of 2 percent for CPI inflation, is specified by the government in an annual remit letter to the bank’s monetary policy committee (MPC). If the inflation target is missed by more than one percentage point, the MPC remit asks for an open letter to the Chancellor of the Exchequer to spell out the reasons for the deviation and how the MPC plans to bring inflation back to target. MPC members are also held accountable through regular appearances before Parliament, in particular the House of Commons Treasury Committee. In addition, the BoE’s operational independence on monetary policy is subject to an explicit override mechanism: the Treasury may give the BoE directions with respect to monetary policy if required “in the public interest and by extreme economic circumstances,” but any such order is subject to subsequent parliamentary approval and limited to a period of three months (Bank of England Act 1998, chap. 11, part II, sec. 19). That said, none of this detracts from the BoE’s political transparency, for which it receives full scores on our index throughout the 1998–2015 sample period. In terms of economic transparency, the BoE has set a leading example. Since 1993, it has published a quarterly Inflation Report, which is presented by the governor at a press conference, allowing for further clarification and media scrutiny. The Inflation
Report provides extensive information about macroeconomic developments, including a medium-term projection for inflation. Since 1996, this projection has been shown in a fan chart that illustrates the underlying uncertainty using confidence bands of gradually fading colors, an effective presentation technique that has since been adopted by other central banks.34 Since 1997, the Inflation Report has included fan charts of quarterly MPC projections for both inflation and real GDP growth. Initially, the projections were for a two-year horizon and based on the assumption of a constant policy rate, but the path of market interest-rate expectations inferred from the yield curve was soon used as well; it became the main conditioning assumption in August 2004, when the horizon of the MPC projections was extended to three years. These projections reflect the MPC’s judgments and are based on the BoE’s in-house macroeconometric models, which have been publicly available since 1999. An annual discussion of the MPC’s forecasting record has been included in the Inflation Report since 1999, which contributes to operational transparency.35 In response to a BoE forecasting review by Stockton (2012), Inflation Reports now include MPC forecasts for a wider range of macroeconomic variables. In addition, those reports have begun providing more detailed information about key judgments and risks underlying the MPC forecasts. They now also monitor whether these judgments are being borne out by data, thus improving both the transparency and the quality of the forecasting process. The BoE has been weaker on policy transparency.
Until recently, it typically provided a prompt explanation of monetary policy decisions only when policy settings were adjusted, and it just released a terse announcement of the policy decision when no change was made.36 For many years, the BoE also left the private sector in the dark about its policy inclination or likely upcoming policy actions, insisting on making its monetary policy decisions one month at a time. In response to the financial crisis, the BoE lowered its policy rate to 0.5 percent and abandoned its month-by-month approach when it embarked on unconventional monetary policy measures through LSAPs of medium- to long-term UK government bonds (“gilts”) in the secondary market, financed by issuing central bank reserves. On March 5, 2009, it announced the purchase of £75 billion of mostly gilts over a period of three months.37 An expansion of £50 billion was announced in advance on May 7, 2009, and expected to take three months to complete. This was followed by a further £50 billion expansion over three months, announced on August 6, 2009, and a £25 billion expansion over three months on top of that, announced on November 5, 2009, taking the total of the first round of asset purchases (QE1) to £200 billion. By announcing its LSAPs in advance for three months at a time, the bank succeeded in affecting financial-market expectations and significantly reducing yields even before its asset purchases started.38 The BoE adopted a similar approach for its second round of asset purchases (QE2), which started with £75 billion over four months, announced on October 6, 2011, with a £50 billion expansion over three months, announced on February 9, 2012, and a final £50 billion expansion over four months, announced on July 5, 2012, taking the total amount of BoE QE to £375 billion by the end of October 2012. The BoE has also been transparent about the operation of its LSAPs; it releases the nominal value and proceeds
of purchases by gilt issue at the end of each operation day, together with summary information on total offers received, allocation, and prices accepted.39 Nevertheless, it was only after a change at the helm that the BoE embraced greater policy transparency in interest-rate setting. After Mark Carney was announced as the successor to Mervyn King as BoE governor, the MPC received a new remit in March 2013, which asked the MPC to provide an assessment of the merits of explicit FG, including the use of (state-contingent) “intermediate thresholds” (as used by the US Federal Reserve). This MPC remit letter was the first to clarify that the UK government considers the use of FG to be subject to the MPC’s operational independence in setting monetary policy. In response, the MPC released a report40 and issued quantitative FG on August 7, 2013, that it did not intend to raise the policy rate or reduce its stock of purchased assets at least until the unemployment rate had fallen to 7 percent, subject to three price and financial stability “knockouts.” A drawback of such “threshold” guidance is that it creates uncertainty about when the unemployment threshold is likely to be reached. To overcome this, the Inflation Report of August 2013 introduced a fan chart of the MPC’s three-year-ahead projection for the unemployment rate. It included a separate graph showing the probability of unemployment falling below the 7 percent threshold, which showed that this probability would not reach 50 percent until 2016,41 thus indicating that the policy rate would not be raised for at least two years. Many private-sector observers, however, expected the unemployment rate, which was 7.8 percent at the time, to reach the 7 percent threshold more swiftly, so they expected the policy rate to stay low for a shorter period. As a result, the 7 percent unemployment threshold guidance probably provided considerably less stimulus than intended.
In the event, the unemployment rate approached 7 percent after only two quarters, defying the MPC’s predictions. In response, the MPC provided additional FG on February 12, 2014, stating, “Despite the sharp fall in unemployment, there remains scope to absorb spare capacity further before raising Bank Rate” and that the rate rise is “expected to be gradual,” with the normal level of Bank Rate “likely to be materially below” the 5 percent precrisis average.42 In 2015, the phrasing of this qualitative FG evolved into “when Bank Rate does begin to rise, it is expected to do so more gradually and to a lower level than in recent cycles.”43 Further improvements in policy and procedural transparency were introduced in August 2015. The BoE initially released the minutes of its monthly MPC meetings, including individual voting records, with a delay of about five weeks, which is after the decision of the next MPC meeting and therefore likely to cause confusion. In October 1998, the delay was reduced to two weeks. This allows observers to use the summary of the policy discussions and the individual votes to better understand the MPC’s reaction function and update their expectations for the next monetary policy decision. In addition, once a quarter, the Inflation Report with the MPC projections is also released at a press conference, approximately one week after the monetary policy meeting. This release schedule led to a “drip feed” of information—first the policy announcement (with usually only an explanation in case of a policy adjustment), about a week
later the Inflation Report (once a quarter), and after two weeks the minutes, including voting records. This gradual, delayed release of key information often produced undesirable market volatility. Consequently, the Warsh (2014) review of BoE transparency recommended that the BoE should adopt international best practice by releasing a prompt announcement of its monetary policy decisions and that it should include an explanation and individual voting records. In addition, it suggested that MPC projections be published on the same day (with the full Inflation Report released a week later if needed) to ensure that financial markets promptly receive the key monetary policy information they are most interested in. In response, the BoE decided to go even further. It opted to release not only a prompt policy explanation, including MPC votes and the full Inflation Report (as is done by some other central banks), but also the minutes of the MPC meeting, all at the same time as the monetary policy announcement. As a result, the BoE has replaced a drip feed with a data deluge that is hard for financial-market participants to digest. The simultaneous release of these items also deprives researchers and policymakers of the opportunity to separately identify the impact on asset prices of the policy statement, the Inflation Report, and the minutes and to assess how informative they are to financial markets. Furthermore, the BoE had to make undesirable adjustments to the structure of the MPC meetings in order to publish the minutes at the same time as the monetary policy announcement (see also Eichengreen and Geraats 2015). Previously, the MPC met on two consecutive days. The genuine policy deliberations took place on Wednesday, and the next morning a “policy discussion” was held, during which each MPC member made a statement (often prepared) of his or her views and decision.
The policy decision was then determined by the majority and announced shortly afterward at noon.44 To allow sufficient time to prepare agreed minutes, MPC meetings have been restructured to take the form of three meetings in the course of seven days. The genuine deliberations now take place on Thursday, a full week before the policy announcement. This is then followed by the staged “policy discussion” on Monday and the policy vote on Wednesday. The consequences are not all positive. Spreading the MPC meetings over seven days increases the risk that macroeconomic data releases, developments in financial markets, and geopolitical events may significantly affect the appropriate monetary policy stance. MPC members could choose to disregard the new information, but this would distort the monetary policy decision. Alternatively, the MPC could revisit its deliberations, which would be inefficient. Furthermore, if MPC members redeliberate before they vote on the eve of the announcement and decide to change their policy stance, then it will be impossible to have their views and reasoning properly and coherently captured overnight in the MPC minutes to be released together with the policy announcement. In all, these procedural changes make it harder for the BoE to communicate clearly, especially in times of turmoil and heightened uncertainty when clarity is most needed. As a result, the restructured MPC meetings reduce BoE transparency.45
The BoE has already experienced several significant “news shocks” during the weeklong window of MPC meetings. At the first meeting under the restructured format, during July 2–8, 2015, the Chancellor of the Exchequer, George Osborne, made the shock announcement on July 8, 2015, in the “Summer Budget 2015” that the government would introduce a “national living wage” that involves a sizable increase in the minimum wage over five years, starting with a nearly 7.5 percent rise in April 2016 to the ambitious target of 60 percent of median earnings by 2020. This news was very relevant to the BoE but was not mentioned at all in the MPC minutes, which suggests that it was not discussed, despite the fact that the minutes report (on pp. 4–5) that the Treasury representative attending the MPC meeting had briefed the MPC on the budget. Another example is the turbulent aftermath of the “Brexit” referendum that the MPC faced during its July 7–13, 2016, meeting. On the morning of Thursday, July 7, 2016, two weeks after the United Kingdom had narrowly voted to leave the European Union, the United Kingdom faced the prospect of several months of political uncertainty about who would succeed David Cameron as prime minister, and the pound sterling fell below $1.30. This was the setting against which the MPC held its genuine deliberations. But on Monday, when the MPC’s scripted “policy discussion” was scheduled, Theresa May suddenly became the only remaining candidate for the Conservative Party leadership. She was installed as the new prime minister on Wednesday, July 13, when the pound had risen to around $1.33 and the MPC made the monetary policy decision. Yet the minutes contain no sign that the MPC members had discussed these developments or reconsidered their views.
The MPC minutes could provide a valuable opportunity to better understand the committee’s monetary policy reaction, but this requires knowing what information the minutes are based on. The minutes do not make this clear; they no longer even specify the dates on which the MPC met (only the date the MPC meeting ended). The minutes also provide little evidence of the “genuine deliberations” that are held on the first day, despite the recommendation of the Warsh review. This portion of the minutes often still looks more like the summary of a staff briefing on economic and financial developments. And the parts of the MPC minutes referring to discussions often feel highly censored. For instance, the July 2016 minutes report, “The Committee reviewed a range of possible stimulus measures and combinations thereof ” (p. 7), without even mentioning what measures were considered, let alone the MPC’s assessments. In light of the unanticipated outcome of the Brexit referendum, one would have expected extensive discussions among MPC members evaluating the challenging new situation. Yet the July 2016 MPC minutes were only eight pages long, less even than for the July 2015 meeting. Greater detail will likely be available in the transcripts of MPC meetings, which the BoE decided to release with a delay of eight years. But since these are restricted to the Monday and Wednesday meetings, they exclude the genuine MPC deliberations. All in all, it is clear that considerable procedural opacity remains at the BoE and that attempts to improve transparency can be taken too far.
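The fan charts discussed in the case studies, for the BoE's inflation and unemployment projections and the ECB's banded projections, present forecast uncertainty as probability bands that widen with the horizon. A minimal numerical sketch of how such bands are constructed, using a simulated random-walk forecast distribution with hypothetical parameters (none of these values come from any central bank model):

```python
import random

random.seed(0)

# Simulate a forecast distribution as a random walk from the current value.
# current, sigma (quarterly shock size, percentage points), horizon, and
# n_paths are all hypothetical parameters chosen only for illustration.
current, sigma, horizon, n_paths = 2.0, 0.4, 8, 5000
paths = []
for _ in range(n_paths):
    x, path = current, []
    for _ in range(horizon):
        x += random.gauss(0.0, sigma)
        path.append(x)
    paths.append(path)

def quantile(values, q):
    """Empirical quantile by nearest rank."""
    s = sorted(values)
    return s[int(q * (len(s) - 1))]

# Fan-chart bands: a central projection surrounded by probability bands that
# widen with the horizon, mirroring the fading-color bands of a fan chart.
for h in range(horizon):
    draws = [p[h] for p in paths]
    lo, mid, hi = (quantile(draws, 0.05), quantile(draws, 0.5),
                   quantile(draws, 0.95))
    print(f"quarter {h + 1}: median {mid:.2f}, 90% band ({lo:.2f}, {hi:.2f})")
```

The widening of the bands with the horizon is the essential message of a fan chart: the central projection is a best guess, and the probability mass around it grows as the forecast reaches further into the future.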
Transparency of Monetary Policy Postcrisis 323
10.5 Conclusion Since the 1990s, central banks have paid growing attention to the role of transparency in the conduct of monetary policy. Transparency can enhance the effectiveness of policy by making decisions more predictable. It can enhance the credibility of central banks in meeting their objectives. It can buttress the independence of the central bank by serving as a mechanism for accountability. All of these are reasons central bank transparency has gained a growing number of advocates over time. Ngram Viewer, which tabulates references to terms and concepts in all publications digitized by Google Books, shows a sharp uptick in references to “central bank transparency” after 1996 and continuing through the run-up to the global financial crisis of 2006–2008.46 The question is whether all this talk about the importance of central bank transparency has been matched by action. In an effort to answer this question, we have presented a new, refined index of monetary policy transparency for 112 central banks from 1998 to 2015. Trends in the index reveal a rise in monetary policy transparency over this period throughout the world, irrespective of the level of economic development of the country and the monetary policy framework of its central bank, although central banks in high-income countries that target inflation tend to be the most transparent overall. The trend toward greater monetary policy transparency has continued in the wake of the global financial crisis, although the rate of increase appears to have slowed in some respects. In contrast, there has been a notable rise in the use of forward policy guidance during the postcrisis period. This suggests that providing information about likely future policy actions can be useful when conventional monetary policy is constrained by policy rates near zero.
The case studies in this chapter illustrate how some prominent central banks have deployed and developed FG for large-scale liquidity operations, asset purchases, and policy rates during the postcrisis period. The financial crisis has shown that central bank communication can be a powerful policy tool, helpful for managing expectations and improving the effectiveness of monetary policy. But our case studies suggest that its implementation is fraught with complications, which may be part of the explanation for why the trend toward greater transparency has slowed rather than accelerated following the crisis. In addition, the case studies illustrate that attempts to increase openness can be taken too far. It is therefore an important question to what extent, and in what respects, more transparency is always and everywhere beneficial from the point of view of the central bank’s ability to effectively pursue its mandate. The indices developed and implemented here can in principle be used to work toward an answer.
Notes 1. In their survey of ninety-four central banks, Fry et al. (2000) find that 74 percent consider transparency “vital” or “very important.”
2. https://www.ecb.europa.eu/ecb/orga/transparency/html/index.en.html. 3. In his survey of eighty-eight central banks, Blinder (2000) finds that they consider transparency a very important factor to establish or maintain credibility. 4. Such a commitment is referred to as “Odyssean” FG (see Campbell et al. 2012). In practice, FG about policy rates tends to be “Delphic” instead (see the case studies in section 10.4). LSAPs are another possibility, but they raise the dilemmas described in the last paragraph. 5. For instance, a central bank facing a credit or asset price boom could “lean against the wind” by increasing the policy rate to mitigate financial stability concerns. 6. Our sample includes central banks of multilateral monetary unions, in particular the Bank of Central African States (BEAC), the Central Bank of West African States (BCEAO), the Eastern Caribbean Central Bank (ECCB), and the European Central Bank (ECB), so it covers a total of nearly 150 countries in 2015. 7. An alternative approach is to focus on how disclosed information is received, for instance, through private-sector surveys (e.g., van der Cruijsen, Jansen, and de Haan 2010) or financial market reactions to central bank communications (e.g., Ehrmann and Fratzscher 2007 and 2009). However, the availability of suitable surveys that are comparable across countries is limited. In addition, because financial markets are not well developed in many countries, changes in asset prices also reflect financial-market frictions and inefficiencies. Hence, we focus on the extent to which information relevant to the making of monetary policy is disclosed to construct an index that is consistent and comparable across countries and over time. 8. Alternatively, information published in the local language could be used to construct the transparency index (although a true polyglot would be needed to code such information for around 150 countries).
Local language information, it can be argued, may provide a better measure of monetary policy transparency from the perspective of domestic accountability. But from the point of view of international financial markets, English-language information is key. Another option would be to use information in the most commonly spoken languages, where Mandarin and Spanish rank ahead of English. But for most individuals active in international financial markets, a central bank report in Mandarin would not be very informative. Hence our decision to focus on information in English, the lingua franca of the business and financial world. All the central banks in our sample have a website in English except for BCEAO, which has an extensive website in French and only maintained a limited website in English until 2012, and BEAC, which only has a website in French. Nevertheless, these two central banks are included because they represent large multilateral monetary unions and cover a significant part of Africa. 9. In 1998, the top three consisted of the Reserve Bank of New Zealand (10.5), the Bank of England (10), and the Bank of Canada (10). Although their transparency scores further improved, their relative ranking fell as they were overtaken by others. 10. Using the World Bank income classification (for fiscal year 2016). 11. Based on the IMF de facto classification of monetary policy frameworks (for April 30, 2015). 12. In particular, Argentina (–1), Barbados (–0.5), the Eastern Caribbean Currency Union (–0.5), Guatemala (–2), Kenya (–0.5), and New Zealand (–2). 13. In Argentina, a large drop in political transparency (–2) was partially offset by an improvement in operational transparency (+1). 14. This language issue also applies to other central banks that did not end up with a net decline in the overall index. For instance, the Central Bank of Colombia has long published
extensive quarterly inflation reports in Spanish, but from 2008 until 2015 the English translation has only been released with a prolonged delay (typically more than a year) or not at all (e.g., for 2011 and 2013). And while the Central Bank of West African States further reduced the already limited information available in English, it significantly improved its transparency in French since 2010, including quarterly “monetary policy reports” with medium-term projections for output growth and inflation. Thus, our index could considerably understate the degree of transparency in the local language. 15. Since the financial crisis, a few other central banks have also moved from a single decision-maker to a committee for monetary policy decisions, including Aruba (in 2010), Israel (in 2010), and India (in 2016), of which the latter two publish minutes and voting records. 16. An exception is a slight dip in the (weighted) index for the world economy from 10.42 in 2013 to 10.37 in 2014. Although there was an increase in the transparency index for a number of countries, this was offset by a temporary small decline in the overall index for the euro area (from 11.5 to 11), which dominates given our GDP weighting scheme. 17. The income classification is fixed (at fiscal year 2016) to ensure that any change reflects a movement in transparency (rather than a shift in the composition of the income groups). 18. Due to temporary decreases in Mozambique, Malawi, and Rwanda, and in Georgia, Guatemala, and Nigeria, respectively. 19. Appendix 10.2 lists the countries in our sample by monetary policy framework (as of April 30, 2015). Note that the IMF classification is missing for a few central banks.
The 2015 monetary policy framework is used to show the development in average transparency (for a fixed group of central banks), including changes made in preparation for a switch to the monetary policy framework (e.g., inflation targeting “lite” before full-fledged adoption). 20. This was due to temporary declines in Mozambique, Nigeria, and Seychelles, and in Malawi, Mozambique, and Rwanda, respectively. 21. This includes the Central African Economic and Monetary Community (CEMAC) and the West African Economic and Monetary Union (WAEMU), which scored 0 on item 1(a). 22. Table 10.4 also shows that within both groups, inflation targeters are much more transparent than others, with a highly statistically significant difference of 4.5 for central banks with a single or prioritized objective(s) and 5.4 for central banks with multiple, nonprioritized objectives. 23. This does not include the more common practice of central banks to sometimes indicate in the monetary policy announcement or minutes that the decision was unanimous, without routinely releasing the voting balance/records. 24. These include Brazil, Japan, Sweden, the United States (and since August 2015, the United Kingdom), and also India and Papua New Guinea, where monetary policy was decided by the central bank governor during our sample period. 25. This includes Albania, the euro area, Hungary, Iceland, Japan, Russia, South Africa, Turkey, and the United Kingdom. Qualitative FG was also used postcrisis in Canada, Chile, Colombia, India, Poland, and South Korea. 26. For instance, the six-month LTRO on July 10, 2008, was announced on March 28, 2008, and the renewal of the three-month LTRO on December 11, 2008, was announced on September 4, 2008. 27. See ECB press release, “Changes in tender procedure and in the standing facilities corridor,” October 8, 2008, which also announced a (temporary) reduction in the interest-rate corridor range from 200 to 100 basis points. 28.
See ECB press release, “Measures to further expand the collateral framework and enhance the provision of liquidity,” October 15, 2008.
29. This refers to the period from June 25, 2009 (the settlement date of the first one-year LTRO) to June 30, 2010, during which significant fluctuations in EONIA were generally confined to a spike at the end of each two-week MRO maintenance period. 30. See its “Statement on Longer-Run Goals and Monetary Policy Strategy,” January 2012. 31. Before 1993, the FOMC minutes were published in its “Record of Policy Actions.” 32. Presidents of regional Federal Reserve banks are voting members on an annual rotational basis, except for the president of the New York Fed, who always has a vote. 33. The source of much of the information in this section is the BoE website: http://www.bankofengland.co.uk/ 34. Fracasso, Genberg, and Wyplosz (2003, tab. 3.6) reported that twelve out of twenty inflation targeters published their inflation projections using fan charts. 35. More extensive evaluations of BoE forecasting are provided by the 2012 Stockton Review and the November 2015 report “Evaluating Forecast Performance” by the BoE Independent Evaluation Office. 36. The BoE even did so in the aftermath of the financial crisis—no explanation was provided for its monetary policy decisions on April 9, June 4, July 4, September 10, October 8, and December 10 in 2009. 37. The BoE also purchased corporate bonds and commercial paper. 38. Long-term nominal gilt yields dropped by around 70 and 90 basis points for a maturity of ten and twenty years, respectively, from March 4 to March 6, 2009, while the BoE gilt purchases only started on March 11, 2009. 39. The information is available at http://www.bankofengland.co.uk/markets/Pages/apf/gilts/results.aspx. 40. “Monetary Policy Trade-offs and Forward Guidance,” Bank of England, August 2013. Curiously, despite the MPC’s remit to assess explicit FG and the MPC’s decision “to provide explicit guidance about the future path of monetary policy” (p.
16), the report makes no mention at all of a straightforward way of doing so—publishing the projected path of the policy rate and stock of purchased assets. 41. Bank of England Inflation Report, August 2013, chart 2. There was also a graph showing the probability of inflation exceeding the 2.5 percent “knockout” (chart 4), which indicated the probability was less than 50 percent for the eighteen-to-twenty-four-month horizon relevant for this “knockout.” Unusually, though in line with its FG, the projections in this Inflation Report were based on a constant policy rate. 42. Bank of England Inflation Report, February 2014, box, “Monetary Policy As the Economy Recovers” (pp. 8–9). It also provided FG on the stock of purchased assets, stating that the MPC intends to maintain it “at least until the first rise in Bank Rate.” In the November 2015 Inflation Report, this was extended to “until Bank Rate has reached a level from which it can be cut materially,” which the MPC judged to be around 2 percent (box, “The MPC’s Asset Purchases As Bank Rate Rises,” p. 34). 43. E.g., in the Monetary Policy Summary of August 2015. 44. This description is based on inside accounts by Warsh (2014) and Lambert (2005), which curiously differ from the impression created by the MPC minutes and the annex of the BoE report, “Transparency and Accountability at the Bank of England” (December 11, 2014), that describes the changes. 45. In addition, the more prolonged MPC meeting schedule increases the chances of inadvertent openness. Goodhart (2015) suggests that it raises the risk of leaks, as MPC
members already present their policy views at the Monday meeting, three days before the policy announcement. 46. https://books.google.com/ngrams/graph?content=central+bank+transparency&year_start=1980&year_end=2008&corpus=15&smoothing=3&share=&direct_url=t1%3B%2Ccentral%20bank%20transparency%3B%2Cc0.
References Barwell, Richard, and Jagjit Chadha. 2014. “Publish or Be Damned—or Why Central Banks Need to Say More about the Path of Their Policy Rates.” VoxEU, August 31. https://voxeu.org/article/publish-or-be-damned-or-why-central-banks-need-say-more-about-path-their-policy-rates Bjørnland, Hilde C., Thomas Ekeli, Petra M. Geraats, and Kai Leitemo. 2004. “Norges Bank Watch 2004: An Independent Review of Monetary Policymaking in Norway.” Center for Monetary Economics, Norges Bank Watch report 5. Blinder, Alan. 2000. “Central-Bank Credibility: Why Do We Care? How Do We Build It?” American Economic Review 90, no. 5: 1421–1431. Blinder, Alan, Michael Ehrmann, Marcel Fratzscher, Jakob de Haan, and David-Jan Jansen. 2008. “Central Bank Communication and Monetary Policy: A Survey of Theory and Evidence.” Journal of Economic Literature 46, no. 4: 910–945. Blinder, Alan, Charles Goodhart, Philipp Hildebrand, David Lipton, and Charles Wyplosz. 2001. “How Do Central Banks Talk?” Geneva Reports on the World Economy 3. Washington, DC: Brookings Institution Press. Campbell, J. R., C. L. Evans, J. D. M. Fisher, and A. Justiniano. 2012. “Macroeconomic Effects of Federal Reserve Forward Guidance.” Brookings Papers on Economic Activity: 1–54. Dincer, Nergiz, and Barry Eichengreen. 2008. “Central Bank Transparency: Where, Why and with What Effects?” In Central Banks as Economic Institutions, edited by J.-P. Touffut, 105–141. Cheltenham: Edward Elgar. Dincer, Nergiz, and Barry Eichengreen. 2010. “Central Bank Transparency: Causes, Consequences and Updates.” Theoretical Inquiries in Law 11, no. 1: 75–123. Dincer, Nergiz, and Barry Eichengreen. 2014. “Central Bank Transparency and Independence: Updates and New Measures.” International Journal of Central Banking 10, no. 1: 189–253. Dincer, Nergiz, and Barry Eichengreen. 2016.
“Central Bank Mandates: How Much Do They Differ, and How Much Do They Matter?” Unpublished manuscript, TED University and University of California, Berkeley, May. Ehrmann, Michael, and Marcel Fratzscher. 2007. “Communication and Decision-Making by Central Bank Committees: Different Strategies, Same Effectiveness?” Journal of Money, Credit and Banking 39, nos. 2–3: 509–541. Ehrmann, Michael, and Marcel Fratzscher. 2009. “Explaining Monetary Policy in Press Conferences.” International Journal of Central Banking 5, no. 2: 41–84. Eichengreen, Barry, and Petra Geraats. 2015. “The Bank of England Fails Its Transparency Test.” VoxEU, January 6. https://voxeu.org/article/transparency-and-effectiveness-monetary-policy. Eijffinger, Sylvester, and Petra Geraats. 2006. “How Transparent Are Central Banks?” European Journal of Political Economy 22, no. 1: 1–22.
Faust, Jon, and Lars Svensson. 2001. “Transparency and Credibility: Monetary Policy with Unobservable Goals.” International Economic Review 42, no. 2: 369–397. Fracasso, Andrea, Hans Genberg, and Charles Wyplosz. 2003. How Do Central Banks Write? An Evaluation of Inflation Targeting Central Banks. Special Report 2. Geneva: Centre for Economic Policy Research. Fry, Maxwell, Deanne Julius, Lavan Mahadeva, Sandra Roger, and Gabriel Sterne. 2000. “Key Issues in the Choice of Monetary Policy Framework.” In Monetary Policy Frameworks in a Global Context, edited by L. Mahadeva and G. Sterne, 1–216. London: Routledge. Geraats, Petra. 2001. “Precommitment, Transparency and Monetary Policy.” Bundesbank discussion paper 12/01. Geraats, Petra. 2002. “Central Bank Transparency.” Economic Journal 112, no. 483: F532–F565. Geraats, Petra. 2005. “Transparency and Reputation: The Publication of Central Bank Forecasts.” Topics in Macroeconomics 5, no. 1.1: 1–26. Geraats, Petra. 2006. “Transparency of Monetary Policy: Theory and Practice.” CESifo Economic Studies 52, no. 1: 111–152. Geraats, Petra. 2009. “Trends in Monetary Policy Transparency.” International Finance 12, no. 2: 235–268. Geraats, Petra. 2010. “Price and Financial Stability: Dual or Duelling Mandates?” In Central Banking after the Crisis, proceedings of the 38th Economics Conference of the Oesterreichische Nationalbank “Central Banking after the Crisis: Responsibilities, Strategies, Instruments,” 56–63. Oesterreichische Nationalbank. Geraats, Petra. 2011. “Talking Numbers: Central Bank Communications on Monetary Policy and Financial Stability.” In Central Bank Statistics—What Did the Financial Crisis Change? Proceedings of the Fifth ECB Conference on Statistics, 162–179. European Central Bank. Geraats, Petra. 2014. “Monetary Policy Transparency.” In The Oxford Handbook of Economic and Institutional Transparency, edited by Jens Forssbæck and Lars Oxelheim, 68–97.
Oxford: Oxford University Press. Geraats, Petra, Francesco Giavazzi, and Charles Wyplosz. 2008. “Transparency and Governance.” In Monitoring the European Central Bank 6. London: Centre for Economic Policy Research. Gersbach, Hans, and Volker Hahn. 2004. “Voting Transparency, Conflicting Interests, and the Appointment of Central Bankers.” Economics and Politics 16, no. 3: 321–345. Goodhart, Charles. 2015. “The UK MPC Process in the Light of the Warsh Review.” VoxEU, March 2. https://voxeu.org/article/changes-bank-england-s-monetary-policymeetings Kool, Clemens, Menno Middeldorp, and Stephanie Rosenkranz. 2011. “Central Bank Transparency and the Crowding Out of Private Information in Financial Markets.” Journal of Money, Credit and Banking 43, no. 4: 765–774. Lambert, Richard. 2005. “Inside the MPC.” Bank of England Quarterly Bulletin (Spring): 56–65. Mishkin, Frederic. 2004. “Can Central Bank Transparency Go Too Far?” In The Future of Inflation Targeting, edited by Christopher Kent and Simon Guttmann, 48–65. Sydney: Reserve Bank of Australia. Morris, Steven, and Hyun Shin. 2002. “Social Value of Public Information.” American Economic Review 92, no. 5: 1521–1534. Morris, Steven, and Hyun Shin. 2005. “Central Bank Transparency and the Signal Value of Prices.” Brookings Papers on Economic Activity 2005, no. 2: 1–43.
Stockton, David. 2012. “Review of the Monetary Policy Committee’s Forecasting Capability.” Report Presented to the Court of the Bank of England, October. https://www.bankofengland.co.uk/-/media/boe/.../2012/.../the-mpcs-forecasting-capabilit Swanson, Eric, and John Williams. 2014. “Measuring the Effect of the Zero Lower Bound on Medium- and Longer-Term Interest Rates.” American Economic Review 104, no. 10: 3154–3185. Tong, Hui. 2007. “Disclosure Standards and Market Efficiency: Evidence from Analysts’ Forecasts.” Journal of International Economics 72, no. 1: 222–241. Van der Cruijsen, Carin, David-Jan Jansen, and Jakob De Haan. 2010. “How Much Does the Public Know about the ECB’s Monetary Policy? Evidence from a Survey of Dutch Households.” ECB working paper 1265. Warsh, Kevin. 2014. “Transparency and the Bank of England’s Monetary Policy Committee.” https://www.bankofengland.co.uk/-/media/boe/files/news/2014/december/transparency-and-the-boes-mpc-review-by-kevin-warsh.pdf. Woodford, Michael. 2012. “Methods of Policy Accommodation at the Interest-Rate Lower Bound.” In The Changing Policy Landscape, Economic Symposium Conference Proceedings, 185–288. Federal Reserve Bank of Kansas City.
Appendix 10.1 Monetary Policy Transparency Index This appendix contains the exact formulation of the monetary policy transparency index. The index is the sum of the scores for the answers to the fifteen questions below (min = 0, max = 15). Note that all questions pertain to published information that is freely available in English.
A.10.1.1. Political Transparency
Political transparency refers to openness about monetary policy objectives. This involves a formal statement of objectives, including an explicit prioritization in case of multiple goals, a quantification of the primary objective(s), and explicit institutional arrangements.
(a) Is there a formal statement of the objective(s) of monetary policy, with an explicit prioritization in case of multiple objectives? No formal objective(s) = 0. Multiple objectives without prioritization = ½. One primary objective or multiple objectives with explicit priority = 1.
(b) Is there a quantification of the primary or main objectives of monetary policy? No = 0. Yes, but not for the primary objective or all main objectives = ½. Yes, for the primary objective or all main objectives = 1.
(c) Are there explicit institutional arrangements or contracts for monetary policy between the monetary authorities and the government? No central bank, contracts, or other institutional arrangements = 0. Central bank without explicit instrument independence or contract = ½. Central bank with explicit instrument independence for the body responsible for monetary policy or a central bank contract for monetary policy (although possibly subject to an explicit override procedure) = 1.
A.10.1.2. Economic Transparency
Economic transparency focuses on the economic information that is used for monetary policy. This includes economic data, the model of the economy that the central bank employs to construct forecasts or evaluate the impact of its decisions, and the internal forecasts (model based or judgmental) that the central bank relies on.
(a) Are the basic economic data relevant for the conduct of monetary policy publicly available? The focus is on the release of current data for the following variables: (i) money supply growth, short- and long-term interest rates, inflation, GDP growth, and unemployment rate; and (ii) a measure of capacity utilization or (the central bank’s estimate of the) “output gap,” and a timely (update of the central bank’s) estimate of the “natural” or long-run equilibrium interest rate (at least once a year). Quarterly time series not available for all variables ad (i) = 0. Quarterly time series available for all variables ad (i) = ½. Quarterly data available for all variables ad (i) and (ii) = 1.
(b) Does the central bank disclose the formal macroeconomic model(s) it uses for monetary policy analysis? No = 0. Yes = 1.
(c) Does the central bank regularly publish its own macroeconomic forecasts? No numerical central bank forecasts for inflation and output = 0. Numerical central bank forecasts for inflation and/or output (gap) published at less than quarterly frequency or only for the short term = ½. Quarterly numerical central bank forecasts for inflation and output (gap) for the medium term (one to two years ahead), specifying the assumptions about the policy instrument (conditional or unconditional forecasts) = 1.
A.10.1.3. Procedural Transparency
Procedural transparency concerns the way monetary policy decisions are made. It involves an explicit monetary policy rule or strategy that describes the monetary policy framework, an account of monetary policy deliberations, and how the monetary policy decision was reached.
(a) Does the central bank provide an explicit policy rule or strategy that describes its monetary policy framework? No = 0. Yes = 1.
(b) Does the central bank give a comprehensive account of monetary policy deliberations (or explanations in case of a single central banker) within a reasonable amount of time? No, or only after a substantial lag (more than eight weeks) = 0. Only summary minutes, or more comprehensive minutes published with a significant delay (of at least three but no more than eight weeks) = ½. Yes, comprehensive minutes (although not necessarily verbatim or attributed) or explanations (in case of a single central banker), including a discussion of backward- and forward-looking arguments, published within three weeks = 1.
(c) Does the central bank disclose how each decision on the level of its main monetary operating instrument/target was reached? No voting records, or only released after a substantial lag = 0. Only nonattributed voting records released within three weeks, or individual voting records released within eight weeks = ½. Individual voting records released on the day of the policy announcement, or monetary policy decision made by a single central banker = 1.
A.10.1.4. Policy Transparency
Policy transparency means prompt disclosure of monetary policy decisions. In addition, it includes an explanation of the decision and an explicit policy inclination or indication of likely future policy actions.
(a) Are decisions about adjustments to the main monetary operating instrument/target promptly announced? No, or after a significant lag = 0. Yes, at the latest on the day of implementation = 1.
(b) Does the central bank provide an explanation when it announces monetary policy decisions? No = 0. Only when policy decisions change, or only superficially = ½. Yes, always, and including an assessment of economic prospects = 1.
(c) Does the central bank disclose an explicit policy inclination after every monetary policy meeting or an explicit indication of the likely timing, direction, size, or pace of future monetary policy actions (at least quarterly)? No = 0. Only a policy inclination or qualitative forward policy guidance = ½. Yes, quantitative FG about future policy actions = 1.
A.10.1.5. Operational Transparency
Operational transparency concerns the implementation of the central bank’s monetary policy actions. It involves a discussion of control errors in achieving its main monetary operating target(s) (if any) and (unanticipated) macroeconomic disturbances that affect the transmission of monetary policy. Furthermore, the evaluation of the macroeconomic outcomes of monetary policy in light of its objectives is included here as well.
(a) Does the central bank evaluate to what extent its main monetary policy operating targets (if any) have been achieved? No, or not very often (at less than annual frequency) = 0. Yes, but without providing explanations for significant deviations = ½. Yes, accounting for any significant deviations from its main operating target(s) or (nearly) perfectly achieving them; or the central bank has perfect control over its main monetary policy operating instrument(s) = 1.
(b) Does the central bank provide information on (unanticipated) macroeconomic disturbances that affect the monetary policy transmission process? No, or not very often = 0. Yes, but only through short-term forecasts or analysis of current macroeconomic developments (at least quarterly) = ½. Yes, including a discussion of its forecast errors (at least annually) = 1.
(c) Does the central bank provide an evaluation of the monetary policy outcome in light of its macroeconomic objectives? No, or not very often (at less than annual frequency) = 0. Yes, but superficially = ½. Yes, with an explicit account of the contribution of monetary policy in achieving the objectives (at least annually) = 1.
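The scoring scheme above lends itself to a simple computation: each of the fifteen items is scored 0, ½, or 1, the three item scores within each of the five categories sum to a subindex, and the subindices sum to the overall index (min = 0, max = 15). The following minimal Python sketch illustrates this tallying; the variable names and the example scores are hypothetical, purely for illustration, and ½ is represented as 0.5.

```python
# Minimal sketch of the monetary policy transparency index of Appendix 10.1.
# Each of the 15 items is scored 0, 0.5, or 1; the overall index is their sum.
# All names and scores below are illustrative, not taken from the chapter.

CATEGORIES = ["political", "economic", "procedural", "policy", "operational"]
VALID_SCORES = {0, 0.5, 1}

def transparency_index(scores):
    """Return (subindices, overall index) for a dict mapping each of the
    five categories to its three item scores (a), (b), (c)."""
    assert set(scores) == set(CATEGORIES), "need all five categories"
    sub = {}
    for cat, items in scores.items():
        assert len(items) == 3 and all(s in VALID_SCORES for s in items)
        sub[cat] = sum(items)  # subindex for this category (0 to 3)
    return sub, sum(sub.values())  # overall index (0 to 15)

# Hypothetical example: a fairly transparent inflation targeter.
example = {
    "political":   (1, 1, 1),
    "economic":    (0.5, 1, 1),
    "procedural":  (1, 0.5, 0.5),
    "policy":      (1, 1, 0.5),
    "operational": (1, 0.5, 0.5),
}
sub, total = transparency_index(example)  # total = 12.0
```

An aggregate along the lines of section 10.3 (e.g., the GDP-weighted world index mentioned in note 16) would then simply be a weighted average of such country totals.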
Appendix 10.2 IMF De Facto Classification of Monetary Policy Frameworks for 2015
Exchange Rate Targeting: Aruba, Bahamas, Bahrain, Barbados, Belize, Bhutan, Bosnia, Botswana, Bulgaria, Cambodia, CEMAC, Croatia, Curacao, Denmark, ECCU, El Salvador, Fiji, Guyana, Hong Kong, Iran, Iraq, Jamaica, Jordan, Kuwait, Lebanon, Lesotho, Libya, Macedonia, Maldives, Namibia, Oman, Qatar, Samoa, Saudi Arabia, Singapore, Sudan, Tonga, Trinidad and Tobago, United Arab Emirates, WAEMU.
Inflation Targeting: Albania, Armenia, Australia, Brazil, Canada, Chile, Colombia, Czech Republic, Georgia, Ghana, Guatemala, Hungary, Iceland, India, Indonesia, Israel, Japan, Korea, Mexico, Moldova, New Zealand, Norway, Peru, Philippines, Poland, Romania, Russia, South Africa, Sweden, Thailand, Turkey, Uganda, United Kingdom.
Monetary Aggregate Targeting: Bangladesh, Belarus, China, Ethiopia, Malawi, Mozambique, Nigeria, Rwanda, Seychelles, Sierra Leone, Sri Lanka, Tajikistan, Tanzania, Ukraine, Yemen.
Other: Angola, Argentina, Azerbaijan, Egypt, Euro Area, Kazakhstan, Kenya, Kyrgyzstan, Lao Peoples Republic, Malaysia, Mauritius, Mongolia, Pakistan, Papua New Guinea, Solomon Islands, Switzerland, Tunisia, United States, Vanuatu, Zambia.
Note: Bermuda, Cayman Islands, Cuba, and Macao are not included in the IMF classification.
Part IV
POLICY TRANSMISSION MECHANISMS AND OPERATIONS
chapter 11
Real Estate, Construction, Money, Credit, and Finance
Peter Sinclair
11.1 Introduction Cycles in construction and in GDP do not always coincide. But very often they do. For the United States, in fact, Leamer (2007) goes so far as to claim that “housing IS the business cycle.” Real estate crashes are not implicated in all financial crises. But they tend to be in most. Food is as essential for survival as shelter, but in countries of the Organization for Economic Cooperation and Development (OECD), the flow costs of housing, properly measured, form by far the largest item in most family budgets. The average is typically not far from one-third. For most households, whether they are borrowing to buy, buying outright, or negotiating a tenancy, housing represents the biggest financial transaction undertaken. Part of a country’s aggregate investment is devoted to intangibles, inventory, and equipment; but the lion’s share, for households and for firms, is nearly always in construction. The transmission mechanism of monetary policy affects all macroeconomic variables—but probably none more than the construction and the prices of real estate. How, then, do monetary and financial variables interact with construction activity and the prices of buildings? Why are these prices so volatile? Why are credit providers so closely involved? Knowing what we do about the benefits and the perils of the real estate-finance nexus, what role should the authorities play to keep the financial system stable? These are the main questions that this chapter seeks to address.
I would like to thank the editors, referees, and other participants in our Zurich conference for helpful advice and comments.
The chapter begins, in section 11.2, with a simple model that links the prices and quantities of dwellings in an economy and explores the forces that shape these variables over time. The crucial distinction between short-run and long-run equilibrium is drawn and explained. The chapter identifies five key issues involved. It proceeds to assemble a set of relationships that show how and why these variables are what they might be at any moment. Then, in section 11.3, there is particular emphasis on what can cause real estate prices to jump up or down—developments that can lead to financial crises—and the role that monetary policy may play in this regard. Next, in section 11.4, comes an analysis of several awkward but important features of the simple model from which it abstracts. These complicating features cannot be dismissed. They are highly relevant to the way real estate markets, monetary policy, and financial stability interact in practice. There and in section 11.5, prominent historical examples are mentioned, to illustrate the practical relevance of each of these features. The chapter will make reference to only a small selection of influential and pioneering pieces of research. Some are recent, others less so. Some of these concern theoretical developments that have improved our understanding of the nexus that links real estate phenomena to other economic variables. Others reveal how and why real estate markets have behaved in the way they have in the past and what significance they have had for macroeconomic and financial stability. Because the literature on these issues is enormous, space constraints require that the references be very selective. In section 11.5, the chapter concludes by turning to recent history and then various possible implications for policy—monetary, macroprudential, regulatory, and fiscal—and to a brief discussion of recent relevant events.
The policy framework, and the way policy instruments can work (and may be expected to work), will be seen to play what can be a very powerful role in affecting the likelihood and gravity of financial crises and also in affecting other aspects of macroeconomic policy in numerous ways.
11.2 Five Key Elements at the Heart of the Economics of Real Estate

The five key elements are as follows. The first concerns the passage of time. Next up are modeling the construction industry, the housing costs-income relationship, and the determination of the flow costs of housing. The final subsection explores where the stocks and prices of dwellings eventually settle and the route they might take to get there, unless dislodged by shocks.
11.2.1 Dynamics

Housing is a stock. The stock can move up or down over time but only gradually. Dwelling prices can and do move, too. But by contrast, on occasions, they can jump
violently. So the quantity of dwellings is a tortoise, but the price of a dwelling is more like a butterfly. Houses are what Hicks (1974) described as a "flexprice" good; by contrast, labor, with its typically annual money wage contract, is, like many services and some goods, "fixprice." We need to distinguish between a temporary equilibrium between the forces of demand and supply at any moment on the one hand and a long-run equilibrium on the other. We seek to understand the evolution of prices and quantities as time proceeds. Time is best thought of as a continuous variable, denoted by t. Let Q(t) be the price of a standard dwelling at date t, defined in real terms and thus deflated by a relevant price index, and H(t) the aggregate stock of dwellings. So our task is to formalize a set of relationships that explain (a) how these two variables are what they are at any moment and (b) where they will or should be eventually, at least in the absence of disturbances. But before exploring this model and what can happen when its assumptions get modified, it is useful to mention some facts about house prices. Knoll, Schularick, and Steger (2017) provide an invaluable analysis of how house prices have moved between 1870 and 2012 in Australia, Canada, Japan, the United States, and ten Western European countries. The story varies somewhat among countries, but the main message from the data is that house prices were broadly flat in real terms until around 1950 and have risen sharply since then. The main culprit, they find, is the rising price of land.
Despite their rapid population growth, occasionally damped by war, crowded European countries, it is surmised, exported much labor to the still small cities and largely empty farmlands in the New World in the Victorian and Edwardian eras; later, with the magnetism of big cities, land in favored centers became increasingly scarce, and after World War II, strong growth in income per head and generally low real interest rates led to a general boom in all assets, including land and dwellings on it.
11.2.2 Construction

The character and flexibility of the construction sector call for scrutiny. One task that a builder does is replacement work. The housing stock H(t) depreciates over time. Repairs and renewals occupy much of a construction worker's time. There might be very occasional disasters: fires, floods, storms, or earthquakes. These would entail a huge sudden jump in the amount of such work. But it is simplest to assume that making good the natural depreciation in the housing stock is a continuous, regular activity for all buildings. In that case, we might assume that repair and renewal work is a constant and uniform fraction of the housing stock. If we thought in terms of a year, it might be about 2 or 3 percent. Let us suppose that in continuous time, H(t) depreciates at the rate δ. New work is the other kind of building activity. This is Ḣ(t). This will be zero in the simplest long-run equilibrium, where the stock is unchanging. But it could easily be positive. Occasionally or in particular locations, at least, it could be negative; that happens if the stock appears excessive, and is allowed to dwindle over time. But it will decline slowly, a good deal more slowly, in all probability, than the rate at which H will
climb when conditions are favorable. Examples of declining H in our time include the inner core of Detroit, Michigan, where population has dropped by about two-thirds in three decades, and many villages in rural Greece. It is worth mentioning, though, that Michigan and rural Greece are quite rare examples of areas where aggregate real income has been sliding. In most places, those long-term trends have been healthily positive, which makes spells with absolute decline in H far less likely. The construction sector's total flow of output will be δ H(t) + Ḣ(t) at time t. What will this flow of output depend on? The most natural variable to put in here is Q(t), the real price of a new dwelling. If we sweep away complications, such as taxation, the interval of time needed to build, and any element of monopoly power, and assume that the builders' marginal costs are given, it is only Q(t) that will matter here. So the question now is what shape this supply curve has. If builders employ a specific factor of production, unique to that sector, any expansion in builders' output will drive that factor's price upward (assuming it is not in excess supply); and if a factor is fixed in supply, for a while at least, it will exert the same drag on the sector's ability to respond to a rise in Q(t). The elasticity of the construction sector's aggregate supply curve will depend negatively on the shares that specific (or fixed) factors command in total costs and positively on the elasticity of substitution between that set of factors and the set of variable factors, whose prices we might take as given. If this substitution elasticity is one and the variable factors' wages account for half a builder's value added, the construction sector's supply curve is unit elastic. We could then write
δH(t) + Ḣ(t) = aQ(t).   (1)
In what follows, we shall generally assume equation 1 to hold. The left-hand side of the equation denotes the physical output of an economy’s construction industry. It is worth noting, though, that the parameter a could itself vary somewhat over time and space and that the supply elasticity need not be unitary. Fixed factors tend to free up eventually, firms can switch into and out of construction, and materials and specialized labor may be very expensive in remote areas.
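The mechanics of equation 1 can be sketched in a few lines of Python. The parameter values below (δ, a, Q) are illustrative choices, not values from the chapter, and the fixed-factor elasticity formula is the standard Cobb-Douglas/CES result that underlies the chapter's unit-elastic example.

```python
# Equation 1: total building output  delta*H + Hdot = a*Q,
# so the housing stock evolves as  Hdot = a*Q - delta*H.
# Parameter values are illustrative, not the chapter's own.

delta = 0.025   # annual depreciation rate (the text suggests 2-3 percent)
a = 1.0         # supply parameter in equation 1 (assumed)

def h_dot(H, Q):
    """Net addition to the housing stock implied by equation 1."""
    return a * Q - delta * H

def supply_elasticity(fixed_share, sigma=1.0):
    """Supply elasticity with one fixed factor of cost share fixed_share
    and elasticity of substitution sigma: sigma*(1 - fixed_share)/fixed_share.
    This is the standard result behind the text's unit-elastic example."""
    return sigma * (1 - fixed_share) / fixed_share

# With Q held fixed, H converges (slowly, at rate delta) to a*Q/delta.
Q = 0.4
H = 10.0
dt = 0.25  # quarterly Euler steps
for _ in range(4000):
    H += h_dot(H, Q) * dt

H_star = a * Q / delta
print(H, H_star)               # simulated stock approaches a*Q/delta = 16.0
print(supply_elasticity(0.5))  # sigma = 1, variable share 1/2 -> 1.0
```

The slow convergence rate δ is the formal counterpart of the tortoise-like stock dynamics described in section 11.2.1.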
11.2.3 Housing Costs, Other Prices, and Household Incomes

Housing costs are a flow; we shall examine these in some detail in section 11.2.4. At this point, we ask what assumptions we can make to simplify the relation these bear to incomes and to other prices. The answer must be to assume Cobb-Douglas preferences for a representative household. These entail that the share of the budget devoted to housing is a constant, unaffected by incomes or any prices. Cross-elasticities of demand vanish in this case, and income elasticities are unitary. When the flow costs of housing are accounted appropriately—to reflect imputed rent (defined below), for
example—it appears that average accommodation costs bear a broadly similar factor of proportionality to average incomes, which changes little over time and space and varies little across income groups. One thinks of flow housing costs as averaging about one-third of disposable income.1 In numerical calculations, we shall assume that this is so. More generally, if F(t) denotes the flow costs of housing (per dwelling) at date t, and y denotes a household's real income, which we shall assume is given, we might write
F(t) = by   (2)
Here, 1 > b > 0 (and perhaps about one-third).
11.2.4 Flow Costs of Housing

If you are rich enough, you can choose between renting the property you live in and buying it as an asset, either freehold or on a long lease. With more modest means, you can often take the latter option but borrow some of the asset purchase price. And if you own a dwelling, you can elect to rent it out and live elsewhere. So owner-occupants pay a kind of rent to themselves, "imputed rent." What you pay for the purchase of the representative dwelling is Q(t). As F(t) is what you pay as a flow per dwelling, how are these two related? What would indifference between ownership and renting imply? Complications such as market imperfections and tax aside, F(t) is what a landlord needs to charge to break even on one of his or her properties. The landlord has three key running costs to consider. One is the real rate of interest. Call this r. The capital tied up in the business must yield this for the landlord to stay in business. A second is depreciation; this, too, must be covered. The final element is the expected capital gain (or loss) at the end of the rental interval. If the asset is now expected to go up in value, buying-to-let looks profitable, so F will be lowered. If capital losses on the asset are anticipated, F needs to go up to cover them. Personal computers have long displayed a steep fall in real, quality-adjusted price of perhaps 40 percent per year or more; this makes for higher F and will render either renting or buying less attractive than it would otherwise have been. Therefore, we can pinpoint an equation for F:
F(t) = Q(t)[r + δ] − E(dQ(t)/dt)   (3)
Here, E(dQ(t)/dt), which could be defined as z(t)Q(t), denotes the expectation of the absolute momentary capital gain. So z(t) is the expected proportionate capital gain at t. Equation 3 gives us the link we need between the flow cost of housing and its purchase price. Combining equation 3 with equation 2 and writing Y for aggregate income, we find
H(t)Q(t) = bY/[r + δ − z(t)]   (4)
This is because economy-wide aggregate F will equal (r + δ – z)HQ. If expectations are rational, actual and anticipated capital gains or losses might turn out to be approximately equal, at least in the absence of a surprise. When z vanishes, equation 4 describes a negative relation between Q and H, which, in our case, are inversely proportional to each other, all else equal. Taxation is one of numerous factors that can complicate equation 4; we shall look at that issue, and others, below. One characteristic of equation 4 merits emphasis. The left-hand side relates positively to the level of the (real) price of the asset. The right-hand side increases with the anticipated (real) rate of change of the asset price. So the faster people think an asset will appreciate, the more attractive that asset will be and, therefore, the more expensive. When expectations are correct, at least, there is a positive link between level and expected rate of change. That points to an explosive property. The greater the future capital gain it appears to offer, the more attractive it will be to own now. This feature contrasts with the stabilizing character of the dynamics of H, as evidenced by equation 1. Given that equation 1 displays a positive association between H and Q when H is stationary, and equation 4 displays a negative one when Q is (expected to be) stationary, the combination of a stabilizing force in one dimension and a destabilizing one in the other is exactly what we need—if expectations are on average correct—to pinpoint a unique and (approximately) foreseeable path to a long-run equilibrium.
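The landlord's break-even condition in equation 3 lends itself to a small numerical sketch. The figures below (a price of 100, r of 3 percent, δ of 2.5 percent) are illustrative assumptions; the final line echoes the personal-computer example, where a steep expected price fall pushes F up sharply.

```python
# User-cost relation (equation 3): F = Q*(r + delta) - E[dQ/dt]
#                                    = Q*(r + delta - z),
# where z is the expected proportionate capital gain.
# All numbers are illustrative, not taken from the chapter.

def flow_cost(Q, r, delta, z):
    """Flow cost per dwelling implied by landlord break-even (equation 3)."""
    return Q * (r + delta - z)

Q, r, delta = 100.0, 0.03, 0.025

print(flow_cost(Q, r, delta, z=0.0))    # no expected gain: F = 5.5
print(flow_cost(Q, r, delta, z=0.02))   # expected appreciation lowers F: 3.5
print(flow_cost(Q, r, delta, z=-0.40))  # PC-style 40% expected fall: F = 45.5
```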
11.2.5 Long-Run and Short-Run Equilibria

In an economy where population and real incomes are constant, we can now find the long-run equilibrium values of H and Q. Equation 1 implies that Q/H is δ/a when the housing stock is stationary. And if z vanishes, the real price of housing, actual and expected, is stationary, too, so HQ = bY/(r + δ). So these steady-state values, with asterisk superscripts, are H* = √x and Q* = (δ/a)√x, where x = abY/[δ(r + δ)]. Qualitatively, the long-run quantities and prices of housing are boosted by higher incomes and by a higher budget share for housing, each with elasticities of one-half, and also boosted by a fall in the long-run real rate of interest. Easier supply conditions in the construction sector (higher a) raise the stock and cut the price of dwellings, while faster depreciation has the opposite effect. Away from the steady state, the dynamics of H and of Q become important. Equation 1 reveals that when Q lies above the value (δH/a) where H is constant, then H will be rising. And as H gradually climbs, houses get slightly cheaper. In our case, this effect should be anticipated. With Q below δH/a, H will be declining, as builders are so poorly rewarded now that they will do less repair and renewal work than would be needed to keep the housing stock intact. So in this depressing case, Q will be slowly recovering as H shrinks. Under rational expectations, agents do their best, with common information, to forecast the long-run equilibria and, furthermore, the (unique) paths of Q and H toward the steady state from the current position, whatever it is. They may well get surprised by shocks along the way, but in their absence, Q and H should evolve as expected
along that unique path.2 This path, pointing in an east-southeasterly direction in figure 11.1 toward the steady-state point A, is the arrowed path labeled SS.

Figure 11.1 The Simple Model of the Housing Market. The phase diagram plots the real house price q against the stock H, showing the Ḣ = 0 locus, the q̇ = 0 locus, and the arrowed saddle path SS converging on the steady-state point A.
Note: This diagram is based on one in John Fender, Monetary Policy (Chichester: John Wiley, 2012), by kind permission of the publisher and the author.
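The steady-state formulas and their comparative statics can be checked directly. The parameter values below are hypothetical; the point is only to confirm the one-half elasticities and the signs reported in the text.

```python
# Steady state of the model (section 11.2.5):
#   H* = sqrt(x),  Q* = (delta/a)*sqrt(x),  x = a*b*Y / (delta*(r + delta)).
# Parameter values are illustrative, not taken from the chapter.
import math

def steady_state(a, b, Y, r, delta):
    x = a * b * Y / (delta * (r + delta))
    H_star = math.sqrt(x)
    Q_star = (delta / a) * H_star
    return H_star, Q_star

a, b, Y, r, delta = 1.0, 1 / 3, 1.0, 0.03, 0.025

H0, Q0 = steady_state(a, b, Y, r, delta)
H1, Q1 = steady_state(a, b, 2 * Y, r, delta)   # double aggregate income
# Income elasticity of one-half: doubling Y scales both H* and Q* by sqrt(2).
print(H1 / H0, Q1 / Q0)

H2, Q2 = steady_state(2 * a, b, Y, r, delta)   # easier supply conditions
# Higher a raises the long-run stock and cuts the long-run price.
print(H2 > H0, Q2 < Q0)
```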
11.3 Monetary and Financial Stability Policy Implications

The housing market is very sensitive to the real rate of interest. This has been labeled r. It has been assumed above to be exogenous and constant. In practice, it is neither. Monetary policy can exercise a substantial, yet time-limited, degree of influence on it (or on them, as there will typically be a family of long rates at any date, each for a different horizon). And when it does so, the current price of housing can jump sharply, up or down. A surprise cut in r pushes the demand for H up and will raise its price; a surprise jump in r will have the opposite effect. These repercussions are explored and sketched out here.
11.3.1 Monetary Policy in the Simple Real Estate Model

Suppose the monetary authority lowers a policy interest rate. In practice, at present, at least, policy rates are invariably nominal. The idea behind lowering policy rates is to
stimulate aggregate demand. This should, in time, tend to restore inflation toward some implicit or explicit target that features in the country's policy framework. Because inflation is liable to creep up, if the policy rate stays down for a while, the fall in real interest rates could actually be amplified for an interval of time—unless, that is, enough disinflationary pressure emerged from the shock that occasioned the policy rate cut in the first place. In the longer run, it is widely accepted that real interest rates are hardly influenced at all by monetary policy, and evidence often bears this out. If money is nonneutral in the short run, it is because the prices of some goods and services, and quite possibly some factors such as labor, are expressed in nominal terms and react only sluggishly to temporary imbalances between supply and demand. But price sluggishness is not permanent. Given time, this implies that monetary policy has next to no enduring real effects. So eventually, real interest rates should remain unaffected by monetary policy innovations and governed ultimately by other forces. To take one example, many modern endogenous growth models can be solved for steady-state rates of growth and real interest; in such cases, key influences are the productivity of training3 or inventors4 on the one side and consumers' intertemporal preferences on the other. In another example, a small open economy that allows international capital movements faces (longer-run) real interest rates which are set by conditions in the rest of the world. This said, it is a standard feature of many theories of how monetary authorities should—and do—respond to shocks that policy rates are subject to inertia. Any moves are taken in little steps.
So a surprise policy rate cut this month can engender beliefs that a similar, if smaller, cut will follow next month and that a sequence of further cuts will come after that.5 The principle behind this is that such beliefs should make monetary policy a more powerful stabilizer of the macroeconomy, since those firms that might happen to be free to change their product and input prices will tend to do so in larger amounts and more quickly. If such a regime is in force, it is quite reasonable for agents to predict that a surprise nominal policy rate cut might succeed in depressing expected real interest rates, if not forever, at least for some years to come. Suppose that at date 0, a surprise loosening of domestic monetary policy does succeed in reducing perceptions of current and medium-term real interest rates and that the relevant value of r in our model does indeed fall. Or alternatively, for some reason, people think that this will be the consequence. What does this entail in the context of the model in section 11.2? Perceptions of the steady-state values of both the real price of houses, Q*, and their volume, H*, will certainly go up. If, for instance, the long-run cost of capital, r + δ, drops as a result by 19 percent, the algebra suggests that with full information, agents' perceptions of both Q* and H* should be raised by 9 percent (each varies with the cost of capital with an elasticity of minus one-half, so the rise is roughly half of 19 percent). Does this mean that the current nominal price of dwellings jumps by 9 percent now, at date 0? No, the model implies that with forward-looking behavior in the housing market, that increase will, in fact, be too small. If house prices were expected from now on to mark time in real terms, the 19 percent fall in the cost of capital should push real house prices up by the full 19 percent. Only then would F be unchanged, given the still unchanged volume of housing available. But the immediate 19 percent jump in Q goes too
far. Were it to occur, people who understood the model and predicted a 9 percent long-run increase would have to anticipate capital losses, as Q was brought down as a result of the gradually increasing supply. That would imply a somewhat higher prospective flow cost of housing and therefore a smaller initial overshoot in Q. In our example, the 19 percent assumed-to-be-permanent fall in r + δ goes hand in hand with a rise of approximately 11 percent in the initial value of Q. There is therefore quite a modest immediate overshoot in real house prices, which are then forecast to creep back slightly to preserve equilibrium between demand and supply along the saddle path toward the steady state and to generate the required sequence of slightly declining and very gradual capital losses that path entails. Now modify the example. Say monetary policy tightens unexpectedly a few years later. To keep matters simple, imagine that the fall in r is suddenly and fully reversed and, again, as a complete surprise. People now expect, let us say, that r will stay at its former level in perpetuity. What this implies is that the real dwelling price and the volume of the housing stock must revert to their old steady-state values. So Q will now tumble from something above the now illusory higher steady-state level to something below its new steady-state value. The initial jump in Q will be repeated—but now in the opposite direction. What follows will entail a collapse in builders' output levels, no new dwellings will get built, and only some of the depreciation work will be undertaken. Slowly, Q will climb little by little, and H will shrink, in the gradual, painful return to the old steady state. And similar developments will occur in the commercial property market as well: a sudden crash in prices and protracted weakness in building activity.
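The saddle-path response to a permanent fall in r can be sketched by linearizing the system around the new steady state. Under rational expectations, equations 1 through 4 give Ḣ = aQ − δH and Q̇ = (r + δ)Q − bY/H. The parameters below are illustrative rather than the chapter's own (unstated) calibration, so magnitudes will differ from the 9 and 11 percent figures; what the sketch checks is the qualitative pattern of an impact jump in Q that overshoots the new long-run price.

```python
# Linearized saddle-path sketch of section 11.3.1's thought experiment.
# Dynamics: Hdot = a*Q - delta*H ;  Qdot = (r + delta)*Q - b*Y/H
# (from equations 2-4 with z = Qdot/Q). Parameters are assumptions.
import math

def steady_state(a, b, Y, r, delta):
    x = a * b * Y / (delta * (r + delta))
    return math.sqrt(x), (delta / a) * math.sqrt(x)

def impact_jump(a, b, Y, r_old, r_new, delta):
    """Place Q on the stable arm of the new steady state, starting from
    the old steady-state stock; return old, impact, and new-long-run Q."""
    H_old, Q_old = steady_state(a, b, Y, r_old, delta)
    H_new, Q_new = steady_state(a, b, Y, r_new, delta)
    # Jacobian at the new steady state:
    j11, j12 = -delta, a
    j21, j22 = b * Y / H_new**2, r_new + delta
    tr, det = j11 + j22, j11 * j22 - j12 * j21
    lam = (tr - math.sqrt(tr**2 - 4 * det)) / 2   # stable (negative) root
    slope = (lam - j11) / j12                     # dQ/dH along the stable arm
    Q0 = Q_new + slope * (H_old - H_new)          # Q jumps onto the arm
    return Q_old, Q0, Q_new, lam

Q_old, Q0, Q_new, lam = impact_jump(a=1.0, b=1 / 3, Y=1.0,
                                    r_old=0.03, r_new=0.02, delta=0.025)
print(lam < 0)             # saddle: exactly one stable root -> True
print(Q_old < Q_new < Q0)  # impact jump overshoots the new long-run price -> True
```

The negative slope of the stable arm is what generates the overshoot: with H initially below its new steady-state level, Q must start above Q* and decline gradually as the stock accumulates, exactly the sequence of small capital losses described in the text.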
In this section, we have looked at the effect of a monetary policy shock, first favorable and then unfavorable. There can be other types of shock. Actual or expected income could jump unexpectedly, up or down, with quite similar—if more persistent—effects to a loosening or tightening in monetary policy. There could be a sudden inflow of immigrants or outflow of emigrants. Then there could be demographic shocks. Life expectancy might rise or fall; average family size could change; divorce might become rarer or more common. In such cases, the relative demands for smaller and larger dwellings would change, leading to disturbances in their relative prices, which the joining and splitting of houses would later help to reverse a bit over time. Technical progress in construction could gradually lower Q and raise H. And perhaps most important of all, the real interest rate r could drop unexpectedly for a panoply of possible reasons, quite independently of monetary policy. Indeed, the long upward drift of house prices in many parts of the world from around 2002 to 2007 is the natural concomitant of falling world real interest rates across these years. Dwellings were not the only asset to appreciate in value. So did equities, gold, oil, and long-term bonds. Behind the slide in real interest rates lie many factors. One was the growing adoption of lighter taxation on capital income, or on saving, in a number of advanced countries. Another took the form of big rises in life expectancy rates. This was fastest in China but rapid in Japan and much of Europe, too. It occurred at the very period when the big bulge of babies born after World War II were approaching retirement and perceiving a greater need to save more to boost future pension income
while they still could. A third influence was the big jump in saving, relative to investment, and in exports, relative to imports, that occurred in much of Asia and especially China. A wave of additional net saving helped to lower real interest rates in the North Atlantic and Mediterranean economies. This story is well told by Bernanke (2015) and King (2016).6 Easier credit to finance house purchases was an immediate consequence.
11.3.2 Financial Stability Implications

How will the financial system be affected by the monetary policy changes? When the policy rate was initially reduced, banks faced some incentive to lower nominal rates on deposits and loans. Profit-maximizing institutions will seek to set the deposits they attract where the marginal cost of holding and managing these deposits balances a key benchmark rate they will presumably regard as exogenous—the yield they could obtain, let us say, from their own deposits with the central bank or from holding Treasury bills close to redemption. On the asset side, banks will wish to adjust loans to the point where that benchmark rate just balances the expected marginal revenue, net of any associated marginal costs, of issuing and managing those loans. In a perfectly competitive setting, the gap between the policy benchmark rate (call it B) and the rates offered to their depositors will reflect marginal costs of attracting and managing those deposits. The excess of loan rates over B would similarly reflect marginal operating costs relevant to issuing and managing loans, adjusted upward to defray losses from any anticipated defaults. At the opposite extreme of a single commercial bank acting as a monopsonist for the public's deposits and a monopolist in the loan market,7 these gaps will widen. One reason this might arise goes back to Edgeworth (1888): many aspects of banking, such as the holding of reserves, are liable to display increasing returns. Assuming the simplest conditions, with homogeneous loans and deposits and no price discrimination, deposit rates will be cut by a term inversely proportional to the interest elasticity of deposit supply. Loan rates will be increased by a term inversely proportional to the interest elasticity of loan demand. Most countries outside the United States have banking systems largely dominated by a few large firms.
This is typical in Western Europe, Australia, Canada, and much of Africa, Asia, and Latin America. The rather high degree of concentration makes oligopoly the best description of the retail banking markets in such countries. Oligopoly spans a wide range of possibilities. When, as here, product differentiation is only minor, this span varies from a perfectly competitive outcome to the monopoly-monopsony case. If the banking industry is contestable (that is, with parity of technology and factor supplies between incumbents and a potential entrant, an entrant free to enter before incumbents can reprice in retaliation, and no sunk costs), an oligopolist—and even a monopolist-monopsonist—will behave like a perfect competitor. The same thing can occur if a bank takes its rivals’ rates as given and sets its own retail interest rates to try to maximize its profits, a state of affairs described as Bertrand equilibrium. At
Real Estate, Construction, Money, Credit, and Finance 347 the other end, full-blown collusion collapses the oligopoly into effective monopoly- monopsony: the cartel acts like a centrally coordinated multiplant monopolist. Between these extremes, when oligopolists choose quantities of loans and deposits to maximize their perception of profits, talking rivals’ quantity decisions as given, we enter perhaps the most plausible member of the family of models, that of Cournot (1838). Here the size of the retail markups on loans and markdowns on deposits is critically sensitive to the number of players (at least in the simplest case where firms’ costs are similar). With more banks, these spreads go can only down. The theory suggests that the biggest drops occur when that rising number is small. And what that implies is that downward pressure on loan rates, in the wake of a policy rate cut, will be largest when (as in the United States, for example) the number of retail banks is vast (it is still nearly ten thousand, although waning) and interstate competition is allowed, as it has been since the 1999 repeal of Glass-Steagall. In all markets of such types, the fall in the policy rate must tend to lead to cuts in loan rates. Evidence has long confirmed this,8 although pass-through is usually staggered, far from uniform across countries, and, at least in some instances, somewhat incomplete. Gambacorta and Mizen, in chapter 12 of this book, find that it tends eventually, in fact, to be complete. The resulting expansion of loans is not immediate. It takes some time to grow. Banks are only too aware that lending against the collateral of property is much safer than issuing other forms of loans; “safe as houses” could be their motto. And on the commercial side, property development will often tend to respond much faster than other forms of investment to the lure of cheaper external finance. 
A surprise policy rate cut will therefore act to stimulate real estate construction quite quickly. That aggregate is liable to increase much faster than other components of aggregate expenditure, except perhaps for purchases of cars and other consumer durables that are far more sensitive to lower interest rates and cheaper credit than other types of household outlay. Banks will tend to expand loans in real estate and other categories. If and where the rules permit, they may well economize somewhat on their now less lucrative holdings of key reserve assets, such as short-term government debt and interest-bearing deposits with the central bank. If they do that, they will be rendered a little less secure when the financial weather changes unexpectedly. And this is on top of the fact that the rising exposure to real estate will in the meantime have made them a good deal more vulnerable to any adverse shock, of which an unexpected policy rate reversal is just one instance. Those financial institutions that provided mortgages to house buyers and loans to companies that acquired corporate real estate will have hedged their bets on their borrowers' ability to repay loans by holding collateral. This is almost universal practice. The collateral will take the form of title deeds to the buildings purchased. An unwelcome and surprising return to higher real interest rates will have raised the current debt-servicing costs of many borrowers, particularly if the mortgages were at variable interest rates. Delinquencies will increase. The quality of lenders' loan books will deteriorate. And the underlying value of assets held as collateral will have fallen, too.
If the subsequent unexpected rise in real interest rates is small, most lending institutions will be perfectly able to survive what is no more than a minor disagreeable worsening of their balance sheets. Any lender close to insolvency, however—and the previous interval of low policy rates can only have raised the chances that there will indeed be some in this position—may be tipped over the edge into failure. And the bigger the collapse in real estate values, the larger the number of institutions on which that sword of Damocles might fall. Property developers, building firms, and their intricate chains of subcontractors are, of course, directly at risk; and when they fail, so might their creditors. But bank and real estate interactions don't just end there. There can be other repercussions. Suppose one retail bank fails. In the event of bankruptcy, administrators will normally take over its assets and sell them quickly to help meet creditors' demands as much as they can. A fire sale of private homes (perhaps awaiting completion) and commercial property could ensue if large chunks of collateral are dumped on the market rapidly. There can then be wider knock-on effects: a more general shadow might fall over Q elsewhere in the economy. Other banks' collateral values will slip, taking them nearer to the insolvency boundary. And the mere fear that another bank could be in trouble might be enough to trigger alarm about near-future values of Q. Miles (2015) explores several of the disturbingly numerous ways in which housing market shocks can trigger serious effects on a country's financial system. A switch to bad times in the building industry will reduce government tax revenues and widen prospective budget deficits, and that might pile more upward pressure on risk premiums and real interest rates on bonds. This could lead to a squeeze on households' perceptions of their future net income and thus contribute to a further lowering of the demand for housing.
On top of that, banks will start to raise their expectations of future delinquencies in their housing loan portfolio, which would reduce the supply of credit and increase its price. All these considerations will have amplified the positive response to the previous policy rate cut—and magnified the downturn when interest rates unexpectedly go up again.
11.4 Complications

The analysis of real estate markets up to now has been deliberately kept quite simple. It has omitted a group of practical considerations that can modify the picture in interesting ways. This section looks at several of these.
11.4.1 Dwellings Differ

Dwellings vary. They differ in floor area, height, sunlight, insulation, amenity, agreeability of neighbors, and other characteristics. The analysis above has abstracted from all such
Real Estate, Construction, Money, Credit, and Finance 349

features. When the impact of any increase in income or a fall in the cost of capital was under consideration, it was argued that a representative household would demand, and in due course acquire, more units of “housing,” H. The details of how that might be done were left unexplored. Housing is not indivisible: families can have recourse to holiday lets, time-sharing is feasible and not unknown, bigger residences can be subdivided and adjacent ones united, extra rooms can be added through converting a loft or excavating a basement or let out to a tenant. Younger growing families can trade up as their resources permit, and older ones can downsize. But it will definitely take time and resources for the housing stock to adjust to variations in family size and space preferences.
11.4.2 Location

The most important feature differentiating one dwelling from another, however, is its position. The model above has abstracted from geography. One might think that the society under consideration inhabited a small central area where everywhere was close enough to everywhere else for location to be of little relevance. A city-state such as Singapore could serve as an example. An alternative assumption might be to imagine that transport costs (measured in time as well as in pecuniary terms) were effectively negligible. But even within cities, transport costs are not trivial, and housing costs vary sharply, especially when the geography is disaggregated. In states with both urban and rural areas, such differences can be much bigger still. Dwellings in the center of the capital city might cost twenty times as much as those in remoter countryside districts—or more—when measured on a per-square-foot or per-square-meter basis. Populations have often moved. Political and technological changes were often the drivers behind this. Tribes who lived by hunting favored islands near rich fishing grounds, once they had mastered seamanship. Later, as agriculture took over and dense woods were cleared for farming, people shunned the coasts and migrated inland. In centuries long past, hilltops would become more attractive than fertile river valleys during troubled periods, when people suddenly valued defense and safety more highly than convenience and bigger harvest yields. When political tranquility returned, back they would go. People would move, but unless they lived in tents, they could hardly take their dwellings with them. So migration, whatever the changes that spurred it, would induce a building boom. In the largely agricultural societies that prevailed in the United Kingdom until around 1750 and somewhat later in much of the rest of the developed world, cities were few and far between.
Urban dwellers accounted for a very modest fraction of the population. Those in ports and market towns traded a narrow range of objects that people couldn’t easily find locally or make for themselves. But most people lived and worked on the land. In time, however, the progress of technology, the unveiling of indivisibilities and increasing returns, and the growing division of labor led to two things: greater specialization of work and a broadening of demand for goods and services. These trends both favored demographic concentration.
An intriguing process that began in Britain and the United States two to three centuries ago gathered pace. It led to big urban building booms, alternating with busts. By the later nineteenth century, something like a twenty-five-year cycle had developed. British capital would be sunk in new housing in British cities for a decade or so. That would then be followed by a phase of heavy labor emigration and capital exports from the United Kingdom to support new construction in the United States, as well as in Argentina, Australia, Canada, New Zealand, and South Africa. A century later, poorer countries such as Brazil, Greece, India, Ireland, Mexico, Peru, Portugal, and Thailand would witness great waves of workers forsaking the land for their chief cities.9 Again, they could not carry their homes with them. Such countries would witness enormous metropolitan building booms, sucking in still more labor from the depleted countryside—only to be followed, all too frequently, by very painful financial crashes. These observations enable us to see how and why Q can differ so much within countries, and also why Ḣ(t) can be large and positive in some parts of an economy but not in others. Nonportability is one factor; changes in technology or politics, changing the relative appeal of residence in one area compared to another, provide a second. The bricks-and-mortar element in the price of a house is much the same right across an economy, however. Building workers may tend to get paid more in the city than in the countryside, but materials are often costlier outside urban districts.10 The main reason Q is higher in cities has to do with the price of land. An acre of mid-quality agricultural land well outside a city might trade for between two and four months of a typical industrial worker’s pay. At the center of Manhattan, an acre of empty real estate might sell for many thousand times that.
And an acre of a barren and inaccessible mountainside might be had for no more than the price of a paperback book. Within a city, land will generally be at its dearest at the center and keep slipping in price the farther you move away. Similar changes can be seen in rents, as equation 3 implies, assuming that the capital gain or loss terms, z, are equivalent. One story, due originally to von Thünen (1826), links rent differentials to travel costs: if commuting from one mile farther out costs you and your partner an extra $2,000 per year per person in lost time and extra fares, the annual rent on an identical apartment a mile nearer the center where you both work will be $4,000 higher, assuming you and your partner are typical. And if r + δ − z adds to 4 percent per year, that indifference condition implies that its freehold price would be an additional $100,000. A possible objection to the von Thünen principle might be called the Hong Kong response: why not build up in the middle, instead of out? If elevators could be made to work dependably at the speed of light, additional stories could be added at the same unit cost, and skyscrapers could be arbitrarily high; a city could consist of one gigantic Tower of Babel. Leaving aside the tiresome facts that elevators work slowly and building costs per floor do tend to rise with height, Lucas (2000) showed that a neater (or, in fact, perhaps complementary) explanation of the geography of land prices might be that there are positive externalities linking firms, which decay with the distance between them. It pays a business to be close to where the action is.
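The von Thünen arithmetic can be checked in a few lines. This is only a sketch: the numbers (a two-earner household, an extra $2,000 per person per year of commuting cost per mile, a user-cost rate r + δ − z of 4 percent) are the chapter's own, and the capitalization condition, rent equal to (r + δ − z) times the freehold price, is a reading of equation 3 rather than a quotation of it:

```python
# Capitalizing a commuting-cost saving into rent and price differentials,
# following the chapter's von Thuenen example.
commute_cost_per_person = 2_000   # extra annual cost per person, per mile out
earners = 2                       # two-earner household
user_cost_rate = 0.04             # r + delta - z, per year

# Living one mile nearer saves the household this much per year, so an
# identical apartment a mile nearer commands exactly that much more rent:
rent_differential = commute_cost_per_person * earners     # $4,000 per year

# The indifference condition rent = (r + delta - z) * price then
# capitalizes the rent differential into a freehold price differential:
price_differential = rent_differential / user_cost_rate

print(rent_differential, price_differential)  # 4000 100000.0
```

The same division explains why a fall in r + δ − z, with rent differentials unchanged, steepens the land-price gradient toward the center.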
Whatever drives them, the forces encouraging agglomeration on the one side and the incentives to save on Q and F by moving out will have to balance out and yield the pattern of rents and land prices that we observe. Disentangling the various causes and effects is very challenging in practice. But the recent paper by Ahlfeldt et al. (2015) throws light on what is exogenous and what is endogenous in the case of Berlin, a city that was split unexpectedly in 1961 and reunified, with equal rapidity, thirty-eight years later. Their paper on “density” is a major contribution to what we know about what determines the land price element in the purchase prices and rentals of dwellings. Bringing land prices into our model modifies the algebra in several possible ways. An elegant method of doing so is given by Miles (2012). One simple, useful idea is that there are now two concepts of Q. The Q in equation 1 is what building firms receive for the bricks and mortar. The Q in equation 3, however, must now include not just that but also the capital, freehold price of the land that accompanies it. And z, the capital gain or loss term, relates to that gross-of-land price. So it helps to think of land as acting rather like a tax, separating the supply price (given by equation 1) and the demand price above it (given by equation 3). The land element will be free, as long as there is excess supply of land (in the relevant location)—a condition rarely fulfilled, save on a remote island such as Baffin, Pitcairn, or Tristan da Cunha. An implication of section 11.4.1 above is that dwellings vary in the amount of land attached to them. A condominium, tenement, or flat in an apartment block normally has no yard or garden and shares the land space with many other such dwellings, so the “tax” wedge will be modest. But for a detached house surrounded by yards and located in a city, it could be huge.
It is easy to see why property developers use all their wiles urging local (zoning and school) authorities to release playing fields or to tear down grand old houses in spacious lots or redundant churches and fill in the vacated sites. The forces explored by von Thünen, Lucas, and Ahlfeldt et al. seek to explain the spatial pattern of rents and prices. Introducing the concept of scarce land is an indispensable basis for this pattern. But the equilibrium geographical distribution of such prices is a crucial extension of the model, and one that remains broadly neutral with respect to the financial variables examined earlier in this chapter. On the other hand, a large geographical (or other kind of) shock that brings gains to some households and losses to others can provoke trouble for the financial system. Many banks have a regional focus. Mortgage obligations of losers will damage their lenders’ balance sheets, posing potentially grave problems in the aggregate, because shocks that affect regions differently can tip some lenders into insolvency and trigger a more general panic. So distributional effects can really matter.
11.4.3 Population and Income Trends

When population or incomes per head appear to have positive time trends and land is in excess demand, land prices will keep rising, even if the bricks-and-mortar price does not. This is so because the term z will now be positive. Capital gains are expected
from house ownership; the spur to buy, as opposed to rent, is sharpened. Rents will have a rising trend, and outright ownership will be an investment, a prepurchase of “imputed” rent. Take trends in income per head, y, first. Suppose land is free. With the parameter a given, so the construction supply curve is unchanged, equation 4 reveals that both Q and H, the real price and volume of housing, will trend upward, too. In long-term equilibrium, each will increase at half the speed of income per head. This case reflects the idea that while technology is advancing in the rest of the economy, it stagnates in building. That is a possibility. But if the construction sector sees technology advancing at the same rate as other sectors, the parameter a will be rising in line. In that case, Q will be trendless, and H will climb in proportion to incomes. Dwellings will just keep getting bigger. If land is priced, however, the logic of section 11.4.2 is that the cost of buying a dwelling will keep rising even if technical advances keep the bricks-and-mortar price constant and will climb still faster if they do not. In that case, land price growth should be anticipated and built into capital values. And if and when the expected income growth rate falls, so will the market price of the land associated with dwellings. Japan in 1990–1991, discussed later, is a dramatic example of that. The effects of a trend in the population size are quite similar. Results turn on how population numbers affect the construction sector’s supply curve. If the aggregate labor force marches in step with population, some of the new labor force entrants will enter the building trades. With land free, if the share of building workers in the total labor force is given and the newcomers enjoy incomes similar to those of the existing population, Q will be stationary in long-run equilibrium, and H will keep rising with the population.
Once again, with land in excess demand, additional population must exert upward pressure on the price of land and hence on the price to buy a house. Miles (2012) is eloquent on this subject.
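The half-speed result for income trends can be seen in one line, under two assumptions that paraphrase the chapter's equations rather than quote them: on the demand side, household outlay on housing services is proportional to income per head; on the supply side, the long-run construction condition ties the real price to the stock linearly for a given parameter a:

```latex
% Demand: outlay on housing proportional to income,   Q H \propto y
% Supply: long-run price rising linearly with stock,  Q \propto H
\[
  QH \propto y
  \quad\text{and}\quad
  Q \propto H
  \;\Longrightarrow\;
  Q^{2} \propto y \ \text{and}\ H^{2} \propto y
  \;\Longrightarrow\;
  \hat{Q} = \hat{H} = \tfrac{1}{2}\,\hat{y},
\]
```

where hats denote proportional growth rates. If technical progress raises a in line with the rest of the economy, the supply condition shifts out over time, Q becomes trendless, and H absorbs the whole of income growth, as in the text.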
11.4.4 Taxation

Taxation can affect the housing market in various ways. First, the relevant definition of household income is net of income tax. This will be the basis of all consumer demands, including those for housing services. A higher income tax rate structure will serve to weaken housing demand; if permanent, it would lower both Q and H in long-run equilibrium. A second effect can come via indirect taxation (such as value-added tax, VAT), if any, on the activities of the building industry. That will act like a reduction in parameter a in equation 1, with similar long-run effects on Q (defined as the price the builder gets) and H. Third, purchases of dwellings may attract stamp duty. This will tilt households toward renting rather than buying or selling. It has been used as a macroprudential policy instrument from time to time; Singapore is an example. Fourth, any tax levied on the rental of a dwelling would tend to work in the opposite direction, encouraging ownership (if imputed rent is taxed more favorably, if at all).
A fifth effect could ensue from taxing capital gains on a house sale. That would encourage a household in an appreciated house to stay put, because delaying a move would be similar to receiving an interest-free loan on its implicit tax liability. Reduced geographical mobility of labor would result. But if, as in many countries, gains on sales of other assets are taxed, while those on owner-occupied houses are not, individuals are encouraged to plow savings into dwellings rather than other assets on which yields might well have been higher. That is liable to create a distortion. Granting some element of relief for mortgage interest against income tax constitutes a sixth effect, this time negative. That is permitted in Norway, Switzerland, and the United States but not in Canada, France, or the United Kingdom, for example. Where granted, this relief tends to encourage house ownership as opposed to renting and to assist occupants with mortgages as opposed to (older or wealthier) households owning dwellings outright. But it tends to raise house prices. And it carries the broader disadvantage of entailing, sooner or later, a higher and thus more distortionary structure of income or indirect tax. The seventh and eighth ways in which taxation can affect the housing market are perhaps the most important. The seventh is property tax. This is an annual tax usually related to the value of the property. It is levied by the city, school district, county, or township in most of Canada and the United States. That system works quite well, though it can lead to instabilities when, as in Detroit, population shrinkage and rising debt create a vicious circle, inducing firms and richer families to hop across political boundaries. In the United Kingdom, local councils levy “council tax” in one of several bands depending on what the value of the dwelling is thought to have been in the early 1990s.
It replaced a community charge (or poll tax), which was in force for one or two years, and the poll tax had in turn replaced a US-style system of property taxes called rates. But unlike rates, the poll tax was unrelated to income, deeply regressive, and highly unpopular. The proposal to switch from rates to poll tax coincided with a giddy house price boom and probably exacerbated it, especially for higher-value properties and if people thought that poll tax would probably last forever. Property taxes would generally be neutral between owning and renting if levied on both types of property, as they often are. But the eighth and final possible tax on housing is certainly not. This would be to tax the imputed rent that owner-occupants pay themselves. A few countries do tax it; Switzerland is one. A Supreme Court judgment in the United States forbade it in 1934, and Britain scrapped it many decades ago. Were the owner-occupants to invest the capital—if they have it—in equity or bonds instead, they would be liable to tax on dividends or interest. Were they to rent out their property to someone else, the rental income would be taxed. So exonerating imputed rent from tax is an anomaly and a distortion. Yes, it simplifies the process of assessing and paying income tax. But on the other side of the ledger, it rewards rich owner-occupants at the cost of others; it implies that other (distortionary) tax rates have to be higher than they would otherwise have been; it may well contribute to a shortage of capital in other, higher-yielding uses; and the discounted value of this presumably long-lasting fiscal privilege simply pumps up house prices. Precious few features of most countries’ taxation systems are more unfair and more inefficient than exempting imputed rent from tax.
In some jurisdictions (unlike Australia, Canada, and Sweden, where no death duties are levied on the estate a deceased person bequeaths to heirs), house ownership may occasionally reap yet a further privilege: a favorable rate of tax applied to bequests of real estate.
11.4.5 Capital Market Imperfections

Perhaps the most insidious imperfection is due to the first law of credit markets: that you can get a loan when you can prove you don’t need one. It is humane and just that people cannot sell themselves into slavery. But that makes human capital unacceptable as collateral. And mere promises to pay, IOUs, are just cheap talk; they carry neither commitment nor conviction. A young self-styled entrepreneur with a new idea will be shunned by most potential investors, who fear that the granting of a loan may take the entrepreneur onto the next plane abroad. Asymmetric information is often key here. The mere request for a loan is a signal of low quality, an indication that one couldn’t borrow from another source. Inventories, plant, and machinery are tangible assets that a creditor can seize in extremis. But often they are specific to particular products. If so, the adverse turn for the industry that may have pushed the borrower into distress could trash their secondhand value. That leaves the title deeds of premises—buildings—of a firm, if it owns them, or of the residence of a household with a mortgage. Little wonder, then, that lenders are biased toward lending for real estate, all the more so as and where bank branches’ familiarity and links with their local community loosen. Capital market imperfections also take the form of limits on the volume of any credit that may be provided to a prospective borrower, as well as a premium charged on the interest. The notion of a unique value of r, available to all borrowers and independent of the size or terms of the loan, will no longer hold. The limits and the premiums do not just differ between borrowers. They can also change. Legislators may push for cheaper loans for poorer home buyers, as Mian and Sufi (2009) and Acharya et al. (2011) argued had happened in the United States.
Central banks and regulatory agencies may urge relaxation when macroeconomic conditions and property prices look relatively weak. Default risk premiums may well fall in better times, and an unwise lender might even attribute falling delinquency rates to its own good judgment. Worst of all, unseen by its supervisors, a financial institution approaching insolvency may be tempted to gamble more, on the principle that a successful bet could save the firm, while a failing one would merely increase the downside risk, and the resulting losses, that were passed on to others. From the borrower’s perspective, a sudden loosening of credit limits from a lender could appear to offer a now-or-never chance to borrow that might soon disappear. House prices could surge, along with the volume of fresh credit, without much, if any, change in nominal policy rates or in current and expected future real interest rates. The close, if somewhat intricate, statistical link between house prices and credit was first
demonstrated by Hendry (1984). It also forms the basis of the proposals of authors who, like Goodhart (2009) and Gersbach and Rochet (2014), argue that conventional models of monetary policy were mistaken to eliminate credit as well as monetary aggregates and that safeguarding financial stability calls for macroprudential instruments tailored to phenomena such as abnormal movements in the ratio of credit to GDP. Capital market imperfections may be one reason house price changes can affect consumption. The size of these changes in practice has been the subject of dispute. Campbell and Cocco (2004) find quite a robust if modest positive effect in UK data; Poterba (2000) shows much the same for the consumption impact of equity market value changes. Iacoviello and Neri (2010) offer illuminating econometric results on housing-money-spending relationships as well. Mortgage equity withdrawal in the United States grew alarmingly in the early 2000s, and Greenspan and Kennedy (2007) attribute much of the jump in consumption to this response to rising property valuations. Part of the challenge here is to see not so much which of the two variables, house price changes and consumption changes, causes which, but rather the extent to which both might be independent consequences of something else. Changes in actual or expected income, taxation, credit availability, or interest rates are all possible candidates here. Muellbauer (2007) offers a powerful case for looking at the data that way and finds little direct effect, except to some degree in the United States. In other work, Muellbauer, St-Amant, and Williams (2015) find no direct effect in Canada. The simple adage “You have to live somewhere” may justify some cynicism about the idea of people wishing to take advantage of a rise in house prices to spend more on consumption.
11.4.6 Imperfect Competition

Imperfect competition on the part of landlords would tend to raise rental rates above what is implied by equation 3. Some tenants would very probably be overcharged. Any element of monopoly power would usually have this effect, at least in the absence of complete price discrimination (where only intramarginal prices would go up). Typical tenants might be perceived as desperate, with a low elasticity of demand. A likely indicator of monopoly markup in those circumstances would be evidence of a large pool of unlet properties. Discrimination among tenants could lead to multiple prices. One form this might take is a two-price equilibrium. Suppose information about rents set by other lessors were known to some tenants and not to others. Informed tenants would only take bargains. The uninformed could tend to accept the first deal offered to them and might be fortunate to enter a bargain tenancy. But they might be unlucky and accept a rental offer at a high price. That would be a rip-off. If a landlord is shrewd under such circumstances, he or she would reason that charging something below the standard rip-off price but above the bargain price would be foolish; the landlord would gain no informed clients and merely fail to exploit the unlucky uninformed tenants to the maximum. Ideas of this kind are based on a model by Salop and Stiglitz
(1977). They constitute the only plausible potential argument in favor of maximum rent controls, which, given competition and information to all parties, would normally be either pointless if the constraint did not bind or damaging if it did. There could be imperfect competition among builders. The supply curve (equation 1) would then be distorted by a markup. The result would be higher prices for houses and a smaller long-term volume—much like what would happen under restrictive planning or zoning regulations that impeded construction. Those buying houses or negotiating with builders have more incentive to check out prices from other suppliers than tenants might, but Salop-Stiglitz rip-offs could conceivably occur here, too.11 The existence of collusive tendering by construction firms is not unheard of; that, too, would make for dearer and fewer buildings in the long run. Since a building takes time to complete and bills for labor and materials have to be paid before the constructing firm achieves its sale, trade credit is required. Smaller builders are particularly frightened of the possibility that the real estate market might crash in the meantime, probably when credit conditions have been tightened; that not infrequently leads them to bankruptcy. A form of hysteresis might ensue: there could be fewer builders for a protracted period, with higher prices and less output than otherwise, if the industry is characterized by a set of regional Cournot oligopolies and exhuming or replacing the dead firms took time.
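The shrewd landlord's reasoning in the two-price story can be sketched numerically. All numbers here are hypothetical illustrations, not taken from the Salop-Stiglitz model itself: a "bargain" rent of 800, a "rip-off" rent of 1,200, and a landlord facing 60 informed and 40 uninformed prospective tenants:

```python
# A numerical sketch of the Salop-Stiglitz two-price logic (hypothetical
# numbers). Informed tenants rent only at the bargain rent; uninformed
# tenants accept whatever rent they are first quoted.
BARGAIN, RIPOFF = 800, 1200    # monthly rents (illustrative)
INFORMED, UNINFORMED = 60, 40  # prospective tenants of each type

def revenue(rent):
    """Revenue from posting a single rent to this pool of tenants."""
    if rent <= BARGAIN:
        return rent * (INFORMED + UNINFORMED)  # everyone accepts
    return rent * UNINFORMED                   # only the uninformed accept

# Any rent strictly between the two is dominated by the rip-off rent:
# it loses every informed tenant without fully exploiting the uninformed.
for middling in (900, 1000, 1100):
    assert revenue(middling) < revenue(RIPOFF)
```

With fixed costs and landlord heterogeneity added, bargain and rip-off rents can coexist in equilibrium, which is the two-price outcome described in the text.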
11.4.7 Myopia and Bubbles

One difficulty with the analysis of boom and bust in section 11.2 is that it turned on two surprise events: first a fall in r, which people believed would be permanent, and then its unexpected restoration a while later. Is that consistent with rational expectations? If not, is there something else that makes it plausible? Rational expectations need not prove correct. Forecasting mistakes are always possible and indeed inevitable. All rational expectations would imply is that forecasting errors should tend to cancel out on the average—eventually. There is nothing to prevent an actual, finite sequence of shocks pushing a variable subject to forecasting, such as r or y or Q, in the same direction for quite a while. And if one of these variables keeps moving in the same direction for a nontrivial interval, and diverging from its presumed long-run equilibrium path by what appear to be growing amounts, that is what a bubble will resemble. It was once thought that rational expectations implied that an asset price would have to follow a random walk, with no tendency for systematic movement in one direction or another; the glaring falsehood of that view in the context of the real estate market is already evident from our discussion of the saddle path guiding both Q and H in opposite directions, drifting toward a steady state in the absence of disturbances. It is at least logically possible that Q and H move, fitfully perhaps, in the same direction, say upward, for a while. There can be two reasons this could happen. First, it could be that there is a series of several shocks, each of which lifts agents’ beliefs in the steady-state value of H. This is perfectly possible, though increasingly unlikely as the number of
such shocks expands. Every time the perceived long-run value of H goes up, so must Q, in order to provide the necessary incentive for the additional dwellings to get built. The second possible explanation for this spate of positive correlations between changes in Q and H is the destabilizing positive link between Q and its expected near-future growth rate that was displayed by equation 3. Expectations that Q will rise make the asset more attractive. This cannot last forever, as Q will be driven ever farther from its fundamental value. The process must stop sometime, and standard theorizing suggests that if people know this, it cannot keep happening. Yet there are ingenious models where it can certainly happen for a while. One subtle one was presented by Abreu and Brunnermeier (2003); another, by Doblas-Madrid (2012), refined the story to argue that asset-market participants receive different signals of what is happening in the market at different times. If you get a signal that the market is “hot” and believe you are among the first to receive it, for example, your best strategy will be to hold—and indeed probably buy more of—the appreciating asset. You take advantage of the capital gains that you anticipate, as you expect others in the market will be attracted into it later on, when you can probably sell your holdings at a healthy profit. If such beliefs are widespread, frenzies flourish, and the housing markets can swing, suddenly and painfully, from feast to famine. What does evidence tell us? There is a substantial body of empirical evidence against rational expectations, for example, in the case of inflation expectations.12 Gelain and Lansing (2014) are among several to show that moving-average expectations, or semirational expectations, fit the facts of house price dynamics a good deal better than rational expectations. And there is more.
Breaks in series for data do certainly occur; data are often mismeasured, particularly in the less recent past. Those observations may justify “discounted least squares learning,” which entails placing more emphasis on recent past data and smaller and smaller weights on earlier ones.13 The effect of that is to bias memories and create what might be called exponential amnesia about the more distant past. In the context of our example—three years of low r—any residual belief in the possibility of a return to high r would presumably fade somewhat. People could begin to think, “Maybe it really is different this time.” Present bias and myopia are well-attested and common human psychological characteristics. Behavioral economists regularly confirm their presence in experiments.14 They are often associated with a pattern of discounting, when looking forward, that overweights the near future as against the further future. Hyperbolic as opposed to exponential discounting is an example.15 And when looking at the past, very recent experiences are overemphasized; people are all too apt to agree with Henry Ford’s dreadful dictum, “History is bunk.” Myopic behavior may arise in lending firms as well. For agency-theoretic reasons,16 bank staff members who issue loans are often rewarded, in part, according to the volume and value of the lending they do. The fact that some of those loans might years later prove delinquent is treated as a minor, secondary issue. So when the bank seeks to lengthen its loan book, its internal remuneration structures kick in to make that happen. The personnel themselves will focus on achieving their sales targets. There are no quick
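Discounted least squares can be illustrated in the simplest case, fitting a constant (the agent's perceived mean of r) with geometrically declining weights. The discount factor of 0.9 and the rate series are illustrative assumptions, not taken from the chapter:

```python
# "Discounted least squares" learning: estimate the perceived mean of a
# series by least squares with geometrically declining weights, so recent
# observations dominate and the distant past fades ("exponential amnesia").
def dls_mean(data, beta=0.9):
    """Weighted least-squares fit of a constant; beta < 1 discounts older data."""
    # Oldest observation gets weight beta**(n-1); the newest gets weight 1.
    weights = [beta ** age for age in range(len(data) - 1, -1, -1)]
    return sum(w * x for w, x in zip(weights, data)) / sum(weights)

# Twenty years of high real rates followed by three years of low ones:
rates = [5.0] * 20 + [1.0] * 3
print(round(dls_mean(rates, beta=0.9), 2))  # 3.81, versus a simple mean of about 4.48
```

After only three low observations, the discounted estimate has already drifted well below the simple average of the full history; the smaller beta is, the faster any residual belief in a return to high r fades.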
rewards for caution. And top-management pay, linked as it is so often through bonuses to the price of the company stock and colored by the need to keep satisfying institutional shareholders fixated on current quarterly results and cross-firm comparisons, will, as Thanassoulis (2012) shows, tend to have similar effects on decisions at the apex of the bank’s hierarchy.17 So those running the banking firm may well have perilously short horizons. Short horizons, amnesia, distorted contracts, imitation, willingness to think you have inside information and superior insights into the crystal ball—all these linked phenomena may lead asset prices astray from what their path ought to be. When this occurs, financial corrections become inevitable, and the longer the frenzy lasts, the bigger the crash. We have seen earlier that the overshooting phenomenon is liable to arise in real estate markets with any departure from the steady state, and overreaction is typical in the crash, too.
11.5 A Few Lessons from Recent History and Implications for Policy Few decades escape financial crises of some kind or other. The nineteenth century is sometimes looked back on as a golden age of sound money and global development. But the major banking crises of 1825, 1857, and 1890 scarred national income, destroyed fortunes, and threw millions out of work in various parts of the world. There were several lesser crises, too, and real estate was usually involved. The twentieth century’s greatest financial mishap was the Great Depression of 1929–1933, when the United States lost a third of its banks and the equivalent of a full year’s GDP in the turmoil. Construction and land prices tumbled there, too, after a period of overinvestment of all forms, but on that occasion, it was the equity-market collapse that took precedence in time and magnitude; and the interwar years witnessed numerous currency crises. The last third of that century was beset by several grave banking crises, in most of which real estate problems—sometimes linked also to difficulties with oil investments, interest-rate increases, foreign exchange speculation, or fraud—featured prominently. These crises particularly afflicted the United States, Italy, Scandinavia, and much of East Asia and Latin America.18 By far the worst erupted in Japan in 1990–1991, when real estate values and equities, it is calculated, fell by about 75 percent. The Japanese economy had grown by an average of 8 percent per year in the four previous decades; from then until 2015, annual growth averaged barely 1 percent. So hindsight was to reveal that 1990–1991 would prove not just the turning point from boom to bust but also the most dramatic of long-term watersheds. The world’s fastest-growing hare had morphed into its sleepiest tortoise. The present century has witnessed some damaging vicissitudes, too. The end in early 2002 of Argentina’s ill-fated experiment with a currency board was the explosive
Real Estate, Construction, Money, Credit, and Finance 359 culmination of a gradually deteriorating foreign exchange crisis.19 Excessive government spending, much of it regional, and the budget deficits it entailed served to keep lowering the country’s external competitiveness. Bank lending to the private sector was also squeezed, so on this occasion, turbulence in the real estate and commercial property markets was merely a limited and passive reaction to pressures emanating from elsewhere. The global financial crisis that erupted violently with the failure of Lehman Brothers in 2008 could hardly have been more different. US real estate was at center stage here. There were many contributory causes. A long history of officially sanctioned cheap housing loans for poorer citizens through the two seminationalized behemoths Fannie Mae and Freddie Mac led to growing risks of eventual delinquency. The Glass-Steagall Act, which had kept investment banking apart from retail banking, was repealed in 1999. The invention of mortgage-backed securities conned everyone into believing (or pretending) that default risk had vanished. Furthermore, from the later 1970s, the new intellectual climate taught that financial-market participants knew best, treated Keynesian demand management and regulation as misconceived obstacles to progress, and regarded financial-market bubbles as such improbable mysteries that any price corrections were best dealt with only after the event. Super-low policy rates after the terrorist attacks on September 11, 2001, were ill-advisedly retained for three long years, adding some 800 basis-point-years of inflammatory gasoline to a bonfire now burgeoning with the dry tinder of rotting fancy derivatives and property transactions. New accounting conventions allowed firms to book conjectured future profits as solid profits now and to reward senior staff with excessive bonuses far too early.
And on top of these came, of course, the final key ingredient of myopia, explored in section 11.4.7 above. There are inevitably some disagreements about which of these influences mattered most. While Acharya et al. (2011), Garmaise (2015), and Mian and Sufi (2009), for example, stress the role of cheap loans to low-income borrowers, Avery and Brevoort (2015) contest this. Keys et al. (2010) assess the blame that can be attached to securitization, and Dokko et al. (2011) assess it for monetary policy. There is also the role of the world savings glut, emanating from China’s rapid growth, in depressing real interest rates, an effect discussed recently by Bernanke (2015). It is plausible that all had some effect; one must beware of the fallacy of unicausality. Davies (2010) wittily scrutinizes the list of culprits as in an Agatha Christie whodunit. Among the other interesting books on the subject, two stand out for the way they compare it with earlier disasters. Eichengreen (2015) contrasts it with the Great Depression, and Sheng (2009) looks at the East Asian crisis a decade earlier. The third major financial disaster of the twenty-first century has been the euro crisis in southern Europe. This combines Argentina’s problems of national fiscal profligacy and the prison of an increasingly inappropriate fixed exchange rate with the US trap of a protracted period of misguided policy toward both real estate transactions and interest rates.
Greece, Ireland, Portugal, and Spain all experienced a big fall in nominal interest rates when their national currencies were scrapped. These four countries had a legacy of inflation after 1985 well above the German annual average of 2.5 percent.20 The European Central Bank and its new currency were to inherit the German monetary policy stance and framework with minimal change. Inflation and inflation expectations would now be low; added to German-style monetary targets was an annual inflation target of close to, but below, 2 percent.21 The new currency made its debut as a numeraire at the turn of the century, with notes and coins appearing in 2002. Like Argentina, these countries faced an immutably frozen nominal exchange rate against one part of the world and whatever the fates threw at them in terms of other currencies. As Argentina had witnessed some eight years earlier, inflation, inflation expectations, and nominal interest rates fell like a stone, from the much higher rates that had so often prevailed before (in 1985–1998, central bank policy rates averaged 11.1 percent for Portugal and 15.5 percent for Greece) right down to levels close to those of their new large sound-currency partner. The four countries went on to have very different fiscal and trade balance experiences. Ireland had surpluses. Portugal did not but had budgetary cuts imposed on it. What united them was a strong real estate boom. Short-term mortgage-servicing costs fell sharply relative to incomes in all of them, tempting borrowers to imagine that real amortization payments had fallen. The truth that they were merely being rephased and shunted into the future probably escaped them. Mortgage lenders’ sales forces, scrambling for commissions, would be in no rush to disabuse them. So dwelling prices were bid up sharply. The construction boom gathered pace.
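The rephasing point can be made concrete with a stylized level-payment mortgage. All the figures below (a 25-year loan of 100, a common real interest rate of 3 percent, and inflation of 8 percent before versus 1 percent after the fall in nominal rates) are invented for illustration: the lower nominal rate cuts the initial real payment sharply, but later real payments are higher, and the present value of the real payment stream at the common real rate is unchanged.

```python
# Level nominal annuity payment on a loan, and the real value of that payment over time.
def level_payment(principal, i_nominal, years):
    return principal * i_nominal / (1 - (1 + i_nominal) ** -years)

def real_payments(principal, real_rate, inflation, years):
    i = (1 + real_rate) * (1 + inflation) - 1          # Fisher relation
    pay = level_payment(principal, i, years)
    return [pay / (1 + inflation) ** t for t in range(1, years + 1)]

high = real_payments(100, 0.03, 0.08, 25)  # pre-euro: high inflation, high nominal rate
low  = real_payments(100, 0.03, 0.01, 25)  # post-euro: low inflation, low nominal rate

pv = lambda path: sum(x / 1.03 ** t for t, x in enumerate(path, start=1))

# Initial real payment is far lower after the rate fall ...
print(round(high[0], 2), round(low[0], 2))
# ... but later real payments are higher: the burden is rephased, not reduced.
print(round(high[-1], 2), round(low[-1], 2))
# Present value at the common real rate is 100 either way.
print(round(pv(high), 6), round(pv(low), 6))
```

The first-year payment-to-principal ratio nearly halves, which is exactly the short-term “affordability” improvement that tempted borrowers; the unchanged present value is the truth that escaped them.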
The Bank of Spain wisely insisted that banks make substantial provisions against their risky real estate loans, despite angry opposition from politicians, builders, banks, and the media, but could not touch politician-controlled cajas, which escaped and carried on lending merrily. Ireland’s big three banks lent vast sums, mostly on real estate. Dublin, Athens, and Lisbon all drew large inflows of workers from their rural hinterlands and from overseas. Nominal interest rates fell farthest in Greece, generating the biggest drops in nominal mortgage-servicing costs; and it was in Greece, where fiscal expansion, tax evasion, and “creative” national accounting were most pronounced, that house prices jumped most. Interestingly, house prices across the European Union during the early 2000s rose least in Germany, where the average level of nominal interest rates remained little changed, and they rose most where they had the farthest to fall. A combination of inflation illusion and myopic overattention to near-future debt servicing costs may well explain this. Greece’s external competitiveness fell by some 21 percent against its euro partners in the decade up to 2010. The option to devalue and implement monetary policy tailored to domestic conditions had often helped Greece to stabilize its macroeconomy in the past. But the unfortunate euro lock-in prevented those dodges. Although an effective debt write-down was later agreed on in the case of private institutions, the French and German banks that had lent most to Greece’s government and private sector had been largely protected as a result of quiet pressure on the EU authorities. So much of the
burden of adjustment fell on Greece. The three smaller countries requested and received EU bailouts in return for promises of fiscal austerity. In Ireland’s case, this was to cope with its government’s budgetary costs of rescuing its broken banks in the wake of the real estate crash; for the others, the banking repercussions of the housing crash were a major component, too. Holes in banks’ balance sheets, often resulting from entanglements in local property ventures or in Greece’s disturbed finances, spread to two other euro countries, Cyprus and Slovenia. Countries outside the Eurozone also suffered from the global financial and euro crises. Hungary and Ukraine were hit especially hard in 2009–2010, aggravated by currency mismatch. Their currencies depreciated against the euro in which many private- and official-sector debts—partly real estate related—had unfortunately become denominated. Yet again, the illusion of judging loan costs by initial interest charges must have tempted myopic and ill-informed borrowers. What, then, are the main policy lessons from history? Jordà, Schularick, and Taylor (2015) scan for real estate market bubbles in almost a century and a half of data for seventeen countries. Bubbles are found to be common. Often, they prove innocuous. But sometimes they can be really perilous. What makes them dangerous, these authors find, is the extent of credit provision. This conclusion reinforces the view that banks need to be restrained from lending too much and too incautiously, particularly against property developments.
New macroprudential instruments have been devised.22 These include ceilings on loan-to-value ratios (LTVRs) for lending on dwellings, maximum mortgage durations (MMDs), and refined minimum capital ratios (CRs) on banks.23 Many countries are implementing substantial increases and modifications in CRs, often as a consequence of major initiatives, inter alia, spearheaded or recommended by the Financial Stability Board, set up in 2009 at the Bank for International Settlements in Basel, and the United Kingdom’s Independent Commission on Banking, which reported in 2011.24 LTVRs are now set, and sometimes tightened or relaxed, in several countries. Canada is among those that set MMD limits. Difficulties with the euro have not enhanced the interest in currency unions elsewhere in the world. Yet the three Baltic countries, Estonia, Latvia, and Lithuania, joined the Eurozone after the first signs of serious trouble emerged in 2010. And a prominent traditional devotee of monetary targets and freely floating exchange rates, Switzerland, decided in 2011 to cap its franc against the euro. Later, however, in 2015, it opted to abandon the cap. Recent economic research has come up with the ingenious idea of fiscal devaluation, which can mimic the effects of a change in the exchange rate for a country locked into a fixed exchange-rate system, by modifying rates of tax on payrolls and value added; Farhi, Gopinath, and Itskhoki (2014) is a leading example. So there might be ways of saving hard pegs from the fundamental disequilibria that can arise. The list of free (or at least lightly managed) floaters includes countries that target inflation: Australia, Brazil, Britain, Canada, Chile, Colombia, the Czech Republic, Egypt, Ghana, Hungary, Indonesia, Israel, Mexico, New Zealand, Norway, the Philippines, Poland, Serbia, South Africa, South Korea, Sweden, Thailand, and Turkey. The list now appears to embrace the United States. With some exceptions (such as Spain), both
inflation targeters and countries with fixed exchange rates are now criticized for having paid too little attention to asset price bubbles25 and to risks, so often related to real estate loans, building up in the balance sheets of banks during the late 1990s and early 2000s. Hence the growing recent moves toward (a) sophisticated systems of risk-based supervision of banks, especially those deemed systemic, (b) mandatory provisioning, and (c) the use of the macroprudential instruments of CRs, MMDs, and LTVRs. All of these are primarily designed to help keep potentially explosive real estate markets in check. Land, unlike equities, is not traded much internationally. So any bubbles there will be a largely national phenomenon calling for national intervention. Pricking any bubbles in the world’s closely integrated equity markets, by contrast, would call for internationally coordinated action, led by the United States. A broader question concerns the taxation of banking firms. Most countries (outside the United States) levy VAT or a similar national sales tax on most services. In many cases, rates are in the range of 18 to 20 percent. But banks are to a large extent exempt. This suggests there is a distortion, since the price ratio of a pair of products facing consumers should ideally equal the ratio of their marginal costs. Crawford, Keen, and Smith (2010) calculated the annual revenue loss from the privilege of VAT exemption at some £12 billion, but taxes levied on banks subsequently will have reduced this large figure. One method of taxing financial firms would be the financial activities tax (FAT). It would be levied essentially on their payroll and profits. So it could resemble the missing value-added tax. Another is the financial transactions tax (FTT), which would be based on the value of all the transactions they undertook, for example, all the purchases and sales of foreign exchange, bonds, shares, and derivatives.
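The contrast between the two bases can be illustrated with a toy calculation. Every number below (payroll, profits, turnover, and both tax rates) is invented: a FAT at something like a VAT rate applies to value added, whereas an FTT at a few basis points applies to gross turnover, which for a trading-heavy bank is vastly larger, so the two can raise sums of a similar order.

```python
# Toy comparison of the two bank-tax bases discussed above.
# All numbers are invented for illustration (units: millions per year).
payroll, profits = 40.0, 25.0        # FAT base: payroll plus profits, a proxy for value added
turnover = 50_000.0                  # FTT base: gross value of trades undertaken

fat_rate = 0.19                      # roughly a typical VAT rate
ftt_rate = 0.0002                    # a few basis points, as FTT proposals envisage

fat_revenue = fat_rate * (payroll + profits)   # taxes value added, like the missing VAT
ftt_revenue = ftt_rate * turnover              # taxes transactions, however thin the margin

print(fat_revenue, ftt_revenue)
```

Because the FTT base is gross turnover rather than a margin, it falls most heavily on high-volume, low-margin trading of the kind concentrated in the City of London, which helps explain the preference there for the FAT.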
Under pressure from France, the European Union seeks to implement an FTT throughout the union, including members outside the Eurozone. Depending partly on some details of how it might work, this could involve a large net transfer from the United Kingdom to other member countries; firms based in the City of London are world leaders in conducting many of these transactions. Unless the tax were introduced at similar rates elsewhere, in New York and the Far East, for example, such activities would migrate out of Europe. These are the reasons London has been pressing for the rival tax, the FAT, and for international agreement on its introduction globally.26 Taxing banks, land, and the real estate sector more evenly would shift resources into other sectors of the economy and generate long-term gains to output and potential welfare. Yet doing so too quickly could provoke a financial and real estate crash. Extricating economies from this wasteful distortion—and others related to it, such as the nonneutral taxation of income from equity and debt—could only be achieved safely by proceeding very gradually indeed. Otherwise, Q could collapse right away, with alarming short-run consequences. Macroprudential instruments are a valuable addition to the policymaker’s armory. But it is interesting to note that their use could complicate and even blunt the operation of monetary policy. Among all the components of national income, construction is highly interest sensitive. This is true of other industries related to real estate transactions, but construction is the biggest. Unexpected policy rate cuts invariably strengthen housing output as compared with what would have happened otherwise;
surprise tightening in policy will squeeze it. The effects on Q(t) will be very quick; those on Ḣ(t) will be felt more gradually but nonetheless with major consequences for short-run aggregate demand. On the other hand, if the economy is in the doldrums and the housing markets are slack, relaxation in both monetary and macroprudential instruments will be mutually reinforcing. An overheated economy and an above-trend value of Q would call for both instruments to be set more restrictively, again resulting in a more powerful reaction in the required direction. But there will also be occasions when the housing market is hot while other sectors are depressed, or the other way around. Here the problem will be that a tightening of one instrument and a relaxation of the other will have an uncertain net effect on construction. And that is surely bound to impair and slow the transmission mechanism of monetary policy. The fact that the construction cycle and the GDP cycle fail to coincide suggests that this issue is not trivial. An interesting alternative would be to refine the definition of inflation that central banks react to and target. This idea has been explored and advocated recently by Siklos (2014). He agrees with Cecchetti et al. (2000) that central banks may well have been mistaken in ignoring asset prices completely in past decades. He argues that a combination of price level targeting and inflation targeting, accompanied by clear communications of the reasons for policy revisions, could achieve the desired effect of meeting financial stability and monetary stability together. The list of possible macroprudential instruments is large. But there are other arguments for suggesting that their implementation and use may be questionable. Outright bans on loans exceeding some particular LTVR are crude and illiberal. So are prohibitions on MMDs that overstep some boundary.
Yet there could be circumstances in which such loans would carry little risk. What about a prospective borrower with high net wealth tied up in an obviously profitable private company or someone with a perfect record of honoring debts? In the MMD case, what about a youngish borrower with outstandingly good career prospects? If there are possible externality effects associated with large long-term loans, why not a marginal tax to internalize that externality? Taxes are often preferable to outright bans, and they can bring much-needed additional revenue to an often cash-strapped government. This leads to a wider point. Suppose finance and housing are both to be subject, given time, to a gradual reduction in the seemingly wasteful privileges and exemptions they often currently enjoy. Why, then, not allow the possibility of small temporary departures in the rates of tax levied on them, up or down, when real estate prices really appear to be perilously off track?
Notes 1. Available evidence is broadly consistent with the idea of b equaling about one-third. British data on explicit flow housing costs, as a fraction of household disposable income, are available for London covering 2000–2001 to 2010–2011. They vary by tenure. For
private rented accommodation, the average rose to 29 percent in 2009–2010 and fell back to 27 percent in 2010–2011. The costs of socially rented accommodation averaged 24 percent in both years. See UK Office for National Statistics (2013). Eurostat (2016 and earlier) provides somewhat different data from the EU-SILC survey. This gives the proportion of countries’ populations living in a household where total housing costs (net of allowances) exceed 40 percent of current disposable income and expresses those proportions by income quintile. In 2015, in the European Union as a whole, they varied from 1.3 percent for the richest quintile to 36.2 percent for the poorest (in Greece, the figure for the poorest rose to 96 percent). In aggregate, just more than 12 percent of the total population dwelled in households that spent more than two-fifths of income on housing. 2. With variants, the model presented here has been a staple of many advanced macroeconomics courses for three decades. Much of it is apparent in the outstanding early paper by Poterba (1984). It is also related to the life-cycle model of Dougherty and Van Order (1982) and to celebrated later work by Muellbauer and Murphy (1997) and by Meen (2002). 3. Developed first by Lucas (1988). 4. See, for example, Romer 1990. 5. As expounded, for example, by Woodford (2003) and in the “superinertial” setting proposed by Amato and Laubach (2004). 6. King 2016, particularly 317–328. 7. Such as in the setup pioneered by Klein 1971. 8. See, for example, Heffernan 2002 for detailed studies of the UK financial product markets and Mahadeva and Sinclair 2005 for a cross-country study. 9.
Harris and Todaro (1970) constructed the first major economic model of rural-to-urban migration, while Lewis (1954) offered the seminal theory of how aggregate economic growth could be spurred by transferring labor from family agriculture, where labor might earn its average product, to manufacturing, where pay would be related to its marginal product. Matsuyama (1992) showed how a simple time lag in migration could easily generate a fascinating diversity of dynamic adjustment processes, some of them cyclical. Together, these authors’ work provides an important foundation for the boom-bust construction-financial crises so depressingly common around the world in recent years. 10. A recent paper by Handbury and Weinstein (2015) probes detailed US evidence and concludes that goods are inclined to be somewhat less expensive on average in cities than in other parts of that country. 11. For example, for busy, rich, and poorly informed foreigners. 12. See, for example, several contributions to Sinclair 2010. 13. As deployed, for example, by Cho, Williams, and Sargent (2002) and explored by Evans and Honkapohja (2001). 14. Camerer 2003 covers many topics in experimental and behavioral economics, particularly in settings where individuals interact; and Frederick et al. 2002 provides a useful survey on intertemporal aspects. 15. First proposed in Ainslie 1975. 16. Individual members of the loan sales force cannot be monitored directly and, so their boss may fear, could slack off if paid a fixed salary. Commissions-based pay keeps them on their toes. 17. See also Sinclair, Spier, and Skinner 2009. 18. Many private banks, large and small, collapsed in these episodes. But the greatest losses before the 2008 crisis, by far, were rung up by a state-owned bank, Credit Lyonnais. This
required four bailouts. The soft budget constraint, fraud, political interference, and hubris on a Napoleonic scale all contributed to that disaster. 19. Powell 2002 and de la Torre, Yeyati, and Schmukler 2003 provide detailed analyses. 20. On a GDP deflator basis, according to IMF International Financial Statistics (IFS) data. IFS data for the other countries are Greece 15.5 percent, Ireland 3.7 percent, Portugal 11.1 percent, and Spain 5.5 percent. 21. The gap between average central bank policy rates and average GDP deflator inflation rates was remarkably uniform for this period across EU countries, consistent with the principle discussed in section 11.3 above that monetary policy can have next to no effect on real interest rates in the long run. 22. Dwyer, in chapter 18 of this book, gives an extended analysis of this important area. 23. Minimum CRs applied to banks can be amplified by countercyclical capital buffers, designed to keep the ratio of mortgage credit to national income close to trend, and ceilings on loans may be related to income, as well as to property values, as another way of limiting the risk of borrower delinquency. Akinci and Olmstead-Rumsey (2015) survey macroprudential instruments in general, and Galati and Moessner (2013) provide an excellent literature review. Farhi and Werning (2016) and Collard et al. (2017) probe an important issue: the relationship between optimal monetary and macroprudential policies. 24. This commission, named after its chair, Sir John Vickers, also pressed for Chinese walls that would insulate a bank’s retail activities from its investment activities and thus protect taxpayers, who might otherwise have to fund a costly bailout to shield its deposits from losses on the latter. See Vickers 2011. 25. Cecchetti et al. (2000) were early advocates of the need for central bankers to consider taking action against suspected asset-market bubbles.
At the Bank for International Settlements, Borio (Borio and Zhu 2008, for example) is among those who have worked tirelessly to convince the central banking community that such arguments cannot be dismissed. 26. Sinclair (2016) goes into this dispute in more detail and looks at another point of unfortunate EU-UK contention, about bonuses; see also Thanassoulis and Tanaka (2016).
References Abreu, Dilip, and Markus Brunnermeier. 2003. “Bubbles and Crashes.” Econometrica 71: 173–204. Acharya, Viral, Matthew Richardson, Stijn van Nieuwerburgh, and Lawrence White. 2011. Guaranteed to Fail: Fannie Mae, Freddie Mac, and the Debacle of Mortgage Finance. Princeton: Princeton University Press. Ahlfeldt, Gabriel, Stephen Redding, Daniel Sturm, and Nikolaus Wolf. 2015. “The Economics of Density: Evidence from the Berlin Wall.” Econometrica 83: 2127–2189. Ainslie, George. 1975. “Specious Reward: A Behavioral Theory of Impulsiveness and Impulse Control.” Psychological Bulletin 82: 463–496. Akinci, Ozge, and Jane Olmstead-Rumsey. 2015. “How Effective Are Macroprudential Policies? An Empirical Investigation.” Federal Reserve Board of Governors international finance discussion paper 1136. Amato, Jeffrey, and Thomas Laubach. 2004. “Implications of Habit Formation for Optimal Monetary Policy.” Journal of Monetary Economics 51: 305–325.
Avery, Robert, and Kenneth Brevoort. 2015. “The Subprime Crisis: Is Government Housing Policy to Blame?” Review of Economics and Statistics 97: 939–950. Bernanke, Ben. 2015. “Why Are Interest Rates So Low, Part 3: The Global Savings Glut.” Blog, Brookings Institution, April 1, 2015. http://www.brookings.edu/blogs/ben-bernanke/posts/2015/03/30-why-interest-rates-so-low. Borio, Claudio, and Haibin Zhu. 2008. “Capital Regulation, Risk-Taking and Monetary Policy: A Missing Link in the Transmission Mechanism.” BIS working paper 268. Camerer, Colin. 2003. Behavioral Game Theory: Experiments in Strategic Interaction. Princeton: Princeton University Press. Campbell, John, and Joao Cocco. 2004. How Do House Prices Affect Consumption? Evidence from Micro Data. Harvard Institute of Economic Research Discussion Paper 2015. Cecchetti, Steven, Hans Genberg, John Lipsky, and Sushil Wadhwani. 2000. Asset Prices and Central Bank Policy. Geneva: International Center for Monetary and Banking Studies. Cho, In-Koo, Noah Williams, and Thomas Sargent. 2002. “Escaping Nash Inflation.” Review of Economic Studies 69: 1–40. Collard, Fabrice, Harris Dellas, Behzad Diba, and Olivier Loisel. 2017. “Optimal Monetary and Prudential Policies.” American Economic Journal: Macroeconomics 9: 40–87. Cournot, Antoine Augustin. 1838. Recherches sur les principes mathématiques de la théorie des richesses. Paris: Librairie de l’Université Royale de France. Crawford, Ian, Michael Keen, and Steve Smith. 2010. “Value Added Tax and Excises.” In Dimensions of Tax Design, edited by James Mirrlees, vol. 1, 276–422. Oxford: Oxford University Press. Davies, Howard. 2010. The Financial Crisis: Who Is to Blame? Chichester: Wiley. De la Torre, Augusto, Eduardo Yeyati, and Sergio Schmukler. 2003. “Living and Dying with Hard Pegs: The Rise and Fall of Argentina’s Currency Board.” World Bank. http://doi.org/10.1596/1813-9450-2980. Doblas-Madrid, Antonio. 2012.
“A Robust Model of Bubbles with Multidimensional Uncertainty.” Econometrica 80: 1845–1893. Dokko, Jane, Brian Doyle, Michael Kiley, Jinill Kim, Shane Sherlund, Jae Sim, and Skander Van den Heuvel. 2011. “Monetary Policy and the Global Housing Bubble.” Economic Policy 66: 233–283. Dougherty, Ann, and Robert Van Order. 1982. “Inflation, Housing Costs and the Consumer Price Index.” American Economic Review 72: 154–164. Edgeworth, Francis Ysidro. 1888. “A Mathematical Theory of Banking.” Journal of the Royal Statistical Society 53: 113–127. Eichengreen, Barry. 2015. Hall of Mirrors. Oxford: Oxford University Press. Eurostat (European Statistical Office). 2016. “Housing Cost Overburden Rate by Income Quintile.” 162. Evans, George, and Seppo Honkapohja. 2001. Learning and Expectations in Macroeconomics. Princeton: Princeton University Press. Farhi, Emmanuel, Gita Gopinath, and Oleg Itskhoki. 2014. “Fiscal Devaluations.” Review of Economic Studies 81: 725–760. Farhi, Emmanuel, and Ivan Werning. 2016. “A Theory of Macroprudential Policies in the Presence of Nominal Rigidities.” Econometrica 84: 1645–1704. Fender, John. 2012. Monetary Policy. Chichester: John Wiley. Frederick, Shane, George Loewenstein, and Ted O’Donoghue. 2002. “Time Discounting and Time Preference: A Critical Review.” Journal of Economic Literature 40: 351–401.
Galati, Gabriele, and Richhild Moessner. 2013. “Macroprudential Policy—A Literature Review.” Journal of Economic Surveys 27: 846–878. Garmaise, Mark. 2015. “Borrower Misreporting and Loan Performance.” Journal of Finance 70: 449–484. Gelain, Paolo, and Kevin Lansing. 2014. “House Prices, Expectations and Time-Varying Fundamentals.” Journal of Empirical Finance 29: 3–25. Gersbach, Hans, and Jean-Charles Rochet. 2014. “Capital Regulation and Credit Fluctuations.” In Macroprudentialism, edited by Dirk Schoenmaker. VoxEU.org ebook. London: CEPR. Goodhart, Charles. 2009. “Procyclicality and Financial Regulation.” In Estabilidad financiera, 9–20. Madrid: Bank of Spain. Greenspan, Alan, and James Kennedy. 2007. Sources and Uses of Equity Extracted from Homes. Finance and Economics Discussion Series 2007-20. Washington, D.C.: Federal Reserve Board. Handbury, Jessie, and David Weinstein. 2015. “Goods Prices and Availability in Cities.” Review of Economic Studies 82: 258–296. Harris, John, and Michael Todaro. 1970. “Migration, Unemployment and Development: A Two Sector Analysis.” American Economic Review 60: 126–142. Heffernan, Shelagh. 2002. “How Do UK Financial Institutions Really Price Their Banking Products?” Journal of Banking and Finance 26: 1997–2006. Hendry, David. 1984. “Econometric Modelling of House Prices in the United Kingdom.” In Econometrics and Quantitative Economics, edited by David Hendry and Kenneth Wallis, 135–172. Oxford: Blackwell. Hicks, John. 1974. The Crisis in Keynesian Economics. New York: Basic Books. Iacoviello, Matteo, and Stefano Neri. 2010. “Housing Market Spillovers: Evidence from an Estimated DSGE Model.” American Economic Journal: Macroeconomics 2: 125–164. Jordà, Òscar, Moritz Schularick, and Alan Taylor. 2015. “Leveraged Bubbles.” Journal of Monetary Economics 76, supp.: S1–S20. Keys, Benjamin, Tanmoy Mukherjee, Amit Seru, and Vikrant Vig. 2010.
“Did Securitization Lead to Lax Screening? Evidence from Subprime Loans.” Quarterly Journal of Economics 125: 307–362. King, Mervyn. 2016. The End of Alchemy: Money, Banking and the Future of the Global Economy. London: Little, Brown. Klein, Michael. 1971. “A Theory of the Banking Firm.” Journal of Money, Credit and Banking 3: 205–218. Knoll, Katharina, Moritz Schularick, and Thomas Steger. 2017. “No Price Like Home: House Prices 1870–2012.” American Economic Review 107: 331–353. Leamer, Edward. 2007. “Housing IS the Business Cycle.” NBER working paper 13428. Lewis, W. Arthur. 1954. “Economic Development with Unlimited Supplies of Labour.” Manchester School 22: 139–191. Lucas, Robert. 1988. “On the Mechanics of Economic Development.” Journal of Monetary Economics 22: 3–42. Lucas, Robert. 2000. “Externalities and Cities.” Review of Economic Dynamics 4: 245–274. Mahadeva, Lavan, and Peter Sinclair. 2005. How Monetary Policy Works. London: Routledge Taylor and Francis. Matsuyama, Kiminori. 1992. “A Simple Model of Sectoral Adjustment.” Review of Economic Studies 59: 375–388.
368 Sinclair
Meen, Geoff. 2002. “The Time-Series Behavior of House Prices: A Transatlantic Divide?” Journal of Housing Economics 11: 1–23.
Mian, Atif, and Amir Sufi. 2009. “The Consequences of Mortgage Credit Expansion: Evidence from the US Mortgage Default Crisis.” Quarterly Journal of Economics 124: 1449–1496.
Miles, David. 2012. “Population Density, House Prices and Mortgage Design.” Scottish Journal of Political Economy 59: 444–466.
Miles, David. 2015. “Housing, Leverage and Stability.” Journal of Money, Credit and Banking 47: 19–36.
Muellbauer, John. 2007. “Housing, Credit and Consumer Expenditure.” In Housing, Housing Finance, and Monetary Policy, Jackson Hole Symposium, 267–334. Kansas City, Mo.: Federal Reserve Bank of Kansas City.
Muellbauer, John, and Anthony Murphy. 1997. “Booms and Busts in the UK Housing Market.” Economic Journal 107: 1701–1727.
Muellbauer, John, Pierre St-Amant, and David Williams. 2015. “Credit Conditions and Consumption, House Prices and Debt: What Makes Canada Different?” Bank of Canada discussion paper 2015-40.
Poterba, James. 1984. “Tax Subsidies to Owner Occupied Housing.” Quarterly Journal of Economics 99: 729–752.
Poterba, James. 2000. “Stock Market Wealth and Consumption.” Journal of Economic Perspectives 14: 99–118.
Powell, Andrew. 2002. “Argentina’s Avoidable Crisis: Bad Luck, Bad Economics, Bad Politics, Bad Advice.” Brookings Trade Forum: 1–58.
Romer, Paul. 1990. “Endogenous Technological Change.” Journal of Political Economy 98, supp.: S72–S101.
Salop, Steven, and Joseph Stiglitz. 1977. “Bargains and Ripoffs: A Model of Monopolistically Competitive Price Dispersion.” Review of Economic Studies 44: 493–510.
Sheng, Andrew. 2009. From Asian to Global Financial Crisis: An Asian Regulator’s View of Unfettered Finance in the 1990s and 2000s. Cambridge: Cambridge University Press.
Siklos, Pierre. 2014. “Communication Challenges for Central Banks: Evidence, Implications.” International Finance 17: 77–98.
Sinclair, Peter, ed.
2010. Inflation Expectations. London: Routledge Taylor and Francis.
Sinclair, Peter. 2016. “Financial Policy Challenges for the EU and the United Kingdom.” In Resilient Futures: Finance, 45–49. London: Industry and Parliament Trust.
Sinclair, Peter, Guy Spier, and Tom Skinner. 2009. “Bonuses and the Credit Crunch.” In Towards a New Framework for Financial Stability, edited by D. Mayes, R. Pringle, and M. Taylor, 837–854. London: Central Banking Publications.
Thanassoulis, John. 2012. “The Case for Intervening in Bankers’ Pay.” Journal of Finance 67: 849–895.
Thanassoulis, John, and Misa Tanaka. 2016. “Bankers’ Pay and Excessive Risk.” Bank of England Working Paper 558.
UK Office for National Statistics. 2013. “Percentage of Household Income Spent on Housing Costs in London by Tenure 2001–2011.” Ref. 001032, January 25.
Vickers, John. 2011. Independent Commission on Banking, Interim Report and Final Report. London: UK Treasury.
Von Thünen, Johann. 1826. Der isolierte Staat in Beziehung auf Landwirtschaft und Nationalökonomie. Berlin. Reprinted Düsseldorf: Wissenschaft und Finanzen, 1986.
Woodford, Michael. 2003. Interest and Prices: Foundations of a Theory of Monetary Policy. Princeton: Princeton University Press.
chapter 12
Inside the Bank Box
Evidence on Interest-Rate Pass-Through and Monetary Policy Transmission
Leonardo Gambacorta and Paul Mizen
12.1 Introduction
Monetary transmission through the banking system turns monetary policy decisions into reality for households and firms and depends to a large degree on the institutional features of each national financial system. Although many actions of the central bank take place simultaneously when the decision is made to tighten or loosen policy, these take effect as the financial markets respond and the banks adjust their interest rates. The process is known as interest-rate pass-through, and it is commonly assumed that transmission occurs from policy changes to short-term interest rates within a short period of time. With complete pass-through, monetary policy can be more efficient in its ability to control inflation, although incomplete pass-through can still be effective if it is predictable. The theoretical foundations for interest-rate pass-through are found in models of the banking firm introduced by Monti (1971) and Klein (1971), which set out some microfoundations of interest-rate setting as a profit maximization exercise for a deposit-taking institution that also makes loans. The context is usually a Cournot oligopoly model, where rates are determined by a small number of rate setters. Frictions in the form of adjustment costs limit the extent to which a bank may wish to make continuous, synchronized adjustments in the interest rates it offers borrowers and depositors in response to changes in monetary policy. The first empirical models tested interest-rate pass-through using linear models relating lending and deposit rates (collectively known as retail rates) to policy rates directly. The development of these models introduced more sophisticated dynamics, allowing asymmetric adjustment processes in the short run, while implying a long-run
relationship between retail rates and policy rates. We review this literature and investigate the empirical evidence on the interest-rate channel of monetary transmission. There are many other channels specific to the banking system that impinge on the transmission of policy to retail rates, for example, the bank lending channel and the bank capital channel. According to the bank lending channel thesis, a monetary tightening has an effect on bank loans because the drop in reservable deposits cannot be completely offset by issuing other forms of funding (i.e., uninsured CDs or bonds; for an opposite view, see Romer and Romer 1990) or liquidating some assets. Kashyap and Stein (1995, 2000), Stein (1998), and Kishan and Opiela (2000) claim that the market for bank debt is imperfect. Since nonreservable liabilities are not insured and there is an asymmetric information problem about the value of banks’ assets, a “lemons premium” is paid to investors. According to these authors, small, illiquid, and undercapitalized banks pay a higher premium because the market perceives them to be more risky. Because these banks are more exposed to asymmetric information problems, they have less capacity to shield their credit relationships in the case of a monetary tightening, and they should cut their supply of loans and raise their interest rates by a larger amount. Moreover, these banks have less capacity to issue bonds and CDs, and therefore they could try to contain the drain of deposits by raising their rates relative to other banks. The bank capital channel is based on the fact that bank assets typically have a longer maturity than their liabilities (Van den Heuvel 2002). After an increase in market interest rates, a smaller fraction of loans than of deposits can be renegotiated (loans are mainly long-term, while deposits are typically short-term).
Therefore, banks incur a cost due to the maturity mismatch that reduces profits and hence capital accumulation. If equity is sufficiently low and it is costly to issue new shares, banks reduce lending (otherwise, they fail to meet regulatory capital requirements) and widen their interest-rate spread. This leads to an increase in the interest rates on loans and a decrease in those on deposits; in the oligopolistic version of the Monti-Klein model, the maturity transformation cost has the same effect as an increase in operating costs. Of particular interest in monetary transmission is the role of communications, signaling, and forward guidance over future interest rates. Researchers have recognized the possibility that interest-rate futures may influence the pass-through of policy to retail rates, and therefore communications and signaling have a direct impact on rate setting. But further work has been done to understand how forecasts of future policy rates, or the short-term market rates that are closely related to them, influence current retail rate setting. This chapter will review this literature and evaluate the lessons for monetary transmission. Forward guidance (FG) is one form in which central banks have innovated as short-term policy rates have approached the zero lower bound (ZLB). We discuss the other mechanisms, such as liquidity operations and asset purchases, that have been used to influence market rates at various points on the yield curve. We then examine how these policies have affected interest rates set by banks, if at all, and consider the relative effectiveness of policy operations through these channels.1
Finally, we review the issues for the future. First, institutional change will to some degree alter banking systems and therefore influence the transmission of policy through the various channels mentioned. This is perhaps most evident in Europe, where a banking union will promote greater competition across national borders within the European Union, but other regions will also be affected. Second, global flows of liquidity through the banking system are likely to ensure that policy actions in advanced economies will spill over in varying degrees to emerging economies, having an impact on monetary transmission in the process. For these reasons, interest-rate pass-through remains a live issue for central banks.
12.2 How Do Banks Set Interest Rates?
12.2.1 A Theoretical Starting Point
The theoretical basis for the analysis of interest-rate pass-through often begins with the Monti-Klein model, which sets out some microfoundations of the rate-setting problem for a risk-neutral bank that operates under oligopolistic market conditions.2 The balance sheet of the representative bank is as follows:
L + S = D + B + K , (1)
where L stands for loans, S for securities, D for deposits, B for bonds, and K for capital. The bank holds securities as a buffer against contingencies. We assume that security holdings are a fixed share (α) of the outstanding deposits. They represent a safe asset and earn the risk-free interest rate.3 We have, therefore,
S = αD (2)
For simplicity, bank capital is assumed to be exogenously given in the period and is greater than capital requirements.4 The bank faces a loan demand and a deposit demand.5 The first one is given by
Ld = c0 iL + c1 y + c2 p + c3 iM   (c0 < 0, c1 > 0, c2 > 0, c3 > 0), (3)
which is negatively related to the interest rate on loans (iL) and positively related to real income (y), prices (p), and the opportunity cost of self-financing, proxied by the money-market interest rate (iM). The deposit demand depends positively on the interest rate on deposits, the level of real income (the scale variable), and the price level, and negatively on the interest rate on securities, which represent an alternative investment to deposits:
Dd = d0 iD + d1 y + d2 p + d3 iM   (d0 > 0, d1 > 0, d2 > 0, d3 < 0). (4)
The representative bank maximizes its profits subject to the balance-sheet constraint. The bank optimally sets the interest rates on loans and deposits (iL, iD), while it takes the money-market interest rate (iM) as given (it is assumed to follow the policy rate set by the central bank). For simplicity, we can consider that the interest rate on a bond is given by a markup on the money-market rate that depends on an exogenously given bank credit risk: iB = b0 iM, with b0 > 0. The cost of intermediation is given by

CIN = g1 L + g2 D   (g1 > 0, g2 > 0), (5)
where the component g1 L can be interpreted as a screening and monitoring cost, while g2 D is the cost of branching.6 Loans are risky, and in each period a percentage j of them is written off from the balance sheet, reducing the bank’s profitability. The representative bank maximizes the following problem:

Max{iL, iD} π = (iL − j)L + iM S − iD D − iB B − CIN
s.t. L + S = D + B + K. (6)
The optimal levels of the two interest rates are

iL = Ψ0 + Ψ1 p + Ψ2 iM + Ψ3 y + Ψ4 j; (7)
iD = Φ0 + Φ1 p + Φ2 iM + Φ3 y; (8)

where

Ψ0 = g1/2 > 0; Ψ1 = −c2/(2c0) > 0; Ψ2 = b0/2 − c3/(2c0) > 0; Ψ3 = −c1/(2c0) > 0; Ψ4 = 1/2;
Φ0 = −g2/2 < 0; Φ1 = −d2/(2d0) < 0; Φ2 = b0(1 − α)/2 − d3/(2d0) + α/2 > 0; Φ3 = −d1/(2d0) < 0.

Equation 7 indicates that a monetary tightening increases the interest rate on loans (Ψ2 > 0). The total effect can be divided into two parts: the increase in the cost of market funding (b0/2 > 0) and the “opportunity cost” effect (−c3/(2c0) > 0). Inflation, output, intermediation costs, and provisioning also have a positive effect on the lending rate. Equation 8, for the deposit interest rate, is slightly different. Also in this case the impact of a monetary tightening is positive (Φ2 > 0), but it can now be split into three parts: the “cost of market funding” effect (b0(1 − α)/2 > 0), the “opportunity cost” effect (−d3/(2d0) > 0), and the “liquidity buffer” effect (α/2 > 0). The intuition is that a monetary squeeze automatically increases the cost of borrowing for banks’ uninsured funds and the return on securities (the alternative investment for depositors); both effects therefore push the bank to raise the interest rate on deposits in order to attract more insured funds. The share of deposits invested in securities (α) acts as a simple “reserve coefficient” that raises the return on the liquid portfolio. The deposit interest rate reacts negatively to an output expansion (Φ3 < 0) and to an increase in prices (Φ1 < 0): an economic expansion shifts the deposit demand schedule outward and lowers the cost of deposits (recall that deposit demand is upward-sloping with respect to iD).
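The comparative statics in equations 7 and 8 can be checked numerically. The sketch below is purely illustrative: all parameter values are assumptions chosen only to respect the sign restrictions in equations 3 and 4, not estimates from any banking system.

```python
# Illustrative check of the Monti-Klein comparative statics in eqs. (7)-(8).
# All parameter values are assumed for illustration.

def loan_rate(p, i_m, y, j, c0=-0.5, c1=0.3, c2=0.2, c3=0.1, b0=1.2, g1=0.01):
    """Optimal lending rate, eq. (7): iL = Psi0 + Psi1*p + Psi2*iM + Psi3*y + Psi4*j."""
    psi0 = g1 / 2
    psi1 = -c2 / (2 * c0)
    psi2 = b0 / 2 - c3 / (2 * c0)
    psi3 = -c1 / (2 * c0)
    psi4 = 0.5
    return psi0 + psi1 * p + psi2 * i_m + psi3 * y + psi4 * j

def deposit_rate(p, i_m, y, d0=0.6, d1=0.4, d2=0.2, d3=-0.3, b0=1.2, g2=0.01, alpha=0.1):
    """Optimal deposit rate, eq. (8): iD = Phi0 + Phi1*p + Phi2*iM + Phi3*y."""
    phi0 = -g2 / 2
    phi1 = -d2 / (2 * d0)
    phi2 = b0 * (1 - alpha) / 2 - d3 / (2 * d0) + alpha / 2
    phi3 = -d1 / (2 * d0)
    return phi0 + phi1 * p + phi2 * i_m + phi3 * y

# A 100 basis point monetary tightening raises both rates (Psi2 > 0, Phi2 > 0),
# but by different amounts, so pass-through differs across products.
base_l = loan_rate(p=0.02, i_m=0.03, y=0.01, j=0.005)
tight_l = loan_rate(p=0.02, i_m=0.04, y=0.01, j=0.005)
base_d = deposit_rate(p=0.02, i_m=0.03, y=0.01)
tight_d = deposit_rate(p=0.02, i_m=0.04, y=0.01)
assert tight_l > base_l and tight_d > base_d
print(f"d iL / d iM = {(tight_l - base_l) / 0.01:.3f}")
print(f"d iD / d iM = {(tight_d - base_d) / 0.01:.3f}")
```

With these particular values the tightening is passed through at 70 basis points to the lending rate and 84 basis points to the deposit rate; changing the assumed demand elasticities changes the split between the funding-cost, opportunity-cost, and liquidity-buffer components.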
12.2.2 Conventional Monetary Policy and the Monetary Transmission Mechanism
The conventional approach to the transmission of monetary policy places great emphasis on the interest-rate channel. Official rates controlled by policymakers feed through to wholesale rates within days and may even be anticipated if policy is clearly communicated in advance of official rate changes (Cook and Hahn 1989). These wholesale rates are the marginal source of funds for many banks, and they feed through to retail rates over time, which then influence household saving and corporate investment decisions. The empirical investigation of this first phase of the transmission channel is explored using retail rates offered on deposits, mortgage products, and corporate loans in many countries. The standard approach in the literature is to assume that banks follow a marginal cost pricing model (Rousseas 1985) defining some long-run relationship between retail and market rates of interest as
iLt = α + β iMt + e, (9)
where iLt is the loan rate set by banks (we could equally discuss deposit rates), α is a constant markup, iMt is the marginal cost of funds based on wholesale market rates of a similar maturity, and β is the pass-through estimate. There is a lively current debate about what the relevant marginal cost of funding should be, with many authors arguing that banks use a blend of deposits and secured and unsecured funding to finance their loans (see Cadamagnani, Harimohan, and Tangri 2015; Illes, Lombardi, and Mizen 2015; von Borstel, Eickmeier, and Krippner 2016). Prior to the financial crisis, at least, the money-market rate or the policy rate was a reasonable approximation to the cost of funds for banks. This approach is typically estimated in an error-correction model setup where the long-run relationship between retail and market rates provides the equilibrium relation around which the short-run variation in rates occurs:
ΔiLt = γL + δL(iLt−1 − α − β iMt−1) + Σ_{i=1}^{J} ηiL ΔiLt−i + Σ_{i=1}^{J} φiL ΔiMt−i + vLt. (10)
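The two equations above can be estimated by the familiar two-step (Engle-Granger) procedure. The sketch below runs both steps with plain least squares on simulated data, where the “true” long-run pass-through is set to 0.8 by assumption; it is a minimal illustration, not any particular study’s estimator.

```python
# Two-step estimate of the pass-through ECM in eqs. (9)-(10) on simulated
# monthly data. The "true" markup (1.0) and pass-through (0.8) are assumptions.
import numpy as np

rng = np.random.default_rng(0)
T = 400
i_m = np.cumsum(rng.normal(0, 0.1, T)) + 4.0        # random-walk money-market rate
i_l = 1.0 + 0.8 * i_m + rng.normal(0, 0.05, T)      # retail rate tied to it long-run

# Step 1: long-run relation iL = alpha + beta * iM + e  (eq. 9)
X = np.column_stack([np.ones(T), i_m])
alpha, beta = np.linalg.lstsq(X, i_l, rcond=None)[0]
ecm_term = i_l - alpha - beta * i_m                 # deviation from long-run equilibrium

# Step 2: short-run dynamics (eq. 10): d(iL) on lagged disequilibrium and d(iM)
d_il, d_im = np.diff(i_l), np.diff(i_m)
X2 = np.column_stack([np.ones(T - 1), ecm_term[:-1], d_im])
gamma, delta, phi = np.linalg.lstsq(X2, d_il, rcond=None)[0]

print(f"long-run pass-through beta = {beta:.2f}")
print(f"loading (speed of adjustment) delta = {delta:.2f}")
```

The first step recovers β close to its assumed value of 0.8, and the loading δL comes out negative, as required for the disequilibrium term to pull the retail rate back toward its long-run level.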
12.2.2.1 Country-Level Studies of Interest-Rate Pass-Through
Studies from the mid-1990s summarized in table 12.1 (from de Bondt 2005) show that changes in wholesale market rates are not fully passed through to short-term bank lending rates to firms and households, but pass-through is more complete in the long term (Bank for International Settlements 1994; Cottarelli and Kourelis 1994; Borio and Fritz 1995; Mojon 2000). This is confirmed by Kleimeier and Sander (2000), Donnay and Degryse (2001), and Toolsema et al. (2001), who consider cross-country studies, and by de Bondt (2005), who finds that pass-through of wholesale rates to retail rates is also incomplete, in line with previous country studies. The adjustment to a wholesale market rate change is between 30 and 50 percent within one month, depending on the retail rate, with rates offered to corporations typically adjusting more quickly than rates offered to households. Table 12.1 summarizes the findings of many studies. The average time for retail rates to fully pass through a market rate change is three to ten months in de Bondt (2005).
12.2.2.2 Bank-Level Studies of Interest-Rate Pass-Through
Economic theory on oligopolistic (and perfect) competition suggests that in the long run, both banking rates (on lending and deposits) should be related to the level of the market interest rate, which reflects the marginal yield of a risk-free investment (Klein 1971). Gambacorta (2008), among others, tests this hypothesis on bank-level data using a panel estimator. This has two advantages: first, it allows us to test whether banks respond in different ways to the market interest rate, and second, it permits the introduction of bank-specific characteristics, for example, balance-sheet variables such as liquidity, capital ratios, and so on, that may impinge on the pass-through decision. One typical model for this kind of panel estimate is

ΔiLk,t = μLk + Σ_{j=1}^{q} κLj ΔiLk,t−j + Σ_{j=0}^{q} βLj ΔiMt−j + αL iLk,t−1 + γL iMt−1 + Σ_{j=0}^{q} φLj Zt−j + ΓL Φk,t + εLk,t (11)

ΔiDk,t = μDk + Σ_{j=1}^{q} κDj ΔiDk,t−j + Σ_{j=0}^{q} βDj ΔiMt−j + αD iDk,t−1 + γD iMt−1 + Σ_{j=0}^{q} φDj Zt−j + ΓD Φk,t + εDk,t (12)
with k = 1, . . . , N (k = banks) and t = 1, . . . ,T (t = periods). The subscript L stands for loans, D stands for deposits, and q lags have been selected in order to obtain well-behaved residuals. The vector Z includes stationary variables (such as GDP growth, inflation) that influence interest rates in the short run; Φ is a vector of seasonal dummies. The model allows for fixed effects across banks, as indicated by the bank-specific intercept
Table 12.1 Interest-Rate Pass-Through Studies for Individual Euro Area Countries (Adjustment of Retail Bank Rates to a 100 Basis Points Change in Money-Market Interest Rates, in Basis Points)
[Table 12.1 reports, for each country and for the euro area as a whole, short-term (ST) and long-term (LT) pass-through estimates for short-term loans to firms, long-term loans to firms, mortgages, savings deposits, and time deposits.]
Sources: BIS 1994, tab. 5, 1984–1993; Cottarelli and Kourelis 1994, tab. 1, model 2; Borio and Fritz 1995, tab. 8, 1990–1994; Kleimeier and Sander 2000, tab. 5, 1994–1998; Mojon 2000, tab. 2a, 1992–1998; Donnay and Degryse 2001, tab. 3, 1992–2000; Toolsema et al. 2001, tab. 3, 1980–2000.
Note: ST = short-term pass-through, that is, adjustment after three months; LT = long-term pass-through. AT = Austria; BE = Belgium; DE = Germany; ES = Spain; FI = Finland; FR = France; GR = Greece; IE = Ireland; IT = Italy; LU = Luxembourg; NL = Netherlands; PT = Portugal. Euro area figures are based on available country results using January 2001 country weighting structures as applied for euro area retail bank interest rates.
μk. The long-run pass-through between each bank rate and the money-market rate is given by −γ/α, while the loading coefficient is represented by α. Therefore, to test whether the pass-through between the money-market rate and the bank rate is complete, it is necessary to verify that this long-run elasticity is equal to one. If this is the case, there is a one-to-one long-run relationship between the lending (deposit) rate and the money-market rate, while the individual effect influences the bank-specific markup (markdown). The adjustment coefficient must be significantly negative if the assumption of an equilibrium relationship is correct. In fact, it represents the speed of adjustment toward equilibrium, often associated with the concept of a “half-life” of a shock. The degree of banks’ interest-rate stickiness in the short run can be analyzed by the impact multipliers βL0 and βD0.
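Given estimates of the loading α and the long-run coefficient γ from equations 11 and 12, these summary quantities follow directly from the coefficients. The helper below uses made-up coefficient values, and the half-life formula assumes the simple geometric decay implied by the loading coefficient alone, ignoring the short-run dynamics.

```python
# Long-run pass-through and half-life implied by the panel ECM coefficients.
# The coefficient values used below are illustrative, not estimates.
import math

def long_run_pass_through(alpha, gamma):
    """Long-run pass-through implied by eqs. (11)-(12): -gamma / alpha."""
    return -gamma / alpha

def half_life(alpha):
    """Periods for half of a deviation from equilibrium to dissipate,
    assuming geometric decay at rate (1 + alpha), with alpha in (-1, 0)."""
    return math.log(0.5) / math.log(1 + alpha)

alpha, gamma = -0.20, 0.18          # assumed monthly estimates
print(f"long-run pass-through: {long_run_pass_through(alpha, gamma):.2f}")  # 0.90
print(f"half-life: {half_life(alpha):.1f} months")                          # 3.1 months
```

A loading of −0.20 means one-fifth of any disequilibrium is closed each month, so pass-through is incomplete in the short run (long-run elasticity 0.90 here, below one) and half of a shock persists for about three months.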
12.2.3 Asymmetric Adjustment and Frictions
In the Monti-Klein model, it is assumed that if markets are perfectly competitive, then pass-through should be full, symmetrical, and relatively swift in response to changes in official rates, but few studies have found this to be the case. The Monti-Klein model can allow for more realistic features of financial markets, including imperfect competition, asymmetric information, and switching costs, and with these features it implies that full pass-through is a long-run phenomenon, with deviations from this “equilibrium” occurring in the short run. For these deviations to persist, there need to be frictions that impede adjustment. Assessments of the dynamic adjustment of interest rates have found asymmetry and nonlinear adjustment in response to official rate changes (Heffernan 1997; Hofmann and Mizen 2004; Sander and Kleimeier 2004; De Graeve, De Jonghe, and Vennet 2007; Gambacorta and Iannotti 2007; Fuertes, Heffernan, and Kalotychou 2008). Studies of time series of weighted-average interest rates and of panel data on individual financial institutions’ interest rates have found strong evidence in favor of nonlinearity and heterogeneity as financial institutions negotiate imperfections in financial markets, switching costs, and menu costs. These models specify adjustment around a contemporaneous equilibrium between retail and money-market rates and do not include forecasts of future money-market rates. They thereby ignore the realistic possibility that banks use their own expectations of future market rates to minimize the uncertainty they face when setting retail rates. We will relax this assumption later on. It is typically assumed in theoretical models that deposit-taking financial institutions have some degree of monopoly power in price setting, and they are normally represented in theoretical settings by monopolistic competitors facing linear demand functions and quadratic costs.
But monopolistic competition may not be sufficient to generate the discontinuous adjustment that we observe in retail rates.7 The costs of adjustment must imply that the profit function flattens off to discourage instantaneous adjustment in response to shocks. For a departure from continuous adjustment to retail rates (instantaneous and complete pass-through), we require costs to be associated with adjustment of rates. Costs may arise due to the search for information (Blanchard and Fischer 1998),
menu costs of adjusting prices (Akerlof and Yellen 1985; Ball and Romer 1989; Mankiw 1985; Ball and Mankiw 1994), or nonpecuniary costs of lost custom after adjustment of rates (Rotemberg 1992).8 When these costs exist, rate setters have an incentive to avoid passing through minor changes to official rates, to anticipate the direction of a sequence of small changes to official rates, accumulating them in a single retail rate change, and to anticipate turning points. When the profit function is flat, there is little incentive to adjust rates to small changes in official rates. Only when it is anticipated that there will be a succession of rate changes in the same direction will banks and building societies have an incentive to adjust rates. These could then be introduced in a cost-minimizing way by preempting the full increase or by catching up with official rates after the event. Alternatively, by foreseeing a turning point, the costs of reversals can be avoided by not following the curve all the way to the top and the bottom of the cycle but rather catching the rates on the way back up or down. Financial institutions can reduce costs by smoothing official rates. This involves forward-looking dynamic behavior to anticipate the future path of official rates. A simple model can be imagined as a variant of the asymmetric adjustment framework of Ball and Mankiw (1994), where banks have a desired retail rate that is set for two periods with the option to reset in the intervening period at a cost (the menu cost). Rates are subject to shocks, and these may trigger adjustment, but as in Ball and Mankiw (1994), there will be a range of shocks for which adjustment does not occur. This can be implemented if we assume that the loss function of the financial institution is quadratic in the difference of retail rates from their desired levels and that there is a fixed cost C of adjustment in odd periods.
The lending rate, iL, set at the beginning of every even period for the next two periods (0 and 1), is given by

iL*0 = (iM0 + E1iM1)/2, (10)
where iM0 is the current (even period) base rate and E1iM1 is the expected market rate for the following period.9 This determines the ex ante optimal retail rate. Unexpected innovations to the market rate can occur, hence iM1 = E1iM1 + ε, where ε is a shock that is normally distributed with zero mean and constant variance. The financial institution can now decide to adjust rates, resetting them to the optimal level for the next two periods, iL*1 = (iM1 + E1iM2)/2, but if the loss of not adjusting to the shock is higher than the menu cost,
E1 Σ_{i=0}^{1} [(iL*0 − iM1+i)² − (iL*1 − iM1+i)²] = 2(iL*0 − iL*1)² > C, (11)
then adjustment will occur. The threshold for adjustment is

[(iL*0 − iM1) − (E1iM2 − iM1)/2]² > C/2. (12)
And as a result, retail rates will tend to lie in the range

(iL*0 − iM1) ∈ [−√(C/2) + (E1iM2 − iM1)/2, √(C/2) + (E1iM2 − iM1)/2]. (13)
This condition generates two hypotheses. First, adjustment is predicted to be asymmetric. We expect to find that the response to a shock of a given magnitude will depend on whether the shock is positive or negative. This hypothesis indicates that for a given positive disequilibrium, adjustment is more likely if base rates are expected to fall, and for a given negative disequilibrium, adjustment is more likely if base rates are expected to rise. Second, adjustment is predicted to be nonlinear and related to forward-looking variables that identify the future direction of rates. Adjustment will therefore be faster if the expected differential between the current base rate and its future value widens, and it will be slower if the expected differential narrows. The role of forward-looking variables introduces expectations, the appropriate horizon over which the differential might be calculated, and other indicators of the future path of rates. Among the wholesale market factors that have been found to affect the responsiveness of bank deposit rates are the direction of the change in wholesale market rates (Hannan and Berger 1991), whether the bank interest rate is above or below a target rate (Hutchison 1995; Neumark and Sharpe 1992), and market concentration in the bank’s deposit market (Hannan and Berger 1991). The response of bank rates to changes in wholesale market rates has been shown to depend on the direction of change in wholesale market rates (Borio and Fritz 1995; Mojon 2000) or whether bank interest rates lie above or below some “equilibrium” level determined by cointegration relations (Kleimeier and Sander 2000; Hofmann and Mizen 2004). Gambacorta and Iannotti (2007) find that the interest-rate adjustment in response to positive and negative shocks is asymmetric because banks adjust their loan and deposit rates faster during periods of monetary tightening compared to periods of monetary easing.
Hofmann and Mizen (2004) present evidence for UK banks and building societies suggesting that the speed of adjustment of deposit and lending rates to policy rates depends on their expected future path.
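The inaction band in equations 11 through 13 is easy to simulate. In the sketch below, the shock volatility, the menu cost, and the random-walk forecast E1iM2 = iM1 are all assumptions; the point is only that small shocks are absorbed in the margin while large ones trigger a full reset.

```python
# Simulation of the menu-cost rule: reset the retail rate only when
# 2*(iL0* - iL1*)^2 > C, the adjustment condition in eq. (11).
# Shock volatility, menu cost, and the forecast rule are assumptions.
import numpy as np

rng = np.random.default_rng(1)
C = 0.02                         # menu cost (assumed)
i_m0, e1_i_m1 = 4.0, 4.0         # current base rate and its expected next value

adjusted = 0
n = 10_000
for _ in range(n):
    il0_star = (i_m0 + e1_i_m1) / 2          # ex ante optimal retail rate, eq. (10)
    i_m1 = e1_i_m1 + rng.normal(0, 0.25)     # realized base rate with shock eps
    e1_i_m2 = i_m1                           # assume a random-walk forecast
    il1_star = (i_m1 + e1_i_m2) / 2          # ex post optimal retail rate
    if 2 * (il0_star - il1_star) ** 2 > C:   # loss exceeds menu cost: adjust
        adjusted += 1

print(f"share of periods with adjustment: {adjusted / n:.2f}")
```

With C = 0.02, only shocks larger than √(C/2) = 0.1 percentage points trigger a reset, so a substantial share of shocks is never passed through; raising the menu cost or lowering the shock variance widens the measured stickiness.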
12.2.4 Smoothing
As central banks have tended to make many small steps in place of a few large ones in recent years (Goodhart 1996; Sack 1998), base rates have been smoothed and are therefore autocorrelated. In view of this, financial institutions, for which the menu costs of changing rates are considerably higher in pecuniary and nonpecuniary terms than for the central bank, have formed expectations about the future trend in base rates when official rates are expected to follow a sequence of steps. If base rates are autocorrelated and the differential is expected to widen further, they may pass the total anticipated base rate change through to retail rates in full. Whether this would tend to
cause retail rates to anticipate official rate changes (so that retail rates lie ahead of the curve) or to follow them (so that retail rates lie behind the curve) will be a feature associated with the type of institution and the product in question. As far as pass-through is concerned, financial institutions have two options in the face of a shock: (i) they can take a lower margin in the odd period, or (ii) they can incur a one-off menu cost to increase retail rates, thereby narrowing the expected differential in the odd period. Under option (i) pass-through of base rate changes to retail rates is 0 percent; under option (ii) it is 100 percent. The possibility of intermediate levels of pass-through occurs because the decision to wait or respond is reassessed every period in the light of expectations about the future path of base rates. A sequence of decisions to wait, followed by 100 percent pass-through in the kth period, would yield a rate of 100/k percent pass-through.
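The closing arithmetic can be spelled out: if the decision to wait is repeated until period k, when the bank finally passes the change through in full, measured pass-through averages 100/k percent per period. A one-line illustration:

```python
def average_pass_through(k):
    """Measured per-period pass-through (percent) when a bank waits k - 1
    periods and then passes a base-rate change through in full in period k."""
    return 100 / k

for k in (1, 2, 4):
    print(f"full adjustment in period {k}: {average_pass_through(k):.0f}% per period")
# -> 100%, 50%, 25% per period
```

So the same bank can appear to exhibit very different degrees of stickiness depending on how long it chooses to wait before resetting its rates.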
12.3 New Approaches and Additional Hypotheses
With the collection of harmonized data for more countries on a consistent basis and the lengthening of the sample of retail and market rates, more sophisticated econometric methods have been employed. Single-equation models have been replaced with VARs, and panels of data have been used to explore the relationships between banks or between countries. In this section, we consider the range of additional hypotheses that can be tested with broader and longer data sets and multivariate approaches to econometrics.
12.3.1 The New Bank Lending Channel
In the decade before the global financial crisis, the banking industry experienced a period of intensive financial deregulation. This increased competition in the banking sector, lowering in turn the market power of banks and thereby depressing their charter value. The decline in banks’ charter values, coupled with their limited liability and the existence of “quasi” flat-rate deposit insurance, encouraged banks to expand and take on new risks. As a result, there has been intense growth in lending together with an expansion of the range of financial products usually offered by financial institutions. For instance, banks expanded into more volatile noninterest sources of income. In parallel, financial innovation contributed to the development of the “originate to distribute” model, an intermediation approach in which banks originate, repackage, and then sell on their loans (or other assets such as bonds or credit-risk exposures) to the financial market. In principle, these innovations allowed a broader range of investors to access a class of assets hitherto limited to banks (i.e., loans), thereby distributing the risks to financial markets.
Inside the Bank Box 381

The spectacular increase in the size of institutional investors has also meant that banks could rely much more on market sources of funding, contributing to the expansion of the securitization and covered bond markets. As a result, banks' funding became much more dependent on the perceptions of financial markets. These changes had a significant impact on the bank lending channel of monetary policy transmission, especially during the global financial crisis. In this section, we focus in particular on three major aspects that we believe became important: (i) the role of bank capital; (ii) market funding, securitization, and the new business model; and (iii) the link between monetary policy and bank risk. To better understand the characteristics of the new credit channel of monetary policy transmission, we can slightly modify the model presented in section 12.2.1. Banks are risky, and their bonds are not insured; therefore (unlike in the simple Monti-Klein framework), the bond interest rate incorporates a risk premium that we assume depends on bank-specific characteristics (i.e., bank size, liquidity, and capitalization). The latter are balance-sheet or institutional characteristics, exogenously given at the end of the previous period:
$$i_B(i_M, x_{t-1}) = b_0\, i_M + b_1\, i_M\, x_{t-1} + b_2\, x_{t-1}, \qquad (b_0 > 1) \qquad (14)$$
In other words, this assumption implies that the distributional effects via the bank lending channel depend on characteristics that allow the bank to substitute insured liabilities (typically deposits) with uninsured liabilities (such as bonds or CDs) (see Romer and Romer 1990). For example, theory predicts that big, liquid, and well-capitalized banks should be perceived as less risky by the market and obtain a lower cost on their uninsured funding (b2 < 0). Moreover, they could react less to a monetary change (b1 < 0). The total cost of maturity transformation is

$$C^{MT} = \rho_{t-1}\, \Delta i_M\, (L + Q), \qquad (\rho > 0) \qquad (15)$$
where $C^{MT}$ represents the total cost incurred by the bank, in the event of a change in monetary policy, due to maturity transformation. Since loans typically have a longer maturity than bank fund-raising, the variable ρ represents the cost (gain) per unit of assets that the bank incurs in the case of a 1 percent increase (decrease) in the monetary policy interest rate.10 As before, the representative bank maximizes its profits subject to the balance-sheet constraint. The bank optimally sets the interest rates on loans and deposits (iL, iD), while it takes the money-market interest rate (iM) as given (it is assumed to follow the policy rate set by the central bank):

$$\max_{i_L,\, i_D}\ \pi = (i_L - j)L + i_M S - i_D D - i_B B - C^{MT} - C^{IN}$$
$$\text{s.t.}\quad L + Q = D + B + K. \qquad (17)$$
Solving the maximization problem, the optimal levels of the two interest rates are
$$i_L = \Psi_0 + \Psi_1\, p + (\Psi_2 + \Psi_3\, x_{t-1})\, i_M + \Psi_4\, y^P + \Psi_5\, \rho_{t-1}\, \Delta i_M + \Psi_6\, j + \Psi_7\, x_{t-1}; \qquad (18)$$

$$i_D = \Phi_0 + \Phi_1\, p + (\Phi_2 + \Phi_3\, x_{t-1})\, i_M + \Phi_4\, y^P + \Phi_5\, \rho_{t-1}\, \Delta i_M + \Phi_6\, j + \Phi_7\, x_{t-1}; \qquad (19)$$

where

$$\Psi_0 = \frac{g_1}{-2c_0} > 0;\ \ \Psi_1 = \frac{c_2}{-2c_0} > 0;\ \ \Psi_2 = \frac{b_0}{2} + \frac{c_3}{-2c_0} > 0;\ \ \Psi_3 = \frac{b_1}{2};\ \ \Psi_4 = \frac{c_1}{-2c_0} > 0;\ \ \Psi_5 = \frac{1}{2};\ \ \Psi_6 = \frac{1}{2};\ \ \Psi_7 = \frac{b_2}{2};$$

$$\Phi_0 = -\frac{g_2}{2d_0} < 0;\ \ \Phi_1 = -\frac{d_2}{2d_0} < 0;\ \ \Phi_2 = \frac{b_0(1-\alpha)}{2} + \frac{-d_3}{2d_0} + \frac{\alpha}{2} > 0;\ \ \Phi_3 = -\frac{b_1(1-\alpha)}{2};\ \ \Phi_4 = -\frac{d_1}{2d_0} < 0;\ \ \Phi_5 = -\frac{1-\alpha}{2} < 0;\ \ \Phi_6 = \frac{b_2(1-\alpha)}{2}.$$
Equation 18 indicates that the effect of a monetary squeeze is smaller if the bank-specific characteristic reduces the impact of monetary policy on the cost of funding (b1 < 0 and Ψ3 < 0). In this case, banks have a greater capacity to compensate for the deposit drop by issuing uninsured funds at a lower price. The effect of the so-called bank capital channel is also positive (Ψ5 > 0): owing to the longer maturity of bank assets relative to liabilities (ρ > 0), in the case of a monetary tightening (ΔiM > 0), the bank suffers a cost and a subsequent reduction in profit; given the capital constraint, this effect produces an increase in loan interest rates (the mirror effect is a decrease in lending). In equation 19, the distributional effects of monetary policy are equal to those described above for the interest rate on loans: the effects on the cost of deposits are smaller for banks with certain characteristics only if b1 < 0 and Ψ3 < 0. The effects of the bank capital channel are negative (Φ5 < 0); as we have seen, in the case of a monetary tightening (ρΔiM > 0), the bank suffers a cost and a reduction in profit, which induces the bank to increase its interest-rate margin by reducing the interest rate on deposits.
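The comparative statics of equation 18 can be illustrated with a small numeric sketch. All parameter values below are hypothetical, and the function name is ours; the point is only that, with b1 < 0 (hence Ψ3 < 0), a bank with a higher value of the characteristic x raises its loan rate by less after a tightening, while the bank capital channel (Ψ5 ρ ΔiM) adds to the increase:

```python
def loan_rate_response(psi2, psi3, psi5, x, rho, d_im):
    """Change in the loan rate after a change d_im in the money-market
    rate, following the structure of equation 18: a distributional term
    (psi2 + psi3 * x) * d_im plus the bank capital channel psi5 * rho * d_im."""
    return (psi2 + psi3 * x) * d_im + psi5 * rho * d_im

# Hypothetical 1pp tightening (d_im = 1) with maturity mismatch rho = 0.5.
weak_bank   = loan_rate_response(psi2=0.8, psi3=-0.2, psi5=0.5, x=0.0, rho=0.5, d_im=1.0)
strong_bank = loan_rate_response(psi2=0.8, psi3=-0.2, psi5=0.5, x=1.0, rho=0.5, d_im=1.0)

# With psi3 < 0, the bank with the higher characteristic x responds less.
print(weak_bank, strong_bank)
```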
12.3.2 The Role of Bank Capital

Capital is an important driver of banks' decisions, particularly in periods of financial stress in which capital targets imposed by banks' creditors or regulators become more stringent. There is an extensive literature on the relationship between bank capital and lending behavior (for a review, see, among others, VanHoose 2007; Borio and Zhu 2014). A recent study by the European Banking Authority (2015) finds substantial beneficial credit supply effects of greater bank capital in a cross-country study of European banks. Michelangeli and Sette (2016) use a novel micro data set constructed from web-based mortgage brokers to show that better-capitalized banks lend more. After the global financial crisis, interest shifted in particular to the link between bank capital regulation and monetary policy.
In the traditional bank lending channel, a monetary tightening may affect bank lending if the drop in deposits cannot be completely offset by issuing nonreservable liabilities (or liquidating some assets). Since the market for bank debt is not frictionless and nonreservable bank liabilities are typically not insured, a "lemons premium" has to be paid to investors. In this case, bank capital can affect banks' external ratings, providing investors with a signal about their creditworthiness. The cost of nonreservable funding (i.e., bonds or CDs) would therefore be higher for banks with low levels of capitalization if they were perceived as riskier by the market. Such banks are therefore more exposed to asymmetric information problems and have less capacity to shield their credit relationships (Jayaratne and Morgan 2000). If banks were able to issue unlimited amounts of CDs or bonds not subject to reserve requirements, the bank lending channel would in principle not be effective.11 In general, bank capital has an effect on lending if two conditions hold. The first is that breaching the minimum capital requirement is costly, so banks want to limit the risk of future capital inadequacy (Van den Heuvel 2002). As capital requirements are linked to the amount of credit outstanding, a drop in capital forces an immediate adjustment in lending. By contrast, if banks have an excess of capital, the drop in capital could easily be absorbed without any consequence for the lending portfolio. Because equity is relatively costly in comparison with other forms of funding (e.g., deposits, bonds), banks tend to economize on capital and usually aim to minimize the amount held in excess of what regulators (or the markets) require.
The second condition is an imperfect market for bank equity: banks cannot easily issue new equity, particularly in periods of crisis, because of tax disadvantages, adverse selection problems, and agency costs. Empirical evidence has shown that these two conditions typically hold and that bank capital matters in the propagation of shocks to the supply of bank credit (Kishan and Opiela 2000; Gambacorta and Mistrulli 2004; Altunbas, Gambacorta, and Marques-Ibanez 2009b). These papers show that capital can become an important driver of banks' incentive structures, particularly in periods of financial stress, because during such periods raising capital becomes more difficult. It is therefore highly probable that during the recent crisis, capital constraints on many banks limited the lending they supplied. In the same vein, Beltratti and Stulz (2009) showed that the stock-market prices of banks with more Tier 1 capital performed relatively better during the crisis than those of banks with low levels of capitalization. Higher bank capital ratios have sometimes been associated with higher funding costs. However, studies making this association neglect the fact that more capital makes banks safer, reducing the cost of funding via noninsured deposits and bonds. Using a large data set covering 105 international banks headquartered in fourteen advanced economies for the period 1994–2012, Gambacorta and Shin (2016) find that a 1-percentage-point increase in the equity-to-total-assets ratio is associated with a 4-basis-point reduction in the cost of nonequity financing. Given that nonequity funding represents, on average, around 86 percent of total bank liabilities, the resulting effects on the overall cost of funding can be sizable. A back-of-the-envelope calculation suggests that the
cost of equity, approximated by banks' return on equity (ROE), would be reduced to about one-third.12 More generally, in economic systems underpinned by relationship-based lending, adequate bank capital allows financial intermediaries to shield firms from the effects of exogenous shocks (Bolton et al. 2016; Gobbi and Sette 2015). In another study, Banerjee, Gambacorta, and Sette (2016) analyze credit register data for Italian nonfinancial firms and find that the investment and employment of firms with close relationships with well-capitalized banks were affected less during the global financial crisis. These results highlight bank capital not only as a buffer to protect depositors but also as a buffer for the wider economy.
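The first step of the Gambacorta and Shin (2016) back-of-the-envelope calculation can be reproduced directly under the stated assumptions (a 1-percentage-point increase in the equity ratio lowers the cost of nonequity funding by 4 basis points, and nonequity funding is about 86 percent of liabilities; the variable names are ours):

```python
# Effect of a 1-percentage-point higher equity-to-total-assets ratio
# on the overall (liability-weighted) cost of funding, in basis points.
nonequity_share = 0.86   # average share of nonequity funding in liabilities
saving_bp = 4.0          # basis-point fall in the cost of nonequity funding

overall_saving_bp = nonequity_share * saving_bp
print(round(overall_saving_bp, 2))  # 3.44
```

The weighting simply scales the 4-basis-point saving by the share of liabilities to which it applies.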
12.3.3 Market Funding, Securitization, and the New Bank Business Model

Innovations in funding markets have had a significant impact on banks' ability and incentives to grant credit and, more specifically, on the effectiveness of the bank lending channel. A major innovation has been banks' greater reliance on market sources of funding, whether traditional (e.g., the covered bond market) or the result of financial innovation (e.g., securitization activity). Greater recourse to these market-funding instruments has made banks increasingly dependent on capital markets' perceptions. It has also made them less reliant on deposits to expand their loan base (Gambacorta and Marques-Ibanez 2011). Before the financial crisis, most banks were easily able to complement deposits with alternative forms of financing. Specifically, in line with the Romer and Romer (1990) critique, the bank lending channel was less effective because banks could use nondeposit sources of funding—such as CDs, covered bonds, and asset-backed securities (ABSs)—to compensate for a drop in deposits. In the presence of internal capital markets, for example within bank holding companies, it may be possible to isolate exogenous variation in the financial constraints faced by banks' subsidiaries. Ashcraft (2006) and Gambacorta (2005) show that the loan growth rate of affiliated banks is less sensitive to changes in monetary policy interest rates than that of unaffiliated banks. In other words, owing to the presence of internal capital markets, banks affiliated with multibank holding companies are better able to smooth policy-induced changes in official rates. This is because a large holding company can raise external funds more cheaply and downstream those funds to its subsidiaries. Similar results are obtained by Ehrmann and Worms (2004). Overall, this evidence suggests that the role of the bank lending channel may be reduced in the case of small banks affiliated with a larger entity.
The change in the structure of banks’ funding is also having an impact on banks’ intermediation function. As banks become more dependent on market funding, there is also a closer connection between the conditions in the corporate bond market and banks’ ability to raise financing. Consequently, banks’ incentives and ability to lend are
also likely to be more sensitive to investors' perceptions and overall financial-market conditions than in the past, when banks were more heavily funded via deposits.13 From a monetary policy perspective, this would mean that the impact of a given level of interest rates on bank loan supply and loan pricing could change over time, depending on financial-market conditions (Hale and Santos 2010). A related strand of the recent literature focuses on the role of securitization (see Marques-Ibanez and Scheicher 2010). Securitization activity did indeed increase spectacularly in the years prior to the global financial crisis in countries where it had hardly been used before. The change in banks' business models from "originate and hold" to "originate, repackage, and sell" had significant implications for financial stability and the transmission mechanism of monetary policy. This is because the same instruments that are used to hedge risks also have the potential to undermine financial stability—by facilitating the leveraging of risk. Moreover, there were major flaws in the actual interaction among the different players involved in the securitization process as conducted prior to the crisis. These included misaligned incentives along the securitization chain, a lack of transparency with regard to the underlying risks of the securitization business, and the poor management of those risks. The implications of securitization for the incentives that banks have to grant credit and their ability to react to monetary policy changes can be analyzed from different angles. First, there is significant evidence suggesting that securitization in the subprime segment led to laxer screening of borrowers prior to the crisis.14 The idea is that as securities are passed through from banks' balance sheets to the markets, financial intermediaries may have fewer incentives to screen borrowers.
In the short term, this change in incentives would contribute to looser credit standards, so some borrowers who in the past were denied credit would now be able to obtain it. In the long term, this would lead to higher default rates on bank loans. The laxer screening of borrowers is typically linked to an expansion in the credit granted or to lower rates. Indeed, Mian and Sufi (2009)—using comprehensive information, broken down by US postal zip codes, to isolate demand factors—show that securitization played an important role in the expansion of the supply of credit. Second, there is evidence that securitization has reduced the influence of monetary policy changes on credit supply. In normal times (i.e., when there is no financial stress), this would make the bank lending channel less effective (Loutskina and Strahan 2006). In line with this hypothesis, Altunbas, Gambacorta, and Marques-Ibanez (2009b) found that, prior to the recent financial crisis, banks making more use of securitization were more sheltered from the effects of monetary policy changes. However, their macrorelevance exercise highlights the fact that securitization's role as a shock absorber for bank lending could even be reversed in a situation of financial distress (Gambacorta and Marques-Ibanez 2011). Another consequence of banking deregulation has been a global trend toward more diversification in banks' income sources and an expansion of noninterest income revenues (trading, investment banking, higher brokerage fees, and commissions). The increase in noninterest income provides banks with additional sources of revenue.
Such diversification can help foster stability in banks' overall income. At the same time, noninterest income is usually a much more volatile source of revenue than interest income. In periods of financial stress, there could be a decline in the traditional sources of revenue together with an even larger decline in revenues from fees and brokerage services. Under these conditions, the change in business model is likely to have an impact on banks' performance and their pricing. For example, Borio and Gambacorta (2017) suggest that monetary policy is less effective in stimulating bank lending growth when interest rates reach a very low level. In particular, they find that the negative impact of low rates on the profitability of banks' traditional intermediation activity explains one-third of the loan dynamics in the period 2010–2014.
12.3.4 Monetary Policy and Bank Risk

Market-based pricing and greater interaction between banks and financial markets reinforce the incentive structures driving banks, potentially leading to stronger links between monetary policy and financial stability effects (Rajan 2005). Altunbas, Gambacorta, and Marques-Ibanez (2009a) argue that bank risk must be considered carefully, together with other standard bank-specific characteristics, when analyzing the functioning of the bank lending channel of monetary policy. As a result of financial innovation, variables capturing bank size, liquidity, and capitalization (the standard indicators used in the bank lending channel literature) may not be adequate for accurately assessing banks' ability and willingness to supply additional loans. In particular, the size indicator has become less indicative of banks' ability to grant loans, as banks following the "originate to distribute" model have securitized substantial amounts of assets, thereby reducing their size as measured by on-balance-sheet indicators. The ability of banks to sell loans promptly and obtain fresh liquidity, coupled with new developments in liquidity management, has also lowered banks' need to hold certain amounts of risk-free securities on the asset side of their balance sheets. This has, in turn, distorted the significance of standard liquidity ratios. Likewise, developments in accounting practices and a closer link with market perceptions have probably also blurred the informative power of the capital-to-asset ratio. The latter was illustrated most vividly by the recent financial crisis, which showed that many of the risks were not adequately captured on banks' books. Overall, it seems that financial innovation has probably changed and increased banks' incentives toward more risk-taking (Instefjord 2005). Some recent studies argue that monetary policy could also have an impact on banks' incentives to take on risk.
The question is whether the stance of monetary policy could lead to an increase in the "risk tolerance" of banks, which might trigger a credit supply shock if the risk-taking proved to be excessive. This mechanism could, at least in part, have contributed to the build-up of bank risk during the recent credit crisis. The risk-taking channel may operate because low rates increase asset managers' incentives to take on more risk for contractual, behavioral, or institutional reasons—the so-called search for yield. This would bring about a disproportionate increase in banks' demand for riskier assets with higher expected returns. The search for yield may
also depend on sticky (nominal) rate-of-return targets in certain contracts that are prevalent among banks, pension funds, and insurance companies. For fund managers, the importance of this mechanism seems to have increased in recent years, owing to the trend toward more benchmarking and short-termism in compensation policies. A second way in which low interest rates could make banks take on more risk is through their impact on valuations, incomes, and cash flows.15 A reduction in the policy rate boosts asset and collateral values, which in turn can modify bank estimates of probabilities of default, loss-given-default, and volatility. For example, by increasing asset prices, low interest rates tend to reduce asset price volatility and thus risk perception; since a higher stock price increases the value of equity relative to corporate debt, a sharp increase in stock prices reduces corporate leverage and could thus decrease the risk of holding stocks. This reasoning applies to the widespread use of value-at-risk methodologies for economic and regulatory capital purposes (Danielsson, Shin, and Zigrand 2004). As volatility tends to decline in rising markets, it releases the risk budgets of financial firms and encourages position-taking. A similar argument is provided in the Adrian and Shin (2014) model; they stress that changes in measured risk determine adjustments in bank balance sheets and leverage conditions, and this, in turn, amplifies business cycle movements. Using two comprehensive confidential databases based on credit register data for Spanish and Bolivian banks, Jiménez et al. (2009) and Ioannidou et al. (2009) demonstrate the existence of a risk-taking channel. In particular, they find evidence that a "too accommodative" monetary policy may have led to additional (and probably excessive) bank risk-taking prior to the crisis.
Altunbas, Gambacorta, and Marques-Ibanez (2014) find support for the idea of a significant link between monetary policy looseness—calculated using both the Taylor rule and the natural rate—and the amount of risk taken by banks operating in the European Union and the United States. The main policy implication of these studies is that central banks should factor in the financial stability implications of their decisions over the medium term.
12.3.5 Expectations Channel: Forward Guidance, Forward Rates

Hofmann and Mizen (2004) showed that there can be an incentive for financial institutions not to adjust future retail interest rates, iLt+j, in response to observed or expected changes in market rates, iMt+j, in the two periods j = 0, 1. Their result explains why retail rates do not always follow market rates (or the policy signals that drive market rates), as well as the observed asymmetry and stickiness in retail rates. In discussing the expectations channel, we can reformulate the earlier results in terms of s, an index counting the number of periods over which retail rates must be refinanced (if s counts two periods, 0 and 1, the model is identical to Hofmann and Mizen 2004). Generalizing the model to a multiperiod setting for retail interest rates enables us to emphasize the importance of forward-looking behavior and forecasts of future interest rates. We can also assert that the cost of changing rates applies to any period, irrespective
of whether the period is odd or even. We make this alteration because it is realistic to assume that resetting the retail rate incurs a cost to the financial intermediary (the fixed cost of administration and communication) whenever market rates change unexpectedly. This cost will be incurred for new business irrespective of whether the product has a fixed or floating rate. Finally, we allow for the possibility that the maturity of the market rate, h, and the retail rate, H, do not necessarily match. Many empirical models impose a matching maturity, but there is no reason for the maturities to match exactly. Banks may desire maturity matching, but markets may not offer funds at a maturity that actually matches that of the mortgage or loan. Moreover, as discussed above, matching maturities, even with the use of derivatives, is costly, and banks typically have a maturity mismatch. We used this notation in the model above, but here the mismatch is explicitly recognized as H > h. If this is the case, then in any empirical analysis, estimation methodologies have to be adapted to allow for possible mismatches. In this context, the bank must determine, at the time the retail rate is posted, its forecast of future money-market rates. For an H-period retail product, say a thirty-year mortgage, this is given by

$$i_{Lt+j,H} = \mu + \frac{1-\gamma^{h}}{1-\gamma^{H}}\, E_{t+j} \sum_{s=0}^{\frac{H}{h}-1} \gamma^{sh}\, i_{Mt+j+sh,h}, \qquad (20)$$
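The geometric weights in equation 20 are normalized so that they form a proper weighted average of expected future money-market rates. A short numeric check (the values of γ, h, and H are illustrative, and the function name is ours):

```python
# Weights (1 - g**h) / (1 - g**H) * g**(s*h) for s = 0, ..., H/h - 1
# sum to one, because sum(g**(s*h)) = (1 - g**H) / (1 - g**h).
def weights(g: float, h: int, H: int) -> list:
    norm = (1 - g**h) / (1 - g**H)
    return [norm * g**(s * h) for s in range(H // h)]

w = weights(g=0.99, h=3, H=360)   # e.g. 3-month funding, 30-year product
print(abs(sum(w) - 1.0) < 1e-9)   # True
```

This is why the posted retail rate in equation 20 can be read as a markup μ over the expectation-weighted path of future funding costs.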
where γ is the discount factor, Et+j is the expectations operator, and iMt+j+sh,h is the h-period future money-market rate. As before, once we allow for shocks that cause the actual future rate to deviate from the expected future rate, there will be conditions under which it is optimal to reset interest rates, even if there is a fixed cost of doing so, provided the loss from not adjusting is higher than the menu cost. The loss function the financial intermediary minimizes in each period j implies that it pays to adjust only if

$$\frac{1-\gamma^{h}}{1-\gamma^{H}}\, E_{t+j}\!\left[\sum_{s=0}^{\frac{H}{h}-1}\gamma^{sh}\bigl(i_{Lt+j,H}-i_{Mt+j+sh+1,h}\bigr)^{2} - \sum_{s=0}^{\frac{H}{h}-1}\gamma^{sh}\bigl(i_{Lt+j+1,H}-i_{Mt+j+sh+2,h}\bigr)^{2} - \sum_{s=0}^{\frac{H}{h}-1}\gamma^{sh}\bigl(i_{Lt+j+2,H}-i_{Mt+j+sh+3,h}\bigr)^{2} - \cdots\right] > c. \qquad (21)$$
This can be rearranged to yield

$$\frac{H}{h}\left( i_{Lt+j,H} - \mu - \frac{1-\gamma^{h}}{1-\gamma^{H}}\, E_{t+j} \sum_{s=0}^{\frac{H}{h}-1} \gamma^{sh}\bigl(i_{Mt+j+sh,h}-\mu\bigr) + \frac{1-\gamma^{h}}{1-\gamma^{H}}\, E_{t+j} \sum_{s=0}^{\frac{H}{h}-1} \gamma^{sh}\bigl(i_{Mt+j+sh+1,h}-i_{Mt+j+sh,h}\bigr) \right)^{2} > c. \qquad (22)$$
The first term is the deviation from long-run equilibrium, and the second term represents the average expected change in money-market rates H periods into the future for each time period j. The firm will not adjust if

$$i_{Lt+j,H} \in \left[\, \mu + \frac{1-\gamma^{h}}{1-\gamma^{H}}\, E_{t+j} \sum_{s=0}^{\frac{H}{h}-1} \gamma^{sh}\bigl(i_{Mt+j+sh,h}-\mu\bigr) - Z - \sqrt{\frac{ch}{H}}\,,\ \ \mu + \frac{1-\gamma^{h}}{1-\gamma^{H}}\, E_{t+j} \sum_{s=0}^{\frac{H}{h}-1} \gamma^{sh}\bigl(i_{Mt+j+sh,h}-\mu\bigr) - Z + \sqrt{\frac{ch}{H}}\, \right], \qquad (23)$$

where

$$Z = \frac{1-\gamma^{h}}{1-\gamma^{H}}\, E_{t+j} \sum_{s=0}^{\frac{H}{h}-1} \gamma^{sh}\bigl(i_{Mt+j+sh+1,h}-i_{Mt+j+sh,h}\bigr).$$
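The rule in equation 23 behaves like an inaction band of half-width √(ch/H) around a target rate. A stylized sketch of the reset decision (all numbers and the function name are hypothetical):

```python
import math

def reset_retail_rate(i_L, target, c, h, H):
    """Keep the posted retail rate i_L unless it drifts outside the
    inaction band of half-width sqrt(c*h/H) around the target rate
    (the expectation-weighted money-market rate, net of Z), in the
    spirit of equation 23; otherwise reset it to the target."""
    band = math.sqrt(c * h / H)
    return target if abs(i_L - target) > band else i_L

# Hypothetical: menu cost c = 0.04, h = 1, H = 4, so the band is 0.1.
print(reset_retail_rate(3.05, 3.00, c=0.04, h=1, H=4))  # 3.05 (inside band: keep)
print(reset_retail_rate(3.20, 3.00, c=0.04, h=1, H=4))  # 3.0  (outside band: reset)
```

A larger menu cost c or a shorter funding maturity h relative to H widens the band, so retail rates are reset less often, which is one rationalization of observed retail-rate stickiness.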
A change to retail rates on new business (and on existing business where rates are variable) is determined by considering expected changes to future market rates. Other authors have made a similar theoretical point. Elyasiani, Kopecky, and VanHoose (1995) and Kopecky and VanHoose (2012) argue that a theoretical model similar in structure to the Monti-Klein model, but with quadratic adjustment costs, justifies equations for deposit and loan rates that depend on current, past, and future market rates of interest. Elyasiani, Kopecky, and VanHoose (1995) conclude that the correct specification for the deposit supply and loan demand equations should include forecasts of money-market rates, and in empirical work, they provide an ARIMA forecast of interest rates that is then found to be significant in these equations. The focus of the Elyasiani, Kopecky, and VanHoose paper is to establish whether the asset and liability decisions of banks are jointly (in)dependent, and the role of forecasts in the model is a byproduct of this analysis. Kopecky and VanHoose (2012) extend this argument to interest rates to show that current retail rates on loans and deposits should depend on expected future market rates, as well as on the extent of competition in the retail market. They then suggest that futures might provide an indication of the commonly held expectation of future market rates. Expected changes to future market rates can be included in the models by considering the path of interest-rate futures (see Kuttner 2001; Bernoth and Von Hagen 2004; Sander and Kleimeier 2004) or by including a model-based forecast of market rates in the model of retail rate adjustment (see Banerjee, Bystrov, and Mizen 2013). Kuttner (2001), Bernoth and Von Hagen (2004), and Sander and Kleimeier (2004) have used futures to predict market rates at future dates.
This is one way to allow for the influence of forward-looking responses to market rates, which are implied in futures rates, but typically it has involved the use of futures that settle in the current period as a measure of the expected level of market rates. The main purpose of this work has been to establish the response to expected events, compared to unexpected events, in money markets, not to determine the influence of forecasts farther into the future. Where futures have been used, a single future has been chosen as the measure of expectations; hence, Kuttner (2001) uses a one-month federal funds future, Bernoth and Von Hagen (2004) use the three-month Euro Interbank Offered Rate (EURIBOR) future, and Sander and Kleimeier (2004) use the one-month EURIBOR future.
Banerjee, Bystrov, and Mizen (2013) use this theoretical structure to motivate the inclusion of expected money-market rates in their dynamic adjustment model of retail rates, for data aggregated across banks within each country and for groups of individual banks in a representative country (France). They use two methods to generate forecasts. First, they generate forecasts of market rates based on the dynamic Nelson-Siegel model used by Diebold and Li (2006) and Diebold, Rudebusch, and Aruoba (2006) to model the level, slope, and curvature of market rates based on information at all points on the yield curve. Second, they use a principal components method to extract information from the yield curve, using interest rates at various maturities to predict the future path of short-term interest rates. With forecasts of changes to EURIBOR market rates at short maturities, they then use the information on anticipated future changes to market rates to predict changes in four different retail rates in the four largest economies of the euro area (France, Germany, Italy, and Spain) at various forecast horizons. The data used in the estimation come from two sources. The first includes variables at a monthly frequency from January 1994 to July 2007 for France, Germany, Italy, and Spain from the harmonized monetary and financial institutions' interest rate (MIR) data set. This may be called the aggregate data set. The second data set consists of interest rates (also at a monthly frequency) taken from a sample of individual French banks. This disaggregation makes it possible to check whether the results derived from the first data set are an artifact of the aggregation involved in constructing the MIR statistics. For France, the disaggregate results confirm those from the aggregate data set.
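A minimal sketch of the Diebold-Li factor loadings that underlie such Nelson-Siegel forecasting exercises may help fix ideas. The decay parameter λ = 0.0609 is the monthly value popularized by Diebold and Li (2006); the maturity chosen and the function name are ours:

```python
import math

def nelson_siegel_loadings(tau: float, lam: float = 0.0609):
    """Diebold-Li loadings on the level, slope, and curvature factors
    for a maturity of tau months. The level loading is constant; the
    slope loading decays from 1 toward 0 as maturity grows; the
    curvature loading is hump-shaped in maturity."""
    x = lam * tau
    level = 1.0
    slope = (1 - math.exp(-x)) / x
    curvature = slope - math.exp(-x)
    return level, slope, curvature

# At very short maturities the slope loading is close to 1 and the
# curvature loading close to 0, so the short rate ~ level + slope.
lvl, slp, crv = nelson_siegel_loadings(tau=1)
print(round(slp, 2), round(crv, 3))
```

Fitting the three factors period by period and forecasting their dynamics then delivers the yield-curve-based forecasts of future short rates used in the retail-rate equations.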
Similar analysis could in principle be undertaken for the remaining countries, but the relevant data are available only for France. When Banerjee, Bystrov, and Mizen (2013) compare their results from forecasting models with an alternative model using EURIBOR futures, they find that the results are very similar, in terms of both the pass-through coefficients and the significance of the variables that anticipate future changes to money-market rates. These results provide evidence that previous modeling strategies, using only contemporaneous market rates to explain retail rates, neglected a large amount of relevant interest-rate information in the yield curve about future short-term rates. The approach acknowledges the uncertainty about future market rates that banks face when they set retail rates; by forming forecasts of future market rates, banks are able to take their expectation of future rates into account in the current retail rate-setting decision. This can be influenced to a large degree by central bank communications and forward guidance.
12.3.6 Forward Guidance

Forward guidance (FG) is a direct attempt to influence market expectations regarding future interest rates, acknowledging that economic agents are forward-looking and that the future stance of monetary policy can therefore influence their decisions. All central banks have used communications policy to guide market expectations, but
first the Reserve Bank of New Zealand in 1997, then the Norges Bank in 2005, the Sveriges Riksbank from 2007, and the Czech National Bank from 2008 offered forecasts of their own interest-rate paths. Since the financial crisis, many have explicitly aimed to provide conditional forecasts of future interest-rate paths based on the most likely economic and financial scenarios as a substitute for active interest-rate policy. For example, in Norway, the Norges Bank has published explicit numerical interest-rate forecasts as a regular part of its monetary policy communications. The governor has indicated that it is the "preferred way of guiding economic agents' interest rate expectations," and according to his assessment, it has been reasonably successful. It is also a commitment device that helps the central bank efficiently stabilize financial markets by offering a predictable and credible path for future interest rates. But not all examples of FG are successful. Svensson (2013) notes that the "Swedish experience includes examples of both great successes and great failures" as market expectations and central bank guidance diverged. Svensson argues that in February 2009, the Riksbank policy offered great predictability and credibility, while in September 2011 it lacked credibility. Another approach has been to use FG to communicate the conditions that need to be fulfilled before a change in the stance of monetary policy will be implemented. These conditions can be state- or time-contingent, and the two are related, since the time that elapses before a tightening could be considered is closely related to the state of the economy that the central bank uses to determine its actions. The Federal Reserve, the Bank of Canada, and the Bank of England have tied future policy actions to state-contingent unemployment thresholds, but they have been pressed to say when they expect these conditions to be fulfilled.
When monetary policy explicitly states the conditional forecast for future interest rates or offers state-contingent policies for raising rates, it is unrealistic to expect retail rates set by banks to continue to depend only on current short-term interest rates, as they did when policy rates were much farther from zero and few central banks attempted to offer FG. The interest-rate pass-through models in the previous section, which allow for anticipation of future short-term interest rates, will become more important if policy involves a greater element of communications and FG.
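The role of anticipated rates can be illustrated with a stylized regression sketch. All series and coefficients below are synthetic and chosen for illustration only (they are not taken from the pass-through models cited in the chapter): a simulated retail rate loads on both the current policy rate and a noisy proxy for the expected future rate, and OLS recovers the larger loading on the expected rate.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 240  # months of synthetic data

# Policy rate follows a random walk; the "expected" rate is next period's
# policy rate observed with noise (a stand-in for swap-implied expectations).
policy = np.cumsum(rng.normal(0.0, 0.1, T))
expected = np.roll(policy, -1) + rng.normal(0.0, 0.02, T)

# Retail rate loads mostly on the anticipated rate (coefficients assumed).
retail = 1.5 + 0.3 * policy + 0.6 * expected + rng.normal(0.0, 0.05, T)

# OLS with both regressors; drop the last observation, which has no
# next-period rate.
X = np.column_stack([np.ones(T - 1), policy[:-1], expected[:-1]])
b, *_ = np.linalg.lstsq(X, retail[:-1], rcond=None)
print(f"loading on current rate: {b[1]:.2f}, on expected rate: {b[2]:.2f}")
```

If banks price off anticipated rather than current rates, omitting the expectations term would misattribute pass-through to the current policy rate, which is the econometric point the passage makes.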
12.4 Unconventional Monetary Policy and the Monetary Transmission Mechanism
12.4.1 Unconventional Policy Background
After the global financial crisis, short-term interest rates were lowered in a dramatic fashion, and they have remained at their effective lower bounds (ELBs) since just after
392 Gambacorta and Mizen
the collapse of Lehman Brothers. With the development of unconventional monetary policy instruments to ease monetary policy further once the short-term interest rate was near zero, the interest channel has become less important, but policy measures have still had an effect on interest-rate pass-through by banks. Despite the low level of the risk-free rate (represented by overnight index swaps), the credit risk associated with banks has increased relative to its level before the crisis. Interbank lending spreads saw a number of peaks connected with the global financial crisis and the sovereign debt crisis, which increased the cost of borrowing for banks. The actions of central banks have in part been aimed at offsetting these financial-market responses by providing additional liquidity, accepting a wider range of collateral, or directly purchasing assets.
12.4.2 Euro Area Policies: Passive and Active
The European Central Bank (ECB) implemented unconventional policies to address short-term money-market rates and longer-term bond yields at different times over the period from 2008 to the present. As Giannone et al. (2011) note, from October 2008, the ECB increased its maturity transformation role with the introduction of fixed-rate/full-allotment long-term refinancing operations (LTROs) offering excess liquidity to banks in the euro area.16 The ECB simultaneously expanded the range of eligible collateral for these LTROs in 2008 and extended the program in 2009 until the end of 2010; it also made greater use of interbank facilities to establish itself as a central counterparty for the eligible counterparties in the Eurosystem, absorbing on its own balance sheet the risks that would otherwise exist for banks dealing with each other. These liquidity operations relied on the banking system taking up the LTRO allotments; the ECB (European Central Bank 2015) therefore refers to this as a passive use of the central bank’s balance sheet. A new phase of active policy began with the announcement of outright monetary transactions (OMTs) in July 2012. This lowered long-term borrowing costs for governments and for banks issuing senior unsecured bonds. The ECB (2015) labels these contingent balance sheet policies to recognize that their implementation is to some degree dependent on market conditions. The OMT policy has been announced but never implemented, yet the announcement itself can be seen as having had a significant effect. Active balance sheet policies began with credit-easing measures implemented through targeted long-term refinancing operations (TLTROs) in June 2014.17 These provided three-year loans at relatively favorable rates to banks that supported net new lending to the real economy, in an attempt to ease bank lending conditions.
Further balance sheet policies were implemented through asset purchases designed to ease conditions in various market segments. These policies operated through a portfolio adjustment channel: the central bank’s asset purchases were designed to trigger portfolio readjustments across markets for many imperfectly substitutable assets.
The transmission channels from ECB liquidity support, credit easing, and asset purchases to market rates are discussed in detail in Giannone et al. (2012) and ECB (2015). Giannone et al. (2011; 2012; 2014) and Altavilla et al. (2015) find that liquidity operations have affected short-term interbank rates directly (see also ECB 2010a; ECB 2010b; Beirne 2012), while asset purchases affected longer-term yields (see also ECB 2015; Praet 2015).
12.4.3 An Empirical Assessment of Unconventional Policies on Funding Costs and Lending Rates
Hofmann, Lombardi, and Mizen (2016) build on this to investigate the relationship between bank funding and unconventional monetary policy over the period 2008–2014 for the major euro area countries. They employ two methodologies: event studies based on daily data for interbank deposits and bond yields, which establish the impact effect of policies at higher frequency within a three-to-five-day window, and a Bayesian VAR with monthly data, which creates conditional forecasts of funding costs that can be compared with the actual outturn. Hofmann, Lombardi, and Mizen examine the effects of unconventional policies by comparing the outturns of key variables (observed at the monthly frequency) against their model-implied conditional forecasts. They use forecasts from a Bayesian VAR, estimated with monthly data from January 2003 to April 2015 for six euro area countries. To assess the role of monetary policy in affecting bank funding costs (a weighted average of the cost of liabilities, as proposed by Illes, Lombardi, and Mizen 2015), they perform a counterfactual exercise in the spirit of Stock and Watson (2002; 2003); Primiceri (2005); Smets and Wouters (2007); Justiniano and Primiceri (2008); and Giannone et al. (2012). The first step involves a dynamic model featuring the key elements of monetary transmission from policy (conventional or unconventional) through the money and capital market rates that determine bank funding costs to the interest rates on loans, controlling for output and inflation. Hofmann, Lombardi, and Mizen establish that unconventional policy caused bank funding costs to deviate from the counterfactual based on no unconventional policy. These results are confirmed by event studies showing that the policies found to affect interbank deposit rates and bank bond yields at low frequency are also present at higher (daily) frequencies.
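The counterfactual logic can be sketched in miniature. The toy below uses synthetic data and a plain VAR(1) estimated by OLS, not the Bayesian VAR of the study: estimate the dynamics on the pre-policy sample only, iterate the model forward to build a no-policy counterfactual, and read the policy effect off the gap between outturn and forecast.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic monthly data: [funding_cost, lending_rate], illustrative only.
T_pre, T_post = 120, 24
A = np.array([[0.9, 0.0],
              [0.3, 0.6]])          # assumed VAR(1) dynamics
c = np.array([0.2, 0.4])
y = np.zeros((T_pre + T_post, 2))
y[0] = [2.0, 3.0]
for t in range(1, T_pre + T_post):
    y[t] = c + A @ y[t - 1] + rng.normal(0, 0.05, 2)
# Stylized "unconventional policy" from T_pre on: funding costs pushed down.
y[T_pre:, 0] -= 0.5

# Step 1: estimate a VAR(1) by OLS on the pre-policy sample only.
X = np.column_stack([np.ones(T_pre - 1), y[:T_pre - 1]])
B, *_ = np.linalg.lstsq(X, y[1:T_pre], rcond=None)
c_hat, A_hat = B[0], B[1:].T

# Step 2: iterate the model forward to build the no-policy counterfactual.
cf = np.zeros((T_post, 2))
prev = y[T_pre - 1]
for t in range(T_post):
    prev = c_hat + A_hat @ prev
    cf[t] = prev

# Step 3: the policy effect is the gap between outturn and counterfactual.
gap = y[T_pre:, 0] - cf[:, 0]
print(f"mean funding-cost gap: {gap.mean():.2f}")
```

In this simulation the outturn lies below the counterfactual by roughly the size of the imposed policy shift, which is the qualitative pattern the study reports for actual euro area funding costs.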
Banerjee, Bystrov, and Mizen (2017) seek to relate the movements of mortgage and business lending rates offered by banks in four major euro area countries (Germany, France, Italy, and Spain, as representative countries in the core and the periphery) to the changes in underlying funding costs driven by the implementation of monetary policy after the crisis and the greater perception of risk. They examine the channels of transmission of unconventional monetary policy using dynamic factors to summarize changes in many macroeconomic and financial variables at monthly frequency. Their results provide a structural interpretation and quantification of the effects on lending rates
over the sample January 2003–December 2013 for Germany, France, Italy, and Spain. To capture structural changes in the monetary transmission mechanism over the period of the crisis, they carry out a rolling-window estimation of factor-augmented VARs and generate surfaces of impulse responses. This analysis explains the heterogeneity of the monetary policy transmission mechanism in the vulnerable and less vulnerable euro area countries. The result drawn from their analysis is that the pass-through varied considerably in the period 2008–2013, and it is possible to identify the effects of conventional policy operating through an interest-rate channel in the early years of the crisis, giving way to the effects of certain unconventional monetary policy operations (LTROs, TLTROs, and asset purchase programs such as CBPP3 and ABSPP) operating on long-term interest rates in the period 2008–2012.
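The rolling-window idea can be illustrated with a deliberately simple stand-in: synthetic series and a bivariate OLS slope per window, rather than the factor-augmented VARs the study actually uses. The pass-through coefficient is assumed to weaken halfway through the sample, and the rolling estimates trace that change.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 180  # months

# Synthetic funding cost (random walk) and a lending rate whose pass-through
# coefficient is assumed to fall from 0.9 to 0.5 mid-sample.
funding = np.cumsum(rng.normal(0.0, 0.1, T))
beta_true = np.where(np.arange(T) < 90, 0.9, 0.5)
lending = 1.0 + beta_true * funding + rng.normal(0.0, 0.05, T)

# Roll a 48-month window and re-estimate the pass-through slope each time.
window = 48
betas = np.empty(T - window)
for s in range(T - window):
    x = funding[s:s + window]
    y = lending[s:s + window]
    betas[s] = np.cov(x, y)[0, 1] / np.var(x, ddof=1)

print(f"early pass-through: {betas[:30].mean():.2f}, "
      f"late pass-through: {betas[-30:].mean():.2f}")
```

Plotting `betas` against the window start date gives a one-dimensional analogue of the impulse-response surfaces described in the text.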
12.4.4 Unconventional Policy in the United Kingdom
In the United Kingdom, there was a nearly identical experience of lower policy rates and reliance on unconventional monetary policies designed for the UK financial system. To compare monetary policy before and after the crisis, we note that the underlying “risk-free” rate as given by the swap curve fell from a range of between 4 and 6 percent pre-2007 to a range of 0 to 3 percent post-2007; at the same time, the unsecured bond spreads for banks, which had been low and stable in the range 0.08–0.1 percent precrisis, became more volatile and rose to a range of 1.0 to 3.5 percent postcrisis, depending on the bank in question. To counter some of these adverse effects of the crisis, the Bank of England (BoE) provided liquidity and widened collateral; it also purchased high-quality assets. The BoE’s Special Liquidity Scheme (SLS) was introduced in April 2008 to improve the liquidity position of the financial system by allowing financial intermediaries to swap their high-quality mortgage-backed securities (MBSs) and other assets for UK Treasury bills for up to three years. This was designed to deal with the large balance of illiquid assets on banks’ balance sheets by exchanging them for more liquid (tradable) assets. The BoE also introduced a Funding for Lending Scheme (FLS) in July 2012. The FLS gave financial intermediaries incentives to increase net lending to the UK real economy. Banks were allowed to borrow UK Treasury bills in exchange for eligible collateral, provided their net lending increased, at a fee of 25 basis points per year, which was a discount relative to the price of both retail deposits and wholesale funding at the time the FLS was launched. However, banks that failed to expand their net lending paid an additional 25 basis points for each percentage-point fall in net lending. The net effect of this scheme was to lower the cost of funding for banks; Churm et al.
(2012) estimate that the FLS contributed 40–75 basis points to the fall of around 150–200 basis points in market funding costs experienced by banks by the end of 2012.
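The FLS fee schedule described above can be written out directly. This is a stylized version implementing only what the text states: a 25-basis-point base fee plus 25 basis points per percentage-point fall in net lending; any cap on the penalty in the actual scheme is not modeled.

```python
def fls_fee_bp(net_lending_change_pct: float) -> float:
    """Annual FLS borrowing fee in basis points, per the schedule in the
    text: 25bp base fee, plus 25bp for each percentage-point fall in net
    lending (stylized; any cap on the penalty is omitted)."""
    base = 25.0
    if net_lending_change_pct >= 0:
        return base
    return base + 25.0 * (-net_lending_change_pct)

print(fls_fee_bp(2.0))   # lending grew: pay the 25bp base fee
print(fls_fee_bp(-3.0))  # lending fell 3pp: pay 25 + 75 = 100bp
```

The schedule makes the incentive visible: a bank whose net lending contracts pays a rising price for the same Treasury-bill borrowing, so expanding lending lowers its marginal funding cost.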
12.4.5 Empirical Evidence
Harimohan et al. (2016) examine UK banks’ interest rates on new loans and deposits before the financial crisis and in its aftermath. Based on qualitative evidence on transfer pricing reported in the BoE’s Survey of Bank Liabilities, the study links changes in transfer prices to the cost of unsecured wholesale funding and uses these data to show how the big six banks set rates for households and firms. Banks responded strongly to competitors’ rates and indirectly to their own funding costs. Monetary policy continued to be passed through quickly and completely to household rates, and the FLS worked indirectly.18 Banks also attempted to reflect funding and liquidity risks fully on average, but not identically and possibly with some cross-subsidy between products. These results shed light on the internal pricing process of banks by confirming that the qualitative information provided by UK banks to the Prudential Regulation Authority (2013) review and the Bank Lending Surveys is supported by quantitative evidence: market measures of funding costs correspond to the description of transfer prices. Changes in the level of swap rates at the appropriate maturity and in spreads of unsecured bonds over mid-swap rates correspond to reported changes in transfer prices. Internally, banks use these to determine transfer prices for the business units that contribute to the setting of retail rates on mortgages and deposits. However, UK banks operate in a concentrated industry, and they reference competitors’ rates on benchmark products, focusing on mid-values rather than “best buy” rates.19 Mid-values can be explained by common swap rates at appropriate maturities and by the median spreads of unsecured bonds over mid-swap rates.
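The mid-value logic can be illustrated with hypothetical numbers. The swap rate and bank spreads below are invented for the example (not drawn from the chapter or the underlying study): a benchmark rate is proxied as the common swap rate plus the median unsecured bond spread across banks.

```python
import statistics

# Hypothetical two-year swap rate (percent) and six banks' unsecured bond
# spreads over mid-swaps (basis points); all values are invented.
swap_2y = 0.75
spreads_bp = {"A": 110, "B": 145, "C": 95, "D": 160, "E": 120, "F": 135}

# A benchmark "mid-value" retail rate: common swap rate plus the median
# spread across banks, as the passage describes.
median_spread = statistics.median(spreads_bp.values())
benchmark_rate = swap_2y + median_spread / 100
print(f"median spread: {median_spread} bp; benchmark rate: {benchmark_rate:.3f}%")
```

Using the median rather than the minimum spread matches the observation that banks benchmark against mid-values of competitors’ rates, not “best buy” rates.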
Since monetary policy has a direct impact on swaps and spreads, through conventional and unconventional channels, this confirms that monetary policy has remained effective in influencing retail rates set by banks in the United Kingdom since the financial crisis.
12.5 Conclusion and Implications for the Future
Before the global financial crisis, the world was a simpler and more innocent place (King 2008). Although there were many risks that had not been properly priced into funding costs for banks, lending was funded at the margin in relation to short-term market rates that closely followed policy rates. Since then, that innocence has been lost, and funding costs reflect a range of risks that were previously small and largely ignored. The academic work on interest-rate pass-through reflects this more complex environment, allowing for a blend of funding sources that combine to give information on the cost of bank funding, which can often depart from policy rates by some margin. New channels of transmission have been developed to reflect the bank lending, bank capital, and risk-taking channels, which take on greater significance in the postcrisis world. Similarly,
since monetary policy has been conducted using FG, the impact has been on future rather than current financial-market conditions, requiring models that allow banks to form forecasts of future funding costs. Unconventional policies relying on liquidity operations, asset purchases, and conditional lending schemes have influenced long-term rates of interest rather than overnight rates, and new approaches have been developed to consider the effects of these policies on banks. Despite many concerns about the weakening of monetary transmission, the results of these new developments have pointed to a strong and robust relationship between policy measures and the retail rates that affect households and firms. In this respect, central banks can be reassured that the innovations in the making of monetary policy in the postcrisis era have proven effective in maintaining control over lending and deposit rates. What can we expect in the future for interest-rate pass-through and monetary transmission? Institutional change is under way, as part of the process that aims to avoid a repeat of the crisis of 2007–2008. Banks are now regulated to a greater degree, and liquidity and capital buffers are mandated to avoid the dangers of illiquidity and insolvency among individual banks spilling over into the banking system as a whole. The Basel Committee on Banking Supervision will introduce reforms in the form of Basel III to strengthen the regulation, supervision, and risk management of the banking sector. These measures aim to improve the banking sector’s ability to absorb shocks arising from financial and economic stress, whatever the source; to improve the risk management and governance of banks; and to strengthen banks’ transparency and disclosures.
Microprudential regulation will help raise the resilience of individual banking institutions to periods of stress, while macroprudential regulation will reduce system-wide risks that can build up across the banking sector as well as the procyclical amplification of these risks over time. These changes will increase the cost of certain types of funding and at the same time reduce the risks associated with banks, which may reduce the costs of others. In the European Union, there are plans to develop a banking union to enhance the banking system as a whole. As the European Commission summarizes the proposals, the banking union aims to offer “stronger prudential requirements for banks, improved depositor protection and rules for managing failing banks, [and] form a single rule book for all financial actors in the Member States of the European Union. The single rule book is the foundation on which the Banking Union sits.” This will change the landscape for banks in EU countries: banks will be more interdependent, funding markets for banks will be more integrated, and there will be common supervision, resolution mechanisms, and deposit insurance. In emerging economies, the flow of capital from advanced economies in a “search for yield” has led to new developments such as larger and more volatile cross-border flows through the banking system, globally and regionally, and greater reliance by banks on funding from financial markets. These flows have had a substantial impact on monetary transmission and the scope for independent action. Currently, banks are financed primarily through core liabilities that are stable, but increasingly, they use noncore liabilities that are less stable, and therefore the transmission of monetary policy from local policy rates to bank lending rates is affected.
It is hard to imagine that the transmission of monetary policy through various channels, some already known and others as yet undiscovered, will not change in response to these developments.
Notes
1. Central banks have focused to a greater extent since the financial crisis on the use of macroprudential policies to meet financial policy objectives. These policies operate alongside monetary policy tools and are recognized to interact with them. Although these developments have implications for monetary transmission and the banking system, they extend beyond the scope of this chapter. We refer the reader to Cerutti, Claessens, and Laeven 2015; Claessens, Ghosh, and Mihet 2013; Kuttner and Shim 2013; Bruno, Shim, and Shin 2017; Gambacorta and Murcia 2017. 2. Elyasiani, Kopecky, and VanHoose (1995) and Kopecky and VanHoose (2012) have a similar model in which they maximize bank profits, which are adjusted to allow for quadratic adjustment costs. 3. Alternatively, S can be considered the bank’s total liquidity, where α is the coefficient of free and compulsory reserves. In this case, reserves are remunerated at the money-market rate fixed by the central bank. This alternative interpretation does not change the results of the model. 4. Capital requirements are proxied by a fixed amount (k) of loans. If bank capital perfectly meets the Basel standard requirement, the amount of loans would be L = K/k. We rule out this possibility because banks typically hold a buffer as a cushion against contingencies (Wall and Peterson 1987; Barrios and Blanco 2003; Gropp and Heider 2010). Excess capital allows them to face capital adjustment costs and to convey positive information on their economic value (Leland and Pyle 1977; Myers and Majluf 1984). Another explanation is that banks face a private cost of bankruptcy, which reduces their expected future income (Dewatripont and Tirole 1994). Van den Heuvel (2002) argues that even if the capital requirement is not currently binding, a low-capitalized bank may optimally forgo profitable lending opportunities now, in order to lower the risk of future capital inadequacy.
A final explanation for the existence of excess capital is market discipline: well-capitalized banks obtain a lower cost of uninsured funding, such as bonds or CDs, because they are perceived as less risky by the market (Gambacorta and Mistrulli 2004; Gambacorta and Shin 2016). 5. The aim of this paper is not to answer the question of whether deposits are an input or an output for the bank (see Freixas and Rochet 1997 on this debate). For simplicity, deposits are here treated as a service supplied by the bank to depositors and are therefore considered an output (Hancock 1991). 6. The additive linear form of the management cost simplifies the algebra. The introduction of a quadratic cost function would not have changed the results of the analysis. An interesting consequence of the additive form of the management cost is that the bank’s decision problem is separable: the optimal interest rate on deposits is independent of the characteristics of the loan market, while the optimal interest rate on loans is independent of the characteristics of the deposit market. For a discussion, see Dermine 1991. 7. We recognize that a great deal of recent empirical research has shown a link between firm concentration, as a proxy for competition, and rate setting. Papers by Berger and Hannan
(1991); Prager and Hannan (1998); and Kahn, Pennacchi, and Sopranzetti (2005) have shown for deposit and lending rates that competition does have an impact on rate-setting behavior and can create asymmetries. We make the argument here that if monopolistic competition is to be sufficient to generate discontinuous adjustment, then it must ensure that the profit function levels out in theoretical models. 8. The reasoning we use is not the only motivation for why deposit or mortgage rates might be sticky. There is a growing literature on psychological and behavioral studies that can generate the same kinds of stickiness in retail rates (see Kahn, Pennacchi, and Sopranzetti 2005). 9. Although the adjustment process is quadratic and therefore puts equal weight on deviations of retail rates above and below the base rate, the presence of a positive trend in inflation generates an asymmetric response from the financial institution. 10. Banks use derivatives to minimize interest-rate risk. However, the use of derivatives is costly and even difficult for some financial intermediaries. Therefore, banks’ balance sheets are typically characterized by a positive maturity mismatch between their assets and liabilities. Evidence of incomplete hedging of the interest-rate risk on loans by financial institutions has been provided, for example, by Rampini, Viswanathan, and Vuillemey (2016) using US data and by Esposito, Nobili, and Ropele (2015) for Italian banks. 11. This is the point of the Romer and Romer (1990) critique. 12. The cost of equity is typically approximated by banks’ ROE (King 2010). However, Admati and Hellwig (2016) have pointed out that if the Modigliani and Miller (MM) theorem holds, the ROE contains a risk premium that must go down if banks have more equity. Thus, the weighted average cost of capital remains unchanged as leverage decreases.
Despite this point, many other studies find that the strict version of the MM theorem does not hold (Calomiris and Hubbard 1995; Cornett and Tehranian 1994; Myers and Majluf 1984; Stein 1998). 13. This is mainly because deposits tend to be a relatively “sticky” source of funding and are by definition less dependent on financial-market conditions than tradable instruments (see Berlin and Mester 1999; Shleifer and Vishny 2009). Recent evidence has shown a shift back toward deposit funding, but marginal sources of funds still tend to be predominantly from market sources. 14. For evidence on the US subprime market, see Dell’Ariccia, Igan, and Laeven 2008; Keys et al. 2010. For a different perspective on the Italian market for securitized mortgages, see Albertazzi et al. 2015. 15. This is close in spirit to the familiar financial accelerator, in which increases in collateral values reduce borrowing constraints (Bernanke, Gertler, and Gilchrist 1996). Adrian and Shin (2014) claim that the risk-taking channel differs from and reinforces the financial accelerator because it focuses on amplification mechanisms resulting from financing frictions in the lending sector. See also Borio and Zhu 2014. 16. Nine six-month operations in 2008, followed by a further three one-year operations in 2009, supplemented existing one-week and three-month refinancing operations (Szczerbowicz 2012). 17. There have been five rounds of TLTROs since their inception in 2014. The ECB began with two Covered Bond Purchase Programs (CBPP1, CBPP2), followed by a Securities Markets Program (SMP) set up to buy government bonds from the secondary market. The ECB then announced and implemented an Asset-Backed Securities Purchase Program (ABSPP) and a third Covered Bond Purchase Program (CBPP3) in October and November 2014 to
implement further easing of monetary policy. Finally, in November 2014, the ECB began quantitative easing with an Asset Purchase Program (APP). 18. Surprisingly, the FLS had very little direct effect on rate setting by UK banks, although there is evidence that it lowered the cost of funding for all banks (see Churm et al. 2012): the FLS contributed 40–75 basis points to the fall of around 150–200 basis points in market funding costs experienced by banks by the end of 2012. 19. The implication for studies that use individual bank data (e.g., de Haan, van den End, and Vermeulen 2015; Holton and Rodriguez d’Acri 2015; Altavilla et al. 2015) to explore the impact of monetary policy in the postcrisis period on rate setting by each bank is that funding costs and competition effects should be taken into account. If competition effects are ignored, estimates of the dynamic effects of monetary policy are potentially biased.
References
Admati, A. R., and M. F. Hellwig. 2016. The Bankers’ New Clothes: What’s Wrong with Banking and What to Do about It. Princeton: Princeton University Press. Adrian, T., and H. S. Shin. 2014. “Procyclical Leverage and Value-at-Risk.” Review of Financial Studies 27, no. 2: 373–403. Akerlof, G., and J. Yellen. 1985. “A Near-Rational Model of Business Cycles, with Wage and Price Inertia.” Quarterly Journal of Economics 100: 823–838. Albertazzi, U., G. Eramo, L. Gambacorta, and C. Salleo. 2015. “Asymmetric Information in Securitization: An Empirical Assessment.” Journal of Monetary Economics 71: 33–49. Altavilla, C., F. Canova, and M. Ciccarelli. 2015. “Mending the Broken Link: Heterogeneous Bank Lending and Monetary Policy Pass-Through.” Presented at Joint BOE, CEPR, CFM, ECB Conference on Credit Dynamics and the Macroeconomy, December 2015. Altavilla, C., G. Carboni, and R. Motto. 2015. “Asset Purchase Programmes and Financial Markets: Lessons from the Euro Area.” ECB working paper 1864. Altunbas, Y., L. Gambacorta, and D. Marques-Ibanez. 2009a. “Bank Risk and Monetary Policy.” Journal of Financial Stability 6, no. 3: 121–129. Altunbas, Y., L. Gambacorta, and D. Marques-Ibanez. 2009b. “Securitisation and the Bank Lending Channel.” European Economic Review 53, no. 8: 996–1009. Altunbas, Y., L. Gambacorta, and D. Marques-Ibanez. 2014. “Does Monetary Policy Affect Bank Risk?” International Journal of Central Banking 10, no. 1: 95–135. Ashcraft, A. 2006. “New Evidence on the Lending Channel.” Journal of Money, Credit and Banking 38, no. 3: 751–776. Ball, L., and N. G. Mankiw. 1994. “Asymmetric Price Adjustment and Economic Fluctuations.” Economic Journal 104: 247–261. Ball, L., and D. Romer. 1989. “Real Rigidities and the Non-Neutrality of Money.” Review of Economic Studies 57: 183–203. Banerjee, A., V. Bystrov, and P. D. Mizen. 2013. “How Do Anticipated Changes to Short-Term Market Rates Influence Banks’ Retail Interest Rates?
Evidence from the Four Major Euro Area Economies.” Journal of Money, Credit and Banking 45: 1375–1414. Banerjee, A., V. Bystrov, and P. D. Mizen. 2017. “Structural Factor Analysis of Interest Rate Pass Through in Four Large Euro Area Economies.” Working Papers in Economics 17/07, University of Canterbury, Department of Economics and Finance.
Banerjee, R., L. Gambacorta, and E. Sette. 2017. “The Real Effects of Relationship and Transactional Lending in a Crisis.” BIS working papers 662. Bank for International Settlements. 1994. National Differences in Interest Rate Transmission. Basel: BIS. Barrios, V. E., and J. M. Blanco. 2003. “The Effectiveness of Bank Capital Adequacy Regulation: A Theoretical and Empirical Approach.” Journal of Banking and Finance 27, no. 10: 1935–1958. Beirne, J. 2012. “The EONIA Spread before and after the Crisis of 2007–2009.” Journal of International Money and Finance 31: 534–551. Beltratti, A., and R. M. Stulz. 2009. “Why Did Some Banks Perform Better during the Credit Crisis? A Cross-Country Study of the Impact of Governance and Regulation.” Charles A. Dice Center working paper 2009-12. Berger, A. N., and T. H. Hannan. 1991. “The Rigidity of Prices: Evidence from the Banking Industry.” American Economic Review 81: 938–945. Berlin, M., and L. J. Mester. 1999. “Deposits and Relationship Lending.” Review of Financial Studies 12, no. 3: 579–607. Bernanke, B. S., M. Gertler, and S. Gilchrist. 1996. “The Financial Accelerator and the Flight to Quality.” Review of Economics and Statistics 78, no. 1: 1–15. Bernoth, Kerstin, and Jurgen von Hagen. 2004. “The EURIBOR Futures Market: Efficiency and the Impact of ECB Policy Announcements.” International Finance 7: 1–24. Blanchard, O., and S. Fischer. 1998. Lectures on Macroeconomics. Cambridge, MA: MIT Press. Bolton, P., X. Freixas, L. Gambacorta, and P. E. Mistrulli. 2016. “Relationship and Transaction Lending in a Crisis.” Review of Financial Studies 29, no. 10: 2643–2676. Borio, C. E. V., and W. Fritz. 1995. “The Response of Short-Term Bank Lending Rates to Policy Rates: A Cross-Country Perspective.” In Bank for International Settlements, Financial Structure and the Monetary Policy Transmission Mechanism, 106–153. Basel: BIS. Borio, C. E. V., and L. Gambacorta. 2017.
“Monetary Policy and Bank Lending in a Low Interest Rate Environment: Diminishing Effectiveness?” International Finance 20: 48–63. Borio, C., and H. Zhu. 2014. “Capital Regulation, Risk-Taking and Monetary Policy: A Missing Link in the Transmission Mechanism?” Journal of Financial Stability 8, no. 4: 236–251. Bruno, V., I. Shim, and H. S. Shin. 2017. “Comparative Assessment of Macroprudential Policies.” Journal of Financial Stability 28: 183–202. Cadamagnani, F., R. Harimohan, and K. Tangri. 2015. “A Bank within a Bank: How a Commercial Bank’s Treasury Function Affects the Interest Rates Set for Loans and Deposits.” Bank of England Quarterly Bulletin 55, no. 2: 153–164. Calomiris, C. W., and G. R. Hubbard. 1995. “Internal Finance and Investment: Evidence from the Undistributed Profit Tax of 1936–37.” Journal of Business 68: 443–482. Cerutti, E., S. Claessens, and L. Laeven. 2015. “The Use of Macroprudential Policies: New Evidence.” IMF working paper 15/61. Churm, R., J. Leake, A. Radia, S. Srinivasan, and R. Whisker. 2012. “The Funding for Lending Scheme.” Bank of England Quarterly Bulletin 52, no. 4: 306–320. Claessens, S., S. Ghosh, and R. Mihet. 2013. “Macro-Prudential Policies to Mitigate Financial System Vulnerabilities.” Journal of International Money and Finance 39: 153–185. Cook, T., and T. Hahn. 1989. “Effects of Changes in the Federal Funds Rate Target on Market Interest Rates in the 1970s.” Journal of Monetary Economics 24: 331–351.
Cornett, M. M., and H. Tehranian. 1994. “An Examination of Voluntary versus Involuntary Security Issuances by Commercial Banks: The Impact of Capital Regulations on Common Stock Returns.” Journal of Financial Economics 35: 99–122. Cottarelli, C., and A. Kourelis. 1994. “Financial Structure, Bank Lending Rates, and the Transmission Mechanism of Monetary Policy.” IMF Staff Papers 41: 587–623. Danielsson, J., H. S. Shin, and J. P. Zigrand. 2004. “The Impact of Risk Regulation on Price Dynamics.” Journal of Banking and Finance 28: 1069–1087. de Bondt, G. J. 2000. Financial Structure and Monetary Transmission in Europe: A Cross-Country Study. Cheltenham: Edward Elgar. de Bondt, G. 2002. “Euro Area Corporate Debt Securities Market: First Empirical Evidence.” ECB working paper 164. de Bondt, G. 2005. “Interest Rate Pass‐Through: Empirical Results for the Euro Area.” German Economic Review 6, no. 1: 37–78. De Graeve, F., O. De Jonghe, and R. Vander Vennet. 2007. “Competition, Transmission and Bank Pricing Policies: Evidence from Belgian Loan and Deposit Markets.” Journal of Banking and Finance 31: 259–278. de Haan, L., J. W. van den End, and P. Vermeulen. 2015. “Lenders on the Storm of Wholesale Funding Costs: Saved by the Central Bank?” De Nederlandsche Bank conference, November. Dell’Ariccia, G., D. Igan, and L. Laeven. 2008. “Credit Booms and Lending Standards: Evidence from the Subprime Mortgage Market.” IMF working paper 08/106. Dermine, J. 1991. Discussion of Vives, X., “Banking Competition and European Integration.” In European Financial Integration, edited by A. Giovannini and C. Mayer, 31–34. Cambridge: Cambridge University Press. Dewatripont, M., and J. Tirole. 1994. The Prudential Regulation of Banks. Cambridge, MA: MIT Press. Diebold, Francis X., and Canlin Li. 2006. “Forecasting the Term Structure of Government Bond Yields.” Journal of Econometrics 130: 337–364. Diebold, Francis X., Glenn D. Rudebusch, and S. Boragan Aruoba. 2006.
“The Macroeconomy and the Yield Curve: a Dynamic Latent Factor Approach.” Journal of Econometrics 131: 309–338. Donnay, M., and H. Degryse. 2001. “Bank Lending Rate Pass-Through and Differences in the Transmission of a Single EMU Monetary Policy.” Center for Economic Studies discussion paper 01.17, Leuven. Ehrmann, M., and A. Worms. 2004. “Bank Networks and Monetary Policy Transmission.” Journal of the European Economic Association 2, no. 6: 1148–1171. Ehrmann, M., L. Gambacorta, J. Martinez-Pages, P. Sevestre, and A. Worms 2003. “Financial Systems and the Role of Banks on Monetary Policy Transmission in the Euro Area.” In Monetary Policy Transmission in the Euro Area, edited by I. Angeloni, A. K. Kashyap, and B. Mojon, 235–269. Cambridge: Cambridge University Press. Elyasiani, E., K. J. Kopecky, and D. VanHoose. 1995. “Costs of Adjustment, Portfolio Selection, and the Dynamic Behavior of Bank Loans and Deposits.” Journal of Money, Credit and Banking 27: 955–974. Esposito, L., A. Nobili, and T. Ropele. 2015. “The Management of Interest Rate Risk during the Crisis: Evidence from Italian Banks.” Journal of Banking and Finance 59: 486–504. European Banking Authority. 2015. 2015 EU-Wide Transparency Exercise Results. London: EBA. European Central Bank. 2010a. “The ECB’s Response to the Financial Crisis.” ECB Monthly Bulletin (October): 59–74. European Central Bank. 2010b. Euro Money Market Study (December).
402 Gambacorta and Mizen European Central Bank. 2015. “The Transmission of the ECB’s Recent Non-Standard Monetary Policy Measures.” Economic Bulletin 7. Freixas, X., and J.-C. Rochet. 1997. Microeconomics of Banking. Cambridge: MIT Press. Fuertes, Anna-Maria, Shelagh Heffernan, and Elena Kalotychou. 2008. How Do UK Banks React to Changing Central Bank Rates? Mimeo: CASS Business School. Gambacorta, L. 2005. “Inside the Bank Lending Channel,” European Economic Review 49: 1737–1759. Gambacorta, L. 2008. “How Do Banks Set Interest Rates?” European Economic Review 52: 792–819. Gambacorta, L., and S. Iannotti. 2007. “Are There Asymmetries in the Response of Bank Interest Rates to Monetary Shocks?” Applied Economics 39, no. 19: 2503–2517. Gambacorta, L., and D. Marques-Ibanez, 2011. “The bank lending channel: lessons from the crisis.” Economic Policy 26, no. 66: 137–182. Gambacorta, L., and P. Mistrulli. 2004. “Does Bank Capital Affect Lending Behavior?” Journal of Financial Intermediation 13, no. 4: 436–457. Gambacorta, L., and P. E. Mistrulli. 2014. “Bank Heterogeneity and Interest Rate Setting: What Lessons Have We Learned since Lehman Brothers?” Journal of Money, Credit and Banking 46, no. 4: 753–778. Gambacorta, L., and A. Murcia. 2017. “The Impact of Macroprudential Policies and Their Interaction with Monetary Policy: An Empirical Analysis Using Credit Registry Data.” BIS working papers 636. Gambacorta, L., and H. S. Shin. 2016. “On Book Equity: Why It Matters for Monetary Policy.” BIS working paper 558. Giannone, D., M. Lenza, H. Pill, and L. Reichlin. 2011. “Non-Standard Monetary Policy Measures and Monetary Developments.” ECB working paper 1290. Giannone, D., M. Lenza, H. Pill, and L. Reichlin. 2012. “The ECB and the Interbank Market.” The Economic Journal 122: F467–F486. Gobbi, G., and E. Sette. 2015. “Relationship Lending during a Financial Crisis.” Journal of European Economic Association 13, no. 3: 453–481. Goodhart, C. A. E. 1996. 
“Why Do Monetary Authorities Smooth Interest Rates?” LSE FMG Special Paper 81. Gropp, R., and F. Heider. 2010. “The Determinants of Bank Capital Structure.” Review of Finance 14: 587–622. Hale, G. B., and J. A. C. Santos. 2010. Do Banks Propagate Shocks to the Debt Market? Mimeo: Federal Reserve Bank of New York. Hancock D. 1991. A Theory of Production for the Financial Firm. Norwell, Mass.: Kluwer Academic. Hannan, T., and A. Berger. 1991. “The Rigidity of Prices: Evidence from the Banking Industry.” American Economic Review 81: 938–945. Harimohan, Rashmi, Michael McLeay, and Garry Young. 2016. “Pass-Through of Bank Funding Costs to Lending and Deposit Rates: Lessons from the Financial Crisis.” Bank of England working paper 590. Heffernan, S. 1997. “Modelling British Interest Rate Adjustment: An Error- Correction Approach.” Economica 64: 211–231. Hofmann, B., M. Lombardi, and P. D. Mizen. 2016. “The Effects of Unconventional Monetary Policy on Bank Funding Costs, Bank Lending and Output in the Euro Area.” Mimeo, BIS.
Inside the Bank Box 403 Hofmann, B., and P. D. Mizen. 2004. “Base Rate Pass- Through and Monetary Transmission: Evidence from Individual Financial Institutions’ Retail Rates.” Economica 71: 99–125. Hutchison, D. E. 1995. “Retail Bank Deposit Pricing: An Intertemporal Asset Pricing Approach.” Journal of Money, Credit, and Banking 27: 217–231. Holton, S., and C. Rodriguez d’Acri. 2015. “Jagged Cliffs and Stumbling Blocks: Interest Rate Pass-Through Fragmentation during the Euro Area Crisis.” ECB working paper 1850. Illes, A., M. Lombardi, and P. D. Mizen. 2015. “Why Did Bank Lending Rates Diverge from Policy Rates after the Financial Crisis?” BIS working paper 486, February. Ioannidou, V., S. Ongena, and Peydrò J. L. 2009. “Monetary Policy and Subprime Lending: A Tall Tale of Low Federal Funds Rates, Hazardous Loans, and Reduced Loans Spreads.” European Banking Center Discussion Paper 2009-04S. Instefjord, N. 2005. “Risk and Hedging: Do Credit Derivatives Increase Bank Risk?” Journal of Banking and Finance 29: 333–345. Jayaratne, J., and D. P. Morgan. 2000. “Capital Market Frictions and Deposit Constraints at Banks.” Journal of Money, Credit and Banking 32, no. 1: 74–92. Jiménez, G., S. Ongena, J. L. Peydró-Alcalde, and J. Saurina. 2009. “Hazardous Times for Monetary Policy: What Do Twenty-Three Million Bank Loans Say about the Effects of Monetary Policy on Credit Risk-Taking?” Banco de España working paper 0833. Justiniano, Alejandro, and Giorgio Primiceri. 2008. “The Time Varying Volatility of Macroeconomic Fluctuations.” American Economic Review 98: 604–641. Kahn C., G. Pennacchi, and B. Sopranzetti. 2005. “Bank Consolidation and the Dynamics of Consumer Loan Interest Rates.” The Journal of Business 78, no.1: 99–134. Kashyap, Anil, and Jeremy Stein. 1995. “The Impact of Monetary Policy on Bank Balance Sheets.” Carnegie-Rochester Conference Series on Public Policy 42: 151–195. Kashyap, Anil, and Jeremy Stein. 2000. 
“What Do A Million Observations on Banks Have To Say About the Monetary Transmission Mechanism?” American Economic Review 90, no. 3: 407–428. Keys, B., T. Mukherjee, A. Seru, and V. Vig. 2010. “Did Securitization Lead to Lax Screening? Evidence from Subprime Loans 2001–2006.” Quarterly Journal of Economics 125, no. 1: 307–362. King, M. 2008. “Speech of the Governor of the Bank of England.” To the CBI, Institute of Directors, Leeds Chamber of Commerce and Yorkshire Forward at the Royal Armouries, October 21. King, M. R. 2010. “Mapping Capital and Liquidity Requirements to Bank Lending Spreads.” BIS working paper 324. Kishan, R. P., and T. P. Opiela. 2000. “Bank Size, Bank Capital and the Bank Lending Channel.” Journal of Money, Credit and Banking 32, no. 1: 121–141. Kleimeier, S., and H. Sander. 2000. “Asymmetric Adjustment of Commercial Bank Interest Rates in the Euro Area: Implications for Monetary Policy.” Paper presented at Financial Structure, Bank Behaviour and Monetary Policy conference, Zurich, Apirl 7. Klein, M. 1971. “A Theory of the Banking Firm.” Journal of Money, Credit and Banking 3: 205–218. Kopecky, Kenneth J., and David D. Van Hoose. 2012. “Imperfect Competition in Bank Retail Markets, Deposit and Loan Rate Dynamics, and Incomplete Pass Through.” Journal of Money, Credit and Banking 44, no. 6: 1185–1205.
404 Gambacorta and Mizen Kuttner, Kenneth N. 2001. “Monetary Policy Surprises and Interest Rates: Evidence from the Fed Funds Futures Market.” Journal of Monetary Economics 47: 523–544. Kuttner, K. N., and I. Shim. 2013. “Can Non-Interest Rate Policies Stabilise Housing Markets? Evidence from a Panel of 57 Economies.” BIS working paper 433. Leland, H. E., and D. H. Pile. 1977. “Informational Asymmetries, Financial Structures and Financial Intermediation.” Journal of Finance 32: 371–387. Loutskina, E., and P. E. Strahan. 2006. “Securitization and the Declining Impact of Bank Finance on Loan Supply: Evidence from Mortgage Acceptance Rates.” NBER working paper 11983. Mankiw, N. G. 1985. “Small Menu Costs and Large Business Cycles: A Macroeconomic Model.” Quarterly Journal of Economics 100: 529–537. Marques-Ibanez, D., and M. Scheicher. 2010. “Securitisation: Causes and consequences.” In Handbook of Banking, edited by A. Berger, P. Molyneux, and J. Wilson, 599–633. Oxford: Oxford University Press. Mian, A., and A. Sufi. 2009. “The Consequences of Mortgage Credit Expansion: Evidence from the U.S. Mortgage Default Crisis.” Quarterly Journal of Economic 124: 1449–1496. Michelangeli, V., and E. Sette. 2016. “How does bank capital affect the supply of mortgages? Evidence from a randomized experiment.” BIS working papers 557. Mojon, B. 2000. “Financial Structure and the Interest Rate Channel of ECB Monetary Policy.” ECB working paper 40. Monti, M. 1971. “Deposit, Credit and Interest Rate Determination under Alternative Bank Objective Functions.” In Mathematical Methods in Investment and Finance, edited by K. Shell and G. Szego, 431–454. Amsterdam: North Holland. Myers, S. C., and N. S. Majluf. 1984. “Corporate Finance and Investment Decisions When Firms Have Information That Investors Do Not Have.” Journal of Financial Economics 13:187–221. Neumark, D., and S. Sharpe. 1992. 
“Market Structure and the Nature of Price Rigidity: Evidence from the Market for Consumer Deposits.” Quarterly Journal of Economics 107: 657–680. Praet, P. 2015. “The APP Impact on the Economy and Bond Markets.” Intervention at the Annual Dinner of the ECB Bond Market Contact Group. Frankfurt am Main, June 30. https://www.ecb.europa.eu/press/key/date/2015/html/sp150630.en.html Prager, R. A., and T. Hannan. 1998. “Do Substantial Horizontal Mergers Generate Significant Price Effects? Evidence from the Banking Industry.” Journal of Industrial Economics 46 no. 4: 433–452. Primiceri, Giorgio. 2005. “Time Varying Structural Vector Autoregressions and Monetary Policy.” Review of Economic Studies 72: 821–852. Prudential Regulation Authority. 2013. Cross Firm Funds Transfer Pricing (FTP) Review. Bank of England. Rajan, R. G. 2005. “Has Financial Development Made the World Riskier?” National Bureau of Economic Research Working Paper Series 11728. Rampini, A., S. Viswanathan, and G. Vuillemey. 2016. Risk Management in Financial Institutions. Mimeo: Duke University. Romer, C. D., and D. H. Romer. 1990. “New Evidence on the Monetary Transmission Mechanism.” Brookings Paper on Economic Activity 1: 149–213. Rotemberg, J. 1992. “Monopolistic Price Adjustment and Aggregate Output.” Review of Economics and Statistics 49: 517–523.
Inside the Bank Box 405 Rousseas, S. 1985. “A Markup Theory of Bank Loan Rates.” Journal of Post Keynesian Economics 8: 135–144. Sack, B. 1998. “Does the Fed Work Gradually? A VAR Analysis.” Federal Reserve Board of Governors, FEDS working paper 17. Sander, H., and S. Kleimeier, 2004. “Convergence in Euro-Zone Retail Banking? What Interest Rate Pass- Through Tells Us about Monetary Policy Transmission, Competition and Integration.” Journal of International Money and Finance 23: 461–492. Smets, Frank, and Rafael Wouters. 2007. “Shocks and Frictions in US Business Cycles: A Bayesian DSGE Approach.” American Economic Review 97: 586–606. Shleifer, J. A., and R. W. Vishny. 2009. “Unstable Banking.” NBER working paper 14943. Stein, J. C. 1998. “An Adverse-Selection Model of Bank Asset and Liability Management with Implications for the Transmission of Monetary Policy.” RAND Journal of Economics 29: 466–486. Stock, James H., and Mark W. Watson. 2002. “Has the Business Cycle Changed and Why?” In NBER Macroeconomics Annual 2002, edited by M. Gertler and K. Rogoff. Cambridge: MIT Press. Stock, James H., and Mark W. Watson. 2003. “Has the Business Cycle Changed? Evidence and Explanations.” Paper prepared for the Reserve of Kansas City Symposium Monetary Policy and Uncertainty, Jackson Hole, Massachusetts. Svensson, L. E. O. 2013. “Forward Guidance in Theory and Practice: The Swedish Experience.” http://larseosvensson.se/files/papers/forward-guidance.pdf. Szczerbowicz, U. 2012. “The ECB Unconventional Monetary Policies: Have They Lowered Market Borrowing Costs for Banks and Governments?” CEPII 36, December. Toolsema, L. A., J.-E. Sturm, and J. de Haan. 2001. “Convergence of Monetary. Transmission in EMU: New Evidence.” CESifo working paper 465. Van den Heuvel, S. 2002. “Does Bank Capital Matter for Monetary Transmission?” Economic Policy Review 8: 259–265. VanHoose, D. 2007. “Theories of Bank Behavior under Capital Regulation.” Journal of Banking and Finance 31: 3680–3697. 
Von Borstel, J., S. Eickmeier, and L. Krippner. 2016. “The Interest Rate Pass-Through in the Euro Area before and during the Sovereign Debt Crisis.” Journal of International Money and Finance 68: 386–402. Wall, L. D., and D. R. Peterson. 1987. “The Effect of Capital Adequacy Guidelines on Large Bank Holding Companies.” Journal of Banking and Finance 11: 581–600.
chapter 13
Term Premium Variability and Monetary Policy

Timothy S. Fuerst and Ronald Mau
13.1 Introduction

In the aftermath of the 2008 financial crisis, many central banks have adopted unconventional policies, including outright purchases of long-term government debt. These bond purchases were an attempt to alter the yield curve for a given path of the federal funds rate. That is, they were meant to alter the term premium. In a monetary policy environment with a large Fed balance sheet, an important policy question going forward is whether the term premium (in addition to the funds rate) should be a regular input into the policymaking process. That is, should the Fed's bond portfolio be used to smooth fluctuations in the term premium? In this chapter, we integrate two distinct approaches to modeling the term premium in a medium-scale dynamic stochastic general equilibrium (DSGE) model and characterize the ability of each approach to generate a variable term premium as observed in the data. We then address how a policymaker should respond to term premium variability within the context of each modeling approach. Finally, we highlight two concerns that should be considered with the use of this new policy instrument and comment on the implications of each for the benefits of a policy that smooths fluctuations in the term premium. The first approach we implement to model the term premium alters preferences as in Epstein and Zin (1989), hereafter EZ. EZ preferences separate risk aversion from intertemporal substitution elasticities and are a common feature in the finance literature. Rudebusch and Swanson (2012) is a prominent example of using EZ preferences in a DSGE framework to model the term premium. The second approach deviates from an economy with frictionless asset trade by segmenting the asset market so that short and long bonds are priced by different agents, and the ability of agents to arbitrage the spread between long and short bonds is constrained by the net worth of the financial sector.
In this environment, a bond purchase policy will alter the term premium and have real effects. Carlstrom, Fuerst, and Paustian (2017) is a recent example of this approach. The two approaches have wildly different implications for monetary policy, so the modeling choice matters for the central bank. If the term premium is simply another asset price in a world with frictionless asset trade, then fluctuations in the term premium reflect changes in real activity but are otherwise irrelevant for central bankers (unless the premium helps forecast variables of interest to the policymaker). That is, in a frictionless model with EZ preferences, the term premium should not directly concern policymakers. In contrast, if the term premium reflects an economic distortion arising from market segmentation, then there is a first-order role for smoothing fluctuations in the term premium. Our principal results include the following. First, with a standard calibration of the DSGE model and assuming that the business cycle is driven by total factor productivity (TFP) shocks, the EZ approach can produce an average term premium comparable to that found in the data but a trivial and counterfactually small level of variability in the premium. These results are sensitive to the exogenous shocks. A recurring theme in the estimated DSGE literature is the importance of marginal efficiency of investment (MEI) shocks, which perturb the link between investment spending and final capital goods. If the business cycle is instead driven by these shocks, then the average term premium is negative, again with trivial variability. We conclude that the EZ approach by itself has difficulty hitting both the mean and the variability of the term premium. Our second set of results concerns the segmentation model of Carlstrom, Fuerst, and Paustian (2017).
This model features two parameters that define the degree of segmentation in the financial sector: (i) the degree of impatience of financial intermediaries and (ii) the level of adjustment costs on changes in portfolios. These parameters can be chosen to hit exactly the empirical mean and variability of the term premium. Given such a calibration, there are significant welfare gains to a central bank smoothing variations in the term premium by actively using its portfolio of long bonds. We consider two practical difficulties with implementing a policy that eliminates all variability in the term premium, which we will call a term premium peg. First, the economic distortion of the peg depends on the steady-state term premium, which differs from the mean term premium if the yield on the long bond includes adjustments for risk. This implies that by implementing a term premium peg, the policymaker is suppressing fluctuations in the term premium that come from the risk adjustment but are not representative of the segmentation distortion. Second, the term premium is subject to serially correlated measurement error. Thus, under a term premium peg, the policymaker inadvertently introduces exogenous variation into the model economy. This chapter explores both of these effects and concludes that their quantitative significance is modest. Section 13.2 lays out the basic segmented markets model with EZ preferences. Section 13.3 provides a quantitative analysis of the model with segmentation effects turned off. Section 13.4 provides the complementary analysis for the model with active segmentation effects. Policy issues are discussed in section 13.5. Section 13.6 concludes.
13.2 The Model

The model economy consists of households, employment agencies, firms, and financial intermediaries (FIs). We will discuss each in turn.
13.2.1 Households

Each household has recursive preferences over consumption and labor given by
V_t = U(c_t, h_t) + \beta \left[ E_t \left( V_{t+1}^{1-\theta} \right) \right]^{1/(1-\theta)}.  (1)

Using the terminology of Rudebusch and Swanson (2012), the EZ preferences twist the value function. Risk aversion is increasing in \theta. If we set \theta = 0, we have the standard preferences. The intraperiod utility function is given by

U(c_t, h_t) \equiv \frac{c_t^{1-\upsilon}}{1-\upsilon} - b \frac{h_t^{1+\eta}}{1+\eta} + k,  (2)
where c_t and h_t denote consumption and labor, respectively. We choose the constant k > 0 so that steady-state utility is positive, which ensures that the value function always takes on positive values.1 The household has two means of intertemporal smoothing: short-term deposits (D_t) in the FI and accumulation of physical capital (K_t). Households also have access to the market in short-term government bonds (T-bills). But since T-bills are perfect substitutes for deposits and the supply of T-bills moves endogenously to hit the central bank's short-term interest-rate target, we treat D_t as the household's net resource flow into the FIs. To introduce a need for intermediation, we assume that all investment purchases must be financed by issuing new "investment bonds" that are ultimately purchased by the FI. We find it convenient to use perpetual bonds with cash flows of 1, \kappa, \kappa^2, and so on. Let Q_t denote the time-t price of a new issue. Given the time pattern of the perpetuity payment, the new-issue price Q_t summarizes the prices at all maturities; for example, \kappa Q_t is the time-t price of the perpetuity issued in period t-1. The duration and (gross) yield to maturity on these bonds are (1-\kappa)^{-1} and Q_t^{-1} + \kappa, respectively. Let CI_t denote the number of new perpetuities issued in time t to finance investment. In time t, the household's nominal liability on past issues is given by
F_{t-1} = CI_{t-1} + \kappa CI_{t-2} + \kappa^2 CI_{t-3} + \cdots  (3)
We can use this recursion to write the new issue as
CI_t = F_t - \kappa F_{t-1}.  (4)
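To make the perpetuity bookkeeping concrete, the following minimal Python sketch (with illustrative values for κ, the issue history, and the bond price, none of which are the chapter's calibration) builds the liability F from past issues and verifies the recursion in equation 4, along with the duration and yield formulas.

```python
# Sketch: perpetuity bookkeeping with coupon decay factor kappa (illustrative values).
kappa = 0.975  # coupon decay; duration = 1/(1 - kappa), roughly 40 quarters here

def liability(issues):
    """F_t = sum_j kappa^j * CI_{t-j}: nominal liability on current and past issues."""
    return sum(kappa**j * ci for j, ci in enumerate(reversed(issues)))

issues = [1.0, 1.2, 0.9, 1.1]          # CI_1 .. CI_4 (hypothetical issue history)
F_now  = liability(issues)             # F_4
F_prev = liability(issues[:-1])        # F_3

# Recursion (4): the new issue equals F_t - kappa * F_{t-1}
assert abs((F_now - kappa * F_prev) - issues[-1]) < 1e-12

# Duration and gross yield to maturity at a new-issue price Q_t
Q = 25.0
duration = 1.0 / (1.0 - kappa)         # (1 - kappa)^(-1)
gross_ytm = 1.0 / Q + kappa            # Q_t^(-1) + kappa
print(duration, gross_ytm)
```

The recursion holds because every past issue's coupon stream decays by the common factor κ, so the entire outstanding liability can be carried in the single state variable F.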
The representative household's constraints are thus given by

c_t + \frac{D_t}{P_t} + P_t^k I_t + \frac{F_{t-1}}{P_t} \le w_t h_t + R_t^k K_t - T_t + \frac{D_{t-1}}{P_t} R_{t-1} + \frac{Q_t (F_t - \kappa F_{t-1})}{P_t} + div_t  (5)

K_{t+1} \le (1-\delta) K_t + I_t  (6)

P_t^k I_t \le \frac{Q_t (F_t - \kappa F_{t-1})}{P_t} = \frac{Q_t CI_t}{P_t},  (7)

where P_t is the price level, P_t^k is the real price of capital, R_{t-1} is the gross nominal interest rate on deposits, R_t^k is the real rental rate, w_t is the real wage paid to households, T_t is lump-sum taxes, and div_t denotes the dividend flow from the FIs. The household also receives a profit flow from the intermediate goods producers and the new capital producers, but this is entirely standard, so we dispense with this added notation for simplicity. The "loan-in-advance" constraint (equation 7) will increase the private cost of purchasing investment goods.2 The first-order conditions to the household problem include:
-\frac{U_h(c_t, h_t)}{U_c(c_t, h_t)} = w_t  (8)

1 = E_t S_{t+1} \frac{R_t}{\Pi_{t+1}}  (9)

P_t^k M_t = E_t S_{t+1} \left[ R_{t+1}^k + (1-\delta) P_{t+1}^k M_{t+1} \right]  (10)

Q_t M_t = E_t \frac{S_{t+1}}{\Pi_{t+1}} \left[ 1 + \kappa Q_{t+1} M_{t+1} \right],  (11)

where the real stochastic discount factor (SDF) is given by

S_{t+1} = \beta \frac{U_c(t+1)}{U_c(t)} \left[ \frac{V_{t+1}}{\left[ E_t \left( V_{t+1}^{1-\theta} \right) \right]^{1/(1-\theta)}} \right]^{-\theta}  (12)

and \Pi_t \equiv P_t / P_{t-1} is gross inflation. Expressions 8 and 9 are the familiar labor supply equation and Fisher equation, respectively. The capital accumulation expression (10) is distorted relative to the familiar expression by the time-varying distortion M_t, where
M_t \equiv 1 + \frac{\vartheta_t}{\Lambda_t}, and \vartheta_t and \Lambda_t are the multipliers on the loan-in-advance constraint and the budget constraint, respectively. The endogenous behavior of this distortion is fundamental to the real effects arising from market segmentation. Other things being equal, there is a welfare advantage to stabilizing this distortion.
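As a concrete illustration of the SDF in equation 12, the following Python sketch evaluates it over a single step with two equally likely states; all numbers are illustrative (not the chapter's calibration), and setting θ = 0 recovers the standard SDF β U_c(t+1)/U_c(t).

```python
import numpy as np

# Sketch: EZ stochastic discount factor (eq. 12), one step, two states.
beta, upsilon, theta = 0.99, 2.0, 10.0   # illustrative parameters

def u_c(c):
    """Marginal utility of consumption from the CRRA consumption term in (2)."""
    return c ** (-upsilon)

probs  = np.array([0.5, 0.5])            # equally likely states
c_next = np.array([1.02, 0.98])          # next-period consumption by state
V_next = np.array([5.2, 4.8])            # continuation values (positive, as required)
c_now  = 1.0

# Certainty-equivalent term: [E_t(V_{t+1}^{1-theta})]^{1/(1-theta)}
ce = (probs @ V_next ** (1 - theta)) ** (1 / (1 - theta))

# EZ SDF: standard part times the value-function "twist" (V_{t+1}/ce)^{-theta}
sdf = beta * u_c(c_next) / u_c(c_now) * (V_next / ce) ** (-theta)

# Standard SDF, i.e., what survives when theta = 0 and the twist equals 1
sdf_std = beta * u_c(c_next) / u_c(c_now)
print(sdf, sdf_std)
```

Note how the twist reweights states: the high-continuation-value state receives a smaller SDF weight relative to the standard case, which is what lets θ amplify risk premia without changing intertemporal substitution.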
13.2.2 Labor Unions and Employment Agencies

There is a continuum of labor unions that purchase raw labor from households at price w_t and transform it into a unique labor skill that is then sold to competitive employment agencies.3 Union i faces a labor demand curve given by

H_t^i = H_t \left( \frac{W_t^*}{W_t} \right)^{-\varepsilon_w},  (13)
where W_t is the aggregate real wage and W_t^* is the real wage set by union i. With probability (1 - \theta_w), the union can reset its nominal wage in the current period, while with probability \theta_w its nominal wage simply grows at the indexation rate \Pi_{t-1}^{\iota_w}, where \iota_w is the degree of nominal wage indexation to the inflation rate. If union i can reset its wage in time t, its maximization problem is given by
\max_{W_t^*} \; \left\{ W_t^* H_t^i - H_t^i w_t \right\} + \theta_w S_{t+1} \left\{ W_t^* \Psi_{t+1}^{1-\varepsilon_w} H_{t+1}^i - H_{t+1}^i \Psi_{t+1}^{-\varepsilon_w} w_{t+1} \right\} + \theta_w^2 S_{t+1} S_{t+2} \left\{ W_t^* \Psi_{t+1}^{1-\varepsilon_w} \Psi_{t+2}^{1-\varepsilon_w} H_{t+2}^i - H_{t+2}^i \Psi_{t+1}^{-\varepsilon_w} \Psi_{t+2}^{-\varepsilon_w} w_{t+2} \right\} + \cdots,  (14)

where \Psi_{t+1} \equiv \Pi_t^{\iota_w} / \Pi_{t+1} denotes the automatic adjustment of the real wage to inflation. The optimal real wage choice for a typical union that can reset its wage is given by the following:
W_t^* = \frac{G_{1t}}{G_{2t}}  (15)

G_{1t} = W_t^{\varepsilon_w} H_t w_t + E_t S_{t+1} \theta_w \left( \frac{\Pi_{t+1}}{\Pi_t^{\iota_w}} \right)^{\varepsilon_w} G_{1,t+1}  (16)

G_{2t} = \frac{\varepsilon_w - 1}{\varepsilon_w} W_t^{\varepsilon_w} H_t + E_t S_{t+1} \theta_w \left( \frac{\Pi_{t+1}}{\Pi_t^{\iota_w}} \right)^{\varepsilon_w - 1} G_{2,t+1}  (17)
The aggregate real wage then evolves as follows:

W_t^{1-\varepsilon_w} = (1-\theta_w)(W_t^*)^{1-\varepsilon_w} + \theta_w \left( \frac{\Pi_{t-1}^{\iota_w}}{\Pi_t} \right)^{1-\varepsilon_w} W_{t-1}^{1-\varepsilon_w}  (18)
The nominal wage rigidity implies a time-varying dispersion (d_t^w) of real wages given by

d_t^w = (1-\theta_w) \left( \frac{W_t^*}{W_t} \right)^{-\varepsilon_w} + \theta_w \left( \frac{W_{t-1}}{W_t} \right)^{-\varepsilon_w} \left( \frac{\Pi_t}{\Pi_{t-1}^{\iota_w}} \right)^{\varepsilon_w} d_{t-1}^w.  (19)
Each union sells its specific employment variety to a competitive employment agency. These agencies aggregate these varieties into a labor service that is sold to firms at real wage Wt . These agencies solve the following maximization problem:
W_t \left[ \int_0^1 \left( H_t^i \right)^{1-1/\varepsilon_w} di \right]^{1/(1-1/\varepsilon_w)} - \int_0^1 W_t^i H_t^i \, di.  (20)
The optimization conditions are given by
H_t^i = \left( \frac{W_t^i}{W_t} \right)^{-\varepsilon_w} H_t  (21)

H_t \equiv \left[ \int_0^1 \left( H_t^i \right)^{1-1/\varepsilon_w} di \right]^{1/(1-1/\varepsilon_w)}.  (22)
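The CES structure in equations 21 and 22 can be verified numerically. The sketch below approximates the continuum of unions with a discrete grid (the wage vector and elasticity ε_w = 6 are illustrative) and checks that the demand curve and aggregator are mutually consistent and that the wage bill equals W_t H_t.

```python
import numpy as np

# Sketch: CES labor aggregation (eqs. 21-22) with a discrete grid of unions.
eps_w = 6.0                                        # illustrative wage elasticity
W_i = np.array([1.00, 1.05, 0.95, 1.10])           # union-specific real wages

# Zero-profit aggregate wage: the CES index W = [mean(W_i^(1-eps))]^(1/(1-eps))
W = (np.mean(W_i ** (1 - eps_w))) ** (1 / (1 - eps_w))

H = 1.0                                            # aggregate labor service
H_i = (W_i / W) ** (-eps_w) * H                    # demand curve (21)

# The aggregator (22) reproduces H, and payments to unions equal revenue W*H
H_agg = (np.mean(H_i ** (1 - 1 / eps_w))) ** (1 / (1 - 1 / eps_w))
wage_bill = np.mean(W_i * H_i)
print(H_agg, wage_bill, W * H)
```

The zero-profit property (wage bill = W_t H_t) is exactly the statement that the competitive employment agencies earn nothing in equilibrium.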
13.2.3 Firms and Production

The production side of the model is standard and symmetric with the provision of labor input. There is a continuum of intermediate good producers, each with monopoly power over the input variety it produces. A monopolist produces intermediate good i according to the production function
Y_t(i) = A_t K_t(i)^{\alpha} H_t(i)^{1-\alpha},  (23)

where K_t(i) and H_t(i) denote the amounts of capital and labor employed by firm i. The variable \ln A_t is the exogenous level of TFP and evolves according to

\ln A_t = \rho_A \ln A_{t-1} + \varepsilon_{a,t}.  (24)
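Simulating the TFP process in equation 24 is straightforward; a minimal Python sketch with illustrative persistence and shock volatility (not the chapter's calibration):

```python
import numpy as np

# Sketch: simulate ln A_t = rho_A * ln A_{t-1} + eps_{a,t} (eq. 24).
rng = np.random.default_rng(0)
rho_A, sigma_a, T = 0.95, 0.007, 200   # illustrative values

lnA = np.zeros(T)
for t in range(1, T):
    lnA[t] = rho_A * lnA[t - 1] + sigma_a * rng.standard_normal()

A = np.exp(lnA)  # level of TFP
# In a long sample, std(lnA) approaches sigma_a / sqrt(1 - rho_A^2)
print(lnA.std(), sigma_a / np.sqrt(1 - rho_A**2))
```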
Every period a fraction \theta_p of intermediate firms cannot choose their price optimally but reset it according to the indexation rule

P_t(i) = P_{t-1}(i) \Pi_{t-1}^{\iota_p},  (25)

where \Pi_t = P_t / P_{t-1} is gross inflation. The firms that can reset their prices choose their relative price x_t \equiv P_t(i)/P_t optimally to maximize the present discounted value of profits:

\max_{x_t} \; \left\{ x_t Y_t^i - Y_t^i MC_t \right\} + \theta_p S_{t+1} \left\{ x_t \Psi_{t+1}^{1-\varepsilon_p} Y_{t+1}^i - \Psi_{t+1}^{-\varepsilon_p} Y_{t+1}^i MC_{t+1} \right\} + \theta_p^2 S_{t+1} S_{t+2} \left\{ x_t \Psi_{t+1}^{1-\varepsilon_p} \Psi_{t+2}^{1-\varepsilon_p} Y_{t+2}^i - \Psi_{t+1}^{-\varepsilon_p} \Psi_{t+2}^{-\varepsilon_p} Y_{t+2}^i MC_{t+2} \right\} + \cdots,  (26)

where MC_t denotes real marginal cost, \Psi_{t+1} \equiv \Pi_t^{\iota_p} / \Pi_{t+1} is the automatic movement in the relative price, and the firm faces a demand curve given by

Y_t^i = Y_t x_t^{-\varepsilon_p}.  (27)
The firm’s optimization conditions include the following:
R_t^k = MC_t \, MPK_t  (28)

W_t = MC_t \, MPL_t  (29)

\Pi_t^* = \frac{\varepsilon_p}{\varepsilon_p - 1} \frac{X_{1t}}{X_{2t}} \Pi_t  (30)

X_{1t} = MC_t Y_t + E_t S_{t+1} \theta_p \Pi_t^{-\iota_p \varepsilon_p} \Pi_{t+1}^{\varepsilon_p} X_{1,t+1}  (31)

X_{2t} = Y_t + E_t S_{t+1} \theta_p \Pi_t^{\iota_p (1-\varepsilon_p)} \Pi_{t+1}^{\varepsilon_p - 1} X_{2,t+1}  (32)
The aggregate inflation rate and price dispersion, respectively, then evolve as follows:
\Pi_t^{1-\varepsilon_p} = (1-\theta_p)(\Pi_t^*)^{1-\varepsilon_p} + \theta_p \Pi_{t-1}^{\iota_p (1-\varepsilon_p)}  (33)

d_t = \Pi_t^{\varepsilon_p} \left[ (1-\theta_p)(\Pi_t^*)^{-\varepsilon_p} + \theta_p \Pi_{t-1}^{-\iota_p \varepsilon_p} d_{t-1} \right]  (34)
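To see how recursions 33 and 34 behave, the following Python sketch iterates them for a constant gross inflation rate with zero indexation (ι_p = 0); the parameter values are illustrative only.

```python
# Sketch: iterate the aggregate-price identities (33)-(34) with zero indexation
# (iota_p = 0) for a constant gross inflation rate. Illustrative parameters.
eps_p, theta_p = 6.0, 0.75
Pi = 1.005                      # constant gross inflation, roughly 2% annualized

d = 1.0                         # price dispersion; d = 1 at zero inflation
for _ in range(200):
    # (33) with iota_p = 0: back out the reset price Pi_star from aggregate inflation
    Pi_star = ((Pi**(1 - eps_p) - theta_p) / (1 - theta_p)) ** (1 / (1 - eps_p))
    # (34) with iota_p = 0: update price dispersion
    d = Pi**eps_p * ((1 - theta_p) * Pi_star**(-eps_p) + theta_p * d)

print(Pi_star, d)
```

With positive trend inflation and no indexation, resetters price above the average (Π* > Π) and dispersion settles slightly above 1, which is the resource cost of relative-price misalignment under Calvo pricing.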
These monopolists sell their intermediate goods to perfectly competitive firms that produce the final consumption good Y_t by combining a continuum of intermediate goods according to the constant elasticity of substitution technology:

Y_t = \left[ \int_0^1 Y_t(i)^{1-1/\varepsilon_p} \, di \right]^{1/(1-1/\varepsilon_p)}  (35)
Profit maximization and the zero profit condition imply that the price of the final good, Pt , is the familiar CES aggregate of the prices of the intermediate goods.
13.2.4 New Capital Producers

New capital is produced according to a technology that takes I_t investment goods and transforms them into \mu_t \left[ 1 - S\left( I_t / I_{t-1} \right) \right] I_t new capital goods. The time-t profit flow is thus given by

P_t^k \mu_t \left[ 1 - S\left( \frac{I_t}{I_{t-1}} \right) \right] I_t - I_t,  (36)

where the function S captures the presence of adjustment costs in investment and is given by S\left( \frac{I_t}{I_{t-1}} \right) \equiv \frac{\psi_i}{2} \left( \frac{I_t}{I_{t-1}} - 1 \right)^2. These firms are owned by households and discount future cash flows using the household's SDF. The investment shock follows the stochastic process

\log \mu_t = \rho_\mu \log \mu_{t-1} + \varepsilon_{\mu,t},  (37)

where \varepsilon_{\mu,t} is i.i.d. N(0, \sigma_\mu^2). Using the terminology of Justiniano, Primiceri, and Tambalotti (2011), we will refer to these shocks as MEI shocks.
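A minimal sketch of the capital-production technology in equation 36, with an illustrative adjustment-cost parameter ψ_i (not the chapter's calibration):

```python
# Sketch: new capital from investment with quadratic adjustment costs (eq. 36).
psi_i = 2.5   # illustrative adjustment-cost parameter

def S(growth):
    """Adjustment cost S(I_t/I_{t-1}) = (psi_i/2) * (I_t/I_{t-1} - 1)^2."""
    return 0.5 * psi_i * (growth - 1.0) ** 2

I_prev, I_now, mu = 1.00, 1.04, 1.0    # investment growing 4%; neutral MEI shock
new_capital = mu * (1.0 - S(I_now / I_prev)) * I_now

# At a steady state with constant investment, the cost vanishes: S(1) = 0
assert S(1.0) == 0.0
print(new_capital)   # slightly less than I_now because investment is growing
```

An MEI shock μ_t > 1 scales up the new capital obtained from the same investment spending, which is exactly the "perturbed link between investment spending and final capital goods" described in the introduction.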
13.2.5 Financial Intermediaries

The FIs in the model are a stand-in for the entire financial nexus that uses accumulated net worth (N_t) and short-term liabilities (D_t) to finance investment bonds (F_t) and long-term government bonds (B_t). The FIs are the sole buyers of the investment bonds and long-term government bonds. We again assume that government debt takes the form of perpetuities that provide payments of 1, \kappa, \kappa^2, etc. Let Q_t denote the price of a new debt issue at time t. The time-t asset value of the current and past issues of investment bonds is

Q_t CI_t + \kappa Q_t \left( CI_{t-1} + \kappa CI_{t-2} + \kappa^2 CI_{t-3} + \cdots \right) = Q_t F_t.  (38)

The FI's balance sheet is thus given by

Q_t \frac{B_t}{P_t} + Q_t \frac{F_t}{P_t} = \frac{D_t}{P_t} + N_t = L_t N_t,  (39)

where L_t denotes leverage. Note that on the asset side, investment lending and long-term bond purchases are perfect substitutes for the FI. Let R_{t+1}^L \equiv \frac{1 + \kappa Q_{t+1}}{Q_t} denote the realized nominal holding-period return on the long bond. The FI's time-t profits are then given by

prof_t \equiv \frac{P_{t-1}}{P_t} \left[ \left( R_t^L - R_{t-1}^d \right) L_{t-1} + R_{t-1}^d \right] N_{t-1}.  (40)
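As a minimal numerical illustration of equations 39 and 40 (all prices, rates, and balance-sheet values are made up for the example, not model output):

```python
# Sketch: realized holding-period return on the long bond and FI profits.
kappa = 0.975
Q_prev, Q_now = 25.0, 24.8                 # long-bond prices at t-1 and t
R_L = (1.0 + kappa * Q_now) / Q_prev       # realized nominal holding-period return

R_d = 1.010                                # gross deposit rate set at t-1
L_prev, N_prev = 5.0, 1.0                  # leverage and net worth at t-1
infl = 1.005                               # gross inflation P_t / P_{t-1}

# (40): real profits scale with net worth; the spread is levered L_prev times
prof = (1.0 / infl) * ((R_L - R_d) * L_prev + R_d) * N_prev
print(R_L, prof)
```

In this example the fall in the bond price pushes the holding-period return below the deposit rate, and leverage multiplies the loss: real profits come in below beginning-of-period net worth.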
The FI will pay out some of these profits as dividends (div_t) to the household and retain the rest as net worth for subsequent activity. In making this choice, the FI discounts dividend flows using the household's pricing kernel augmented with additional impatience.4 The FI accumulates net worth because it is subject to a financial constraint: the FI's ability to attract deposits will be limited by its net worth. We will use a simple hold-up problem to generate this leverage constraint, but a wide variety of informational restrictions will generate the same constraint. We assume that leverage is taken as given by the FI. We will return to this below. The FI chooses dividends and net worth to solve

V_t \equiv \max_{N_t, div_t} E_t \sum_{j=0}^{\infty} (\beta \zeta)^j \Lambda_{t+j} div_{t+j}  (41)

subject to the financing constraint developed below and the following budget constraint:

div_t + N_t \left[ 1 + f(N_t) \right] \le \frac{P_{t-1}}{P_t} \left[ \left( R_t^L - R_{t-1}^d \right) L_{t-1} + R_{t-1}^d \right] N_{t-1}.  (42)

The function f(N_t) \equiv \frac{\psi_n}{2} \left( \frac{N_t - N_{ss}}{N_{ss}} \right)^2 denotes an adjustment cost function that dampens the ability of the FI to adjust the size of its portfolio in response to shocks. The FI's optimal accumulation decision is then given by

\Lambda_t \left[ 1 + N_t f'(N_t) + f(N_t) \right] = E_t \beta \zeta \Lambda_{t+1} \frac{P_t}{P_{t+1}} \left[ \left( R_{t+1}^L - R_t^d \right) L_t + R_t^d \right].  (43)
The hold-up problem works as follows. At the beginning of period t + 1, but before aggregate shocks are realized, the FI can choose to default on its planned repayment to depositors. In this event, depositors can seize at most fraction (1 - \Phi) of the FI's assets. If the FI defaults, it is left with \Phi R_{t+1}^L L_t N_t, which it carries into the subsequent period. To ensure that the FI will always repay the depositor, the time-t incentive compatibility constraint is thus given by

E_t S_{t+1} \frac{P_t}{P_{t+1}} \left\{ R_{t+1}^L L_t N_t - R_t^d (L_t - 1) N_t \right\} \ge \Phi L_t N_t \, E_t S_{t+1} \frac{P_t}{P_{t+1}} R_{t+1}^L.  (44)
It is useful to think of equation 44 as determining leverage. Since net worth scales both sides of the inequality, leverage is a function of aggregate variables but is independent of each FI's net worth. We will calibrate the model so that this constraint binds in the steady state (and thus binds for small shocks around the steady state). Equations 43 and 44 are fundamental to the model, as they summarize the limits to arbitrage between the return on long-term bonds and the rate paid on short-term deposits. The leverage constraint (equation 44) limits the FI's ability to attract deposits and eliminate the arbitrage opportunity between the deposit and lending rate. Increases in net worth allow for greater arbitrage and thus can eliminate this market segmentation. Equation 43 limits this arbitrage in the steady state through additional impatience (\zeta < 1) and dynamically through portfolio adjustment costs (\psi_n > 0). Since the FI is the sole means of investment finance, this market segmentation means that central bank purchases that alter the supply of long-term debt will have repercussions for investment loans, because net worth and deposits cannot quickly sterilize the purchases.
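In a steady state with the constraint binding, net worth cancels from both sides of equation 44 and the common SDF and inflation terms drop out, so leverage solves R^L L - R^d(L - 1) = Φ L R^L, i.e., L = R^d / [R^d - (1 - Φ) R^L]. A minimal Python sketch with illustrative quarterly gross rates and seizure parameter:

```python
# Sketch: steady-state leverage implied by the binding incentive constraint (44).
# With constant rates, R_L*L - R_d*(L - 1) = Phi*L*R_L
#   =>  L = R_d / (R_d - (1 - Phi) * R_L).
R_d, R_L, Phi = 1.010, 1.012, 0.20         # illustrative values

L = R_d / (R_d - (1.0 - Phi) * R_L)
assert L > 1.0                             # FI holds more assets than net worth

# Verify the constraint holds with equality at this leverage
lhs = R_L * L - R_d * (L - 1.0)
rhs = Phi * L * R_L
assert abs(lhs - rhs) < 1e-12
print(L)
```

The comparative statics are intuitive: a larger Φ makes default more attractive, so the constraint tightens and steady-state leverage falls; a wider long-short spread (R^L above R^d) supports more leverage.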
13.2.6 Central Bank Policy

We assume that the central bank follows a familiar Taylor rule over the short rate (T-bills and deposits):
$$ \ln(R_t) = (1-\rho)\ln(R_{ss}) + \rho\ln(R_{t-1}) + (1-\rho)\left(\tau_\pi \pi_t + \tau_y y_t^{gap}\right), \quad (45) $$
where y_t^{gap} ≡ ln(Y_t/Y_t^f) denotes the deviation of output from its flexible-price counterpart. We will think of this as the federal funds rate (FFR). The supply of short-term bonds (T-bills) is endogenous, varying as needed to support the FFR target. As for the long-term bond policy, the central bank will choose between an exogenous path for the quantity of long-term debt available to FIs or a policy rule that pegs the term premium and thus makes the level of debt endogenous. We will return to this below. Fiscal policy is entirely passive. Government expenditures are set to zero. Lump-sum taxes move endogenously to support the interest payments on the short and long debt.
13.2.7 Debt Market Policies

To close the model, we need one more restriction that will pin down the behavior in the long-debt market. We will consider two different policy regimes for this market: exogenous debt and endogenous debt. We will discuss each in turn.

Exogenous debt. The variable b_t ≡ Q_tB_t/P_t denotes the real value of long-term government debt on the balance sheet of FIs. There are two distinct reasons for this variable to fluctuate. First, the central bank could engage in long bond purchases (quantitative easing, or QE). Second, the fiscal authority could alter the mix of short debt to long debt in its maturity structure. Further research could model both of these scenarios as exogenous movements in long debt and quantify the effect of each. Our benchmark experiments will hold long debt fixed at steady state. Under this exogenous-debt scenario, the long yield and term premium will be endogenous.

Endogenous debt. The polar opposite scenario is a policy under which the central bank pegs the term premium at its steady-state value. Under this policy regime, the level of long debt will be endogenous. Under a term premium peg, the asset value of the intermediary will remain fixed, while the composition of assets will vary. That is, any increase in FI holdings of investment debt is achieved via the central bank purchasing an equal magnitude of government bonds. The proceeds from this sale effectively finance loans for investment.
13.2.8 Yields and Term Premiums

The (gross) yield on the long-term bond, R_t^{long}, is defined by
$$ R_t^{long} = \frac{1}{Q_t} + \kappa. \quad (46) $$
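Equation 46 inverts the pricing of a perpetuity whose coupons decay geometrically at rate κ (payments 1, κ, κ², ...). Under a constant gross discount rate R, the price sums to Q = 1/(R − κ), so that 1/Q + κ recovers R; and with the calibrated κ = 1 − 1/40, the maturity measure (1 − κ)^{-1} equals the stated forty quarters. A minimal numerical check (the short rate 1.01 is an illustrative value, not the chapter's calibration):

```python
# Price a perpetuity paying coupons 1, kappa, kappa^2, ... at a constant
# gross quarterly rate R, then recover the yield via equation 46.
kappa = 1 - 1 / 40          # calibrated coupon decay (table 13.1)
R = 1.01                    # illustrative constant gross short rate

# Direct summation of discounted cash flows, truncated far out.
Q = sum(kappa ** (j - 1) * R ** (-j) for j in range(1, 5000))

Q_closed = 1 / (R - kappa)  # closed-form geometric sum
R_long = 1 / Q + kappa      # equation 46

print(Q, Q_closed, R_long)  # Q matches 1/(R - kappa); R_long recovers R
```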
The term premium is defined to be the difference between this yield and the corresponding yield implied by the expectations hypothesis (EH) of the term structure. This hypothetical bond price and corresponding yield are defined as
$$ Q_t^{EH} = \frac{1 + \kappa E_t Q_{t+1}^{EH}}{R_t}, \quad (47) $$
$$ R_t^{EH,long} = \frac{1}{Q_t^{EH}} + \kappa. \quad (48) $$
The term premium is then given by
$$ TP_t \equiv \frac{1}{Q_t} - \frac{1}{Q_t^{EH}} = R_t^{long} - R_t^{EH,long}. \quad (49) $$
We also define two other popular measures of the term structure: the slope of the term structure, which is defined as the spread between the long rate and the short rate, and the excess return (ER), which is defined as the spread between the holding period returns on the long bond and the short rate:
$$ Slope_t \equiv R_t^{long} - R_t, \quad (50) $$
$$ ER_t \equiv \frac{1 + \kappa Q_t}{Q_{t-1}} - R_{t-1} = R_t^L - R_{t-1}. \quad (51) $$
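As a sanity check on equations 47 through 51, the EH recursion can be iterated at a constant short rate; it converges to the frictionless bond price, so the term premium, slope, and excess return all vanish in this deterministic case. The short rate below is illustrative:

```python
# Iterate the expectations-hypothesis recursion (47), Q_EH = (1 + kappa*Q_EH')/R,
# at a constant short rate, then form the measures in equations 49-51.
kappa = 1 - 1 / 40
R = 1.01                      # illustrative constant gross short rate

Q_eh = 1.0                    # arbitrary starting guess
for _ in range(2000):         # backward iteration on equation 47; contracts at kappa/R
    Q_eh = (1 + kappa * Q_eh) / R

Q = 1 / (R - kappa)           # actual price at a constant rate (geometric sum)
R_long = 1 / Q + kappa        # equation 46
R_eh_long = 1 / Q_eh + kappa  # equation 48

TP = R_long - R_eh_long       # equation 49: zero when rates are constant
slope = R_long - R            # equation 50
ER = (1 + kappa * Q) / Q - R  # equation 51 with Q_t = Q_{t-1} in steady state

print(TP, slope, ER)          # all vanish in this deterministic steady state
```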
13.3 The Term Premium with Frictionless Financial Markets

We begin our quantitative analysis by abstracting from market segmentation effects. Segmentation is toggled off by setting ζ = 1 and ψ_n = 0. This implies M_t ≡ 1. Expanding the definition of the bond price, we have
$$ Q_t = E_t\frac{S_{t+1}}{\Pi_{t+1}} + \kappa E_t\!\left[\frac{S_{t+1}}{\Pi_{t+1}}\frac{S_{t+2}}{\Pi_{t+2}}\right] + \kappa^2 E_t\!\left[\frac{S_{t+1}}{\Pi_{t+1}}\frac{S_{t+2}}{\Pi_{t+2}}\frac{S_{t+3}}{\Pi_{t+3}}\right] + \cdots \quad (52) $$
Similarly, we have
$$ Q_t^{EH} = E_t\frac{S_{t+1}}{\Pi_{t+1}} + \kappa E_t\frac{S_{t+1}}{\Pi_{t+1}}\, E_t\frac{S_{t+2}}{\Pi_{t+2}} + \kappa^2 E_t\frac{S_{t+1}}{\Pi_{t+1}}\, E_t\frac{S_{t+2}}{\Pi_{t+2}}\, E_t\frac{S_{t+3}}{\Pi_{t+3}} + \cdots \quad (53) $$
This implies that the bond price can be expressed as
$$ Q_t = Q_t^{EH} + \kappa\,\mathrm{cov}_t\!\left(\frac{S_{t+1}}{\Pi_{t+1}},\, \frac{S_{t+2}}{\Pi_{t+2}}\right) + \cdots \quad (54) $$
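The sign logic behind equation 54 can be illustrated with a two-period Monte Carlo: when the nominal SDF is negatively autocorrelated, the expectation of the product falls below the product of expectations, pushing Q_t below Q_t^EH and the term premium above zero. All parameter values here are illustrative, not the chapter's calibration:

```python
import random

# Two-period illustration of equation 54: with a negatively autocorrelated
# nominal SDF m_t, Q = E[m1] + kappa*E[m1*m2] lies below
# Q_EH = E[m1] + kappa*E[m1]*E[m2], i.e. a positive term premium.
random.seed(0)
kappa = 1 - 1 / 40
mu, rho, sigma = 0.99, -0.5, 0.02   # illustrative SDF mean, autocorrelation, vol

m1s, m2s = [], []
for _ in range(200_000):
    m1 = mu + random.gauss(0.0, sigma)
    m2 = mu + rho * (m1 - mu) + random.gauss(0.0, sigma)
    m1s.append(m1)
    m2s.append(m2)

n = len(m1s)
E_m1 = sum(m1s) / n
E_m2 = sum(m2s) / n
E_m1m2 = sum(a * b for a, b in zip(m1s, m2s)) / n

Q = E_m1 + kappa * E_m1m2           # truncated version of equation 52
Q_eh = E_m1 + kappa * E_m1 * E_m2   # truncated version of equation 53

print(Q < Q_eh)   # True: negative autocorrelation lowers Q relative to Q_EH
```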
Since κ < 1, the early covariances are quantitatively the most important. Hence, there is a positive term premium if and only if the nominal SDF S_{t+1}/Π_{t+1} is negatively autocorrelated at short horizons. As emphasized by Fuerst (2015), it is impossible to generate a significantly positive term premium with standard preferences (θ = 0). There are two reasons for this result. First,
with θ = 0 and for plausible values of ν in equation 2, the real SDF has trivial variability (this is just a manifestation of the equity premium puzzle). Second, inflation is positively autocorrelated at short horizons, an autocorrelation that is inherited by the nominal SDF. This positive autocorrelation in the nominal SDF kills any chance of generating a positive term premium. But the EZ effect (θ > 0) can easily deliver a positive term premium. Recall that the real SDF is given by
$$ S_{t+1} = \beta\left[\frac{V_{t+1}}{\left[E_t\left(V_{t+1}\right)^{1-\theta}\right]^{1/(1-\theta)}}\right]^{-\theta}\frac{U_c(t+1)}{U_c(t)}. \quad (55) $$
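A two-state toy calculation with equation 55 shows how θ scales the SDF response to innovations in lifetime utility; the ±2 percent utility innovation and the normalization of the marginal-utility ratio to one are our hypothetical choices, not the chapter's calibration:

```python
# Two-state illustration of the EZ factor in equation 55.  Lifetime utility
# V is high or low with equal probability; the marginal-utility ratio is
# held at one so only the EZ term moves.  All numbers are hypothetical.
beta = 0.99
V_hi, V_lo = 1.02, 0.98        # +/-2 percent innovation in lifetime utility

def ez_sdf(V, theta):
    # certainty equivalent [E V^(1-theta)]^(1/(1-theta)) over the two states
    ce = (0.5 * V_hi ** (1 - theta) + 0.5 * V_lo ** (1 - theta)) ** (1 / (1 - theta))
    return beta * (V / ce) ** (-theta)   # U_c ratio normalized to one

for theta in (0, 50, 200):
    # ratio of the SDF in the good-news state to the bad-news state
    print(theta, ez_sdf(V_hi, theta) / ez_sdf(V_lo, theta))
```

With θ = 0 the EZ factor is inert (the ratio is one); with θ = 200 the good-news SDF is smaller by a factor of roughly e^8, which is the amplification the text relies on.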
The SDF is a product of an EZ term and the traditional intertemporal marginal rate of substitution. The EZ term is, essentially, a forecast error, an innovation in lifetime utility. If θ is large enough, then a positive innovation in lifetime utility will lead to a sharp one-time decline in the real SDF. To generate a term premium, we need a persistent movement in inflation that is of the opposite sign to this innovation in lifetime utility. The model has two exogenous shocks: TFP shocks and MEI shocks. Positive innovations in either of these shocks will lead to positive innovations in lifetime utility. The implications for the term premium then depend on the response of inflation to these shocks. We will look at each shock in turn.

To capture the mean and variability of the term premium, we use a third-order approximation to the model. The baseline parameter values are displayed in table 13.1. The financial parameters are chosen to imply a bond duration of forty quarters, a steady-state term premium of 100 basis points, and FI leverage of 6. In this section, we abstract from segmentation issues and set ζ = 1 (implying no steady-state term premium) and ψ_n = 0. The remaining parameter values are broadly consistent with the literature that estimates medium-scale DSGE models, with two important caveats. The estimation literature typically includes habit in consumption (from which we abstract) but excludes EZ effects (which we include). Consistent with the evidence in Justiniano, Primiceri, and Tambalotti (2011), output and investment variability are largely driven by MEI shocks. At the eight-quarter horizon, MEI shocks account for 62 percent of the variability of output and 87 percent of the variability of investment.

Measurement of the term premium is easy in theory but difficult in practice. For the period 1961–2007, Rudebusch and Swanson (2012) report a mean term premium of 106 basis points and a standard deviation of 54 basis points.
For the period January 1962 to April 2008, Adrian, Crump, and Moench (2013) report a mean of 169 basis points and a standard deviation of 154 basis points. For our benchmark data target, we will use a mean of 130 basis points and a standard deviation of 100 basis points. We consider five cases when computing business cycle statistics:

I. No Epstein-Zin effects and no segmentation costs, θ = 0 and ψ_n = 0.
II. Epstein-Zin effects only, θ = 200 and ψ_n = 0.
Table 13.1 Parameter Values for Baseline Calibration

Preference parameters: β = 0.99, ν = 2, η = 1, U_ss = 1, h_ss = 1
Production parameters: α = 0.33, δ = 0.025, ε_p = ε_w = 5, ψ_i = 2
Nominal stickiness: θ_p = θ_w = 0.75, i_p = i_w = 0.5
Financial parameters: κ = 1 − 1/40, ψ_n = 1, ζ = 0.9852, Φ = 0.1687
Exogenous shocks: ρ_A = 0.95, σ_A = 0.01; ρ_μ = 0.80, σ_μ = 0.06
III. Epstein-Zin effects and segmentation costs, θ = 50 and ψ_n = 1.
IV. Pegged term premium.
V. Pegged segmentation multiplier.

For each case, we show the standard deviation of output; the relative standard deviation of consumption, investment, labor, and real wages to output; the standard deviation of net interest rates, inflation, and spreads in basis points; and the correlation of quantities and prices with output. Here Slope_t = R_t^{long} − R_t, ER_t = R_t^L − R_{t−1}, and r_t = R_t − 1. Table 13.2 reports some business cycle statistics for the model under a variety of scenarios. For present purposes, the focus is on cases I and II, which look at the model without segmentation effects and with no EZ effects (θ = 0) and substantial EZ effects (θ = 200). A key takeaway from table 13.2 is that adding EZ effects has essentially no effect on macroaggregates such as output, consumption, and investment. But there can
Table 13.2 Business Cycle Statistics
Columns for each of cases I–V: σ_Y; σ_X/σ_Y and ρ_X,Y for ln(c), ln(I), ln(h), and ln(w); σ_x (in basis points) and ρ_x,y for r, r^long, r^{EH,long}, π, TP, Slope, and ER.
be important effects on some financial variables. This phenomenon has been dubbed macrofinance separation, in that EZ preferences can alter the behavior of asset prices without altering the behavior of macroaggregates. See Lopez, Lopez-Salido, and Vazquez-Grande (2015) for a recent contribution. In the present case, a key change in financial variables is the cyclical behavior of the term premium, which goes from mildly procyclical without EZ effects to strongly countercyclical with the EZ effects. This is the only significant change in the financial business cycle statistics. In both cases, no EZ effects and large EZ effects, a significant disappointment is the trivial variability in the term premium. We will return to this below.

Figures 13.1 and 13.2 examine the model's implications for TFP shocks. With no EZ effects (θ = 0), the model generates a trivial average term premium. But with sufficient EZ risk aversion, it is quite easy to hit a fairly large mean premium: with θ = 200, the mean term premium is nearly 100 basis points. The reason for this is quite evident in the impulse response function (IRF) in figure 13.2 (which assumes θ = 200). A positive TFP shock leads to a sharp increase in lifetime utility, which implies a large negative innovation in the real SDF. The persistent TFP shock implies a persistent decline in inflation. Hence, the nominal SDF is significantly negatively correlated at short horizons, thus implying a positive term premium. Without the EZ effect, the movement in the real SDF is trivial, so there is only a tiny term premium.

But there is a problem with TFP shocks. The variability of the term premium is counterfactually small. Even with θ = 200, the standard deviation is less than 5 basis points. This result is distinct from the results of Rudebusch and Swanson (2012), who report much larger variability in the premium.
For example, their "best fit" delivers a term premium standard deviation of 47 basis points. The reason for the difference is the form of the Taylor rule used. Rudebusch and Swanson assume that the central bank responds to the level of output relative to steady state (or trend in a model with exogenous growth). Let us call this an output rule, in contrast to the gap rule in equation 45. It is useful to rewrite the gap as
$$ y_t^{gap} = \ln\!\left(\frac{Y_t/Y_{ss}}{Y_t^f/Y_{ss}}\right) = \ln(Y_t/Y_{ss}) - \ln(Y_t^f/Y_{ss}) = y_t - y_t^{flex}. \quad (56) $$
Since output is the sum of flexible price output and the gap, we can write an output rule as
$$ \ln(R_t) = (1-\rho)\ln(R_{ss}) + \rho\ln(R_{t-1}) + (1-\rho)\left(\tau_\pi \pi_t + \tau_y y_t^{gap} + \tau_y y_t^{flex}\right). \quad (57) $$
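The decomposition in equation 57 can be traced numerically: holding inflation and the gap at zero, any difference between the two rules' rate paths is the τ_y·y_flex term, which behaves like a string of serially correlated policy shocks when flexible-price output rises persistently. The coefficients below are illustrative, not the chapter's estimates:

```python
import math

# Compare the gap rule (45) with the output rule (57) when a persistent
# TFP shock raises flexible-price output.  The output gap and inflation are
# held at zero, isolating the tau_y * y_flex term.
rho, tau_y = 0.8, 0.5 / 4          # illustrative smoothing and output response
ln_R_ss = math.log(1.01)           # illustrative steady-state gross rate

rho_A = 0.95                       # TFP persistence (table 13.1)
y_flex = [0.01 * rho_A ** t for t in range(20)]   # persistent 1% flexible-price path

ln_R_gap, ln_R_out = ln_R_ss, ln_R_ss
spread = []                        # output-rule rate minus gap-rule rate
for yf in y_flex:
    ln_R_gap = (1 - rho) * ln_R_ss + rho * ln_R_gap                # eq. 45, gap = pi = 0
    ln_R_out = (1 - rho) * ln_R_ss + rho * ln_R_out + (1 - rho) * tau_y * yf
    spread.append(ln_R_out - ln_R_gap)

print(spread[0], spread[10])   # positive and persistent: contractionary "shocks"
```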
For the case of TFP shocks, an output rule is akin to adding serially correlated exogenous policy shocks to a gap-based Taylor rule. Consider a positive TFP shock. An output-based rule implies that the central bank commits to a persistent series of contractionary policy shocks. This then implies that the decline of inflation is much more substantial and thus generates more term premium variability. Under a gap rule, the initial decline in inflation
Figure 13.1 Term Premium (TFP Shocks)
Panels: mean term premium (basis points) and standard deviation of the term premium (basis points), each plotted against the EZ parameter θ.
is 45 basis points, and the (negative) innovation in the real SDF is less than 10 percent. Under an output rule, the initial inflation decline is 275 basis points, and the SDF innovation is more than 15 percent. Further, under an output rule, the persistent sequence of contractionary policy shocks keeps inflation persistently below steady state. Even after twenty quarters, inflation is more than 200 basis points below steady state (compared to 10 basis points for the gap rule). We thus conclude that the model is able to generate substantial term premium variability only if the central bank uses an atypical policy rule.

Figures 13.3 and 13.4 provide the corresponding analysis for MEI shocks. In this case, the EZ preferences actually hurt the model's ability to generate a positive term premium; the mean term premium is monotonically decreasing in θ. The reason for this is clear in the IRF in figure 13.4 (which assumes θ = 200). A positive MEI shock increases lifetime
Figure 13.2 IRF to a TFP Shock (No Segmentation Effects)
Notes: Output, investment, consumption, output gap, and real SDF are percent deviations from steady state. Inflation, the funds rate, the ten-year yield, and the term premium are deviations from steady state in annualized basis points. The IRFs are computed using a third-order approximation.
utility and thus generates a negative innovation in the real SDF. But by increasing the demand for final output, which can now more easily be transformed into capital, the MEI shock increases total demand and thus persistently increases the inflation rate. Hence, the nominal SDF has positive serial correlation and thus delivers a negative term premium. For demand shocks, long-term bonds are a hedge and thus have a negative term premium. As with TFP shocks, the variability of the term premium is again trivial. Figure 13.5 reports the mean and standard deviation of the term premium when both shocks are active. The results are as anticipated. The mean term premium is increasing in the EZ parameter, but the effect is much more modest than in figure 13.1. The variability of the term premium is again trivial. As emphasized by Bansal and Yaron (2004), long-run risk is helpful in matching the equity premium with EZ preferences. The basic logic is that more persistence in the shock process implies larger variability in the real SDF. For the case of the term premium, greater persistence in either TFP or MEI shocks is not helpful. As TFP shocks become more persistent, the wealth effect on labor supply tends to increase marginal cost and inflation, an effect that mitigates the EZ effects. For example, if we set ρA = 0.999 and θ = 200 , the mean term premium is 34 basis points, with a standard deviation of 8 basis points. For MEI shocks, greater persistence simply amplifies the hedging properties of the long bond. With ρµ = 0.95 and θ = 200, the mean term premium is −416 basis points, with a standard deviation of 78 basis points.
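The long-run-risk mechanism can be quantified in a stripped-down way: if a shock to period utility decays at rate ρ, a unit innovation moves the discounted sum Σ_j β^j x_{t+j} by 1/(1 − βρ), which grows rapidly as ρ approaches 1/β. This is a schematic proxy for the lifetime-utility innovations driving the EZ term, not the chapter's model solution:

```python
# Innovation to the discounted sum of an AR(1) process: a unit shock today
# moves E_t sum(beta^j x_{t+j}) by sum((beta*rho)^j) = 1/(1 - beta*rho).
# Schematic proxy for the lifetime-utility innovation only.
beta = 0.99

def lifetime_innovation(rho, horizon=20_000):
    # truncated geometric sum; horizon chosen so truncation error is negligible
    return sum((beta * rho) ** j for j in range(horizon))

for rho in (0.80, 0.95, 0.999):
    print(rho, lifetime_innovation(rho), 1 / (1 - beta * rho))
```

Moving ρ from 0.80 to 0.999 multiplies the lifetime-utility response many times over, which is why persistence interacts so strongly with the EZ preferences in the text.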
Figure 13.3 Term Premium (MEI Shocks)
Panels: mean term premium (basis points) and standard deviation of the term premium (basis points), each plotted against the EZ parameter θ.
We thus conclude that the EZ approach to the term premium has difficulty in matching either the mean or the standard deviation of the term premium. One significant caveat to this comes if the cycle is driven by TFP shocks and the central bank follows an output Taylor rule. Since the gap represents departure from the flexible-price benchmark, optimal Taylor rules will typically include responses to the measured gap. In contrast, there is never a welfare-based argument for responding to measured output. One could thus suggest that matching the mean and variability of the term premium requires EZ effects and a suboptimal policy rule of a particular form.
13.4 Adding Segmented Markets

Given the difficulty of generating significant variability in the term premium solely from EZ effects, here we investigate the model with the segmentation effects turned on. As
Figure 13.4 IRF to an MEI Shock (No Segmentation Effects)
Notes: Output, investment, consumption, output gap, and real SDF are percent deviations from steady state. Inflation, the funds rate, the ten-year yield, and the term premium are deviations from steady state in annualized basis points. The IRFs are computed using a third-order approximation.
noted in table 13.1, we use ψ_n = 1 and ζ = 0.9852. The portfolio adjustment elasticity is comparable to the estimate of Carlstrom, Fuerst, and Paustian (2017). The extra level of discounting generates a steady-state term premium of 100 basis points and a steady-state segmentation distortion of M_ss = 1.072. The mean term premium will be the sum of the steady-state premium and the EZ risk adjustment that comes from the higher-order approximation of the model.

Figure 13.6 reports the mean and standard deviation of the model's term premium for the case in which both shocks are active. The results with the segmentation effects turned off are also reported for comparison. Several comments are in order. First, the model with segmentation effects makes long bonds much riskier. Compared to the model without segmentation effects, the mean term premium is more sharply increasing in θ. These effects are quantitatively important. With θ = 200, the segmentation effects increase the mean risk adjustment for long bonds by more than 200 basis points. It is thus quite easy to match the empirical mean of the term premium by choosing some combination of EZ effects (θ) and extra discounting (ζ). Second, and more important, the segmentation effects can easily generate significant term premium variability. Again, the segmentation effects are quantitatively important: the standard deviation of the term premium is an order of magnitude larger in the model with active segmentation effects. An EZ term of, say, θ = 50 allows the model to closely mirror both the mean and the standard deviation of the term premium in the data. We will use this value going forward.
Figure 13.5 Term Premium (TFP and MEI Shocks)
Panels: mean term premium (basis points) and standard deviation of the term premium (basis points), each plotted against the EZ parameter θ.
Figures 13.7 and 13.8 report the IRFs for the two exogenous shocks (for θ = 50). The TFP shock causes a significant decline in the inflation rate. Since long bonds are nominal, this leads to a surge in the FI's real holdings of long bonds. The portfolio adjustment costs imply that the FI is willing to hold this abundance of long bonds only if term premiums rise in compensation. This premium movement is substantial (roughly 20 basis points on impact) and, because of the loan-in-advance constraint on investment, leads to a decline in investment and a corresponding surge in consumption. For the MEI shock, there are contrasting effects on the term premium. The surge in inflation lowers the FI's holding of long bonds, implying a decline in term premiums. But the MEI shock also increases the demand for investment, with a consequent rise in term premiums. The net effect is quite small; the term premium falls by 7 basis points on impact. This is an important observation. Although the MEI shocks are an important driver of the business cycle, they contribute only a modest portion of variability in the term premium.
Figure 13.6 Term Premium with Segmentation (TFP and MEI Shocks)
Panels: (a) mean term premium relative to steady state (basis points) and (b) standard deviation of the term premium (basis points), each plotted against the EZ parameter θ for ψ_n = 0 and ψ_n = 1.
Figure 13.7 IRF to a TFP Shock (with Segmentation Effects)
Notes: Output, investment, consumption, output gap, and real SDF are percent deviations from steady state. Inflation, the funds rate, the ten-year yield, and the term premium are deviations from steady state in annualized basis points. The IRFs are computed using a third-order approximation.
Figure 13.8 IRF to an MEI Shock (with Segmentation Effects)
Notes: Output, investment, consumption, output gap, and real SDF are percent deviations from steady state. Inflation, the funds rate, the ten-year yield, and the term premium are deviations from steady state in annualized basis points. The IRFs are computed using a third-order approximation.
These endogenous movements in the term premium reflect changes in the risk adjustment on long bonds and changes in the segmentation distortion. From our earlier results, we know that there are trivial movements in the premium arising from risk effects. Instead, almost all of the variability in the term premium reflects changes in the segmentation distortion. The central bank could use purchases (sales) of long debt to mitigate these increases (decreases) in the term premium and their consequent effect on real activity. We will investigate these issues below.

Table 13.2 also reports the business cycle statistics for the model with segmentation effects (case III in the table). Segmentation is a real distortion, so there is no macrofinance separation here. Output is now less variable because the distortion dampens real behavior. But it also alters the mix of variability so that investment becomes a much more important driver of the business cycle; the relative standard deviation rises from 3.8 to 5.3. As for the finance variables, a key difference is in the variability of the term premium. As discussed above, it increases by an order of magnitude from case II to case III. Curiously, this increase in the volatility of the term premium is due to the increased variability of the EH bond yield; the observed long-term bond return exhibits no additional volatility. But the cyclicality of the term premium does change, switching from countercyclical under case II to mildly procyclical in case III. Thus, although the model with segmented markets generates empirically valid levels of variability in the term premium, the model may fail to generate the observed cyclicality of risk premiums. The robustness of this result with regard to the inclusion of other shocks, and so on, is a subject for further research.
However, observed measures of bond risk such as the term structure slope and excess return remain countercyclical in the presence of segmentation effects.
13.5 Segmented Markets and Monetary Policy

Carlstrom, Fuerst, and Paustian (2017) demonstrate that up to a first-order approximation, the term premium moves one-for-one with the market segmentation distortion (M_t). This segmentation distortion has real effects by altering the efficient allocation of output between consumption and investment. Hence, a policy that stabilizes the term premium is likely to be welfare-increasing. But things are more complicated with EZ effects and higher-order approximations. Now the term premium will reflect both the segmentation distortion (as in Carlstrom, Fuerst, and Paustian 2017) and the time-varying risk effects arising from the EZ preferences. Our focus is on policy across the business cycle, so we introduce steady-state subsidies on factor prices (to counter the monopoly markups) and the cost of capital goods (to counter the loan-in-advance constraint) so that the steady state of the model is efficient. Using a third-order approximation, we compute the welfare gain
of a policy that varies the central bank's holdings of long bonds to stabilize the term premium at its steady-state level of 100 basis points. The baseline comparison policy is one in which the central bank holds its nominal bond portfolio fixed. Welfare is measured by expected lifetime utility of the household evaluated at the nonstochastic steady state.5

Before turning to the welfare analysis, it is instructive to return to table 13.2 and the business cycle statistics. Cases IV and V consider a term premium peg and a segmentation peg, respectively. An important observation is that under either peg, the business cycle statistics closely mirror the model with EZ effects but no segmentation effects. This is not surprising. If, for example, the segmentation distortion is held fixed, as it is under a peg, then the real model becomes isomorphic to the model with no such distortion. This is a powerful example of how the nature of monetary policy will drive financial variables such as the term premium. That is, in a world with segmentation effects, optimal monetary policy will eliminate the real and financial implications of segmentation.

Figure 13.9 reports the welfare gain of a term premium peg as a function of θ. The welfare gain is normalized to consumption units so that, for example, 0.05 means a perpetual increase in steady-state consumption of 0.05 percent. Figure 13.9 also reports the welfare gain of a long debt policy in which the central bank pegs the market segmentation distortion at steady state (M_t = M_ss). Absent any interactions with the markup distortions, a segmentation peg will be optimal as it eliminates a time-varying distortion
Figure 13.9 Welfare Gain of a Term Premium Peg
Welfare gain in consumption units plotted against the EZ parameter θ, for tech. shocks, inv. shocks, and both.
Note: Welfare gains are normalized to consumption units so that, for example, 0.05 means a perpetual increase in steady-state consumption of 0.05 percent.
from the model. We do not view a segmentation peg as a reasonable policy alternative, but it helps demonstrate the advantages and disadvantages of a term premium peg. Recall that up to a first-order approximation, these two pegs are identical. But there are differences with higher-order effects. For the case of both shocks and θ = 0, there is a welfare gain of the term premium peg of nearly 0.2 percent. This is close to the gain of a segmentation peg. But as we increase the EZ coefficient θ, these two welfare gains diverge. The gain of a segmentation peg is modestly increasing in θ, an implication of the household's preference for distortion stabilization as risk aversion increases. But the gain of a term premium peg diminishes in θ. This decline arises because of the growing gap between the steady-state and mean term premium. The steady-state premium comes from the additional FI discounting and thus reflects the segmentation distortion. The mean term premium is the steady-state premium plus the risk adjustment (either positive or negative) that comes from the EZ effect. As we increase the EZ coefficient, the mean term premium under the baseline policy becomes further separated from the steady-state term premium. A central bank that pegs the mean term premium at 100 basis points will therefore exacerbate the average segmentation distortion by pegging the premium at the wrong level. This mismeasurement problem is increasing in θ, so that the welfare gain of the premium peg is decreasing in θ.

This mismeasurement problem is similar but distinct for the two shocks. Consider first the case of TFP shocks in figure 13.9. As we increase θ, the mean term premium under an exogenous debt policy increases because of risk-aversion effects (see figure 13.1).
If the central bank pegs the mean term premium at 100 basis points, it is then using its portfolio to overcompensate for segmentation effects, thus driving the average segmentation distortion below steady state. These effects are illustrated in figure 13.10, which graphs the average segmentation wedge (relative to steady state) as a function of θ. Under a term premium peg, this mean distortion is decreasing in θ as the "overcompensation" effect increases. This, then, implies that the gain to a term premium peg is diminishing in θ.

The story is symmetric for the case of MEI shocks. The risk effect on the average term premium is now decreasing in θ (see figure 13.3). If the central bank pegs the mean term premium at 100 basis points, it will drive the average segmentation distortion above steady state (see figure 13.10). The average segmentation distortion is thus increasing in θ, so that the welfare gain of a term premium peg is again decreasing in θ.

In summary, in a world with EZ risk effects on bond prices, the mean term premium is not the same as the mean segmentation distortion. By pegging the mean term premium, the central bank will typically exacerbate the mean segmentation distortion, pushing it above or below its nonstochastic steady state. There is thus a trade-off between minimizing variability in the segmentation distortion and achieving a desired mean. But these mismeasurement effects are modest. Even with θ = 200, the welfare gain of the peg is still substantial at 0.14 percent.
Figure 13.10 Mean Segmentation Wedge (Term Premium Peg of 100 Basis Points)
Segmentation wedge M, mean relative to steady state, plotted against the EZ parameter θ for tech. shocks and inv. shocks.
All of these effects are magnified if we abstract from subsidies that make the nonstochastic steady state efficient. With TFP shocks, the mean segmentation distortion is decreasing in θ, so that the welfare gain of the term premium peg is increasing in θ. Just the converse is true for MEI shocks. With θ = 200, the welfare gain of the term premium peg is nearly 1.2 percent for TFP shocks, −0.5 percent for MEI shocks, and 0.68 percent for both shocks.

Another pitfall to responding to the term premium is that it is likely measured with significant error. We thus introduce serially correlated measurement error to the model's observed term premium. With the measured term premium pegged at a mean of 100 basis points, measurement error implies that the central bank is inadvertently introducing exogenous fluctuations in the term premium into the model economy. We set the serial correlation of the measurement error to 0.50. Figure 13.11 sets θ = 50 and plots the welfare gain of the term premium peg as a function of the standard deviation of the measurement error. Since the standard deviation of the term premium in the data is no more than 130 basis points, a reasonable upper bound on the measurement error is 100 basis points. But even with this large degree of noise, the gain to a term premium peg remains substantial.

Table 13.3 provides some sensitivity analysis by combining these two forms of measurement error. As discussed, the gain to a segmentation peg is modestly increasing in risk aversion θ. The gain of the term premium peg is decreasing in both θ and the level of measurement error in the central bank's observation of the term premium. But even with very high risk aversion and a large level of measurement error, the welfare gain of the term premium peg is still significant: 0.07 percent.
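For concreteness, with serial correlation 0.5 the unconditional standard deviation of an AR(1) measurement-error process is σ_ε/√(1 − 0.25), so hitting a 100-basis-point target requires an innovation standard deviation of about 86.6 basis points. The normalization below is our illustration; the chapter does not report the innovation variance:

```python
import random

# Unconditional std of AR(1) measurement error e_t = 0.5*e_{t-1} + eps_t
# equals sigma_eps / sqrt(1 - 0.25).  To target a 100-basis-point std,
# sigma_eps = 100 * sqrt(0.75), roughly 86.6 bp.  Normalization is illustrative.
random.seed(1)
phi, target_std = 0.5, 100.0
sigma_eps = target_std * (1 - phi ** 2) ** 0.5

e, draws = 0.0, []
for _ in range(400_000):
    e = phi * e + random.gauss(0.0, sigma_eps)
    draws.append(e)

mean = sum(draws) / len(draws)
std = (sum((d - mean) ** 2 for d in draws) / len(draws)) ** 0.5
print(std)   # close to the 100-basis-point target
```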
[Figure: welfare gain of the term premium peg in consumption units (vertical axis, –0.1 to 0.2) plotted against the standard deviation of measurement error, σme (horizontal axis, 0–100 basis points), with separate lines for technology shocks, investment shocks, and both shocks.]
Figure 13.11 Welfare Gain of a Term Premium Peg (Measurement Error) Note: Welfare gains are normalized to consumption units so that, for example, 0.05 means a perpetual increase in steady-state consumption of 0.05 percent.
Table 13.3 Welfare Gain of Term-Premium Peg and Segmentation Peg (Both Shocks)

                       Term peg    Segmentation peg
θ = 0,   σme = 0       0.1869      0.1974
θ = 0,   σme = 100     0.1185      0.1974
θ = 50,  σme = 0       0.1759      0.2000
θ = 50,  σme = 100     0.1076      0.2000
θ = 200, σme = 0       0.1431      0.2079
θ = 200, σme = 100     0.0748      0.2079

Note: Welfare gains are normalized to consumption units so that, for example, 0.05 means a perpetual increase in steady-state consumption of 0.05 percent.
434 Fuerst and Mau
13.6 Conclusion
The last decade has demonstrated the creativity of central banks in devising new policy tools to deal with an evolving financial crisis and tepid recovery. A natural question going forward is which of these tools should become part of the regular toolbox of policymakers. One justification for the large-scale asset purchases was to alter the term structure for a given path of the short-term policy rate. That is, the policies were (in part) designed to alter the term premium.6 In analyzing such policies in a DSGE environment, one needs an economic rationale for both the mean and the variability in the term premium. This chapter has investigated two natural choices: a time-varying risk premium for long bonds and a time-varying market segmentation effect. The chapter suggests that the risk approach has difficulty explaining the observed variability in the premium, whereas the segmentation approach can easily match the variability in the data. Further, according to the segmentation approach, this variability is welfare-reducing, so that there is a natural argument for a central bank to use its balance sheet to smooth fluctuations in the term premium. There are at least two concerns to consider when smoothing these fluctuations. The first is that a term premium peg suppresses fluctuations due to real economic activity, that is, time-varying risk effects. The second is that the term premium could be mismeasured, and pegging the observed term premium introduces the measurement error into the economy. This chapter shows that the effect of both of these concerns is modest and that there are considerable welfare gains to smoothing term premium fluctuations.
Notes
1. See Rudebusch and Swanson 2012 for a discussion.
2. Although for simplicity we place capital accumulation within the household problem, this model formulation is isomorphic to an environment in which household-owned firms accumulate capital subject to the loan constraint.
3. It is convenient to separate nominal wage rigidity from the household. If each household set a nominal wage, then labor input would vary across households. This would imply differences in lifetime utility across households. This would needlessly complicate asset pricing, which depends on innovations in lifetime utility because of the EZ preferences.
4. In contrast, Gertler and Karadi (2011; 2013) assume that FIs only pay out dividends upon their exogenous death.
5. We use Dynare to carry out these calculations. It is important to note that our welfare criterion will have the same value for both a second-order and a third-order approximation.
6. Meltzer (2004; 2010) recounts other time periods in which the Fed attempted to alter the term structure.
Term Premium Variability and Monetary Policy 435
References
Adrian, Tobias, Richard K. Crump, and Emanuel Moench. 2013. “Pricing the Term Structure with Linear Regressions.” Journal of Financial Economics 110, no. 1: 110–138.
Bansal, Ravi, and Amir Yaron. 2004. “Risks for the Long Run: A Potential Resolution of Asset Pricing Puzzles.” Journal of Finance 59, no. 4: 1481–1509.
Carlstrom, Charles T., Timothy S. Fuerst, and Matthias Paustian. 2017. “Targeting Long Rates in a Model with Segmented Markets.” American Economic Journal: Macroeconomics 9, no. 1: 205–242.
Epstein, Larry G., and Stanley E. Zin. 1989. “Substitution, Risk Aversion, and the Temporal Behavior of Consumption and Asset Returns: A Theoretical Framework.” Econometrica 57: 937–969.
Fuerst, Timothy S. 2015. “Monetary Policy and the Term Premium.” Journal of Economic Dynamics and Control 52: A1–A10.
Gertler, Mark, and Peter Karadi. 2011. “A Model of Unconventional Monetary Policy.” Journal of Monetary Economics 58, no. 1: 17–34.
Gertler, Mark, and Peter Karadi. 2013. “QE 1 vs. 2 vs. 3 . . . : A Framework for Analyzing Large-Scale Asset Purchases as a Monetary Policy Tool.” International Journal of Central Banking 9, no. 1: 5–53.
Justiniano, Alejandro, Giorgio E. Primiceri, and Andrea Tambalotti. 2011. “Investment Shocks and the Relative Price of Investment.” Review of Economic Dynamics 14, no. 1: 102–121.
Lopez, Pierlauro, David Lopez-Salido, and Francisco Vazquez-Grande. 2015. “Macro-Finance Separation by Force of Habit.” Society for Economic Dynamics 2015 Meeting Papers 980.
Meltzer, Allan H. 2004. A History of the Federal Reserve, Vol. 1. Chicago: University of Chicago Press.
Meltzer, Allan H. 2010. A History of the Federal Reserve, Vol. 2. Chicago: University of Chicago Press.
Rudebusch, Glenn D., and Eric T. Swanson. 2012. “The Bond Premium in a DSGE Model with Long-Run Real and Nominal Risks.” American Economic Journal: Macroeconomics 4, no. 1: 105–143.
chapter 14
Open Market Operations
Jan Toporowski
14.1 Introduction
Open market operations are the buying and selling of securities, foreign currency, or financial instruments by the central bank. They are called open market operations because they are usually conducted by the central bank’s trading desk placing orders to buy or sell in markets for those securities, foreign currency, or financial instruments. These operations are undertaken on the initiative of the central bank. This distinguishes open market operations from discount operations, undertaken on the initiative of commercial banks that sell securities to the central bank in order to obtain reserves to give themselves a more liquid asset portfolio (Toporowski 2006). Open market operations are one of the ways in which central banks make their policy effective, whether that policy is to control the level of reserves or credit in the economy or the rates of interest at which commercial banks lend or take deposits. Historically, as indicated below, open market operations have been used when other instruments such as changes in the rate of interest have failed. However, their potential tends to be underestimated, perhaps because the effects of such operations depend on the complexity of the financial system in which those operations are conducted.
14.2 The Origin of Open Market Operations
It is common to date the start of open market operations to an almost accidental discovery by the US Federal Reserve in 1922. During World War I, the Fed had been active in supporting member commercial banks with extensive loans, as their cash ratios had fallen because the issue of banknotes had not kept up with inflation and
the high level of economic activity. The loans proved to be a profitable part of Federal Reserve banks’ balance sheets as the Fed increased its discount rate on cessation of hostilities, when the wartime ban on gold exports was removed. However, by 1921, the United States was in severe depression, which discouraged lending (and borrowing) at a time when America’s trading partners in Europe, notably the United Kingdom, France, and Germany, were experiencing similar difficulties, complicated by hyperinflation in Germany and elsewhere and the trading of their currencies below their parities with gold, in which most of their wartime debts were denominated. In this situation, American exporters and bankers required foreign payments in gold or the few currencies (such as the Swiss franc) that remained on the gold standard. An inflow of gold into the United States allowed US banks, in particular the New York commercial banks, to repay their loans from the Fed. The Fed discount rate had peaked in 1920 at 7 percent, a rate last seen in the crisis of 1907 that precipitated the establishment of the Federal Reserve in 1913 with a mandate to provide reserves in a more accommodating fashion than had previously happened under the informal system operated by JP Morgan. After reaching its peak, the discount rate was rapidly and significantly reduced to 4.5 percent (see Hawtrey 1926, chap. 5). The result was a squeeze on the Fed’s interest income, as interest rates fell and its loan portfolio contracted, and the Federal Reserve banks decided to diversify their portfolios by buying government securities. This was followed by increased repayments of Fed loans. It was the newly established Statistics Department of the Federal Reserve Bank of New York that alerted the Fed to the fact that member banks were repaying their loans from the Fed with the reserves they were receiving in payment for the government bonds that the Fed was buying.
Significantly, too, loan repayments were not concentrated in the regions where the local Federal Reserve bank was buying government bonds but were geographically dispersed, so that bond buying in Kansas City might give rise to loan repayments in Chicago or New York. The conclusion was drawn that the buying of government bonds affected the credit system in the whole country, rather than just the local market in which the buying was taking place, in other words, that open market operations had systemic effects (Burgess 1964). The result of this discovery was a decision by the governors of the Fed in March 1923 to establish the Open Market Committee of five governors as a committee of the Federal Reserve Board. Open market purchases of securities in 1927 were considered a loosening of credit and had the effect of bringing about an outflow of gold from the United States (Hawtrey 1933, 289–290). The regional Federal Reserve banks resisted central control of their respective securities purchases and sales, and arguments over their powers to determine the assets in their balance sheets continued through until 1935, when the Banking Act effectively abolished the separate portfolios of the individual Federal Reserve banks. The Fed’s “accidental” discovery of open market operations was perhaps a learning experience by what was then the youngest of the key central banks in the international monetary system. The older central banks, with experience of the gold standard going back to its establishment in the middle of the nineteenth century, had been trading in
securities, if not operating explicit open market operations, for decades, if not centuries, before the American discovery. Hawtrey noted that before the abolition of the usury laws in 1833, “when the Bank of England was experimenting in the control of credit, but discount policy was still shackled by the usury laws limiting interest to 5%, the Bank’s chief reliance was on the purchase and sale of securities in the market” (1926, 142). Indeed, the Bank of England (BoE), like other central banks before the twentieth century, had been established explicitly to assist with funding the British government, with the privilege of issuing banknotes against the government bonds that they held. Needless to say, these responsibilities entailed transactions in the securities held in the BoE’s portfolio. In his impertinent “Proposals for an Economical and Secure Currency,” David Ricardo (1816) had noted that on January 5, 1797, the BoE held bonds (“funds yielding interest”) to the value of just more than £15 million. In the following year, this portfolio had increased to £18,951,514. By 1799, the amount of these bonds had increased to £21,912,683. In 1800, the portfolio had been reduced slightly to £20,883,128, before rising again in 1801 to £22,522,709 and in 1802 to £24,319,247. This therefore constituted a substantial portion of the bank’s assets, and its profits were augmented by the sums that the bank was paid for managing the debt of the British, Irish, German, and Portuguese governments (Ricardo 1816). Once the usury laws were abolished by successive legislation from 1833 to 1854, exaggerated powers of economic and financial control were attributed to changes in the rate of interest.
However, the rate of interest did not always prove a helpful or effective instrument of monetary control, least of all when the ruling policy doctrine that emerged in the nineteenth century required a quantitative, rather than a price, restriction on the banknote issue to the amount of gold reserves available to back that issue. The currency theorists, who proved to be most influential, were of the opinion that the rate of interest would be sufficient to sustain that gold link, through some kind of international specie-flow adjustment in response to interest-rate differentials. But this was confused with arguments over the supposed role of the rate of interest in determining the equilibrium between saving and investment (Corry 1962; Hawtrey 1933, 136). The result was the Bank Charter Act of 1844, which gave the BoE the monopoly of the note issue in England and Wales but now tied that note issue to gold reserves. The act was a watershed in the practice of central banks and was widely discussed and emulated as the gold standard emerged to become the mechanism for international banking transactions. Its significance for open market operations lay in the shift in the purpose of those operations. Open market operations to provide liquidity for markets in longer-term securities were effectively abandoned. With the discount rate proving to be sluggish in affecting the inflow of gold reserves, open market operations were now subordinated to the task of controlling the note issue. By then, in any case, the BoE was operating in the discount market, trading securities with much shorter-term maturities. The discount market was until the 1970s the effective money market in London. This reflected the way banking developed in Britain, with country banks managing their cash positions by discounting their bill holdings with the London discount houses. The
discount houses themselves would, in times of cash shortage, turn to the BoE to discount trade bills and Exchequer or Treasury bills. The practice had been established at the end of the eighteenth century that the BoE would discount such bills. But the bank’s ability to make a market in them was constrained by the usury laws (limiting interest to a maximum of 5 percent) and, subsequently, by the need to keep its note issue within the bounds of its gold reserves plus the fiduciary issue, and by the bank rate that was its main policy instrument. The bank would therefore alleviate modest cash shortages on a routine basis unless an obvious and compelling payments emergency arose in the discount market (Hawtrey 1933, 117–122). In cases of such emergency, the BoE learned to provide accommodation, borrow gold reserves, and raise interest rates, as the emergency demanded. But in normal times, open market operations were restricted because the bank rate seemed to offer an effective instrument of monetary control. Yet the bank rate made better press headlines than it delivered in practice, and open market operations were brought back as it gradually became standard practice for central banks to buy securities in order to alleviate the effects on commercial banks, or discount houses in London, of an outflow of gold. Walter Bagehot in his classic Lombard Street had noted the BoE’s practice of buying securities in a “panic” to alleviate cash shortages among banks and merchants and famously encouraged this operation (Bagehot 1873, 52). The preferred securities tended to be short-term, for the simple reason that they were less subject to fluctuations in their price. By the end of the nineteenth century, open market operations had become a way of enforcing the BoE’s policy rate of interest, the bank rate.
Selling securities was referred to as making the bank rate effective, at a time when the interbank market of the commercial banks operated using Treasury bills. Keeping the money markets “short” of liquidity obliged commercial banks to sell securities to the BoE. In this way, open market operations came to be not so much an alternative to interest-rate policy but more of a necessary supplement to such policy (Sayers 1957, 49–52; Sayers 1936).1 By the twentieth century, central banks in Europe used open market operations to offset such outflows or just to bring market rates of interest back to the central bank’s official or preferred rate (Hawtrey 1934, 55–56; see also Sayers 1957, chap. 5). By the mid-1930s, after the abandonment of the gold standard, when reduced interest rates failed to stimulate an economic recovery, it became common to blame the burden of debt for this failure in monetary policy. Ralph Hawtrey called it a “credit deadlock,” or what Irving Fisher called “debt deflation,” a situation in which economic activity is frustrated by excessive debt which makes firms unwilling to borrow even at low interest rates. Hawtrey then recommended that “a credit deadlock which is impervious to cheap money may (thus) yield to treatment through open market purchases of securities” by the central bank. Such operations would make banks more liquid and therefore more inclined to lend (Hawtrey 1938, 256; see also Sandilands 2010). The scope and scale of the operations varied among countries according to custom, the structure of banking markets, and statutes of the central bank. The Banque de France, for example, was not allowed until 1938 to use open market operations to control credit but could use them to “smooth out” the flow of reserves
to banks (Wilson 1962a, 48–51). Similarly, the central bank of the Netherlands, De Nederlandsche Bank, was restricted in its open market operations until 1937 and subsequently by a shortage of Treasury bills. Nevertheless, according to one authority, De Nederlandsche Bank was responsible for one major innovation that was to have a lasting effect on later open market operations, the repurchase agreement, first used by De Nederlandsche Bank in December 1956 (Wilson 1962b, 227), although its priority in this innovation is disputed.2 The history of open market operations confirms their use usually as an alternative to interest-rate policy when that policy cannot be used and as a supplement to such policy when it appears to be ineffective, as hinted at by Hawtrey. Thus, in the two and a half decades or so following World War II, when interest rates were controlled by central governments rather than central banks, open market operations were extensively used by central banks for the purposes of credit control and to control exchange rates through open market operations in foreign currency. The emergence of the Eurocurrency markets of unregulated transactions in credit and foreign currency in the 1960s made control of credit and exchange rates increasingly difficult, as well as casting doubt on the wisdom and usefulness of open market operations whose effects beyond the immediate counterparties of such operations were becoming less and less predictable. This was the background to the publication by the BoE of “Competition and Credit Control,” which recommended a move toward the more extensive use of market mechanisms, including more targeted open market operations (Bank of England 1971; Goodhart 2014). This discussion was the prelude to a period of banking and financial crisis in which, as with the 2007–2009 crisis, central bank activity was concentrated on securing financial stability.
By the 1980s, monetarist theory provided the ruling doctrine of central bank operations. In the view of the theory’s chief exponent at the time, Milton Friedman, open market operations had a special place in the control of the money supply; the sale of securities reduced commercial bank reserves, and a stable credit multiplier ensured that the total amount of credit in the economy would be set by the amount of those reserves. The sale of securities to a commercial bank would be paid for by a transfer from that bank’s reserve account at the central bank to the central bank, which would then cancel those reserves. The purchase of securities would be paid for by the creation of reserves that are credited to the reserve account of the commercial bank that sells those securities. The effect on commercial bank reserves is the same, whether the central bank buys from a commercial bank or from a customer of a commercial bank. In the latter case, the purchase price is credited to the commercial bank’s reserve account at the central bank, and the commercial bank credits the same amount to its customer’s account. In the case of a sale, the customer’s account is debited with the amount that the commercial bank transfers from its reserves to the central bank. For Friedman, open market operations were the way to regulate the money supply (Friedman 1959; Bank of England 1983). In the classic statement of monetarist central bank practice, Goodfriend and King (1988) urged that open market operations be used to supply reserves to the banking
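The reserve-accounting mechanics just described can be sketched in a few lines of Python. This is an illustrative toy, not from the chapter; the class, bank names, and amounts are hypothetical.

```python
class CentralBank:
    """Tracks commercial banks' reserve accounts at the central bank."""

    def __init__(self):
        self.reserves = {}  # bank name -> reserve balance

    def open_market_purchase(self, bank, amount):
        # Central bank buys securities: the payment is credited to the
        # selling bank's reserve account, creating new reserves. A purchase
        # from a bank's customer has the same reserve effect, since the
        # customer's bank is credited and passes the sum on as a deposit.
        self.reserves[bank] = self.reserves.get(bank, 0) + amount

    def open_market_sale(self, bank, amount):
        # Central bank sells securities: the buying bank pays out of its
        # reserve account, and those reserves are extinguished.
        self.reserves[bank] = self.reserves.get(bank, 0) - amount


cb = CentralBank()
cb.open_market_purchase("Bank A", 100)  # reserves rise by 100
cb.open_market_sale("Bank A", 40)       # reserves fall by 40
print(cb.reserves["Bank A"])            # 60
```

The point of the sketch is simply that the reserve effect is symmetric and mechanical: purchases create reserves, sales cancel them, regardless of who the immediate counterparty is.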
system, which could then allocate those reserves through the money market. If discount window assistance is provided, it should be sterilized, that is, the central bank should sell in the open market the assets it had bought through the discount window. In this way, the overall reserve position of the banking system is kept constant, and the central bank, in effect, sells assets on behalf of the distressed bank. However, this procedure is considered by Goodfriend and King to be inferior because it requires the central bank to monitor and supervise the bank to which assistance is being provided. Goodfriend and King contrast this “monetary policy” with “banking” by which unsterilized assistance is provided.3 Open market operations therefore reached a peak when central banks adhering to this reserve position doctrine sold securities in an attempt to regulate the level of reserves in the banking system (Bindseil 2004; Goodhart 1975, 156–160). A brief period of exchange-rate targeting between 1986 and 1991, following the Plaza Accord of 1985 by which the US Treasury and the Fed, the West German Bundesbank, the Banque de France, the Bank of Japan, and the BoE agreed to devalue the US dollar, was marked by a reversion of open market operations to transactions in foreign currencies. Since the 1980s, the policy framework directing central bank operations has centered on the central bank’s policy rate of interest as the key instrument of monetary policy. Initially, it was felt that this needed to be reinforced by open market operations to ensure that the supply of reserves met demand in the money market at an equilibrium rate of interest that was the central bank’s preferred rate. However, it was quickly realized that the official rate could be enforced more effectively simply by maintaining a narrow enough “corridor” between the deposit and lending rates for reserves at the central bank.
In the practice of the European Central Bank (ECB), this was done through “standing facilities” for commercial banks to deposit reserves at the central bank and borrow reserves at the central bank’s official rates. These facilities allow commercial banks to borrow reserves from the central bank or to place excess reserves with it. Thus, standing facilities are a way in which commercial banks can influence, at their discretion, the balance sheet of the central bank, since decisions to use standing facilities are made by commercial banks rather than the central bank. By the early years of the twenty-first century, open market operations to reinforce the policy rate of interest had virtually been abandoned by central banks. The notable exception was Japan, where the Bank of Japan has been buying securities since the turn of the century in a desperate effort to stave off deflation (Doi, Ihori, and Mitsui 2007). In the developing countries, open market operations in convertible currencies are commonly used to stabilize exchange rates that have a large influence on the domestic price level (Levy-Orlik and Toporowski 2007). Even in countries, such as Thailand, where the central bank is targeting inflation and where the exchange rate is officially allowed to float, central banks will often intervene in the foreign exchange market to prevent a depreciation of the domestic currency that could put upward pressure on the domestic rate of inflation.
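The corridor logic described above can be illustrated with a toy clamp: because any bank can always deposit reserves at the deposit rate or borrow at the lending rate, arbitrage keeps the interbank rate between the two standing-facility rates. The rates in the example are hypothetical, not actual ECB figures.

```python
def corridor_rate(market_rate, deposit_rate, lending_rate):
    """Interbank rate bounded by the standing facilities: no bank lends
    below the rate it can earn at the central bank's deposit facility,
    and none borrows above the central bank's lending rate."""
    return max(deposit_rate, min(market_rate, lending_rate))


# Hypothetical corridor of 50 basis points either side of a 2.00% policy rate:
deposit, lending = 1.50, 2.50
print(corridor_rate(2.10, deposit, lending))  # 2.1 (inside the corridor)
print(corridor_rate(0.90, deposit, lending))  # 1.5 (deposit floor binds)
print(corridor_rate(3.20, deposit, lending))  # 2.5 (lending ceiling binds)
```

The narrower the corridor, the closer the market rate is forced to the policy rate, which is why a tight corridor can substitute for open market operations in enforcing the official rate.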
14.3 Open Market Operations Today
In conducting open market operations, it is sometimes useful to distinguish between “outright” purchases and sales of securities, on the one hand, and “reverse” purchases or “repos” or reverse sale agreements with participants in the money market, on the other. A reverse purchase agreement, for example, involves the purchase of risk-free longer-term securities, such as government bonds, for which payment is added to the reserves held at the central bank by the counterparty bank. The agreement then specifies the sale of those securities back to the counterparty bank after a certain period. In the ECB’s practice, this was initially two weeks, but it is now one week. The difference between the purchase price and the sale price is in effect the rate of interest on the temporary addition to its reserves that the counterparty bank now obtains. This rate of interest on repurchase agreements (repos) became the official policy rate of the ECB, the BoE, and most European central banks during the 1990s. These open market operations may be repurchase agreements, providing reserves to counterparty banks, or they may be reverse sale agreements, reducing the reserves of banks. For central banks, repurchase agreements have the attraction that reserves supplied are not supplied as loans to counterparty banks but are exchanged for top-quality assets from banks’ portfolios. Their other attraction is that they allow central banks to inject (or withdraw) reserves over a fixed time horizon. This temporary accommodation offers central banks a way of acceding to any current requirement for reserves from commercial banks without a commitment to provide such reserves in the future that could remove the incentive for sound bank management that a reserve system is supposed to provide.4 Once repurchase agreements are in operation, then obviously, their effect on the money markets is the net outstanding amount at any one time.
So if there are weekly repurchase agreements, the central bank’s manager responsible for money-market operations decides, when the agreements expire, whether to renew them or to increase or decrease the amount. An increase in the amount of purchase agreements (or a decrease in sales agreements) would supply additional reserves. A decrease in purchase agreements outstanding (or an increase in sales agreements) would reduce bank reserves. The other kind of open market operation is the outright purchase and sale of securities without commitment to sell or buy back. These are longer-term portfolio operations of central banks and may include central bank issues of their own paper such as the short-term euro notes issued by the BoE. Outright purchases and sales of securities or foreign currency are undertaken with a view to changing the asset portfolio of the central bank and hence provide reserves to the banking system or reduce those reserves until such time as the central bank wishes to change its reserve provision. Reverse purchases (repos or resales) involve entering simultaneously into an agreement with the selling (or buying) bank to sell
(or buy) back the securities at a fixed later date. Repos therefore provide (or reduce) reserves by a fixed amount for a fixed period of time. The price difference between the purchase and the resale of the securities therefore constitutes the discount equivalent to the rate of interest on the reserves sold (or bought). Repos thus allow central banks to fix the period of time for which the reserves are provided (Bindseil and Würtz 2007). From the 1990s onward, the BoE and subsequently the ECB have used the repo rate as their policy rate, in the belief that this facilitates control over the rates of interest in the interbank market, without significantly affecting the liquidity of the markets in the longer-term securities that were the subject of these operations (Toporowski 2006).
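The mapping from the purchase/repurchase price difference to an annualized repo rate can be made concrete with a short calculation. The prices, the one-week term, and the actual/360 day-count convention below are assumptions for illustration, not figures from the chapter.

```python
def implied_repo_rate(purchase_price, repurchase_price, days, day_count=360):
    """Annualized rate implied by the central bank buying securities at
    purchase_price and selling them back at repurchase_price after
    `days` days (simple interest, actual/360 money-market convention)."""
    period_return = repurchase_price / purchase_price - 1
    return period_return * (day_count / days)


# Hypothetical one-week repo: securities bought at 100, sold back at 100.07.
rate = implied_repo_rate(100.0, 100.07, days=7)
print(f"{rate:.2%}")  # 3.60%
```

The counterparty bank's reserves rise by the purchase price for the week; the price difference it pays on repurchase is the interest on that temporary accommodation, which is why this rate could serve as the official policy rate.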
14.3.1 Quantitative Easing
The reliance of central banks on the repo rate as their key policy instrument was challenged in the financial crisis that broke out in 2008 and subsequently, as repo rates approached zero without significantly reversing the onset of deflation. The response of central banks to the banking and financial crisis in 2008 was a dramatic switch to outright purchases, targeted in large measure at improving the liquidity of markets for long-term securities but also applying a “monetary stimulus” to slowing economies. The Fed, the BoE, and, on a more modest scale, the ECB bought large quantities of securities at, or near, the bottom of the market to inject liquidity. This allowed the central banks to earn capital gains as the market recovered (Fawley and Neely 2013; see also Bank of England 2015). But central banks operate with thin capital, so these operations have left central bank balance sheets looking remarkably similar to those of hedge funds. Between 2007 and 2015, the balance sheet of the Fed expanded by a factor of five to nearly a quarter of US GDP. By comparison, the assets of the ECB have increased by around 50 percent, and the assets of the Bank of Japan have increased from 20 percent of Japanese GDP to nearly 60 percent of GDP. Perhaps most extraordinary among European central banks, the Swiss National Bank, which in 2011 had committed itself to a minimum exchange rate against the euro, expanded its assets from 17 percent of Swiss GDP in 2007 to nearly 80 percent before the bank abandoned the peg against the euro in January 2015. The large-scale purchases of long-term securities by central banks, or quantitative easing (QE) as it was dubbed, marked a return to open market operations by central banks (Gagnon et al. 2010; Benford et al. 2009).
These differ from more traditional open market operations in that they are undertaken without any target level of bank reserves, interest rates, or bond prices but with a preannounced value of securities to be purchased by the central bank. Preannouncement is intended to maximize the effect of the purchases on liquidity in the market for the securities, by causing a rush of private investors to acquire the securities before their prices rise as a result of central bank purchases. The purchases themselves inject reserves into the banking system with a view to stabilizing that system and encouraging lending (Joyce, Tong, and Woods 2011).
14.4 Open Market Operations and the Structure and Operations of the Financial System
The standard textbooks on monetary economics and banking suggest that open market operations are merely another instrument in the toolbox that central bankers bring to their work in monetary policy, bank supervision, and so on. However, as the history of open market operations reveals, context is everything in understanding what such operations do, what they can do, and how they are understood. Whatever their purpose, the context determines their scope and effectiveness, and this justifies a consideration of those broader factors that affect such operations.
14.4.1 Banking and Capital Market Systems
One of the determinants of the scope and effectiveness of open market operations is the structure of the financial system. The standard distinction is that between bank and capital market systems (see Allen 1995; see also Mayer and Vives 1995). Bank systems were common until recently in continental Europe (e.g., Germany, France, Italy) and Asia (India, Japan, Thailand). In such systems, banks lend to household and commercial customers for terms that are both short and long and hold the loans on their balance sheets, even extending them by agreement with their customers. Markets for long-term securities exist in such systems. But the securities traded in them are largely government paper, and typically, the commercial banks themselves make markets in the securities issued by the largest corporations. In bank systems, open market operations are restricted by the smaller size of the capital market, although this means that central bank buying and selling of securities has a much bigger impact. At the same time, commercial banks may be much more dependent on liquidity from the central bank, because their loan book is much less liquid than the loan books of banks in capital-market-based systems. Typically, commercial banks in bank systems ease their liquidity by trading government securities among themselves and avail themselves of discount facilities at the central bank. But the greater stability of bank systems tends to facilitate the planning of liquidity by commercial banks, with the result that there is less resort to liquidity hedging and a reduced need for central bank open market operations when that hedging breaks down. Capital market systems are usually identified with the financial systems of the United States, Great Britain, and those countries whose financial systems have been formed in the image and likeness of their English-speaking counterparts.
As their name suggests, these systems have much more active capital markets, which allow commercial banks to manage their liquidity more effectively by holding large quantities of negotiable
Open Market Operations 445
securities. Capital market systems are nowadays supposed to be more efficient than bank systems, because the market in loans facilitates "price discovery" (Hasbrouk 2007). However, an older view regards the liquidity of the secondary market, or the market for "old" securities as it used to be called, as the key factor determining activity in the financial system (Lavington 1921, 225). The liquidity of the secondary market, in turn, determines prices in that market, and these are the prices at which borrowers may issue new securities (Keynes 1930/1971a, 222–223). Accordingly, open market operations are deemed to play a much more important role in regulating capital market systems. In addition to overlooking other important features of financial systems, the commonly made dichotomy between bank- and capital-market-based systems ignores the appearance in many countries of conglomerate banks, that is, banks that are members of conglomerate groups. These may be found even in financially advanced countries with sophisticated financial markets.5 By pooling liquidity, the conglomerate can economize on financial resources. By cross-shareholding, the owners of conglomerates can retain control while maximizing outside share capital. Because of their industrial activities, conglomerates seem to offer a ready transmission mechanism of central bank monetary policy. However, the internalization of financing and capital allocation within conglomerates reduces their resort to outside financing and hence to the banking and financial markets through which the monetary transmission mechanism is supposed to work. Moreover, the conventional wisdom among bank regulators disapproves of bank conglomerates because of their lack of transparency and undiversified asset portfolios.
Nevertheless, the predominance of such conglomerates in developing and emerging market economies may be a factor in limiting open market operations to foreign currency securities and government paper.
14.4.2 Financial Trading

Financial systems are not just immutable traditions or "havens of familiar monetary practice"6 specific to particular countries and hallowed by the routine business of generations past. Financial systems themselves have evolved, and with them have evolved the open market operations of central banks and their predecessors. The principal factor in this evolution has been the evolution of financial trading. The earliest central banks (the Swedish Sveriges Riksbank established in 1668 and the BoE in 1694) were established specifically to manage government debt, in return for the right to issue banknotes. It is, therefore, not surprising that their trading of securities concentrated on government paper, as well as paper issued by the companies chartered with trading concessions by the government, such as the East India Company. At the time, these were private banks with the distinction of having limited liability. Their business rapidly developed and changed with the proliferation of trade and the bills of exchange that merchants used to finance trade. In the Netherlands, De Nederlandsche Bank was established in Amsterdam rather than in the capital, The Hague, because Amsterdam was the commercial center. London also became a crucial
446 Toporowski center for trade in such bills, because the BoE’s privilege of note issue allowed the bank greater latitude in the discount trade than other banks, limited by their reserves of gold and BoE notes, could offer. At the same time, the short maturity (up to three months) of commercial and Treasury bills meant that any “fiduciary” note issue to discount bills could be rapidly canceled when the bills were redeemed on maturity. The reflux of banknotes in the discount trade was the basis of the “Real Bills Doctrine,” according to which the note issue to buy bills could not be inflationary (Humphrey 1982). The Currency School proceeded with a critique of this doctrine, arguing that the note issue to discount bills was inflationary. The doctrines of monetarism thus have their roots in the credit practices of the merchants of the City of London and the discount business of the BoE (Niebyl 1946, chap. 7). However, the question of the maturity of the instruments traded by the central bank has an important influence on the scope and effectiveness of open market operations and is discussed in section 14.5 below.
14.4.3 The Establishment of Joint Stock Companies

The third watershed in the evolution of open market operations came in the second half of the nineteenth century with the Companies Acts allowing the routine establishment of joint stock companies. This greatly expanded the amount of nongovernment bonds that could be traded in open market operations. On the whole, central banks have avoided buying and selling shares in companies.7 But corporate bonds have been more regularly used as collateral for central bank loans and, on some occasions, added to central bank portfolios through open market operations. Through such operations, the central bank was able to influence the liquidity of the secondary market for long-term securities, as well as bank liquidity.
14.4.4 World War II

Open market operations were used through the first decades of the twentieth century up to World War II, with particular challenges during the interwar decades when international debt hung over and depressed European economies. World War II was the fourth major watershed in the evolution of open market operations. Having experienced the difficulties of external debt management in the interwar period, governments now tried to ensure that as much as possible of their debt was domestic, and central banks assisted with open market operations to facilitate the financing of the war effort through the banking system, at low rates of interest. After World War II, the wartime debt overhang and the new era of the welfare state provided fresh challenges for government debt management. A particular innovation of this time was the tap stock, a government bond issued in Britain to the BoE, which would then sell it on into the bond market to regulate liquidity in that market. As colonies achieved independence, new central banks were established to manage currencies
in place of the currency boards that had supplied currency in the colonies. For the most part, the newly independent countries lacked developed financial markets and so offered few opportunities for open market operations except in foreign currencies to maintain exchange-rate pegs (see Allen 2014, chaps. 6 and 12).
14.4.5 The Abolition of Foreign Exchange Controls

The final watershed arose with the widespread abolition by countries in Europe, North America, and Japan of foreign exchange controls in the final decade of the twentieth century. As under the gold standard, the major financial centers now operated in integrated cross-border markets. But unlike under the gold standard, banking was no longer national but increasingly cross-border. In these circumstances, any central bank undertaking open market operations is in effect operating in global markets and may therefore need to operate on a sufficient scale to have global impact, even if that central bank only wants to have local impact. This watershed marks the beginning of the withdrawal of central banks from their previous regulation of liquidity in capital markets. At this point, it is helpful to consider more systematically the policy that guides open market operations.
14.5 Open Market Operations and the Policy Framework

The academic discussion of open market operations is heavily tilted toward their monetary implications, that is, their effects on the supply of money or bank reserves or the rate of interest on borrowing money (see, for example, Allen 2007). This has been reinforced by the statutory frameworks adopted by central banks in Europe banning the buying of government stocks, on the grounds that this may be inflationary and certainly contrary to the central bank's chief goal of maintaining the value of the currency unit for which it bears a unique responsibility.8
14.5.1 The Reserve Position Doctrine

The monetary consequences of particular operations therefore dictate the kind of open market operations that are undertaken. The reserve position doctrine (that the central bank should operate its monetary policy in order to control the amount of reserves of commercial banks) would indicate that open market operations should be conducted in one of two ways. The first is that the central bank should buy or sell short-term paper, because the buying of the paper or bill will reverse itself when the bill matures and is repaid by its
issuer to its current owner, the central bank, by a transfer of bank reserves to the central bank.9 If the central bank wishes to effect a lasting increase or decrease in the quantity of bank reserves, then the central bank should buy or sell longer-term securities. A lasting increase or decrease in commercial bank reserves may be obtained by using short-term paper, but this may require a higher turnover in the portfolio of discount bills that may be costly and unnecessary. Alternatively, the operations may be conducted using repurchase or resale agreements that allow the central bank to "inject" reserves into the interbank markets or take reserves out regardless of the maturity of the securities bought and sold in operations that are reversed by the central bank by agreement with its counterparty but at a term that the central bank can set. However, in the case of emerging markets, the reserve position of commercial banks is heavily affected by capital inflows and outflows. Capital inflows expand commercial bank reserves, either as claims on foreign currency deposits that a commercial bank acquires or as reserves at the central bank if the latter buys in the foreign currency reserves, usually to stabilize or depreciate the exchange rate. These inflows and outflows of reserves caused by exogenous factors may then require open market sterilization operations in order to stabilize the quantity of commercial bank reserves. There is, however, an additional complication in using open market operations in order to regulate the amount of bank reserves. This complication arises from an implicit assumption in the reserve position doctrine that open market operations do not affect the liquidity of markets for longer-term securities and, hence, the velocity of circulation of commercial bank reserves (see Toporowski 2006).
If that velocity is, in fact, affected by the open market operations undertaken to stabilize the commercial banks’ reserves, then the possibility arises that stabilizing commercial bank reserves may destabilize the capital market and other forms of credit, as in fact happened during the 1980s, when the BoE depressed the capital market by “overfunding,” that is, selling more stock than was required by the government borrowing requirement (Bank of England 1984).
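The sterilization arithmetic described above can be sketched in a few lines. This is a stylized illustration only: the figures and the function name are hypothetical, and real sterilization operations involve pricing, maturities, and counterparty choices that are abstracted away here.

```python
# Hypothetical sketch: sizing a sterilization sale of securities to offset
# reserve creation from a capital inflow. When the central bank buys in
# foreign currency, it credits commercial banks' reserve accounts
# one-for-one; an open market sale of securities drains reserves by the
# sale amount.

def sterilization_sale(reserves_target, reserves_current, fx_inflow):
    """Return the value of securities the central bank must sell to bring
    commercial bank reserves back to target after an inflow (0 if reserves
    remain at or below target)."""
    post_inflow_reserves = reserves_current + fx_inflow
    return max(0.0, post_inflow_reserves - reserves_target)

# Banks start at their target of 100; a 30-unit inflow is fully sterilized.
print(sterilization_sale(reserves_target=100.0, reserves_current=100.0,
                         fx_inflow=30.0))  # 30.0
```

The sketch also makes the chapter's caveat concrete: the sale itself changes conditions in the securities market, so "stabilizing" reserves this way is not neutral for capital market liquidity.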
14.5.2 The Rate of Interest as the Key Policy Instrument

If, however, the central bank is using the rate of interest as its key policy instrument, which most central banks have been doing since the latter half of the 1980s, then, as indicated earlier, open market operations may be wholly unnecessary. The monetary transmission mechanism is usually presumed to operate through the interbank market, and the interbank rate can be kept on target simply by maintaining a suitable "corridor" of interest rates for deposits and loans of reserves at the central bank. Open market operations may then be undertaken in order to increase or reduce the liquidity in the financial markets.
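The corridor logic can be illustrated with a minimal sketch. The rates below are invented round numbers, and the clamping rule is a stylization of how standing deposit and lending facilities bound the interbank rate; it is not a model of any particular central bank's operating framework.

```python
# Stylized corridor: banks will not lend reserves below the central bank's
# deposit facility rate (they could just deposit at the central bank), and
# will not borrow above its lending facility rate (they could just borrow
# from the central bank). The interbank rate is therefore confined to the
# corridor regardless of market pressure.

def interbank_rate(market_pressure_rate, deposit_rate, lending_rate):
    """Clamp the rate implied by interbank supply and demand into the
    corridor [deposit_rate, lending_rate]."""
    return min(max(market_pressure_rate, deposit_rate), lending_rate)

# With a 2%-3% corridor: upward pressure is capped, downward pressure floored.
print(interbank_rate(3.4, 2.0, 3.0))  # 3.0
print(interbank_rate(1.5, 2.0, 3.0))  # 2.0
print(interbank_rate(2.6, 2.0, 3.0))  # 2.6
```

The point of the sketch is the one made in the text: once the corridor does the work of keeping the policy rate on target, open market operations are freed for managing market liquidity rather than pinning the rate.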
14.5.3 Fiscal Implications of Open Market Operations

Finally, open market operations have fiscal implications, as well as the more obvious monetary and liquidity effects. A central bank is a government agency, and traditionally, open market operations have been undertaken using government bonds. Such bonds held in the central bank portfolio have therefore, in a financial sense, at least, been bought back by the government. The interest paid on them adds to the profits of the central bank, and those profits are an income to the government sector. So, for example, by 2014, the gross debt of the Japanese government amounted to almost two and a half times the GDP of Japan. However, in massive open market operations since 2001 (see Sandilands 2010; Doi, Ihori, and Mitsui 2007), the Bank of Japan has acquired half of those bonds, so that the net debt of the Japanese government is really only half of the total gross debt; the remainder is merely a bookkeeping transfer within the government balance sheet. Open market operations, therefore, have fiscal implications. Insofar as it contains government bonds, the portfolio that the central bank holds reduces net government borrowing. Insofar as it contains private-sector securities, it consists of claims by the central bank on the private sector. The exception here is the ECB and the central banks in the franc zone, which belong to groups of governments. The latter do not engage in open market operations using government paper, while the ECB uses a wide range of private-sector bonds in its open market operations, and its operations in government securities are controversial and carefully scrutinized. In considering the intrinsic link between open market operations and fiscal policy, it is worth mentioning the views of two rival economists with very different monetary theories who reflected on this link but nevertheless came to not altogether dissimilar conclusions.
Henry Simons is well known for advocating 100 percent reserves in the “Chicago Plan for Banking Reform” which he drew up in 1933. But he also drew the logical conclusion from this, that the Fed, whose loose credit policies he blamed for the speculative “bubble” in the stock market before 1929, should be abolished. This would allow for the concentration of responsibility for monetary policy in the US Treasury, which, through its open market operations in US government debt, effectively determines the quantity of money in the banking and financial system.10 His rival John Maynard Keynes also reflected on the role and principles by which open market operations should be conducted. In A Treatise on Money, Keynes commended the use of open market operations as the essential way of regulating the reserves held by commercial banks.11 Subsequently, Keynes came to the conclusion that open market operations held the key to managing the liquidity of the market in long-term securities. This liquidity determines the long-term rate of interest, rather than the official central bank rate of interest. For Keynes, that long-term rate is the rate of interest that is relevant to the fixed capital investment, which determines output and employment. This view played a key part in his advice to the government on debt management (Keynes 1945/1980, 396–400; see also Tily 2007, chap. 7). Thus, he commended open market
operations to maintain the liquidity of corporate and government bonds and keep down the cost of long-term finance for business. Keynes's analysis was the foundation for the "stable bond market" policy of the postwar period. At the time, this was justified by the high proportion of commercial bank assets that was held in the form of medium- and long-term government securities. Open market sales of such securities could therefore squeeze the balance sheets of commercial banks more effectively than changes in the discount rate (Sayers 1948, 121–122). Keynes's close collaborator, Richard Kahn, went even further and urged the extension of the scope of monetary policy and open market operations to include the whole yield curve, rather than just the short- or long-term rates of interest or the reserve position of banks (Kahn 1972).
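The net-versus-gross debt arithmetic for Japan described above can be stated numerically. This is a back-of-the-envelope sketch using the round figures from the text (gross debt of almost two and a half times GDP, roughly half held by the Bank of Japan); the function name is an illustrative choice, not official terminology or data.

```python
# Stylized consolidation: bonds held by the central bank are, in a financial
# sense, held within the government sector, so interest on them returns to
# the government as central bank profits and they net out of consolidated
# government debt.

def net_debt_ratio(gross_debt_to_gdp, central_bank_share):
    """Gross government debt net of the central bank's holdings,
    expressed as a ratio to GDP."""
    return gross_debt_to_gdp * (1.0 - central_bank_share)

# Gross debt ~2.5x GDP, roughly half held by the Bank of Japan:
print(net_debt_ratio(2.5, 0.5))  # 1.25
```

The sketch abstracts from valuation effects and from the question of whether the central bank's holdings will eventually be sold back into the market, both of which matter for the fiscal interpretation.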
14.6 Conclusion

Open market operations by central banks, providing liquidity through the trading of debt, are arguably the oldest activity of central banks, predating the use of the discount or interest rate as an instrument of policy. The use of such operations is essential in times of banking "panic" or crisis, and they have been used at other times to manage the reserve position or liquidity of banks or to support the official rate of interest in the money market. Thus, the effectiveness of such operations depends not only on the state of the economy but also on the complexity and liquidity of the financial system. Recent open market operations in the form of repurchase agreements or QE have not altered these fundamental characteristics of such operations.
Notes

1. Hicks suggested that the BoE exceeded its official mandate in this regard (1967, 168).
2. In A Treatise on Money, Keynes noted that before World War I, "'open market policy' in the modern sense was virtually unknown. The Bank of England would occasionally supplement its bank-rate policy by the sale of consols for 'cash' and their simultaneous repurchase for 'the account' (anything up to a month later) which was an indirect way of relieving the money market of the equivalent quantity of resources for the unexpired period of the stock exchange account" (1930/1971b, 204–205).
3. For a later view, see Goodfriend 2003.
4. "[A]n efficient, safe and flexible framework for banking system liquidity management . . . should retain incentives for banks to manage their own liquidity actively and prudently" (Bank of England 2004, 218).
5. For example, the General Electric Company or Berkshire Hathaway in the United States, the zaibatsu Mitsubishi and Sumitomo in Japan, Koç Holdings in Turkey, or Anglo-American Corporation in South Africa.
6. Oliver Sprague, an adviser to the US government and the BoE in the 1930s, was referring to the gold standard in League of Nations 1930.
7. A major exception was the rescue of the Hong Kong Stock Exchange by the Hong Kong Monetary Authority in 1998. See Goodhart and Dai 2003.
8. This may be contrasted with an earlier, but no less authoritative, view at that time that "it is important that the central bank should be, from the start, the government's banker. In highly developed systems, the importance of coordinating debt management and credit policy is universally recognized, and this state of affairs will come more naturally if the central bank always handles all (or nearly all) the government's financial business. This is not merely a matter of technical arrangements; it is also at least as importantly a question of the direction from which the government seeks financial advice" (Sayers 1957, 119).
9. This is the principle of reflux behind the Real Bills Doctrine that a note issue to discount trade bills could not be inflationary.
10. According to Simons, the monetary authorities "must focus on fiscal policy as its mode of implementation," with public expenditure being the means by which to inject money into the economy. He went on: "The Treasury might be given freedom within wide limits to alter the form of public debt—to shift from long-term to short-term borrowing or vice versa, to issue and retire demand obligations in a legal-tender form. It might be granted some control over the timing of expenditures. It might be given limited power to alter tax rates by decree and to make refunds of taxes previously collected" (Simons 1936/1948, 175).
11. "The new post-war element of 'management' consists in the habitual employment of an 'open market policy' by which the Bank of England buys and sells investment with a view to keeping the reserve resources of the member banks at the level which it desires. This method—regarded as a method—seems to me to be the ideal one" (Keynes 1930/1971b, 206).
References

Allen, F. 1995. "Stock Markets and Resource Allocation." In Capital Markets and Financial Intermediation, edited by C. Mayer and X. Vives, 81–87. Cambridge: Cambridge University Press.
Allen, W. A. 2007. "The Scope and Significance of Open Market Operations." In Open Market Operations and Financial Markets, edited by D. G. Mayes and J. Toporowski, 36–53. London: Routledge.
Allen, W. A. 2014. Monetary Policy and Financial Repression 1951–1959. Basingstoke: Palgrave.
Bagehot, W. 1873. Lombard Street: A Description of the Money Market. London: Henry S. King.
Bank of England. 1971. "Competition and Credit Control." London, May.
Bank of England. 1983. "The Bank of England's Operational Procedures for Meeting Monetary Objectives." Bank of England Quarterly Bulletin (June): 209–215.
Bank of England. 1984. "Funding the Public Sector Borrowing Requirement: 1952–1983." Bank of England Quarterly Bulletin (December): 482–492.
Bank of England. 2004. "Reform of the Bank of England's Operations in the Sterling Money Markets." Bank of England Quarterly Bulletin (Summer): 217–227.
Bank of England. 2015. "The Bank of England's Sterling Monetary Framework." London, June.
Benford, J., S. Berry, K. Nikolov, and C. Young. 2009. "Quantitative Easing." Bank of England Quarterly Bulletin 49, no. 2: 90–100.
Bindseil, U. 2004. "The Operational Target of Monetary Policy and the Rise and Fall of the Reserve Position Doctrine." ECB Working paper 372.
Bindseil, U., and F. Würtz. 2007. "Open Market Operations—Their Role and Specification Today." In Open Market Operations and Financial Markets, edited by D. G. Mayes and J. Toporowski, 54–85. London: Routledge.
Burgess, W. R. 1964. "Reflections on the Early Development of Open Market Policy." Federal Reserve Bank of New York Monthly Review (November): 219–226.
Corry, B. A. 1962. Money, Saving and Investment in English Economics 1800–1850. New York: St. Martin's Press.
Doi, T., T. Ihori, and K. Mitsui. 2007. "Sustainability, Inflation and Public Debt Policy in Japan." In Open Market Operations and Financial Markets, edited by D. G. Mayes and J. Toporowski, 293–330. London: Routledge.
Fawley, B. W., and C. J. Neely. 2013. "Four Stories of Quantitative Easing." Federal Reserve Bank of St. Louis Review 95, no. 1: 51–88.
Friedman, M. 1959. A Program for Monetary Stability. New York: Fordham University Press.
Gagnon, J., M. Raskin, J. Remache, and B. Sack. 2010. "Large-Scale Asset Purchases by the Federal Reserve: Did They Work?" Federal Reserve Bank of New York staff report 441.
Goodfriend, M. 2003. "The Phases of US Monetary Policy 1987–2003." In Central Banking, Monetary Theory and Practice: Essays in Honour of Charles Goodhart, Vol. 1, edited by P. Mizen, 173–189. Cheltenham: Edward Elgar.
Goodfriend, M., and R. G. King. 1988. "Financial Deregulation, Monetary Policy, and Central Banking." Federal Reserve Bank of Richmond Economic Review 74, no. 3: 216–253.
Goodhart, C. A. E. 1975. Money, Information and Uncertainty. Basingstoke: Macmillan.
Goodhart, C. A. E. 2014. "Competition and Credit Control." London School of Economics Financial Markets Group special paper 229.
Goodhart, C. A. E., and L. Dai. 2003. Intervention to Save Hong Kong: Counter-Speculation in Financial Markets. Oxford: Oxford University Press.
Hasbrouk, J. 2007. Empirical Market Microstructure: The Institutions, Economics and Econometrics of Securities Trading. New York: Oxford University Press.
Hawtrey, R. G. 1926. Monetary Reconstruction. London: Longmans Green.
Hawtrey, R. G. 1933. The Art of Central Banking. London: Longmans Green.
Hawtrey, R. G. 1934. Currency and Credit. London: Longmans Green.
Hawtrey, R. G. 1938. A Century of Bank Rate. London: Longmans Green.
Hicks, J. 1967. "Monetary Theory and History—An Attempt at Perspective." In Critical Essays in Monetary Theory, by J. Hicks, 155–173. Oxford: Clarendon Press.
Humphrey, T. M. 1982. "The Real Bills Doctrine." Federal Reserve Bank of Richmond Quarterly Review (September/October): 3–13.
Joyce, M., M. Tong, and R. Woods. 2011. "The United Kingdom's Quantitative Easing Policy: Design, Operation and Impact." Bank of England Quarterly Bulletin (September): 200–212.
Kahn, R. F. 1972. "Memorandum of Evidence Submitted to the Radcliffe Committee." In Selected Essays on Employment and Growth, by R. F. Kahn, 124–152. Cambridge: Cambridge University Press.
Keynes, J. M. 1930/1971a. A Treatise on Money, Vol. 1: The Pure Theory of Money. Basingstoke: Macmillan.
Keynes, J. M. 1930/1971b. A Treatise on Money, Vol. 2: The Applied Theory of Money. Basingstoke: Macmillan.
Keynes, J. M. 1945/1980. "National Debt Enquiry: A Summary by Lord Keynes of His Proposals." In The Collected Writings of John Maynard Keynes, Vol. 27: Activities 1940–1946: Shaping the Post-War World: Employment and Commodities, edited by D. Moggridge, 388–396. Cambridge: Macmillan and Cambridge University Press.
Lavington, F. 1921. The English Capital Markets. London: Methuen.
League of Nations. 1930. First Interim Report of the Gold Delegation of the Financial Committee. Geneva: League of Nations.
Levy-Orlik, N., and J. Toporowski. 2007. "Open Market Operations in Emerging Markets: The Case of Mexico." In Open Market Operations and Financial Markets, edited by D. G. Mayes and J. Toporowski, 157–177. London: Routledge.
Mayer, C., and X. Vives. 1995. "Introduction." In Capital Markets and Financial Intermediation, edited by C. Mayer and X. Vives, 1–11. Cambridge: Cambridge University Press.
Niebyl, K. H. 1946. Studies in the Classical Theories of Money. New York: Columbia University Press.
Ricardo, D. 1816. "Proposals for an Economical and Secure Currency with Observations on the Profits of the Bank of England As They Regard the Public and the Proprietors of Bank Stock." In The Works and Correspondence of David Ricardo, Vol. IV: Pamphlets and Papers 1815–1823, edited by P. Sraffa and M. H. Dobb, 43–142. Cambridge: Cambridge University Press.
Sandilands, R. J. 2010. "Hawtreyan 'Credit Deadlock' or 'Keynesian Liquidity Trap'? Lessons for Japan from the Great Depression." In David Laidler's Contributions to Economics, edited by R. Leeson, 335–372. Basingstoke: Palgrave Macmillan.
Sayers, R. S. 1936. Bank of England Operations 1890–1914. London: P. S. King.
Sayers, R. S. 1948. The American Banking System: A Sketch. Oxford: Clarendon Press.
Sayers, R. S. 1957. Central Banking after Bagehot. Oxford: Clarendon Press.
Simons, H. C. 1936/1948. "Rules versus Authorities in Monetary Policy." In Economic Policy for a Free Society, edited by H. C. Simons, 160–183. Chicago: University of Chicago Press.
Tily, G. 2007. Keynes's General Theory, the Rate of Interest and "Keynesian" Economics. Basingstoke: Palgrave Macmillan.
Toporowski, J. 2006. "Open Market Operations: Beyond the New Consensus." Bank of Finland research discussion paper 14/2006.
Wilson, J. S. G. 1962a. "France." In Banking in Western Europe, edited by R. S. Sayers, 1–52. Oxford: Clarendon Press.
Wilson, J. S. G. 1962b. "The Netherlands." In Banking in Western Europe, edited by R. S. Sayers, 197–233. Oxford: Clarendon Press.
Part V

THE NEW AGE OF CENTRAL BANKING
Managing Micro- and Macroprudential Frameworks

Chapter 15

Central Banking and Prudential Regulation
How the Wheel Turns

Kevin Davis
15.1 Introduction

Any generic analysis of central banking activities is complicated by the international diversity of regulatory arrangements, institutional structures, mandates given to central banks, and the state of financial-sector development and structure. This is certainly so in the case of prudential regulation and supervision, where responsibilities may be shared with, or rest solely with, other regulatory agencies. And testing hypotheses about the effects of international differences in central bank involvement in or management of these activities is complicated because "measuring bank regulation and supervision around the world is hard" (Barth, Caprio, and Levine 2013, 112). But there are enough common themes to suggest that a coherent story can be told of how central bank prudential regulation responsibilities and activities have developed internationally over recent decades. Fundamental to that story is the significant impact of the global financial crisis of 2007–2008 (and its aftermath) in changing attitudes toward and practices of prudential regulation. The wheel has turned, back toward the situation prevailing prior to the period of extensive global financial deregulation that began in the 1970s. It is far from a complete reversal, but there are many elements of recent developments that reflect older approaches to prudential regulation. While bank supervision had long been an element of central bank activities in many countries, the wave of financial deregulation of the 1970s and 1980s initially led to a deemphasizing of this role while simultaneously freeing banks from many preexisting constraints, thereby facilitating competition and greater risk-taking. Together with a number of costly bank failures,1 and concerns of national authorities regarding
competitive imbalances from international differences in regulatory standards, discussed by Goodhart (2011), this eventually led to the emergence of prudential regulation as a specific, globally led, well-defined area of regulatory activity. It was codified in the standards developed under the Basel Accord, which initially focused primarily on individual bank safety and was based on minimum capital requirements. This micro-orientation, well distant from (and arguably creating potential conflicts of interest with) the operation of monetary policy, led to an increase in international practice of prudential regulation being housed in a specialist regulator separate from the central bank. Although the Basel agenda saw an increasing incorporation of different types of risk in the determination of minimum capital requirements (and increasing complexity of those requirements), the design of regulation reflected in Basel 2 was very much market-oriented. The emphasis was on the role of market discipline and (for large banks) use of approved bank internal risk models for determination of capital requirements.2 It would be difficult to overstate the impact of the financial crisis on attitudes toward prudential regulation and supervision.3 Indeed, US Federal Reserve Board Governor Daniel Tarullo (2014) argued that "the aims and scope of prudential regulation have been fundamentally redefined since the financial crisis," referring in particular to the increased emphasis on macroprudential policy as well as the need (in his view) for prudential policy to also consider institutions and activities outside of the traditional banking sector. One of the most significant influences has been the exposure by the crisis of a range of risks that were not adequately recognized in the precrisis Basel approach to prudential regulation.
Recognition of the role of financial-sector interlinkages, network effects, and systemic risk has led to a new focus on macroprudential regulation and systemic (financial) stability, complementing the microprudential approach previously applied. This renewed focus on financial stability, argues Bernanke (2013), is a return to the rationale for the origins of central banking—at least in the case of the US Fed. The new emphasis on macroprudential regulation has also stalled, if not reversed, the pre-financial-crisis trend toward allocation of prudential regulation and supervision responsibilities to agencies other than the central bank (Masciandaro 2012), with a notable reversal being the decision of the United Kingdom to transfer prudential supervision from the Financial Services Authority (FSA) back to the Bank of England in 2012. Macroprudential concerns have also led to new regulatory changes that aim to influence the shape and structure of the financial sector, which Goodhart (2011) argues was once seen by central banks as being among their roles. The view that microprudential regulation was sufficient to promote financial stability4 and that the evolution of financial-market structure should be left primarily to market forces no longer holds such sway. Microprudential regulation has certainly not escaped unscathed, and views on its appropriate design are currently in something of a state of flux, as evidenced by the ongoing adjustments to the standards prescribed by the Basel Committee for Banking Supervision (BCBS).5 Faith in the risk-weighted assets approach to capital regulation has been shaken with (at least some) regulators looking to place more emphasis on simpler, more traditional measures such as unweighted leverage ratios. Faith has also been shaken in reliance on bank internal risk models to (partially) determine required
capital and on allowing banks to choose their preferred response to meeting broad regulatory settings. There has been, arguably, a move back to more of the "command" or "direct controls" approach of earlier decades, reflected in an enhanced role for a (revised) "standardized approach" in the Basel standards. This shift away from reliance on bank internal risk-management practices is particularly apparent in the case of liquidity risk. The crisis highlighted an absence of liquidity regulation in the Basel standards, prompting introduction of new specific regulations involving liquid asset holdings and funding composition requirements. While different in design, this is another example of a reversion to prior approaches and views, including less faith in banks to adequately provide for such liquidity and funding risks. One feature of the changes in the Basel standards, reversing precrisis trends, has been requirements for higher levels, and better quality, of capital, returning emphasis to common equity as core capital. Other forms of eligible regulatory capital have been strictly limited to instruments that can be "bailed in" or written off if a bank is in distress, giving the prudential regulator increased discretion and powers.6 The crisis also highlighted a number of other important issues for prudential regulation, discussion of which is precluded here by space limitations. One is the range of institutions or activities that should be subject to prudential regulation, particularly given increased focus on macroprudential regulation and systemic stability. Shadow banking, for example, can contribute to financial instability but has not been subject to microprudential regulation, while systemic risks can arise from the interactions between capital markets and financial intermediaries. 
That issue of determining the appropriate boundaries (the "perimeter") of prudential regulation (which undoubtedly differ for micro versus macro regulation) is further complicated by large banks undertaking activities outside of "traditional banking" and new business models emerging to mimic the functions of banks as a result of fintech. Another important issue has been the ongoing search for design of suitable cross-border arrangements for dealing with prudential supervision and regulation of multinational banking. The crisis highlighted that allocation of supervisory responsibilities between home and host regulators and participation of national supervisors in a college of supervisors for large multinational banks need to be complemented by effective multilateral resolution arrangements for troubled banks to avoid aggravating a crisis. This is particularly evident in the case of the European Union, where new arrangements (the European banking union) have been introduced in recent years involving sharing of prudential supervision responsibilities between the European Central Bank (ECB) and national authorities (European Commission 2015). Also relevant to these considerations have been the complications caused by deposit insurance arrangements where, as well as cross-agency issues of responsibilities at the national level, national authorities could, under some design structures, be burdened with providing recompense to depositors from other jurisdictions. This chapter elaborates on the issues introduced above. First, it considers the general nature of prudential regulation and supervision activities. This emphasizes the
importance of the distinction between regulation and supervision and is followed by an overview of central bank involvement in prudential supervision. Then a brief recent history of the evolution of prudential regulation is provided, which focuses on how the wheel has turned since the 1970s with initially widespread removal of direct controls on banks, which are now being restored (albeit in different forms). This is followed by more specific analysis of microprudential regulation and then an analysis of macroprudential regulation. In these analyses, the questions of the need for such regulation, its particular forms, and interaction with other central bank activities are considered. The conclusion returns to the general theme of how the wheel has turned, making the approach and practices of prudential regulation somewhat more aligned with those prevailing before the phase of financial deregulation of the 1970s and 1980s.
15.2 The Nature of Prudential Regulation and Supervision

Polizatto (1992, 175) argued that "prudential regulation is the codification of public policy toward banks, banking supervision is the government's means of ensuring compliance." Underpinning such regulation and supervision is the desire to ensure safety and soundness of the banking system and to limit the adverse consequences of bad management. This involves placing limits and constraints on banks via legislation and regulation and using supervision as a complement to ensure compliance and prudent behavior. As components of prudential regulation policy, Polizatto listed matters related to entry criteria, capital adequacy, asset diversification, insider/connected party dealings, permitted/prohibited activities, asset classification/provisioning, audit arrangements, enforcement powers, resolution powers, and deposit insurance. That list is equally relevant today, although it does not explicitly include regulatory liquidity and funding requirements such as the liquidity coverage ratio and net stable funding ratio requirements introduced as part of Basel 3. However, the Basel agenda has tended to lead many toward a narrower perspective for prudential regulation as being about capital requirements (and more recently, liquidity requirements and activity restrictions). Dewatripont and Tirole (1994), for example, in attempting to build a theoretical framework for prudential regulation based on their "depositor representation hypothesis," tend to adopt a similar narrow focus. Identifying what prudential regulation involves does not provide a rationale for why regulation is needed or what form it should take. Brunnermeier et al. (2009, 2) indicate: Traditional economic theory suggests that there are three main purposes [of regulation]. 1. to constrain the use of monopoly power and the prevention of serious distortions to competition and the maintenance of market integrity;
2. to protect the essential needs of ordinary people in cases where information is hard or costly to obtain, and mistakes could devastate welfare; and 3. where there are sufficient externalities that the social, and overall, costs of market failure exceed both the private costs of failure and the extra costs of regulation.
Prudential regulation is primarily premised on the latter two of these reasons. Information asymmetries abound in banking where opacity of bank balance sheets is prominent.7 Protecting customers from mis-selling is one potential consequence, although this is typically regarded as being outside of the “prudential” domain. It is often the responsibility of a securities regulator or consumer protection agency, with a recent trend being the establishment of specialist financial consumer protection agencies in a number of jurisdictions (including China and the United States). Similarly, market manipulation (such as recent bank rate-rigging scandals) and (perhaps unwitting) facilitation of illegal transactions (such as money laundering or terrorist financing) typically fall outside the prudential domain,8 although bank governance failings that enable (or induce or encourage) those activities are of (increasing) interest to prudential regulators. Another consequence is the potential for information failures, in conjunction with bank balance-sheet features, to lead to instability—such as the potential for bank runs (Diamond and Dybvig 1983). Information problems can also lead to significant agency problems between depositors and bank owners or managers. Potential incentives for increased risk-taking by bank owners (particularly when depositors, or other creditors, are protected by implicit or explicit government insurance) are one common rationalization for prudential regulation. Dewatripont and Tirole (1994) also note the problem of reduced market discipline arising from free-riding by depositors who are unable to assess relevant information, and they emphasize their “representation hypothesis” (that small depositors need an agency to represent them in monitoring, intervening, and dealing with cases of failure) as a rationale for prudential regulation. 
An alternative view is found in Benston and Kaufman (1996, 688), who argue that "banks should be regulated prudentially only to reduce the negative externalities resulting from government-imposed deposit insurance." They argue that a requirement for prompt intervention and ultimately resolution if a bank's capital ratio falls below some minimum requirement is sufficient to offset the moral hazard arising from deposit insurance. Their preference for market-based solutions and requirements for regulatory action based on prespecified balance-sheet developments is to some degree reflected in recent Basel standards that introduce "bail-in" requirements for bank hybrid securities if they are to be eligible as regulatory capital. While the above quote from Benston and Kaufman refers only to explicit deposit insurance, which has become common around the world (Demirgüç-Kunt, Kane, and Laeven 2013), a more pressing problem has been the role of "implicit insurance" in the form of perceptions that governments will bail out failing banks, particularly those deemed too big to fail (TBTF). The actual bailouts that occurred during the financial crisis have led to more stringent prudential regulation designed
to limit the likelihood of bailouts and also the consequent competitive benefits and moral hazard risks of TBTF banks. Brunnermeier et al. (2009) suggest five relevant externalities from financial institution failure in potential support for regulation: (a) information contagion, (b) loss of access by customers to future funding, (c) financial institution and market interconnectedness, leading to (d) pecuniary (asset fire-sale price effect) externalities, and (e) deleveraging consequences for real economic activity. In regard to externalities leading to systemic risk and motives for macroprudential regulation, De Nicoló, Favara, and Ratnovski (2012) expand on interconnectedness effects. They identify strategic complementarities (where correlated risks arise in an upswing) and propagation of shocks, as well as fire sales with destabilizing asset price and balance-sheet consequences as important. Increasing attention to the significance of these externalities and their consequences for financial-sector (and economic) stability is a characteristic of the post-financial-crisis literature on banking and a motivation for macroprudential regulation (Brunnermeier, Eisenbach, and Sannikov 2012). Indeed, the World Bank (2013, 78) suggests that "[f]inancial sector regulation and supervision are areas where the role of the state is not in dispute; the debate is about how to ensure that the role is carried out well." Prudential regulation is one component of a triumvirate of legislation, regulation, and supervision, which together constrain (or define permissible) activities of banks. Legislation (the "rule of law") sets the overarching framework and provides regulators with powers to make (and enforce) regulations. It is one feature of the Basel agenda that despite significant international differences in legal systems, a largely common framework for prudential regulation has been widely accepted. 
But at the other end of the triumvirate, international differences exist in the nature of, responsibility for, and quality of prudential supervision. Concern about the ability of supervisory agencies to effectively monitor compliance with prudential regulations is a common theme internationally. Even in relatively well-resourced supervisory agencies in advanced economies, the increasing complexity of the financial sector and regulatory requirements places strains on their ability to attract sufficient expertise. These problems are even more marked in developing and emerging market economies, where often the desirable characteristic of independence of the regulator from political interference is lacking and strong legislative underpinnings for regulatory powers may be absent. In this regard, it is surprising that Barth, Caprio, and Levine (2013) find that 45 percent of 125 countries in the World Bank 2011 Global Survey of Bank Regulation reduced supervisory powers after the financial crisis, although most increased the stringency of regulation.
15.3 The Role of the Central Bank in Prudential Supervision

One complication facing all nations is the appropriate structure of supervisory arrangements. Financial sectors involve a range of financial products and services created
to provide particular economic functions and which are provided by a wide range of financial institutions and markets. In principle, "functional" regulation, ensuring consistent treatment of products and firms performing the same economic function, has appeal but is difficult to implement in practice.9 Consequently, financial regulation typically focuses on types of institutions or types of products, with prudential regulation generally more focused on institutions and with securities market regulation more product- or activity-focused. Ensuring consistency and avoiding regulatory "gaps" in a world of financial innovation is difficult, particularly given the substantial cast of actors in the regulatory family, which can include central banks, prudential regulators, insurance regulators, securities market regulators, deposit insurers, and financial consumer protection agencies. Different jurisdictions have adopted varying allocations of responsibilities among regulatory institutions, including integrated (prudential and securities) regulators and "twin peaks" (separate prudential and securities regulators) approaches, with central banks either taking on one or more of these roles or being limited to monetary policy, payments systems oversight, and stability functions.10 Others have multiple specialized regulators, raising questions about coordination, an issue of increased importance given the emphasis being placed on macroprudential regulation, whose natural home appears to be the central bank but which can involve adjusting regulatory requirements that are under the responsibility of prudential or securities regulators. While the global financial crisis showed up deficiencies in some regulatory structures (and has led to changes in institutional arrangements and responsibilities in some cases), there is no clear answer to the optimal regulatory structure (IMF, FSB, and BIS 2016). 
But the need for ensuring regulatory cooperation has led to creation of institutions such as the Financial Stability Oversight Council (FSOC) in the United States and the European Systemic Risk Board (ESRB) to provide system-wide oversight arrangements. International standard setters appear agnostic on whether prudential supervision should be assigned to the central bank. In the Basel Committee's latest version of its "Core Principles for Effective Banking Supervision" (2012), there is no reference to any particular set of institutional arrangements other than requirements for clear powers, responsibilities, independence, adequate resourcing, and so on. It notes that as long as this does not create a conflict with the primary objective (of promotion of bank safety and soundness) of bank supervision, the supervisor might "be tasked with responsibilities for: (i) depositor protection; (ii) financial stability; (iii) consumer protection; or (iv) financial inclusion." Čihák et al. (2012) provide an overview of international differences in the role of the central bank in prudential regulation (as at 2011–2012) and highlight a significant difference in the allocation of responsibilities between emerging market and developing economies and advanced economies. In the former, the central bank is most commonly the bank supervisor (around 75 percent of cases), whereas it has that role in fewer than 40 percent of cases in advanced economies, where other specialized supervisory agencies are found nearly 50 percent of the time. There are a relatively limited number of situations, with the United States being the most notable, where bank supervision is spread across multiple agencies including the central bank.
The US case highlights the fact that allocation of supervisory or regulatory responsibilities is to a significant degree historically path-dependent, as discussed in the more general context of banking structure evolution by Calomiris and Haber (2014). While there have been numerous analyses of the merits of alternative allocations of supervisory responsibilities, ultimately, national political factors (including vested interests of existing regulators) are crucial determinants of the allocation of roles and powers. That case also highlights the potential risk that US academic research into banking regulation and supervision, which dominates the literature, may be less relevant to non-US experience, a concern also reflecting the internationally atypical (both geographical and organizational) structure of the US banking sector. Arguably, the same jurisdiction-specific situation applies in the case of the European Union, where concerns over cross-border supervision and resolution (and deposit insurance) arrangements were brought to the fore by the financial crisis and its aftermath. This led to agreement in 2012 for a European banking union framework compulsory for the nineteen euro area states and optional for other EU members. This has three components.11 One is the Single Supervisory Mechanism (SSM), which in November 2014 formally recognized the ECB as the prudential supervisor of member-state banks. In practice, the ECB has taken on responsibility for a designated group of large banks with national authorities (either a central bank or a specialist supervisor) having responsibility for other institutions. (Schoenmaker et al. 2016 provides details and an appraisal of how the changes have worked in the initial years of implementation.) The second feature is the Single Resolution Mechanism (SRM), which came into operation at the start of 2016. 
A Single Resolution Board (involving national, SSM, and EU representatives) is to oversee the resolution activities of the national resolution authorities, as required by the SSM. The SRM also involves the creation of a Single Resolution Fund financed by ex ante contributions by banks to facilitate resolution arrangements. The third component is the development of a European deposit insurance scheme to eventually replace national schemes in 2024, with transitional arrangements involving reinsurance and coinsurance between national schemes in the interim. Goodhart and Schoenmaker (1995) provide an analysis of the arguments for and against separating responsibility for monetary policy and bank supervision. Among the arguments for allocating bank supervision to the central bank is the suggestion that aggregation of confidential bank-level information derived from supervision can be useful for the implementation of monetary policy. This could result from improved macroeconomic forecasts or from anticipation of bank lending trends. Peek, Rosengren, and Tootell (1999) find that such supervisory information was relevant for the US Federal Open Market Committee (FOMC) policy decisions but are unable to ascertain the relative importance of those two channels. They argue that the greater role of bank intermediation in most other countries might make integration of supervision with the central banking role even more important, given difficulties in sharing confidential information among agencies. And while Yellen (2009) does not comment on the appropriate allocation of responsibilities, she notes that if macroprudential regulation is to fulfill a role of anticipating and preventing
systemic problems, it requires massive collection and interpretation of data from the broader financial sector. Another argument is that the role of the central bank as lender of last resort (LOLR) creates a natural synergy between bank supervision and central banking. In line with the view of Bagehot (1873) that the central bank should lend (at penalty rates on good security) to illiquid but otherwise solvent banks, the role of bank supervision provides the information needed to perform that role. More generally, it is the conventional wisdom that the insurance feature of the LOLR role creates moral hazard and increases bank risk-taking. If so, allocating prudential supervision to the central bank may have some merit, although Repullo (2005) demonstrates that while the LOLR operations may reduce bank liquid asset holdings, they do not necessarily lead to an increase in credit risk (arguably the primary concern for prudential supervisors). A contrary view is that simultaneous responsibility for bank supervision and monetary policy may lead to conflicts of interest. In particular, central banks may be inclined to adopt more accommodative monetary policy if they are concerned about risks of bank failures. There have been a number of studies that have attempted to test this conjecture through international comparisons. There is some slight evidence of a positive correlation between inflation levels and a central bank supervisory role (Di Noia and Di Giorgio 1999). Other studies have found relationships between a central bank supervisory role and bank safety and stability. One is a positive correlation between levels of banks' nonperforming loans and central bank involvement in supervision (Barth et al. 2002), which appears at variance with the finding of Goodhart and Schoenmaker (1995) of combined supervision regimes tending to have fewer bank failures. 
Other studies emphasize a significant role for the level of regulatory independence and other legal and institutional features in determining financial-sector stability. Of particular note, Masciandaro, Pansini, and Quintyn (2013) find that indicators of supervisory unification and effective governance of regulatory institutions did not have a positive association with a country’s economic resilience during the financial crisis, pointing to the need for ongoing research in this area.
15.4 Recent Precrisis Developments in Prudential Regulation: An Overview

Prior to the deregulatory trend that took hold in the 1970s through to the 1990s, banking and supervisory policy had for some time been largely based on the application of direct controls to bank activities that restricted the role of market forces. Banks may have occasionally gotten into financial difficulty, but (compared to more recent times) they had relatively little flexibility to engage in substantive risk-taking. Boot, Dezelan, and Milbourn (2001), for example, argue that the "traditional regulatory approach to
Western banking implicitly guaranteed stability by reducing competitiveness" (50), a view also expressed by Dean and Pringle (1994) and Goodhart (2011). Techniques varied from country to country but generally consisted of such restrictions as credit (lending) controls, liquid asset (reserve) requirements, interest-rate controls, activity and portfolio restrictions, and minimum capital requirements. Banking markets were also often characterized by entry barriers (particularly for foreign entities but also by constraints on branching), and in many countries, state ownership was significant. The operation of market forces in foreign exchange markets and securities markets was also constrained. Ultimately, however, the forces of competition and regulatory avoidance combined with the dominance of free market ideology to make financial liberalization both necessary and seemingly optimal, at least in Western countries. In much of Asia, following the Asian financial crisis of the late 1990s, more direct regulation and restrictions tended to prevail. Prior to the "new era" of prudential regulation based around the Basel Accord, central banks took, to differing degrees, responsibility for banking supervision and implemented it in different ways.12 Ultimately, since they would be responsible for clearing up the mess associated with banking failures, there was an incentive to reduce the likelihood of such eventualities. In the United Kingdom, the Bank of England for many years operated a largely informal approach to supervision relying on "moral suasion." This was gradually supplemented with increasing legal powers (particularly in the 1979 Banking Act), including requirements for bank auditors to provide information to the Bank. In many British Commonwealth countries, similar approaches prevailed. 
In the United States, in contrast, bank supervision was fragmented across a range of institutions, including the Federal Deposit Insurance Corporation, Office of the Comptroller of the Currency, Federal Reserve banks, and state regulators, and relied more on on-site inspection and use of metrics such as CAMEL ratings. Polizatto (1992) attributes the difference in approach from the UK model partly to the role of branch banking "internalizing" some part of the supervision within the bank rather than requiring regulatory attention to a multitude of small banks. In Europe, more emphasis was placed on formal ratio requirements with auditor responsibility for informing regulators of the situation. Abiad, Detragiache, and Tressel (2008) provide an overview of the international deregulatory/financial liberalization experience based on an IMF survey covering ninety-one countries over the period 1973–2005. For the advanced economies, liberalization occurred from the early 1970s (or before) and was largely completed by the mid-1990s, whereas for developing and emerging market economies, it was more a phenomenon of the late 1980s to the 2000s. Notably, they find that "regulatory and supervisory reforms remain relatively less advanced even many years after the beginning of financial reforms" (10). And the experiences of some major bank failures were part of the motivation for a refocusing of banking regulation on what Boot, Dezelan, and Milbourn (2001) refer to as an indirect approach to regulation that "seeks to induce the desired behavior" rather than prohibiting undesired behavior. The Basel Accord risk-based capital
requirements were one element of this, as were, to a lesser degree in practice, risk-based premiums for deposit insurance. The history of the Basel Committee is well described in BCBS (2015a). Formed in late 1974 and including the G10 central bank governors, it has expanded its membership (in 2009 and 2014) to include twenty-eight jurisdictions as of 2016. More than half of these have the head of a separate supervisory agency as well as a central bank governor as representatives, while the European Union and a number of its constituent countries are also represented. The initial focus of the committee was on cross-border supervisory cooperation, leading to the 1975 Concordat. This introduced the allocation of responsibility for foreign branches to the home supervisor and for foreign subsidiaries to the host supervisor. Despite many additions and alterations to the Basel standards over time, that allocation remains fundamental to the current structure of prudential supervision across national boundaries. Possibly the most fundamental impact of the Basel Committee on prudential regulation and supervision was the introduction of the Basel capital requirements (now generally referred to as Basel 1) in 1988. This focused attention on the role of minimum capital requirements in prudential regulation, where risk weights were applied to credit exposures to, ideally, make required capital reflect risk-taking by the bank. Over time, the range of risks reflected in the requirements has been expanded to include market (trading) risk, operational risk, and interest-rate risk in the banking book. Risk weights, the minimum risk-weighted capital ratio, and definition of eligible capital have all undergone substantial changes over time. That process commenced in 1996 (with incorporation of trading book market risk) and subsequently led to Basel 2 (2006), Basel 2.5 (2009), and Basel 3 (2010). 
There have been a number of substantive changes introduced or proposed from late 2014, sometimes referred to (although not by the Basel Committee) as Basel 4. As part of this journey, a two-tier approach emerged (initially in the 1996 market [trading book] risk capital requirements but more generally in Basel 2 in 2006). Some larger banks were accredited to use “internal risk models” (in conjunction with some Basel-assigned parameters) in determining minimum capital requirements, whereas capital requirements of other smaller banks were determined (at comparatively higher required capital levels) using a formulaic “standardized” approach.13 Many of these ongoing changes in regulatory standards reflected recognized shortcomings of earlier approaches (including inadequate risk sensitivity, inappropriate risk weights, concerns over use of bank internal models, which were initially viewed as providing better risk assessment, and eligibility of nonequity securities as regulatory capital). These issues came to the fore in the financial crisis of 2007–2009, prompting Basel 2.5 and Basel 3 and subsequent changes. One important feature of the Basel approach was the emphasis on risk weighting in determination of required capital. This reflected concerns over the potential for regulatory capital arbitrage under a simple (unweighted) leverage requirement, by holding higher-risk assets. While appropriately risk-weighted deposit insurance premiums
could, in principle, penalize unwanted risk-taking, most schemes did not involve premium structures that contained the required level of risk sensitivity. Moreover, several analyses, such as Flannery (1991) and Pennacchi (2006), pointed to the need for both risk-weighted capital requirements and risk-based insurance premiums to induce the desired level of risk-taking. Basel 2 also formalized the notion of prudential regulation involving three pillars of minimum capital requirements, effective supervision, and market discipline. Arguably, there were failings on all three pillars in the lead-up to the financial crisis, which also exposed other deficiencies in this focus of prudential regulation. In particular, deficiencies in bank liquidity risk-management and system liquidity relief techniques became apparent, as did inadequacies in crisis management and bank resolution powers (including problems of TBTF) and lack of attention to systemic risk due to the micro (bank-specific) focus of prudential regulation.14 Consequently, the financial crisis has led to a significant shift in the approach to prudential regulation at both the microprudential and the macroprudential level—and with increased attention paid to the latter. The following section focuses on some recent changes at the micro level, and that is followed by an analysis of macroprudential regulation developments. It should, however, be noted that a clear division of prudential regulation tools and techniques into micro and macro categories (which refer more to policy objectives) is not really possible. (The introduction of bail-in requirements for eligibility of hybrid instruments as allowable capital is one example, which is considered below).
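The regulatory capital arbitrage concern can be made concrete with a small numerical sketch. The asset classes, risk weights, and ratio levels below are stylized illustrative assumptions (loosely in the spirit of the original Basel 1 weights), not the actual Basel parameters:

```python
# Sketch: why an unweighted leverage requirement invites regulatory capital
# arbitrage. Two portfolios of equal total size attract identical capital
# under a leverage rule, but different capital under a risk-weighted rule.
# All weights and ratios below are illustrative assumptions.

RISK_WEIGHTS = {"cash": 0.0, "sovereign": 0.0, "mortgage": 0.5, "corporate": 1.0}

def required_capital(portfolio, rw_ratio=0.08, leverage_ratio=0.03):
    """Return (risk-weighted capital requirement, unweighted leverage requirement)."""
    total_assets = sum(portfolio.values())
    rwa = sum(RISK_WEIGHTS[asset] * amount for asset, amount in portfolio.items())
    return rw_ratio * rwa, leverage_ratio * total_assets

# Same total assets (100), very different risk profiles.
safe = {"cash": 20, "sovereign": 50, "mortgage": 20, "corporate": 10}
risky = {"cash": 5, "sovereign": 5, "mortgage": 20, "corporate": 70}

for name, p in (("safe", safe), ("risky", risky)):
    rw_req, lev_req = required_capital(p)
    print(name, rw_req, lev_req)
```

Under the leverage requirement alone, the bank could shift from the safe to the risky mix with no additional capital; the risk-weighted requirement rises with the riskier asset composition, which is the arbitrage that risk weighting was intended to prevent.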
15.5 Postcrisis Developments in Microprudential Regulation

The changes introduced successively in Basel 2.5, Basel 3, and Basel 4 have been substantial and wide-ranging. The recalibration of risk weights and capital requirements and the introduction of liquidity regulation have involved a significant tightening of microprudential regulation. It has also involved some shift away from the Basel 2 approach of rewarding and partly relying on good, advanced risk-management practices in banks and toward increased regulatory specification of required standards. Space precludes providing details of all the regulatory changes involved, but they can be summarized as follows.15 First, higher levels and better quality of regulatory capital have been mandated, with increased focus on common equity (common equity Tier 1 capital) and bail-in requirements imposed for eligibility of other instruments as additional Tier 1 or Tier 2 capital. Several capital buffers have also been introduced. One (discussed later in section 15.6) is a countercyclical capital buffer. A second is a capital conservation buffer (of an additional 2.5 percent). Limitations on distributions (dividends and bonuses, etc.) apply if the capital conservation buffer is breached. This
Central Banking and Prudential Regulation 469
has the objective of ensuring that capital is not extracted from the bank at times of stress, reducing the buffer to absorb losses. Risk weights have been adjusted, generally increased, and reduced confidence in reliance on the risk-weighted approach to capital requirements is reflected in the (forthcoming) implementation of a supplementary leverage ratio requirement. That reduced confidence, particularly with regard to bank internal risk models, also finds reflection in reduced scope for use of the internal models approach (rather than a [revised] standardized approach, where regulators determine all relevant parameters) and less benefit from its use. Those lower benefits stem from changes in relative risk weights in the two approaches and the Basel 4 proposed introduction of “capital floors” linked to the revised standardized approach. Additional capital requirements for global and domestic systemically important banks (G-SIBs and D-SIBs), which use the internal models approach, also reduce the benefits from that approach for those banks relative to the standardized approach. Finally (albeit ignoring other changes such as “living will” resolution and recovery plan requirements), the other major change has been the introduction of specific liquidity requirements.
15.5.1 Liquidity Regulation and Central Bank Operations
One of the most significant recent developments in the Basel standards has been the introduction of specific liquidity regulation requirements. This reflects two features of the crisis experience. First, banks had come to rely largely on liability management rather than reserve asset management for liquidity purposes, reflecting the perception that funds could be obtained either from issuing new liabilities into wholesale markets or selling holdings of securities into the capital markets. The freezing up of capital markets in the financial crisis illustrated the risks associated with this strategy, both for individual banks and for financial stability. (Brunnermeier and Pedersen [2009] analyze the way in which market and funding liquidity interact to generate liquidity spirals.) Second, banks had undertaken excessive liquidity transformation (long-term assets financed by short-term liabilities) not consistent with prudent management.16 Consequently, new Basel standards involved the introduction of a liquidity coverage ratio (LCR) and a net stable funding ratio (NSFR). The former requires holding of high-quality liquid assets (HQLAs) that would be sufficient to meet “stress scenario” outflows of funds over a thirty-day period, where different assets are given liquidity weights (to reflect ability to convert into cash) and scenario outflows reflect relevant features of balance-sheet composition. The NSFR essentially requires that there is some correspondence between the durations of the asset and liability sides of the balance sheet, such that a greater proportion of longer-term assets requires greater use of funding that is more stable (by virtue of being longer-term or more reliable due to customer characteristics). Keister and Bech (2012) model the consequences of the introduction of the LCR for the modus operandi and effectiveness of monetary policy. Among these, they note that
central bank dealings in HQLAs may have greater effects on overnight interbank rates and differential term structure effects than dealings in non-HQLAs. These effects depend, inter alia, on institutional characteristics of banking system balance sheets and central bank liquidity facility arrangements (such as eligible collateral for repos) and indicate a need for more analysis of these issues, which may be relevant for improved understanding of consequences of nonconventional monetary policies and quantitative easing. More generally, the question arises of the role of liquidity regulation in reducing the risk of bank runs and its interrelationship with the LOLR function of the central bank. Diamond and Kashyap (2016) consider this problem, adapting the model of Diamond and Dybvig (1983) to allow for bank liquid asset holdings to provide a return greater than that from early liquidation of a long-term loan. Within their framework, they are able to examine different forms of liquidity regulation and provide some rationale for LCR- and NSFR-type requirements as mechanisms contributing to less likelihood of bank runs. There is scope for much more analysis of these interrelationships, including the relationship between liquidity regulation, LOLR facilities, and deposit insurance as contributors to reducing the likelihood of bank runs.
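The mechanics of the LCR lend themselves to a simple sketch: a weighted sum of liquid assets divided by a weighted sum of stressed outflows. In the Python fragment below, the asset classes, liquidity weights, and run-off rates are illustrative assumptions chosen for the example, not the actual Basel 3 calibrations.

```python
# Stylized Liquidity Coverage Ratio (LCR) calculation.
# Weights and run-off rates are illustrative, not Basel 3 calibrations.

def lcr(holdings, liquidity_weights, liabilities, runoff_rates):
    """LCR = weighted HQLA / stressed net cash outflows over thirty days."""
    hqla = sum(amount * liquidity_weights.get(asset, 0.0)
               for asset, amount in holdings.items())
    outflows = sum(amount * runoff_rates.get(liab, 1.0)
                   for liab, amount in liabilities.items())
    return hqla / outflows

holdings = {"sovereign_bonds": 120.0, "covered_bonds": 40.0, "corporate_loans": 600.0}
weights = {"sovereign_bonds": 1.0, "covered_bonds": 0.85, "corporate_loans": 0.0}
liabilities = {"retail_deposits": 500.0, "wholesale_funding": 200.0}
runoff = {"retail_deposits": 0.05, "wholesale_funding": 0.40}

ratio = lcr(holdings, weights, liabilities, runoff)
print(round(ratio, 2))  # a ratio of at least 1.0 meets an LCR-type requirement
```

The example also illustrates the point in the text about balance-sheet composition: the heavy run-off rate attached to wholesale funding, relative to retail deposits, is what penalizes reliance on liability management.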
15.5.2 Bank Capital Requirements and Central Bank Operations
The Basel 2 framework was agreed on by the Basel Committee in 2006 (after a long gestation period) for introduction by 2008, and it recognized “three pillars” of prudential regulation. The first was capital requirements, and risks covered were extended beyond credit and market risk to include operational risk and (at regulator discretion) interest-rate risk in the banking book. More risk-sensitive asset weights were introduced, together with a two-tier approach whereby larger, “more sophisticated” banks could use their internal risk models (once approved by the regulator) in determining risk weights, rather than following the “standardized” approach where the regulator determined the risk weights. This development also involved regulatory capital concessions for the internal models approach, with the objective of providing incentives for banks to improve internal risk-management processes. The second pillar was effective supervision, and the third pillar was disclosure and market discipline. The financial crisis exposed serious deficiencies in the new Basel 2 capital requirements. Risk weights applied to credit exposures were still not appropriately calibrated, hybrid instruments allowed as regulatory capital did not provide the loss-absorption capacity expected, and capital levels were generally inadequate. The Basel 2.5 and Basel 3 changes attempted to address these issues, as well as concerns about inadequate risk disclosures and market discipline and the internal models approach allowing banks to “game the system.” That last concern led to the proposal for use of a (non-risk-weighted) leverage ratio requirement as a backstop to the risk-weighted capital requirement. The latter, in turn, was increased substantially, with higher requirements for
common equity Tier 1 (CET1) capital and introduction of a bail-in requirement for any hybrid securities to be included as additional Tier 1 (AT1) or Tier 2 (T2) capital. More recently, the “Basel 4” proposals have suggested a partial shift away from a regulatory approach focused on use of bank internal models and granular risk weighting of capital requirements. In addition to the non-risk-weighted leverage ratio requirement of Basel 3, the BCBS has proposed “capital floors,” set at some level below those applicable if the revised standardized approach were to be applied, for banks using advanced approaches. Similarly, limitations on the use of internal models for some credit portfolio exposures (and for operational risk) are also interpreted by many as a backing away from a risk-sensitive approach. One consequence of substantially higher capital requirements is the potential implication for monetary policy, which has traditionally been viewed as operating by affecting available bank reserves and short-term policy interest rates and thus bank incentives and ability to extend credit. In that view, the capital position of banks did not act as a constraint on bank lending. Under Basel 1 and Basel 2, that situation also prevailed, with most banks operating with economic capital levels in excess of the regulatory minimums. Ultimately, increased lending would require higher capital to maintain a desired economic capital position, but in the short run, this was not a constraint. A number of studies summarized by Borio and Zhu (2012) have examined the potential role of a bank capital channel for the monetary transmission mechanism, with the primary factor being the effect of interest-rate changes on bank profitability, cost of capital, and thus incentives to raise additional capital, with greater effects when the capital constraint is binding.
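The interaction between risk-weighted requirements, a capital floor linked to the standardized approach, and a leverage ratio backstop can be illustrated with a stylized calculation. All exposure figures and risk weights below are hypothetical, and the 72.5 percent floor is used only as an illustrative parameter, not as a statement of the final Basel calibration.

```python
# Stylized comparison of a risk-weighted capital ratio, a "capital floor,"
# and a simple leverage ratio. Exposures, risk weights, and the floor
# parameter are illustrative assumptions.

def risk_weighted_assets(exposures, weights):
    return sum(amount * weights[asset] for asset, amount in exposures.items())

exposures = {"mortgages": 400.0, "corporate": 300.0, "sovereign": 100.0}
standardized = {"mortgages": 0.35, "corporate": 1.00, "sovereign": 0.0}
internal_model = {"mortgages": 0.15, "corporate": 0.60, "sovereign": 0.0}

cet1 = 40.0
rwa_std = risk_weighted_assets(exposures, standardized)
rwa_irb = risk_weighted_assets(exposures, internal_model)
rwa_floored = max(rwa_irb, 0.725 * rwa_std)  # the floor binds in this example

risk_weighted_ratio = cet1 / rwa_floored
leverage_ratio = cet1 / sum(exposures.values())  # non-risk-weighted backstop
print(round(risk_weighted_ratio, 4), round(leverage_ratio, 4))
```

Because the internal-model weights are lower across the board, unfloored internal-model RWA would be well below standardized RWA; the floor is precisely what limits the capital benefit of the advanced approach, as described in the text.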
With the higher capital requirements of Basel 3 and 4 (and total loss-absorbing capacity, or TLAC), it is arguable that regulatory capital now exceeds economic capital and is more likely to be a binding constraint. While banks will operate with a precautionary buffer of capital above the regulatory minimum, unwillingness to allow that buffer to decline substantially may reduce the short-term responsiveness of lending decisions to traditional monetary policy and make those decisions conditional on ability to raise additional capital. In this regard, the “capital threshold effect” on bank behavior discussed by Borio and Zhu (2012) may have been increased by the higher capital ratio requirements of Basel 3 and 4. In addition, the “capital framework effect” they identify, in which loan terms and conditions are affected by the relationship between risk and implied cost of funding, may have become more significant as regulatory capital replaces economic capital as the primary influence. Borio and Zhu also point to the effect of risk-sensitive capital requirements in creating a “risk-taking channel” for monetary policy through which changes in monetary policy influence the risk premiums incorporated into bank loan pricing and loan terms and conditions. One ongoing concern regarding the Basel approach to prudential regulation has been the tendency for capital requirements to induce procyclicality, resulting (inter alia) from risk weights derived from internal models varying inversely with the health of the economy or financial asset prices. Then, for example, reductions in capital buffers as an economic downturn occurs could aggravate that situation due to reduced bank
willingness to lend. Yellen (2009) argues that while monetary policy could play a role in attempting to offset swings in financial asset prices, this would compromise the focus of monetary policy (away from real sector outcomes) and that a better approach is the design and implementation of suitable macroprudential policy instruments for achieving financial stability.
15.6 Macroprudential Regulation
An important feature of the evolution of the Basel agenda has been an increased emphasis on systemic stability or macroprudential regulation rather than, as was the case with Basel 1, virtually exclusive focus on individual bank solvency. Macroprudential regulation is a term that emerged in the late 1970s (Clement 2010) but did not come into general prominence until the early 2000s and achieved particular prominence at the time of the financial crisis. While some of the macroprudential tools available are part of the Basel regulatory framework, there are others that are not and that may reflect specific national concerns.17 Macroprudential regulation brings the issue of banking supervision arrangements more closely into alignment with the traditional responsibilities of central banks for financial system and economic stability. Basel 3, in particular, adds a system stability focus to capital requirements in four ways: introduction of countercyclical capital buffers, capital incentives for the use of central clearing counterparties (CCPs) for derivatives trades, higher capital requirements for financial-sector counterparty exposures, and identification of certain banks as G-SIBs to which higher capital requirements are applied. The last of those requirements is complemented by the introduction of TLAC requirements, implying much higher regulatory capital requirements. In November 2015, the Financial Stability Board (FSB) released final details imposing a minimum TLAC requirement for G-SIBs of 16 percent of risk-weighted assets (RWA) to apply from January 2019.18 In some jurisdictions, similar TLAC requirements will be applied to D-SIBs. Regulatory responses to implement macroprudential regulation involve both a “time-series” and a “cross-section” perspective. The time-series perspective involves regulations and actions that adjust to the state of the financial sector with the objective of reducing financial cycles.
This involves a fundamental change from the precrisis consensus view of central banks that the appropriate way of dealing with apparent bubbles was to manage a smooth adjustment when a bubble burst. Underpinning that view was the belief that central bankers were no better placed than market participants to reliably identify whether inflation of asset prices was due to fundamentals or constituted a bubble. The new paradigm involves regulatory parameters that adapt to the state of the financial cycle, such as the countercyclical capital buffer, whereby the regulator can impose an additional capital requirement of up to 2.5 percent depending on the state of the financial sector. In a period of excessive growth in asset prices, the buffer would be increased
to “lean against the wind” and would be reduced if a decline in asset prices threatened stability. Thus, asset price inflation has been implicitly added as an additional consideration to the usual target of (goods and services) price inflation of central banks, although the Basel Committee proposed a specific linkage between excessive credit growth (relative to trend) and the size of the capital buffer. How regulators will balance competing macro- and microprudential considerations when a financial downturn occurs (and micro bank safety considerations point to ensuring high capital ratios) remains to be seen.19 A resulting question is whether changing bank capital requirements will affect the supply of bank credit and, if so, whether this will be offset by changes in the supply of credit from unregulated sources. Aiyar, Calomiris, and Wieladek (2014) investigate this issue using UK data, where time-varying capital ratios have been applied since the introduction of Basel 1. They find that the supply of bank credit does respond to changes in capital requirements but that this effect is partially offset by changes in the supply of credit from other sources. The countercyclical capital buffer measure raises important issues for international coordination, because it is inherently determined at a national level yet applicable to international banks operating in that jurisdiction. Such banks could find that there are varying (and variable) capital requirements applying to their operations in different countries. Other practical issues arise regarding usage of the countercyclical capital buffer, including regulatory willingness to reduce capital requirements as the financial cycle moves into a downturn. Announcements that there is to be an increase in the capital buffer can also be expected to have effects on bank equity markets by changing expectations about potential new equity issues.
There is also the risk that banks may respond to such a change by shifting asset portfolios toward lower-risk-weighted assets, such as housing, which may worsen the asset price bubbles that the change is supposed to offset. Potential changes to loan loss provisioning requirements are also relevant, with Basel requirements for a focus on expected losses conflicting with, and inducing changes in, accounting standards that had previously been more backward-looking.20 Also relevant are margin requirements and haircuts in securities lending arrangements and collateralized lending (such as repos), which the Committee on the Global Financial System (2010) studies and suggests could be adjusted to reduce procyclicality and systemic risk. This involves regulations that cover both banks and other participants such as broker-dealers, custodians, and hedge funds engaged in these markets. Usage and understanding of the efficacy of macroprudential tools in the modern financial world is somewhat in its infancy. Cerutti et al. (2016) provide information on the global use of macroprudential tools over the period 2010–2014. Loan-to-valuation ratio and reserve ratio changes had the most instances of loosening and tightening (with mixed correlations with monetary policy changes), while changes to capital requirements have been more common but reflect in large part the ongoing introduction of new Basel requirements.
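The time-series logic of the countercyclical buffer can be sketched as a mapping from the credit-to-GDP gap (the deviation of the credit-to-GDP ratio from trend) to a buffer add-on. The lower and upper thresholds used below follow the BCBS reference guide, but national authorities retain discretion over the actual setting, so the mapping should be read as illustrative.

```python
# Sketch of the countercyclical capital buffer guide: zero buffer below a
# lower credit-to-GDP-gap threshold, rising linearly to the maximum at an
# upper threshold. Thresholds follow the BCBS reference guide; the mapping
# is illustrative, since authorities exercise judgment over actual settings.

def countercyclical_buffer(credit_gdp_gap, max_buffer=0.025,
                           lower=2.0, upper=10.0):
    """Map the credit-to-GDP gap (percentage points) to a buffer rate."""
    if credit_gdp_gap <= lower:
        return 0.0
    if credit_gdp_gap >= upper:
        return max_buffer
    return max_buffer * (credit_gdp_gap - lower) / (upper - lower)

for gap in (1.0, 6.0, 12.0):
    print(f"gap={gap}pp, buffer={countercyclical_buffer(gap):.2%}")
```

The piecewise-linear form captures the "lean against the wind" logic described in the text: the buffer only builds once credit growth is clearly above trend, and is released (falls to zero) when the gap closes in a downturn.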
The cross-sectional perspective on macroprudential regulation reflects the increasing recognition that the financial sector may beneficially be viewed from a network perspective involving a complex set of interactions and feedback effects between participants. Shocks to the system may be amplified or ameliorated by the structure of interlinkages, while complexity of the system can reduce transparency and create uncertainty in times of stress. There is thus potential merit to policies that involve influencing the structure of the financial sector to reduce systemic spillovers. These include such things as introduction of CCPs, activity restrictions on banks (Volcker rule), structural separation (Vickers approach), and specific taxes or imposts (capital surcharges) on TBTF or systemically important institutions. Contingent capital and bail-in debt also are relevant in this regard. The problem of understanding the cross-sectional linkages in the financial sector has prompted substantial research on “network” features of the financial system and identification of financial firms acting as important “nodes” in the transmission of disturbances. Bisias et al. (2012) provide a recent survey of metrics that have been developed to identify systemic risk factors, but they note that the relative infrequency of financial crises makes it difficult to empirically test the relevance of theories regarding systemic risk. They also note that systemic risk can arise in different parts of the financial sector (and affect all parts), creating complications when different regulatory agencies have responsibilities for different institutional groups or financial products. This may suggest the need for a lead regulatory agency to take responsibility for macroprudential regulation and systemic stability, with central banks well placed to take on such a role, or for ensuring good communication and coordination between regulatory agencies.
More generally, Galati and Moessner (2011) note that there is little agreement on appropriate tools for macroprudential policy but that many tools of fiscal and monetary policy are relevant, as may be various forms of capital controls as measures to limit the buildup of system-wide currency mismatches. Claessens (2015) also notes that “the set of policies currently being considered is mostly based on existing microprudential and regulatory tools” (398) (i.e., caps on loan-to-value ratios, limits on credit growth, additional capital adequacy requirements, reserve requirements, and other balance-sheet restrictions). Another potential tool is the application to financial institutions of variable maximum loan-to-valuation ratios for some types of lending, such as housing. Notably, such measures suggest some willingness to move back toward direct controls and increased regulatory discretion rather than reliance on “incentives” such as risk-weighted capital requirements. Increased regulatory discretion is also involved in the major change implemented with Basel 3 that non-common-equity securities could only qualify for AT1 and T2 regulatory capital status if they contained bail-in provisions. Those provisions involve mandatory conversion (or write-off) of the instrument if a “trigger” event occurs, which could be either breach of a specified risk-weighted capital ratio or regulatory declaration of a point of nonviability (PONV) of the bank. The objective is to achieve recapitalization of a troubled bank, enabling resolution without a government/taxpayer bailout.21 Flannery (2014) also points to a potentially important role of “bail-inable” security
prices in market discipline and provision of relevant information for prompt corrective action by supervisors. There has been considerable analysis of “bail-inable” securities following Flannery’s original suggestion (Flannery 2005) for requirements for contingent convertible securities.22 Flannery (2014) provides a recent survey of this literature. Notably, however, that literature focuses primarily on the design of such securities, typically involving an equity price trigger point, rather than the reality of a vaguely specified PONV trigger. A number of authors (e.g., Squam Lake Working Group on Financial Regulation 2009) have argued for a “double trigger” involving both individual bank circumstances and regulatory declaration of a systemic crisis to avoid moral hazard concerns and reflecting a view that the bail-in requirements are primarily for dealing with crises. How the bail-in requirements will work in practice has yet to be tested. The likelihood that a declaration of PONV and a bail-in would prompt a run of depositors or other creditors and counterparties, necessitating introduction of a blanket guarantee, cannot be discounted. However, the existence of bail-in provisions provides another weapon in the prudential supervision armory of bank supervisors, including use of threat of a PONV declaration (moral suasion) to induce additional equity raisings or scaling back of bank activities. This also involves a significant degree of discretion for bank supervisors—and, given potential consequences, an increased need for accountability. At this stage, there appears to have been little analysis of the accountability implications of this increased discretion, which involves something of a reversion to earlier times when regulators relied on constructive ambiguity as part of their modus operandi.
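The mechanical side of a capital-ratio trigger (as opposed to a discretionary PONV declaration) can be sketched as follows. The balance-sheet figures and the 5.125 percent trigger used here are illustrative assumptions; the trigger level, and whether AT1 converts to equity or is written down, are contractual and regulatory design choices.

```python
# Stylized bail-in mechanics: when CET1 falls below a trigger ratio, AT1
# instruments are written down (or converted) until the ratio is restored,
# recapitalizing the bank without taxpayer funds. Figures and the trigger
# level are illustrative assumptions; a PONV declaration is discretionary
# and not modeled here.

def bail_in(cet1, at1, rwa, trigger=0.05125):
    """Write down just enough AT1 to restore CET1/RWA to the trigger level."""
    if cet1 / rwa >= trigger:
        return cet1, at1  # no trigger event
    shortfall = trigger * rwa - cet1
    written_down = min(shortfall, at1)  # capped by AT1 outstanding
    return cet1 + written_down, at1 - written_down

# Losses have pushed CET1 to 3.75 percent of RWA; AT1 absorbs the gap.
new_cet1, remaining_at1 = bail_in(cet1=30.0, at1=15.0, rwa=800.0)
print(new_cet1, remaining_at1)
```

The cap in the last step illustrates why the quantum of bail-inable instruments on issue matters: once AT1 is exhausted, further losses fall on T2 and, ultimately, on resolution authorities.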
15.7 Conclusion
Masciandaro (2012) refers to an increased postcrisis role of central banks as prudential supervisors as “back to the future?” This chapter argues that this is only one of the ways in which the wheel has turned to refashion prudential regulation in a manner incorporating more features of earlier approaches. One way is the renewed focus on financial stability inherent in macroprudential regulation. Bernanke (2013) notes that “it is now clear that maintaining financial stability is just as important a responsibility as maintaining monetary and economic stability. And indeed, this is a return to where the Fed came from in the beginning . . . now we have come full circle” (120). This chapter suggests that this reversionary trend also applies to some degree with regard to the modus operandi of prudential policy, regardless of whether it is conducted by the central bank or another agency. Following the financial crisis, prudential regulation has involved greater constraints on banks in the form of liquidity requirements, higher and “better” capital requirements, differential capital requirements for exposures to financial versus nonfinancial counterparties, and acceptance of the use of various direct controls as part of macroprudential policy. These changes involve some reversion toward
regulatory practices, albeit many of them components of monetary rather than prudential policy, which applied before the era of financial liberalization of the 1970s and 1980s. This also suggests that prudential regulation and supervision may have an even greater (or at least different) impact on the structural development of the financial sector than under the precrisis Basel regimes, where such impacts were byproducts rather than objectives of the regulation. Goodhart (2011) notes that central banks “used to be concerned with such structural issues. They saw themselves as having a deliberate role to play in shaping the developing structure of the financial system. More recently, they have eschewed such a role” (153). And Goodhart suggests that what is needed is “forward-thinking about what should be the desirable future structure of our financial systems, and how the various regulatory initiatives proposed might help to get us there” (153). The final reversionary feature affecting prudential regulation is the increased skepticism about the merits of the Basel risk-weighting approach operating over the past three decades. Less flexibility for banks in ways of meeting aggregate regulatory capital requirements and more reliance on direct control mechanisms are one result. Moreover, skeptical views on the merits of the risk-weighting approach have been expressed by many, including some influential central bankers. Former governor of the Bank of England Mervyn King (2016) has argued that “the pretence that it is possible to calibrate risk weights is an illusion” and that a “simple leverage ratio is a more robust measure for regulatory purposes” (185). Governor Daniel Tarullo (2014) of the US Federal Reserve has argued that he believes “we should consider discarding the IRB approach to risk-weighted capital requirements” (15) and placing greater reliance on stress testing to determine regulatory capital requirements and use of a leverage ratio requirement.
While modifications to the Basel approach are ongoing, it would appear that the wheel has turned such that postcrisis prudential regulation is encompassing a number of features of more direct controls reminiscent of earlier approaches, which were largely discarded when financial deregulation took hold. Whether that trend will continue or abate due to a weakening of political and regulatory resolve as the financial crisis recedes into history remains to be seen.
Notes
1. Dean and Pringle (1994) note the UK secondary banking crisis in 1973–1975 and the failures or near failures of Bank Herstatt (Germany, 1974), Franklin National (United States, 1974), Banco Ambrosiano (Italy, 1982), SMH (Germany, 1983), and Continental Illinois (United States, 1984) as relevant in this regard. Reinhart and Rogoff (2009) provide a more recent overview and analysis of banking and financial crises.
2. Incentives for banks, in the form of lower capital requirements for use of regulator-approved internal models, were also directed toward improving bank risk-management practices.
3. In chapter 2 of its 2013 Global Financial Development Report, the World Bank (2013) identified five major failings of prudential regulation and supervision exposed by the
crisis. These were a micro-focus not taking into account systemic stability risks, regulatory “silos” on both functional and national lines, poor design of some microprudential requirements, capacity constraints and incentives of regulators and supervisors, and inadequate surveillance and crisis management.
4. Masciandaro, Pansini, and Quintyn (2013, 578) refer to this as the demise of the “Micro to Macro” approach based on the assumption that “if micro incentives were correctly aligned, the macro outcomes would be automatically positive.”
5. A series of proposals issued since late 2014 by the BCBS (2015b), often referred to as Basel 4, which were being finalized in early 2017, are particularly relevant in this regard.
6. There has also been considerable emphasis on ensuring that regulators have increased legal powers with regard to resolution of troubled banks.
7. Schwarcz (2014) also points to “rationality failures” leading to use of unsuitable heuristics in dealing with complex financial issues.
8. The former activity comes under the purview of competition or securities regulators and the latter under specialist national transactions reporting agencies.
9. Schwarcz (2014) provides a recent discussion of how prudential regulation deviates from functional regulation and also notes the distinction with a functional approach to supervision whereby an institution may come under the purview of a number of different supervisors, each focusing on a specific function or activity that it undertakes.
10. International surveys undertaken by the World Bank, such as reported by Barth, Caprio, and Levine (2013) and Abiad, Detragiache, and Tressel (2008), provide information on the diversity of arrangements.
11. For more detail, see http://ec.europa.eu/finance/general-policy/banking-union/index_en.htm.
12.
Masciandaro and Quintyn (2013) argue that financial supervision “emerged as an autonomous policy area between the mid-to-late seventies and the early eighties of last century and grew to maturity during the eighties and nineties” (263).
13. The minimum risk-weighted capital ratio requirements were the same, but the differential calculation of risk weights meant that standardized banks required a higher level of capital for a given asset portfolio.
14. One important consequence of the Basel regulatory agenda has been the implications for resourcing and expertise of regulatory agencies, which arguably have become more substantial anyway over recent decades due to financial innovation and financial-sector complexity. Accreditation of internal models involves substantial expertise and resources for the regulator (as well as for banks, for which costs of achieving accreditation ran into the hundreds of millions of dollars).
15. For more detail, see BCBS 2015b.
16. This was arguably even more pronounced in the case of investment banks subject to prudential regulation, where use of very short-term repo financing to fund holdings of long-term securities was pronounced, leading in the financial crisis to what Gorton and Metrick (2012) termed “the run on the repo.”
17. The Vickers ring-fencing (United Kingdom) and the Volcker rule (United States) are examples.
18. The minimum will increase to 18 percent in 2022. There is also a requirement that TLAC exceeds 6 percent of the Basel 3 leverage ratio denominator (with that requirement to subsequently increase to 6.75 percent in 2022).
19. Reducing the capital requirement may be necessary to leave banks with unchanged excess capital if a financial downturn leads to greater provisioning that reduces their eligible
regulatory capital. Repullo (2013) develops a model that predicts that reducing regulatory capital requirements when there is such a negative shock to bank capital may improve social welfare because benefits from less contraction of lending outweigh the costs of reducing bank safety.
20. The International Accounting Standards Board published the International Financial Reporting Standard 9, Financial Instruments, in July 2014, for mandatory adoption in 2018, which incorporates forward-looking expected credit losses more compatible with the Basel standards.
21. Another feature of the Basel 3 changes was the introduction of requirements for boards of large banks to develop “living wills” (recovery and resolution plans) to (hopefully) assist regulators in dealing with troubled banks.
22. Earlier proposals by a number of researchers for mandatory requirements for banks to have some minimum amount of subordinated debt on issue have some similarities but generally do not involve the bail-in feature.
References
Abiad, Abdul, Enrica Detragiache, and Thierry Tressel. 2008. “A New Database of Financial Reforms.” IMF working paper 08/266.
Aiyar, Shekhar, Charles W. Calomiris, and Tomasz Wieladek. 2014. “Does Macro-Prudential Regulation Leak? Evidence from a UK Policy Experiment.” Journal of Money, Credit and Banking 46, s1: 181–214.
Bagehot, Walter. 1873. Lombard Street: A Description of the Money Market. London: Henry S. King.
Barth, James R., Gerard Caprio Jr., and Ross Levine. 2013. “Bank Regulation and Supervision in 180 Countries from 1999 to 2011.” Journal of Financial Economic Policy 5, no. 2: 111–219.
Barth, James R., Luis G. Dopico, Daniel E. Nolle, and James A. Wilcox. 2002. “Bank Safety and Soundness and the Structure of Bank Supervision: A Cross-Country Analysis.” International Review of Finance 3, nos. 3–4: 163–188.
Basel Committee on Banking Supervision (BCBS). 2012. “Core Principles for Effective Banking Supervision.” http://www.bis.org/publ/bcbs230.pdf.
Basel Committee on Banking Supervision (BCBS). 2015a. “History of the Basel Committee.” http://www.bis.org/bcbs/history.htm.
Basel Committee on Banking Supervision (BCBS). 2015b. “Finalising Post-Crisis Reforms: An Update: A Report to G20 Leaders.” http://www.bis.org/bcbs/publ/d344.pdf.
Benston, George J., and George G. Kaufman. 1996. “The Appropriate Role of Bank Regulation.” Economic Journal 106, no. 436: 688–697.
Bernanke, Ben S. 2013. The Federal Reserve and the Financial Crisis. Princeton: Princeton University Press.
Bisias, Dimitrios, Mark D. Flood, Andrew W. Lo, and Stavros Valavanis. 2012. “A Survey of Systemic Risk Analytics.” US Department of Treasury Office of Financial Research working paper 0001.
Boot, Arnoud W. A., Silva Dezelan, and Todd T. Milbourn. 2001. “Regulation and the Evolution of the Financial Services Industry.” In Challenges for Central Banking, edited by Anthony M. Santomero, Staffan Viotti, and Anders Vredin, 39–58. Boston, Mass.: Kluwer.
Borio, Claudio, and Haibin Zhu. 2012. “Capital Regulation, Risk-Taking and Monetary Policy: A Missing Link in the Transmission Mechanism?” Journal of Financial Stability 8, no. 4: 236–251.
Brunnermeier, Markus K., Andrew Crockett, Charles A. E. Goodhart, Avinash Persaud, and Hyun Song Shin. 2009. The Fundamental Principles of Financial Regulation. Vol. 11. London: Centre for Economic Policy Research.
Brunnermeier, Markus K., Thomas M. Eisenbach, and Yuliy Sannikov. 2012. “Macroeconomics with Financial Frictions: A Survey.” NBER working paper 18102.
Brunnermeier, Markus K., and Lasse Heje Pedersen. 2009. “Market Liquidity and Funding Liquidity.” Review of Financial Studies 22, no. 6: 2201–2238. doi: 10.1093/rfs/hhn098.
Calomiris, Charles W., and Stephen H. Haber. 2014. Fragile by Design: The Political Origins of Banking Crises and Scarce Credit. Princeton: Princeton University Press.
Cerutti, E., R. Correa, E. Fiorentino, and E. Segalla. 2016. “Changes in Prudential Policy Instruments—A New Cross-Country Database.” IMF working paper 16/110.
Čihák, Martin, Aslı Demirgüç-Kunt, María Soledad Martínez Pería, and Amin Mohseni-Cheraghlou. 2012. “Bank Regulation and Supervision around the World: A Crisis Update.” World Bank policy research working paper 6286. http://www-wds.worldbank.org/external/default/WDSContentServer/WDSP/IB/2012/12/05/000158349_20121205130523/Rendered/PDF/wps6286.pdf.
Claessens, Stijn. 2015. “An Overview of Macroprudential Policy Tools.” Annual Review of Financial Economics 7: 397–422.
Clement, Piet. 2010. “The Term ‘Macroprudential’: Origins and Evolution.” BIS Quarterly Review (March): 59–67.
Committee on the Global Financial System. 2010. “The Role of Margin Requirements and Haircuts in Procyclicality.” CGFS paper 36.
Deane, Marjorie, and Robert Pringle. 1994. Central Banks. London: Hamish Hamilton.
Demirgüç-Kunt, Asli, Edward Kane, and Luc Laeven. 2013. “Deposit Insurance Database.” World Bank policy research working paper 6934.
De Nicoló, Gianni, Giovanni Favara, and Lev Ratnovski. 2012. “Externalities and Macroprudential Policy.” IMF staff discussion note 12/05.
Dewatripont, Mathias, and Jean Tirole. 1994. The Prudential Regulation of Banks. Cambridge: MIT Press.
Diamond, Douglas W., and Philip H. Dybvig. 1983. “Bank Runs, Deposit Insurance, and Liquidity.” Journal of Political Economy 91: 401–419.
Diamond, Douglas W., and Anil K. Kashyap. 2016. “Liquidity Requirements, Liquidity Choice and Financial Stability.” NBER working paper 22053.
Di Noia, Carmine, and Giorgio Di Giorgio. 1999. “Should Banking Supervision and Monetary Policy Tasks Be Given to Different Agencies?” International Finance 2, no. 3: 361–378.
European Commission. 2015. “Banking Union.” http://ec.europa.eu/finance/general-policy/banking-union/index_en.htm.
Flannery, Mark J. 1991. “Pricing Deposit Insurance When the Insurer Measures Bank Risk with Error.” Journal of Banking & Finance 15, nos. 4–5: 975–998.
Flannery, Mark J. 2005. “No Pain, No Gain? Effecting Market Discipline via ‘Reverse Convertible Debentures.’” In Capital Adequacy beyond Basel: Banking, Securities, and Insurance, edited by Hal S. Scott, 171–196. Oxford: Oxford University Press.
Flannery, Mark J. 2014. “Contingent Capital Instruments for Large Financial Institutions: A Review of the Literature.” Annual Review of Financial Economics 6, no. 1: 225–240.
Galati, Gabriele, and Richhild Moessner. 2011. “Macroprudential Policy—A Literature Review.” BIS working paper 337.
Goodhart, Charles Albert Eric. 2011. “The Changing Role of Central Banks.” Financial History Review 18, no. 2: 135–154.
Goodhart, Charles, and Dirk Schoenmaker. 1995. “Should the Functions of Monetary Policy and Banking Supervision Be Separated?” Oxford Economic Papers 47: 539–560.
Gorton, Gary, and Andrew Metrick. 2012. “Securitized Banking and the Run on Repo.” Journal of Financial Economics 104: 425–451.
IMF, FSB, and BIS. 2016. Elements of Effective Macroprudential Policies: Lessons from International Experience. Basel: International Monetary Fund, Financial Stability Board, and Bank for International Settlements.
Keister, Todd, and Morten L. Bech. 2012. “On the Liquidity Coverage Ratio and Monetary Policy Implementation.” BIS Quarterly Review (December): 49–61.
King, Mervyn. 2016. The End of Alchemy: Money, Banking and the Future of the Global Economy. London: Little, Brown.
Masciandaro, Donato. 2012. “Back to the Future?” European Company and Financial Law Review 9, no. 2: 112–130. doi: 10.1515/ecfr-2012-0112.
Masciandaro, Donato, Rosaria Vega Pansini, and Marc Quintyn. 2013. “The Economic Crisis: Did Supervision Architecture and Governance Matter?” Journal of Financial Stability 9, no. 4: 578–596.
Masciandaro, Donato, and Marc Quintyn. 2013. “The Evolution of Financial Supervision: The Continuing Search for the Holy Grail.” In 50 Years of Money and Finance: Lessons and Challenges, 263–318. Vienna: SUERF.
Peek, Joe, Eric S. Rosengren, and Geoffrey M. B. Tootell. 1999. “Is Bank Supervision Central to Central Banking?” Quarterly Journal of Economics 114: 629–653.
Pennacchi, George. 2006. “Deposit Insurance, Bank Regulation, and Financial System Risks.” Journal of Monetary Economics 53, no. 1: 1–30.
Polizatto, Vincent. 1992. “Prudential Regulation and Banking Supervision.” In Financial Regulation: Changing the Rules of the Game, edited by Dimitri Vittas, 175–197. Washington, D.C.: World Bank.
Reinhart, Carmen M., and Kenneth S. Rogoff. 2009. This Time Is Different. Princeton: Princeton University Press.
Repullo, Rafael. 2005. “Liquidity, Risk Taking, and the Lender of Last Resort.” International Journal of Central Banking (September): 47–80.
Repullo, Rafael. 2013. “Cyclical Adjustment of Capital Requirements: A Simple Framework.” Journal of Financial Intermediation 22, no. 4: 608–626.
Schoenmaker, Dirk, Nicolas Véron, Thomas Gehrig, Marcello Messori, Antonio Nogueira Leite, André Sapir, Sascha Steffen, Philippe Tibi, and David Vegara. 2016. “European Banking Supervision: The First Eighteen Months.” Bruegel Blueprint 25.
Schwarcz, Steven L. 2014. “The Functional Regulation of Finance.” https://ssrn.com/abstract=2437544.
Squam Lake Working Group on Financial Regulation. 2009. “An Expedited Resolution Mechanism for Distressed Financial Firms: Regulatory Hybrid Securities.” Council on Foreign Relations Center for Geoeconomic Studies working paper 3. https://www.cfr.org/content/publications/attachments/Squam_Lake_Working_Paper3.pdf.
Tarullo, Daniel K. 2014. “Rethinking the Aims of Prudential Regulation.” 50th Annual Conference on Bank Structure and Competition, Federal Reserve Bank of Chicago.
World Bank. 2013. “The State as Regulator and Supervisor.” In Global Financial Development Report: Rethinking the Role of the State in Finance, 45–80. Washington, D.C.: World Bank.
Yellen, Janet L. 2009. “Linkages between Monetary and Regulatory Policy: Lessons from the Crisis.” Federal Reserve Bank of San Francisco Economic Letter (November): 36.
Chapter 16

Central Banks’ New Macroprudential Consensus

Michael W. Taylor, Douglas W. Arner, and Evan C. Gibson
16.1 Introduction

Throughout history central bank practice has evolved to reflect current economic needs and, more recently, economic theory. From the beginning, large commercial banks transformed into the central bank model in response to the imperative of maintaining economic and financial stability, most prevalently during the late nineteenth and early twentieth centuries. In contrast, the development of economic theory lagged economic needs and central bank practice until the latter half of the twentieth century, when it began to shape central bank practice. Prior to the global financial crisis (GFC), economic theory was largely responsible for a central banking consensus that privileged monetary policy and an inflation-targeting policy bias at the expense of financial stability. Following the GFC, the practical need of maintaining financial stability has again come to the fore. Central banks’ role is rapidly evolving in response to these crisis-management needs, including an expanded macroprudential policy role. In the words of Tommaso Padoa-Schioppa, “financial stability is in the genetic code of central banks” (Caruana 2011, 4). Reminiscent of early central banking development, this rapid evolution has caused economic theory to once again lag central banking practice. This chapter examines the evolving central banking stability role, in particular the shift from the consensus of the 1980s and 1990s, which privileged monetary policy to the exclusion of financial stability, to its reversal in the aftermath of the GFC. The postcrisis consensus will be analyzed to determine the optimum
central banking design elements to discharge the new macroprudential stability mandate. To paraphrase Hegel’s owl of Minerva metaphor, the theoretical rationale for expanding central banks’ role may be established only at the falling of the dusk (Hegel 1820).
16.2 The History of the Central Banking Consensus

16.2.1 Traditional Central Banking

Fundamentally, the central banking consensus has two objectives, monetary stability and financial stability, often supplemented by economic and/or financial development.1 The first central banks evolved from commercial banks to finance governments in exchange for a legislative monopoly over currency issue, creating a “one reserve” system (Goodhart 1991, 17; Thornton 1802). These privileges grew to include monetary management, because central banks’ clearinghouse function is important for monetary and financial stability by preventing currency overissue (Goodhart 1991; Timberlake 1984; Bagehot 1873). The clearing system also extended central banks’ financial stability role into banking supervision, a necessary corollary of the lender of last resort (LOLR) function (Goodhart 1991, 32; Bagehot 1873). These responsibilities led to legislative changes that separated monetary policy from commercial endeavors. To independently fulfill banking-sector financial stability responsibilities, including crisis-management and countercyclical relief measures, the central banking model was noncompetitive yet monopolistic (Goodhart 1991, 45–46). Thus, the evolution and expansion of responsibilities during the nineteenth century, the formative period of central bank development, were preeminent in developing contemporary characteristics: (1) monopolistic note issuance, (2) responsibility for monetary stability, (3) a central role in an economy’s payment system as the “bankers’ bank,” (4) establishing a baseline for interest-rate settings, and (5) acting as the LOLR. By the beginning of the twentieth century, the classical central banking paradigm had been established, focusing on monetary stability via the gold standard and financial stability in the form of the LOLR. Central banks’ role had also extended to banking sector supervision.
In some cases, for example the Bank of England (BoE), this role evolved from the LOLR function. In others, responsibility for banking supervision was imposed on central banks by statute, as in several European countries in the aftermath of the Great Depression. Following the outbreak of World War II, a central banking model was established that imposed responsibility for both monetary and financial stability. This financial stability function often included banking supervision as a public function rather than a commercial endeavor.
16.2.2 The Price Stability Mandate

The economic consensus after World War II created a monetary policy conflict. Keynesian demand management was viewed as the primary mechanism to achieve economic stability, whereas central banks were expected to maintain accommodative monetary conditions to support investment and growth. For much of the immediate postwar period, real interest rates remained negative. Administrative measures maintained monetary control and kept inflation low. Following the Great Stagflation of the 1970s, central banking policy switched to maintaining a low and stable rate of inflation, initially by targeting the growth of monetary aggregates and, once the relationship between these aggregates and consumer price inflation proved unstable, by targeting the rate of inflation.2 The core assumption underpinning inflation targeting is that a stable price level translates to monetary and financial stability. Consequently, central banks’ role was to focus on achieving price stability while indirectly satisfying the financial stability role without additional policy intervention.
16.2.3 Central Bank Independence

Current theory on the relationship between central banks and inflation stems from studies conducted during the 1970s. These studies focus on the influence that central bank independence (CBI) has on the rate of inflation. In this context, independence means the separation of central banking decisions from the political process (Parkin 1993). Studies on the effects of CBI on inflation were first developed by Bade and Parkin.3 The scope of Bade and Parkin’s empirical studies was drawn from twelve advanced economies and based on central bank statutes (Parkin 1988). The studies revealed that independent central banks had lower levels of inflation. Empirical studies confirm a statistical relationship between low inflation and CBI. For example, Alesina and Summers (1993) observed a negative correlation between CBI and the average inflation rate and between CBI and the variance of the inflation rate, a measure of the stability of macroeconomic policies. CBI was perceived as the primary mechanism to achieve price stability. Inflation-averse central bank policies serve as a commitment device to sustain a lower inflation rate than is otherwise possible (Alesina and Summers 1993). The precrisis consensus for CBI has been well summarized by Buiter:

A long time ago, in a galaxy far, far away, most academic monetary economists working on advanced economies and quite a few central bankers believed that the sum total of central banking was captured by an operationally independent central bank setting short term interest rates to pursue a one- or two-dimensional macroeconomic stability objective. The macroeconomic stability objective was (and is) often
just price stability, typically defined as the pursuit of some target rate of inflation for some broadly defined index of goods and services. (2014, 11)
The theoretical case for CBI was founded on correcting the problem of a time-inconsistent rate of inflation when those responsible for setting monetary policy are not independent of the government (Kydland and Prescott 1977). Rogoff identifies the challenges confronting CBI when calibrating a time-consistent rate of inflation:

Society can sometimes make itself better off by appointing a central banker who does not share the social objective function, but instead places “too large” a weight on inflation-rate stabilization relative to employment stabilization. Although having such an agent head a central bank reduces the time-consistent rate of inflation, it suboptimally raises the variance of employment when supply shocks are large. Using an envelope theorem, we show that the ideal agent places a large, but finite, weight on inflation. (1985, 1169)
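Rogoff’s argument can be summarized in the standard Barro–Gordon framework on which his model builds. The notation below is a stylized textbook reconstruction, not taken from this chapter:

```latex
% Society's loss over inflation \pi and output y, with an output
% target y^* above the natural rate \bar{y} (the source of the bias),
% and an expectations-augmented Phillips curve with supply shock
% \varepsilon:
L = \tfrac{1}{2}\,\pi^{2} + \tfrac{\lambda}{2}\,\bigl(y - y^{*}\bigr)^{2},
\qquad y = \bar{y} + \bigl(\pi - \pi^{e}\bigr) + \varepsilon .
% Under discretion with rational expectations, equilibrium inflation
% carries a bias proportional to the weight \lambda on output:
\pi = \lambda\,\bigl(y^{*} - \bar{y}\bigr)
      - \frac{\lambda}{1+\lambda}\,\varepsilon .
% Delegating policy to a "conservative" banker with weight
% \hat{\lambda} < \lambda shrinks the inflation bias
% \hat{\lambda}(y^{*} - \bar{y}) but also mutes the stabilizing
% response to \varepsilon, raising employment variance when supply
% shocks are large; hence Rogoff's "large, but finite" weight on
% inflation.
```

The trade-off between a smaller inflation bias and a weaker response to supply shocks is exactly what pins down the finite optimal weight in the quotation above.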
Fischer describes the rationale for CBI to overcome suboptimal rates of inflation in terms of the political agenda: In a first-best world, monetary and fiscal policy would be perfectly coordinated and chosen, and there would be no need for an independent central bank. . . But in the imperfect second-best world in which most central bankers ply their trade, political systems tend to behave myopically, favouring inflationary policies with short-run benefits and discounting excessively their long-run costs. An independent central bank, given responsibility for price stability, can overcome this inflationary bias. (1996, 4)
16.3 The Precrisis Consensus

16.3.1 Theoretical Foundations

By the 1990s, the consensus was based on the propositions that (1) central banks’ role is to focus on price stability, (2) financial institution supervision and regulation could be transferred to a specialist noncentral bank agency, and (3) microprudential supervision of individual institutions fully discharged the financial stability function. The institutional design that flowed from these propositions epitomized Tinbergen’s rule that for every policy target there must be one policy tool (Tinbergen 1952). Interest rates were a tool for achieving price stability, not a tool to achieve other public policy objectives, such as financial stability. If interest-rate policy was set exclusively to achieve price stability, then financial stability required different policy tools, and there was no central bank obligation or reason to employ these tools.
Several empirical studies established that central banks had weaker anti-inflation records when monetary policy and microprudential banking supervision were combined (Grilli, Masciandaro, and Tabellini 1991; Alesina and Summers 1993; Goodhart and Schoenmaker 1993). One explanation for these findings is a central banking bias toward accommodative monetary policy to reduce the possibility of bank failures (Čihák and Podpiera 2006, 14). The traditional central bank consensus involved the provision of emergency liquidity assistance to troubled financial institutions. This core responsibility was possible given central banks’ (1) unique ability to create liquid assets in the form of central bank reserves, (2) central position within the payment system, and (3) macroeconomic stabilization objective. Prior to the GFC, this consensus tended to downplay the central bank’s role as LOLR (Goodhart and Schoenmaker 1993). The creation of central bank reserves is fundamental to the LOLR function because of the capacity to credit a commercial bank’s account in return for a collateral pledge. However, creating central bank reserves expands the money supply and alters the monetary system, which can undermine price stability. Since price stability was the paramount precrisis central banking policy objective, extreme caution was required in the willingness to act as the LOLR. Two further considerations restricted a central bank’s willingness to act as the LOLR. First, by exposing central banks to financial risks, the LOLR had the potential to create moral hazard on a massive scale. Second, if these operations were limited to lending against good collateral and were neutral with respect to central bank reserves, “line of credit” services could be provided by the private sector.
To mitigate LOLR risks, central bank policy set highly restrictive conditions on the willingness to lend and then only as a provider of general liquidity to the market, through open market operations, rather than as the provider of emergency liquidity to individual institutions (Goodfriend and King 2002). Hence, there was a conflict of interest in combining price stability and financial stability, or at least microprudential banking supervision. Based on these considerations, the prevailing precrisis consensus reasoned that the best approach was to have these roles exercised by separate agencies. Further support for separating monetary policy from microprudential supervision was based on the changing structure of the financial system that had undermined the “specialness” of banks and therefore the basis of central bank supervision. With an emphasis placed on the central bank’s role in maintaining price stability, the role of macroprudential supervision outside of monetary stability was deemphasized. This was partially a consequence of the consensus that each policy objective should be pursued through one policy tool. Since the interest-rate tool was assigned to maintaining price stability, central banks were shorn of a tool to meet the macroprudential financial stability objective beyond monetary policy. In addition, financial stability was conceptualized in terms of ensuring the safety and soundness of individual institutions or microprudential stability. Macroprudential stability of the financial system was seen as a byproduct of price stability and the microprudential regulation of individual institutions, notwithstanding substantial conflicting evidence from the Asian financial crisis in the late 1990s.
16.3.2 Implications for Central Banks’ Institutional Structure

16.3.2.1 International Trends

A consensus emerged by the 1990s that had important implications for central banks’ institutional structure, which was reflected in legislation and cooperative arrangements over the course of the decade. The period was characterized by financial stability in developed economies and intense competition within the financial system, which provided a deregulatory impetus, most notably to remove the strict demarcations between commercial and investment banking (e.g., the repeal of the Glass-Steagall Act in the United States) (Borio and Filosa 1994, 6; Taylor 1995, 4–6; Briault 1999, 26; Goodhart 2000, 8–9). This led to the rise of universal banking. The boundaries between the banking, securities, and insurance sectors became blurred, which was instrumental in decoupling central banks’ traditional function of microprudential banking supervision (Goodhart and Schoenmaker 1993, 9–10; Briault 1999, 26–33; Taylor and Fleming 1999, 11). For example, in Norway, Denmark, Sweden, Finland, the United Kingdom, and Australia, microprudential financial supervisors were established outside the central bank from the mid-1980s to the late 1990s without being precipitated by a financial accident or crisis. If central banks had retained microprudential supervision, it would have had to extend beyond the historical banking supervisory ambit (e.g., reaching into the insurance and investment banking sectors) (Briault 1999, 21; Goodhart 2000, 10; Abrams and Taylor 2000, 21). Arguably, the more effective and efficient approach was to separate microprudential supervision from the central bank’s monetary policy role (Taylor 1995, 3; Briault 1999, 33; Goodhart 2000, 10).
The central bank consensus supported monetary stability through independence, which was most clearly typified by the establishment of the European Central Bank and reforms in the United Kingdom creating a single microprudential financial regulatory authority outside the BoE.
16.3.2.2 The European Central Bank

The design of the European Central Bank (ECB) hewed closely to the central banking consensus, focusing on price stability and eschewing a fundamental role in financial stability, including banking supervision. The Treaty on European Union, known as the Maastricht Treaty, mandated the formal establishment of the European Union and financial integration by providing for the European System of Central Banks (ESCB) in 1992, made up of the ECB and the European Union members’ national central banks (NCBs) (European Union 2012, art. 1). From 1999, the European Community Treaty established the ECB as an independent organization to conduct a single monetary policy (i.e., price stability) for European Union member states. The ESCB comprises the ECB and the central banks of all European Union member states (twenty-eight as of 2018), whereas the Eurosystem comprises the ECB and the NCBs of the member states that have adopted the euro (nineteen as of 2018).
Modeled in many ways on Germany’s central bank design (i.e., the Bundesbank), the ECB’s purchasing power and price stability language can be implicitly interpreted as inflation-targeting monetary policy. ECB member states adopt a strategy that emphasizes the pursuit of a publicly announced quantitative inflation target, an ECB target rate of below, but close to, 2 percent over the medium term. Prior to the GFC and the 2010 European Union debt crisis, price stability was the ECB’s paramount objective, subordinating all other considerations. In contrast, the United States Federal Reserve’s mandate requires pursuing the twin objectives of price stability and full employment.
16.3.2.3 The Bank of England

Prior to May 1997, the BoE was modeled on a traditional central bank design. The full suite of central banking functions and responsibilities included monetary stability, microprudential functions, note issuance, and government debt management. However, there was no statutory responsibility for banking supervision until 1979. Prior to that time, the supervision of banks and major intermediaries was subject to the BoE’s powers of moral suasion and LOLR facilities to compel compliant behavior (Schooner and Taylor 1999, 626). The Bank of England Act of 1998 led to a fundamental redefinition of the central bank’s role. This redefinition pivoted on a new tripartite structure of financial regulation. Distinct financial stability roles were allocated, in accordance with a memorandum of understanding, to HM Treasury, the BoE, and a newly created Financial Services Authority (FSA). The 1998 act allocated monetary responsibilities to the BoE by establishing the Monetary Policy Committee (MPC) and transferred banking supervisory functions to the FSA. Two core purposes were required of the BoE: monetary stability and financial stability. The BoE was granted operational independence, which gave the MPC independent responsibility for setting interest rates in pursuit of inflation-targeting monetary policy (i.e., price stability). Monetary stability overlapped with macroprudential stability because the BoE’s responsibilities were (1) ensuring the stability of the monetary system, (2) overseeing systemic financial system infrastructure, and (3) maintaining a broad overview of the financial system (Bank of England 2006, 1). However, the financial stability function was never clearly defined, despite a deputy governor being formally assigned this responsibility. The BoE’s financial stability purpose and responsibility for maintaining an overview of the financial system can be inferred as macroprudential responsibility.
Yet the BoE’s powers were limited to monetary policy, which curtailed its macroprudential capacity. Furthermore, the transfer of microprudential banking supervision to the newly formed FSA left a large gap between the two agencies’ macroprudential responsibilities, the size of which became apparent only during the GFC.
16.3.3 The Precrisis Consensus

The central banking consensus between the Great Stagflation of the 1970s and the Great Moderation in the 1990s focused on monetary stability, achieved through
independence. CBI was balanced with a level of accountability to the government in pursuit of lower and more stable levels of inflation, or price stability. Other historic central bank functions, most notably microprudential banking supervision and the role of the government’s bank (i.e., national debt management), were transferred or otherwise came to be considered of secondary importance. Moreover, macroprudential financial stability beyond monetary policy and the LOLR function was subordinated, with the exception of maintaining payment system functions and stability.
16.4 The Global Financial Crisis and Systemic Risk

16.4.1 The International Response

Since the GFC, the trend of central banks pursuing purely monetary stability with limited macroprudential responsibilities has reversed. The Group of 20, the International Monetary Fund (IMF), the Financial Stability Board (FSB), and the Bank for International Settlements (BIS) outlined a new consensus on macroprudential policies and frameworks by publishing a series of soft-law documents and issuing peer reviews of countries implementing recommended reforms.4 One of the first tasks in developing these recommendations was to expand the prevailing macroprudential policy beyond the precrisis central banking consensus of focusing on monetary policy and microprudential supervision. Microprudential supervision of individual institutions was exposed as too narrow a basis for maintaining financial stability.5 Macroprudential stability was redefined and expanded in terms of mitigating systemic risk contagion in the financial system (IMF, FSB, and BIS 2016, 4).6 The new systemic macroprudential policy objective marked a decisive shift from the precrisis consensus, whereby central banks’ macroprudential stability role had focused primarily on monetary stability and ensuring the prudential soundness of individual institutions.7
16.4.2 Macroprudential Policy: A New Consensus

Macroprudential policy from the perspective of systemic risk has both a time dimension and a cross-sectoral dimension. The former is concerned with the prevention and/or mitigation of asset and credit booms, bubbles, and busts arising from the build-up of risks over time (IMF, FSB, and BIS 2016, 4). The role of monetary policy in mitigating booms and busts has been controversial, especially against the backdrop of the precrisis central banking consensus focusing on price stability. This has given rise to a debate characterized as “lean versus clean”: central banks either should lean against asset bubbles by preemptively raising short-term interest rates or should allow the
bubble to burst and then clean up afterward through accommodative monetary policy. The BIS, represented by chief economist William White, was a proponent of the former, whereas most central bankers, such as Federal Reserve chairmen Alan Greenspan and Ben Bernanke, supported the latter. As the GFC’s costs and the limitations of extraordinarily accommodative monetary policy to clean up afterward have become apparent, the pendulum has shifted toward preemptive monetary policy, that is, a willingness to raise short-term rates even if not justified by the central bank’s price stability goal. Nonetheless, no consensus has emerged on the macroprudential use of interest-rate policy, and hence a range of preemptive macroprudential countercyclical policies were formulated to provide alternative mechanisms to limit the impact of asset booms and busts (IMF, FSB, and BIS 2016, 10). Some of these tools were fiscal and thereby the province of ministries of finance; examples include aiming to stabilize real estate markets through varying levels of turnover taxes (e.g., stamp duty) to discourage speculative purchases. Others had a close relationship to conventional prudential measures, for example, time-varying capital requirements such as the countercyclical capital buffer component of Basel 3. The cross-sectoral dimension of macroprudential policy involves developing a deeper understanding of the complex interlinkages between the components of a modern financial system at any time. All financial sectors and markets, whether regulated or not, were affected by the GFC because of the financial system’s interlinkages and distribution of risk (IMF, FSB, and BIS 2016, 4). The recognition of these complex interlinkages necessitated the design of institutional arrangements that could permit a system-wide approach. This was outside the existing supervisory perimeter of many central banks, including the BoE, the Federal Reserve, and the ECB.
For example, many lacked the capacity to identify, monitor, and assess systemically important financial institutions (SIFIs) that were neither sectorally defined (e.g., financial conglomerates), nor effectively regulated macroprudentially, nor sufficiently transparent about their risks for the threat to the financial system to be ascertained. To mitigate the risk of common asset exposures when credit conditions tighten, a range of preemptive cross-sectoral policies were developed to increase the resilience of borrowers and banks (IMF, FSB, and BIS 2016, 11).
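The time-varying capital requirements mentioned above can be made concrete with the Basel 3 countercyclical capital buffer. The sketch below illustrates the BCBS’s published “buffer guide,” which maps the credit-to-GDP gap into a buffer add-on; the function name and parameterization are ours, and national authorities retain discretion to deviate from this mechanical mapping:

```python
# Illustrative sketch of the Basel 3 countercyclical capital buffer
# "buffer guide" (BCBS 2010 guidance): the add-on is zero below a
# lower threshold of the credit-to-GDP gap (2 percentage points),
# reaches its 2.5 percent maximum at an upper threshold (10 points),
# and is interpolated linearly in between.

def ccyb_guide(credit_to_gdp_gap: float,
               lower: float = 2.0,
               upper: float = 10.0,
               max_buffer: float = 2.5) -> float:
    """Return the buffer add-on as a percent of risk-weighted assets."""
    if credit_to_gdp_gap <= lower:
        return 0.0
    if credit_to_gdp_gap >= upper:
        return max_buffer
    # Linear interpolation between the two thresholds.
    return max_buffer * (credit_to_gdp_gap - lower) / (upper - lower)

# A benign credit environment implies no buffer; a pronounced credit
# boom implies the full 2.5 percent add-on.
print(ccyb_guide(1.0))   # 0.0
print(ccyb_guide(6.0))   # 1.25
print(ccyb_guide(12.0))  # 2.5
```

The countercyclical design is visible in the mapping: the requirement tightens as the credit boom builds and releases when the gap closes, which is precisely the time-dimension logic described in this section.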
16.5 Institutionalizing the Financial Stability Role: Systemic Risk

16.5.1 Revisiting Central Banks’ Financial Stability Responsibilities

In view of the close relationship between financial stability powers (i.e., monetary, microprudential, and macroprudential policy and tools) and central banks’ role as the stabilizer of the financial system, the substantive overlap between these
different policy functions has recently become a key consideration in institutional design. Macroprudential tools in the financial system epitomize the fact that policy functions and objectives tend to overlap (i.e., macroprudential and microprudential, or macroprudential and monetary), necessitating agency coordination and decision-making arrangements (BIS 2011, 60). This point has been addressed by IMF, FSB, and BIS (2016, 5), which argue that institutional foundations are essential in addressing these overlaps. To facilitate central banks’ expanded financial stability role and the involvement of external agencies in financial stability decision-making under the new consensus, mandates and cooperation arrangements are necessary so that tools can be deployed to target the pressure points in the monetary, microprudential, and macroprudential matrix and so maintain financial stability. The IMF, FSB, and BIS (2016, 5) state that effective macroprudential policy is well served by the relevant authorities having clear mandates, adequate powers, and strong accountability. Furthermore, authorities need to be aware that macroprudential policy is subject to certain biases that require effective cooperation arrangements to preserve the autonomy of policy functions (IMF, FSB, and BIS 2016, 5–6). Many jurisdictions, including the United States, the European Union, and the United Kingdom, have created macroprudential arrangements at the institutional level to manage systemic risks while complementing monetary and microprudential policies. Examples include the Financial Stability Oversight Council (FSOC) in the United States, the European Systemic Risk Board (ESRB) in the European Union, and the Financial Policy Committee (FPC) of the BoE in the United Kingdom.
The structure and the membership of these bodies are designed to strengthen macroprudential oversight and supervision, so that they are better placed to mitigate the incidence of systemic risk and maintain financial stability. While the memberships of these bodies often extend beyond the central bank to include other microprudential regulators and finance ministries, these institutional arrangements have the effect of cementing the new central banking consensus of ensuring the macroprudential stability of the financial system. These macroprudential institutional arrangements may consist of an assembly of supervisors that includes a central bank agency, or alternatively be internalized within the central bank’s institutional design. No particular design is better than another, as the arrangements need to reflect each jurisdiction’s characteristics. For example, the FSOC and the ESRB consist of assemblies of supervisors because of the nature of their financial supervisory arrangements, legal systems, and political systems. The characteristics of these jurisdictions make it far too challenging to neatly compartmentalize monetary stability alongside broader macroprudential supervision, crisis-management measures, or microprudential stability. This design was adopted precrisis in the United Kingdom, in accordance with the then prevailing central banking consensus, yet it was found to suffer from macroprudential underlap and biases between regulatory functions.8 The alternative institutional design, where jurisdictional circumstances permit, involves combining all institutional components as part of the central bank’s design. For
example, the current design of the BoE addresses these deficiencies by establishing a centralized macroprudential agency supported by structural elements and cooperation mechanisms to mitigate the incidence of biases between regulatory functions.9 To clarify the scope of these macroprudential arrangements, the membership and structure of these designs will be delineated and analyzed.
16.5.2 Postcrisis Macroprudential Institutional Design

16.5.2.1 Financial Stability Oversight Council

In the United States, the Dodd-Frank Wall Street Reform and Consumer Protection Act established two financial stability mandates for the FSOC. The first mandate involves identifying financial stability risks that could arise from material distress or failure of large interconnected financial institutions or from exogenous shocks to the financial system (i.e., systemic risk). A second mandate requires the FSOC to respond to any emerging threats to financial system stability. With ten voting and five nonvoting members constituting the FSOC, the Federal Reserve’s monetary stability mandate could become subordinated to the competing macroprudential and microprudential interests of the FSOC’s members. Thus, the FSOC’s institutional design and multitude of mandates may displace the Federal Reserve’s monetary stability mandate. In times of financial distress, such a large membership and such competing interests could obscure or obstruct the decision-making capacity of the FSOC (i.e., macroprudential) and the Federal Reserve (i.e., monetary and macroprudential) to fulfill their financial stability responsibilities.
16.5.2.2 European Systemic Risk Board

The primary objective of the ECB is to maintain price stability in the Eurosystem. As a result of postcrisis changes, the ECB has additional mandates with respect to euro area banking supervision pursuant to the Single Supervisory Mechanism (SSM): namely, microprudential and macroprudential responsibility for the supervision of systemically important banks, and responsibility for macroprudential systemic risk in the financial system through its membership in the ESRB. The ESRB’s macroprudential objective is described as follows:

The ESRB shall be responsible for the macroprudential oversight of the financial system within the Union in order to contribute to the prevention or mitigation of systemic risks to the financial stability in the Union that arise from developments within the financial system and taking into account macroeconomic developments, so as to avoid periods of widespread financial distress. (European Parliament 2010, art. 3)
Unlike the arrangement between the Federal Reserve and the FSOC, the chair of the ESRB is the president of the ECB, with the governor of the BoE serving as vice chair (for the time being).
The ESRB’s steering committee is well represented by national central banks (NCBs) and includes numerous ECB appointments. With central banks dominating the ESRB’s membership, the ESRB’s macroprudential responsibilities are susceptible to a dominant monetary policy (i.e., price stability) influence. This point can be extended to the SSM, which is led by the ECB despite being made up of national banking supervisors. The inclusion of bank supervisors in the composition of the SSM might explain why the European System of Financial Supervisors (ESFS) is not a member; the ESFS is a microprudential and macroprudential body consisting of national financial supervisory authorities and, among others, the ESRB (i.e., NCBs). There is a possibility that the SSM’s ongoing microprudential and macroprudential banking supervisory responsibilities could be subordinated to the ECB’s price stability objective, thereby favoring monetary stability at the expense of broader macroprudential stability. A further complexity arises because the SSM’s macroprudential and systemic supervisory activities necessitate coordination with the ESRB to maintain financial stability. The ECB’s dominant position in relation to the ESRB and the SSM, and its primary objective of price stability, have the potential to displace broader macroprudential stability objectives and responsibilities. Although the monetary and macroprudential stability mandates are expressed explicitly and clearly in the United States and the European Union, the dominant position of the Federal Reserve and the ECB on macroprudential bodies may bias objectives and responsibilities in favor of monetary stability, thereby undermining broader macroprudential stability mandates.
16.5.2.3 Bank of England Financial Policy Committee

In contrast to the macroprudential institutional designs in the United States and the European Union, the BoE created the FPC to internalize its macroprudential agency. Microprudential supervision was reinstated in the BoE through the creation of the Prudential Regulation Authority (PRA), while the FSA was replaced by the Financial Conduct Authority (FCA). Macroprudential, crisis-management, and microprudential supervisory objectives and responsibilities are interwoven through the membership of the microprudential supervisors, the PRA and the FCA, in the FPC. Furthermore, the PRA has macroprudential responsibilities: to “ensure that when individual institutions fail, they do not pose a threat to the stability of the financial system as a whole” (Bank of England 2017, 1). The FPC’s primary objective is identifying, monitoring, and taking action to remove or reduce systemic risks with a view toward protecting and enhancing the resilience of the financial system.10 Coordination and information sharing within the BoE are managed through overlapping membership of the FPC and the MPC, overlapping objectives, and therefore overlapping responsibilities. This overlap may, however, create ambiguity about the responsibilities of the FPC and the MPC in relation to each other’s objectives. Any ambiguity is resolved by the governor of the BoE chairing both committees, with the support of three deputy governors, thereby ensuring that each committee’s objective and policy
spillovers are addressed. This arrangement creates an internal FPC and MPC dispute resolution mechanism within the BoE. The institutional design also resolves policy spillovers between the FPC and the microprudential supervisors (i.e., the PRA and the FCA). This clarifies responsibilities while allowing the synergies between monetary stability and broader macroprudential stability responsibilities, including their overlaps with microprudential stability, to be more effectively prioritized and efficiently exploited.
16.6 Central Banks and the New Macroprudential Consensus

16.6.1 The Changing Consensus

The precrisis central bank consensus placed operational independence at the center of institutional design. Entrusting decisions over short-term interest-rate settings to a group of unelected officials free from government influence was seen as fundamental to the credibility of central banks’ pursuit of their price stability goals. Nonetheless, the reversion of the central bank consensus to ensuring broader macroprudential stability has raised new questions about central bank independence and accountability in discharging this expanded macroprudential mandate.
16.6.2 Independence, Accountability, and the New Consensus

Expanding central banks’ macroprudential mandates has profound implications for CBI and, by inference, central bank accountability. The new central bank consensus is further complicated by the extensive postcrisis macroprudential policies, tools, and frameworks of the FSB and others, and by the creation of new macroprudential regulators. Macroprudential accountability has become multidimensional, compelling broader consideration of whether central bank accountability extends to the new macroprudential regulators, for example, the FSOC, the ESRB, and the FPC. This consideration is necessary because of CBI and central banks’ capacity to deploy tools that combine monetary and macroprudential policy, which in turn generate deep interlinkages across financial markets.11 For example, quantitative easing (QE) effectively increases the money supply while providing liquidity to financial markets and financial institutions. When deployed during a financial crisis, QE provides a nexus between monetary and macroprudential policy to support the objective of maintaining financial stability. This expanded financial stability objective is the new consensus within central banks’ traditional role and current statutory mandates.
16.6.3 Central Banks’ Role and the New Macroprudential Consensus

16.6.3.1 United States

The FSOC is designed to coordinate member agencies and create collective accountability to address the macroprudential gaps in the United States’ fragmented financial regulatory system. In support of this institutional design, Dodd-Frank has been formulated to avoid regulatory duplication and facilitate information collection. This is supplemented by the Federal Reserve’s expanded postcrisis macroprudential stability mandate. The FSOC, Dodd-Frank, and the Federal Reserve’s expanded macroprudential mandate all focus on systemic financial institutions and financial infrastructures, not financial markets, despite the existence of the FSOC markets subcommittee. Prima facie, QE does not fall within a macroprudential supervisory gap among FSOC supervisors, yet it does fall into a regulatory gap, being a market-based tool. In this context, the FSOC would be viewed as duplicating the Federal Reserve’s existing regulatory role, which is counter to Dodd-Frank. The Federal Reserve is nonetheless economically independent of the FSOC and, in macroprudential terms, substantively unaccountable to it for its QE programs. Yet QE’s macroprudential market effects warrant supplementary supervision and cooperation with the FSOC. From an institutional perspective, the Federal Reserve has shared responsibilities and decision-making powers within the FSOC. Thus, the Federal Reserve’s political independence is curtailed by its responsibility to the FSOC. If a decision on QE were made in collaboration with the other members of the FSOC, the Federal Reserve would be required to respond publicly to any final recommendations made by the FSOC on a “comply or explain” basis (FSB 2013b, 7).
In turn, the FSOC is formally and substantively accountable to Congress through an annual report and the chairman’s testimony before Congress.12 When appropriate, the FSOC reports to Congress, for example, when implementing new rules or regulations that affect the FSOC’s mandate. The FSOC, and by extension the Federal Reserve as a member of the FSOC, is accountable to the American public, as supervision is conducted in an open and transparent manner (US Department of Treasury 2016). To better balance the economic independence and substantive macroprudential accountability of the Federal Reserve and the FSOC, any deployment of QE should statutorily require collaboration with the FSOC to ensure that both regulators’ macroprudential and financial stability mandates are satisfied and that neither is subordinated, in particular the Federal Reserve’s monetary policy mandate. Further, the Federal Reserve and the FSOC should be formally and substantively accountable to Congress for any QE programs.
16.6.3.2 European Union

The ECB’s decision-making powers are unencumbered by the ESRB, because there is no obligation to seek or take instructions from any other body,
government, or European Union member state, and are subject to the “Protocol (No. 7) on the Privileges and Immunities of the European Union” (ECB 2016; European Union 2012, art. 39). ECB decisions and the ECB’s political independence are nonetheless open to review or interpretation by the Court of Justice of the European Union (European Union 2012, art. 35). Thus, the ECB retains economic independence yet is formally accountable to the European Parliament for its QE program.13 However, the ECB is not accountable to the ESRB for its QE program (European Parliament 2010, art. 17). The ECB is accountable to the ESRB only for its macroprudential policy, in accordance with its additional postcrisis mandate, and for chairing the ESRB (European Parliament 2010, art. 24). As a member and chair of the ESRB, the ECB is compelled to consider the financial stability of the European Union and to comply with any recommendations from the ESRB (European Parliament 2010, arts. 17 and 26). In turn, the ESRB is formally accountable to the European Parliament through its annual reports. Analogous to the FSOC, the ESRB’s recommendations are subject to a “comply or explain” mechanism, a form of substantive accountability. In times of financial distress, the chair of the ESRB will be invited to hearings and will therefore be substantively accountable before the European Parliament. Where appropriate, the European Parliament and the Council of the European Union shall invite the ESRB to examine specific issues relating to financial stability (European Parliament 2010, art. 23). However, these hearings are to be conducted separately from the monetary dialogue, which is indicative of the ECB’s independence from, and lack of accountability to, the ESRB for QE programs (European Parliament 2010, art. 19). Prima facie, macroprudential accountability is skewed toward the ESRB, while macroprudential economic independence is biased toward the ECB.
A better balance would ensure that the ECB is substantively accountable to, and economically dependent on, the ESRB when formulating macroprudential policy and deploying monetary stability tools. However, the ECB and NCBs hold a dominant position on the ESRB’s board, which could bias macroprudential policy and decisions in favor of monetary policy. To overcome this potential conflict of interest, any macroprudential judgments should be subject to increased political dependence and accountability, formally and substantively, to elected officials (e.g., the European Parliament). Making the ECB accountable to, and economically dependent on, the ESRB would thus increase its existing macroprudential dependence on, and accountability to, the European Parliament and therefore taxpayers.
16.6.3.3 United Kingdom

In contrast to the United States and the European Union, the macroprudential accountability of the BoE is endogenous. The independence enjoyed by the Federal Reserve and the ECB rests on the creation of exogenous macroprudential bodies. In contrast, the BoE’s macroprudential agency, the FPC, is part of the BoE, with the QE decision-making process arising from the FPC’s membership in the MPC. Nonetheless, the FPC is classified as an independent macroprudential regulator, a statutory subcommittee of the BoE’s court. Formal transparency of the MPC’s decision-making is provided by the published minutes of its meetings, which record individual members’ votes and give a full
account of all policy decisions, including differing and dissenting views. Accordingly, the MPC is formally accountable to the BoE. Furthermore, the FPC has MPC voting powers and consequently wields influence in the MPC’s decision-making process, rendering the MPC accountable to the FPC. The MPC and the FPC hold joint briefings on topics of mutual interest, which would include the macroprudential implications of monetary policy programs such as QE (Bank of England 2014, 3). Therefore, the MPC is accountable to, and economically dependent on, the FPC when deploying QE programs. This arrangement instills a formal and substantive accountability mechanism within the BoE, as the MPC must engage in decision-making with the FPC. Both committees are responsible, and therefore accountable, for the MPC’s actions. In turn, the BoE is accountable to HM Treasury. The BoE’s institutional design demonstrates that monetary accountability can be achieved when the macroprudential regulator is a voting member and decision-maker within the central bank agency. This design contrasts with that of the United States and the European Union, where the central bank agencies are voting members and decision-makers within their respective macroprudential regulators. Furthermore, the BoE’s design allows for the most effective cross-pollination and balancing of macroprudential and monetary policies to maintain financial stability. This is a more efficient and effective accountability and calibration mechanism than the designs of the United States and the European Union. The BoE’s endogenous design, incorporating macroprudential and monetary policy decisions, may raise issues of potential bias in favor of the central bank’s monetary policy role. Monetary bias is mitigated by two fundamental design characteristics. First, the FPC is statutorily independent of the BoE and the MPC.
The FPC is therefore empowered with economic independence within the BoE when making macroprudential judgments. Second, the BoE is part of the broader “twin peaks” regulatory model. A leading argument in favor of the twin peaks model is that the design centers on two clear regulatory objectives, financial stability and market conduct, which are each allocated to separate agencies.14 The central bank’s institutional design is built around the financial stability mandate. This is reflected in the MPC and FPC statutory stability mandates. The overarching influence of the BoE’s court over the FPC and the MPC ensures that conflicts can be resolved, mitigating the risk of policy subordination to the detriment of financial stability. Formal accountability for the BoE’s financial stability mandates is to HM Treasury.
16.6.4 Optimum Design Elements of the New Macroprudential Consensus

The optimum balance between independence, accountability, and central banks’ expanded role necessitates four prerequisite design elements for the new macroprudential consensus: (1) an independent macroprudential regulator that
has voting membership within the central bank agency or monetary committee; (2) clear macroprudential, microprudential, and monetary stability mandates and/or objectives; (3) transparent and distinct monetary and macroprudential accountability to the government; and (4) flexible cooperation mechanisms consisting of intertwined membership and subject to an overarching dispute resolution mechanism. These design elements of the new consensus serve to enhance central banks’ expanded macroprudential stability mandate while optimizing the balance with monetary stability performance. No design element of the new consensus is contingent on the macroprudential regulator being exogenous or endogenous to the central bank agency.
16.7 Conclusion

From the 1970s until 2008, the central banking consensus focused on monetary stability, namely, inflation targeting based on price stability. Central banks that were independent and focused on price stability rather than microprudential supervision were viewed as having the optimum structural design to lower the rate of inflation. This theoretical consensus led to a central bank redesign in the 1990s, with many jurisdictions introducing an exogenous microprudential regulator. At this time, CBI became statutorily reinforced, and macroprudential financial stability beyond monetary stability was deemphasized. The GFC shattered this consensus, prompting a return to central banks’ traditional, yet broader, macroprudential stability role. Central banks’ pivotal responsibility is to maintain the functioning of the financial system. This includes mitigating the spread of systemic risks from financial institutions and markets, thereby imposing new macroprudential stability responsibilities that form the basis of the new macroprudential consensus. Macroprudential supervision and systemic risk have been brought to the fore of institutional design, regulation, and financial stability management. Two models reflecting this new consensus have since emerged: a macroprudential body exogenous to the central bank agency, and an endogenous macroprudential agency within the central bank’s institutional design. When a central bank’s role is to participate in a macroprudential body with a large membership and a multitude of mandates, its financial stability mandate(s) may become subordinated and its monetary policy mandate displaced. An endogenous central bank institutional design that incorporates macroprudential and monetary policy decisions may facilitate monetary policy bias, because monetary policy is the agency’s fundamental role.
In financial crises, these design flaws could obscure or obstruct the capacity of central banks and/or the macroprudential body to fulfill their respective stability responsibilities. To overcome these flaws, central banks’ new macroprudential consensus necessitates a design that balances and discharges both the financial and the monetary stability mandates.
Notes

1. For detailed discussion, see Arner 2007; Lastra 1996; Bagehot 1873; Thornton 1802.
2. For discussion, see Mishkin 2001; Capie et al. 1994, 29; Laidler 1978.
3. Bade and Parkin’s studies were based on central banking laws; their approach was further developed by themselves and by a number of other scholars, some of whom drew on findings in relation to other macroeconomic variables. For example, Bade and Parkin 1982 and 1988 (budget deficits); Grilli, Masciandaro, and Tabellini 1991; Alesina 1988; Alesina and Summers 1993 (GNP); Cukierman, Webb, and Neyapti 1992 (expanding the study to sixty-eight countries); Fry et al. 2000 (inter alia, legal goals, government deficits, and the governor’s term of office). Also see Parkin 2013.
4. For example, IMF, FSB, and BIS 2016 and 2011; IMF 2013; BIS 2010 and 2011; FSB 2013a and 2013b; Clement 2010; Borio 2011; Goodhart 2011 and 2012; Caruana 2010; Nier 2009; Vinals 2010.
5. See generally Goodhart and Schoenmaker 1995.
6. For discussions on systemic risk, see Schwarcz 2008; Haldane 2009; Gai and Kapadia 2010; Acharya 2011; Hockett 2015; Kaufman and Scott 2003; Adrian 2015.
7. See generally Goodhart and Schoenmaker 1995.
8. See generally Turner 2009.
9. For a detailed discussion, see Gibson 2014, chaps. 4 and 6.
10. Financial Services Act of 2012, section 9C(2).
11. Quantitative easing (QE) can be used as a crisis-management tool and/or an economic stimulus.
12. For a discussion on formal and substantive accountability, see Buiter 2006, 22–24.
13. The QE program that began in 2015 was more of an economic stimulus exercise than a financial stability measure.
14. See generally Taylor 2009a; 2009b; 1995.
References

Abrams, R. K., and M. W. Taylor. 2000. “Issues in the Unification of Financial Sector Supervision.” IMF working paper 00/213.
Acharya, V. V. 2011. “Systemic Risk and Macro-Prudential Regulation.” CEPR draft paper, March.
Adrian, T. 2015. “Discussion of Systemic Risk and the Solvency-Liquidity Nexus of Banks.” Federal Reserve Bank of New York staff report 722.
Alesina, A. 1988. “Macroeconomics and Politics.” In NBER Macroeconomics Annual, Volume 3, edited by Stanley Fischer, 13–62. Cambridge: The MIT Press.
Alesina, A., and L. H. Summers. 1993. “Central Bank Independence and Macroeconomic Performance: Some Comparative Evidence.” Journal of Money, Credit and Banking 25, no. 2: 151–162.
Arner, D. 2007. Financial Stability, Economic Growth and the Role of Law. Cambridge: Cambridge University Press.
Bade, R., and M. Parkin. 1982. “Central Bank Laws and Monetary Policy.” Unpublished manuscript, University of Western Ontario.
Bade, R., and M. Parkin. 1988. “Central Bank Laws and Monetary Policy.” https://www.researchgate.net/profile/Michael_Parkin3/publication/245629808_Central_Bank_laws_and_Monetary_Policy/links/564a30e208ae127ff98687e5/Central-Bank-Laws-and-Monetary-Policy.pdf?origin=publication_detail.
Bagehot, W. 1873. Lombard Street. London: Henry S. King & Co.
Bank for International Settlements (BIS). 2010. “Macroprudential Instruments and Frameworks: A Stocktaking of Issues and Experiences.” CGFS paper 38.
Bank for International Settlements (BIS). 2011. “Central Bank Governance and Financial Stability.” Study group report, May.
Bank of England. 2006. “Memorandum of Understanding between HM Treasury, the Bank of England and the Financial Services Authority.” March 22. webarchive.nationalarchives.gov.uk/20081013114121/http://www.hm-treasury.gov.uk/6210.htm.
Bank of England. 2014. “Transparency and Accountability at the Bank of England.” https://www.bankofengland.co.uk/-/media/boe/files/news/2014/december/transparency-and-the-boes-mpc-response.
Bank of England. 2017. “Memorandum of Understanding on Resolution Planning and Financial Crisis Management.” https://www.bankofengland.co.uk/-/media/boe/files/210018/memoranda-of-Understanding/resolution-planning-and-financial-crisis-management.pdf.
Borio, C. 2011. “Implementing a Macroprudential Framework: Blending Boldness and Realism.” Capitalism and Society 6, no. 1: 1–23.
Borio, C., and R. Filosa. 1994. “The Changing Borders of Banking: Trends and Implications.” BIS working paper 3.
Briault, C. 1999. “The Rationale for a Single National Financial Services Regulator.” FSA Financial Regulation occasional paper 2.
Buiter, W. H. 2006. “Rethinking Inflation Targeting and Central Bank Independence.” Background paper, Inaugural Lecture for the Chair of European Political Economy, European Institute, London School of Economics and Political Science, October.
Buiter, W. H.
2014. “The Role of Central Banks in Financial Stability: How Has It Changed?” In The Role of Central Banks in Financial Stability: How Has It Changed?, edited by D. D. Evanoff, C. Holthausen, G. G. Kaufman, and M. Kremer, 11–56. Singapore: World Scientific Publishing Co. Pte. Ltd.
Capie, F., S. Fischer, C. Goodhart, and N. Schnadt. 1994. The Development of Central Banking. Cambridge: Cambridge University Press.
Caruana, J. 2010. Speech. “Macroprudential Policy: Working towards a New Consensus.” Remarks at the high-level meeting on “The Emerging Framework for Financial Regulation and Monetary Policy,” jointly organized by the BIS’s Financial Stability Institute and the IMF Institute, April 23, Washington, DC.
Caruana, J. 2011. Panel remarks. Bank of Italy Conference in Honour of Tommaso Padoa-Schioppa, December, Rome.
Čihák, M., and R. Podpiera. 2006. “Is One Watchdog Better Than Three? International Experience with Integrated Financial Sector Regulation.” IMF working paper 06/57.
Clement, P. 2010. “The Term ‘Macroprudential’: Origins and Evolution.” BIS Quarterly Review (March): 59–67.
Cukierman, A., S. Webb, and B. Neyapti. 1992. “The Measurement of Central Bank Independence and Its Effects on Policy Outcomes.” The World Bank Economic Review 6, no. 3: 353–398.
European Central Bank (ECB). 2016. “Independence.” https://www.ecb.europa.eu/ecb/orga/independence/html/index.en.html.
European Parliament. 2010. “European Union Macro-Prudential Oversight of the Financial System and Establishing a European Systemic Risk Board.” EU regulation 1092/2010.
European Union. 2012. “On the Statute of the European System of Central Banks and of the European Central Bank.” Protocol 4.
Financial Stability Board (FSB). 2013a. “Peer Review of the United Kingdom.” Review report, September 10.
Financial Stability Board (FSB). 2013b. “Peer Review of the United States.” Review report, August 27.
Fischer, S. 1996. Statement. “Central Banking: The Challenges Ahead.” 25th Anniversary Symposium of the Monetary Authority of Singapore, May 10.
Fry, M., D. Julius, L. Mahadeva, S. Roger, and G. Sterne. 2000. Key Issues in the Choice of Monetary Policy Framework. London: Bank of England.
Gai, P., and S. Kapadia. 2010. “Contagion in Financial Networks.” Bank of England working paper 383.
Gibson, E. C. 2014. “Managing Financial Stability and Liquidity Risks in Hong Kong’s Banking System: What Is the Optimum Supervisory Model?” Ph.D. dissertation, University of Hong Kong.
Goodfriend, M., and R. A. King. 2002. “Financial Deregulation, Monetary Policy, and Central Banking.” In Financial Crises, Contagion, and the Lender of Last Resort: A Reader, edited by C. Goodhart and G. Illing, 145–168. London: Oxford University Press.
Goodhart, C. A. E. 1991. The Evolution of Central Banks. Cambridge: The MIT Press.
Goodhart, C. A. E. 2000. “The Organisational Structure of Banking Supervision.” FSI occasional paper 1.
Goodhart, C. A. E. 2011. “The Changing Role of Central Banks.” Financial History Review 18, no. 2: 135–154.
Goodhart, C. A. E. 2012. “The Macro-Prudential Authority: Powers, Scope and Accountability.” OECD Journal: Financial Market Trends 2011, no. 2: 97–123.
Goodhart, C., and D. Schoenmaker. 1993.
“The Institutional Separation between Supervisory and Monetary Agencies.” London School of Economics Financial Markets Group special paper 52. Goodhart, C., and D. Schoenmaker. 1995. “Should the Function of Monetary Policy and Banking Supervision Be Separated?” Oxford Economic Papers 47, no. 4: 539–560. Grilli, V., D. Masciandaro, and G. Tabellini. 1991. “Political and Monetary Institutions and Public Finance Policies in the Industrial Countries.” Economic Policy 6, no. 13: 341–392. Haldane, A. J. 2009. Speech. “Rethinking the Financial Network.” Financial Student Association. April 28, Amsterdam. Hegel, G. W. F. 1820. Elements of the Philosophy of the Right. Berlin: NP. Hokkett, R. C. 2015. “The Macroprudential Turn: From Institutional ‘Safety and Soundness’ to Systemic Stability.” Virginia Law & Business Review 9, no. 2: 201–256. International Monetary Fund (IMF). 2013. “Key Aspects of Macroprudential Policy.” IMF policy paper, June 10. International Monetary Fund (IMF), Financial Stability Board (FSB), and Bank for International Settlements (BIS). 2011. “Macroprudential Policy Tools and Frameworks: Progress Report to G20.” October 27. https://www.imf.org/external/np/g20/pdf/102711.pdf. International Monetary Fund (IMF), Financial Stability Board (FSB), and Bank for International Settlements (BIS). 2016. “Elements of Effective Macroprudential Policies: Lessons from International Experience.” August 31. https://www.imf.org/external/ np/g20/pdf/2016/083116.pdf.
502 Taylor, Arner, and Gibson Kauffman, G. G., and K. E. Scott. 2003. “What Is Systemic Risk, and Do Bank Regulators Retard or Contribute to It?” Independent Review 7, no. 3: 371–391. Kydland, F., and E. Prescott. 1977. “Rules Rather Than Discretion: The Inconsistency of Optimal Plans.” Journal of Political Economy 85, no. 3: 473–492. Laidler, D. E. 1978. The Demand for Money: Theories and Evidence. New York: Dun Donnelley Publishing Corp. Lastra, R. 1996. Central Banking and Banking Regulation. London: London School of Economics and Political Science, Financial Markets Group. Mishkin, F. S. 2001. “From Monetary Targeting to Inflation Targeting: Lessons from Industrialized Countries.” World Bank working paper 2684. Nier, E. W. 2009. “Financial Stability Frameworks and the Role of Central Banks: Lessons from the Crisis.” IMF working paper 09/70. Parkin, M. 1993. “Domestic Monetary Institutions and Deficits.” In Deficits, edited by James Buchanan, Charles Rowley, and Robert Tollison, 310–337. New York: Basil Blackwell. Parkin, M. 2013. “Central Bank Laws and Monetary Policy Outcomes: A Three Decade Perspective.” American Economic Association meeting session, “Central Bank Independence: Reality or Myth?” January 4, San Diego. Rogoff, K. 1985. “The Optimal Degree of Commitment to an Intermediate Monetary Target.” Quarterly Journal of Economics 100, no. 4: 1169–1189. Schooner, H. M., and M. Taylor. 1999. “Convergence and Competition: The Case of Bank Regulation in Britain and the United States.” Michigan Journal of International Law 20, no. 4: 595–655. Schwarcz, S. 2008. “Systemic Risk.” Georgetown Law Journal 97, no. 1: 193–249. Taylor, M. W. 1995. “Twin Peaks”: A Regulatory Structure for the New Century. London: Centre for the Study of Financial Innovation. Taylor, M. W. 2009a. “The Road from ‘Twin Peaks’—and the Way Back.” Connecticut Insurance Law Journal 16, no. 1: 61–95. Taylor, M. W. 2009b. Twin Peaks Revisited’: A Second Chance for Regulatory Reform. 
London: Centre for the Study of Financial Innovation. Taylor, M. W., and A. Fleming. 1999. “Integrated Financial Supervision: Lessons from Northern European Experience.” World Bank working paper 2223. Thornton, H. 1802. An Enquiry into the Nature and Effects of the Paper Credit of Great Britain. London: George Allen & Unwin Ltd. Timberlake, R. H. 1984. “The Central Banking Role of Clearinghouse Associations.” Journal of Money, Credit and Banking 16, no. 1: 1–15. Tinbergen, J. 1952. On the Theory of Economic Policy. Amsterdam: North-Holland Publishing Company. Turner, A. 2009. The Turner Review: A Regulatory Response to the Global Banking Crisis. London: Financial Services Authority. US Department of Treasury. 2016. “Financial Stability Oversight Council.” https://www. treasury.gov/initiatives/fsoc/about/Pages/default.aspx. Vinals, J. 2010. “Central Banking Lessons from the Crisis.” IMF policy paper, May.
Chapter 17
Central Banks and the New Regulatory Regime for Banks

David T. Llewellyn
17.1 Introduction

This chapter focuses on the role of central banks (including the European Central Bank) as regulators and supervisors of banks. While not all central banks have this role (though almost invariably they have an input in one way or another), they are almost universally responsible for systemic stability. It follows that central banks have an interest in all aspects of bank regulation and bank business models that have a potential impact on systemic stability. Major parts of the regulation of banks are the responsibility of international bodies such as the Basel Committee on Banking Supervision, the European Banking Authority, and the European Central Bank (ECB). The role of central banks, when they have regulatory responsibilities, is largely in the area of supervision. As already noted, the role of central banks in both regulation and supervision varies among countries.

The European institutional landscape has changed radically in the past few years in two major respects. In the United Kingdom, the Bank of England has regained responsibility for prudential regulation (with conduct-of-business regulation assigned to the separate Financial Conduct Authority) following the demise of the Financial Services Authority. In the European Union, the Single Supervisory Mechanism (SSM, established in 2014) has become the single supervisory authority for all large banks in the euro area and in any other member state that chooses to participate in the putative European banking union. National regulatory agencies (whether or not they are central banks) will continue to have an important role. The intent of the SSM is to provide more consistent and harmonized supervisory practices and standards within the euro area, which is a difficult goal given the different
national complexities. The main aims (as described by the ECB) are threefold: to ensure the safety and soundness of the European banking system, to enhance financial integration and stability, and to ensure consistent supervision of banks. The ECB has authority to conduct supervisory reviews, grant and withdraw banking licenses, assess banks’ acquisitions and disposals of qualifying holdings, ensure compliance with EU prudential rules, and set higher capital requirements when appropriate. The ECB will directly supervise banks that satisfy any of five conditions: having assets in excess of €30 billion; having a ratio of assets to GDP of the home country in excess of 20 percent; being judged by the ECB to be “systemically significant”; being (irrespective of size) one of the three most significant banks operating in its home country; and having received financial support from the European Stability Mechanism or the European Financial Stability Facility. Overall, the ECB is responsible for the supervision of banks that account for more than 80 percent of euro-area bank assets. With respect to these banks, the ECB also has the power to impose prudential requirements. The ECB will also carry out supervisory reviews and stress tests and impose and assess compliance with governance and probity requirements, including “fit and proper” tests. All other banks are to be supervised by national authorities (national central banks where relevant) but under the oversight of the ECB, which also has authority to take over direct supervision of any bank.

The objective of this chapter is to offer perspectives on the postcrisis regulatory regime adopted by central banks and other regulatory agencies. The context is that the most serious global banking crisis in living memory has given rise to one of the most substantial structural changes in the regulatory regime.
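The significance criteria just described operate as an “any of” test. A minimal sketch (this is an illustration, not the legal SSM test; the size threshold used is the SSM’s headline €30 billion criterion, and the other inputs simply follow the description above):

```python
# Illustrative sketch of the SSM significance test described above.
# A bank is directly supervised by the ECB if it meets ANY criterion.
# Thresholds follow the chapter's description; this is not a legal test.

def directly_supervised(assets_bn, home_gdp_bn, deemed_systemic,
                        top3_in_home_state, received_esm_efsf_support):
    return any([
        assets_bn > 30,                  # size: assets above EUR 30 billion
        assets_bn / home_gdp_bn > 0.20,  # economic importance: >20% of home GDP
        deemed_systemic,                 # judged "systemically significant" by the ECB
        top3_in_home_state,              # among the three most significant banks at home
        received_esm_efsf_support,       # received ESM/EFSF financial support
    ])

# A EUR 12bn bank in a EUR 50bn economy is "significant" on the GDP test alone:
print(directly_supervised(12, 50, False, False, False))  # True
```

Note that the disjunctive structure is what pulls many mid-sized banks in small member states under direct ECB supervision even though they are far below the size threshold.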
Two core objectives of regulation are identified at the outset: lowering the probability of bank failures (objective 1) and minimizing the social costs of those bank failures that do occur (objective 2). For our purposes, the regulatory regime refers to the combination of measures addressing these two core objectives. One lesson of the crisis was that the precrisis focus of regulation had been on objective 1, with little attention given to objective 2. Historically, the focus of the regulatory regime has been on reducing the probability of failures rather than minimizing their costs, and in many countries, the second issue has been addressed in a serious way only since the crisis.

What can be termed a regulation matrix illustrates the possibility of a trade-off between the two objectives; if the social costs of failure can be lowered, there need be less concern about the probability of failures. In the extreme (totally unrealistic) case, if the social costs of bank failures could be reduced to zero, the probability of failures would be of little concern; there would be no potential taxpayer liability, no need for bailouts, and no moral hazard attached to bailouts. Furthermore, there would be no need for regulation to reduce the probability of bank failures. Such an Alice in Wonderland utopia is just that. Nevertheless, it serves to illustrate the nature of the trade-off implicit in the regulation matrix. The central strategic issue in any comprehensive regulatory reform program is the positioning chosen within the regulation matrix.

A central imperative of the postcrisis regulatory reform strategy has been to limit claims on taxpayers and to prevent risks being shifted to them. As part of this, credible,
predictable, and timely resolution arrangements for failing institutions are needed to enable banks to fail without causing undue disruption to customer services or imposing costs on taxpayers.

Prior to the onset of the crisis, most countries did not have special resolution arrangements for banks but instead applied the traditional approaches to insolvency management (e.g., liquidation and reorganization procedures) that apply to any company. These worked reasonably well, and without systemic repercussions, in cases of relatively small bank insolvencies in otherwise sound markets. They would not, however, work efficiently in the case of large banks or where there could be significant systemic impacts. Given the limited resolution arrangements in place in most countries prior to the onset of the 2007–2008 crisis, governments had little choice other than to bail out (in one way or another) key financial institutions in serious distress. This, in turn, imposed costs on taxpayers and created moral hazard for the future. Recent experience has also revealed that in a crisis, and as a result of bailouts, the taxpayer becomes the insurer of last resort, albeit on the basis of an inefficient insurance contract, as no ex ante premiums are paid by the industry.

A major feature of the postcrisis regulatory reform program is that it has included resolution arrangements to address objective 2. If future bailouts are to be avoided, commitments to refrain from such action need to be made credible, and such credibility will emerge only if reform of the arrangements for objective 2 delivers low costs of bank failure, obviating the need for bailouts.

Several lessons for regulation arise from the crisis and have informed the reform process.
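The trade-off implicit in the regulation matrix can be made concrete with a toy expected-cost calculation. All numbers here are invented for illustration:

```python
# Toy illustration of the regulation matrix: expected social cost of
# bank failure = probability of failure x social cost given failure.
# All numbers are hypothetical.

def expected_cost(p_failure, cost_given_failure):
    return p_failure * cost_given_failure

# Objective 1 only: regulation makes failure rare, but with no resolution
# regime each failure is very costly (bailouts, taxpayer exposure).
print(expected_cost(0.01, 500))   # 5.0

# Adding objective 2: failures are somewhat more likely to be tolerated,
# but resolution arrangements cap the social cost of each one.
print(expected_cost(0.03, 50))    # 1.5

# Limiting case in the text: if the cost of failure were zero, the
# probability of failure would not matter at all.
print(expected_cost(0.10, 0))     # 0.0
```

The second regime has a higher failure probability yet a lower expected social cost, which is exactly the trade-off the regulation matrix is meant to expose.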
Clearly, detailed regulation did not prevent the crisis, which suggests one of three interpretations: there were fault lines in the detail of previous regulation, the general approach to bank regulation was misguided, or supervisory staff failed to detect developments in time and/or did not make appropriate use of their powers to enforce existing regulation. A number of bank failures revealed such shortcomings. It also emerged that many aspects of the underlying culture in some parts of the banking sector were hazardous in terms of excess risk-taking, systemic stability, and consumer protection. This, in turn, might suggest that the culture of banks needs to be part of the supervisory process and that central banks and other regulatory and supervisory agencies might themselves need a culture that places more focus on the underlying culture of regulated institutions.

The structure of the chapter is as follows. The remainder of this section summarizes the main themes. This is followed in section 17.2 by a discussion of the endogeneity paradigm and the limits this implies for the effectiveness of bank regulation, whether conducted by central banks or other agencies. Section 17.3 considers the major strategic changes that have occurred in bank regulation in the wake of the banking crisis, followed in section 17.4 by a discussion of the proportionality principle that regulators are supposed to apply. Sections 17.5 and 17.6 consider the central role of culture within banks and its relevance as a supervisory issue. Section 17.7 considers the culture of central banks and other regulatory agencies in the context of the culture of banks being a supervisory issue.
The main themes are summarized as follows:

• One of the deepest banking crises in history has been followed by one of the biggest-ever changes in the regulatory regime. This has been both incremental (revisions to existing capital and liquidity regulation) and strategic (a wider scope of regulation and the establishment of more formal resolution arrangements for objective 2). What is termed the regulatory regime has changed substantially since the crisis, becoming more extensive, intensive, prescriptive, and complex.

• A symbiotic relationship exists between bank business models and behavior on the one hand and regulation on the other, with each reacting to the other in a way that leads to regulatory escalation. The endogeneity problem (banks seeking to minimize regulatory costs through regulatory arbitrage) has the potential both to limit the effectiveness of regulation and to induce a process of regulatory escalation and greater complexity in regulation. Both tendencies have the potential to compromise the requirement for proportionality in bank regulation.

• In a rules-escalation process, the issue of proportionality needs to be considered in terms of the totality of regulation, possible excess complexity, whether sufficient differentiation is made between different types of banks, and materiality.

• The underlying culture of banking is important because it shapes business standards and employee attitudes and behavior; it overarches all aspects of the business. The culture of banks changed in many ways in the years prior to the crisis, which contributed to the banking crisis and compromised consumer protection.

• Bank culture could become a central issue when considering the optimal regulatory regime. Detailed and prescriptive rules are a necessary but not sufficient part of any regulatory regime.
There are limits to what can be achieved through detailed, prescriptive, and complex rules if the underlying culture of banking is hazardous. The challenge is to manage the interface between regulation, supervision, and culture within both banks and supervisory agencies.

• While regulation cannot determine culture, its central importance means a case can be made for banking culture becoming a central supervisory issue. A danger exists that detailed and extensive regulation may itself divert banks’ attention from their underlying culture and the need for reform. This, in turn, implies that a different culture may be needed in central banks and other supervisory agencies.

The next section considers the symbiotic relationship between bank business models and behavior, on the one hand, and regulation, on the other.
17.2 The Endogeneity Paradigm and Regulation

Regulatory strategy conventionally assumes that the problems to be addressed by regulation (e.g., excessive risk-taking by banks) are exogenous to the regulatory process. In this view, a problem is observed, and a regulatory response is made to deal with it, that is, to reduce the probability of its happening again. This is a bold assumption, because problems and hazardous behavior by regulated firms may be at least partly endogenous to regulation itself. This arises as banks seek to circumvent regulation through financial innovation and by changing the way business is conducted, which in turn calls forth more regulation: Kane’s “regulatory dialectic” (Kane 1987). As regulation responds to this endogeneity by successive adjustments, the cost of regulation rises. As the costs of regulation designed to lower the probability of bank failures rise, the trade-off between the two objectives in the regulation matrix moves in favor of minimizing the cost of bank failures as opposed to their probability.

The endogeneity problem is likely to raise the cost of regulation because it engenders a rules-escalation process. By raising regulatory costs, it becomes part of the trade-off between the two core objectives. The process of regulatory arbitrage also shifts the nature of the problem. Because of this, regulation is often shooting at a moving target, with the target itself moving partly because of regulation. For instance, the Basel 2 capital regime (hailed at the time as a decisive breakthrough in the regulation of banks) created incentives for banks to remove assets from their balance sheets: for securitization, the creation of structured investment vehicles and other off-balance-sheet vehicles, excess gearing, and the use of a range of credit risk-shifting derivatives such as credit default swaps and synthetic collateralized debt obligations. All of these featured as central aspects of the banking crisis (Llewellyn 2010).
It is evidently the case that detailed regulation at the time did not prevent the crisis and, to some extent, may have contributed to it. In addition to regulatory arbitrage, under some circumstances, some aspects of regulatory capital requirements may induce banks into more risky business. Blundell-Wignall, Atkinson, and Lea (2008) show a positive correlation between losses and banks’ Tier 1 risk-weighted capital ratios, although there is a negative correlation between losses and leverage ratios. A similar conclusion is reached in a Centre for European Policy Studies report (Ayadi et al. 2012), which suggests that the risk-weight approach to capital adequacy may induce banks to incur more risk through increased leverage. Regulatory arbitrage will always be a feature of bank business models. As noted in Haldane (2010), “risks migrate to where regulation is weakest, so there are natural limits to what regulatory strategies can reasonably achieve” (3).

The endogeneity problem can be considered in the context of the history of the Basel Capital Accord. The original Basel regime established in 1988 was revised in Basel 2
and again in 2010 in Basel 3. One interpretation is that subsequent adjustments imply movement toward the perfect model (Basel N) by correcting for past errors. An alternative interpretation is that there are fault lines in the regulatory process itself and that the methodology is flawed because banks will always engage in regulatory arbitrage. In this sense, the view that regulators are always behind the curve is not a critique of regulators but an endemic feature of the regulatory process. Successive adjustments over time have not solved the problem of periodic crises. This problem might be alleviated (as suggested in a substantial literature) by a relative shift toward a principles-based rather than a detailed rules-based approach to bank regulation. Unless regulation is to become grossly repressive, regulatory arbitrage will always be a major feature of bank business models.

Three particular implications follow from the endogeneity paradigm. First, it highlights the limits to what regulation for objective 1 can achieve. Second, it is likely to engender a process of regulatory escalation as regulation becomes more intensive, detailed, and complex. Third, regulation is increasingly prone to compromising the proportionality principle. Given the limitations of regulation, while rules designed to lower the probability of bank failures are a necessary part of an overall regulatory regime, they are not sufficient. Two alternative approaches are to lower the cost of bank failures by keeping risks private rather than socialized by shifting them to taxpayers (as was the case in the recent crisis), and to place a greater focus on supervising underlying culture rather than escalating the rules regime.
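The risk-weight critique discussed in this section (strong risk-weighted capital ratios coexisting with thin leverage ratios) can be illustrated with a toy balance sheet. The figures and the 10 percent risk weight are invented for illustration, not actual Basel calibrations:

```python
# Hypothetical illustration: the same capital base can look strong on a
# risk-weighted measure while the leverage ratio is thin, if assets
# carry low regulatory risk weights. All figures are invented.

def risk_weighted_ratio(capital, exposures):
    """exposures: list of (amount, risk_weight) pairs."""
    rwa = sum(amount * weight for amount, weight in exposures)
    return capital / rwa

def leverage_ratio(capital, exposures):
    total_assets = sum(amount for amount, _ in exposures)
    return capital / total_assets

capital = 10.0
# 500 of assets regarded as "low risk" (e.g., highly rated tranches
# carrying a 10% risk weight in this toy example).
exposures = [(500.0, 0.10)]

print(risk_weighted_ratio(capital, exposures))  # 0.2  -> a "strong" 20%
print(leverage_ratio(capital, exposures))       # 0.02 -> only 2% of assets
```

The bank reports a 20 percent risk-weighted ratio while holding capital equal to just 2 percent of total assets, which is the pattern behind the Blundell-Wignall and Ayadi findings cited above.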
17.3 The New Global Regulatory Regime

The regulatory pendulum has a long history as bank regulation swings between more and less intensity, coverage, and complexity. There are several reasons for this, not least the bargaining and lobbying power of regulated institutions as they seek to minimize regulatory costs. Prevailing ideology also plays a part, as views change about the efficiency of markets and institutions in making rational, prudential, and systemically safe decisions. The ideology prevailing in the years running up to the crisis was dominated by the rational expectations and efficient markets paradigms, which were supported not only in mainstream academia but also by banks, regulators, and (in some countries) politicians. This spawned light-touch approaches to regulation. In some countries (notably the United States with the Gramm-Leach-Bliley Act of 1999), major legislative changes were made to the structure of regulation, allowing banks to conduct a wider range of business. There is also a tendency for approaches to regulation to be based disproportionately on recent events and trends (the availability heuristic), with risk-averse regulators under pressure to respond to a recent problem but gradually easing off as judgments are made that perhaps the initial reaction was excessive.
Several structural features of the precrisis environment proved to be unsustainable and contributory causes of the crisis. An underlying perversity was that bank profits were privatized while risks were socialized, with the taxpayer effectively acting as an insurer of last resort on the basis of an inefficient contract, as no ex ante premiums were paid. Overall, as there was a reluctance to require creditors to absorb a proportionate share of the costs of bank distress and failures, burden sharing was disproportionate, with an excessive share borne by taxpayers. In the absence of predetermined resolution arrangements, the perception of banks being too big to fail (TBTF) weakened the incentives for private monitoring of banks.

Banking crises inevitably bring forth more and different regulation of banks, and the recent global crisis is no exception. There are many reasons a comprehensive review of regulatory, supervisory, and intervention arrangements has been, and is being, made in the wake of one of the most serious banking crises ever. First, given the enormity of the crisis, there were evident fault lines in regulatory and supervisory arrangements; the rules enshrined in the thousands of pages behind the Basel Capital Accords did not prevent the crisis. Indeed, they may even have incentivized market participants to engage in regulatory arbitrage and/or more excessive risk-taking. Second, the crisis imposed substantial costs and risks on taxpayers. Third, it became evident that reform strategy needed to be framed in terms of measures to lower both the probability of bank failures and the cost of those failures that do occur.
Fourth, there is the issue of whether the focus should be on individual banks or on the system in aggregate, because, given the fallacy of composition, it does not necessarily follow that regulating individual nodes in a network is the optimal approach to ensuring the stability of the network as a whole (see Haldane 2010).

As a result, the crisis has been followed by one of the biggest-ever reforms of the international regulatory regime and (most especially with respect to the European Union) of the basic regulatory architecture; the regime has become more extensive, intensive, and complex. The approach to reform has been both incremental and strategic. With respect to the former, regulation has become more detailed, intensive, granular, voluminous, and complex. Banks are required to have higher levels of loss-absorbing capacity, are subject to more intensive and extensive supervision (including of internal risk models), and face more detailed reporting requirements. In many regimes, banks are required to construct living wills (recovery and resolution plans) and are subject to ring-fencing conditions. In this regard, major changes have been introduced to capital and liquidity requirements, which have become more detailed and extensive (covering a wider range of risks). New categories of capital requirement have been introduced: the capital conservation buffer, the countercyclical capital buffer, the systemic risk buffer, the leverage ratio, and bail-inable debt. With regard to liquidity, two ratio requirements have been established: the net stable funding ratio and the liquidity coverage ratio. As these have been discussed extensively elsewhere, I do not discuss them further here.

With respect to the strategic dimension of regulatory reform, the main elements have been a greater focus on objective 2 through resolution measures, a widening
of the scope of regulation, and changes in the ethos of supervision, including in some cases (notably the Netherlands) a focus on banks’ culture. The concept of the regulatory regime has been extended to include structural measures to minimize the social costs of bank failures (the resolution regime). More intensive and detailed supervision and reporting requirements have also been imposed. The introduction of living wills (recovery and resolution plans), various structural measures such as ring-fencing requirements (designed in part to make resolution of failed banks easier and less costly), and the principle of bail-in for some bondholders have also taken regulation in new directions. Banks have also become subject to rigorous stress testing and more intense supervisory scrutiny of internal models (see, for instance, Bank of England 2015; European Banking Authority 2016).

In the precrisis period, in the absence of credible and predictable resolution arrangements for failing banks, there was little choice about whether bailouts and official support operations for TBTF banks should be undertaken. Given the potential costs of bank failures (largely due to the absence of credible resolution arrangements), rescue operations or bailouts may be the least-cost option in the short run. However, given the time-consistency issue, such bailouts create serious moral hazard for the future. A distinction is therefore made between short-run and long-run optimality with respect to rescue operations. Only if the social costs of bank failures can be minimized will a no-bailout policy be credible.

The absence of clearly defined and credible resolution arrangements was an unsustainable feature of the precrisis environment that needed to be corrected. In effect, arrangements were needed that would allow banks to fail without imposing substantial systemic and social costs and taxpayer liability.
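The layered capital requirements introduced by the postcrisis reforms, discussed earlier in this section, are additive. A stylized sketch: the 4.5 percent minimum and 2.5 percent conservation buffer echo Basel 3 headline figures, while the countercyclical and systemic buffer values vary by jurisdiction and bank and are assumed here:

```python
# Stylized stack of postcrisis capital requirements, as a percent of
# risk-weighted assets. The 4.5% minimum and 2.5% conservation buffer
# echo Basel 3 headline figures; the countercyclical and systemic
# buffers vary by jurisdiction/bank and are assumed values here.

requirements = {
    "minimum CET1":                4.5,
    "capital conservation buffer": 2.5,
    "countercyclical buffer":      1.0,   # set by national authorities (assumed)
    "systemic risk buffer":        1.5,   # bank-specific (assumed)
}

total = sum(requirements.values())
print(f"total CET1 requirement: {total}% of risk-weighted assets")
```

The stacking is the mechanical source of the “more extensive, intensive, and complex” regime described above: each buffer is individually modest, but the binding requirement is their sum.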
Given the increased cross-border interconnectedness that developed over the decade or so before the onset of the crisis, the absence of agreed cross-border resolution arrangements surfaced as a particular issue that needed to be addressed, most especially within the European Union. The problem arose in both the European Union and the United States, as illustrated by the failure to adopt a coordinated approach to the Lehman Brothers collapse and the many difficulties in coordinating insolvency procedures for the bank’s subsidiaries.

Whatever regulatory regime exists to reduce the probability of bank failures, it can never reduce the probability to zero, and neither should it attempt to do so, as this would imply gross overregulation, which would undermine the effectiveness and efficiency of the banking sector. As there will always be bank failures, it is prudent to have explicit resolution regimes so as to reduce the social costs when they do occur. Overall, optimality in the regulatory regime shifts toward objective 2 the more costly or less effective regulation to lower the probability of bank failures becomes, and the more the costs of bank failures can be reduced. Optimal regulation for objective 1 is, therefore, indeterminate independently of the arrangements for objective 2, which implies that a strategic (holistic) approach is required: a regulatory regime that simultaneously addresses both objectives. The aim, therefore, is not to create optimal regulation but to optimize the overall regulatory regime.
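The claim that optimality shifts toward objective 2 as the cost of failure falls can be sketched with an invented total-social-cost function. The functional form and all numbers are made up purely for illustration:

```python
# Invented illustration of optimizing the overall regulatory regime:
# total social cost = cost of regulation (objective 1 effort)
#                   + p(failure) x social cost of a failure (objective 2).
# Functional form and all numbers are hypothetical.

def total_cost(intensity, failure_cost):
    regulation_cost = 2.0 * intensity
    p_failure = 0.10 / (1.0 + intensity)   # more regulation -> rarer failures
    return regulation_cost + p_failure * failure_cost

def best_intensity(failure_cost):
    # choose the regulatory intensity (0..10) minimizing total social cost
    return min(range(0, 11), key=lambda i: total_cost(i, failure_cost))

# Without resolution arrangements, each failure is very costly and heavy
# regulation is optimal; with a resolution regime capping failure costs,
# the cost-minimizing regulatory intensity falls.
print(best_intensity(500))   # 4
print(best_intensity(100))   # 1
```

In this toy setting, cutting the social cost of a failure from 500 to 100 lowers the optimal regulatory intensity, which is the holistic trade-off the text describes: objective 1 cannot be optimized independently of the arrangements for objective 2.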
The New Regulatory Regime for Banks 511 Historically, with the possible exceptions of the United States and Canada, the focus of regulation has been on reducing the probability of failures rather than minimizing their costs. Indeed, in many countries, the second issue has only been addressed in a serious way since the crisis. For instance, the United Kingdom adopted a comprehensive Special Resolution Regime in the 2009 Banking Act in the context of the absence of any prior special insolvency arrangements for banks and with weak and ill-defined institutional arrangements for dealing with failing institutions. In practice, both approaches are needed: lowering the probability of bank failure and limiting the social costs of failures. The European Union has also adopted a comprehensive approach with the 2014 Bank Recovery and Resolution Directive, which in turn implements and closely mirrors the Financial Stability Board principles. The debate about the role of regulation and supervision for financial stability needs to be about the appropriate weight to be given to the two dimensions. Given that all regulatory measures to reduce the probability of bank failures have costs, and the costs that would arise in seeking to reduce the probability of failure to zero (supposing it were possible even with the most draconian regulation) would be substantial, the trade-off between the two dimensions is central to decisions about the optimal intensity of regulation. In practice, a combined strategy is likely to be optimal, and this has been the approach adopted since the crisis. Prior to the onset of the crisis, few countries had clearly defined resolution arrangements in place or a legal structure giving powers of central bank intervention before insolvency is reached. Exceptions were the United States, Canada, Italy, and Norway, with Norway having put in place special resolution arrangements following the country’s banking crisis in the 1990s. 
It became evident that, to avoid such uncertainty, a reformed regulatory regime needed to encompass credible, predictable, and timely resolution arrangements for failed institutions that limit the potential liability imposed on taxpayers and maintain systemic stability. The objective has been to create arrangements that allow banks to “fail” without disturbing business and customer relationships and that ensure the costs of failure fall on private stakeholders (equity holders, bondholders, and other unsecured creditors) rather than taxpayers. Bailouts are to be avoided, as they impose costs on taxpayers, create serious moral hazard, may support inefficient banks, and weaken market discipline. A key objective is to minimize the moral hazard created by bank rescues.

Several problems emerge when resolution arrangements are not clear. First, uncertainty and unpredictability are created for all stakeholders, including depositors and other banks in the system. Second, lack of clarity creates time-consistency problems, as expectations may form that governments will always bail out failing banks; the credibility of “no bailouts” is undermined if the perception is that banks will always be rescued when the social costs of failure are deemed high. Third, stakeholders are inclined to bargain for economic rents, usually at the expense of the taxpayer. Fourth, it can generate political pressure for forbearance, with the attendant moral hazard, and can cause costly and unnecessary delays in resolution. Fifth, uncertainty is created with respect to rights and obligations in the event of bank failures, and the absence
512 Llewellyn

of predictable rules on burden allocation creates delay in resolution. Two further considerations arise in the case of cross-border banks: the extent to which countries have different resolution regimes and how the burden of resolution is to be shared. Lack of clarity in resolution arrangements is not the only issue. Arrangements regarding the rules, institutions, responsibilities, and powers may be clear, yet the economic implications may not be sufficiently predictable to make implementation in particular cases likely. Several elements of the new resolution arrangements are included in the United Kingdom's Special Resolution Regime: the facility for private-sector purchases of failed banks, transfer of engagements to a bridge bank, partial transfer of assets and liabilities to other institutions, temporary public ownership, the ability to restructure the claims on an institution (e.g., debt-equity conversions and the writing down of unsecured creditors' claims), forced mergers and acquisitions without shareholder consent, ring-fencing, living wills, the creation of bad banks, the suspension or termination of powers, and forced haircuts imposed on some investors in banks. The central objective is to lower the social costs of individual bank failures, although the power of central banks in this dimension varies among countries.
17.3.1 From Bailout to Bail-In

With regard to objective 2, several resolution instruments and procedures are available: special insolvency laws applied exclusively to banks, private sales of failed banks, partial transfers of failed banks' assets, mergers, transfers of engagements to bridge banks, the creation of bad banks, temporary nationalization, and so on. A key objective of the new regulatory regime is to significantly enhance banks' total loss-absorbing capacity. One component of this is a bail-in regime whereby some bank debt liabilities can be either written down or converted into equity at the command of the regulator. Banks are to be required to hold a certain amount of bail-inable debt. A bail-in forces some of a bank's creditors to bear part of the burden of resolution: the claims of shareholders and unsecured creditors are written down or converted into equity to absorb the losses of the failed bank and to enable a recapitalization of the bank or its successor. In the case of Cyprus, the creditors forced into bail-in were bondholders and depositors with deposits in excess of €100,000. The bail-in regime is designed to meet several related objectives:
• To make banks safer.
• To reduce the potential claim on taxpayers in the event of a bank resolution by ensuring that shareholders and some creditors bear the costs of bank failure. In this sense, it is designed to avoid the asymmetric reward structure sometimes faced by banks whereby shareholders gain when the bank makes profits (including when this results from the bank taking excessive risk) but taxpayers incur losses when
bailouts are made. In so doing, the bail-in regime is designed to remove the moral hazard inherent in this process.
• To avoid the sometimes cumbersome and time-consuming normal corporate insolvency process.
• To ensure that bank failures are managed in an orderly manner.
• To avoid the difficulty of quickly finding a buyer for a large failed bank (Chennells and Wingfield 2013).
• To enable the bank to continue to operate its critical functions. Where possible, bail-in offers the resolution authorities and bank management more time to find a permanent solution and to restructure the bank's operations and business model.
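The loss-absorption mechanics described above can be sketched as a simple creditor waterfall. The hierarchy, figures, and function below are purely illustrative assumptions; actual bail-in sequences, exemptions, and thresholds are set by the applicable resolution law (e.g., the BRRD in the EU).

```python
# Stylized bail-in waterfall (illustrative only). Claims are written down
# in order of seniority, from most junior (equity) upward, until the loss
# is absorbed. In practice, written-down debt claims are typically
# converted into new equity to recapitalize the bank or its successor.
def bail_in(loss, claims):
    """claims: ordered list of (name, amount), most junior first.

    Returns (write_downs, remaining_claims, uncovered_loss)."""
    write_downs = {}
    remaining = []
    for name, amount in claims:
        hit = min(loss, amount)
        write_downs[name] = hit
        remaining.append((name, amount - hit))
        loss -= hit
    return write_downs, remaining, loss

# A bank facing a loss of 25 with 10 equity, 8 subordinated debt, and
# 30 senior unsecured debt (all figures invented):
write_downs, remaining, uncovered = bail_in(
    25, [("equity", 10), ("sub_debt", 8), ("senior_unsecured", 30)]
)
# Equity and subordinated debt are wiped out; senior unsecured creditors
# absorb the residual 7, and taxpayers contribute nothing.
```

The final problem noted below in the text also shows up naturally here: if total bail-inable claims are smaller than the loss, `uncovered` is positive and the resolution authority must look elsewhere for loss-absorbing capacity.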
In the process, bail-in should in theory encourage more monitoring by bank bondholders and a more realistic pricing of bank debt. It remains to be seen how this regime will operate in practice, especially when systemically significant banks are involved. Several potential problems might emerge. First, the requirement for bail-in at a particular bank might induce panic among the creditors of other banks. For instance, the restructuring of Greek government debt created serious problems for banks in Cyprus that were substantial holders of Greek government bonds. Second, according to the Financial Times (2017), "bail-in regimes have unnerved bondholders because they are not traditional bankruptcies which have strict rules and a court-supervised process that mean creditors are ranked in order of repayment precedence, and those in each group must be treated equally." Third, a country may not have an appropriate legal framework ensuring that the resolution authority has the necessary powers to act. Fourth, banks may not always have sufficient bail-inable liabilities to absorb the losses and be recapitalized.
17.4 The Proportionality Principle

It is a formal mandate of the European Commission and other regulatory authorities that regulatory requirements should be "proportionate." The EU Treaty definition of the principle of proportionality is that the content and form of EU action shall not exceed what is necessary to achieve the objectives of the treaties (TEU [Treaty on European Union], art. 5[4]). According to settled case law, the principle of proportionality requires that European Community measures
1. do not exceed the limits of what is appropriate and necessary in order to attain the objectives legitimately pursued by the legislation in question;
2. where there is a choice among several appropriate measures, have recourse to the least onerous; and
3. do not cause disadvantages that are disproportionate to the aims pursued.1
514 Llewellyn

The European Banking Authority's Banking Stakeholder Group (2015) presented a report of case studies illustrating where, in its judgment, the principle of proportionality had not been followed to the full extent possible. There are many reasons why proportionality in regulation is important:
• Regulation imposes costs: the resource costs of the regulatory agencies; compliance costs imposed on regulated firms (IT, employees in the compliance area and their training, use of regulatory consultants, etc.); customer costs, to the extent that the costs of regulation are passed on to customers; and broader economic costs.
• While in some cases regulation may be designed specifically to change the way banks do business, disproportionate regulation may induce changes in bank business models that are not necessary for the regulatory objectives. A member of the Financial Stability Board's enhanced disclosure task force has argued that new regulation imposed on European banks makes "huge swathes of traditional bank lending unprofitable . . . the profitability of European mortgages will now fall by about two-thirds while the profitability of lending to the strongest corporate customers will fall by three quarters. . . . European banks will have to shrink dramatically."2
• Disproportionate regulation may induce arbitrage within the banking system if, for any reason, regulation impacts disproportionately on some types of banks. It is also likely to induce a process of disintermediation toward less regulated institutions and the capital market. While this is not necessarily to be condemned, the potential for such disintermediation to occur because of disproportionate regulation needs to be carefully monitored.
• Disproportionate regulation may compromise competition in the banking system. A study by the European Commission has argued that "there is a risk that the rules will increase barriers to entry for market entrants.
Regulation tends to impose a disproportionate burden on small players in the market and new entrants, which can make it harder for them to compete with more established players" (European Commission 2014, 260).
• Regulation tends to be particularly costly for small institutions, partly because of the fixed costs that compliance systems involve. This may compromise the competitive position of such institutions, and to the extent that it induces more mergers in the banking industry, competition can be reduced. As Goodhart et al. (1999) put it: "To the extent that regulation enhances competition and, through this, efficiency in the industry, it creates a set of markets that work more efficiently and through which consumers gain" (67). The converse, however, may apply as a result of disproportionate regulation.
Disproportionate regulation may also generate wider costs for the economy if some of the basic functions of the financial system (financial intermediation, optimal risk shifting, etc.) are compromised or made unnecessarily costly. The potential dangers of
inhibiting financial innovation and of making lenders unnecessarily risk-averse may also be mentioned in this regard. A further issue is whether a rules-based approach to the regulatory regime might be ineffective and even counterproductive: ineffective because it focuses on the wrong issues, and counterproductive because it may induce regulated institutions to focus slavishly on the rules at the expense of addressing more fundamental cultural issues. Kay (2015) has argued: "There are already far too many rules. The origins of the problems . . . are to be found in the structure of the industry and in the organisation, incentives, and culture of financial firms" (189). Several economic, political, and psychological pressures can produce disproportionate or excessive regulation:
• Regulation is sometimes mistakenly perceived by stakeholders (notably consumers) as a free good. This may happen because, while regulation has a cost, it carries no observable price. This tends to raise the demand for regulation, which, combined with risk-averse regulators who may be induced to oversupply it, leads to disproportionate regulation.
• A further factor in regulatory escalation is the symbiotic relationship between bank behavior and regulation, whereby regulation may change bank behavior, which in turn induces new regulation (the endogeneity paradigm).
• A further possible source of disproportionate regulation lies in a failure to recognize the potential trade-off between regulation designed to lower the probability of bank failures and regulatory measures designed to lower the costs of bank failures.
• Regulators may be prone to the availability heuristic: a tendency to give disproportionate weight to the most recent experience. After a costly crisis, there can be strong public and political pressure for more regulation.
In this regard, the swing in political sentiment between the periods before and after the recent crisis is marked: the liberalization of banking markets and operations that characterized the 1980s and 1990s has been replaced by a period of intensive regulation. The pendulum tends to swing too far in each direction.
• The potential for excess harmonization in the European Union (supposedly in the interest of establishing or consolidating a common market in financial services) may also produce an environment leading to disproportionate regulation.
The principle of proportionality has several dimensions, each of which raises different issues with respect to costs and benefits for all stakeholders (including banks and consumers of banking services). Four pillars are identified:
(1) The totality of regulation: whether the totality of regulation (as opposed to each regulation taken alone) is disproportionate for the key regulatory objectives, given the diminishing marginal returns that may emerge if regulation is taken beyond its optimal level in scope and intensity (represented in figure 17.1).
Figure 17.1 Diminishing Returns. [Figure: total and marginal benefits and costs of regulation plotted against the total amount of regulation; net benefits are maximized where the marginal benefit and marginal cost curves intersect.]
(2) Excess complexity: whether regulation is excessively and unnecessarily complex for the objectives sought and whether the same regulatory objectives could be achieved, with the same effectiveness, through less complex regulatory requirements.
(3) Differentiation: whether, in the application of a regulation, sufficient differentiation is made among different types of banks without compromising the regulatory objectives.
(4) Materiality: whether a particular regulation applies to institutions to which it should not apply (the materiality principle) and/or to institutions that are subject to a costly new regulation when they are only marginally prone to the risks that the regulation aims to control.
Although there is necessarily some overlap among these four dimensions, they are briefly considered in the following sections.
17.4.1 The Totality of Regulation and Diminishing Marginal Returns

The principles of proportionality and cost-benefit analysis usually relate to individual regulations. But, while difficult in practice, they also need to be applied to the totality of regulation. A situation can arise in any regulatory regime where each individual regulation is justified on cost-benefit criteria and yet, through a process of regulatory escalation, the totality of regulation becomes disproportionate, with costs exceeding benefits. In other words, what applies to each individual regulation does not necessarily
apply to the regulatory regime as a whole. With respect to EU regulation more generally, the European Commission recognizes that "the entire stock of EU legislation" needs to be kept under review.3 The problem may arise because of diminishing marginal returns as regulation is extended. As represented in figure 17.1, the total cost of regulation rises as regulation is extended, while total benefits increase at first, peak, and may begin to decline after some point. The logic of this representation is that up to some optimal point the marginal benefit of regulation exceeds its marginal cost; at that point the two are equal; and beyond it the marginal cost exceeds the marginal benefit.
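The argument can be stated in standard marginal terms (a stylized formalization, not given in this form in the text). Let $B(x)$ and $C(x)$ denote the total benefits and total costs of a regulatory regime of overall intensity $x$, with benefits concave and costs increasing:

```latex
\max_{x}\; N(x) = B(x) - C(x),
\qquad B'(x) > 0,\; B''(x) < 0,\; C'(x) > 0 .
% Net benefits peak where marginal benefit equals marginal cost:
N'(x^{\ast}) = 0
\;\Longleftrightarrow\;
B'(x^{\ast}) = C'(x^{\ast}),
\qquad N'(x) < 0 \ \text{for } x > x^{\ast}.
```

Beyond $x^{\ast}$, adding a further regulation reduces the net benefit of the regime as a whole, even if that regulation would pass a stand-alone cost-benefit test in isolation.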
17.4.2 Excess Complexity

Over time, finance has become increasingly complex and global in nature, with banks conducting an ever wider range of business involving sophisticated and complex products and business models. This has been followed by ever more complex and extensive regulation of banks, which, it might be argued, is inevitable if regulation is to reflect the complexity of regulated firms. Even so, the question arises of whether some regulatory requirements are excessively or needlessly complex. In the case of the European Union, the aim of creating a single market (and competitive neutrality between different institutions and jurisdictions) is an important consideration that introduces an additional dimension to regulatory design and could add further complexity. But this, in turn, raises the question of whether avoidable costs are being imposed by a degree of harmonization greater than is needed for the creation of a single market in financial services. Just as regulation might in some areas be disproportionate, so also might be the target degree of harmonization. There are several reasons for concern about the potential for excess complexity, some of which have been raised by regulators and in academic analysis:
• The consensus appears to be that the complexity of regulation and its application results in excessive costs of compliance. By increasing opaqueness, excess complexity may also weaken the value of market discipline.
• The arguments pertaining to the proportionality of regulation typically, but not exclusively, relate to the disproportionate costs of compliance for smaller organizations vis-à-vis their larger counterparts. Partly because of the fixed costs of compliance, the size of an organization is a critical element in determining the cost burden. This may have the effect of raising entry barriers in the industry and, in the process, compromising the competition objective.
• Complexity can make compliance more superficial by turning it into a box-ticking exercise.
• Haldane and Madouros (2012) argue that complex rules often have high costs of information collection and processing. Again, this is likely to be a particular burden for smaller institutions.
• Complex rules open the potential for regulatory arbitrage and greater gaming of the system by regulated institutions, and such gaming can be difficult to detect. As suggested by Aikman et al. (2014), "such arbitrage may be particularly difficult to identify if the rules are highly complex. By contrast, simpler approaches may facilitate the identification of gaming and thus make it easier to tackle" (43).
• Complexity can add opacity to banking business and make it harder for outsiders to appraise risks. This is the case with, for instance, the internal models that many large European banks use for market and credit risk (under the internal ratings-based approach), where apparent inconsistencies across banks have emerged.4
A more general feature suggested by Haldane and Madouros (2012) is that "the more complex the environment, the greater the perils of complex control" (16). (A related point is also made by Gigerenzer 2008.) Haldane and Madouros argue further that "because complexity generates uncertainty, not risk, it requires a regulatory response grounded in simplicity, not complexity" (2012, 19). In some circumstances, especially in situations of uncertainty (where knowledge and understanding are limited), simplicity may be superior, which implies that, at least in some areas, there are elements of excess complexity in current regulation. In such cases, the principle of proportionality should lead to adopting the simplest possible regulation. Financial products and financial activity have become more complex, and regulation will necessarily have to reflect this. However, in many cases, regulation has become more complex and detailed than the regulated business and products require in order to achieve the required objectives. It is in this sense that the focus is on excess complexity.
The question of what level of detail is required to meet the purpose of a regulation seems to have escaped the attention of lawmakers and regulators in their efforts to avoid a repetition of the last financial crisis. Excess complexity adds to the cumulative cost of regulation.
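The recurring point that compliance costs bear regressively on small institutions can be made with a toy calculation. All numbers here are invented for illustration; only the structure, a fixed cost plus a roughly size-proportional cost, reflects the argument in the text:

```python
# Toy compliance-cost model (all figures invented). Total cost has a
# fixed component (reporting systems, compliance staff, consultants)
# plus a component proportional to the size of the balance sheet.
FIXED_COST = 5.0        # assumed fixed compliance cost, EUR million/year
VARIABLE_RATE = 0.0002  # assumed variable cost per EUR million of assets

def cost_in_basis_points(assets):
    """Compliance cost as a share of total assets, in basis points."""
    total_cost = FIXED_COST + VARIABLE_RATE * assets
    return total_cost / assets * 10_000

small_bank = cost_in_basis_points(1_000)    # EUR 1bn balance sheet
large_bank = cost_in_basis_points(100_000)  # EUR 100bn balance sheet
# The same rulebook costs the small bank roughly 52bp of assets per year
# but the large bank only about 2.5bp: a materially regressive burden.
```

Under any positive fixed cost, the per-asset burden falls with size, which is the mechanism behind both the entry-barrier and the merger-inducing effects discussed above.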
17.5 The Central Role of Bank Culture

The Group of Thirty (2015) has addressed the issue of bank culture in a report suggesting that the crisis revealed a number of cultural failures that contributed to an erosion of trust and confidence. Among its conclusions are the following:
• A major improvement in the culture of banks is now a matter of sustainability and an imperative for regaining society's trust.
• The banking community needs to repair the damage done by failures in culture, values, and behavior.
• The reputation of banking and the broader financial sector has deteriorated since the financial crisis and is now at a historic low in terms of trust on the part of clients and customers.
• Rules are necessary but not sufficient to restore trust; the underlying ethical behavior in the financial sector has to improve.
It is well established in both theoretical and empirical research that corporate culture and organizational behavior are key factors explaining firm performance. I argue in the next section that this implies that culture could be a supervisory issue. While formal regulation may influence and constrain behavior in some specific areas of banking, it cannot deal with the more fundamental issue of culture and why banks behave as they do. Culture, however, remains a somewhat elusive and ill-defined concept. When related to firms, it concerns the underlying ethos of the company. Culture is important largely because it creates business standards, shapes employee attitudes, and is a central influence on the behavior of a firm and its employees. It also affects the extent to which customers have trust and confidence in the firms they deal with, a particularly important dimension in the case of financial firms. Following the insights of the economics-of-identity literature, individuals have multiple identities (see, for example, Akerlof and Kranton 2010). A key identity relates to an individual's profession or workplace. Identities are associated with specific standards and social and economic norms that prescribe permissible behaviors. Which identities, norms, and cultures are behaviorally relevant depends on the relative weight an individual attributes to a particular identity. In a given situation, behavior shifts toward the norms arising from the culture relevant at the time.
People do appear to behave differently within a firm (e.g., a bank), and this is determined largely by the culture of the firm. The Group of Thirty (2015) concluded that the crisis revealed a multitude of cultural failures that were major drivers of the crisis and led to serious reputational damage and an erosion of trust and confidence in the banking industry. This was also costly for firms in terms of fines, litigation costs, and a more demanding regulatory environment created by central banks and other regulators. The report's overall assessment was that banks face an imperative to improve culture and ethical standards. The report concluded: "Lack of trust and confidence in the banking sector creates material costs to society. Fixing culture in banking is now a public trust—as well as economic—imperative" (Group of Thirty 2015, 18). A result of the crisis and a series of high-profile scandals has been that trust and confidence in banking have been eroded. Lambert (2014) argues that "there is a strong case for a collective effort to raise standards of behaviour and competence in the banking sector" (4). More generally, the Group of Thirty has argued: "The reputation of banking and the broader financial sector has deteriorated since the financial crisis, and is now at a historic low in terms of trust on the part of clients and consumers" (2015, 22). It argues further:
Events that precipitated the global financial crisis and the subsequent issues that have emerged have revealed a multitude of cultural failures. This report recognises that problematic cultural norms, and sub-cultures within large banks, have caused widespread reputational damage and lack of public trust. . . . The banking community as a whole needs to repair the damage done by failures in cultural values and behaviours. (5)
The scandals surrounding banking in recent years are well known: the banking crisis itself, several examples of banks mis-selling financial products to vulnerable consumers, the attempts to rig the London interbank offered rate (LIBOR) and to manipulate the foreign exchange market, and the mistreatment of small and medium-sized enterprises (SMEs). Several high-profile scandals have resulted in heavy fines imposed on banks and other financial firms, and in many important respects, the behavior that led to these fines reflects how the culture of banking changed over the years. The issue of bank culture has been receiving increasing attention in some countries from the public, the media, some supervisory agencies, and the banks themselves, with some banks establishing culture reform programs. The poor reputation of banks in some countries (perhaps most especially in the United Kingdom) is illustrated by Kay (2015): "Parts of the finance sector today . . . demonstrate the lowest ethical standards of any legal industry. . . . A culture has developed in which any action, no matter how close to the border of legality, is acceptable if it is profitable for the individual engaged in it" (286). The underlying culture of banking matters for several reasons, not least because it has a significant impact on consumer perceptions of individual banks and of the industry as a whole, and on the degree of trust and confidence in the sector. The underlying culture of a bank creates business standards and shapes employee attitudes and behavior; it overarches all aspects of the business. An inappropriate culture can produce reputational damage, financial instability (when it creates incentives for excessive risk-taking), and consumer detriment. Furthermore, the culture within a regulated firm is likely to shape the firm's approach to regulation and any attempts at aggressive regulatory arbitrage.
Several high-profile investigations have been made in this area, one being that of the Group of Thirty. All have come to similar conclusions: that culture is a central issue, that there have been serious deficiencies in bank culture, that there was a marked change in the culture of banks prior to the crisis, that culture has a major influence on behavior, and that the culture within some banks contributed to the banking crisis. There is the related issue of corporate governance within banks. The Group of Thirty (2015) argued inter alia the following: A bank with weak or undesirable culture (or sub-cultures) will damage both its reputation and industry trust. . . . A major improvement in the culture of banks is now a matter of necessity and sustainability, and is an imperative for regaining society’s trust. . . .
The banking community needs to repair the damage done by failures in culture, values and behaviour. . . . The reputation of banking and the broader financial sector has deteriorated since the financial crisis, and is now at an historical low in terms of trust on the part of clients and customers.
The emergence of particular cultures within banks partly reflects the prevailing ideology of the time, which has a potentially powerful influence on the evolution of bank culture and, through it, on behavior. Substantial empirical evidence establishes that the workplace influences behavior, even to the extent that it may sometimes induce employees to behave in ways contrary to their behavior in private life. In other words, understanding behavior and decision-making requires setting them in their institutional contexts. The dominant ideology in the period prior to the crisis, to which everyone (bankers, some governments, regulatory agencies, rating agencies, etc.) seemed to subscribe, was based on rational expectations and efficient markets, the belief that markets are self-correcting, the dominance of shareholder value as the central objective of banks, and the view that regulation should be limited and light-touch. To some extent, free markets became the ethical standard, although that does not imply that poor standards should be followed. Indeed, the free-banking era worked well largely because there were, at that time, high ethical standards in banking. There are several reasons why the culture of banking changed in the years prior to the banking crisis:
• The increasing emphasis given to maximizing shareholder value (SHV) as the ultimate objective of banks, particularly with a short-term focus. Some analysts argue that the SHV model is more likely to produce risk-taking, especially if returns are measured on a short-term basis. Shareholder pressure for short-term profits can be powerful. Focusing on maximizing SHV itself creates a particular culture and incentive structures that may produce hazardous behavior.
• Transactions versus relationship-based business practices.
Relationship banking gave way to transactional banking, which in turn promoted a dominant sales and trading culture and potentially hazardous incentive structures.
• The trend toward large banks becoming financial conglomerates, within which the culture of investment banking became increasingly dominant. There is some evidence that smaller banks have a less risky culture than large financial conglomerates, partly because a uniform culture emerges and the business tends to be less complex, with fewer conflicting cultures.
• The emergence of a dominant, and sometimes aggressive, sales and trading culture.
• Internal incentive structures and remuneration systems that are often perverse in terms of risk-taking, prudential standards, systemic stability, and the interests of the consumer. The culture of an organization is often powerfully influenced by the remuneration systems in place (New City Agenda 2014). Incentive structures
are often perverse within financial firms at two levels: in the SHV model itself (maximizing SHV, especially over short periods) and within the firm (bonuses based on sales, etc.). In examining the context of the £28 million fine on Lloyds Bank, the Financial Conduct Authority (2013) in the United Kingdom concluded: "Financial incentive schemes are an important indicator of what management values and a key influence on the culture of an organisation, so they must be designed with the customer at the heart. The review of incentive schemes that we published last year makes it quite clear that this is something to which we expect all firms to adhere." Incentive arrangements that may prove hazardous include remuneration systems based on the volume of business or trading; the extent to which risk characteristics are or are not taken into account; the strength or weakness of internal control mechanisms; the nature of any profit sharing; front-loaded payoffs; bonuses paid on profits earned over a short period; how bonuses are paid; and so on. The size of banks, the impact of technology, and greater remoteness from customers are also likely to affect culture. Many of these factors were significant contributors to the banking crisis, especially those that produced a climate of excessive risk-taking (which can be profitable in the short run) and weak internal governance arrangements with respect to the monitoring and control of risks. Culture is partly a governance issue, particularly with respect to excessive risk-taking and whether relevant officers fully appreciate the risk appetite set by the board. Equally important are issues such as internal monitoring arrangements, reporting structures, accountability, and the extent of individual responsibility. In some cases, poor governance arrangements contribute to a weak or hazardous culture.
The Kelly report into the many failings of the United Kingdom's Co-operative Bank (Kelly Review 2014) revealed a full catalog of governance weaknesses that contributed to the failure: confused or diffused accountabilities in several areas, including risk management; a failure to take seriously enough the warnings given by supervisors; less than complete transparency; generally low risk awareness on the front line, unsupervised internally; accountability that was not always clear; and risk appetite statements that were not always translated into operational action. Some analysts argue that detailed, complex, and intensive formal regulation is itself a potential obstacle to the evolution of good culture. First, the more detailed formal regulation is, the more likely banks are to be governed by the letter of the law rather than by its intended spirit and by a culture that conforms to it. Second, and related to this, the view can emerge that if a specific activity or behavior is not explicitly covered by external regulation, then it is acceptable. Third, detailed and extensive regulation may inculcate a box-ticking mentality. Angeloni (2014) has argued that "regulation based on detailed prescriptive rules has undermined rather than enhanced ethical standards by substituting compliance for values." He further asserts, "Rules are necessary but not sufficient to restore trust; the underlying ethical behaviour in the financial sector has to improve as well."
The New Regulatory Regime for Banks 523
17.6 Culture as a Supervisory Issue

In a detailed study of the weaknesses in the culture of banking, a report from ResPublica (Llewellyn, Steare, and Trevellick 2014) made several recommendations, including inter alia improvements to internal governance structures of banks, greater diversity in the banking sector, establishment of codes of conduct, and tougher shareholder fiduciary duties to promote activism. Change in underlying culture is difficult to achieve in any organization. But an outline of a strategy can be given along the following lines in what might be termed a cultural mission:

• Financial firms are to have explicit ethical standards in their missions.
• Training and competence regimes within financial firms are to explicitly incorporate such standards.
• While the cultural ethos is established at the top of an organization, it needs to be “owned” throughout the organization. This requires effective internal communication systems.
• Ethical standards are to be monitored, with clear mechanisms established to enable systematic internal audits to take place.
• Internal reward and incentive structures are to change away from a bias toward a sales culture.
• Clear understanding is needed throughout the bank of the institution’s risk appetite in all areas of the bank’s business.
• Systematic and universal mechanisms are to investigate the risk characteristics of products, contracts, and services, and these are to be clearly understood by front-line staff at all relevant levels.
• Credible complaints-handling mechanisms and procedures are to have the confidence of, and credibility with, customers.
• Clear arrangements are to be in place for whistle-blowing within the bank.

In essence, the desirable culture within a bank is one of strong ethical standards in dealing with all aspects of the bank’s business. 
There is a limit to what regulation can achieve if the underlying culture within banks is potentially hazardous with respect to risk management, systemic stability, and consumer welfare. This in itself suggests that a focus on culture could be an integral part of the supervisory process. It might even be argued that the need for detailed and prescriptive regulation might be lessened if the underlying culture in banking were more aligned with regulatory objectives. Given the importance of the underlying culture for achieving the objectives of regulation and in order to sustain credibility, the elements of a cultural mission such as the one outlined above could be monitored by the relevant supervisory agencies. In this way, not only could the elements of the cultural mission be a strategic issue for financial firms, but they could also be part of regular ongoing
supervisory procedures, as is the case with De Nederlandsche Bank’s supervision of Dutch banks. While regulation per se cannot determine the culture of the banking industry and its underlying values, the supervisory process can have an important influence and a monitoring role. Supervision can have a complementary role to that of banks’ internal cultural reform programs. A senior official of the ECB has argued that “strengthening ethics in banking is something that banking supervisors should care about” (Angeloni 2014). There are several routes through which supervision might contribute to improving bank culture, including monitoring and advising on best practice, monitoring incentive structures within individual banks, making judgments on the effectiveness of management processes, and monitoring internal systems and the effectiveness of internal communication systems, especially with regard to the values and culture that the board of directors and senior management wish to instill within the bank. The supervisory agency might also monitor whether a bank has a clearly defined risk appetite and effective risk-management governance structures. Mechanisms and procedures for individual responsibility and accountability, along with support arrangements for whistle-blowing, could also be monitored on a systematic basis. Practice varies considerably among regulatory agencies with respect to the emphasis given to the issue of culture. De Nederlandsche Bank (DNB) in particular has been a pioneer in this area. Since 2011, supervision of behavior and culture has been an integral part of the DNB’s supervision of banking in the Netherlands. The Dutch central bank has indicated that “supervisory directors have been made more aware of the importance of behavioural and cultural aspects in decision-making. 
Supervision of behaviour and culture has proved to be a valuable supplement to the more traditional forms of supervision, as it addresses the causes of behaviour that impacts on the performance and risk profile of financial institutions and consequently on financial stability” (De Nederlandsche Bank 2015). The DNB has outlined its approach in this area of its supervisory role (Lo 2015). The supervision of behavior and culture has become an integral part of the DNB’s methodology. The DNB was the first bank supervisory agency to target culture as a risk factor. As a senior official of the bank put it, the aim is to “identify risks that may arise from culture and to take appropriate measures” (de Haan 2016, 43–45). The general approach to the DNB’s focus on culture and ethical standards and the reasons it considers these to be central supervisory issues are outlined in its memorandum “The Seven Elements of Ethical Culture” (De Nederlandsche Bank 2009).
17.7 The Culture of Central Banks

If banking culture is to become a supervisory issue, the culture and expertise of central banks as supervisory and regulatory agencies would also need to change. Ahead of the
crisis, some central banks began to raise issues of concern in, for instance, their financial stability reports. Although many central banks raised concerns about trends in the years prior to the crisis (including the potential dangers in global financial imbalances, the search for yield by banks, excessive risk-taking, some aspects of new bank business models such as “originate to distribute,” the lowering of lending standards and loan pricing, etc.), little action was taken before the crisis erupted. Regulators tended toward a light-touch approach (sometimes, as in the United Kingdom, under political pressure) driven partly by the ideology of the time. In effect, regulatory capture emerged, except that it was largely a capture of ideology (efficient markets, etc.) rather than, as is usually portrayed, overt capture by regulated institutions. All of this influenced the culture of central banks, including a culture against intervention in the business practices of banks. As part of this, the prevailing culture was of excess faith in precise rules and in banks’ business models. Little attention was given to banks’ internal incentive structures or governance arrangements. Culture was also influenced to a large extent by the collective euphoria of the time (Llewellyn 2010). As these issues are addressed and more emphasis is given to cultural issues in regulated firms, the culture of central banks as regulatory and supervisory agencies will need to reflect the new approach.
17.8 Conclusion

Focusing on the two objectives of the regulatory regime (lowering the probability of bank failures and minimizing the costs of those failures that do occur), the issue arises of whether a greater emphasis on bank culture and internal incentive structures will emerge in the next phase of change in regulatory strategy. There are limits to what formal regulation can achieve if the underlying culture and incentive structures within banks are hazardous. A potential problem with detailed and complex externally imposed rules is that they tend to focus on processes, which may inculcate a box-ticking mentality within regulated firms. Furthermore, the emphasis on detailed rules may itself have marginalized the importance of reform to bank culture. Scandals have occurred despite detailed regulation. In that case, it may be that regulation has focused on the wrong issues: processes rather than culture and incentive structures. A central theme has been that regulation is a necessary though not sufficient condition to address the two central objectives of the regulatory regime. In particular, other requirements are measures to address objective 2 (lowering the social costs of bank failures) and to make bank culture a central supervisory issue. Regulation in the form of detailed and prescriptive rules (e.g., with respect to capital and liquidity, etc.) is clearly a necessary part of any regulatory regime. However, a central theme has been that such regulation is not sufficient. There are limits to what can be achieved through detailed, prescriptive, and complex rules, given what we have termed the endogeneity problem, if the underlying culture of banking is hazardous and when a
rules-escalation approach raises important issues of proportionality. The challenge is to manage the interface between regulation, supervision, and culture within both banks and supervisory agencies.
Notes

1. This three-pronged formulation derives from case C-331/88, R v. Minister of Agriculture, Fisheries and Food, ex parte Fedesa (1990), ECR I-4023. Recent authorities include case C-343/09, Afton Chemical (2010), ECR I-7027, para. 45; joined cases C-581/10 and C-629/10, Nelson and others (2012), ECR I-0000, para. 71; case C-283/11 Sky Österreich GmbH, para. 50; joined cases C-293/12 and C-594/12, Digital Rights Ireland Ltd. and others, para. 46.
2. https://www.ft.com/content/d23922de-f08c-11e2-929c00144/feabdcO
3. See http://ec.europa.eu/smart-regulation/docs/com2014_368_en.pdf.
4. See European Commission 2014, 269, para. 3.
References

Aikman, D., M. Galesic, S. Kapadia, K. Katsikopoulos, K. Kothiyal, E. Murphy, and T. Neumann. 2014. “Taking Uncertainty Seriously: Simplicity versus Complexity in Financial Regulation.” Bank of England financial stability paper 28.
Akerlof, G., and R. Kranton. 2010. Identity Economics: How Our Identities Shape Our Work, Wages and Wellbeing. Princeton: Princeton University Press.
Angeloni, I. 2014. “Ethics in Finance: A Banking Supervisory Perspective.” Remarks. Conference on New Financial Regulatory System: Challenges and Consequences for the Financial Sector. Venice, September.
Ayadi, R., R. Schmidt, D. T. Llewellyn, E. Arbak, and W. De Groen. 2012. Investigating Diversity in the Banking Sector in Europe: Key Developments, Performance and Role of Cooperative Banks. Brussels: Centre for European Studies.
Bank of England. 2015. The Bank of England’s Approach to Stress Testing the UK Banking System. London: Bank of England.
Banking Stakeholder Group. 2015. “Proportionality in Bank Regulation.” www.eba.europa.eu/bsg.
Blundell-Wignall, A., P. Atkinson, and S. G. Lea. 2008. “The Current Financial Crisis: Causes and Policy Issues.” Financial Market Trends, no. 2: 1–21. Paris: OECD.
Chennells, L., and V. Wingfield. 2015. “Bank Failure and Bail-In: An Introduction.” Bank of England Quarterly Bulletin 55, no. 3: 228–241.
De Haan, J. 2016. “Watchful Eye Helps Boards Be on Their Best Behaviour.” Financial World (February/March): 43–45.
De Nederlandsche Bank. 2009. “The Seven Elements of Ethical Culture.” Memorandum, November.
De Nederlandsche Bank. 2015. DNB Bulletin, December.
European Banking Authority. 2016. “EU-Wide Stress Testing 2016” (previous years). www.eba.europa.eu.
European Commission. 2014. “Economic Review of the Financial Regulation Agenda.” Commission staff working document 158.
Financial Conduct Authority. 2013a. “FCA Fines Lloyds Banking Group Firms a Total of £28 Million for Serious Sales Incentive Failings.” www.fca.org.uk.
Financial Conduct Authority. 2013b. “FCA Response to Parliamentary Commission on Banking Standards, London.” www.fca.org.uk.
Financial Times. 2017. “More Evidence That Bail-Ins Are Better Than Bail-Outs.” October 5.
Gigerenzer, G. 2008. Gut Feelings: Short Cuts to Better Decision Making. London: Penguin.
Goodhart, C. A. E., P. Hartmann, D. T. Llewellyn, L. Rojas-Suarez, and P. Weisbrod. 1999. Financial Regulation: Why, How and Where Now? London: Routledge.
Group of Thirty. 2015. Banking Conduct and Culture: A Call for Sustained and Comprehensive Reform. Washington, D.C.: Group of Thirty.
Haldane, A. 2010. “The $100 Billion Question.” Speech to Institute of Regulation and Risk. Hong Kong, March. www.bankofengland.co.uk/speeches.
Haldane, A., and V. Madouros. 2012. “The Dog and the Frisbee.” Jackson Hole Symposium, Federal Reserve Bank of Kansas City.
Kane, E. 1987. “Competitive Financial Regulation: An International Perspective.” In Threats to Financial Stability, edited by R. Portes and A. Swoboda, 289–305. Cambridge: Cambridge University Press.
Kay, J. 2015. Other People’s Money: The Real Business of Finance. New York: Public Affairs.
Kelly Review. 2014. Failings in Management and Governance: An Independent Review of the Co-operative Bank. Co-operative Bank. https://www.co-operative.coop/investors/kelly-review.
Lambert, R. 2014. Banking Standards Review. www.bankingstandardsboard.org.uk.
Llewellyn, D. T. 2010. The Global Banking Crisis and the Post-Crisis Regulatory Scenario. Amsterdam: Amsterdam Centre for Corporate Finance.
Llewellyn, D. T. 2013. “A Strategic Approach to Post Crisis Regulation: The Need for Pillar 4.” In Stability of the Financial System, edited by A. Dombret and O. Lucius, 358–387. Cheltenham: Edward Elgar.
Llewellyn, D. T., R. Steare, and J. Trevellick. 
2014. Virtuous Banking: Placing Ethos and Purpose at the Heart of Finance. London: ResPublica.
Lo, A. 2015. “The Gordon Gekko Effect: The Role of Culture in the Financial Industry.” NBER working paper 21267.
New City Agenda. 2014. A Report on the Culture of British Retail Banking. London: New City Agenda.
Samuels, S. 2013. “EU Banks Still Pose Systemic Threat.” Financial Times, July 21.
Chapter 18

Macroprudential Regulation of Banks and Financial Institutions

Gerald P. Dwyer
Acknowledgments: James R. Lothian and a referee provided helpful comments. I thank the Spanish Ministry of Economy and Competitiveness for support through project ECO2013-42849-P at the University of Carlos III, Madrid.

18.1 Introduction

Since the financial crisis of 2007–2008, regulation of institutions to avoid the worst effects of crises has become a major topic of research by economists and a focus of financial regulators’ efforts. Much of the research by economists has been explicitly designed to affect regulatory policy. In this chapter, I suggest that much of this regulation is built on shaky ground. Financial crises have been studied for many years. The thoughtful summary by Reinhart and Rogoff (2009) breaks financial crises into banking crises, sovereign debt crises, currency crises, and inflation crises. The latter three forms generally reflect problems associated with government policies. The first reflects problems due to arrangements in the private sector of an economy, in banking historically. Based on this list, one might expect financial regulation related to crises to be banking regulation. Instead, much new regulation that goes by the name of macroprudential regulation has been interpreted as including regulation of insurance companies and possibly mutual funds and hedge funds as well as clearing and settlement organizations. This expansion has occurred because the class of phenomena included in crises has expanded dramatically. The economics of banking crises provides the analytical foundation for much of the theoretical understanding of financial crises. Failures to deliver on promised payments
begin the difficulties, and then at least partly unpredictable effects worsen the situation. In the first part of this chapter, I discuss how problems arise and how they are connected to the more general phenomena in recent crises. I also discuss how the adverse effects of banking crises were ameliorated historically in the United States. A reason for doing this is to highlight the importance of private arrangements for reducing the adverse effects of financial difficulties. In many cases, private agents have an incentive to reduce these adverse effects. To the extent that such arrangements can be encouraged by the government, this can be an effective way to reduce the adverse effects of financial crises. This has been done to some degree in recent years, for example, in the case of clearinghouse arrangements. This analysis then can be used as a backdrop to think about what regulation might be justified. I discuss the risks addressed by macroprudential regulation, including an approach based on externalities and one based simply on dealing with moral hazard problems created by deposit insurance and bailouts. There are two approaches to dealing with the risk of a financial crisis (Wildavsky 1998). One approach is to try to prevent crises, thereby making them less likely. The other is to make the system resilient to crises, thereby limiting the damage. Some policies can have both effects. Current regulation has not limited itself to either approach, which is appropriate in many cases because the policies both make crises less likely and make them less costly when they occur. Recent papers provide a nice summary of the institutional details of macroprudential regulation (Lastra 2015) and the empirical evidence on regulation’s possible effectiveness (Galati and Moessner 2013 and 2017; Claessens 2015). I highly recommend these papers and do not repeat the material in them. 
Instead, I examine the economic basis for macroprudential regulation.
18.2 Banking Crises

Until recent years, research on private financial crises has focused on banking crises. Maybe especially since Friedman and Schwartz’s classic work A Monetary History of the United States: 1867–1960 (1963), there has been a great deal of theoretical and empirical research on banking crises and policies to deal with them. A banking crisis is a situation in which depositors attempt to withdraw their funds in the promised form from banks in a banking system and the banks cannot deliver on that promise. The banks fail to deliver on their promise because they hold fractional reserves of the promised asset. Banks are illiquid, in the sense that they cannot deliver on this promise if all depositors try to avail themselves of this promise at the same time. Furthermore, the banking system is illiquid in a way that a single bank is not. In a run on a single bank, that bank can borrow from other banks if the other banks consider it to be solvent. In a run on the banking system, there are only fractional reserves that can be
redeemed, and it is impossible for the banking system by itself to deliver on the promised redemption. In Friedman and Schwartz, banking crises are important because they precipitate recessions or make recessions worse. Sometimes this is explained, especially by macroeconomists, by saying that the banking crisis affects the “real economy.”
18.2.1 Why Are There Runs?

Diamond and Dybvig (1983) present a model that, while primitive in many respects, highlights some of the conditions under which banking crises occur. Knowing why banking crises occur can inform understanding of how to deal with them. Their model reflects maturity transformation of short-term liabilities into long-term assets and uncertain timing of demand for consumption by consumers. In the model, there are consumers who are ex ante identical and risk-averse. They are uncertain about whether they want to consume early or late, which can be called periods 1 and 2. Absent a financial intermediary, early consumers liquidate their investment in period 1 and consume; late consumers maintain the investment in the technology and receive additional consumption in period 2. Diamond and Dybvig show how a financial intermediary can improve consumers’ ex ante welfare by offering them a demand deposit contract.1 Essentially, the bank sells insurance to the consumers in the event that they are early consumers. The simplest way to see this is to suppose that households’ preferences for consumption are identical in periods 1 and 2, are not discounted, and have a representation with diminishing marginal utility. If technology permitted it, consumers would prefer equal consumption whether they are early or late consumers. By assumption, the technology provides higher consumption to late consumers than to early consumers. The bank promises early consumers more than they could consume on their own in period 1. It promises late consumers less than they could consume on their own. This reduced consumption makes the additional consumption available to those who turn out to be early consumers. Because of diminishing marginal utility, this reallocation of consumption increases expected utility. The bank promises all consumers that if they want to redeem early, they can have the payment promised to those who turn out to want to consume early. 
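The allocation behind this insurance can be written compactly in standard notation (the symbols below are illustrative and not the chapter’s own). Suppose a unit invested returns 1 if liquidated in period 1 and R > 1 in period 2, a fraction t of consumers turns out to be early, and utility u exhibits relative risk aversion greater than one (the condition under which the insurance is strict):

```latex
% First-best deposit contract in a Diamond--Dybvig setting (illustrative)
\max_{c_1,\,c_2}\;\; t\,u(c_1) + (1-t)\,u(c_2)
\qquad \text{s.t.} \qquad t\,c_1 + (1-t)\,\frac{c_2}{R} = 1 .
% The first-order condition equates marginal utility across the two types:
u'(c_1^{*}) = R\,u'(c_2^{*}) ,
% and relative risk aversion above one implies
1 < c_1^{*} < c_2^{*} < R .
```

Early consumers thus receive more than the autarky payoff of 1, late consumers less than the autarky payoff of R, which is the reallocation described in the text; the bank’s promise of c₁* > 1 to anyone withdrawing in period 1 is also exactly the promise it cannot keep if everyone withdraws at once.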
The cost of this insurance is that the bank has made a promise that it cannot deliver in all states of the world. If all customers show up at the bank in period 1, the bank can at best give customers one unit each, which is less than the amount promised to anyone.2 This is not merely a theoretical curiosity. It is an immediate implication of compound probabilities that with a nonzero, independent probability of a run every year—possibly quite small—the probability of a run is one as time goes to infinity. Sooner or later, there will be a run on the bank. Runs in Diamond and Dybvig are one of two equilibria: one in which depositors receive the promised payoffs and one in which they do not.3 In their model, any run is a
“pure panic run” that is not due to any fundamental factor affecting the value of the assets. If the government provides deposit insurance which guarantees the payments by the bank and this guarantee is believed, there is no run, and the equilibrium is the first-best equilibrium with bank deposits and withdrawals by early and late consumers when they want to consume. There is a large literature expanding on the insights from this model. Probably the most important theoretical development is the introduction of risky assets which can have good and bad outcomes.4 Such models give rise to “information-based runs,” as opposed to Diamond and Dybvig’s pure panic runs in which there is no fundamental reason to run. The development of these models by adding risky assets is quite important, because all of the literature of which I am aware finds that runs are due to adverse financial developments at banks.5 There are other theoretical explanations of why banks promise to repay whatever is deposited even though they cannot in all states of the world. These include explanations based on asymmetric information in which some or all depositors are not as informed about the values of banks’ assets as is the bank (Calomiris and Kahn 1991). Concretely, depositors may find it hard to believe a bank manager who says that the deposits are worth less than when deposited because the value of the banks’ loans has fallen in the meantime. Some have argued that an important reason banks make the promise to redeem whatever is deposited at its par value is that they are required by law to do so (Wallace 1983). While this is true sometimes, it is not always the case. A recent strand of the literature has argued that bank liabilities are redeemable on demand because they function as a medium of exchange (McAndrews and Roberds 1999). 
Dwyer and Samartín (2009) find that all of these explanations line up with some historical situations but none of them is the sole explanation of why banks promise to pay on demand. The promise by money-market funds to pay par on demand is an interesting instance in which none of the theories explains why they do so. This is especially interesting because it is precisely a run on money-market funds that precipitated the climax of the financial crisis in the United States in September 2008.6 Furthermore, reforms of rules governing money-market funds are one of the major regulatory developments in the United States since the financial crisis (US Securities and Exchange Commission 2014a and 2014b). As in the financial crisis of 2007–2008, it is not runs on individual banks that create problems for the economy. Runs on the banking system and not lone banks are the phenomena that have more widespread effects, as is clear in, for example, Sprague (1910) and Friedman and Schwartz (1963). In the case of money-market funds, the run was a run on prime money-market funds that held commercial paper in addition to short-term government securities. Other developments that can be interpreted as being similar to runs on a banking system contributed to the difficulties in the financial crisis of 2007–2008. These developments were not exactly the same as bank runs, partly because the underlying
financial assets were overnight loans and not deposits and partly because the underlying financial assets were not redeemable on demand. Still, some institutions—in particular, investment banks in the United States—had substantial maturity transformations in which they borrowed overnight to fund holdings of longer-term assets. Commercial banks created supposedly bankruptcy-remote subsidiaries that held collateralized debt obligations backed by subprime mortgages and were financed by short-term commercial paper. In the event, the overnight borrowing on which investment banks depended evaporated (Gorton and Metrick 2012). The assets held by commercial banks’ subsidiaries were taken on balance sheet because the contracts included liquidity clauses and the commercial paper issued by the subsidiaries became illiquid (Acharya, Schnabl, and Suarez 2013). While overnight debt is not the same as a demand deposit, it is not so very different in terms of a lender’s or a depositor’s ability to stop providing funds. The financial crisis of 2007–2008 highlighted this similarity, and institutions with similar characteristics in addition to money-market mutual funds are part of what is meant by the phrase shadow banking. It is not a lone institution’s problems that are likely to have economy-wide effects. It is simultaneous problems at several or many firms. Runs are not inevitable. It is possible to avoid ever having a run by making the probability of a run go to zero before there is a run. This can be done, for example, by closing the institution before it goes broke and there is a run. While possible, such a result mandated for banks in the Federal Deposit Insurance Corporation Improvement Act of 1991 seemed to have no perceptible effect in the financial crisis of 2007–2008 (McKinley 2012). In any case, in the event of many firms having problems, closing many banks at the same time may not seem like a policy with much to recommend it.
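The compound-probability observation earlier in this section, that any nonzero, independent annual run probability makes an eventual run certain, is easy to verify numerically; the probabilities used below are purely illustrative:

```python
def prob_run_by(p: float, T: int) -> float:
    """Probability of at least one run within T years, given an
    independent probability p of a run in each year."""
    return 1.0 - (1.0 - p) ** T

# Even a 1 percent annual run probability implies a run is more likely
# than not within about 70 years and nearly certain within 500.
print(prob_run_by(0.01, 70) > 0.5)    # True
print(prob_run_by(0.01, 500) > 0.99)  # True
```

As p shrinks the horizon lengthens, but for any fixed p > 0 the probability of no run, (1 − p)^T, still goes to zero as T grows.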
18.2.2 Avoiding Runs or at Least the Worst Consequences

It also is possible to avoid a run, or at least the worst consequences. One way, which requires no government intervention, is to stop payments to depositors; another is government intervention. One option to short-circuit a run is to stop payments by the banks to depositors. Historically, there have been two ways of doing this. One is to insert a clause in the contract that makes this possible. This was a feature of Scottish banking for a time until it was outlawed (Selgin and White 1997). A joint suspension of payments by banks is an alternative way to stop runs. Suspensions of payments were a repeated feature of runs on the banking system in the United States before the creation of the Federal Reserve (Sprague 1910). Such joint suspensions of payments by the banks can be used to create time for sound and unsound banks to be sorted out. This can ameliorate the negative effects of a run; banks need not simply make good on their promises until they inevitably are broke. Gorton (1985) provides a theoretical explanation, and Dwyer and Hasan (2007) provide evidence from
a natural experiment in the United States in 1860 and 1861 that joint suspensions do lessen the adverse effects of runs.7 Suspensions of payments are not perfect. In particular, even if some limited withdrawals by depositors continued, the lack of ready access to funds in the banks can affect payments to both employees and other firms, spreading difficulties throughout the economy. Private arrangements can overcome some of the difficulties in a suspension of payments, as shown by Timberlake (1984) for the national banking period in the United States. The perceived inadequacy of those arrangements can be gauged partially by the creation of the Federal Reserve despite these arrangements, which developed over half a century. One explanation for the creation of the Federal Reserve is a perceived improvement in arrangements by having a national central bank in a run on a fragmented banking system. The Great Contraction from 1929 to 1933 was followed by changes in government regulation to lessen the effects of a banking crisis and to lessen the probability of another such crisis (Friedman and Schwartz 1963). One of the most enduring changes is the introduction of deposit insurance. Deposit insurance provides funds to depositors in the event that a bank fails, at least up to some specified amount. As in Diamond and Dybvig (1983), there is no reason for a pure panic run if there is deposit insurance and the funds are provided seamlessly to depositors. Even for information-based runs, deposit insurance for some but not all depositors implies that some depositors will not run if funds are provided seamlessly. In addition, deposit insurance by a government agency makes it less likely that the central bank will stand by and not provide funds to depositors as the Federal Reserve did in the Great Contraction. Deposit insurance, though, creates moral hazard, as does other insurance. 
Insured depositors need no longer be concerned about the riskiness of a bank’s activities and will not be. As a result, the interest rates on deposits in an insured bank will be less sensitive to changes in a bank’s risk. Interest rates on deposits that are less affected by risk lower the relative price of risk to a bank. Because they are profit-maximizing institutions, banks will take additional risks. This additional risk-taking due to deposit insurance can be lessened by government regulation of banks and examination of their activities, both of which have been a prominent aspect of the activities of the Federal Deposit Insurance Corporation (FDIC) since its creation. While clearly justified by deposit insurance, such examinations were not introduced with deposit insurance. They were carried out in the United States a hundred years before the introduction of deposit insurance. Banking in the United States always has been regulated in various ways, and a purpose of examinations has been to ensure that banks comply with the restrictions laid out in the laws.8 Such regulation and examinations would be called microprudential regulation now. Moral hazard also is created by bailouts of failing banks, which are of more recent vintage than deposit insurance. Bailouts provide funds to the bank that prevent failure and provide a continued flow of funds to depositors and other creditors from an otherwise
failed bank. This lessens the effect of a bank’s risk on interest rates it pays on its liabilities, similar to the effect of deposit insurance. The direct effect of these payouts when the bank is in difficulties is unambiguous: the cost of engaging in risky activities decreases. The size of the effect is an empirical question. Evidence indicates that deposit insurance creates perceptible moral hazard and increases the probability of future banking crises (Demirgüç-Kunt and Kane 2002). I do not know of systematic evidence for bailouts of banks.
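The point above about deposit rates becoming insensitive to bank risk can be sketched with a simple risk-neutral pricing condition; the function and numbers are illustrative assumptions, not drawn from the chapter:

```python
def required_deposit_rate(pi: float, rf: float,
                          recovery: float = 0.0,
                          insured: bool = False) -> float:
    """Deposit rate a risk-neutral depositor requires.

    pi: probability the bank fails; recovery: fraction of the promised
    gross payment received if it fails; rf: risk-free rate. Without
    insurance the rate r solves
        (1 - pi)*(1 + r) + pi*recovery*(1 + r) = 1 + rf,
    so r rises with failure probability. With full insurance the
    depositor is repaid regardless of bank risk, so r is just rf.
    """
    if insured:
        return rf
    return (1.0 + rf) / (1.0 - pi + pi * recovery) - 1.0

# A riskier bank must pay more only when its deposits are uninsured:
safe_bank = required_deposit_rate(0.01, 0.02)
risky_bank = required_deposit_rate(0.05, 0.02)
print(risky_bank > safe_bank)                               # True
print(required_deposit_rate(0.05, 0.02, insured=True)
      == required_deposit_rate(0.01, 0.02, insured=True))   # True
```

Bailouts work like the `insured=True` branch for all of a bank’s creditors: the funding cost no longer varies with pi, which lowers the relative price of risk to the bank.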
18.3 Crises, the Real Economy, and Externalities

Even in Friedman and Schwartz (1963), a major reason for interest in the effects of banking crises is the likely effect of such crises on real GDP and their role in worsening recessions. This is a different focus from that of microprudential regulation, because it introduces concerns about effects on the "real economy" or, more precisely, real GDP, investment, and employment. Banking crises often are associated with subsequent decreases in real GDP. The size of the effect is controversial. Some, in particular Reinhart and Rogoff (2014), take the position that the effects are large. A growing literature provides evidence that the effects can be large but typically are nowhere near as large as the Great Depression for the United States (Bordo and Haubrich 2010 and 2017; Howard, Martin, and Wilson 2011; Dwyer et al. 2013; Dwyer and Devereux 2016).9
18.3.1 Systemic Risk and Macroprudential Regulation

Effects of banking crises on real GDP are reflected in macroprudential regulation. Borio (2003) distinguished macroprudential regulation from microprudential regulation partly in terms of objectives: "The objective of a macroprudential approach is to limit the risk of episodes of financial distress with significant losses in terms of the real output for the economy as a whole" (183). In Borio's discussion, the approaches differ in terms of the "model" used, with the macroprudential approach allowing for risk possibly being partly determined by the actions taken by other agents in the financial system. Borio's emphasis on financial distress can be replaced, as it often is, by systemic risk (Galati and Moessner 2013). The phrase is used to mean various things, and there is a danger of it being used vaguely. Systemic risk is a relatively new term that has its origin in policy discussions, not the professional economics and finance literature.10 Given this origin, perhaps it is not so surprising that the term often is used with no apparent precise definition in mind. If it arose from a theoretical analysis, as did a term it sometimes is confused with—systematic risk—there would be a very precise definition.
The "G10 Report on Consolidation in the Financial Sector" (Group of Ten 2001) suggested a working definition: "Systemic financial risk is the risk that an event will trigger a loss of economic value or confidence in, and attendant increases in uncertainly [sic] about, a substantial portion of the financial system that is serious enough to quite probably have significant adverse effects on the real economy" (Group of Ten 2001, 126). While this is a reasonable definition in terms of the concerns in mind, the precise definitions and measurement of terms such as confidence, uncertainty, and quite probably are likely to be elusive for some time, if not forever.11 Kaufman and Scott (2003) have another definition of "systemic risk": "Systemic risk refers to the risk or probability of breakdowns in an entire system, as opposed to breakdowns in individual parts or components, and is evidenced by comovements (correlation) among most or all the parts" (Kaufman and Scott 2003, 371). This definition is better than the G10 definition because it does not tie the event being analyzed (the breakdown) to a particular cause (the loss of confidence). Even so, a precise definition of breakdown may be elusive even if the term is evocative. The term systemic risk is used in the rest of this chapter with Kaufman and Scott's definition in mind. Executed correctly, monetary policy can lessen the systemic effects of banking crises. As Friedman and Schwartz (1963) emphasize, the central bank can provide currency to banks whose depositors want currency instead of deposits and thereby avoid an effect on the total quantity of money in the economy. If the central bank does this, the effect of the banking crisis on real GDP will be less than it would have been otherwise. That, of course, does not mean that the effects on real GDP are eliminated.
Nonetheless, poor monetary policy is widely acknowledged to be a major contributor to the depth and duration of the Great Depression in the United States due in part to runs on the banking system from 1929 to 1933. The financial crisis of 2007–2008 generated increased interest in the relationship between monetary policy and financial crises, presumably partly because of the serious recessions in the United States and other countries that followed the crisis. Another reason for increased interest in macroprudential regulation is the spread of financial difficulties well beyond firms legally chartered and regulated as banks. Money-market mutual funds in the United States are organized under the Investment Company Act of 1940 and regulated by the Securities and Exchange Commission. A run on these firms started because the Reserve Primary Fund was no longer able to redeem investments at par, "breaking the buck," on September 16, 2008. This run was stopped by a guarantee of such investments by the US Treasury using funds in the Foreign Exchange Stabilization Fund on September 19, 2008, a novel use of those funds (Dwyer and Tkac 2009). Runs on repo were associated with the failures of investment banks, which became commercial banks in the crisis in order to gain access to the discount window at the Federal Reserve (Gorton and Metrick 2012). A regulatory focus solely on possible problems at commercial banks contributed to the severity of the financial crisis. A central bank can make funds available to agents in the event that financial markets suddenly break down and financial institutions under stress themselves suddenly cut off borrowers from their usual sources of funds. This understanding of the
problem determined some of the Federal Reserve's policies during the financial crisis of 2007–2008. Given this spread of difficulties beyond banks, monetary policy by itself has difficulty dealing with runs on firms that do not hold reserves at the Federal Reserve (or more generally the central bank). A different approach is required that addresses problems beyond banks. As Claessens (2015) notes, this concern about institutions beyond commercial banks and a further concern with financial stability generally also seem to motivate an approach to macroprudential regulation to minimize or contain systemic risk. This is so even though the economic rationale for much of such government intervention is not obvious.
18.3.2 Externalities

Rather than effects on real GDP, some have suggested that a microeconomic rationale for macroprudential regulation would make it more effective. Some literature has begun to explore externalities that might motivate regulation and how regulation might ameliorate those externalities. De Nicolò, Favara, and Ratnovski (2012) present a nice summary of currently known possible external effects that might motivate macroprudential regulation. One set of possible externalities concerns strategic complementarities. These are an explanation for banks' well-known tendency to have similar changes in policies at similar times. Besides the obvious explanation that the external environment faced by banks is similar, banks may implement similar policies for granting loans because they use others' credit policies as information about appropriate credit policies. There also may be reputational concerns or incentives for employees that create such effects. Government bailouts also create an incentive to have similar policies. If all banks get into financial difficulties, the evidence indicates that the government will bail out the banks, at least in high-income countries. If one bank, even one viewed as important, gets into financial difficulties, the government might not bail out that bank.12 A second class of externalities is related to fire sales. If a bank gets into difficulties, it may be forced to sell assets quickly relative to a typical sale or to sell an unusually large amount. Such a sale can depress prices and thereby lessen the solvency of the institution itself. The sale also lowers the current market values of similar assets at other institutions. Such sales could trigger financial difficulties at other firms that might have real external effects.13 Finally, distress or failure of a bank can affect other institutions because of interconnectedness.
Besides effects of fire sales, interbank exposures mean that failure at one firm can create losses at any other banks that are counterparties to the failing bank. Furthermore, if a failure of one firm adversely affects real GDP to a significant degree, this can create difficulties at additional firms. Much of the underlying economic justification of macroprudential regulation is based on pecuniary externalities, most obviously in the case of fire sales. At first glance, this is
odd. Since Scitovsky (1954), it has been known that pecuniary externalities need not be pertinent for welfare (Scitovsky 1954; Bator 1958; Buchanan and Stubblebine 1962). In the context of increasing returns due to lower prices, Scitovsky showed that prices will reflect pecuniary externalities and that there is no reason for policymakers to concern themselves with them. Pecuniary externalities do affect behavior in some contexts and can make people worse off, a proposition that can be seen in quite general contexts (Loong and Zeckhauser 1982; Geanakoplos and Polemarchakis 1986; Greenwald and Stiglitz 1986). On the other hand, the mere existence of an externality does not imply that anything should be done. For example, if a new gas station opens across the street from an existing gas station, the existing one may go out of business, the value of the site will fall, there may well be effects on counterparties to the failing gas station, and the site may be unused for some time. Few would argue that entry of gas stations should be regulated to avoid externalities associated with adverse effects on competitors. But this is exactly analogous to the situation in Lorenzoni (2008, 813, 824) in which lost profits in the bad state are necessary for the existence of an externality. Besides this debatable reliance on pecuniary externalities, regulation as implemented invariably differs from an economist's ideal policy. Demsetz (1969) argues this point forcefully, calling it the "Nirvana Fallacy" to compare imperfect market outcomes to ideal government regulation and conclude that government regulation clearly will produce better outcomes. More recently, Claessens (2015) referred to this difficulty with actual macroprudential policy.
Partly because these externalities have been identified by research only in the last couple of decades, and partly because the financial crisis of less than a decade ago only recently made them possible concerns for policy, the size of any of these externalities is largely unknown. Some of them are likely to remain of unknown importance for quite some time, because each theory is one of several explanations of the same phenomena, and disentangling them is no small task. While eliminating externalities is a solid economic way to motivate policy, it is not clear that it is viable in this case, at least for the foreseeable future. The economic importance of these externalities is a matter of conjecture. The best policies to deal with these externalities are unknown. De Nicolò, Favara, and Ratnovski (2012) evaluate proposed policies in terms of their effects on these externalities, but it is at best difficult to know what is too much or too little regulation when the existence or size of any externality itself is a conjecture.14 In the end, they conclude that capital regulation is a useful way to deal with all of these possible problems. Capital requirements can be justified in another way, and the determinants of the desirable amount are more evident.
18.3.3 Deposit Insurance and Moral Hazard

An alternative way to think about macroprudential policy is to try to design policy to limit the adverse effects of other policies. This suggestion may seem perverse at first glance, but it is the soundest argument for microprudential regulation. There is little or
no reason to think that bank examiners can assess the riskiness of banks' activities better than the owners of a bank, its staff, or its auditors. Any agency problems may well be at least as severe for a bank's regulators as for its owners. The existence of deposit insurance implies that banks have an incentive to take on more risk than they would otherwise; the government providing the insurance has an incentive to limit this enhanced riskiness. It is improbable that deposit insurance will be eliminated. Indeed, it has spread around the world (Demirgüç-Kunt, Kane, and Laeven 2014). Much microprudential regulation can be interpreted as a response to the moral hazard created by deposit insurance. One consequence of deposit insurance is that banks want to hold less capital than they would without deposit insurance. Banking regulation in the United States has included capital requirements since the first bank's charter. These rules made it harder for people to start a bank, collect funds, and then abscond with them.15 Bank capital fell over time, possibly because the cost of being informed about a bank fell, and it has fallen particularly since the introduction of deposit insurance. Figure 18.1 shows the level of capital in banks in the United States from 1834 to 2015. Banking capital fell substantially before the Federal Reserve, with a ratio of equity capital to assets of about 19 percent in 1913. After the introduction of federal deposit insurance in 1933, the ratio of capital to assets fell to 8 percent by 1960, and it has been around 10 percent in recent years.16 This ratio of capital to assets is similar to the bank's leverage ratio, which is an overall measure of capital held. Banking regulators in recent decades have introduced complicated rules about the capital a bank must hold, especially for large institutions.17 These risk-based
[Figure 18.1 near here. Vertical axis: capital relative to assets, 0.0 to 0.7; horizontal axis: years, 1834–2004.]

Figure 18.1 Bank Capital Relative to Assets, 1834–2015
Notes: The data are from Historical Statistics of the United States for 1834–1967, series Cj252 and Cj271 (Carter et al. 2006), and from the Federal Deposit Insurance Corporation (FDIC), tab. CB09 for 1968–2015. The ratio shown is the ratio of the book value of capital to total assets. The Historical Statistics data are for all banks, and the FDIC data are for insured banks. The Historical Statistics data end in 1980. The ratios from the two sources differ by only 0.01 percentage points in 1976, so data from the FDIC are used in 1977 and later years.
capital requirements permit the level of capital required to reflect risk to some degree. Regulators estimate the riskiness of various assets and impose higher capital for riskier assets. The assets are categorized by broad bin.18 For example, all sovereign debt with the same credit rating has the same risk weighting (Bank for International Settlements 2013). While this risk weighting of capital requirements has advantages, it also requires that regulators have a sound basis for categorizing risks and the capital required for various risks.
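The distinction between the overall leverage ratio and a risk-based capital ratio can be illustrated with a small calculation. The risk weights, bin names, and portfolio below are hypothetical stand-ins for the regulatory bins, chosen only for illustration and not taken from the actual rules.

```python
# Hypothetical risk weights by asset bin (illustrative, not actual Basel weights)
RISK_WEIGHTS = {"sovereign_aaa": 0.0, "residential_mortgage": 0.5, "corporate_loan": 1.0}

def capital_ratios(capital, holdings):
    """Return (leverage ratio, risk-based ratio) for a book of assets.

    holdings maps asset bins to dollar amounts; the risk-based ratio
    divides capital by risk-weighted assets instead of total assets.
    """
    total_assets = sum(holdings.values())
    rwa = sum(RISK_WEIGHTS[bin_] * amount for bin_, amount in holdings.items())
    leverage = capital / total_assets
    risk_based = capital / rwa if rwa > 0 else float("inf")
    return leverage, risk_based

# A bank with 8 of capital against 100 of assets, 40 of which carry zero weight
book = {"sovereign_aaa": 40.0, "residential_mortgage": 30.0, "corporate_loan": 30.0}
leverage, risk_based = capital_ratios(8.0, book)
```

In this example the leverage ratio is 8 percent but the risk-based ratio is higher, because 40 of the 100 in assets carry a zero weight. Shifting the book toward zero-weight assets raises the risk-based ratio without any new capital, which is why the soundness of the regulators' risk categories matters so much.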
18.3.4 Banking and Related Regulation since the Financial Crisis

Some of the new elements in the regulatory structure in the United States since the financial crisis of 2007–2008 can be interpreted as attempts to lower moral hazard by lowering the costs of bailouts and the probability of those bailouts. The Financial Stability Oversight Council (FSOC) was created and consists of the secretary of the Treasury, the chair of the Board of Governors of the Federal Reserve, and other federal officials.19 FSOC has the power to designate some financial firms as systemically important financial institutions (SIFIs). SIFIs are regulated by the Federal Reserve Board in addition to any other regulation that applies to them. SIFIs include financial firms in addition to banks. As of this writing, SIFIs include insurance companies and included General Electric Capital Corporation until 2016. FSOC also designates financial-market utilities, mostly exchanges and clearinghouses. These utilities are firms whose "failure . . . could create or increase the risk of significant liquidity or credit problems spreading among financial institutions or markets and thereby threaten the stability of the U.S. financial system" (Financial Stability Oversight Council 2016). Similar efforts are in progress in other countries. In addition, an international body, the Financial Stability Board (FSB), coordinates information, makes recommendations about financial regulation including macroprudential regulation, and monitors implementation.20 It also designates institutions as globally systemically important banks (G-SIBs). While all of these efforts are of interest, the changes in the United States can be taken as emblematic, useful for thinking about the general issues rather than the details of governance. Regulation by the Federal Reserve includes enhanced capital requirements and stress tests of institutions' resilience to adverse scenarios.
Higher capital requirements can decrease the probability of failure, since capital can cover larger losses. In addition, the Federal Reserve determines the instruments that count as regulatory capital in a SIFI's capital structure. More book capital also can lower the losses to nonstockholders in the event of failure. Accounting capital, even a substantial amount when the bank closes, does not guarantee that there will be no bailout. Accounting capital is a lagging indicator of the value of a bank. Substantial
disparities between accounting capital and the market values of banks are common when banks fail. It is possible for a firm to have substantial accounting capital and no market value. A firm closes when it no longer transacts business on a regular basis. This need not coincide with failure. The owner of the firm may simply decide that the firm's capital is better deployed elsewhere. Banks, though, are more likely to be bought out than simply to close their doors. On the other hand, a bank inevitably will close if others no longer wish to do business with it, perhaps because of perceived counterparty risk. This could occur when the bank still has positive capital. Shareholders do not, in fact, receive payouts when banks in the United States are closed and resolved.21
18.3.5 Too Big to Fail

For large banks, failure was replaced by a policy of "too big to fail" (TBTF) in May 1984, when Continental Illinois failed and was bailed out by bank regulators. At the time, and since, large banks that a regulator will not let fail have been known as TBTF (Haltom 2013). Why do regulators bail out large banks? An unwillingness to take the risk of large effects on the economy when holders of liabilities other than insured depositors bear losses was evident in the failure of Continental Illinois.22 It can be argued that the probability of such effects is small, but this is not a compelling reason for a regulator to let a large bank fail. As William Seidman, a former FDIC chairman, explained some events during the failure of many banks in Texas in the 1980s:

They [the holders of banks' and holding companies' debt] reckoned that the Continental precedent would force us to cover all company debtors, including them, because we had no other way to handle such a large institution and would recoil from closing it down and risking the panic and huge losses that might occur in the weakened Texas economy. Some of course argued that no such panic would have occurred, but most public officials operate under the banner of "not on my watch" and do not care to gamble with such possibilities. (McKinley 2012, chap. 6)
Ben Bernanke (2015) puts it somewhat differently, but the implications are similar: “I knew that, as chaotic as financial conditions were now, they could become unimaginably worse if AIG defaulted—with unknowable but assuredly catastrophic consequences for the U.S. and global economies” (2015, xi). Banking crises do not happen often enough for us to have much confidence in any particular forecast of the outcome in any particular crisis. It is not surprising that Bernanke characterizes the consequences of a policy of letting AIG fail on September 16, 2008, as having “unknowable” yet “assuredly catastrophic” effects.23 The state for the policymaker is one of uncertainty, in which the set of states of the world is not completely known and the costs of mistakes can be very high.
The designation of SIFIs can be interpreted as the creation of a list of institutions that also would be TBTF, for example, in the financial crisis of 2007–2008. By itself, this is not particularly useful and may even be detrimental because it makes the list of TBTF institutions known with some certainty.
18.3.6 Living Wills

A new way of resolving SIFIs, though, was created by the Dodd-Frank Act, with the intended purpose of ending the TBTF policy. The usual means of resolving failed firms, including financial firms, in the United States is under bankruptcy law. Banks have bank charters, not corporate charters, and are resolved by bank regulators. Because virtually all banks in the United States are insured, the suspension of operations and closing of banks is executed by the FDIC, which makes any necessary payments to insured depositors and winds up the bank's operations. Financial firms such as Lehman Brothers are resolved in bankruptcy courts. These were the means available to banking regulators before the Dodd-Frank Act, which created an alternative structure for SIFIs. Under Dodd-Frank, the FDIC resolves SIFIs using an orderly liquidation authority (OLA) outlined in the act. SIFIs, as well as large bank holding companies and large banks, are required to develop, maintain, and periodically submit resolution plans to the Federal Reserve and the FDIC (Federal Deposit Insurance Corporation 2010).24 These resolution plans have been called living wills because they require a firm to develop a resolution plan while there is little or no prospect of failure. The overarching purpose of the requirement for living wills is to end TBTF. This purpose is supposed to be accomplished by having plans to resolve SIFIs (Federal Deposit Insurance Corporation 2012), which include (1) providing the FDIC with the information to resolve a firm, (2) providing for orderly and rapid resolution, (3) providing a funding plan, and (4) assisting the FDIC and the Federal Reserve in supervising the firm. Given the requirements in the law, the hope is that the regulators will resolve the bank or SIFI and not bail it out. A new living will is to be submitted annually to reflect changes in the firm over the year.
Especially initially, the submitted plans or required revisions are expected to include organizational changes that make resolution of a firm easier. Some organizational changes have been made (Federal Reserve System Board of Governors 2016). There are good reasons to have serious doubts about the credibility of these living wills. Perhaps first and foremost, in the event of a financial crisis, it is hard for some to imagine that regulators will allow a firm to fail. Despite protestations to the contrary before a financial crisis, there is an incentive to bail out the firms and avoid the probability, no matter how small, of substantial negative effects on the financial system and the economy (Jarque and Athreya 2015; Chari and Kehoe 2016). In the context of monetary policy, the problem is characterized as time inconsistency. In the context of a failing bank, a time-inconsistent plan is as follows: (1) There is an announced plan in place to resolve the banks. (2) Before the crisis, it is optimal to have
this plan because it lowers moral hazard and it is optimal to state that it will be followed. (3) Once a crisis occurs, though, the regulators are confronted by a stark possibility of very bad consequences; given the discretion to avoid this outcome, they will do so. The problem is one of time inconsistency because, given discretion, the regulators will be inconsistent over time, claiming that they will follow a plan and then ignoring the plan once a failure or crisis occurs. In other words, the announced plan is not an equilibrium. Rational agents will ignore the announced plan that will not be followed. In the context of living wills, this time-inconsistency argument claims that executing the living will is not an equilibrium. Given any latitude to bail out firms, the regulators are likely to find it optimal to bail out the firms for exactly the same reasons as in the recent financial crisis. Even though the living wills may lower the perceived cost of letting firms fail and be resolved, the perceived costs still are likely to be "unknowable" and possibly "catastrophic." The implication is that everyone should ignore the announced plan because it will not happen. This is not improbable. There are many details that make resolution of a SIFI difficult. How the firm will run into difficulties is liable to affect the resolution process itself and the options available. It is hard to see all the different ways in which a set of SIFIs can get into financial difficulties. With no experience and no obvious way to generate empirical evidence in such a new environment, no definite conclusion is warranted. It is hard to say reliably more than "Some think living wills can solve the TBTF problem" (Huertas 2013 is a prominent example) and "Some are quite dubious that it will" (Pakin 2013 provides a thorough summary of the arguments, and Jarque and Athreya 2015 and Chari and Kehoe 2016 provide theoretical analyses).
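The time-inconsistency argument can be reduced to a two-stage sketch. The payoff numbers and function names below are hypothetical illustrations, not taken from the chapter or the cited models: once a crisis arrives, the regulator picks the ex post cheaper action regardless of any announced plan, and banks choose risk based on the action they anticipate.

```python
def regulator_ex_post(cost_of_bailout, expected_cost_of_failure):
    """Once the crisis has occurred, the regulator picks the cheaper action,
    regardless of any plan announced beforehand."""
    return "bail out" if cost_of_bailout < expected_cost_of_failure else "resolve"

def bank_ex_ante(anticipated_action):
    """A bank expecting a bailout has its downside capped, so it takes more risk."""
    return "risky" if anticipated_action == "bail out" else "safe"

# Hypothetical payoffs: letting a SIFI fail risks "unknowable" losses (here 100)
# against a bailout costing 10, so the announced plan to resolve is not followed.
announced_plan = "resolve"
actual_action = regulator_ex_post(cost_of_bailout=10, expected_cost_of_failure=100)
bank_choice = bank_ex_ante(actual_action)
```

With these numbers the regulator bails out despite having announced resolution, and rational banks condition on the action actually taken rather than the announcement, which is the sense in which the announced plan is not an equilibrium.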
Even Huertas’s (2013 and 2016) strong exposition of how living wills can solve the TBTF problem says that regulators “must” do many things to make it work. Two elements stand out. First, living wills will not work if regulators exercise forbearance, delaying closing the institution until losses exceed the value of nonequity nondeposit assets that can be used to cover losses. Forbearance has been common in the past, and even an attempt to legislate it out of existence in the United States failed. Second, jurisdictional issues across countries can make an otherwise sound plan fail. US bank regulators have plans that are likely to create difficulties (Huertas 2013 and 2016). Huertas claims that living wills can work, not that they will work. While it is possible to conclude that living wills will work, it is reasonable to view the probability of their working as quite small.
18.4 Conclusion

Macroprudential regulation to lower systemic risk was first discussed before the financial crisis of 2007–2008 but has received a prominent place in all regulatory discussions since the crisis. This is warranted. Given the bailouts in the crisis, moral hazard absent
any regulatory change is likely to be substantially worse than before the crisis. It may be far less than eighty years before there is another such crisis absent appropriate regulatory responses. Many new justifications for macroprudential regulation have appeared in the literature, some before the crisis and more since. Externalities have received some attention. The usefulness of this literature for thinking about regulation or designing it is, at best, very limited in the current state of knowledge. The moral hazard problems caused by deposit insurance and bailouts have clear implications and are part of, if not much of, the motivation for some actual changes in bank regulation. Regulators appear to have quickly reached a consensus on measures to deal with perceived difficulties in the financial system and to be implementing those measures aggressively. Prominent among them are increased capital requirements and a requirement that institutions create living wills. Increased capital requirements are opposed by banks—not surprisingly—but have much to recommend them (Admati and Hellwig 2013). The creation of a living will for a complex financial institution is a costly process. While it is possible that regulators will execute living wills for multiple financial institutions simultaneously in a crisis, there is no evidence that they will do so. Regulators will have the discretion to avoid doing that. Even if it were not legally permitted, an extralegal choice would be unlikely to be punished.25 In a crisis, bailing out banks has reasonably predictable consequences in the near term. Letting multiple institutions fail has consequences that are much harder to predict. It will not be surprising if regulators decide that the long-term moral hazard implications are a problem that can be dealt with after the crisis is over. This was the solution adopted in the financial crisis of 2007–2008. What are the implications of this conclusion for regulation?
The main one is that it is best to have self-enforcing rules determining government actions in a crisis. A related way to say it is that the rules must be incentive-compatible. There is a tendency to think that regulators need merely to be told the preferred solution, and that is enough to guarantee that they will implement the solution. Regulators "must" do this or "must" do that. That is wrong, as much of the literature on nonfinancial regulation shows, and as Jarque and Athreya (2015) and Chari and Kehoe (2016) show holds for financial regulation as well. Regulators respond to incentives in the same way that people do in other contexts. If it is not possible to make some government actions the best ones available in the crisis, those actions will not be taken. Regulators will take longer-term preferred actions only if the environment is set up so they have an incentive to take those actions.
Notes

1. This deposit contract can support the full-information risk-sharing equilibrium.
2. This outcome is best if the bank can pay all customers the same. If the bank confronts customers sequentially and makes the promised payments to some customers in period 1, then other customers receive less than one unit of consumption.
3. Arguably, the equilibrium in which depositors do not receive the promised payment is one in which no one puts funds into the bank in the first place and there simply are no banks.
4. Allen and Gale (2009) provide a thorough summary of the literature.
5. The most common citation for an examination of several crises is Gorton's (1988) analysis of banking crises during the national banking period in the United States. A little bit of skepticism is warranted about repeated findings that fundamental factors invariably are important for individual banking crises. Suppose that someone examines a banking crisis and concludes, "There is no fundamental factor that produced the run and it must be a pure panic run." (This is stronger than saying that depositors thought a factor was important and it was not. That would indicate that depositors made a mistake, not that the run was produced by no perceived fundamental factor.) A referee, fairly enough, would say, "There is no known fundamental factor." Effectively, someone trying to support this conclusion is trying to claim that "There is no known or unknown fundamental factor." The claim about unknown factors is a statement of faith. As the well-known aphorism says, absence of evidence is not evidence of absence. It is hard to imagine such a paper being taken seriously by anyone who thinks that fundamental factors are important.
6. This was a run on prime funds that held commercial paper after the failure of the Reserve Primary Fund because of losses on its holdings of Lehman Brothers commercial paper (see, e.g., Dwyer and Tkac 2009). Funds flowed into Treasury money-market mutual funds that held short-term federal government securities but not commercial paper.
7. As Friedman and Schwartz (1963) point out, the term "restriction of payments" for these extralegal "suspensions" is more apt, because the banks continued to pay out cash for many ordinary withdrawals but did not honor extraordinary demands for cash.
This lessened the adverse effects on depositors and the economy.
8. Dwyer (1996) provides a partial summary of some legal aspects of what is called free banking in the United States before the Civil War, and Bodenhorn (2002) provides a more general discussion of the banking systems in that early period. For the general legal framework in the national banking system after the war, the best sources known to me are largely implicit in the histories by Sprague (1910) and Friedman and Schwartz (1963).
9. In fact, of the banking crises identified by Laeven and Valencia (2012), roughly a third have no subsequent decrease in annual real GDP or real GDP per capita (Dwyer et al. 2013; Dwyer and Devereux 2016).
10. A search of EconLit on June 6, 2016, turned up the first reference in a paper published by the World Bank in 1986 (Lawrence 1986). Lawrence uses the term to mean “the possibility of a major series of synchronous losses for the lenders” to developing countries (Lal and Wolf 1986, 9).
11. This definition includes a lot more than what usually seems to be meant by systemic risk. For example, the risks of an earthquake, a large oil price increase, and a coup fit in this definition. Or maybe systemic risk should include such events?
12. This reason for macroprudential regulation also can be viewed as a reason not to bail out banks rather than a reason for additional regulation.
13. The decline in the value of the assets themselves is a pecuniary externality that lowers the wealth of the assets’ owners but need have no real effect beyond these wealth effects.
14. Dávila (2014) dissects fire-sale externalities and shows that attempts to correct them may not be time-consistent, that is, consistent in equilibrium with the discretion given to regulators.
15. While it may be hard to imagine doing that with today’s communication technology, it is not so hard to imagine on the Michigan frontier in the 1830s.
16. The ratio of capital to assets ignores off-balance-sheet items and is not likely to adequately reflect positions in derivatives.
17. Alexander (2015) provides an excellent summary of the Basel requirements concerning bank capital.
18. There are bound to be errors in the classifications and changes in the relative riskiness of assets and the relative price of that risk over time. Risk-based capital requirements create an incentive to hold assets that have lower risk weightings relative to their true risk and not to hold assets that have higher risk weights than their true risk.
19. The FSOC website has a substantial amount of information about its organization and activities; see https://www.treasury.gov/initiatives/fsoc/Pages/home.aspx.
20. The membership includes twenty-four countries and jurisdictions and international organizations (Financial Stability Board 2016).
21. Of the 468 bank failures handled by the FDIC from 2000 to 2016, all of them required payments by the FDIC. The total payments were $50 million on $1.52 billion of insured deposits and assets of $3.31 billion.
22. This preference to avoid creating panic and serious distress, even if there is only a small probability of it, can be due to a sincere desire to avoid harming many fellow citizens. It does not require pure self-regard or bureaucratic ineptness.
23. AIG is American International Group, an international insurance and financial services group that had written credit default swaps on collateralized debt obligations backed by subprime mortgages. Its failure to deliver on its promised payments in the event of defaults on mortgages would have affected many institutions, including European banks (Kos 2010).
24. The FSB is recommending these plans for thirty international institutions.
25.
Consider the US Treasury secretary’s guarantee of funds in money-market mutual funds by the Exchange Stabilization Fund. This stopped the run on prime money-market funds, and it is evident that it did. There was no punishment for this extralegal action in the recent crisis, and arguably there should not have been.
References
Acharya, Viral V., Philipp Schnabl, and Gustavo Suarez. 2013. “Securitization without Risk Transfer.” Journal of Financial Economics 107 (March): 515–536.
Admati, Anat, and Martin Hellwig. 2013. The Banker’s New Clothes. Princeton: Princeton University Press.
Alexander, Kern. 2015. “The Role of Capital in Supporting Banking Stability.” In The Oxford Handbook of Financial Regulation, edited by Niamh Moloney, Eilís Ferran, and Jennifer Payne, 334–363. Oxford: Oxford University Press.
Allen, Franklin, and Douglas Gale. 2009. Understanding Financial Crises. Oxford: Oxford University Press.
Bank for International Settlements. 2013. “Treatment of Sovereign Risk in the Basel Capital Framework.” http://www.bis.org/publ/qtrpdf/r_qt1312v.htm.
Bator, Francis M. 1958. “The Anatomy of Market Failure.” Quarterly Journal of Economics 72 (August): 351–379.
Bernanke, Ben. 2015. The Courage to Act: A Memoir of a Crisis and Its Aftermath. New York: W. W. Norton.
Bodenhorn, Howard. 2002. State Banking in Early America. Oxford: Oxford University Press.
Bordo, Michael D., and Joseph G. Haubrich. 2010. “Credit Crises, Money and Contractions: An Historical View.” Journal of Monetary Economics 57, no. 1: 1–18.
Bordo, Michael D., and Joseph G. Haubrich. 2017. “Deep Recessions, Fast Recoveries, and Financial Crises: Evidence from the American Record.” Economic Inquiry 55 (January): 527–541.
Borio, Claudio. 2003. “Towards a Macroprudential Framework for Banking Supervision and Regulation?” CESifo Economic Studies 49, no. 2: 181–215.
Buchanan, James M., and William Craig Stubblebine. 1962. “Externality.” Economica, new series 116 (November): 371–384.
Calomiris, Charles W., and Charles M. Kahn. 1991. “The Role of Demandable Debt in Structuring Optimal Banking Arrangements.” American Economic Review 81 (June): 497–513.
Carter, Susan B., Scott S. Gartner, Michael Haines, Alan Olmstead, Richard Sutch, and Gavin Wright, eds. 2006. Historical Statistics of the United States, Millennial Edition. New York: Cambridge University Press.
Chari, V. V., and Patrick J. Kehoe. 2016. “Bailouts, Time Inconsistency, and Optimal Regulation.” American Economic Review 106 (September): 2458–2493.
Claessens, Stijn. 2015. “An Overview of Macroprudential Policy Tools.” Annual Review of Financial Economics 7: 397–422.
Dávila, Eduardo. 2014. “Dissecting Fire-Sale Externalities.” Unpublished paper, New York University.
Demirgüç-Kunt, Asli, and Edward Kane. 2002. “Deposit Insurance around the Globe: Where Does It Work?” Journal of Economic Perspectives 16 (Spring): 175–195.
Demirgüç-Kunt, Asli, Edward Kane, and Luc Laeven. 2014. “Deposit Insurance Database.” IMF working paper 14/118.
Demsetz, Harold. 1969. “Information and Efficiency.” Journal of Law and Economics 12 (April): 1–22.
De Nicolò, Gianni, Giovanni Favara, and Lev Ratnovski. 2012.
“Externalities and Macroprudential Policies.” IMF discussion note SDN/12/05.
Diamond, Douglas W., and Philip H. Dybvig. 1983. “Bank Runs, Deposit Insurance and Liquidity.” Journal of Political Economy 91 (June): 401–419.
Dwyer, Gerald P. 1996. “Wildcat Banking, Banking Panics and Free Banking in the United States.” Federal Reserve Bank of Atlanta Economic Review 81 (December): 1–20.
Dwyer, Gerald P., and John Devereux. 2016. “What Determines Output Losses after Banking Crises?” Journal of International Money and Finance 69 (December): 69–94.
Dwyer, Gerald P., John Devereux, Scott Baier, and Robert Tamura. 2013. “Recessions, Growth and Banking Crises.” Journal of International Money and Finance 38, no. 1: 18–40.
Dwyer, Gerald P., and Iftekhar Hasan. 2007. “Suspension of Payments, Bank Failures and the Nonbank Public’s Losses.” Journal of Monetary Economics 54 (March): 565–580.
Dwyer, Gerald P., and Margarita Samartín. 2009. “Why Do Banks Promise to Pay Par on Demand?” Journal of Financial Stability 5 (June): 147–169.
Dwyer, Gerald P., and Paula Tkac. 2009. “The Financial Crisis of 2008 in Fixed-Income Markets.” Journal of International Money and Finance 28 (December): 1293–1316.
Federal Deposit Insurance Corporation. 1968–2015. Table CB09. Data from https://www5.fdic.gov/hsob/hsobrpt.asp?state=1&rpttype=1&rpt_num=9.
Federal Deposit Insurance Corporation. 2010. “FDIC Staff Summary of Certain Provisions of the Dodd-Frank Wall Street Reform and Consumer Protection Act.” https://www.fdic.gov/regulations/reform/summary.pdf.
Federal Deposit Insurance Corporation. 2012. “Living Wills Overview.” https://www.fdic.gov/about/srac/2012/2012-01-25_living-wills.pdf.
Federal Reserve System Board of Governors. 2016. “Resolution Plan Assessment Framework and Firm Determinations (2016).” http://www.federalreserve.gov/newsevents/press/bcreg/bcreg20160413a2.pdf.
Financial Stability Board. 2016. “FSB Members.” http://www.fsb.org/about/fsb-members/#international.
Financial Stability Oversight Council. 2016. “Designations.” https://www.treasury.gov/initiatives/fsoc/designations/Pages/default.aspx.
Friedman, Milton, and Anna J. Schwartz. 1963. A Monetary History of the United States: 1867–1960. Princeton: Princeton University Press.
Galati, Gabriele, and Richhild Moessner. 2013. “Macroprudential Policy: A Literature Review.” Journal of Economic Surveys 27, no. 5: 846–878.
Galati, Gabriele, and Richhild Moessner. 2017. “What Do We Know about the Effects of Macroprudential Policy?” Economica, doi: 10.1111/ecca.12229.
Geanakoplos, John D., and Heraklis M. Polemarchakis. 1986. “Existence, Regularity, and Constrained Suboptimality of Competitive Allocations When the Asset Market Is Incomplete.” In Essays in Honor of Kenneth Arrow, Vol. 3, edited by W. Heller, R. Starr, and D. Starrett, 65–95. Cambridge: Cambridge University Press.
Gorton, Gary. 1985. “Bank Suspension of Convertibility.” Journal of Monetary Economics 15 (March): 177–193.
Gorton, Gary. 1988. “Banking Panics and Business Cycles.” Oxford Economic Papers 40 (December): 751–781.
Gorton, Gary, and Andrew Metrick. 2012. “Securitized Banking and the Run on Repo.” Journal of Financial Economics 104: 425–451.
Greenwald, Bruce C., and Joseph E. Stiglitz. 1986.
“Externalities in Economies with Imperfect Information and Incomplete Markets.” Quarterly Journal of Economics 101 (May): 229–264.
Group of Ten. 2001. “The G10 Report on Consolidation in the Financial Sector.” https://www.bis.org/publ/gten05.pdf.
Haltom, Renee. 2013. “Failure of Continental Illinois.” http://www.federalreservehistory.org/Events/DetailView/47.
Howard, Greg, Robert F. Martin, and Beth Anne Wilson. 2011. “Are Recoveries from Banking and Financial Crises Really So Different?” Board of Governors of the Federal Reserve System international finance discussion paper 1037.
Huertas, Thomas F. 2013. “Safe to Fail.” LSE Financial Markets Group special paper 221.
Huertas, Thomas F. 2016. “Resolution: What Authorities and Banks Need to Do to End Too Big to Fail.” https://www.linkedin.com/pulse/resolution-what-authorities-banks-need-do-end-too-big-thomas-huertas?trk=prof-post.
Jarque, Arantxa, and Kartik Athreya. 2015. “Understanding Living Wills.” Federal Reserve Bank of Richmond Economic Quarterly 101 (Third Quarter): 193–223.
Kaufman, George G., and Kenneth E. Scott. 2003. “What Is Systemic Risk, and Do Bank Regulators Retard or Contribute to It?” Independent Review 7 (Winter): 371–391.
Kos, Dino. 2010. “The AIG Backdoor Bailout: But a Bailout of Whom?” International Economy (Spring): 50–64.
Laeven, Luc, and Fabián Valencia. 2012. “Systemic Banking Crises Database: An Update.” IMF working paper 12/163.
Lal, Deepak, and Martin Wolf. 1986. “Introduction.” In Stagflation, Savings, and the State: Perspectives on the Global Economy, edited by Deepak Lal and Martin Wolf, 3–13. Washington, D.C.: World Bank.
Lastra, Rosa M. 2015. “Systemic Risk and Macro-Prudential Supervision.” In The Oxford Handbook of Financial Regulation, edited by Niamh Moloney, Eilís Ferran, and Jennifer Payne, 309–333. Oxford: Oxford University Press.
Lawrence, Robert Z. 1986. “Systemic Risk and Developing Country Debt.” In Stagflation, Savings, and the State: Perspectives on the Global Economy, edited by Deepak Lal and Martin Wolf, 91–102. Washington, D.C.: World Bank.
Loong, Lee Hsien, and Richard Zeckhauser. 1982. “Pecuniary Externalities Do Matter When Contingent Claims Markets Are Incomplete.” Quarterly Journal of Economics 97 (February): 171–179.
Lorenzoni, Guido. 2008. “Inefficient Credit Booms.” Review of Economic Studies 75 (July): 809–833.
McAndrews, James, and William Roberds. 1999. “Payment Intermediation and the Origins of Banking.” Federal Reserve Bank of Atlanta working paper 99–11.
McKinley, Vern. 2012. Financing Failure: A Century of Bailouts. Oakland: Independence Institute.
Pakin, Nizan. 2013. “The Case against Dodd-Frank’s Living Wills: Contingency Planning following the Financial Crisis.” Berkeley Business Law Journal 9, no. 1: 29–93.
Reinhart, Carmen M., and Kenneth S. Rogoff. 2009. This Time Is Different: Eight Centuries of Financial Folly. Princeton: Princeton University Press.
Reinhart, Carmen M., and Kenneth S. Rogoff. 2014. “Recovery from Financial Crises: Evidence from 100 Episodes.” American Economic Review 104, no. 5: 50–55.
Scitovsky, Tibor. 1954. “Two Concepts of External Economies.” Journal of Political Economy 62 (April): 143–151.
Selgin, George, and Lawrence H. White. 1997.
“The Options Clause in Scottish Banking.” Journal of Money, Credit and Banking 29 (May): 270–273.
Sprague, O. M. W. 1910. History of Crises under the National Banking System. Washington, D.C.: Government Printing Office.
Timberlake, Richard H. Jr. 1984. “The Central Banking Role of Clearinghouse Associations.” Journal of Money, Credit and Banking 16 (February): 1–15.
US Securities and Exchange Commission. 2014a. “Money Market Reform; Amendments to Form PF.” https://www.sec.gov/rules/final/2014/33-9616.pdf.
US Securities and Exchange Commission. 2014b. “SEC Adopts Money Market Fund Reform Rules.” https://www.sec.gov/news/press-release/2014-143.
Wallace, Neil. 1983. “A Legal Restrictions Theory of the Demand for ‘Money’ and the Role of Monetary Policy.” Federal Reserve Bank of Minneapolis Quarterly Review 7, no. 1: 1–7.
Wildavsky, Aaron. 1988. Searching for Safety. Somerset, N.J.: Transactions.
Part VI
CENTRAL BANKING AND CRISIS MANAGEMENT
Chapter 19
Central Banking and Crisis Management from the Perspective of Austrian Business Cycle Theory
Gunther Schnabl
I thank Andreas Hoffmann and an anonymous referee for helpful comments and the Friedrich August von Hayek Foundation and the Jackstädt Foundation for financial support. I thank Hannes Böhm for excellent research assistance.

19.1 Introduction

With the US subprime crisis and the European financial and debt crisis, the central banks of the large industrialized countries are widely understood to be key players in (financial) crisis management. With large financial institutions tumbling, threatening to trigger a meltdown in the global financial system, decisive interest-rate cuts and ample liquidity provisions have become standard tools to ensure financial and economic stability. Since the mid-1980s, the asymmetric nature of monetary policy crisis management—that is, stronger monetary expansion during crises than monetary tightening during the subsequent recoveries—has led to a gradual decline of interest rates toward zero and an inflation of central bank balance sheets. Because the structural decline of interest rates has been accompanied by low consumer price inflation, monetary policy crisis management was regarded for a long time as an outstanding success. With improved communication skills being assumed to ensure low inflation (Woodford 2003), interest rates as operational targets (or later the size
of the central bank balance sheet) became popular tools to achieve additional goals such as financial stability and growth. The US subprime crisis and the European financial and debt crisis have destroyed, however, the illusion of the Great Moderation (Bernanke 2005). Now the structural decline of the nominal and real interest rates is interpreted as a response to a structural decline of growth dynamics, which are linked to a saving glut of aging societies, declining (marginal efficiency of) investment, and (exogenously) increasing income inequality (Gordon 2012; Summers 2014). The result is a declining natural interest rate (as Summers 2014 calls it), which involves an increasing probability of financial-market bubbles, while product markets remain in equilibrium. Similarly, Laubach and Williams (2015) suggest that (in the United States) the fall in trend GDP growth rates triggered a decline in the natural rate of interest, which implies that the gradual interest-rate cuts of the Fed are an inevitable response to the secular stagnation. This explanatory approach is in line with the literature that sees financial crises as a result of random or exogenous shocks, amplified by the irrationality of human action, and therefore sees discretionary policy intervention as indispensable to stabilize inherently unstable markets (Keynes 1936; De Grauwe 2011). Monetary policy has increasingly become the pivotal instrument of crisis management, because with high levels of government debt and growing dimensions of crises, fiscal crisis management has reached its limits. In contrast to the Keynesian views, monetary policy crisis management is discussed here from the point of view of the monetary overinvestment theories of Mises (1912) and Hayek (1929 and 1931), which see monetary policies that are too loose as the origin of structural distortions and crises.
This view is in line with assessments based on the Taylor (1993) rule suggesting that monetary policies that were too expansionary during the 2000s sowed the seeds for financial exuberance and therefore the current crisis (see Taylor 2007; Jorda, Schularick, and Taylor 2015; Adrian and Shin 2008; Brunnermeier and Schnabel 2014; Hoffmann and Schnabl 2008, 2011, and 2016b). The approach is in many aspects similar to Borio’s (2014) perception of the financial cycle, but it puts a stronger focus on the role of central banks in crisis management and extends Borio (2014) to the possible role of central banks in the emergence of crises.
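The Taylor-rule-based assessments cited above rest on a simple benchmark: the original Taylor (1993) rule prescribes a nominal policy rate i = r* + π + 0.5(π − π*) + 0.5y, with the equilibrium real rate r* and the inflation target π* both set at 2 percent and y the output gap. A minimal sketch follows; the sample inputs are hypothetical illustrations, not estimates from the studies cited.

```python
def taylor_rule(inflation, output_gap, neutral_real_rate=2.0, inflation_target=2.0):
    """Nominal policy rate prescribed by the original Taylor (1993) rule."""
    return (neutral_real_rate + inflation
            + 0.5 * (inflation - inflation_target)
            + 0.5 * output_gap)

# At target inflation and a closed output gap, the rule prescribes 4 percent.
baseline = taylor_rule(inflation=2.0, output_gap=0.0)    # 4.0
# A policy rate held persistently below such a prescription is the kind of
# deviation that the rule-based assessments interpret as "too expansionary."
prescribed = taylor_rule(inflation=3.0, output_gap=1.0)  # 6.0
```

The 0.5 coefficients are Taylor's original weights; empirical studies often estimate alternative weights, which changes the prescribed path but not the logic of the comparison.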
19.2 Asymmetric Monetary Policy Patterns

To model the central bank crisis management emerging since the mid-1980s, a monetary overinvestment theory framework based on Wicksell (1898), Mises (1912), and Hayek (1929 and 1931) is used. This allows us to understand asymmetric monetary policy crisis-management patterns since the mid-1980s.1 Whereas Borio (2014) takes the empirical observation of financial cycles as a starting point and matches it with the
currently dominating business cycle theories, I take the seminal Austrian business cycle theory as a starting point and match it with the observed stylized facts occurring in financial markets.
19.2.1 An Austrian Business Cycle Framework

Based on the business cycle theories of Wicksell, Mises, and Hayek, four types of interest rates can be distinguished. First, the internal interest rate ii reflects the (expected) returns of (planned) investment projects. Second, Wicksell’s (1898) natural interest rate in is the interest rate that balances the supply (saving) and demand (investment) of capital (and thereby causes neither inflation nor deflation). Third, the central bank interest rate icb is the policy interest rate set by the central bank. It represents the interest rate that commercial banks are charged by the central bank for refinancing operations. Fourth, the capital market interest rate ic is defined as the interest rate set by the private banking (financial) sector for credit provided to private enterprises. For simplicity, it is assumed that under normal conditions, the capital-market interest rate equals the policy interest rate (see Hoffmann and Schnabl 2011).

Wicksell (1898) and Hayek (1929 and 1931) have different concepts of the natural interest rate. According to Wicksell, the deviation of the central bank interest rate (and the capital-market interest rate) from the natural rate of interest (which guarantees goods market equilibrium) disturbs the equilibrium between ex ante saving and investment plans, bringing about inflationary (I > S) or deflationary processes (S > I).2 During an inflationary credit boom, the supply of goods cannot satisfy the additional demand for goods at given prices. Therefore, Wicksell’s natural rate of interest is the interest rate at which inflation is zero (or at the target level).
In Wicksell’s framework, money is not neutral, but additional money supply affects decisions of economic agents, triggering additional credit provision.3 Although he does not directly refer to overinvestment, Wicksell (1898) stresses (in contrast to Woodford 2003) the role of the production structure for the transmission of monetary policy to inflation.4 Building on Wicksell (1898), Mises (1912) and Hayek (1929 and 1931) aimed to explain business cycles caused by the deviation of the central bank interest rate (capital-market interest rate) from the natural rate of interest. They attribute the main role in the creation of cycles to credit creation triggered by the central bank and the private banking sector. Hayek emphasized the importance of the intertemporal misalignments of plans of producers and consumers to derive mal- or overinvestment5 as a mismatch between the production structure and consumer preferences. The natural interest rate is the interest rate that aligns saving and consumption preferences with the production structure over time. A fall in the central bank interest rate (capital-market interest rate) below the natural interest rate causes a cumulative inflationary process, creating distortions in the production structure that later make an adjustment necessary (unless the central bank keeps on inflating credit at an ever-increasing pace and artificially prolongs the credit boom).
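In compact notation, the Wicksellian constellations described above can be restated as follows (a summary sketch using the chapter's symbols, with $i_{cb}$ the policy rate, $i_c$ the capital-market rate, and $i_n$ the natural rate):

```latex
% Goods-market equilibrium: the policy and capital-market rates equal the
% natural rate, so ex ante saving equals investment and prices are stable.
\[
i_{cb} = i_c = i_n \;\Longrightarrow\; S = I
\]
% Rates below the natural rate: investment plans exceed planned saving,
% triggering a cumulative inflationary (credit boom) process.
\[
i_{cb} = i_c < i_n \;\Longrightarrow\; I > S
\]
% Rates above the natural rate: planned saving exceeds investment,
% triggering a deflationary process.
\[
i_{cb} = i_c > i_n \;\Longrightarrow\; S > I
\]
```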
Figure 19.1 Equilibrium [diagram: interest rate i against investment and saving I, S; investment curves Ii1 and Ii2, saving curve S1,2; equilibria I1 = S1 and I2 = S2 at rates in, icb, ic]

Figure 19.2 Boom and Bust [two panels: interest rate i against I, S; left panel shows I2 > S2 when icb, ic remain at in1 below in2; right panel shows S3 > I3 when icb, ic exceed in3]
An economy is in equilibrium when the natural rate of interest equals the central bank interest rate, that is, planned savings equal investment. In the view of Mises and Hayek, an economic upswing starts when positive expectations—for instance, due to an important innovation—raise the internal interest rate of investment, bringing about a rise in investment demand at given interest rates.6 In figure 19.1, this corresponds to a rightward shift of the investment curve from Ii1 to Ii2. The natural rate of interest rises along from in1 to in2. Credit demand in the economy rises. If the central bank increases the policy rate from icb1 to icb2, assuming a perfect interest-rate transmission to credit markets, planned saving and investment in the economy will stay in equilibrium (S2 = I2). If, however, the central bank does not raise the policy interest rate (in1 = icb1 = icb2 < in2), as shown in the left panel of figure 19.2, relatively low interest rates will give rise to an unsustainable overinvestment boom. Holding policy rates too low (for too long) will hereafter be referred to as monetary policy mistake type 1. To market participants, a rise in credit to the private sector at constant interest rates signals that the saving activity of households has increased. Additional investment projects
aim to satisfy the expected rise in future consumption. As planned household saving does not increase, an unsustainable disequilibrium between ex ante saving and investment (S2 < I2 at ic2 < in2) arises. In what follows, additional investments of some enterprises trigger additional investments of other enterprises (cumulative upward process). As soon as capacity limits are reached and unemployment is low, wages and prices rise. At first, rising prices signal additional profits and therefore trigger a further increase in investment. There may be spillovers to financial markets. Increases in expected profits of companies are typically associated with rising stock prices. Given relatively low interest rates on deposits, shares are an attractive investment class. When stock prices move upward, trend followers will provide extra momentum such that “the symptoms of prosperity themselves finally become . . . a factor of prosperity” (Schumpeter 1934, 226).7 Consumption is fueled by rising stock prices via the wealth channel, which leads, with a lag, to an increasing price level. The boom turns into bust when the central bank increases the central bank interest rate to slow down inflation (Mises 1912; Hayek 1929, 1931, and 1937). Then investment projects with an internal interest rate below the risen natural interest rate turn out to be unprofitable. The fall in investment of some firms will depress investment of other firms as expected returns fall. When stock prices (and other asset prices) burst, balance sheets of firms and banks worsen, bringing about further disinvestment (cumulative downward process). The investment curve shifts back from Ii2 to Ii3 (right panel of figure 19.2). The natural interest rate falls to in3. Wages fall, and unemployment rises. In this situation, the central bank should cut the central bank interest rate to contain the downward spiral.
Yet the central bank interest rate is kept too high. Figure 19.2 shows that when the policy interest rate is above the natural interest rate (icb3 = ic3 > in3), credit supply is restricted further, such that ex ante saving is higher than investment (S3 > I3). This aggravates the downturn beyond what would be necessary to remove the structural distortions that have been induced by type 1 monetary policy mistakes. Based on the monetary overinvestment theories, holding policy interest rates above the natural interest rate during the downturn is labeled a type 2 monetary policy mistake.
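As a purely illustrative aid (a toy model with hypothetical parameters, not part of the overinvestment theories themselves), the cumulative processes can be mimicked by letting the ex ante saving-investment imbalance widen each period in proportion to the gap between the natural rate and the policy rate: holding the policy rate below the natural rate (type 1 mistake) produces a growing overinvestment imbalance, while holding it above the natural rate (type 2 mistake) produces a deepening shortfall.

```python
def saving_investment_gap(policy_rate, natural_rate, periods=5, response=0.5):
    """Toy cumulative process in the spirit of the overinvestment framework.

    Each period, the ex ante imbalance I - S widens in proportion to the gap
    between the natural rate i_n and the policy rate i_cb.  Returns the path
    of the cumulative imbalance.  The response coefficient is hypothetical.
    """
    gap_path = []
    imbalance = 0.0
    for _ in range(periods):
        imbalance += response * (natural_rate - policy_rate)
        gap_path.append(round(imbalance, 6))
    return gap_path

# Policy rate at the natural rate: saving and investment stay balanced.
balanced = saving_investment_gap(policy_rate=3.0, natural_rate=3.0)
# Type 1 mistake (i_cb < i_n): cumulative overinvestment boom, I - S grows.
boom = saving_investment_gap(policy_rate=2.0, natural_rate=4.0)
# Type 2 mistake (i_cb > i_n): the imbalance turns negative (S > I), deepening the bust.
bust = saving_investment_gap(policy_rate=4.0, natural_rate=2.0)
```

The sketch captures only the sign and cumulative nature of the disequilibrium; it deliberately leaves out the production-structure and asset-price channels discussed in the text.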
19.2.2 Asymmetric Monetary Policy Crisis-Management Patterns

From the point of view of Mises (1912) and Hayek (1929), monetary policy crisis management is a failure if interest rates are held too high (above the new natural interest rate in3) during the crisis (which corresponds to a type 2 monetary policy mistake). In line with this notion, Friedman and Schwartz (1963) as well as Bernanke (1995) argued that the US Fed had kept monetary policy too tight during the early years of the 1930s world economic crisis, which they argue aggravated the crisis.8 Similarly, the Bank of Japan was blamed for having kept the interest rate too high in the early years after the bursting of the Japanese bubble economy in December 1989 (Bernanke 2000; Posen 2000). Given the lessons drawn by Bernanke (and others) from the world economic crisis, starting from the late 1980s in the United States, an asymmetric central bank
crisis-management pattern emerged. With crises more and more originating in financial markets and with inflation remaining at historically low levels, monetary policy was increasingly used as a tool of crisis management. When prices in some financial-market segments sharply declined, the Federal Reserve under chairmen Alan Greenspan and Ben Bernanke cut interest rates decisively to ensure financial-market stability by preventing type 2 monetary policy mistakes. In contrast, during the recoveries after the crises, the Federal Reserve and increasingly other central banks such as the Bank of Japan and the European Central Bank tended to keep interest rates low in order not to endanger the economic recovery. Because interest-rate cuts and central bank balance-sheet expansions were less and less reflected in consumer price inflation (see section 19.3.1), central banks could hold interest rates very low for a rather long time during periods of economic recovery without interfering with their inflation targets. This asymmetric policy approach was reflected in the “Jackson Hole consensus,” where US central bankers claimed that central banks do not have sufficient information to spot financial-market bubbles but should react swiftly in times of financial turmoil (Blinder and Reis 2005). As this approach to central bank crisis management seems to have caused new booms in financial markets (see section 19.3.2), central banks may have tended to transform monetary policy mistakes of type 2 into monetary policy mistakes of type 1. By cutting interest rates in the face of crisis below the natural interest rate and by keeping interest rates during the recovery below the natural interest rate, short-term nominal interest rates were depressed—in cycles—down to zero, as shown in figure 19.3. With the zero
Figure 19.3 G3 Short-Term Interest Rates [nominal and real, percent, 1980–2015]. Source: International Monetary Fund, via Datastream, 2016. G3 = United States, Japan, and Germany (up to 1998)/euro area; arithmetic averages.
Figure 19.4 G3 Central Bank Assets as Shares of Gross Domestic Product [percent of GDP, 1980–2015: Federal Reserve, Bank of Japan, European Central Bank]. Sources: European Central Bank and Eurostat, Bureau of Economic Analysis, Cabinet Office (Japan), International Monetary Fund.
interest rate being reached, unconventional monetary policies became mainly based on forward guidance that short-term and long-term interest rates would remain low. The continuing downward trend of long-term interest rates was ensured by the extensive government bond purchases of central banks, which have inflated, and continue to inflate, central bank balance sheets at growing speed (figure 19.4). The controversial discussions on the US Fed’s tapering and the long-delayed increase in interest rates by the Fed (for the first time after nine years, in 2015) and other large central banks signal that an exit from zero-interest-rate policies and balance-sheet expansion is a difficult endeavor.
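The asymmetry described in this section can be illustrated with a stylized calculation (all numbers hypothetical): if the central bank cuts by more in each crisis than it hikes in each recovery, the policy rate falls stepwise toward the zero lower bound, a mechanical ratchet resembling the pattern in figure 19.3.

```python
def asymmetric_policy_path(start_rate=8.0, cycles=6, crisis_cut=2.0, recovery_hike=1.0):
    """Stylized ratchet: in each boom-bust cycle the central bank cuts the
    policy rate by crisis_cut during the bust but raises it by only
    recovery_hike during the recovery, subject to the zero lower bound."""
    rate = start_rate
    path = [rate]
    for _ in range(cycles):
        rate = max(rate - crisis_cut, 0.0)  # crisis: cut decisively
        path.append(rate)
        rate += recovery_hike               # recovery: tighten only partially
        path.append(rate)
    return path

# With these hypothetical parameters the rate loses one percentage point
# per full cycle: 8 -> 6 -> 7 -> 5 -> 6 -> ... -> 1 -> 2.
path = asymmetric_policy_path()
```

The point of the sketch is only that the downward drift follows from the asymmetry itself, independent of the level of the natural rate; the sizes of cuts and hikes are illustrative, not estimates.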
19.3 Asymmetric Effects of Monetary Policy Crisis Management on Consumer and Asset Prices

As a tool of crisis management, monetary policy has had several benefits since the mid-1980s. Given that crises increasingly materialized as financial crises, central banks could respond fast when the potential collapse of systemic financial institutions threatened to trigger ostensibly uncontrollable chain reactions. The large and further growing scales of rescue measures could be easily financed via base money creation. Governments were (largely) spared from having to pass the rising costs of crisis management through parliaments. Whereas the fiscal costs of crisis management would have become visible in the form of growing government debt, tax hikes, or dire expenditure cuts, the costs of monetary crisis management are more difficult to trace and therefore more difficult to associate with policy mistakes (see section 19.4).
Figure 19.5 Inflation Rates in G3 Countries [percent, 1980–2015: US, euro area, Japan]. Sources: Thomson Reuters, Japanese Ministry of Internal Affairs and Communication, Bureau of Labor Statistics, Eurostat.
In particular, gradual interest-rate cuts toward zero and unprecedented central bank balance-sheet expansions (figures 19.3 and 19.4) went along with historically low consumer price inflation (figure 19.5), which kept monetary policy crisis management in line with the widely used inflation-targeting regimes. Instead, monetary policy crisis management became increasingly visible in asset markets, which central banks continued to regard as outside their responsibility.
19.3.1 Monetary Policy Crisis Management and Low Inflation

At the end of the 1970s, the notion that monetary policy could reduce unemployment via surprise inflation (Phillips curve effect) had proven to be misleading. Lucas (1976) showed that rational private agents adjust their behavior in a dynamic game situation to discretionary policy interventions. Central banks came to be regarded as able to generate short-term employment effects, but only at the cost of rising uncertainty and, in the long term, lower investment and growth. This paved the way for rules in monetary policy making, which were intended to insulate central banks from the short-term goals of policymakers, such as crisis management.9 Wicksell (1898) had pioneered the idea that the interest rate should be used to stabilize inflation. Friedman (1970) launched the idea of a rule for money supply growth, which he proposed to calculate based on macroeconomic fundamentals and financial factors, targeting a specific level or range of inflation.10 Kydland and Prescott (1977) designed
Crisis Management and Austrian Business Cycle Theory 559

for democracies easily institutionalized rules, which should be difficult to reverse. This set the stage for monetary policy rules, which were framed by operative, institutional, and financial central bank independence (Barro and Gordon 1983). The German Bundesbank created a corridor for money supply growth to achieve low inflation. After the link between money supply and inflation had proven unstable, inflation targeting combined with central bank independence became the widely accepted framework for controlling consumer price inflation (Taylor 1993). A generation of models was born in which a specific inflation target could be achieved by decisive central bank communication without targeting monetary aggregates (Woodford 2003).11 The coincidence of historically low inflation rates in most industrial countries with the introduction of inflation-targeting frameworks in a growing number of industrialized countries and emerging market economies was seen as proof of the success of inflation targeting. Bernanke (2004) popularized the notion of the Great Moderation: central banks seemed to be able to keep inflation low via clear communication of their inflation targets, while at the same time being able to use interest rates and base money supply as tools of crisis management. The US subprime crisis put a harsh end to the Great Moderation, revealing that the establishment of inflation-targeting frameworks in a large number of countries had triggered a new game situation: whereas private agents seemed to have adjusted their behavior to the establishment of the inflation-targeting rules to maximize their personal utilities, central bankers adhered to their increasingly ineffective targets. Goodhart (1975) argued that "any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes" (5).
In this context, the underlying statistical regularity can be seen as the positive relationship between the monetary base (money supply) and consumer price inflation, as assumed by the quantity equation. Assuming a stable negative relationship between the monetary base and the interest rate,12 the Taylor rule and other inflation targets are based on the quantity equation. This is most clearly formulated in the derivation of the 2 percent inflation target and the reference value for money supply growth of the European Central Bank.13 Indeed, the relationship between the money supply or the monetary base and consumer price inflation became increasingly hard to trace empirically (see Gertler and Hofmann 2016). Whereas central banks gradually cut interest rates toward zero and gradually expanded their balance sheets far beyond real GDP growth, inflation rates fell to historically low levels (figures 19.3, 19.4, 19.5). Figure 19.6 shows, based on the quantity equation, the collapse of the link between base money (as a proxy for monetary policy) and consumer price inflation. It shows rolling regressions of inflation on base money growth based on ten-year backward-looking windows.14 Sufficiently long time series are available for the United States and Japan. In the 1970s, the assumed positive relationship between base money growth and consumer price inflation is confirmed for both the United States and Japan. The respective coefficient is positive and statistically significant at conventional levels. During the 1980s, the positive relationship vanishes, and it is no longer detectable from the 1990s onward. By contrast, a (partially statistically significant) negative relationship seems to have emerged since the turn of the millennium. The more the Fed and the Bank of Japan expanded their balance sheets to push up inflation, the lower inflation was.
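The mechanics behind these rolling regressions can be sketched as follows. Under the quantity equation with roughly constant velocity, inflation approximately equals base money growth minus real growth, so inflation is regressed on that difference over a backward-looking ten-year window of quarterly observations. The snippet below is a minimal illustration on simulated data; the series, window length, and slope used here are hypothetical, whereas the chapter's own estimates use IMF quarterly data for the United States and Japan (and additionally report the p-values shown in the bottom panels of figure 19.6, which a full implementation would compute from the regression t-statistics).

```python
import numpy as np

rng = np.random.default_rng(0)
n, window = 160, 40  # 40 years of quarterly data; ten-year rolling window

# Hypothetical series (percent): base money growth minus real growth, and inflation.
money = rng.normal(5.0, 3.0, n)
inflation = 0.3 * money + rng.normal(0.0, 1.0, n)

coefs = []
for end in range(window, n + 1):
    x = money[end - window:end]
    y = inflation[end - window:end]
    # OLS slope of inflation on base money growth over the backward-looking window
    slope = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)
    coefs.append(slope)

print(f"rolling windows: {len(coefs)}, mean slope: {np.mean(coefs):.2f}")
```

Plotting `coefs` against the window end dates reproduces the kind of coefficient path shown in the top panels of figure 19.6; in the chapter's data, that path changes sign over the sample.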
Figure 19.6 Rolling Coefficients for the Effect of the Monetary Base on Inflation (left panel: United States; right panel: Japan; top graphs: rolling coefficient with 95% confidence interval; bottom graphs: p-values against the 5% level of significance). Notes: IMF quarterly data. Top graphs: coefficients of rolling regressions of inflation (backward-looking ten-year window) on the growth rate of the monetary base (minus real growth). Bottom graphs: p-values of the rolling regressions.
Assuming that monetary policy crisis management has become increasingly visible in asset markets (see section 19.3.2), the effect of monetary expansion on consumer price inflation is delayed, as it takes a detour via asset markets. If unconventional monetary policy causes a rise in asset prices (instead of consumer prices), wealth effects will still stimulate demand for consumer goods, as some people feel richer. However, this increase in consumer prices is delayed, which implies a growing time lag before the inflation target is hit following a monetary expansion. If redistributive effects cause a rise in demand mainly for luxury goods (see section 19.4.3), which are not included or are underrepresented in the predefined consumer baskets, then only substitution effects between the various product groups can result in more inflation. The monetary policy transmission toward higher inflation is delayed even further. The relationship between the monetary base and inflation can be postponed to such an extent that inflation will not rise noticeably until a considerable bubble has already built up in one or another segment of national or international asset markets. If the central
bank then lifts its interest rate in an effort to curb the looming inflation, the bubble will burst. The financial crisis then dampens inflation again. The now increasingly unchallenged insight that the effects of monetary policy have become increasingly visible in asset markets (see section 19.3.2) may be rooted in a structural change in the transmission of monetary policy. The financial sector, as a group of rational economic agents that plays a core role in the transmission of monetary policy, may, in the spirit of Lucas (1976), have changed its behavior in response to the introduction of the monetary policy rules. In the case of a monetary expansion, additional liquidity is provided by the central bank to a set of financial institutions, which decide as so-called primary dealers about the allocation of the additional funds.15 The traditional perception of monetary policy is that the additional liquidity is transferred, supported by lower interest rates, in the form of additional credit to households and enterprises. A growing credit volume is linked to increasing profits of commercial banks. Additional consumer credit leads to additional demand for consumer goods and, given full capacity utilization or rigid supply, to rising prices. If enterprises anticipate the growing demand for consumer goods, they will invest more to expand their production capacities. For this purpose, resources have to be shifted first from the consumption goods sector to the investment goods sector, which implies less supply of consumer goods and therefore further rising prices. If households expect growing inflation (and free capacities in the labor market are depleted), trade unions will bid for higher wages. Higher wages will trigger higher prices. Once inflation rises (or is anticipated to rise), the inflation-targeting framework will force the central bank to tighten the money supply to tame inflation.
The resulting increase in interest rates will force the commercial banks to tighten credit supply. Investment and consumption will slow down, which reduces demand in labor markets and therefore puts a ceiling on wage and price pressure. During the resulting downturn, the profits of financial institutions will decline as credit volumes shrink and some debtors default. If rational commercial banks anticipate the monetary tightening in the case of rising inflation, they may instead funnel the additional liquidity provided by the central bank into the financial sector, for instance, in the form of real estate financing and the financing of other asset purchases (e.g., stocks and raw materials). Then asset prices instead of consumer prices will increase (see the quote from Schumpeter 1934 in section 19.2.1).16 Because the consumer price indices used by central banks to measure inflation do not contain asset prices (such as real estate or stocks) and thereby reflect asset price increases only indirectly, consumer price inflation remains low. This effect is even larger when, as in Japan since the bursting of the bubble in December 1989, the domestic monetary expansion feeds credit growth abroad. Given an inflation-targeting framework, financial institutions profit more from funneling the credit flows into the financial sector (rather than into the enterprise and household sectors) for at least five reasons. First, given declining interest rates, the volumes of credit increase, which implies additional turnover. Second, the commission returns for intermediating financial transactions (including mergers and acquisitions) increase.
Third, as financial institutions tend to hold assets such as stocks and real estate, the book value of these assets increases. This, fourth, implies growing values of collateral for additional transactions. Fifth, if central bank crisis management can be anticipated, the asymmetric nature of monetary policy is equivalent to an insurance mechanism against losses on financial investment (and speculation). The upshot is that there is an incentive to circumvent the monetary policy rule as a control mechanism against undue credit growth, because the profits of financial institutions grow fast during the upswing. Even if the bursting of the bubble is anticipated, the incentive for a single rational financial agent not to participate in the boom is low. The personal income flows during the financial boom would be substantially smaller than those of financial agents participating in the boom. In the case of financial crisis, monetary policy crisis management can be expected to prevent asset prices from falling or to push up alternative investment classes to compensate for the losses.17 Therefore, even in the case of financial crisis, the income flows of prominent financial-market agents are likely to remain high (see section 19.4.3). Through this mechanism, an ultra-loose monetary policy can lead to falling, rather than increasing, inflation as measured by the usual consumer price indices for at least four reasons. First, in many cases, central bank interest-rate cuts and the expansion of the monetary base have been accompanied by excesses in real estate markets and usually go along with booms in the construction industry. The impact of fast-increasing real estate prices on consumer price indices remains limited. Although prices for new rentals (in particular in centers of economic activity) may rise, housing market regulations dampen any transmission from rising real estate prices to average home rental rates.
As rents represent housing in the consumer price baskets, the transmission of fast-growing house prices to consumer price baskets is weak. In the long term, the creation of additional capacities can dampen home rental rates after the bursting of the bubbles. Second, low interest rates mean lower costs of credit. In the case of boom phases in the stock markets, they also create favorable conditions for raising capital via stock issuance. This contributes to lower costs for enterprises, in particular large enterprises, which can issue stocks. Bearing in mind the significant increase in global competition following the entry of China and many central and eastern European countries into the world economy, the declining financing costs are likely to have contributed to lower prices in the product markets. Third, financial institutions can use the additional liquidity to purchase government bonds, meaning that government spending can continue to grow. A shift in demand from the private to the public sector, and therefore price increases for goods consumed by the public sector, is not reflected in the established consumer price indices. Fourth, the distributional effects of boom-and-crisis cycles in the financial markets indirectly bring about income repression for major parts of the population (see section 19.4.3). This dampens consumption among those sections of the population whose consumption habits are modeled in the consumer price indices of central banks.
19.3.2 Crisis Management and Financial-Market Exuberance

The gradual monetary expansion of central banks in the course of crisis management has come along with a gradual growth of financial markets, as the asymmetric monetary expansion provided an incentive to expand debt at lower interest rates.18 In the United States, the size of the financial and real estate sectors (which are strongly intertwined) as a share of the total economy increased from 11 percent in 1980 to 18 percent by 2015 (figure 19.7).19 Together with the size of financial markets, volatility also seems to have increased. According to Mises (1949), "[t]he wavelike movement affecting the economic system, the recurrence of periods of boom which are followed by periods of depression is the unavoidable outcome of the attempts, repeated again and again, to lower the gross market rate of interest by means of credit expansion" (572). In liberalized and globalized financial markets, the effects of domestic monetary policy crisis management can emerge in any other country or any segment of globalized financial, real estate, and raw material markets (Hoffmann and Schnabl 2016b). The growth of financial markets went along with a wave of financial market deregulation, which spread across all industrialized and many emerging market economies. The growth of financial markets can be seen as either the outcome or the motivation of financial deregulation. For instance, Dell'Ariccia et al. (2012) see financial deregulation as a precondition for the growth of financial markets (and possible financial crisis). Assuming a gradual monetary expansion to be exogenous (as in Hayek 1931), the gradual interest-rate cuts can be seen as the prerequisite for credit creation and investment in financial markets. If gradual interest-rate cuts create additional profit opportunities in new market segments, financial institutions have an incentive to use parts of the expected profits to lobby for financial deregulation.
This financial-market deregulation
Figure 19.7 US Financial and Real Estate Sector As Share of Total Economy (percent, 1980–2014; series: Industry; Finance, insurance, real estate, rental and leasing). Source: US Bureau of Economic Analysis.
could take the form of liberalizing new market segments in domestic financial markets or opening up whole countries (such as China) to international financial flows. With the benefit of hindsight, a wave of wandering bubbles (see Hoffmann and Schnabl 2008) can be identified. In figure 19.8, the wave of wandering bubbles starts in Japan in the mid-1980s, when Japan was forced to let the yen appreciate to cure the US-Japanese trade imbalance (September 1985 Plaza Agreement). As the strong yen appreciation, by about 50 percent against the dollar, pushed the export-dependent Japanese economy into a deep recession, the Bank of Japan cut interest rates from 8 percent (September 1985) to a then-historical low of 3.5 percent (September 1987). While the eased financing conditions facilitated investment to reduce the production costs of the export enterprises, the cheap liquidity also nurtured an unprecedented speculation boom in stock and real estate markets, which peaked in December 1989 (see Nikkei in figure 19.8). As in the overinvestment theories, the Bank of Japan pricked the bubble by increasing interest rates when it started to regard the developments in stock and real estate markets as unsustainable. Whereas monetary policy was kept tight in the early phase of the postbubble recession (as assumed by the monetary overinvestment theories), the advent of a lasting postbubble recession finally led the Bank of Japan to ease monetary conditions again by cutting interest rates gradually toward zero. Japanese money-market interest rates reached 0.5 percent in 1995 and remained close to zero after March 1999. But instead of expanding credit to the domestic economy, the ailing Japanese banks provided credit to banks and enterprises active in a set of Southeast Asian economies.
Figure 19.8 Wandering Bubbles (Nikkei, Bursa Malaysia, and Case-Shiller indices on the left scale, 1985:01 = 100, 100, and 120, respectively; NASDAQ and Dow Jones on the right scale, Dow Jones 1985:01 = 300; 1985–2015). Sources: Nikkei, Reuters, NASDAQ Stock Market, Standard & Poor's, Dow Jones. Based on Hoffmann and Schnabl 2008.
Stock and real estate bubbles in the Southeast Asian "miracle" economies emerged, as represented by the Malaysian stock market in figure 19.8. In June 1997, the speculation boom in Southeast Asia finally turned sour, as international capital markets had lost confidence in the regional boom. During the Asian crisis, which was accompanied by the Japanese financial crisis (because Japanese banks and export enterprises were strongly engaged in the region), capital flows returned to the safe haven of the industrialized countries. With the central banks of the large industrialized countries having responded to the Asian crisis and the Japanese financial crisis (as well as to buoyant capital inflows) with interest-rate cuts, bubbles in the dot-com market emerged (see NASDAQ in figure 19.8). The bursting of the dot-com bubbles in December 2000 triggered new interest-rate cuts in the large industrialized countries, which became the breeding ground for a set of new, ever larger bubbles. Figure 19.8 shows the Case-Shiller index as a proxy for the US subprime boom. Beyond the United States, stock and real estate bubbles emerged in European periphery countries (including Greece, Ireland, Portugal, Spain, the Baltics, Iceland, etc.), accompanied by private and public spending booms.20 Given the low interest level in the United States, China was also faced with speculative capital inflows, which significantly contributed to a build-up of overcapacities in the Chinese (export) industry and the real estate sector (Schnabl 2017). The Chinese boom, which triggered growing Chinese demand for raw materials, was accompanied by surging prices for oil, gas, and raw materials. The outbreak of the US subprime crisis was followed by crises in the periphery countries of the euro area and other European periphery countries.
The subsequent monetary policy crisis management of the Fed and the European Central Bank helped to reverse the dips in the global raw material markets and in China, until confidence in the Chinese economic miracle started to fade beginning in 2014.21 Since then, as the overcapacities in China have become visible, the bubble in the raw material markets has burst, and China's growth rates have been in continuous decline. Increasing amounts of capital are invested in the US stock markets (see NASDAQ and Dow Jones in figure 19.8) and the stock markets of some northern European countries, particularly Germany. In short, the new bubbles seem to emerge in the industrial sectors of the industrialized countries, as previous bubbles have undermined confidence in the financial institutions of the industrial countries and in the industrial capacities of the emerging market economies (particularly China).
19.4. Growth and Distribution Effects of Monetary Policy Crisis Management

The upshot is that from a global perspective, the monetary policy crisis management of today is the breeding ground for the overinvestment and speculation booms of tomorrow and the financial crises of the day after tomorrow. Whereas in the face of crisis, monetary policy crisis management helps to stabilize financial markets in the
short term, it has a growing number and scale of unintended side effects in the form of quasi-unpredictable bubbles in some, more or less random, segments of international financial markets. Once bubbles have burst, in many cases a lasting stagnation is observed, which can be attributed to two main reasons. First, as argued by Koo (2003), balance-sheet recession puts a drag on growth. Declining stock prices, real estate prices, and consumer prices increase the real value of debt. As the value of credit collateral declines, households and enterprises are forced to curtail consumption and investment to reduce their debt burden. Such balance-sheet recessions are also argued to be longer and deeper than usual recessions because the financial sector is strongly damaged (Reinhart and Reinhart 2010; Dell'Ariccia et al. 2012). They are associated with permanent output losses due to the precrisis overestimation of potential output, the misallocation of capital stock and labor during the upswing, and the oppressive effects of debt and capital overhangs during the downswing. The implicit consequence is that growth will recover once the adjustment process to real debt is finished. Focusing on the notion of misallocation of capital, sections 19.4.1 and 19.4.2 are extensions of the theories of Mises (1912) and Hayek (1929), fitted to explain lasting stagnation not only as the heritage of precrisis exuberance (as in Reinhart and Reinhart 2010 and Dell'Ariccia et al. 2012) but also as a consequence of the monetary policy crisis therapy (in contrast to Dell'Ariccia et al. 2012).
They are developed in the spirit of Hayek's (1931) notion that if "voluntary decisions of individuals are distorted by the creation of artificial demand, it must mean that part of the available resources is again led into a wrong direction and a definite and lasting adjustment is again postponed" (98). Section 19.4.3 provides insights into possible redistribution channels of the unprecedented monetary expansion in the industrialized world (with their repercussions on consumer price indices and thereby inflation targeting). Section 19.4 has to be seen as a starting point for more research on the unintended side effects of monetary policy crisis management.
19.4.1 Declining Productivity Gains

The most important argument in favor of bailouts of financial institutions during crisis is the potential credit crunch (Koo 2003). When asset price bubbles burst, bad loans in the banking sector increase, as debtors are rendered overindebted. The resulting shrinkage of equity forces banks to tighten credit to the private sector. As the resulting credit crunch threatens to extend the crisis to the enterprise sector (with a corresponding rise in unemployment), interest-rate cuts and low-interest (or no-cost) liquidity provision by the central bank are intended to stabilize both the financial sector and the enterprise sector (Posen 2000). As the bad-loan problem in the financial sector is contained, contagion effects on financial institutions and enterprises are prevented. However, a highly expansionary monetary policy during crisis, which is not followed by a symmetric monetary tightening, leads in the long term to a quasi-nationalization of money and credit markets. In money markets, given growing distrust among banks during crisis, interbank lending of commercial banks is substituted for by borrowing from and depositing at the central bank. The zero-interest-rate policy perpetuates this situation, because profit
margins in money markets are compressed (McKinnon 2012). Banks with excess liquidity no longer have any incentive to supply overnight credit. Even if banks requiring liquidity were to offer higher interest rates to create a supply, the loan is unlikely to be granted, because offering high interest rates signals higher risk.22 As a result, private supply in the money markets is substituted for by the central bank. Banks with excess liquidity invest with the central bank. In the credit markets, the zero-interest-rate policy contributes to a credit crunch, because the low- or zero-interest-rate policy is equivalent to a subsidy for the enterprise sector, which under normal circumstances is an aggregate demander in the lending market. Especially for large companies that can issue their own securities and stocks, financing costs drop. The declining costs of obtaining capital give rise to additional profits for enterprises, particularly large ones, which become visible in the form of increasing corporate savings, as shown in the lower panel of figure 19.9.23 The demand for loans
Figure 19.9 G3 Household and Enterprise Savings (As a Share of GDP) (percent of GDP, 1980–2016; top panel: household sector; bottom panel: corporate sector; series: Germany, Japan, USA). Notes: Household savings are defined as total disposable income of households less total final consumption expenditure. Enterprise savings are defined as the net acquisition of financial assets less the incurrence of liabilities by nonfinancial corporations. Sources: OECD, Japan Cabinet Office, US Federal Reserve.
declines, and outstanding credit is repaid. Increasingly, shares are bought back, because alternative investment categories (bank deposits, government bonds) yield little due to the asymmetric monetary policy crisis management.24 If the larger, less risky companies withdraw from the loan portfolios of commercial banks, the average risk in the banks' loan portfolios increases. In this case, loans to comparatively high-risk small and medium-sized enterprises have to be restricted. In particular, new and risky investment projects by small and medium-sized enterprises will tend to remain unfinanced. If the banks are more strictly regulated as a result of financial crisis, they need to accumulate more equity, which is an additional incentive to restrict lending to higher-risk companies. As lending to the government is widely acknowledged to be riskless (because of the respective Basel III regulations and because of asymmetric monetary policy crisis management), lending to the private sector tends to be substituted for by lending to the public sector.25 In contrast, credit granted to enterprises and households before the crisis has a high probability of being extended despite low creditworthiness. To remain in the market, banks in trouble tend to disguise their bad-loan problem by overlooking the precarious business situations of enterprises. For Japan, where the (close to) zero-interest-rate period has persisted the longest, Sekine, Kobayashi, and Saita (2003) find forbearance lending: banks continue to provide irrecoverable loans, thus keeping themselves and (potentially) insolvent companies alive. Peek and Rosengren (2005) associate Japan's central bank crisis management with a misallocation of capital in the credit sector, which keeps companies with poor profit prospects alive ("evergreening"). This implies a structural change in the nature of the banking business.
Traditional banking involves accepting deposits with a positive rate of return and lending that capital, in the form of loans, to businesses and households at higher interest rates. Banks fulfill an intermediary function, making an assessment of the future returns on investments. Projects with expected returns above the prevailing interest rate are financed at that interest rate. Projects with lower expected returns (with a high probability of default) are rejected. By having accumulated the know-how to make this assessment properly, the banking sector has traditionally played a crucial role in the transmission of the allocation function of the interest rate. The banking sector separates investment projects with higher marginal productivity from those with lower marginal productivity. If, however, the banking system is no longer subject to strict budget constraints due to quasi-unlimited central bank liquidity provision in the course of monetary policy crisis management, the allocation function of the interest rate is undermined. Kornai (1986) wrote of "soft budget constraints" in the case of the central and eastern European planned economies: since unemployment was politically undesirable, unprofitable companies were kept alive by covering their losses via the state-controlled banking systems. Qian and Xu (1998) showed for China that such soft budget constraints made it harder to separate profitable from unprofitable projects, as the selection mechanism of the market was undermined. Caballero, Hoshi, and Kashyap (2008) show for Japanese companies that under zero-interest-rate policies, profits became dependent on cheap
Figure 19.10 G3 Productivity Increases (percent, 1973–2013; Japan, Germany, USA). Source: OECD.
central bank liquidity provision. Thus, while the monetary policy crisis management successfully helped to keep unemployment low, it was globally accompanied by declining average productivity increases of firms (figure 19.10). Similar developments seem to be taking place in other industrialized countries, particularly since the advent of zero-interest-rate policies (see figure 19.10 for the United States and Germany). Barnett et al. (2014) demonstrate that since 2007, the United Kingdom has experienced a significant drop in productivity growth among businesses. Cardarelli and Lusinyan (2015) show for the United States that total-factor productivity has dropped significantly since the turn of the millennium. Gopinath et al. (2015) provide corresponding empirical evidence for the southern European countries since the outbreak of the European debt and financial crisis. The upshot is that monetary policy crisis management delays or prevents the structural adjustment process during crisis, as stressed by Schumpeter (1934): the cleansing process, which is regarded in the monetary overinvestment theories as a prerequisite for a sustained recovery, is postponed. Investments that would have to be dismantled at Wicksell's (1898) and Hayek's natural rate of interest continue to be financed, which implies that the average efficiency of investment gradually declines below Hayek's and Wicksell's natural interest rate.26 This interpretation is in line with Borio (2014), who identifies capital overhang as a major determinant of postbubble crisis.
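The screening logic behind this argument can be illustrated with a toy calculation (all numbers hypothetical, not taken from the chapter): banks finance only projects whose expected return clears the prevailing interest rate, so a policy rate held below the natural rate keeps low-return projects financed and drags down the average efficiency of the financed capital stock.

```python
# Illustrative sketch with hypothetical numbers: the allocation function of
# the interest rate as a hurdle for project finance.
returns = [0.08, 0.06, 0.05, 0.03, 0.02, 0.01]  # expected project returns

def financed(projects, hurdle_rate):
    """Banks finance only projects whose expected return clears the hurdle."""
    return [r for r in projects if r > hurdle_rate]

natural, zero_policy = 0.04, 0.005  # hypothetical natural vs. near-zero policy rate
avg = lambda xs: sum(xs) / len(xs)

high = financed(returns, natural)       # [0.08, 0.06, 0.05]
low = financed(returns, zero_policy)    # all six projects pass the hurdle
print(f"avg return at natural rate: {avg(high):.3f}")
print(f"avg return near zero rate:  {avg(low):.3f}")
```

The point of the sketch is only directional: lowering the hurdle cannot raise, and here strictly lowers, the average expected return of the financed projects, which is the "declining average efficiency of investment" described in the text.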
19.4.2 Impact on Investment and Growth

Given gradually declining productivity, asymmetric monetary policy crisis management can affect investment in fixed assets negatively for several reasons. First, resources
are bound in investment projects with low productivity, whereas new investment is discouraged by the ailing banking sector (see section 19.4.1). Second, because the ample liquidity provision in the course of monetary policy crisis management prevents asset prices from falling further (or drives prices upward in alternative asset classes), monetary policy crisis management constitutes an implicit insurance mechanism for investment and/or speculation in financial markets. In contrast, enterprises have to bear the risk of investment in new products or production processes without any public insurance mechanism. Therefore, enterprises have incentives to substitute financial investment, including repurchases of their own stocks, for investment in real investment projects (see section 19.4.3).27 Third, if the financial crisis is transformed into a sustained crisis in which there is no limit to the central bank’s government bond purchases, the likelihood increases that private investments are replaced by public investments and/or government consumption.28 For instance, after the Japanese bubble had burst in December 1989, numerous Keynesian economic stimulus programs were implemented (Yoshino and Mizoguchi 2010). Figure 19.11 shows that gross investment in Japan as a share of GDP declined from 32 percent after the bursting of the bubble in 1990 to 20 percent in 2014, whereas government spending as a share of GDP rose from 13 percent to 19 percent. Assuming that public investments have a lower marginal efficiency than private investments, the average efficiency of investments has further decreased. In neoclassical growth theory, growth is explained by the accumulation of capital toward a long-term equilibrium between investment and depreciation (the steady-state economy). The steady state is based on the assumption of a declining marginal efficiency of capital when the stock of capital increases (Solow 1956; Swan 1956).
Only through innovation and technological progress, which can be interpreted as increasing
Figure 19.11 Structure of Japan’s GDP. Percent of GDP, 1980–2015; private consumption, gross capital formation, government consumption, net exports. Source: Japan Cabinet Office.
productivity, can growth be positive in the long term (Solow 1957). In this framework, an asymmetric monetary policy crisis management negatively affects growth if it has a negative impact on innovation and productivity gains. Leibenstein (1966) regards incentives and motivation as major determinants of a concept of efficiency that goes beyond allocative efficiency (which assumes constant production costs in different types of markets, such as polypolies in contrast to monopolies): X-(in)efficiency. Businesses do not realize all possible efficiency gains when competition is limited. Such X-inefficiency can arise when asymmetric monetary policy crisis management results in the creation and cementing of structural distortions.29 In the course of asymmetric monetary policy crisis management, liquidity and loans are provided increasingly independently of efficiency criteria, causing the average productivity of zombie firms supported by zombie banks to decline. Loan provision to new, dynamic enterprises becomes more restrictive. A reduced pace of innovation, which according to Hayek (1968) is triggered by lower levels of competition, has a negative impact on productivity gains. By tying resources to sectors with low or negative productivity gains, in the context of the Solow-Swan model, a negative allocative effect is created, which manifests itself in declining average productivity (defined as output per unit of labor). From the perspective of companies, average costs will rise ceteris paribus. At the macroeconomic level, a smaller amount of goods and services is produced with a constant amount of labor. Since declining output also entails a decrease in savings per worker, an additional negative growth effect results, because households make less saving available for investment.
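The Solow-Swan logic invoked here can be made concrete with a minimal simulation. This is a sketch under assumed parameter values, not a calibration from the chapter:

```python
# Minimal sketch of Solow-Swan capital accumulation per worker:
#   k_{t+1} = k_t + s * f(k_t) - (n + delta) * k_t,  with f(k) = A * k**alpha.
# All parameter values (s, n, delta, alpha, A) are illustrative assumptions.
def simulate_solow(s=0.25, n=0.01, delta=0.05, alpha=0.33, A=1.0,
                   k0=1.0, periods=1000):
    k = k0
    for _ in range(periods):
        k = k + s * A * k ** alpha - (n + delta) * k
    return k

# Closed-form steady state for comparison: k* = (s*A/(n+delta))**(1/(1-alpha)).
k_star = (0.25 * 1.0 / 0.06) ** (1 / (1 - 0.33))
print(round(simulate_solow(), 4), round(k_star, 4))

# A lower productivity parameter A yields a lower steady-state capital stock,
# and hence lower output per worker, as the text argues.
print(simulate_solow(A=0.9) < simulate_solow(A=1.0))
```

The simulation converges to the analytical steady state, and reducing the productivity parameter shifts the steady state down, which is the channel through which declining productivity growth translates into lower long-run output in this framework.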
A further determinant favoring lower growth is declining household savings, as was shown in figure 19.9, and, coupled with this, declining investment, which results from reduced incentives for people to save. The transmission channel from monetary policy crisis management toward reduced savings activity is financial repression, which drives down returns on low-risk investments. Real household savings fall, meaning that real investments also fall, and in turn production opportunities grow less.30 Once depreciation exceeds gross investment, the result is a downward spiral of declining returns on capital, households saving less, declining investments, and declining output. As seen in figure 19.12, real growth rates in the large industrialized countries are gradually falling together with investment as a share of GDP.
19.4.3 Redistributive Effects via Financial and Real Wage Repression

Asymmetric monetary policy crisis management has redistribution effects, which work through a large number of diverse transmission channels (see Hoffmann and Schnabl 2016a). Cantillon (1931) stressed the redistribution effects of monetary expansion in favor of the banking system relative to other parts of the economy (the Cantillon effect).31
Figure 19.12 G3 Investment as Share of GDP and Real Growth. Investment as percentage of GDP (left scale) and GDP growth in percent (right scale), 1980–2015. Sources: OECD, Japan Cabinet Office, BEA, Eurostat. Three-country arithmetic averages.

Figure 19.13 Wage Development in the US Financial and Manufacturing Sectors. Percent, 1965–2015; financial industry and manufacturing sector. Sources: IMF, BLS.
Given a monetary expansion by the central bank, commercial banks not only benefit from accelerating credit growth; they can also acquire stocks, real estate, securities, and so on, at still constant prices. By the time the sellers of these assets use or pass on the received funds for new purchases in these asset classes, real estate, stock, and security prices have already increased. This implies a redistribution effect in favor of banks and at the cost of agents who are more remote from the central bank’s liquidity provision. Such redistribution effects in favor of the financial sector are, for instance, visible in the United States, the country with the largest and most developed financial markets. As shown in figure 19.13, until the mid-1980s, the wages of industrial-sector
workers grew faster than those in the financial sector. Since the mid-1980s—with the start of the asymmetric monetary policy crisis management—financial-sector employees have benefited from higher wage increases. This was even the case in financial-market crises, during which industrial workers’ wages declined more than those in the financial sector. Financial-sector executives may benefit more than other employees from windfall profits during speculative upswings, because one-off dividends (bonuses) are more common at this level. An important transmission channel from asymmetric monetary policy crisis management to diverging wealth and income is via its impact on asset prices, because assets such as stocks and real estate are unevenly distributed among different wealth and income groups. If an asymmetric monetary policy is geared toward pushing up asset prices, it redistributes in favor of high-income groups, which tend to hold these asset classes disproportionately. This is suggested by figure 19.14, where the left axis shows price trends on the US and Japanese stock markets (NYSE and Nikkei), while the right axis plots the share of the top 1 percent of incomes as a proportion of total incomes in the United States (including income from capital). The rise of the top 1 percent income share goes along with the rise of Japanese stock prices during the 1980s. Since 1987, when Greenspan took office as chairman of the Federal Reserve and initiated a monetary policy aimed primarily at stabilizing the financial markets, the share of the top 1 percent of incomes in the United States has risen from around 13 percent to almost 22 percent of total income, together with US stock prices.
Figure 19.14 Stock Prices (US, Japan) and US Income Distribution. NYSE and Nikkei share indexes (left scale) and top 1 percent income share, United States, in percent (right scale), 1957–2017. Note: NYSE index: 2010 = 100; Nikkei: 2010 = 50. Sources: OECD, Main Economic Indicators, World Top Incomes Database.
Figure 19.15 Interest Rates of Low-Risk Assets in G3 Countries. Percent, 1980–2015; 10-year government bond rate and deposit rate. Sources: Thomson Reuters, IMF, German Bundesbank. Arithmetic averages.
In contrast, returns on riskless assets, which are mainly held by middle- and lower-income groups, are depressed by asymmetric monetary policy crisis management.32 Among other things, the returns on low-risk investments such as fixed-income savings and government bonds are lowered toward zero nominally and—given moderate inflation—into negative territory in real terms. Figure 19.15 shows the gradual decline of returns on low-risk investment classes such as government bonds and bank deposits in the large industrialized countries. This financial repression also changes the traditional banking business, as lending-deposit spreads are compressed and commercial banks are charged negative interest rates on their deposits at the central bank (Japan, euro area, Switzerland, etc.).33 In addition to financial repression, real wage repression can occur when postbubble crisis undermines the bargaining power of employees in the public and the private sector, while at the same time monetary policy crisis management leads to declining productivity gains, which are the basis for real wage increases. Financial crises (and growing public expenditure obligations triggered by exuberant boom phases in the financial markets) drive public debt upward.34 During the course of the crisis, tax revenues structurally decline, while payment obligations cannot be reduced at the same speed. To curtail expenditure, the public sector is forced to cut wages. The signaling effects of public wage agreements as well as gloomy business expectations cause public austerity to be followed by wage moderation in the private sector. Wages are driven down, especially in those segments of the labor market where qualifications and bargaining power are low. Such nominal wage repression has been observed in most countries suffering from financial crises.
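The real-return arithmetic behind this financial repression can be sketched with the exact Fisher relation. The nominal deposit and inflation rates below are illustrative assumptions, not figures from the chapter:

```python
def real_return(nominal, inflation):
    """Exact Fisher relation: 1 + r = (1 + i) / (1 + pi), rates as decimals."""
    return (1 + nominal) / (1 + inflation) - 1

# A deposit paying 0.1 percent nominally under 1.5 percent inflation
# loses purchasing power every year:
print(f"{real_return(0.001, 0.015):.2%}")

# At a 4 percent nominal rate the same saver would earn a positive real return:
print(f"{real_return(0.04, 0.015):.2%}")
```

Even modest inflation thus suffices to turn near-zero nominal deposit rates into persistently negative real returns for holders of low-risk assets.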
Usually, young employees who enter the labor markets will be more exposed to nominal wage repression than older cohorts, whose contracts remain fixed. As shown in figure 19.16, in Japan the average real wage level has fallen steadily since the Japanese
Figure 19.16 Real Wage and Factor Income in Japan. Index: 1985 = 100, 1985–2013; wages and salaries, and property income (interest, dividends, insurances, rents). Sources: Economic and Social Research Institute, Japan Cabinet Office.
financial market crisis of 1998. Given largely stable prices, the decline in real wages is mainly achieved via declining average nominal wages. In Europe, too, nominal wage repression, particularly for the younger generation, has increasingly become a reality since the outbreak of the European financial and debt crisis.35 Real wage repression cum financial repression can in turn be seen as an important determinant of the weak private demand of large proportions of the population. If this is anticipated by enterprises, investment will remain sluggish.36 In a nutshell, the negative redistributive and real wage effects, which were traditionally widely ascribed to consumer price inflation, are now achieved indirectly—that is, without consumer price inflation—via boom-and-crisis cycles in the financial markets.
19.5 Persistence of Low-Interest-Rate Policies and Policy Implications

Asymmetric monetary policy crisis management has self-reinforcing effects if it encourages—directly and indirectly—an increase in national debt, as observed in most industrialized countries. The higher the level of government debt, the higher the pressure on central banks to keep key interest rates low and to continue purchases of government bonds via unconventional monetary policies. Otherwise, the interest-rate burden of overindebted governments would become unsustainable. The governments would be forced into politically unpopular structural reforms and spending cuts.
Furthermore, in the course of monetary policy crisis management, central banks tend to hold increasingly (potentially) bad assets in their balance sheets. As any monetary tightening would reduce the expected value of these assets, central banks would lose equity and would therefore require recapitalization in the long term. As this would further undermine their independence, there is an incentive to continue asymmetric monetary policy crisis management. Given the widely used inflation targets, the persistence of unconventional monetary policies is technically possible, because monetary policy crisis management based on ultra-expansionary monetary policy has—due to Goodhart’s law—no or even a negative impact on consumer price inflation. As described by Hayek (1929; 1937; 1944), the Great Depression and the following stagnation of the 1930s were the outcome of monetary policies that were too loose, resulting in undue credit growth and intervention spirals in response to the crisis. In this chapter, it has been shown that the current “secular stagnation” can be seen as the outcome—and not the origin—of crisis management that is based on excessive low-cost liquidity provision by central banks. To reverse the vicious circle of monetary policy crisis management and declining growth, a timely exit from the ultraloose monetary policy is necessary to reconstitute the allocation and signaling function of the interest rate as well as the principle of liability in financial markets. Despite negative growth effects in the short term, as nonproductive investment has to be dismantled, slowly but irrevocably raising interest rates could restore growth. The incentives for financial-market speculation would be reduced, as risk would again be priced by market forces. The resulting cleansing process in financial markets would set free capital and labor for real investment that was previously bound in sectors with low productivity.
The increasing interest rates would provide an incentive for more household saving to finance growing investment. The marginal and average efficiency of investments would increase again. Aggregate saving and investment as well as innovation would be strengthened. Growing debt-servicing costs would force governments to consolidate their spending by pushing forward structural reforms. Parts of public economic activity would have to be privatized, which would contribute to an increasing average productivity of previously public investment. By substituting private investment for public consumption and investment, the average efficiency of investment would increase. Fiscal consolidation combined with rising interest rates would encourage banks to restore their traditional business model. The banking sector would return to its task of financing the investment projects with the highest expected returns (instead of buying government bonds). This would put pressure on enterprises to come up with innovative investment projects. A higher degree of X-efficiency would be reached. Productivity gains would allow real wages to grow again. This would be even more the case for the middle- and low-income classes if the adverse redistribution effects of boom and bust in financial markets were eliminated. A growing purchasing power of broad parts of the middle- and lower-income classes would help to fully use the newly created capacities. Growing income levels would contribute to higher tax revenues for the state, which could be used to reduce public debt. The political pressure toward
regulation, price and rent controls, redistribution of wealth, and so on, would be eased. A higher degree of economic freedom would help to create sustainable growth and to secure welfare for all parts of society. This would all help to contain political polarization, which has been favored by increasing income inequalities.
Notes

1. The monetary overinvestment theories of Mises (1912) and Hayek (1929 and 1931) fulfill the requirements for a theory of financial cycles as formulated by Borio (2014, 186–187) as follows: (1) The financial boom not only precedes the bust, but it also causes the bust. (2) There is a capital overhang (i.e., overinvestment), which is related to debt overhang (unsustainable credit growth). (3) There is a notion of a sustainable level of output, with the deviation of realized output from sustainable output being related to monetary conditions.
2. Therefore, the theory of Wicksell can be seen as the foundation of inflation-targeting frameworks. If the central bank successfully adjusts the central bank interest rate to the natural interest rate, the inflation rate will be zero.
3. “In other words, the real cause of the rise in prices is to be looked for, not in the expansion of the note issue as such, but in the provision by the bank of easier credit, which is itself the cause of the expansion” (Wicksell 1936, 87).
4. “It follows obviously that unless the fall in the rate of discount is neutralised by simultaneous changes elsewhere, it must, when it has persisted long enough to exert a depressing influence on long-term rates of interest, provide a stimulus to trade and production, and alter the relation between supply and demand of goods and productive services in such a way as necessarily to bring about a rise in all prices” (Wicksell 1936, 89). “Now let us suppose that the banks and other lenders of money lend at a different rate of interest, either lower or higher, from that which corresponds to the current value of the natural rate of interest on capital. The economic equilibrium of the system is ipso facto disturbed. If prices remain unchanged, entrepreneurs will in the first instance obtain a surplus profit (at the cost of the capitalists) over and above their real entrepreneur profit or wage.
This will continue to accrue so long as the rate of interest remains in the same relative position. They will inevitably be induced to extend their businesses in order to exploit to the maximum extent the favourable turn of events. And the number of people becoming entrepreneurs will be abnormally increased. As a consequence, the demand for services, raw materials, and goods in general will be increased, and the prices of commodities must rise” (Wicksell 1936, 105).
5. Mises (1912) and Hayek (1929) used the word Überinvestition (overinvestment). In the English literature on Austrian business cycle theory, the term malinvestment is more common.
6. Wicksell saw innovation, that is, a real factor, as the main trigger of business cycles: “It is in the nature of things that new, great discoveries and inventions must occur sporadically, and that the resulting increase in output cannot take the form of an evenly growing stream like population growth and the increase in consumption demand. . . . For my part, until I am shown something better, it is in this that I discern the real source of economic fluctuations and crises, which, in their present form, belong entirely to modern times” (1936, 67). Nevertheless, Wicksell wrote, monetary elements also can be found: “But above all, new inventions, of whatever kind they be, almost always require for their
materialization a great deal of preparatory labour, creation of new facilities, etc., in a word, they require capital. . . . But a crisis can hardly be avoided if new ventures have been started on a scale that gradually makes the available capital insufficient. And this crisis is reinforced by the psychological factors which exert an influence on the money and credit market in such circumstances—the excessive lack of confidence which replaces the all too widespread reliance on willingness to extend credit, and so on” (1936, 232).
7. Borio (2014, 183–186) discusses the link between real and financial cycles. He does find a direct link.
8. “Let me end my talk by abusing slightly my status as an official representative of the Federal Reserve. I would like to say to Milton and Anna: Regarding the Great Depression. You’re right, we did it. We’re very sorry. But thanks to you, we won’t do it again” (Bernanke 2002). “In their classic study of U.S. monetary history, Friedman and Schwartz (1963) presented a monetarist interpretation of these observations, arguing that the main lines of causation ran from monetary contraction—the result of poor policymaking and continuing crisis in the banking system—to declining prices and output” (Bernanke 1995, 3).
9. New Zealand was the first country to introduce an inflation-targeting framework, in 1989.
10. Friedman (1970) argued, “Inflation is always and everywhere a monetary phenomenon in the sense that it is and can be produced only by a more rapid increase in the quantity of money than in output” (24).
11. Monetary policy according to Woodford (2003) is similar to that in Wicksell (1898) in that interest rates are used to control inflation.
Woodford calls his models “neo-Wicksellian.” However, one considerable difference is that the model framework of Woodford does not require monetary aggregates, whereas according to Wicksell, these play an important role in the transmission of changes in interest rates to inflation via credit creation.
12. Central banks did not usually change the monetary base directly to influence inflation, as is often assumed in textbooks. Rather, they used the interest rate as an intermediate target of monetary policy making, which has implications for the credit creation of the banking system and therefore for the base money held by commercial banks at the central bank. Base money can be decomposed into several components, of which only open market operations are directly controlled by the central bank (see Disyatat 2008). Autonomous factors as important components of base money, such as standing facilities or the demand for coins and notes in circulation, cannot be directly influenced by the central bank. Therefore, the reserve holdings of the commercial banks at the central bank tend to be independent of the interest-rate decisions of the central bank in the short term. However, in the medium term, central banks influence the monetary base via the impact of interest-rate decisions on economic activity. If the central bank cuts (lifts) interest rates, under normal conditions the commercial banks’ credit provision to the private sector will increase (fall), and therefore the demand for base money will grow (fall) as well. Since money-market rates have reached the zero bound in most industrialized countries, the monetary bases (or the size of the central bank balance sheets) have become the direct instrument of monetary policy making.
13. The European Central Bank assumed the average real growth rate of the euro area to be 2 percent and the change in the velocity of money to be 0.5 percent per year.
Given an inflation target of 2 percent, this implied a reference value for money supply growth of 4.5 percent. After the turn of the millennium up to the European financial and debt crisis,
the reference value for money supply growth was mostly missed by far, while the inflation target of close to 2 percent was mainly achieved. For this reason and because the connection between money supply and inflation had proved to be empirically hardly traceable, money supply growth as a reference for monetary policy decisions was stepwise discarded and moved from the first to the second pillar of the European Central Bank’s monetary policy strategy in 2003.
14. The velocity of money is assumed to be constant.
15. This effect is circumvented in case of helicopter money, which is directly distributed among households.
16. This is an important innovation and a deviation from Wicksell (1898 and 1936), who did not primarily address stock markets. Nevertheless, Wicksell (1936) acknowledged that interest rates and credit conditions affect real estate markets: “Even more convincing is the case where the money is used for durable investment. Let us suppose that there is a general fall in the rate of interest from 4 to 3 per cent. We have seen that the value of all permanent capital goods, for instance of dwelling-houses, will go up by 33 per cent. . . . Rents will rise, and while it is true that the owners of houses will spend proportionately more on such things as repairs, it is to be presumed that their net return will go up in proportion. This will bring about a further rise in the price of houses (though not the full equivalent of the original rise), and so indirectly a further rise in the prices of everything else. . . . An abnormally large amount of investment will now probably be devoted to durable goods. There may result a relative overproduction of such things as houses and a relative underproduction of other commodities” (95).
17. During the upswing, in particular, the upper management (as agent) can privatize major parts of the (speculative) profits, for instance, in the form of bonuses.
Once the bubbles burst, the resulting losses during the downswing may be shifted to the owners of the financial institution (stockholders), that is, the principal (bail-in). Alternatively, the costs of the rescue measures are shifted to the public sector directly via public bailouts and/or indirectly via monetary expansion. This reduces the incentives for the principal to take sanctions against the agent. This is equivalent to the circumvention of the liability principle of market economies.
18. “In depressed times when they usually suffer from an excess of cash, the banks should reduce their loan rates of interest early and energetically, just as they should raise them early, at the outset of good times, both in order to prevent excessive speculation and unsound ventures and to stimulate saving” (Wicksell 1936, 233).
19. Concerning the complementary development in China in the context of US monetary and Chinese exchange rate and monetary policies, see Schnabl 2017.
20. See Schnabl and Wollmershäuser 2013 on the asymmetric effects of the European Central Bank’s monetary expansion on different parts of the European monetary union (and beyond).
21. On the overinvestment boom in China, see Schnabl 2017.
22. This is equivalent to a market failure, as in Akerlof 1970.
23. At the same time, household savings rates decline (see upper panel of figure 19.9), as increasingly loose monetary policies constitute a redistribution from households to enterprises. It is therefore difficult to provide sound empirical evidence for the global saving glut hypothesis as launched by Bernanke (2005). The assumed structural increase in net household savings because of aging societies cannot be observed in any of
the aging countries with surplus savings (over investments). The increase in aggregate net savings surpluses in these countries (relative to investments) is rather due to the increase in corporate savings rates (especially resulting from declining financing costs) and the fall in investments.
24. This increases the profit per share and therefore—in many cases—the bonus payments for the upper management (see section 19.4.3).
25. This is particularly the case if national debt increases in the course of the crisis management, as observed in Japan and many European crisis countries. This effect vanishes when interest rates on government bonds turn negative.
26. In the monetary overinvestment theories, overly favorable refinancing conditions during the upswing cause additional investment projects with lower expected returns to be financed. The marginal and average efficiency of investments decreases. During downturn and crisis, investment projects with low internal rates of return are canceled. The marginal and average efficiency of investments increases. In the long term, the average efficiency of investment is therefore mainly constant. In contrast, Summers (2014) and Laubach and Williams (2015) assume a gradual secular decline of the average return on investment, which they assume to be matched by a secular decline of what they call the natural interest rate.
27. From a private-sector perspective, the average return on financial investments will seem relatively high if potential losses are counteracted by the central bank. In aggregate, however, the overall returns need to be adjusted for possible state subsidies. This is, for example, the case when banks are recapitalized with public money or the costs of bailouts in the course of central bank crisis management become visible, such as in the form of higher inflation or the recapitalization of the central bank.
From a macroeconomic perspective, returns on speculative investments in the financial markets are therefore significantly lower or even negative.
28. As proposed by Koo (2003) in the face of balance-sheet recession.
29. On the impact of credit booms on the allocation of labor and productivity dynamics, see also Borio et al. (2016).
30. Similar reasoning can be found in McKinnon (1973) and Shaw (1973), who at the time identified financial repression as a major obstacle to growth in developing countries.
31. This hypothesis is consistent with de Haan and Sturm (2016) if financial liberalization and credit growth are acknowledged as the outcome of gradual monetary expansion.
32. Low- and middle-income groups are assumed to hold more low-risk financial assets, because they perceive investments in the asset markets to be high-risk.
33. For more on financial repression, see Hoffmann and Zemanek (2012).
34. As experienced, for instance, in Greece, Portugal, Spain, and Ireland prior to the crisis, unsustainable financial-market booms inflate tax revenues, which tempts governments to spend beyond their long-term means (see Eschenbach and Schuknecht 2004).
35. Germany is currently an exception for two reasons. First, productivity increases, which were exported precrisis to other European countries, are now partially repatriated, which has allowed for real wage increases since the outbreak of the crisis. Second, with the European Central Bank keeping monetary conditions very loose as crisis management for the southern European crisis countries, a real estate bubble and an export boom are being fueled in Germany, encouraging real wage increases.
36. The negative demand effect of declining real incomes is partially offset by declining saving of the household sector, in particular of the younger generations.
References

Adrian, Tobias, and Hyun-Song Shin. 2008. “Liquidity, Monetary Policy and Financial Cycles.” Current Issues in Economics and Finance 14, no. 1: 1–7.
Akerlof, George. 1970. “The Market for Lemons: Quality Uncertainty and the Market Mechanism.” Quarterly Journal of Economics 84, no. 3: 488–500.
Barnett, Alina, Adrian Chiu, Jeremy Franklin, and Maria Sebastiá-Barriel. 2014. “The Productivity Puzzle: A Firm-Level Investigation into Employment Behavior and Resource Allocation over the Crisis.” Bank of England working paper 495.
Barro, Robert, and David Gordon. 1983. “A Positive Theory of Monetary Policy in a Natural Rate Model.” Journal of Political Economy 91, no. 4: 589–610.
Bernanke, Ben. 1995. “The Macroeconomics of the Great Depression: A Comparative Approach.” Journal of Money, Credit and Banking 27, no. 1: 1–28.
Bernanke, Ben. 2000. “Japanese Monetary Policy: A Case of Self-Induced Paralysis?” In Japan’s Financial Crisis and Its Parallels to U.S. Experience, edited by Ryoichi Mikitani and Adam Posen, 149–166. Washington, D.C.: Institute for International Economics.
Bernanke, Ben. 2002. “Remarks by Governor Ben S. Bernanke at the Conference to Honor Milton Friedman.” University of Chicago, November 8.
Bernanke, Ben. 2004. “The Great Moderation.” Remarks at the meetings of the Eastern Economic Association, Washington, DC.
Bernanke, Ben. 2005. “The Global Saving Glut and the U.S. Current Account Deficit.” Remarks at the Sandridge Lecture, Virginia Association of Economists, Richmond.
Blinder, Alan, and Ricardo Reis. 2005. “Understanding the Greenspan Standard.” Princeton University Department of Economics working paper 88.
Borio, Claudio. 2014. “The Financial Cycle and Macroeconomics: What Have We Learnt?” Journal of Banking and Finance 45: 182–198.
Borio, Claudio, Enisse Kharroubi, Christian Upper, and Fabrizio Zampolli. 2016. “Labor Reallocation and Productivity Dynamics: Financial Causes, Real Consequences.” BIS working paper 534.
Brunnermeier, Markus, and Isabel Schnabel. 2014. “Bubbles and Central Banks: Historical Perspectives.” CEPR discussion papers 10528. Caballero, Ricardo, Takeo Hoshi, and Anil Kashyap. 2008. “Zombie Lending and Depressed Restructuring in Japan.” American Economic Review 98, no. 5: 1943–1977. Cantillon, Richard. 1931. Abhandlung über die Natur des Handels im allgemeinen. Jena: Fischer. Cardarelli, Roberto, and Lusine Lusinyan. 2015. “U.S. Total Factor Productivity Slowdown: Evidence from the U.S.” IMF working paper 15/116. De Grauwe, Paul. 2011. “Animal Spirits and Monetary Policy.” Economic Theory 47, no. 2: 423–457. De Haan, Jacob, and Jan Sturm. 2016. “Finance and Income Inequality: A Review and New Evidence.” CESifo working paper 6079. Dell’Arricia, Giovanni, Deniz Igan, Luc Laeven, Hui Tong, Bas Bakker, and Jérôme Vandenbussche. 2012. “Policies for Macrofinancial Stability: How to Deal with Credit Booms.” IMF staff discussion note 12/06. Disyatat, Piti. 2008. “Monetary Policy Implementation: Misconceptions and Their Consequences.” BIS working paper 269. Eschenbach, Felix, and Ludger Schuknecht. 2004. “Budgetary Risks from Real Estate and Stock Markets.” Economic Policy 19, no. 39: 313–346.
582 Schnabl Friedman, Milton. 1970. “Counter- Revolution in Monetary Theory.” Wincott Memorial Lecture. Institute of Economic Affairs occasional paper 33. Friedman, Milton, and Anna Schwartz. 1963. A Monetary History of the United States, 1867– 1960. Princeton: Princeton University Press. Gertler, Pavel, and Boris Hofmann. 2016. “Monetary Facts Revisited.” BIS working paper 566. Goodhart, Charles. 1975. “Problems of Monetary Management: The UK Experience.” Papers in Monetary Economics, vol. I, Reserve Bank of Australia, Canberra. Gopinath, Gita, Sebnem Kalemli-Özkan, Loukas Karabarbounis, and Carolina Villegas- Sanchez. 2015. “Capital Allocation and Productivity in South Europe.” The Quarterly Journal of Economics 132, no. 4: 1915–1967. Gordon, Robert. 2012. “Is U.S. Economic Growth Over? Faltering Innovation Confronts the Six Headwinds.” NBER working paper 18315. Hayek, Friedrich August von. 1929. Geldtheorie und Konjunkturtheorie. Salzburg: Philosophia Verlag. Hayek, Friedrich August von. 1931. Prices and Production. New York: August M. Kelly. Hayek, Friedrich August von. 1937. “Monetary Nationalism and International Stability.” Fairfield: Augustus Kelley. Hayek, Friedrich August von. 1944. The Road to Serfdom. London: Routledge. Hayek, Friedrich August von. 1968. “Der Wettbewerb als Entdeckungsverfahren.” In Die Österreichische Schule der Nationalökonomie. Texte—Band II von Hayek bis White, edited by Internationales Institut, Österreichische Schule der Nationalökonomie, 119–137. Vienna: Manz’sche Verlags-und Universitätsbuchhandlung. Hoffmann, Andreas, and Gunther Schnabl. 2008. “Monetary Policy, Vagabonding Liquidity and Bursting Bubbles in New and Emerging Markets—An Overinvestment View.” The World Economy 31: 1226–1252. Hoffmann, Andreas, and Gunther Schnabl. 2011. “A Vicious Cycle of Manias, Crises and Asymmetric Policy Responses—An Overinvestment View.” The World Economy 34: 382–403. Hoffmann, Andreas, and Gunther Schnabl. 2016a. 
“The Adverse Effects of Ultra-Loose Monetary Policies on Investment, Growth and Income Distribution.” Cato Journal 36, no. 3: 449–484. Hoffmann, Andreas, and Gunther Schnabl. 2016b. “Monetary Policy in Large Industrialized Countries, Emerging Market Credit Cycles, and Feedback Effects.” Journal of Policy Modeling 38, no. 3: 1–19. Hoffmann, Andreas, and Holger Zemanek. 2012. “Financial Repression and Debt Liquidation in the US and the Euro Area.” Intereconomics: Review of European Economic Policy 47, no. 6: 344–351. Jorda, Oskar, Moritz Schularick, and Alan Taylor. 2015. “Betting the House.” Journal of International Economics 96, no. 1: 2–18. Keynes, John Maynard. 1936. “The General Theory of Employment, Interest, and Money.” Cambridge: Cambridge University Press. Koo, Richard. 2003. “Balance Sheet Recession: Japan’s Struggle with Uncharted Economics and Its Global Implications.” Hoboken: John Wiley. Kornai, János. 1986. “The Soft Budget Constraint.” Kyklos 39, no. 1: 3–30. Kydland, Finn, and Edward Prescott. 1977. “Rules Rather Than Discretion: The Inconsistency of Optimal Plans.” Journal of Political Economy 85, no. 3: 473–490.
Crisis Management and Austrian Business Cycle Theory 583 Laubach, Thomas, and John Williams. 2015. “Measuring the Natural Rate of Interest Redux.” Federal Reserve Bank of San Francisco working paper 2015-16. Leibenstein, Harvey. 1966. “Allocative Efficiency vs. X-Efficiency.” American Economic Review 56, no. 3: 392–415. Lucas, Robert. 1976. “Econometric Policy Evaluation: A Critique.” Carnegie-Rochester Conference Series on Public Policy 1: 19–46. McKinnon, Ronald. 1973. Money and Capital in Economic Development. Washington, D.C.: Brookings Institution. McKinnon, Ronald. 2012. “Zero Interest Rates in the United States Provoke World Monetary Instability and Constrict the U.S. Economy.” SIEPR policy brief. Stanford, August. Mises, Ludwig von. 1912. Die Theorie des Geldes und der Umlaufmittel. Leipzig: Duncker und Humblot. Mises, Ludwig von. 1949. Human Action.” New York: Foundation for Economic Education. Peek, Joe, and Eric Rosengren. 2005. “Unnatural Selection: Perverse Incentives and the Misallocation of Credit in Japan.” American Economic Review 95, no. 4: 1144–1166. Posen, Adam. 2000. “The Political Economy of Deflationary Monetary Policy.” In Japan’s Financial Crisis and Its Parallels to U.S. Experience, edited by Ryoichi Mikitani, and Adam Posen, 194–208. Washington, D.C.: Institute for International Economics. Quian, Yingyi, and Chenggang Xu. 1998. “Innovation and Bureaucracy under Soft and Hard Budget Constraints.” Review of Economic Studies 65, no. 1: 151–164. Reinhart, Carmen, and Vincent Reinhart. 2010. “After the Fall.” NBER working paper 16334. Schnabl, Gunther. 2017. “Exchange Rate Regime, Financial Market Bubbles and Long-term Growth in China: Lessons from Japan.” China & World Economy 25, no. 1: 32–57. Schnabl, Gunther, and Timo Wollmershäuser. 2013. “Fiscal Divergence and Current Account Imbalances in Europe.” CESifo working paper 4108. Schumpeter, Joseph. 1934. “The Theory of Economic Development.” Cambridge: Harvard University Press. 
Sekine, Toshitaka, Keiichiro Kobayashi, and Yumi Saita. 2003. “Forbearance Lending: The Case of Japanese Firms.” Bank of Japan Institute for Monetary and Economic Studies 21, no. 2: 69–92. Shaw, Edward. 1973. Financial Deepening in Economic Development. New York: Oxford University Press. Solow, Robert. 1956. “A Contribution to the Theory of Economic Growth.” Quarterly Journal of Economics 70, no. 1: 65–94. Solow, Robert. 1957. “Technical Change and the Aggregate Production Function.” Review of Economics and Statistics 39, no. 2: 312–320. Summers, Larry. 2014. “U.S. Economic Prospects: Secular Stagnation, Hysteresis, and the Zero Lower Bound.” Business Economics 49, no. 2: 65–73. Swan, Trevor. 1956. “Economic Growth and Capital Accumulation.” Economic Record 32, no. 2: 334–361. Taylor, John. 1993. “Discretion versus Policy Rules in Practice.” Carnegie-Rochester Conference Series on Public Policy 39: 195–214. Taylor, John. 2007. “Housing and Monetary Policy.” NBER working paper 13682. Wicksell, Knut. 1898 [2005]. Geldzins und Güterpreise. Munich: FinanzBuch.
584 Schnabl Wicksell, Knut. 1936. Interest and Prices. Translation of 1898 edition by R. F. Kahn. London: Macmillan. Woodford, Michael. 2003. Interest and Prices: Foundations of a Theory of Monetary Policy. Princeton: Princeton University Press. Yoshino, Naoyuki, and Tetsuro Mizoguchi. 2010. “The Role of Public Works in the Political Business Cycle and the Instability of the Budget Deficits in Japan.” Asian Economic Papers 9, no. 1: 94–112.
chapter 20

The Changing Role of the Central Bank in Crisis Avoidance and Management

David G. Mayes
20.1 Introduction

The experience of recent years, especially with the global financial crisis (GFC), has substantially changed the role of the central bank in both avoiding and managing financial crises.1 Until the last decade or so, central banks would typically have a rather vague mandate to maintain the stability of the financial system (Bank for International Settlements 2009). Most of them would have a relatively clear approach to how they would manage the “lender of last resort” role should a bank or banks in general find themselves unable to raise the necessary liquidity to meet their obligations despite being solvent. But beyond that, what they should do and, indeed, how widely they should interpret their role in the financial sector beyond traditional banking would be largely unspecified. Most central banks had a responsibility for the supervision of individual banks, although often at arm’s length from monetary and other responsibilities, but many did not. Fewer had supervisory responsibility for the whole financial sector. The underlying economic justification of the various structures and roles has been similarly underdeveloped, and theory has in many respects followed empirical evidence, rather than setting out a clear set of well-based principles for practice to follow.
I am grateful to Giannoula Karamichailidou for careful comments and to the referee for helpful suggestions.
There has been a progressive reaction against this vagueness, driven by examples of problems caused by financial crises that could not be or were not addressed satisfactorily among the advanced countries. In some respects, this can be dated back to the reaction to the spillovers from the Herstatt failure in 1974 and the setting up of what became the Basel Committee on Banking Supervision.2 However, it is probably the Nordic crises of 1987–1994 that started the modern concerns (Mayes, Halme, and Liuksila 2001; Mayes 2016) and led to a much more sweeping approach to both crisis avoidance and crisis management. In particular, this led to an explicit concern for assessment of the threats to the financial system as a whole and the production of financial stability reports (FSRs),3 which have become steadily more sophisticated over the years (Čihák et al. 2012).4 The Basel Committee had focused its attention largely on ensuring that each bank held enough (risk-weighted) capital so the chance of its getting into difficulties or even failing was sufficiently small and that this formed an acceptable risk for central banks to manage. Clearly, the larger the bank and the more connected it was in the financial system, the greater the attention supervisors paid to its performance and soundness. While some steps had been taken, most countries found that the GFC showed that their procedures were extensively wanting in both crisis avoidance and crisis management.
Even the United States, which had extensive experience in handling bank failures from the time of the last great crisis after the stock-market crash of 1929, found it needed to take extraordinary measures to keep the financial system from imploding and, when it did not or could not in allowing Lehman Brothers to fail, created a disaster that had repercussions around the world.5 The most significant and unwelcome finding was that the lack of powers meant that many governments could only maintain adequate financial stability by bailing out failing institutions with taxpayers’ money.6 Not only was this judged inequitable because the losses fell mainly on people who had not knowingly taken the risks, but if it were not changed for the future, it would set up incentives for financial institutions to run similar risks in the future— simple moral hazard. The routes followed to improve both crisis-avoidance measures and crisis management have varied across countries but have almost uniformly involved a substantial increase in the powers of central banks and much greater clarity in the objectives and how they will be met in the case of potential or actual financial crises in the future.7 These extra powers pose many challenges, not least being potential conflicts among objectives. Therefore, in this chapter, I explore the principal changes that have taken place, their implications for central banks, the strains that these may pose for the future, and their economic justification. For reasons of space, the focus is on the advanced countries, but major changes have been taking place in almost all central banks.8 Section 20.2 considers four main issues that have affected the developments: too big to fail (TBTF), gaps in the system, the macroprudential approach, and lender of last resort (LOLR), before moving from the national to the international level. 
In section 20.3, I consider how national central banks can effectively handle an international problem where banks run across borders and the causes of, and indeed the solutions to, problems frequently lie in other jurisdictions. A final section, 20.4, discusses the enhanced role of the central bank in the light of these changes.
20.2 The Balance between Crisis Avoidance and Crisis Management

Tucker (2013) has argued that increasing the resilience of banks through greater capital cover and liquidity, on the one hand, and improving the ability to resolve troubled banks at minimum cost to society, on the other, can be regarded as providing the two bookends to having a stable banking system. In other words, better avoidance measures at one end and better crisis management at the other would buttress the system. While the capital buffer and crisis-avoidance elements of this twin picture had become steadily more convergent through the work of the Basel Committee, the treatment of bank resolution had not by the time of the GFC. In the United States, bank failure was common, and hence there were well-developed procedures for handling it. Elsewhere, failure was less usual, and the normal response was to try to find a solution that either involved a takeover by other stronger institutions (with or without participation by the state or a consortium of private-sector financial entities) or involved some direct public-sector intervention, in the form of nationalization or some form of direct loan or other investment. Even in the United States, where the Federal Deposit Insurance Corporation (FDIC) had been in operation since 1935 and had been playing an increasingly important role in the resolution of financial institutions, especially since the difficulties of the savings and loan crisis of the early 1980s had led to the passing of the FDICIA (the Federal Deposit Insurance Corporation Improvement Act) in 1991, large banks and nonbank financial institutions effectively fell outside the system. Under the FDICIA, the FDIC was not merely responsible for organizing the resolution of banks that had failed, but it was also responsible for handling banks that were getting into difficulty as they approached and then fell below the prescribed limits for capital adequacy.
The FDIC had an obligation to resolve banks in a manner that was of least cost to itself.10 If an institution was very small, then it could simply be closed and insured depositors paid out from the FDIC’s funds, with the FDIC acceding to their claims and eventually being paid back from the insolvency proceedings. On the whole, however, the FDIC found the least-cost solution was to sell off all or part of the operations of the bank to competitors through some process of auction. Again, losses and costs could be recouped from the insolvency estate. The key to the potential success of this operation was the fact that the FDIC was supposed to act early, before the bank reached the point of nonviability. In that way, it might be possible to turn the bank around short of failure, or if that did not work, there would be time to find a buyer or other means of resolving the problem before the bank reached the point where it would have to cease operation.
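The least-cost test described above can be illustrated with a stylized calculation. The sketch below is purely illustrative: the figures, the recovery rate, and the two resolution options compared (a depositor payout versus a purchase-and-assumption sale) are hypothetical simplifications of what is in practice a detailed valuation exercise.

```python
# Stylized illustration of a least-cost resolution test (all figures hypothetical).
# Option 1: close the bank, pay out insured depositors, recover from the estate.
# Option 2: purchase and assumption -- sell deposits and good assets to a buyer.

def payout_cost(insured_deposits, estate_recovery_rate):
    """Net cost of paying insured depositors, given the share of the
    insurer's claim eventually recovered from the insolvency estate."""
    return insured_deposits * (1 - estate_recovery_rate)

def purchase_and_assumption_cost(asset_shortfall, buyer_premium):
    """Net cost of a sale: the insurer covers the gap between assumed
    deposits and transferred assets, less the premium the buyer pays."""
    return max(asset_shortfall - buyer_premium, 0.0)

def least_cost_option(insured_deposits, estate_recovery_rate,
                      asset_shortfall, buyer_premium):
    options = {
        "payout": payout_cost(insured_deposits, estate_recovery_rate),
        "purchase_and_assumption": purchase_and_assumption_cost(
            asset_shortfall, buyer_premium),
    }
    choice = min(options, key=options.get)  # pick the cheapest option
    return choice, options

choice, costs = least_cost_option(
    insured_deposits=100.0,     # $100m of insured deposits
    estate_recovery_rate=0.60,  # 60% eventually recovered in insolvency
    asset_shortfall=45.0,       # deposits exceed marketable assets by $45m
    buyer_premium=12.0,         # winning bid premium in the auction
)
print(choice, costs)  # selects purchase_and_assumption (cost 33.0 vs 40.0)
```

With these hypothetical numbers, the sale dominates the payout, which matches the FDIC's typical experience described in the text.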
This approach, labeled structured early intervention and resolution by Benston and Kaufman (1988) and prompt corrective action more generally after its introduction, was intended not so much as a plausible way out of difficulty without the use of taxpayer funds but as a strong incentive structure for banks. By having a plausible route to resolution that the owners and managers of banks would not like, it would encourage them first of all to run their banks better so they would avoid getting into fatal trouble and, second, if trouble did threaten, it would encourage them to find a private-sector solution that they would find more attractive and hence avoid the clutches of the authorities. Key to this approach was the idea that banks would not and, indeed, could not go through the process of traditional insolvency, because those managing insolvencies on behalf of the creditors do not have the powers and ability to act fast enough to minimize the costs, particularly to third parties.11 In particular, it is difficult to sell the bank as a going concern, which is often the best way to minimize losses. Thus, the FDICIA, like its predecessors, is a lex specialis, which allows the authorities to take over the running and resolution of banks from the shareholders when their capital is largely eroded and failure is pretty well inevitable.12 At first glance, these developments have few implications for central banks. Microprudential supervision of individual financial institutions is the preserve of financial supervisors. While some central banks have this responsibility, many do not.13 Some countries, such as Mexico and Canada, had followed a similar route to that of the United States and assigned a deposit insurer the role of resolving banking problems, but most did not.14 Central banks became deeply involved for four main reasons:

1. Because this approach to resolution would not work for large institutions―the idea of being too big to fail (Stern and Feldman 2004; Mayes and Liuksila 2004).
2. Because some essential parts of the financial system lay outside the arrangements―illustrated most vividly by the failure of Lehman Brothers in September 2008.
3. Because there were other elements of the stability of the financial system that could not be dealt with well simply by handling each individual institution separately―the idea of macroprudential supervision.
4. Because central banks were involved anyway as the lender of last resort, in providing liquidity to troubled but solvent institutions.

While each of these aspects is dealt with in turn in what follows, it is important to recognize here that while the authorities were aware of all of these issues in advance, it was only the GFC that brought home the importance of treating them all together. When the crisis struck and the deficiencies in the arrangements became clear, the only way to prevent collapse was for the state to step in and use taxpayers’ money to stop the financial system from collapsing. The arrangements that are currently being put in place through international agreement, largely through the Financial Stability Board, are predicated on the view that the next time around, the taxpayer should not be called on.15 Not only has the GFC been enormously costly, but much of that cost has fallen on people who
were not knowingly taking on such risks. The consequences have thus been both inefficient and very unfair. The general result of the pressures and the finding of deficiencies in the system of crisis avoidance has been to increase the role of the central bank. This is not so much because the central bank is the logical repository or because economic theory suggests that such an increase in responsibility would reap economies of scale or have reinforcing and consistent incentives. Indeed, such expansion is setting up a number of potential conflicts of interest. The reason is more related to the central bank having the motivation, information, and capacity to act. The main areas of expansion have been in resolution and macroprudential stability. Both have implications for the supervision and regulation of institutions, so it is not surprising if, in addition to resolution and macroprudential stability, more central banks should be acquiring responsibility for supervision and regulation of individual institutions and of key parts of the financial infrastructure as well. First of all, the resolution authority needs to be well informed about incipient problems, and the supervisor is best placed to have that information. Second, forestalling problems requires having powers to affect the structure of financial institutions, the financial system, products, and indeed financial institutions’ behavior. Most macroprudential tools, such as loan-to-value ratios, loan-to-income ratios, credit growth limits, and others, are applied at the individual institution level. They do not represent levers the central bank can otherwise pull for itself. On top of this, many of the problems of financial stability relate to liquidity and the functioning of markets.
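Instrument-level tools of this kind operate as simple eligibility checks on individual loans. A minimal sketch follows; the 80 percent loan-to-value and 4.5 times loan-to-income caps are hypothetical calibrations for illustration, not any actual central bank's settings.

```python
# Illustrative check of a mortgage application against macroprudential caps.
# The 80% LTV and 4.5x LTI limits are hypothetical calibrations.

def passes_macroprudential_caps(loan, property_value, gross_income,
                                max_ltv=0.80, max_lti=4.5):
    ltv = loan / property_value   # loan-to-value ratio
    lti = loan / gross_income     # loan-to-income ratio
    return ltv <= max_ltv and lti <= max_lti

print(passes_macroprudential_caps(400_000, 520_000, 95_000))  # True: within both caps
print(passes_macroprudential_caps(400_000, 520_000, 80_000))  # False: breaches LTI cap
```

The point made in the text is visible even in this toy form: the check is applied loan by loan at the individual institution, so enforcing it requires supervisory powers over lenders rather than a lever the central bank can pull directly.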
Not only does the central bank need to be able to step in when markets cease to function efficiently, but it also has concerns over stability that imply an element of control. The limits of the requirements do not even stop at the ability to use funds to help address the problem. Thus, resolution authorities need access to funds in the short run, usually funds contributed by the banking industry itself rather than the taxpayer, but a contingent ability to draw on a government line of credit is normally a sensible backup. As a result, some central banks are becoming very important institutions in the running of the economy, certainly to the extent that they justify the concerns of the “rise of the unelected” voiced by Vibert (2007) and others.16 These functions do not have to be undertaken by the central bank, but they do require major coordination. Indeed, they do not have to be undertaken by the public sector; deposit insurance can be private, as it is in Finland or in part in Germany. It is notable that central banks of developed countries appeared to pay little regard to the experience of Asian countries in responding to their financial crises in 1997. Not only were severe problems encountered, with Indonesia and Thailand, for example, enduring extensive bank losses and closures, but these countries had to make extensive changes to their bank-resolution and crisis-management arrangements. Korea, in particular, is cited as a model example of how to implement a new regime quickly and resolve the problems effectively and without delay (Cho 2012). At the same time, these countries have sought to provide themselves with a greater ability to absorb shocks, in their case by building up foreign exchange reserves. Thus, the same principle of the
bookends approach to crisis avoidance and crisis management was followed. Since developed countries do not normally face the same problem of having to rely on external resources to absorb a crisis, as they can rely on raising government debt in their own currency to finance the costs, the relevance of the emerging-market experience might have been thought more limited. But OECD countries, Greece in particular, did reach just the same constraints in the GFC. Moreover, once such constraints are reached, the country in difficulty has to accept the demands of the official lenders regarding how it should manage the crisis. In the case of the Asian countries, it was the International Monetary Fund (IMF), and for Greece and other euro area countries, it was the “troika” of the IMF, the European Commission, and the European Central Bank (ECB). The Asian countries sought to avoid being caught in the same way again by building up reserves. As a consequence, their central banks became much more important in resolving any future problems, reducing dependence on the outside world. The euro area has determined to go through a similar process but by reducing domestic government debt rather than by building up foreign exchange reserves. It remains to be seen whether the arrangements currently being implemented can actually reduce the costs and improve the fairness as much as might be hoped, which is discussed in sections 20.3 and 20.4. But first, I return to the four factors that have drawn central banks into crisis prevention and resolution.
20.2.1 Too Big to Fail

If a bank that is large compared to the financial system or a large number of smaller banks were to stop operating, then this could threaten the viability of the financial system as a whole. The rest of the banking system would be exposed to so many unresolved transactions that it, too, could not function. Such large-scale problems need to be treated differently, so that although shareholders may be wiped out if the losses are sufficiently large and management is replaced, the operations of the institution need to continue without a break. This makes some of the resolution methods that can be applied to smaller banks inappropriate. In the past, this would have been achieved by some sort of bailout by the taxpayer, involving either nationalization or at least a major injection of taxpayers’ funds to keep the bank(s) adequately capitalized. The disaster that followed not doing this for Lehman Brothers and the substantial extrafinancial costs, let alone knock-on costs from the loss of confidence, have been documented by the FDIC (Federal Deposit Insurance Corporation 2011).17 The size and unfairness of these bailouts in the GFC have led authorities around the world to avow that in the future, it should be the shareholders and creditors of banks, in increasing order of seniority, who should cover the losses and in most cases provide the recapitalization of banks18 and not the taxpayer. This process of recapitalization through writing down of claims and/or conversion of debt into equity has been labeled bailing in.19
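The seniority ordering just described can be sketched as a stylized loss-allocation waterfall. The layers and amounts below are hypothetical; actual creditor hierarchies, exclusions, and conversion terms are set by national resolution law, not by this simple rule.

```python
# Stylized bail-in waterfall: losses are written down against claims in
# increasing order of seniority. Layers and amounts are hypothetical.

def bail_in(loss, layers):
    """layers: list of (name, amount) ordered from most junior to most senior.
    Returns the remaining claim in each layer after absorbing the loss."""
    remaining = []
    for name, amount in layers:
        absorbed = min(loss, amount)   # this layer absorbs what it can
        loss -= absorbed
        remaining.append((name, amount - absorbed))
    if loss > 0:
        raise ValueError("loss exceeds total bail-inable liabilities")
    return remaining

layers = [("equity", 50.0), ("subordinated debt", 30.0), ("senior unsecured", 120.0)]
print(bail_in(70.0, layers))
# equity wiped out, subordinated debt written down by 20, seniors untouched
```

In an actual bail-in, the written-down junior claims would typically be converted into new equity to recapitalize the bank rather than simply cancelled, but the ordering of who bears the loss is as in the sketch.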
These ideas were around before the GFC (e.g., Mayes, Halme, and Liuksila 2001; Mayes and Liuksila 2004), but the problem was not seen as important enough to warrant much in the way of discussion. It was assumed that the existing systems would work adequately, and of course, authorities would not wish to admit either that their largest banks could possibly fail or that there were important deficiencies in their systems, which would require action. For some form of bailing in to work, not only do the authorities need to have the legal power to enforce it, but the banks need to have the structure and systems in place for it to work. Thus, in addition to the lex specialis conveying the power, the statutory manager or special manager who is put in by the resolution authority to effect the change has to be able to complete the bail-in between the time the market is closed to interbank transactions at the end of one day and the time the market reopens the following day. For this reason, bank resolutions are normally performed over a weekend so that around sixty-two hours are available and not just fourteen. Even so, authorities have to be prepared to work just overnight, and this is a clear requirement by the Reserve Bank of New Zealand (RBNZ) for that country’s Open Bank Resolution (OBR) policy. Moreover, the Bank of England (BoE) found that it was able to complete the resolution of its first institution under this scheme, the Dunfermline Building Society, in about three hours. However, that institution was very small, and the BoE knew well in advance that there was a problem and had all parts of the transaction in place beforehand.20 The worry has been that for large institutions, this is impossible. Very extensive planning has to be in place, and the authorities have to have a resolution plan that sets out a plausible route for resolution for each institution in the required time frame. However, such plans are not public.
Banks, especially large ones, do not usually collapse suddenly, except in the case of fraud, so there will be a period of increasing concern beforehand. The usual sequence is that markets become concerned first, share prices start to fall, and bond spreads widen, particularly for the junior debt that will be threatened first. Customers are typically ill informed, so bank runs are normally the very last part of the process and are frequently forestalled by the authorities’ own actions. The second step is that the authorities become concerned enough to start looking into the bank’s activities much more closely. Then, in the third step, while the bank is still compliant with capital and other requirements for registration, the authorities will push it to change. Following that, change will become compulsory, and resolution plans will only come into effect if all of the previous stages have not resulted in an adequate recovery. Of course, even before the first signal, when outsiders begin to become concerned, the management and owners of the bank will have had worries and will be undertaking corrective action of their own. The problem will be worse if the management ignores the signals or argues that markets and others are mistaken and hence takes no action.21 Supervisors and central banks also have their own indicator models that seek to provide an early warning of both individual bank problems and more general periods of stress, using both private information disclosed to them and more public data (for surveys, see Detken et al. 2014; Mayes and Stremmel 2014).
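The simplest ingredient of such indicator models is a threshold rule on a market signal, for instance the widening junior-debt spreads mentioned above. The sketch below is a deliberately minimal, hypothetical example; the surveyed models combine many indicators and far richer statistical methods.

```python
# Minimal early-warning flag: alert when a bank's junior-debt spread moves
# more than k sample standard deviations above its own recent average.
# Purely illustrative -- actual supervisory models are far richer.
from statistics import mean, stdev

def spread_alert(spread_history, latest_spread, k=2.0):
    mu, sigma = mean(spread_history), stdev(spread_history)
    return latest_spread > mu + k * sigma

history = [1.10, 1.05, 1.20, 1.15, 1.08, 1.12]  # junior-debt spreads, % p.a.
print(spread_alert(history, 1.20))  # False: within normal range
print(spread_alert(history, 1.60))  # True: flags elevated stress
```

A rule of this kind can only mirror the first, market-based stage of the sequence described in the text; the supervisory stages that follow rest on private information that no public indicator captures.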
What has happened since the GFC is that authorities have tried to think of means of advancing the points at which action takes place. Even with a lex specialis, there are limits to how early the authorities can intervene and take executive control away from the owners―on human rights grounds, if nothing else (Hüpkes 2009).22 Moreover, it is important to be realistic. Just because the authorities could and should intervene in a timely manner does not mean that they do.23 The main step is to require all systemically important institutions to have in place a recovery plan approved by the authorities.24 Such a plan explains how the bank can turn itself around, short of insolvency, if it gets into difficulty and can become properly capitalized and viable again. On the whole, such plans are also not published, so stakeholders have to take them on trust.25 The initial signs of believability are not good, as the Federal Reserve and the FDIC rejected the initial plans of all their systemically important banks (Federal Deposit Insurance Corporation 2014; Board of Governors of the Federal Reserve and FDIC 2014).26 Such plans can only guess at the plausible sources of problems and at best can only be a shopping list for what could be done, not a blueprint for action, as action will be circumstance-dependent. As part of such recovery plans, banks are being compelled to bring forward the bailing-in process. While in pursuit of greater resilience they are being compelled to hold more equity and more securities that could be bailed in in the event of insolvency,27 some of that bail-inable debt needs to be triggered before insolvency to make the chances of recovery greater and hence avoid needing to go into resolution with all of its costs.
The main type of such securities is labeled “contingent convertibles” (CoCos), whose conversion into equity or simple write-down is triggered either by risk-weighted capital ratios falling too low or by the authorities determining that the bank will become nonviable (Calomiris and Herring 2011). In the case of the ANZ CoCos, for example, the capital ratio trigger point is 5.25 percent (of Tier 1 common equity)—clearly above the minimum permitted ratio.28 A problem with such early triggers is that rather than forcing early action and thereby avoiding failure, they may simply bring forward the crisis as people act to cover themselves against possible losses.29 Thus, the price of CoCos is likely to fall dramatically as soon as there is any apprehension that they might be triggered. Even if the central bank is neither the supervisor nor the resolution authority, it will have to be involved at this stage to offer confidence. Such confidence will need to come in two forms. The first is some sort of underwriting of the viability of the bank, perhaps in the form of a collateralized loan, as the bank is supposed to be solvent and expected to remain so at this stage in the proceedings. Of course, a strong signal would be to buy the CoCos themselves, as these are subordinated. In theory, this should just make money for the central bank (and hence the taxpayer), as it is confident that the bank will recover, but the market price is discounted.30 The second form of underwriting is for the system as a whole. If one large bank is on the verge of trouble and it has not followed some irresponsible idiosyncratic policy of its own, stakeholders, including depositors, are likely to conclude that others may be likely to encounter trouble as well. As a result, spreads on all bank securities will widen, especially on CoCos, and the ability to issue any new such securities will disappear.
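The mechanical side of such a capital-ratio trigger can be sketched very simply. The 5.25 percent trigger follows the ANZ example in the text; the balance-sheet figures and the assumption that the full notional converts into common equity are hypothetical, and a write-down variant would instead cancel the claim.

```python
# Mechanical sketch of a going-concern CoCo trigger: if the CET1 capital
# ratio falls to or below the trigger, the CoCo notional converts to equity,
# restoring the ratio. Trigger per the ANZ example; figures hypothetical.

def cet1_ratio(cet1, rwa):
    return cet1 / rwa  # CET1 capital over risk-weighted assets

def maybe_convert_coco(cet1, rwa, coco_notional, trigger=0.0525):
    """Return (new_cet1, converted_amount). Conversion adds the full CoCo
    notional to CET1; a write-down variant would cancel the claim instead."""
    if cet1_ratio(cet1, rwa) <= trigger:
        return cet1 + coco_notional, coco_notional
    return cet1, 0.0

# Bank with 1,000 of risk-weighted assets whose CET1 has been eroded to 50:
new_cet1, converted = maybe_convert_coco(cet1=50.0, rwa=1000.0, coco_notional=40.0)
print(new_cet1, converted)           # 90.0 40.0
print(cet1_ratio(new_cet1, 1000.0))  # 0.09 -- ratio restored to 9%
```

The discontinuity visible in the sketch, where crossing the trigger transfers the whole notional at once, is one reason CoCo prices can fall sharply as soon as the trigger looks reachable, as the text goes on to note.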
Changing Role in Crisis Avoidance and Management 593

Furthermore, if depositors could be bailed in, as in New Zealand, it will probably be necessary to issue a deposit guarantee.31 In compelling early action, there may be a fine line between buying enough time to solve systemic bank problems without a costly failure and triggering an earlier crisis of confidence. What seems very clear, however, is that one cannot somehow “privatize” or internalize the resolution of large banks and make this something that, although directed by the resolution agency, is entirely internal to the shareholders and creditors of the bank in question―even if, as in the European Union, there are some limited resolution funds available to help with the direct costs (see the next section). It is likely that there will be substantial spillovers. It is therefore not really possible to separate the buffers at the level of the individual institution, which seek to absorb the impact of adverse shocks without bringing that institution down, from buffers at the national level that give confidence that the shock can be absorbed. Such buffers are simply debt-raising capacity, foreign exchange reserves, or sovereign wealth funds. A central bank that does not have these behind it may not be able to cope.
20.2.2 Gaps in the System

Normally, where the central bank is the supervisor, supervision is restricted to some definition of banks―in some cases, only depository institutions. What the collapse of Lehman Brothers and AIG in the United States showed is that investment banks and insurance companies can be just as systemically important as large conventional retail banks. Indeed, even for conventional banks, it can be the nonretail banking side of their activities that is the primary source of their problems.32 The safety net and some viable approach to resolution, short of simply ceasing to operate, are therefore needed for these institutions as well. A simple solution is just to expand the definition of what constitutes a bank and hence expand the regulatory responsibility of the central bank. However, the route to handling these other institutions is not the same. There are no depositors to protect. It was only the financial insurance side of AIG that was in difficulty, and problems with a major life or general insurer would not have the same consequences. It is worth looking at the case of the insurance company AMI in New Zealand to see the concerns. AMI got into difficulty not because of the global financial crisis but because of the Christchurch earthquakes in 2010–2011. It was a local insurer, with heavy exposure to the Christchurch area, which had undertaken insufficient reinsurance. Clearly, in a major disaster, allowing one of the main insurers to fail would not have been a smart move. However, this was an issue not for a central bank but for the government―from which the support did come. The central bank would only have been involved if the government had not intervened, AMI had failed, and it was then necessary to try to avoid the consequences being a significant financial crisis. Even so, the main problem would have been with those directly affected, the claimants, for whom the central bank would not be able to act. Handling such problems cannot therefore be
uniquely a central bank responsibility and requires coordination with others, particularly the government, with its access to funding.33 It would otherwise be easy for central banks to find themselves in the position of the Bank of Finland in the Finnish financial crisis of 1992. When the KOP bank got into difficulty, it was clearly too systemic to allow it to fold. While using public funds to bail out the bank was the obvious way forward, the government did not have the powers to do that. The central bank was the only organization with the ability to invest a large sum immediately. Ultimately, when the government did arrange to purchase the bank from the Bank of Finland, it did not compensate the central bank for the full cost.34 While it is not normal for central banks to use their funds in this way, it shows on the one hand that in a crisis, one should grasp whatever route is available to address the problem. On the other hand, it shows that this is a risky activity, and those stepping in can quickly be liable for much bigger losses than they expected—something that is true of the private sector as well as the public sector, as Lloyds Bank’s acquisition of HBOS illustrates. In the case of large conglomerates where it is the nonretail banking part of the group that causes the problem, the approach has been to try to ensure that the structure of the group is such that the problem will not bring down the vital retail banking operations. This can be achieved either by requiring that the two businesses be separate, along the lines of the Volcker rule in the United States35 and the proposals of the High Level Group led by Erkki Liikanen for the euro area,36 or by ring-fencing the retail operation, as in the United Kingdom following the recommendations of the Vickers Commission (Independent Commission on Banking 2013). If splitting the group would be economically damaging, then the price would be a higher capital buffer.
There is, of course, a counterargument here (Lehmann 2014) that the nonretail operations are a useful diversification of risk that may enable the banking group to cover a retail banking loss. If the returns from nonretail activities tend to be higher, then this will help build up both retail market share and reserves. It is interesting that the European Union has found itself unable to agree on the way forward on this point, although a regulation was proposed in January 2014 (European Commission 2014). However, it is clear that it is not the economics of the problem that is driving the disagreement but rather the vested interests in the larger member states that do not want to change their existing approaches and banking structures.
20.2.3 A Macroprudential Approach

Before the GFC, central banks had become increasingly aware of the importance of financial stability as part of their mandate, as is evidenced by the emergence of financial stability reports (FSRs) from 1996 onward.37 It is not quite clear how this interest arose. In the case of the Nordic central banks, it was obvious, as they had endured crises of their own between 1986 and 1993 (except Iceland, whose crisis came in 2008). For the BoE and other inflation-targeting central banks, a secondary reason was increasing discomfort with asset price bubbles. Although there was some debate about whether house prices should be included in the
target (Goodhart and Hofmann 2007; Cecchetti 2006), this did not in the main extend to stock and other asset prices. Furthermore, while the general conclusion was that asset prices were an important information variable in the fight against inflation, it was not possible to use interest rates to tackle asset prices separately from targeting the general or consumer price level.38 What the GFC has emphasized is the role of financial cycles as causes of wider disruption to the real economy (Borio 2014). Traditionally, the financial sector had been seen as a shock absorber. One of the roles of financial markets was to transfer resources quickly and smoothly away from sectors and companies in decline in the event of a structural shock and toward the new sectors with growth prospects. Furthermore, financial markets offer a means of spreading the costs of the shock over a longer period of time, thereby making its impact less severe. Achieving this implied that financial markets needed to be flexible; hence substantial fluctuations in these circumstances could be seen as a sign of operating well rather than weakness and threat. It is probably Alan Greenspan’s remark about “irrational exuberance” in December 1996 (Greenspan 1996) that signals the start of the realization that overshooting could be building up problems. Ironically, though, Greenspan’s own approach of a somewhat restrained leaning against expansions followed by a vigorous tackling of downturns (Blinder and Reis 2005) is widely thought to have been a contributor to the GFC. Overshooting in other financial markets, such as foreign exchange (Dornbusch 1976), had been seen as a means of heightening the pressure on the real sector for change rather than as a force for building up problems. The tension with monetary policy therefore tended to increase.
The ECB, for example, saw an increase of nearly 100 percent in the exchange rate with the US dollar between 2002 and 2009, eliciting comments from President Jean-Claude Trichet that the changes were “brutal.”39 In the same way, the inflation target has been progressively amended in New Zealand to allow some of the concern over the exchange rate to be admitted. However, it has been housing markets where the biggest stress has occurred in many countries, as outside the United States and the United Kingdom, stock markets tend to be of more limited importance. The initial reviews of financial stability were rather more descriptive than analytic and in the case of Finland were designed to reassure depositors and financial markets that the strains and problems that led to the financial crisis were firmly in the past and that the banks were both well capitalized and much more efficient. Such reviews have rather more value when they are forward-looking (Čihák et al. 2012) and when they include stress tests where the degree of stress is substantial. By the time the financial crisis began to bite, a clear role for the central bank had been established (Schinasi 2006). Over the same period, the Bank for International Settlements (BIS) was becoming more interested in new financial products, with surveys of derivative instruments, but it was only shortly before the crisis that the BIS began to warn of threats to stability from financial imbalances and the growth of new instruments (Borio and Lowe 2002a, 2002b, 2004; Borio and McGuire 2004). Thus, while the discussion of financial stability and macroprudential concerns was well developed by the mid-2000s, it had not yet been
translated into systems of warning where those being warned could take specific actions to avoid problems. Borio and Drehmann (2009) suggest that the indicators then available could have been valuable for predicting the problems in the United States, but of course, the practice was that while there were concerns, they were not translated into action. That has now all changed, with considerable effort going into assessing the potential for instability and crises.40 The response has been fourfold. In the first place, countries have formalized responsibility for maintaining financial stability. In the second, they have set up bodies designed to detect risks early and to act on them, although diagnosis and action are not necessarily assigned to the same agencies. The European Systemic Risk Board (ESRB), for example, is responsible for diagnosis, but it is the ECB and the national authorities that are responsible for taking the requisite action.41 The arrangement with the Financial Stability Oversight Council (FSOC) in the United States is similar, but there the secretary of the Treasury has the chair, and with all the relevant agencies around the table, action could be readily coordinated.42 Third, they have established a list of potential tools that can assist with greater financial stability independent of monetary policy (though of course having regard to it). Last, there has been considerable investigation of how the structure of financial markets and financial institutions can be changed to make them less susceptible to instability. The emphasis has thus been mainly on the avoidance end of the spectrum. The IMF has spent a lot of effort trying to classify the macroprudential tools that are available. Claessens (2014), in a helpful summary, sets out five types of tools that can be used in three circumstances:
• Concerns over an excessive rate of expansion.
• Amelioration of the pressures of contraction.
• Contagion.
The five sets of tools are also well known but on the whole represent measures that were previously used by either supervisory authorities or central banks in emerging markets. What has become apparent is a consensus that this detailed and widespread intrusion into the workings of financial markets not only should be viewed as a sensible approach to maintaining financial stability but also should be part of the toolkit of central banks, even though in many cases the tools have to be applied by supervisors, which may not be the central bank. Pursuing Claessens’s classification a little further, the macroprudential tools involve:
• Restrictions on borrowers, instruments, and activities―probably the group most associated with active tools, such as loan-to-income ratios, loan-to-value ratios, and limits on sectoral lending and credit growth.43
• Restrictions on financial-sector balance sheets―such as liquidity requirements, reserve requirements, mismatches, and exposures.
• Capital requirements―particularly in a countercyclical manner.
• Taxation and levies.
• Institutional infrastructure―which covers most of the avoidance measures discussed in the previous section, such as those on resolution and LOLR and a range of others relating to accounting, disclosure, governance, product standardization, and the use of central counterparties.
Intervention on this scale may have wide ramifications, many of them unintended, as evidence on the effects of these measures, particularly in combination, is scarce. The first implication for the central bank is its involvement with detail. On the whole, central banks have not yet had to deploy this armory of weapons to restrict the upside of the cycle since the GFC. The RBNZ is an exception in that it had to handle a vigorous boom in the Auckland housing market, which represents around a third of the country, while the economy as a whole was on the edge of deflation, and the rest of the country was not showing the same pressures on housing. Thus, while the government and the RBNZ indicated that the primary problem lay in difficulties in expanding the supply of housing rapidly in the face of growing immigration, the RBNZ felt the need to intervene through limiting loan-to-value ratios in Auckland. However, since this was a supply problem, loans for new houses were exempted, and there were also incentives for first-time buyers. By late 2016, the measures had not turned the tide but had drawn the RBNZ into detailed intervention. While stock markets rebounded in New Zealand and elsewhere, central banks were reluctant to intervene, thus emphasizing the rather asymmetric attitude that exists toward asset prices and the build-up of financial pressures. Indeed, when there was a rapid decline in the Chinese stock market at the beginning of 2016, the authorities there intervened rapidly to stem it. The contrast between the approaches to financial stability and monetary policy is striking.
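The logic of an LTV restriction with a new-build exemption, of the kind the RBNZ applied in Auckland, can be sketched as follows. The 80 percent cap here is a hypothetical figure for illustration, not the RBNZ’s actual setting, and real regimes are more elaborate (for instance, using “speed limits” on the share of high-LTV lending rather than outright bans).

```python
# Hypothetical sketch of a loan-to-value (LTV) restriction with an exemption
# for new builds, mirroring the supply-side rationale described in the text.
# The 80 percent cap is an assumed figure, not the RBNZ's actual setting.

def ltv(loan: float, property_value: float) -> float:
    """Loan principal as a share of the property's value."""
    return loan / property_value

def loan_permitted(loan: float, value: float, new_build: bool = False,
                   cap: float = 0.80) -> bool:
    """A loan passes if its LTV is within the cap; new builds are exempt,
    since the policy aim is to restrain demand without choking off supply."""
    if new_build:
        return True
    return ltv(loan, value) <= cap

# An 85 percent LTV loan on an existing house is blocked,
# but the same loan on a new build is allowed.
print(loan_permitted(850_000, 1_000_000))                  # existing house
print(loan_permitted(850_000, 1_000_000, new_build=True))  # new build
```

The sketch also illustrates the chapter’s point about “involvement with detail”: enforcing even this single tool requires the authority to classify individual loans and properties.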
Not only had modern monetary policy become focused on a single objective and a single tool to execute it (quantitative easing being an extension of trying to influence interest rates), but the economic theory underlying it was largely straightforward. This is in contrast to a range of different measures and a much less clear understanding of the underlying economic mechanisms on the financial stability front. Furthermore, while monetary policy has been concentrating on extreme measures to get recovery going again, financial stability measures have continued to focus on calming the banking sector to try to make sure that it will be easier to avoid and cope with the next periods of financial stress.44 Thus, central banks are faced with contradictions both in the short run and in prospect and find themselves having to intervene in a detailed way that will tend to increase their politicization in the running of the economy. Much of the argument for greater independence in central banks was predicated not only on the need to escape from the political cycle but also on the ability to have a straightforward and transparent policy objective and means of execution for which they could readily be held accountable, to stop the independence being abused.45 Crisis-avoidance and crisis-management policies and actions do not offer the same clarity.
Emerging markets, on the other hand, have confronted these issues for much longer, as they have needed to use capital controls to try to stabilize the financial system. Although to some extent prudential controls can be used, relating to both domestic currency and foreign currency lending, capital controls have been required because of the importance of foreign financing. However, these same capital controls contribute to monetary stability (Eichengreen and Hausmann 2010). At a macroeconomic level, such countries are subject to sudden stops (Edwards 2004) where foreign lenders either cease to offer new lending or cut back by refusing to roll over existing lending. Whereas such lending is at the heart of a domestic banking system’s business, it may be peripheral for the foreign lenders. Even developed countries can potentially face this problem if their banking system is foreign-owned. Emerging-market central banks tend not to be as independent as their developed country counterparts, so the imposition of new controls is likely to involve the government as well.46 In the same way, central banks facing instability from major movements in commodity prices may encounter considerable conflict with their governments over such capital controls and maintaining exchange-rate pegs. As commodity prices fall, downward pressure on the exchange rate rises, and the government faces increasing fiscal pressure as revenues from the commodity producers and exporters fall.47
20.2.4 Lender of Last Resort

For central banks such as the BoE, part of their original raison d’être was to act as the bankers’ bank, a place where the banks could deposit surplus funds or, as in the present context, borrow against good collateral. Indeed, the banks have subscribed capital to this form of central bank, which would be a private- rather than a public-sector institution. However, the LOLR function is much more specific than this, in that it offers a source of confidence that however bad circumstances may get in financial markets, it will always be possible for solvent banks to meet their obligations, because the central bank is there to provide unlimited liquidity (i.e., liquid funds against good collateral). The provision of such facilities is now a normal function for all central banks, even where they did not start as a private institution.48 There is some debate (Lastra 2015; Capie and Wood 2007) about the exact terminology of such last resort lending. However, it clearly has to be on harsher terms than the market under normal circumstances; otherwise, the central bank would be the lender of first resort, as it is the most reliable counterparty in the whole financial system. The problem for central banks is that they can be certain neither of the solvency of the bank to which they are lending nor of the quality of the collateral they are being offered.49 The central bank thus faces a risk that it may make losses. Take sovereign debt, for example. A central bank will lend against the debt of its own sovereign. However, in the euro area, the central banks can lend against debt from any of the member states. In the early days of the Eurosystem, all such debt traded on virtually identical terms with very little premium compared to Germany. Until 2010, the risk from doing so, while
noticeably greater for the weaker countries after debt began to expand in the face of the crisis, was not really likely to lead to losses. However, until the three tranches of lending from the IMF and the European Union under the auspices of the “troika”50 to Greece and the similar agreements with Ireland,51 Portugal, and Cyprus, there was a real chance that there might be a default, especially since the second tranche of lending to Greece involved a “voluntary” write-down of Greek government debt held by the private sector (Christodoulakis 2015). Moreover, banks were and still are encouraged to hold this debt, because they do not have to hold capital against its default risk in the Basel system, enabling them to increase their leverage.52 The lending from the IMF and the European Stability Mechanism (ESM)/European Financial Stability Facility (EFSF)53 enabled these countries not only to raise new debt in the face of their ongoing fiscal deficits but also to replace debt from existing debt holders as it became due for repayment. The Eurosystem was and is a major holder of this debt. Banks in the euro area can lend to their governments through buying their bonds and then placing them with the Eurosystem central banks, thereby passing on the risk. Until the GFC, lending of this form was just part of servicing the requirement of the ECB that all banks should hold a minimum level of reserves with their respective central banks. In other words, it was nothing to do with the LOLR and was simply a facet of monetary policy.
LOLR takes place when the lending to the system is exceptional, and this has occurred with a series of special programs, longer-term refinancing operations (LTROs), which have increased in duration and size, culminating in the current quantitative easing (QE) (with the asset-backed securities and covered bond purchase programs in 2014 and the public-sector purchase program in 2015, which has been expanded twice). However, QE is really just monetary policy in the sense that it is designed to try to reduce the costs of borrowing. There can thus be an overlap between monetary policy and liquidity provision objectives, but this is less likely when the range of securities accepted under QE is narrower than the range accepted under LOLR.54 The conflict of objectives and the politicization of the ECB occurred clearly enough with traditional LOLR, labeled emergency liquidity assistance (ELA) in the ECB context (Goodhart and Schoenmaker 2014). When the troika was trying to sort out the terms for a third bailout for Greece in 2015, the ECB found itself at the front of the negotiations, because it was setting short-term limits for how much ELA financing the Greek banks could obtain, eventually on a weekly basis as depositors fled from the banks.55 Thus, it had a double hold over Greece, both as one of the troika negotiators over what the terms of the new package would look like and the conditions it would involve and also as the marginal lender that could trigger a Greek default. When the ECB refused to raise the ELA limit further in July 2015, it precipitated the conclusion of the negotiations, as the banking system shut down and, had this been prolonged, the government would have seen little alternative to default.56 The Eurosystem is thus facing a major exposure, although that exposure is really to its constituent governments, as the “shareholders” of the central banks and in turn of the ECB, or rather to their taxpayers.
At present, both monetary policy and the liquidity needs of the banking system are congruent. The ECB is facing the specter of
deflation and is therefore prepared to use QE in an attempt to drive the cost of borrowing lower and encourage lending, investment, and consumption to get the euro economy moving again. However, there is a much deeper debate underlying this; many see the QE and credit expansion as a backdoor means of financing the deficits of euro area governments, which the Eurosystem is expressly forbidden to do (Sinn 2014). Clearly, the two are interlinked. For example, when private-sector holders of Greek debt were written down in 2012, this included the Greek banks (Christodoulakis 2015).57 As a result, the Greek government had to make capital injections into several of the banks to keep them solvent. There was thus a round trip. Greek debt fell one day only to have to rise again the next, with the banks now in debt to the government rather than the government being in debt to the banks. The EFSF and the IMF thus took on some of the lending to the Greek government instead of the Greek banks, along the way helping to shore up the position of the ECB’s ELA. As a result of this round trip, there is now much more at stake, and Cyprus had to call for funding from the ESM. The problem has worked in the other direction as well, elsewhere in the European Union. The Irish government was prevented by the ECB from imposing some of the cost of its bank failures on bondholders and, as a result, has had to bear the full cost itself (Sandbu 2015). As a result, a bank crisis was turned into a sovereign debt crisis.58 This is a step into politics well beyond what had previously been normal for central banks. But, as stressed earlier, this is a natural result of crises where central banks are often the only players that can act because of their almost unlimited lending ability. With the return to more normal times, these abilities are likely to be unwound, as is discussed in section 20.4.
20.3 An International Problem

While central banks and supervisory authorities have been cooperating in setting standards through the Basel Committee framework at least since the Herstatt collapse in the 1970s, the true complexity of the problem of handling large international banks did not strike home until the GFC. While banks may be international, the powers to intervene in them are only national (Lastra 2011). The potential for conflict was, of course, clear (Eisenbeis 2007), as each country would have a duty to its own citizens first, but even if countries could agree on common aims (Schoenmaker and Oosterloo 2007), it is not at all clear that they would be able to cooperate in practice, however well intentioned they were. For example, they might not have the legal power to use equivalent instruments, where there was no justification in terms of domestic public benefit. Of course, there is not just the straightforward conflict of interest among the central banks and other authorities involved but also that between the banks themselves and the authorities. Thus, for example, the home authority in the country where the international bank is headquartered might wish to see a loss-making subsidiary closed, while
the host authority for that subsidiary might regard it as being of systemic importance and therefore essential to keep open (Mayes and Vesala 2000). The bank itself might very well have the same view as the home supervisor and want to exit from loss-making activities. Peek and Rosengren (2000) cite the case of the Japanese banks operating in New England, which rapidly cut back lending when they had problems at home.59 The same experience affected western European banks lending in central and eastern Europe when the GFC struck. The Baltic states were particularly affected, as most of their banks were western European owned. The problem for the host central bank or other supervisory authority is that the troubled bank is not breaking any regulations. An international bank that is compliant with the rules of registration is entitled to shut down its business (or at least shut it to new business) if it wishes. While it would no doubt wish to sell its loan portfolio and other going-concern parts of the business, it does not have to pay regard to the central bank’s wish to see lending expand to prevent the domestic economy from enduring a significant recession. It merely has to sell to parties that meet the central bank’s “fit and proper persons” tests. The Basel Committee (2010) has addressed the problems that are faced in cross-border resolution, and these have been translated into recommendations for action by national authorities (Financial Stability Board 2014b). A key ingredient is that systemically important banks around the world have now been identified (labeled systemically important financial institutions, SIFIs). Within that classification, they have been designated according to whether they are only important domestically (D), regionally (R), or globally (G). Then each institution is dealt with on a case-by-case basis, with each authority for which it is systemically important being involved.
The European Union has probably gone farthest in trying to get a workable cross-border arrangement with what it labels banking union.60 Other countries have come to the conclusion that there are only two solutions that will work in a crisis. One is that the bank and its assets and liabilities should be structured so that the home country can resolve the entire problem. This is the route the United Kingdom and the United States have chosen, which is currently labeled single point of entry (SPOE).61 The other is to make sure that all the systemically important parts of the bank that need to be resolved smoothly are legally and operationally separable in each of the relevant countries (Mayes 2006; Krimminger 2015). This multiple points of entry (MPOE) approach has been adopted by Australia and New Zealand, which have the same four main, systemically important banks, all headquartered in Australia.62 Of these, the SPOE is the only route that should work whatever the different authorities decide, unless a subsidiary authority were to opt to intervene in its jurisdiction in a way that derailed the home country resolution.63 An example would be closing a subsidiary that was vital to the operation of the group as a whole or at least to other essential businesses within the group.64 Nevertheless, it is quite difficult to think of such examples. MPOE should also work, provided that all jurisdictions establish the required separability. Here one can think of circumstances where the different jurisdictions can effectively impose costs on one another through the particular choice of resolution method they make. For example, in the Australia and New Zealand case, if the New
Zealand authorities decide to bail in a subsidiary, the Australian parent will bear the first loss as the owner. The EU solution is untried and faces a number of potential difficulties, which I outline below. However, a prior concern is simply that these solutions are being driven by practicality, rather than the minimization of losses on some basis of equity. Or, to put it differently, the solution is driven by law rather than economics. There is no particular reason to expect that MPOE will minimize the costs of resolution of the banks in Australia and New Zealand. Its big advantage is that it should be possible for each country to achieve smooth resolution without operations ceasing, irrespective of what the other country decides to do. In resolving banks that are purely national, the authorities have a variety of objectives. The UK Banking Act 2009, for example, has five objectives in the use of resolution powers: to ensure the continuity of critical functions of banking institutions, to protect and enhance the stability of the financial systems of the United Kingdom, to protect and enhance public confidence in the stability of the banking systems of the United Kingdom, to protect depositors and investors, and to protect public funds. However, these do not among them amount to any clear direction regarding priority or how any trade-off is to be made.65 In the US case, if the FDIC were to remain a junior creditor when it is subrogated to the claims of the insured depositors in an ensuing set of insolvency proceedings, then minimizing its own losses would be pretty much the same as minimizing the losses under a normal insolvency. However, the FDIC is a preferred creditor, so the insured depositors’ claims are moved up the hierarchy of creditors. There is therefore frequently litigation in such insolvencies between the noninsured depositors who remain junior claimants and the FDIC, because their losses may be increased as a result.
The European Union gets around this by having a “no creditor worse off” clause in its bank recovery and resolution directive.66 This is particularly important for systemically important banks, as maintaining systemic stability will be part of the objective of the Single Resolution Board in deciding on the resolution method. In other words, it is the losses to the economy as a whole that drive the choice of resolution, not simply the direct losses to the claimants.67 The knock-on consequences of actions form a very significant part of the overall cost, as these depend on the ability of the initial losers to absorb the losses. For example, if a bail-in were used and the creditors bailed in were mainly hedge funds or other rich investors, then the further impact of the loss on GDP would be small. These parties are unlikely to change their consumption patterns much just because their financial wealth has fallen. Their portfolios will normally be reasonably diversified. If, however, the loss falls on ordinary depositors (as it would under OBR in New Zealand), the knock-on effect would be much greater, as these bank accounts are likely to be a major part of many such people’s savings. Those who are already retired could suffer a major loss of income and hence have to cut back on consumption, thereby triggering knock-on losses to the firms that would normally sell to them. The same would happen if pension funds were bailed in and pensions fell or contributions rose as a result.68
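As a purely illustrative aside, the creditor hierarchy just described can be sketched as a simple loss waterfall: junior claims absorb losses first, and a class is touched only once all more junior claims are exhausted. The claimant classes and figures below are hypothetical and correspond to no actual bank, statute, or resolution regime.

```python
# Illustrative sketch only: allocating a failed bank's losses down a
# creditor hierarchy, most junior claims first, in the spirit of a
# bail-in such as New Zealand's OBR. All names and figures are hypothetical.

def allocate_losses(hierarchy, total_loss):
    """hierarchy: list of (claimant_class, claim_value), most junior first.
    Returns each class's claim, absorbed loss, and recovery."""
    remaining = total_loss
    outcome = {}
    for claimant, claim in hierarchy:
        absorbed = min(claim, remaining)   # a class loses at most its claim
        remaining -= absorbed
        outcome[claimant] = {"claim": claim, "loss": absorbed,
                             "recovery": claim - absorbed}
    return outcome

hierarchy = [
    ("shareholders", 40),        # equity is wiped out first
    ("subordinated debt", 20),
    ("senior unsecured", 100),   # here includes uninsured depositors
    ("insured depositors", 140), # senior if deposit preference applies
]
result = allocate_losses(hierarchy, total_loss=75)
# Shareholders (40) and subordinated debt (20) are wiped out; senior
# unsecured creditors absorb the remaining 15, i.e. 15 cents on the
# dollar; insured depositors are untouched.
```

A "no creditor worse off" check would then compare each class's recovery here against its estimated recovery in an ordinary insolvency, compensating any class that fares worse under resolution.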
Changing Role in Crisis Avoidance and Management 603
It is thus a nontrivial issue to work out where the losses will fall and what the international distribution of the consequences will be. The shareholders and bondholders may be anywhere in the world, although home country bias suggests they will be disproportionately in the country of issue. There will thus be knock-on problems for central banks and other authorities that cannot be handled by any ex ante attempt to minimize total losses. Only the initial loss can be managed by the responsible authority, whether this is under SPOE or MPOE.
20.4 The Enhanced Role of the Central Bank
As the previous analysis shows, a whole range of pressures has led to an enhanced role for the central bank as a result of the issues raised by the GFC and the routes to their solution that have been adopted. While the United Kingdom and New Zealand (among others) have come to a very straightforward solution, whereby the central bank is responsible for everything except committing taxpayer funds to a resolution, the European Union has come to a much more complex conclusion, dictated by practicalities rather than economics.69 The United States and Canada have maintained their position whereby multiple agencies have responsibility for the steps in the process.70 In Canada, the Office of the Superintendent of Financial Institutions retains responsibility for supervising banks, and the Canada Deposit Insurance Corporation is responsible for resolution, while the Bank of Canada is the LOLR and is responsible for financial stability. The disadvantage of concentration of responsibilities is the possibility of conflicts of interest, particularly in a crisis, although the BoE attempts to keep the various functions separate until reaching the topmost decision-making bodies: the Monetary Policy Committee and the Financial Policy Committee. The principal conflict for the central bank is not so much between monetary policy and its exposures as a lender to banks, which is emphasized in the literature, but between its roles as a lender and supervisor and its role as a resolution agency. As a lender, it would prefer that banks not close if delay will increase the chance of not having to realize losses. Furthermore, needing to apply a resolution implies a prior failure of supervision, which a supervisor would rather not acknowledge. But the job of the resolution agency is to minimize the costs, which normally entails acting early.
In small countries, the practical advantage of concentration is very obvious, particularly for resolution, since a resolution authority is like a fire brigade: well equipped, well trained, but hoping that it will not be called on. In a small country, resolution staff can be allocated to other tasks in good times, and then the central bank can call on a large group of trained people immediately should the need arise. The European Union is not small, but it is constrained by the treaties that enshrine its constitution. Changing these treaties is not to be undertaken lightly; not only is a
unanimous decision required from all twenty-eight members, but other difficult issues might be opened up beyond the immediate concern. For that reason, the European Union sought measures that it could undertake within existing competencies when reacting to the GFC. Thus, improving the ability to avoid crises by increasing capital requirements and enhancing other aspects of supervision could be done through new regulations and directives. Here not only could the specific item alone be addressed, but passing it into law requires only a qualified majority, not unanimity.71 Expanding responsibilities was a different issue. It was possible to set up the ESRB readily because it is an advisory body and cannot compel the member states to act on the risks it identifies. The best it can hope for is that by drawing attention publicly to deficiencies, those named will be shamed into action. However, trying to organize action on resolution at a European level was a different matter. First of all, any joint responsibility in resolution would presuppose a joint responsibility for supervision. Otherwise, one member state could be responsible for paying for the consequences of poor supervisory actions by another. It was particularly important that all existing problems be sorted out before member states would take joint responsibility for any future problems. A sensible way forward would have been to create new independent EU-level institutions. Mayes, Nieto, and Wall (2011) and Schoenmaker and Gros (2012), for example, have suggested that the European Union could follow a US-style model, with a European-level supervisor, born out of existing supervisory cooperation through the European Banking Authority, and a new European Deposit Insurance Corporation that handled both deposit insurance and resolution. However, these would have required the treaty renegotiation route.
What the European Union did instead was to make use of a provision in Article 127(6) of the Treaty on the Functioning of the European Union (TFEU), which allowed the ECB to take responsibility for bank supervision under unanimous agreement. The drawback was that this scope applied only to banking and not to the whole of financial supervision, and that the ECB can only take decisions for the euro area―or, more literally, only the euro area members are on its Governing Council, so other countries would be in a secondary position.72 There is, of course, no necessary link between membership of the euro area and the scope of the major international banks that operate in the European Union. In particular, London is the most important banking and financial market, and the United Kingdom is not a member of the euro area or likely to become one in the foreseeable future.73 Hence, on the one hand, the United Kingdom was not likely to agree to ECB responsibility for its banks if it was not represented, and on the other hand, the means of incorporating nonmembers of the euro area has had to be a contortion (Huertas 2015). The ECB has set up a supervisory board on which the euro area and all other states that choose to join are represented. But the Governing Council retains ultimate responsibility. Hence, if the Governing Council makes a decision to which euro nonmembers object, they are left with a choice between caving in and leaving if the resolution of the dispute does not succeed. This system of supervision, known as the Single Supervisory Mechanism (SSM),74 puts the ECB in charge of supervision of all banking operations in the euro area plus those of any other member state that chooses to join.75 However, the ECB only
supervises directly the 130 or so largest banks that might be deemed of systemic importance. All the remaining smaller banks and the nonbanking parts of the largest groups are still supervised by the national authorities, although the ECB has ultimate responsibility over the banking operations. Thus, the coverage is cumbersome. Those countries that do not opt in are not covered; nonbanking operations are not covered; and banks headquartered outside the European Union are covered only for their activities within the euro area. If all goes well, this will not be a problem, but the opportunities for difficulties and disagreements between national and EU-level authorities are considerable (Binder 2015). One really has to turn the discussion around and ask whether this is an improvement over relying on national supervisors to cooperate. It is difficult not to conclude that while this is well short of a well-structured EU/EEA-wide system, it is still a lot better than largely unenforceable cooperation, however well intentioned. Setting up a resolution authority was equally difficult, as it had to be “independent” yet subject to existing institutions, since under the Meroni doctrine (Chamon 2014), EU institutions cannot delegate their responsibilities. There is therefore a separate Single Resolution Board, based in Brussels, that covers the same countries as the SSM but is subject to approval of its decisions by the European Commission. In turn, if the European Commission were to object, then the European Council has a say, to maintain the balance of powers between the member states and the institutions. As Huertas (2015) points out, this results in a potentially unworkable arrangement for crisis decision-making.
The protracted decision-making could use up too much of the time available before the bank has to reopen for the resolution to be completed.76 Of course, none of these systems comes into force instantly. The new EU/EEA-wide standards come into force progressively, with amendments necessary for the EU version of the latest FSB standards on TLAC.77 As part of the bank recovery and resolution directive and its associated single resolution regulation,78 each member state has to create a resolution fund to assist with the costs of bailing in. These will be created progressively until 2024, and over that period, the funds of the euro area members will be progressively mutualized into a Single Resolution Fund (Binder 2015). In the meantime, the ECB has conducted a review of the asset quality of the main banks, and only as the banks come up to the required standards will they become part of the full system. In other words, legacy problems have to be resolved before resolution can be fully mutualized. In running supervision of banks, a central bank faces little conflict outside crises, whether it is a single (large) bank that is in trouble or a number of smaller banks. While the exposure to loss through the LOLR may conflict with monetary policy objectives, we have already seen from the case of the ECB that the central bank may feel it needs to take actions that lie right at the limit of its powers. This has been mainly because there is no countervailing force in government that has been prepared to act. In the case of the European Union, this is understandable, as there is no EU-level government, and neither the Commission nor the Council of Ministers may have the power or the ability to generate enough consensus to act vigorously. Thus, while TARP in the United States was introduced within days of the meltdown of the system in September 2008, it was not
until June 2010 that the European Union got an equivalent program in place with the EFSF and later the ESM. And even then, the ability to use the funds to help recapitalize the banking system was not clear. In the meantime, the central bank has an obligation to do what it can; whether it is justifiable or not, it will get blamed for insufficient intervention. Thus, there has been a major focus on monetary policy easing, both because it has not proved possible for governments to address the problem of weak banks adequately and because governments have been cautious about how much they are able or willing to use fiscal policy to restore confidence and encourage investment to achieve the structural changes needed (Sandbu 2015). It remains to be seen whether there will be strong criticisms of these actions subsequently, in the same way that Taylor (2009) has criticized both the ECB and more particularly the Federal Reserve for contributing to the GFC by running too easy a monetary policy in the period 2001–2004 after the collapse of the previous boom. While a major aspect of crisis avoidance comes through running policies that smooth the economic cycle, if this in itself leads to more volatility in the financial cycle, then it may be somewhat counterproductive and leave the central bank with a difficult trade-off in objectives. Bean (2016), for example, suggests that the need for unusually low interest rates (and QE) for so long raises the risks for financial stability, where the prospects for reducing those risks through macroprudential measures are still moot. He points out that the United Kingdom has addressed this trade-off, and the chancellor of the Exchequer has advised the Monetary Policy Committee that it should continue to focus monetary policy on price stability unless the range of measures addressed in this chapter fails to maintain financial stability. Part of the problem comes from asymmetry in the approach to the phases of the cycle.
While this has been set out clearly for monetary policy by Blinder and Reis (2005) in their description of the “Greenspan standard,” there is an analogue in the treatment of financial cycles as well. There is a clear reluctance to intervene in the up phase, because it is difficult to decide how much of the increases in credit and asset prices is justified by changes in longer-term factors. Macroprudential interventions may be distortionary and are likewise used with reluctance. Microprudential interventions in individual institutions have long been characterized by forbearance. Thus, the bookend analogy mentioned at the beginning of this chapter might need to be viewed slightly differently. The crisis-avoidance measures could represent an attempt to reduce the asymmetry of the process by trying to make the build-up of pressures more difficult. The resolution processes, however, represent an attempt to act swiftly so that the downturn is shorter and shallower than before. Thus, in one sense, this increases the asymmetry, in that action comes earlier, but with the intention of making the downturns themselves less onerous. In any event, what we have been seeing is increasing intervention by the central bank in the economy: by expanding the range of monetary policy tools, with major expansions of central bank balance sheets in most of the main economies; by impinging more firmly on the banking sector, particularly in prospect when bail-ins are required; by employing macroprudential tools; and, in the case of the ECB, by being involved through
the troika in setting the agenda for fiscal and structural reforms and in monitoring their progress. In part, this has been a reaction to political difficulties in taking action through other routes. The rather odd structure of the institutions set up in the European Union in the last few years is a case in point (although if the United Kingdom were to leave, the oddity would be substantially reduced). But the increasing role of the central bank is likely, at some stage, to generate a pushing back by governments and parliaments to involve other agencies and to make the process more democratically accountable. Not merely is it likely that some strong actions by central banks will prove unpopular, but there are bound to be errors, and with an increasingly politicized role, such errors may generate a political response. Rather than compromise their independence, which has become central to the credibility of monetary policy, central banks may themselves seek some division of responsibility, if only to restrict the conflicts of interest.79 Either way, it seems likely, with the additional responsibilities placed on central banks by and since the GFC, that central banks are at the apex of their power at present.
Notes
1. Given the nature of the GFC, the focus in this chapter is on crises related to banks and other financial institutions, not foreign exchange crises. There were substantial changes earlier in several emerging-market central banks following the 1997 Asian crises. 2. http://www.bis.org/bcbs/history.htm. 3. Sveriges Riksbank and Norges Bank began producing regular half-yearly FSRs in 1997, with the Bank of England beginning a year earlier. See Čihák 2006 for a survey. 4. While Čihák et al. (2012) find that publishing an FSR, per se, does not improve observed financial stability, the more sophisticated ones do appear to be associated with greater financial stability. 5. Geithner (2014) argues that the US authorities did not have the power to intervene to resolve Lehman Brothers without insolvency, although a government bailout could have been possible with congressional support. 6. When the banking system collapsed in Iceland, the authorities managed to act by passing the enabling legislation one day and using it the next (October 6–7, 2008). Most countries find it more difficult to pass even emergency legislation, as was the case with the Troubled Asset Relief Program (TARP) in the United States. 7. I am grateful to the reviewer for pointing out that just because what is supposed to be done in a crisis is clear, it does not entail that in the heat of the crisis, when there is a need to act rapidly on insufficient information, the plan will be followed. The apparent clarity may turn out to be illusory. 8. See Mohanty 2014 on the case of Africa, for example. 9. It is notable that many of the main features of the pre-GFC system stemmed from changes made in the wake of the last great financial crisis of 1929. This is no surprise, as it tends to be only disasters of that scale that lead to major action. 10.
Prior to the FDICIA, the approach had tended to be to prop up failing banks using the insurer’s funds or to help organize amalgamations (Federal Deposit Insurance Corporation 1997).
11. The same applies to Chapter 11 resolution in the United States and to other cases where the debtor tries to effect a resolution with the agreement of the creditors―as Lehman Brothers illustrated. 12. See for more details https://www.fdic.gov/regulations/laws/rules/8000-2400.html. It is important to recognize that while having such a lex specialis may not be costly for the authorities, and the costs of implementing it can be recouped from the creditors in a resolution, the banks themselves have to bear extensive costs in structuring themselves so that such a rapid resolution is possible. In particular, they need to have computer systems in place such that assets and liabilities can be reorganized overnight, with the bank functioning the following morning (Harrison, Anderson, and Twaddle 2007). 13. The BIS (Bank for International Settlements 2009, chap. 2) provides a whole range of useful statistics on the objectives and functions of central banks. 14. The issue of the relative merits of central bank involvement in the supervision of individual banks is covered here in chapter 15 and lies beyond the purview of the present chapter. Even if the central bank is not the supervisor, it clearly benefits from easy access to supervisory information in its responsibility for monetary policy and more particularly in its role as guardian of the stability of the financial system as a whole. 15. Financial Stability Board 2014b. 16. Sandbu (2015), for example, is extremely critical of the way the European Central Bank forced Ireland and Greece to bail out bank bondholders, thereby placing the burden on the domestic government rather than spreading it around to other countries. This he regards as subverting the democratic rights of the two countries. 17.
The FDIC (2011) suggests that with an orderly resolution, the $40 billion or so of losses that Lehman made would have been met mainly by the shareholders, and while the subordinated debt holders would have been wiped out, other unsecured debt holders would have lost only 3 cents on the dollar, whereas the loss in practice was more like 80 cents, not to mention the knock-on effect on the US and world economies, estimated at trillions of dollars. 18. The New Zealand scheme, open bank resolution (OBR), only writes down shareholders and then creditors to the point where the losses are extinguished (see Hoskin and Woolford 2011 for a summary of OBR). It does not then recapitalize the bank by converting debt into equity. The reason for this is that there is no guarantee that the new owners of the bank that are created by this process will be either suitable or indeed willing in the event of a conversion. In these tense circumstances, the new owners need to be well qualified to complete the turnaround to continuing profitability. It is therefore proposed that the statutory manager who runs the bank on behalf of the authorities should seek a capital injection from suitable established financial institutions, which in the New Zealand case would almost certainly be foreign owned. 19. While the US version of bailing in (the United States has not used that term) has been around for a long time, experience of it elsewhere is limited, and it is interesting how much store has been set by it without experience. In Denmark, the two examples of bailing in (Poulsen and Andreasen 2011), admittedly with small banks, posed many difficulties and were controversial. Much of the problem stemmed from having to value assets in a hurry and hence needing to choose a conservative value, as greater value can be returned to creditors later on, but any overvaluation will become a loss for the authorities since further losses cannot be imposed on the creditors.
20. The BoE’s note on the action is available at http://www.bankofengland.co.uk/financialstability/Pages/role/risk_reduction/srr/previousunder.aspx. 21. The “ideal” outcome is illustrated by the decline of the Midland Bank in the United Kingdom and its takeover by HSBC in 1992, with the whole process remaining in the private sector and being completed before the bank becomes a serious concern to the authorities. 22. In the European environment, a lot of store was set by the Pafitis case in 1996, Panagis Pafitis and others v. Trapeza Kentrikis Ellados AE and others (Lastra 2011). 23. As Garcia (2012) points out in a review of the major loss reviews published by the FDIC relating to the first part of the GFC, many of the problems in the banks that failed in the United States were clear beforehand, but the supervisors chose not to act―a straightforward example of what is known as forbearance. 24. These plans and resolution plans are often referred to as living wills in the sense that they set out what should happen to the bank when it is in difficulty and may require the authorities to act on its behalf. 25. In the United States, a part of the plan is public, but it does not contain enough information to make an informed judgment. 26. There has also been external concern about the plausibility of such plans; see Pakin 2013, for example. 27. The Financial Stability Board (2014a) has developed the concept of total loss-absorbing capacity (TLAC), while the European Union has developed its own related concept of a minimum requirement for own funds and eligible liabilities (MREL), where the idea is that a bank must hold enough equity plus other unsecured liabilities that can be bailed in during a crisis so that a systemically important bank can keep operating. The range of such ratios is likely to run from around 16 to 24 percent (Krimminger 2015).
There has been an increasing preference for issuers to write the possibility of being bailed in into the terms of the security itself rather than to set out, as the RBNZ has done (Reserve Bank of New Zealand 2013), which securities could be written down and how. 28. The issuer’s statement about the securities, which are described simply as “capital notes,” is available at http://www.anz.co.nz/resources/2/f/2f761f86-3664-43ee-8c26-fab144ea7a5f/deedpoll.pdf?MOD=AJPERES. 29. As Calomiris and Herring (2011) point out, it is a tricky balancing act to ensure that the incentives for both existing shareholders and the CoCo holders, who will become shareholders if bailed in, do not result in instability in the market as one group tries to transfer losses to the other. The degree to which the existing shareholders will be diluted by the bail-in is crucial. Goodhart (2010) argues that the instability is inherent, and hence the instrument may be flawed. 30. In practice, central banks normally avoid such direct purchases of lower-grade securities. 31. If depositors can be bailed in, then there is a serious danger of a general bank run, as ordinary depositors will be poorly informed and will tend to think that if one large bank could be in trouble, then there may be problems with the others. If this is a period of low interest rates, as is likely in a crisis, the rush to cash is both understandable and relatively costless (Mayes 2015). 32. It is notable that bailing in does not normally apply to derivative transactions, and these contracts are likely to be maintained even where senior unsecured bondholders are having to bear a substantial loss.
33. The technique used for resolving the problem is also instructive, as the government first offered a $500 million tranche of support against possible losses. It was then able to organize a split of the company whereby IAG (a major Australian insurer) bought all the non-earthquake-related insurance, and the government set up a new company called Southern Response to handle the earthquake claims, thereby reducing its exposure to around $100 million. This extends a version of bank-resolution techniques to the insurance sector. 34. The government paid what it regarded as the market price at the time, leaving the Bank of Finland with the loss. This is discussed in more detail in Mayes 2016. 35. § 619 (12 U.S.C. § 1851) of the Dodd–Frank Wall Street Reform and Consumer Protection Act, known as the Volcker rule because Paul Volcker, former chairman of the Federal Reserve Board, proposed it. 36. High-level Expert Group on reforming the structure of the EU banking sector (2012), available at http://ec.europa.eu/internal_market/bank/structural-reform/index_en.htm. 37. Čihák (2006) analyzes reports from central banks in forty-seven countries that were published by the end of 2005, suggesting that with this rapid rate of growth, many other central banks would have also been contemplating publication and hence amassing similar material. 38. Cecchetti (2008) tries to take this further by suggesting that the avoidance of extreme events is effectively an additional objective for central banks alongside monetary policy and that this can be tackled by a “risk management” approach. However, the evidence he puts together for the relationship between financial structures and such instability is somewhat inconclusive but does suggest that it is greater where the role of bank lending in finance is greater. 39. See https://www.ecb.europa.eu/press/pressconf/2007/html/is071108.en.html for example. 40. See Detken et al.
2014 and European Systemic Risk Board 2014 for the case of the ESRB. 41. Before the ECB took on responsibility for supervision of banks in the euro area with the Single Supervisory Mechanism (SSM), the responsibility for action was even more diffused. 42. The members are the secretary of the Treasury (chair), chair of the Federal Reserve, comptroller of the Currency, director of the Consumer Financial Protection Bureau, chair of the Securities and Exchange Commission, chair of the FDIC, chair of the Commodity Futures Trading Commission, director of the Federal Housing Finance Agency, chair of the National Credit Union Administration Board, and an independent member (there are five other nonvoting members). 43. Claessens does place in this category the Volcker rule and the Vickers requirements as well, which were described as structural crisis-avoidance measures in the previous section. 44. The monetary policy and financial stability measures are in direct contradiction, as monetary policy has been trying to get banks to expand lending by providing liquidity while financial stability measures have been seeking to increase capital buffers, which implies, at least in part, deleveraging. 45. I do not develop arguments about the rise of independence of central banks here, as that is the topic of chapter 3. Nevertheless, it is likely to be more than coincidence that the focus on new policy areas and the use of new instruments have come as central banks have enjoyed increasing independence. Prior to that, governments would have found it more difficult not to act themselves. The exercise of independence has been a two-way street. To
some extent, governments have been able to avoid taking politically risky measures themselves and have been able to rely on the central bank. 46. Responsibility for operating existing controls tends to lie with the central bank. 47. Such problems are asymmetric; with rising revenues, a sovereign wealth fund can absorb the increased revenues. Because of its size, the sovereign wealth fund’s relationship with the central bank is likely to be close (as in Kazakhstan and Norway, for example). 48. Some currency boards will not be able to perform this lending if their only capital is tied up in backing the currency. Similarly, some central banks may have reached the limits of their ability to provide any financing if the country cannot raise further debt. 49. This debate is nontrivial, because it distinguishes providing liquidity to the market in general from lending specifically to troubled institutions. The traditional LOLR (Castañeda, Mayes, and Wood 2015) is provided to the rest of the market when a major player defaults, to prevent the failure of the one resulting in knock-on failures of its solvent but temporarily illiquid counterparties and onward through the financial system. Unless lending patterns are heavily concentrated, the number of knock-on failures is normally likely to be very small if confidence is maintained. While Continental Illinois in the United States was bailed out in 1984 through the fear that the knock-on failures would be too large to contemplate, subsequent analysis (Wall and Peterson 1990) has suggested that there would have been virtually none had the failure been allowed and resolution taken place under the FDIC’s normal rules―if the Fed had provided all the necessary collateralized lending to keep the system operating and maintain confidence.
As the work of de Bandt and Hartmann (2000) shows, direct contagion in markets where the failure of one party to pay leads to the failure of counterparties is quite small. It is indirect contagion, where people withdraw from the market because they are worried that counterparties may not pay them back, that causes the major damage, as illustrated all too vividly by the GFC. 50. The troika was the term coined by the media and used increasingly widely for the three parties involved in such lending: the European Commission, the IMF, and the ECB. 51. The problems for Ireland are covered in Honohan 2010. 52. At least under Basel 3, there is a minimum leverage ratio as well as the risk-weighted capital ratios, although some countries, such as Australia and New Zealand, have decided not to apply it. There is, of course, extensive debate about how much capital banks should hold against sovereign debt if this is not highly rated. A higher weight would both make banks somewhat less risky and place pressure on the sovereigns to run more prudent fiscal policy. 53. https://www.esm.europa.eu/. 54. A monetary policy concern could only be distinguished from a macroprudential concern in recent circumstances if lending and related behavior were normal or the central bank was not close to the zero bound. As pointed out by Bean (2016), though, the zero bound in practice can differ from zero. 55. While technically it is the local central bank that provides the ELA, the ECB has to authorize it, with an emergency exception. 56. As soon as the new lending package was agreed through the ESM, the ECB then raised its ELA cap. 57. A second consequence was that the assets of the two main Cypriot banks were also written down, precipitating the crisis in Cyprus, as the total write-down of assets amounted to 23 percent of Cypriot GDP (Phylaktis 2015). This emphasizes the international aspect of crisis resolution, which is dealt with in section 20.3. It is worth noting here, though, that it
was not one national central bank causing a problem for another but the troika causing a problem for another member state. 58. This is in stark contrast to Iceland, where pushing the burden onto bondholders by introducing depositor preference the day before the resolution of the Icelandic banks and even postponing the repayment of insured depositors helped the government avoid default (Jännäri 2009). Of course, the bondholders appealed the decision all the way to the Supreme Court and the European Free Trade Association Court. 59. This problem, as it relates to central banks in emerging markets, has already been referred to. 60. Banking union, discussed in more detail in section 20.4, is an attempt to move the regulatory and resolution system in the European Union to the level of the United States, where, although the constituent states retain considerable powers and their own institutions, the problems of needing to act across borders in a single and coherent manner are addressed. While falling well short of what one might expect from a full banking union as exists in, say, Germany (Mayes and Wood 2016), the EU arrangements involve not merely harmonizing the increased capital and other crisis-avoidance measures but also harmonizing resolution tools and powers. For the euro area, and other countries that choose to join, banking union centralizes supervision through the ECB and resolution through a Single Resolution Board in Brussels. See European Banking Authority 2015 on the “single rulebook” and notes 68 and 76 below for the regulations on the single supervisory and resolution mechanisms.
While some principles for allocation can be set out in advance, it is likely that the particular circumstances would be arguable, and there is no time in a crisis for having such an argument. Some authority has to be responsible, and it has to act promptly. 63. This assumes, of course, that there is enough loss-absorbing capacity in the parent in the first place. The recommended levels are such that they would cover the losses made by all of the large banks that failed in the GFC, with the exception of Anglo-Irish, whose losses were just 1 percent larger. So we can reasonably expect that the chance of SPOE requiring an injection of public money is very small. The use of public money is, of course, very contentious with an international bank, as the government of the parent’s country is likely to be effectively bailing out activities in other countries without requiring their governments to do likewise. There are other problems; there is a question of whether CoCos are only triggered in relation to the parent or whether they can relate to the subsidiaries (Carmassi and Herring 2013). A group with a troubled subsidiary might well prefer to trigger a CoCo relating to that subsidiary rather than one relating to the group as a whole, but that would depend on there being different triggers in the two markets. 64. Within a national system, the national resolution authority can compel a part of the failed bank to stay operating if it is essential for the continuing operation of vital functions of the bank, even if it is in a part of the bank that remains in insolvency (as set out in the UK Banking Act of 2009, for example). 65. In justifying their resolution rules, the RBNZ (Reserve Bank of New Zealand 2012) and the European Commission (2012) set out expected gains (and indeed costs for the subject banks), but they do not attempt to estimate the impact of the gains and losses across the economy as a result of using the different resolution techniques.
The central bank will therefore be very exposed the first time it uses these tools and seeks to follow any route that does not clearly maximize the return to creditors, as insolvency would.
66. If any creditor were to be worse off than it would be in insolvency, it would have to be compensated, and each jurisdiction has to have a resolution fund so that restitution can be made after an independent valuation. “Regulation (EU) No 806/2014 of the European Parliament and of the Council Establishing Uniform Rules and a Uniform Procedure for the Resolution of Credit Institutions and Certain Investment Firms in the Framework of a Single Resolution Mechanism and a Single Resolution Fund and Amending Regulation (EU) No 1093/2010,” http://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32014R0806&from=EN. 67. That said, it is not at all clear from the legislation how the wider costs to the economy or the excess burden on the creditors will be evaluated. Creditors have to be bailed in up to 8 percent of liabilities before resolution funds can be used. Bank Recovery and Resolution Directive: “Directive 2014/59/EU of the European Parliament and of the Council of 15 May 2014 Establishing a Framework for the Recovery and Resolution of Credit Institutions and Investment Firms and Amending Council Directive 82/891/EEC, and Directives 2001/24/EC, 2002/47/EC, 2004/25/EC, 2005/56/EC, 2007/36/EC, 2011/35/EU, 2012/30/EU and 2013/36/EU, and Regulations (EU) No 1093/2010 and (EU) No 648/2012, of the European Parliament and of the Council, OJ L 173 of 12 June 2014.” It will probably require some cases before anyone can discover how this is likely to play out in practice. Even so, assessing any compensation will be very difficult. Even though an independent valuer is to be appointed under the terms of the United Kingdom’s implementation of the BRRD (the 2009 Banking Act as amended), it would be necessary to work out the value of the whole range of assets of a complex bank under hypothetical circumstances, which sounds like a recipe for litigation. 68.
Mayes 2015 offers an assessment of the impact of different resolution approaches on the wider economy. 69. Since New Zealand does not have deposit insurance, the RBNZ does not, of course, have responsibility for it. When deposit guarantees were introduced temporarily in the GFC, the Treasury, not the RBNZ, was responsible, with rather disastrous consequences (New Zealand Auditor-General 2011). 70. Indeed, in the United States, the Dodd-Frank Act has reduced the powers of the Federal Reserve somewhat, removing what have been regarded as loopholes that enabled it to resolve the problems in Bear Stearns, among others. 71. Passing EU legislation requires a simple majority in the European Parliament but a qualified majority in the European Council, where each country’s votes are roughly proportional to its population. Such a qualified majority is a double majority in the sense that a majority of countries (55 percent) must be in favor, and they must represent 65 percent of the population (see http://www.consilium.europa.eu/en/council-eu/voting-system/qualified-majority). 72. The United Kingdom and other noneuro countries were prepared to agree, because the ECB would not have any authority over the supervision of their banks unless they chose individually at some future date for that to happen. 73. In a referendum on June 23, 2016, the United Kingdom voted to leave the European Union. At the time of this writing, the process of leaving the European Union, which begins with the invocation of Article 50 of the TEU, has not commenced. There is therefore no indication of what form of future association might be negotiated for the financial sector, although one would expect that the City of London would press for a relationship that enabled it to continue to access all mechanisms, including those inside the euro area. Negotiations can take two years from the time the process is triggered (longer with unanimous agreement), so
the result may not be known until 2019. Moreover, exit is not a certainty, as the United Kingdom could change its mind. It is therefore not worth speculating on the nature of the outcome. 74. “Regulation (EU) No 1022/2013 of the European Parliament and of the Council of 22 October 2013 Amending Regulation (EU) No 1093/2010 Establishing a European Supervisory Authority (European Banking Authority) As Regards the Conferral of Specific Tasks on the European Central Bank Pursuant to Council Regulation (EU) No 1024/2013.” https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A32013R1022. 75. There is a wrinkle here, as the whole of the European Economic Area (EEA), including Iceland, Liechtenstein, and Norway, is subject to the EU rules on banking standards, but the non-EU members appear not to be eligible to join the SSM, which means that their part of any systemic banks would be excluded from participation. 76. It certainly would make overnight resolution impossible if any party were to object. 77. See n. 24 above. 78. “Regulation (EU) No 806/2014 of the European Parliament and of the Council Establishing Uniform Rules and a Uniform Procedure for the Resolution of Credit Institutions and Certain Investment Firms in the Framework of a Single Resolution Mechanism and a Single Resolution Fund and Amending Regulation (EU) No 1093/2010.” http://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32014R0806. 79. While it is difficult to avoid the potential conflict of interest between monetary policy and the LOLR for the central bank, it is possible to avoid the conflicts involving both the supervisor and the resolution authority. It is also possible to avoid the conflict between these and monetary policy by assigning them outside the central bank. Splitting up the monetary policy and financial stability mandates is more problematic, as monetary policy tools also affect financial stability, and the use of macroprudential tools can ease the burden of monetary policy.
References

Bank for International Settlements. 2009. “Issues in the Governance of Central Banks.” http://www.bis.org/publ/othp04.pdf.
Basel Committee on Banking Supervision. 2010. “Report and Recommendations of the Cross-Border Bank Resolution Group.” http://www.bis.org/publ/bcbs169.pdf.
Bean, Charles. 2016. “Living with Low for Long.” Economic Journal 126 (May): 507–522. doi:10.1111/ecoj.12370.
Benston, George, and George Kaufman. 1988. Risk and Solvency Regulation of Depository Institutions: Past Policies and Current Options. New York: Salomon Brothers Center, Graduate School of Business, New York University.
Binder, Jens-Hinrich. 2015. “Resolution Planning and Structural Bank Reform within the Banking Union.” In European Banking Union: Prospects and Challenges, edited by Juan Castañeda, David G. Mayes, and Geoffrey Wood, 129–155. Abingdon: Routledge.
Blinder, Alan, and Ricardo Reis. 2005. “Understanding the Greenspan Standard.” Jackson Hole Symposium, Federal Reserve Bank of Kansas City. http://www.kansascityfed.org/Publicat/sympos/2005/PDF/Blinder-Reis2005.pdf.
Board of Governors of the Federal Reserve System and Federal Deposit Insurance Corporation. 2014. “Agencies Provide Feedback on Second Round Resolution Plans of ‘First-Wave’ Filers.” Joint press release. https://www.fdic.gov/news/news/press/2014/pr14067.html.
Borio, Claudio. 2014. “The Financial Cycle and Macroeconomics: What Have We Learnt?” Journal of Banking & Finance 45: 182–198.
Borio, Claudio, and Philip Lowe. 2002a. “Assessing the Risk of Banking Crises.” BIS Quarterly Review (December): 43–54.
Borio, Claudio, and Philip Lowe. 2002b. “Asset Prices, Financial and Monetary Stability: Exploring the Nexus.” BIS working papers 114, July.
Borio, Claudio, and Philip Lowe. 2004. “Securing Sustainable Price Stability: Should Credit Come Back from the Wilderness?” BIS working papers 157, July.
Borio, Claudio, and Patrick McGuire. 2004. “Twin Peaks in Equity and Housing Prices?” BIS Quarterly Review (March): 79–93.
Borio, Claudio, and Mathias Drehmann. 2009. “Assessing the Risk of Banking Crises—Revisited.” BIS Quarterly Review (March): 29–46.
Calomiris, Charles W., and Richard Herring. 2011. “Why and How to Design a Contingent Convertible Debt Requirement.” http://dx.doi.org/10.2139/ssrn.1815406.
Capie, Forrest, and Geoffrey Wood. 2007. Lender of Last Resort. Abingdon: Routledge.
Carmassi, Jacopo, and Richard Herring. 2013. “Living Wills and Cross-Border Resolution of Systemically Important Banks.” Journal of Financial Economic Policy 5, no. 4: 361–387.
Castañeda, Juan, David G. Mayes, and Geoffrey Wood. 2015. European Banking Union: Prospects and Challenges. Abingdon: Routledge.
Cecchetti, Stephen. 2006. “The Brave New World of Central Banking: Policy Challenges Posed by Asset Price Booms and Busts.” National Institute Economic Review 196 (May): 107–119.
Cecchetti, Stephen. 2008. “Measuring the Macroeconomic Risks Posed by Asset Price Booms.” In Asset Prices and Monetary Policy, edited by John Y. Campbell, 9–43. Chicago: University of Chicago Press.
Chamon, Merijn. 2014. “The Empowerment of Agencies under the Meroni Doctrine and Article 114 TFEU: Comment on United Kingdom v.
Parliament and Council (Short-Selling) and the Proposed Single Resolution Mechanism.” European Law Review 39, no. 3: 380–403.
Cho, Yoon J. 2012. “The Role of State Intervention in the Financial Sector: Crisis Prevention, Containment and Resolution.” In Implications of the Global Financial Crisis for Financial Reform and Regulation in Asia, edited by Masahiro Kawai, David G. Mayes, and Peter Morgan, 179–199. Cheltenham: Edward Elgar.
Christodoulakis, Nicos M. 2015. Greek Endgame: From Austerity to Growth or Grexit. London and New York: Rowman & Littlefield.
Čihák, Martin. 2006. “How Do Central Banks Write on Financial Stability?” IMF working paper 06/163. http://www.imf.org/external/pubs/ft/wp/2006/wp06163.pdf.
Čihák, Martin, Sonia Munoz, Shakira Teh Sharifuddin, and Kalin Tintchev. 2012. “Financial Stability Reports: What Are They Good For?” IMF working paper 12/1.
Claessens, Stijn. 2014. “An Overview of Macroprudential Policy Tools.” IMF working paper 14/214.
De Bandt, Olivier, and Philipp Hartmann. 2000. “Systemic Risk: A Survey.” ECB working paper 35.
Detken, Carsten, et al. 2014. “Operationalising the Countercyclical Capital Buffer: Indicator Selection, Threshold Identification and Calibration Options.” European Systemic Risk Board occasional paper 5. https://www.esrb.europa.eu/pub/pdf/occasional/20140630_occasional_paper_5.pdf?12806293e038df2398ec91fccaf5c6ac.
Dornbusch, Rudiger. 1976. “Expectations and Exchange Rate Dynamics.” Journal of Political Economy 84, no. 6: 1161–1176.
Edwards, Sebastian. 2004. “Financial Openness, Sudden Stops and Current Account Reversals.” American Economic Review 94, no. 2: 59–64.
Eichengreen, Barry, and Ricardo Hausmann. 2010. Other People’s Money: Debt Denomination and Financial Instability in Emerging Market Economies. Chicago: University of Chicago Press.
Eisenbeis, Robert. 2007. “Agency Problems and Goal Conflicts in Achieving Financial Stability: The Case of the EMU.” In The Structure of Financial Regulation, edited by David G. Mayes and Geoffrey Wood, 232–256. Abingdon: Routledge.
European Banking Authority. 2015. The Single Rulebook. http://www.eba.europa.eu/regulation-and-policy/single-rulebook.
European Commission. 2012. “Commission Staff Working Document on Impact Assessment Accompanying the Document Proposal for a Directive of the European Parliament and of the Council Establishing a Framework for the Recovery and Resolution of Credit Institutions and Investment Firms and Amending Council Directives 77/91/EEC and 82/891/EC, Directives 2001/24/EC, 2002/47/EC, 2004/25/EC, 2005/56/EC, 2007/36/EC and 2011/35/EC and Regulation (EU) No 1093/2010.” http://ec.europa.eu/internal_market/bank/docs/crisis-management/2012_eu_framework/impact_ass_en.pdf.
European Commission. 2014. “Proposal for a Regulation of the European Parliament and of the Council on Structural Measures Improving the Resilience of EU Credit Institutions.” http://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52014PC0043.
European Systemic Risk Board. 2014. “The ESRB Handbook on Operationalising Macro-Prudential Policy in the Banking Sector.” https://www.esrb.europa.eu/pub/pdf/other/140303_esrb_handbook_mp.en.pdf?ac426900762d505b12c3ae8a225a8fe5.
Federal Deposit Insurance Corporation. 1997. “The Banking Crises of the 1980s and Early 1990s: Summary and Implications.” In History of the Eighties—Lessons for the Future, Vol. 1, 3–82. Washington, D.C.: FDIC.
Federal Deposit Insurance Corporation. 2011. “The Orderly Liquidation of Lehman Brothers Holdings Inc under the Dodd-Frank Act.” FDIC Quarterly 5, no. 2: 1–19.
Federal Deposit Insurance Corporation and Bank of England. 2012. “Resolving Globally Active, Systemically Important, Financial Institutions.” Joint paper. https://www.fdic.gov/about/srac/2012/gsifi.pdf.
Financial Stability Board. 2014a. “Adequacy of Loss-Absorbing Capacity of Global Systemically Important Banks in Resolution.” Consultative document. http://www.financialstabilityboard.org/wp-content/uploads/TLAC-Condoc-6-Nov-2014-FINAL.pdf.
Financial Stability Board. 2014b. “Key Attributes of Effective Resolution Regimes for Financial Institutions.”
Garcia, Gillian G. H. 2012. “Missing the Red Flags.” In Reforming the Governance of the Financial Sector, edited by David G. Mayes and Geoffrey Wood, 220–237. Abingdon: Routledge.
Geithner, Timothy. 2014. Stress Test: Reflections on Financial Crises. New York: Crown.
Goodhart, Charles A. E. 2010. “Are CoCos from Cloud Cuckoo-Land?” VOXeu.org. http://www.voxeu.org/article/are-cocos-cloud-cuckoo-land.
Goodhart, Charles, and Boris Hofmann. 2007. House Prices and the Macroeconomy: Implications for Banking and Price Stability. Oxford: Oxford University Press.
Goodhart, Charles A. E., and Dirk Schoenmaker. 2009. “Fiscal Burden Sharing in Cross-Border Banking Crises.” International Journal of Central Banking 5, no. 1: 141–165.
Goodhart, Charles A. E., and Dirk Schoenmaker. 2014. “The ECB As Lender of Last Resort.” http://www.voxeu.org/article/ecb-lender-last-resort.
Greenspan, Alan. 1996. “The Challenge of Central Banking in a Democratic Society.” Speech. American Enterprise Institute, Washington, D.C., December 5. http://www.federalreserve.gov/BOARDDOCS/SPEECHES/19961205.htm.
Harrison, Ian, Steven Anderson, and James Twaddle. 2007. “Pre-Positioning for Effective Resolution of Bank Failures.” Journal of Financial Stability 3, no. 4: 324–341.
Honohan, Patrick. 2010. “The Irish Banking Crisis: Regulatory and Financial Stability Policy 2003–2008.” Report to the Minister for Finance by the Governor of the Central Bank. http://www.bankinginquiry.gov.ie/The%20Irish%20Banking%20Crisis%20Regulatory%20and%20Financial%20Stability%20Policy%202003-2008.pdf.
Hoskin, Kevin, and Ian Woolford. 2011. “A Primer on Open Bank Resolution.” Reserve Bank of New Zealand Bulletin 74, no. 3: 5–10.
Huertas, Thomas. 2015. “Banking Union: The Way Forward.” In European Banking Union: Prospects and Challenges, edited by Juan Castañeda, David G. Mayes, and Geoffrey Wood, 23–37. Abingdon: Routledge.
Hüpkes, Eva. 2009. “Special Bank Resolution and Shareholders’ Rights—Balancing Competing Interests.” Journal of Financial Regulation and Compliance 16, no. 3: 277–301.
Independent Commission on Banking. 2013. “The Vickers Report.” http://www.parliament.uk/business/publications/research/briefing-papers/SN06171.pdf.
Jännäri, Karlo. 2009. “Report on Banking Regulation and Supervision in Iceland: Past, Present and Future.” http://www.forsaetisraduneyti.is/media/frettir/KaarloJannari__2009.pdf.
Krimminger, Michael. 2015. “Shadows and Mirrors: The Role of Debt in the Developing Resolution Strategies in the US, UK and European Union.” In European Banking Union: Prospects and Challenges, edited by Juan Castañeda, David G. Mayes, and Geoffrey Wood, 156–183. Abingdon: Routledge.
Lastra, Rosa, ed. 2011. Cross-Border Bank Insolvency. New York: Oxford University Press.
Lastra, Rosa. 2015. “Lender of Last Resort and Banking Union.” In European Banking Union: Prospects and Challenges, edited by Juan Castañeda, David G. Mayes, and Geoffrey Wood, 109–128. Abingdon: Routledge.
Lehmann, Matthias. 2014.
“Volcker Rule, Ring-Fencing or Separation of Bank Activities: Comparison of Structural Reform Acts around the World.” LSE Law, Society and Economy working paper 25/2014.
Mayes, David G. 2006. “Financial Stability in a World of Cross-Border Banking: Nordic and Antipodean Solutions to the Problem of Responsibility without Power.” Journal of Banking Regulation 8, no. 1: 20–39.
Mayes, David G. 2015. “The Funding of Bank Resolution.” In Bank Resolution: The European Perspective, edited by Jens-Hinrich Binder and Dalvinder Singh, 211–233. Oxford: Oxford University Press.
Mayes, David G. 2016. “Top-Down Restructuring of Markets and Institutions: The Nordic Banking Crises.” Journal of Banking Regulation. doi:10.1057/s41261-016-0006-z.
Mayes, David G., Liisa Halme, and Aarno Liuksila. 2001. Improving Banking Supervision. Basingstoke: Palgrave-Macmillan.
Mayes, David G., and Aarno Liuksila. 2004. Who Pays for Bank Insolvency? Basingstoke: Palgrave-Macmillan.
Mayes, David G., Maria J. Nieto, and Larry D. Wall. 2011. “A Deposit Insurance Model for Europe.” Paper presented at the EUSA Conference, Boston.
Mayes, David G., and Hanno Stremmel. 2014. “The Effectiveness of Capital Adequacy Measures in Predicting Bank Distress.” SUERF Study 2014/1. Vienna.
Mayes, David G., and Jukka Vesala. 2000. “On the Problems of Home Country Control.” Current Economics and Politics of Europe 10, no. 1: 1–25.
Mayes, David G., and Geoffrey Wood. 2016. “Bank Regulation: Starting Over.” In Breaking Up Is Hard to Do: Britain and Europe’s Dysfunctional Relationship, edited by Patrick Minford and J. R. Shackleton, 229–252. London: Institute of Economic Affairs.
Mohanty, Madhusudan. 2014. “The Role of Central Banks in Macroeconomic and Financial Stability.” BIS paper 76.
New Zealand Auditor-General. 2011. “The Treasury: Implementing the Crown Retail Deposit Guarantee Scheme.” http://www.oag.govt.nz/2011/treasury.
Pakin, Nizan. 2013. “The Case against Dodd-Frank Act’s Living Wills: Contingency Planning following the Financial Crisis.” Berkeley Business Law Journal 9, no. 1: 29–93.
Peek, Joe, and Eric Rosengren. 2000. “Collateral Damage: Effects of the Japanese Bank Crisis on Real Activity in the United States.” American Economic Review 90, no. 1: 30–45.
Phylaktis, Kate. 2015. “The Cyprus Debacle: Implications for European Banking Union.” In European Banking Union: Prospects and Challenges, edited by Juan Castañeda, David G. Mayes, and Geoffrey Wood, 67–77. Abingdon: Routledge.
Poulsen, Ulrik L., and Brian L. Andreasen. 2011. “Handling Distressed Banks in Denmark.” Nationalbankens Kvartalsoversigt 3: 81–96.
Reserve Bank of New Zealand. 2012. “Regulatory Impact Assessment of Pre-Positioning for Open Bank Resolution.” https://www.rbnz.govt.nz/-/media/ReserveBank/Files/regulation-and-supervision/banks/policy/5014272.pdf?la=en.
Reserve Bank of New Zealand. 2013. “Open Bank Resolution (OBR) Pre-Positioning Requirements Policy.” http://www.rbnz.govt.nz/-/media/ReserveBank/Files/regulation-and-supervision/banks/banking-supervision-handbook/5341478.pdf?la=en.
Sandbu, Martin. 2015. Europe’s Orphan: The Future of the Euro and the Politics of Debt. Princeton: Princeton University Press.
Schinasi, Garry. 2006. Safeguarding Financial Stability: Theory and Practice. Washington, D.C.: International Monetary Fund.
Schoenmaker, Dirk, and Daniel Gros. 2012. “A European Deposit Insurance and Resolution Fund—An Update.” CEPS policy brief 283.
Schoenmaker, Dirk, and Sander Oosterloo. 2007. “Cross-Border Issues in European Financial Supervision.” In The Structure of Financial Regulation, edited by David G.
Mayes and Geoffrey Wood, 264–291. Abingdon: Routledge.
Sinn, Hans-Werner. 2014. The Euro Trap: On Bursting Bubbles, Budgets and Beliefs. Oxford: Oxford University Press.
Stern, Gary, and Ron Feldman. 2004. Too Big to Fail: The Hazards of Bank Bailouts. Washington, D.C.: Brookings Institution.
Taylor, John B. 2009. Getting Off Track: How Government Actions and Interventions Caused, Prolonged, and Worsened the Financial Crisis. Stanford: Hoover Institution Press.
Tucker, Paul. 2013. “Resolution and the Future of Finance.” Speech at INSOL International World Congress, the Hague, May 20.
Vibert, Frank. 2007. The Rise of the Unelected: Democracy and the New Separation of Powers. Cambridge: Cambridge University Press.
Wall, Larry D., and David R. Peterson. 1990. “The Effect of Continental Illinois’ Failure on the Financial Performance of Other Banks.” Journal of Monetary Economics 26, no. 1: 77–99.
chapter 21
Managing Macrofinancial Crises
The Role of the Central Bank
Patrick Honohan, Domenico Lombardi, and Samantha St. Amand
21.1 Introduction

The core task of a central bank—ensuring price level stability—requires predictability and steadiness within a well-communicated, medium-term framework. In sharp contrast, the emergence of a macrofinancial crisis calls for the central bank to move quickly to interpret what can be fast-changing market conditions and to act flexibly and decisively to avoid systemic damage. A crisis is most frequently manifested in the form of a “sudden stop,” when the provision of funds dries up because of changes in investor expectations or heightened risk aversion. The focus is most often on the external value of the currency or on the liquidity of the banking system, sometimes on both. Periods of crisis are characterized by uncertainty and chaos. In this context, economic theory and policy norms—actions that are considered best practice in central banking—often are insufficient for guiding action. The unique characteristics of crises frequently require unorthodox policy actions. Responsibility for dealing with macrofinancial crises is usually shared with other public authorities, but the central bank, with its unlimited ability to create local currency liquidity, is generally in the front line. Open-ended provision of liquidity can unblock a crisis that has been based on incomplete market information, or it can nudge the economy away from a bad node in multiple equilibria. But if misjudged, the crisis-management actions of the central bank can result in large-scale socialization of private losses and create costly moral hazard for the future.
If the change in investor expectations underlying the sudden stop reflects a correct interpretation of new information, policymakers are responsible for guiding the economy to a new equilibrium, as an attempt through liquidity actions to preserve precrisis conditions will be unsuccessful and distorting. Although they can create local currency liquidity, central banks have limited net worth. Unsuccessful crisis-management interventions will generally be associated with unintended transfers of resources, either through the inflation tax or through central bank losses ultimately falling on public finances (through reduced seigniorage flows). Without compromising its independence, a close, cooperative relationship between the central bank and other regulatory authorities, and especially with the fiscal authority, can ensure that policy actions during a crisis are both coherent and seen as legitimate. The fiscal authority can offer guarantees on losses, but the associated risks and limitations of this form of intervention must also be scrutinized before it takes action. Ultimately, it is up to elected policymakers—the government, with approval from the legislature—to decide on the appropriate size and risk acceptability of crisis-management interventions. These policy options and limitations may in some circumstances be agreed on beforehand; however, in many crisis situations, interventions are ad hoc, and decisions must be undertaken within a very short time frame. While much can be done to reduce the frequency and amplitude of financial crises, including the use of macroprudential preventive measures, the global financial crisis (GFC) that broke out in 2007 dramatically illustrated that central banks need to be prepared to manage a crisis.
This and other cases highlight the pitfalls that can occur in crisis management: insufficiently precise or detailed mandates that contribute to a lack of preparedness and coordination with the fiscal authority and other relevant public bodies; weak analysis of the nature of the problem, leading to inappropriate policy responses; and complacency about the scale of the problem, resulting in ambivalent or insufficiently comprehensive action. In short, what is needed for good central bank crisis management is preparedness and a willingness to take quick and decisive action. Focusing on the mandates and the toolkit of central banks, this chapter illustrates how shortcomings can arise in responding to macrofinancial crises, implicitly highlighting directions for improvement. We begin in section 21.2 by setting out the rationale for crisis-management intervention by central banks, recognizing that recent events have shown how, without undermining the price stability mandate, these issues can come to dominate central bank policy in times of crisis. The ways in which central banks interpret their explicit and implicit mandate to manage financial crises are the topic of section 21.3. Specifically, it describes how these mandates not only depend on formal institutional arrangements but also draw on the theory, history, and accumulated conventions of central banking. It also highlights how internal and external decision-making structures influence the quality of central bank crisis-management policies; especially important is the working relationship with the fiscal authority. Section 21.4 turns to the tools used in crisis management, highlighting the recent expansion in scale and scope of such tools and the tension that can arise when crisis management seems to call for
liquidity expansion beyond what is normally seen as appropriate for ensuring price stability. Section 21.5 brings the issues to life through a selection of case studies that illustrate how issues raised in earlier sections have had an impact on historic episodes. The conclusion follows, in section 21.6.
21.2 The Rationale for Crisis Intervention

Systemic financial crises are best thought of as involving “an impairment of all or parts of the financial system” and having “serious negative consequences for the real economy” (FSB-IMF 2009, 2).1 The policy rationale for crisis intervention lies in the interconnectedness of financial-sector firms and the dependence of economic transactions on the functioning of payments, credit, and other services provided by banks and some other financial firms. Contagion and spillover from one affected firm can propagate to the rest of the economy, resulting in widespread economic damage (Allen and Gale 2000). Payments systems can seize up, making the normal transfer of property rights impossible. The individually rational risk-management reactions of individual firms to a sudden stop (notably, delays in lending and spending decisions) can exacerbate liquidity spirals and force disruptive fire sales, further deepening the economic crisis. Problems of asymmetric information and of the strategic behavior of firms in conditions of asymmetric information lie at the root of the market disruptions; central bank crisis intervention is designed to prevent these types of disruptions (Heider, Hoerova, and Holthausen 2009). Walter Bagehot’s classic solution remains uncontested for situations where the source of the information disturbance is isolated in one firm or a small section of the economy. When the information disturbance is more diffuse—as in the collapse of the US structured mortgage finance and related markets in 2007–2008—real-time analysis of the need for intervention, and of the best way of dealing with an emerging crisis, becomes more complex and disputed. Whatever the underlying cause, at the center of any crisis episode is illiquidity.
622 Honohan, Lombardi, and St. Amand

A distinction, not always precise, can be made between funding illiquidity (when a firm is unable to maintain or increase its liabilities) and market illiquidity (when the firm is unable to dispose of assets for cash without materially depressing their price) (Brunnermeier and Pedersen 2008). Strategic behavior both by the illiquid firm and by liquid firms that could act as counterparties can result in large (fire-sale) price movements between multiple equilibria or can prevent an equilibrium altogether. For example, a seller of assets in such circumstances has an incentive to appear weak, so that the buyer will not fear that the seller knows something unfavorable about the quality of the assets. On the other hand, a borrower of liquidity has exactly the opposite incentive (Diamond and Rajan 2011; Tirole 2011). The wider economic consequences of a bad equilibrium—disruption of the payments system or a credit crunch—can warrant policy intervention. By intervening to provide liquidity, the central bank can eliminate bad equilibria, although it thereby potentially exposes itself to losses or the economy to inflation. Provision of liquidity under conditions that are too easy can generate considerable moral hazard, thus storing up future problems. Analysis of the use of readily available liquidity during the GFC does suggest a pattern of higher risk-taking by the banks that drew more heavily on these facilities (Drechsler et al. 2016; Garcia-de-Andoain et al. 2016). Identifying the emergence of bad equilibria, let alone guiding the economy to a new equilibrium, is not a straightforward task. It is difficult to predict how far financial shocks will disrupt financial services and nearly impossible to pinpoint precisely the longer-term economic consequences. The appropriateness of policy actions aimed at stemming liquidity crises, while balancing potential moral hazard implications, thus depends on the quality of the analysis of current and future financial and economic circumstances.
21.3 The Mandate for Crisis Management and Its Interpretation

Central banks have a mandate—either explicit or implicit—to guard against systemic financial crises and manage them if and when they occur. (Explicit mandates are either enshrined in legislation or plainly stated and well known; they are easily communicated.) While the monetary policy mandates of central banks, usually expressed in terms of price or exchange-rate stability and employment or economic growth, can be easily communicated and their performance assessed, financial stability mandates tend to be less precise and less easily judged. In addition to the tools that are proper to monetary policy, central banks possess the capacity to be the lender of last resort (LOLR) in local currency. They can also be assigned several functional policy areas relevant to the pursuit of an explicit financial stability mandate, including the maintenance of some financial-market infrastructures, microprudential and macroprudential regulatory or supervisory authority, and responsibility for bank resolution.2 But central bank decisions are also guided by implicit mandates: goals and instruments based on convention rather than being clearly defined. When relying on such implicit mandates, the central bank will have particular regard to its functional relationship with government and other agencies and to issues of legitimacy and credibility with respect to the public and other external stakeholders. It is useful to distinguish between different strands of influence on how central bankers may determine their actions in the fast-moving, uncertain, and complex circumstances that can accompany a financial crisis. First, institutional structures surround the central bank. Second, central banks are strongly influenced by the prevailing ideas and norms in economic policymaking. Third, central bankers are inherently political actors. Table 21.1 provides an overview of the framework laid out below.
Table 21.1 Factors That Underlie Central Banks’ Explicit and Implicit Mandates

Explicit mandate
- Institutional: legal mandate; design of the central bank institutional arrangements
- Ideational: economic theory and models; international agreements
- Political (internal): decision-making structure
- Political (external): accountability mechanisms to external stakeholders

Implicit mandate
- Institutional: functional relationship with government and other agencies
- Ideational: policy norms and social conventions
- Political (internal): accountability culture; decision-making traditions
- Political (external): support from powerful constituencies
21.3.1 Legal Mandate and Institutional Arrangements

Institutional arrangements are important in influencing central bank behavior, both in the choice of policy goals and in the specific policy instruments that are used to pursue those goals. Famously, institutional mechanisms for appointing personnel, financing operations, and selecting policy goals and instruments have been designed to prevent elected politicians from influencing monetary policy decisions (see, for example, Debelle and Fischer 1994; Eijffinger and de Haan 1996).3 A similar analytical framework can be applied to central banks’ financial stability mandates. An explicitly defined mandate and the range of assigned policy functions—for example, whether the central bank has primary responsibility for microprudential supervision, monitoring macrofinancial systemic risks, and so on—provide the foundations that guide policy decisions. But financial stability policy is wider in scope than monetary policy, with several overlapping functions and more than one responsible agency. The central bank must therefore cooperate closely with other public bodies. For example, if the central bank is not the microprudential supervisor, there must be arrangements for the central bank, as provider of LOLR facilities, to receive information from the supervisor. These arrangements may be embedded in formal institutional mechanisms, but they will also depend on relationships between relevant agencies. The functional relationship between the central bank and government and other agencies shapes domestic policymaking cooperation and contributes to the development of central banks’ implicit mandates. Cooperative mechanisms depend on a host of factors, such as historical conventions, legal requirements, international cooperation arrangements, and economic and financial circumstances (Bodea and Huemer 2010; Cargill, Hutchison, and Ito 1997).
Much of the cooperation relates to sharing information and exchanging views; the central bank may also act in an advisory capacity to other agencies or play a role in policy implementation (Bodea and Huemer 2010). These
relationships help shape the central bank’s interpretation of its legal mandate (Lombardi and Moschella 2015b). The nature of these relationships changes during macrofinancial crises, especially in regard to the fiscal authority and government generally. Although most central banks insist on distance as far as monetary policy is concerned, “close cooperation between the central bank and the fiscal authority in a crisis is both inevitable and desirable,” because the central bank’s role goes beyond inflation fighter to crisis fighter (Blinder 2012, 2). Furthermore, when the central bank must engage in quasi-fiscal operations, as when the quality of collateral in emergency lending arrangements is in question, involvement of the fiscal authority can provide political legitimacy for unusual actions that may have significant fiscal consequences. The desirability of such cooperation does not always mean that it will occur smoothly. Brunnermeier and Reis (2017) interpret the long period that was needed to resolve the euro area crisis as a game of attrition between the central bank and the fiscal authorities. The euro area’s unusual configuration, with at least seventeen separate fiscal authorities, each following a strategy that sought to delay adopting more comprehensive action, left the European Central Bank (ECB) in a constitutionally and politically confined situation. The ECB was able to provide enough liquidity to prevent systemic collapse but unable to decisively resolve the area’s problems of insolvency. Some of the most dramatic events of the GFC illustrate the role of the central banks’ relationships with key stakeholders and how they can evolve in crises.
The September 2008 discussions between the US Treasury and the Federal Reserve (the Fed) when Lehman Brothers went into bankruptcy are a prime example.4 Another illustration is the ECB’s controversial May 2010 securities market program (SMP) decision to conduct open market purchases of the securities of three stressed euro area governments, despite a formal mandate that prevented it from lending to governments; the decision was interpreted as a kind of bridge to the intergovernmental loans that were subsequently provided to the same governments (see Saccomanni 2016).
21.3.2 The Policy Conventions of a Transnational Epistemic Community

Choices made by central banks in a crisis are also strongly influenced by the ideas that are pervasive in what can be seen as a transnational epistemic community of central bankers seeking to establish consensus on best practice in matters of economic and financial governance (see, for example, Baker 2006; Bean et al. 2010; Johnson 2016).5 As noted by Johnson (2016), this community of central bankers is united by widely shared principles, practices, and professional culture; by a transnational infrastructure—especially through the bimonthly meetings of the Bank for International Settlements (BIS) in Basel, as well as various committees associated with the BIS and with the FSB, along with the International Monetary Fund (IMF)—and by a degree of insulation from
outsiders, through both legal independence and technical expertise. Evolving over the past century, the international “club” of central bankers has widened its membership beyond the core of advanced economies and has over the past few decades displayed a considerable degree of convergence on acceptable economic theories and policy practices, particularly regarding the financing of government deficits and the tools of monetary policy (for example, Goodfriend 2007; Woodford 2009). Marcussen (2006) noted a “scientization” of central banking, which he associated with the depoliticization of central bank policymaking. (Convergence of ideas may have been further reinforced by the Great Moderation, a period covering the 1990s and early 2000s characterized by historically low inflation and output volatility.) Independent central bankers have built strong connections with the academic community to form a transnational community of like-minded technocrats. While central bank crisis management has hardly become a science, this community shares a body of historical experience and lore on central banking practices from past crises. To be sure, while being open to incorporating the insights of academia into its economic and financial analysis and policy operations, the community is not always receptive to new ideas. Indeed, while there are sometimes provocative and influential presentations at high-profile conferences, such as the annual Jackson Hole symposium hosted by the Federal Reserve Bank of Kansas City, the BIS annual conference usually held in Lucerne, and the more recently established ECB research conference inaugurated in Sintra, challenges to the accepted norms are not always well received.
One notable example is the speech delivered by Raghuram Rajan (2005) (then economic counselor and director of research at the IMF and later governor of the Reserve Bank of India) at the Jackson Hole symposium in 2005, whose perceptive observations on the emerging sources of systemic risk in the financial system were met with hostility and denial by participants.6 The strength of this community goes beyond ideas. There is a long history of cooperation among central banks in maintaining monetary and financial stability, though not always successful (Borio and Toniolo 2011b and references therein). In the post-Bretton Woods system, cooperation shifted from monetary stability to financial stability (Borio and Toniolo 2011a). The development of minimum capital standards under Basel 1, 2, and 3 is one example of success in international financial regulatory cooperation. The orchestration of a simultaneous unscheduled reduction in interest rates jointly announced by seven leading central banks in October 2008, as well as the rapid creation or expansion around the same time of swap arrangements between the US Federal Reserve and fourteen central banks, is another example. (Cooperation among central banks during the GFC was made easier by the fact that these economies were experiencing similar economic shocks.) By establishing their expertise and contributing to macrofinancial stability, central bankers have accumulated significant epistemic authority in domestic macroeconomic policymaking. Their strong technocratic community has helped to establish the credibility of independent central banks.
But strong epistemic communities can also lead to groupthink and thus constrain novel or unorthodox policy actions (Kirshner 2003). Convergence on ideas of the science of monetary policy has resulted in a rather narrowly focused view of the normal range of appropriate behavior (see Issing 2011; Mishkin 2011; Svensson 2009). Furthermore, as successful monetary policy has relied in part on what can be thought of as a social convention involving a shared understanding between central banks and financial markets, policy indications feed back through market actions to reinforce the desired policy direction (see Nelson and Katzenstein 2014; see also Blinder 1998, 60). A fear of undermining this predictability could inhibit novel or unusually vigorous action. But in practice, these conventions did not prevent central banks from innovating in response to the GFC. Instead, recognizing exceptional circumstances, they reached for a wider toolkit, drawing on a more expansive, historically informed, view of the range of action proper to central banks. Indeed, the experience of crisis management as understood by a generation of economic historians makes conventional a wider array of policy reactions.
21.3.3 Internal and External Political Context

A third strand of influence on central bank decisions is the fact that central banks are political actors themselves (Lockwood 2016): the internal and external decision-making context is relevant for policy choices. As with its monetary policy mandate, the central bank’s ability to deliver on its financial stability mandate hinges on whether there continues to be effective support for this mandate from powerful interest groups in society; the central bank must retain standing and authority to ensure that its policy decisions are widely accepted.7 There is evidence, for example, that more robust macroprudential policy frameworks have been established in countries that suffered more from the GFC, because strong constituencies of support may have formed to prevent future financial crises (Lombardi and Siklos 2016). Extensive research has analyzed how the institutional structure of central bank policy committees affects decision-making (see Blinder 1998, 20; Blinder 2009; Lombardelli, Proudman, and Talbot 2005; Riboni and Ruge-Murcia 2010; Sibert 2003). On some committees, a small group of influential members can steer decisions (Bernanke 2015, 689). On other committees, efforts to aggregate preferences and foster support for policy positions may require significant compromise (Bastasin 2014; Lombardi and Moschella 2015a). Such factors as the size of the committee, the presence or otherwise of nonexecutive members or of members who represent geographic regions, differences in professional backgrounds, and the personality of leadership can all be relevant (see, for example, Farvaque, Hammadou, and Stanek 2011; Gerlach-Kristen 2009; Hayo and Méon 2013; Smales and Apergis 2016). The structure of analytical deliberation, in both the order of discussion and the way policy positions are presented, can also affect the committees’ policy decisions (see Bhattacharjee and Holly 2015; Warsh 2014).
Another significant factor can be the central bankers’ understanding of their accountability, which not only
relies on the explicit mandate but is also influenced in an informal and dynamic way by the central bank’s policymaking culture, the membership composition, and the political landscape of the day (Lombardi and Moschella 2015a). Because decision-making traditions guide how policy committees and their members publicly communicate policy decisions, they can affect the legitimacy of policy choices. Blinder (2009) distinguishes among three types of central bank committees: the autocratically collegial, the genuinely collegial, and the individualistic. Public dissent from members of a normally consensus-based committee (as, for example, in the ECB following the 2010 SMP decision) represents a signal that is potentially more disruptive to public acceptance of the bank’s policy than division in the voting records of a committee known to be individualistic. This issue is especially salient for supranational central banks such as the ECB, whose legitimacy is intended to be strengthened by having geographical representation (Howarth 2007). (Public dissent by a member of the ECB’s Governing Council may reduce legitimacy in that person’s home country.) Most of the literature on central bank policy committees relates to monetary policy. Likely rather different is the way in which crisis-management policy actions are decided, with a small group of executives developing options and assessing their relative merits in a compressed period of time. Final decisions on crisis matters are often made either by a delegation from the main governance bodies or in urgent teleconferences with limited scope for extended discussions. The influence of the chair or governor relative to that of other committee members is surely much higher in such circumstances. The decisive personal roles of Bank of England (BoE) Governor Mervyn King in the run on Northern Rock (September 2007), Federal Reserve Chair Ben S.
Bernanke and New York Fed President Timothy Geithner (with Treasury Secretary Hank Paulson) in the bankruptcy of Lehman Brothers (September 2008), and ECB President Jean-Claude Trichet in the SMP program (May 2010) are well documented (see Irwin 2013). Maintaining the legitimacy of central bank policies in the eyes of the general public, the financial markets, and the government requires more than ensuring consistency in the communication of committee decisions. For the public, as Blinder et al. (2003) so eloquently put it, the central bank “must create and maintain an impression of competence and of understanding that generates quiet acquiescence” (23). The emergence of a crisis may damage a central bank’s reputation for competence insofar as the bank may be blamed for permitting the conditions (for example, a property price or equity bubble) that set the scene for the crisis (see Brunnermeier and Schnabel 2016). Inertia can result when a central bank lacking sufficient public trust becomes reluctant to risk taking actions that could further diminish its public legitimacy, especially if those actions help banks and other financial institutions. In contrast, the trust that an effective central bank has built over time can ensure that the damage done to its reputation from financial crises does not become the institution’s undoing. The central bank can foster support by demonstrating transparency in its policy actions, by connecting to the public through various communication and outreach activities, and also by serving as an expert source of high-level analysis on economic and financial policy.
In this context, sufficient transparency in central bank policies—that is, “the absence of asymmetric information between monetary policymakers and other economic agents” (Geraats 2002, 1)—is important (see, for example, Bernanke 2010; Bini Smaghi 2007; Dincer and Eichengreen 2014). As has been seen, especially in the United States, political backlash against poorly communicated central bank crisis-management actions can strengthen support for the legislative imposition of constraining policy rules that would prevent the central bank from reacting appropriately to unforeseen developments in a future crisis. Similarly, communicating the central bank’s policy analysis and expectations improves the effectiveness of policy by ensuring that financial markets’ actions align with the central bank’s policy intention (see, for example, Woodford 2005). Indeed, a central bank’s reputation vis-à-vis financial markets is essential to its effectiveness. Policies aimed at cooling financial volatility require a belief by financial markets that crisis-management policies will be implemented as the authorities say they will (Blinder et al. 2003). If the central bank is not credible in its implementation of liquidity policies, its interventions could escalate financial panic. Similarly, a more credible central bank can significantly expand liquidity during financial crises without creating fears that it is no longer dedicated to fighting inflation (Blinder 2000). Full, immediate, and continuous transparency is not always ideal; there may be a trade-off between transparency and effectiveness. Announcing in advance of a crisis the central bank’s willingness to extend loans to financial institutions risks creating moral hazard. Before the GFC, the conventional wisdom was that ambiguity with respect to the provision of emergency lending reduced moral hazard.
This view has since been questioned (see Bank for International Settlements 2014 and references therein). More controversially, it is often argued that during liquidity crises it might be prudent for central banks and their corresponding political authorities to keep emergency liquidity assistance (ELA)8 operations covert, in order to minimize risks to financial stability (Gertler, Kiyotaki, and Queralto 2012; Hauser 2016). But ambiguity in this respect may also add to financial-market uncertainty and volatility. Still, sufficient transparency is needed for the central bank to retain legitimacy in its crisis-management role (Tucker 2014).
21.3.4 Relationship with Government

Despite their independence, central banks must, if they wish to retain legitimacy, also remain accountable to political authorities, often the legislature. Formal accountability arrangements for this are typically established in law, but cooperation will often transcend explicit institutional mechanisms. A close, constructive, and confidential working relationship between the central bank and other relevant institutions, especially the fiscal authority but also including the microprudential bank regulator, the deposit insurer, and the resolution authority, can help avoid paralysis or counterproductive policy conflicts.
The relationship between the central bank and government is made complex because the short-term political goals of politicians do not always align with the longer-term objectives of financial and monetary stability mandated to the central bank. The nature of this conflict is well established for monetary policy in normal times but will likely be more hotly contested in times of crisis. For example, conflict may arise if the central bank calls for a government bailout of bank creditors in the interest of financial stability but at the expense of the public finances. Likewise, the fiscal authority may prefer that the central bank take risks with its lending, even though losses by the central bank will generally affect the treasury adversely over time through their effect on central bank profits (see Bordo and Meissner 2016). Then again, the central bank’s concern with the flow of credit in the economy may argue against a failing bank being placed in resolution, even though the mandate and criteria of the resolution authority point in that direction. Interinstitutional relationships are increasingly being formed through the creation of umbrella bodies such as national financial stability committees, which have representatives from the central bank, the fiscal authority, and other regulatory authorities. Although such committees are primarily mandated to prevent and mitigate, rather than manage, crises, they can provide the foundation for building the kind of shared understanding and trust among the participants that is vital for effective cooperation when a crisis breaks out. Partly, this is a question of shared information and analysis of the evolving situation among the relevant agencies. But also important is the exchange of views toward achieving a common understanding of what each agency proposes to do as the crisis unfolds.
The cases described below illustrate the diverse kinds of problems that can occur when this exchange is not achieved. Cooperation does not imply a constraint on the scope for each agency’s independent action in accordance with its mandate, nor should it delay action. But when strategic central bank decisions on liquidity provision in a crisis can result in sharp currency depreciation or extensive banking defaults, active involvement of the fiscal authority can ensure that the package of policy measures being adopted by the fiscal authority and the central bank is coherent. This coordination of action can improve the immediate effectiveness of the crisis-management policies and send a signal to other stakeholders that the central bank’s actions are legitimate.9 After all, especially in cases where the banking system has incurred heavy, albeit at first hidden, losses, the provision of central bank liquidity can threaten the sustainability of the sovereign’s finances, especially if foreign currency debt is involved (Acharya, Drechsler, and Schnabl 2014; Farhi and Tirole 2017). Requests from central banks for a treasury indemnity when risky loans are being made bring the quality of the relationship into focus; such requests have not always been granted. Cooperation with the fiscal authority, however, does not imply that the central bank’s actions will always be viewed as legitimate. Indeed, despite the vigorous efforts of the US Federal Reserve during the GFC to coordinate its actions with the US Treasury in extending financial support to major financial institutions, these actions led to concerns about the exercise of such broad authority by the Fed and
questions about whether it had overstepped its mandate. This was particularly severe in the extension of a line of credit to the American International Group (AIG), when use of these funds to pay bonuses to the firm’s executive team caused public outrage and prompted questioning in the US Congress of the lack of transparency and democratic accountability (Committee on Financial Services 2009). It is not surprising, then, that subsequent legislation (the Dodd-Frank Act) restricted the Fed’s freedom to take crisis-management action to an extent that could, according to authoritative voices, be severely damaging in a future crisis (Geithner 2016).
21.4 Crisis-Management Tools in Practice

The tools available to and used by central banks arguably derive more from their unquestioned ability to generate domestic liquidity than from explicit statements in their mandates. While much attention has long been paid to the central bank’s chief monetary policy mandate, and especially the control of inflation, the experience of the GFC has brought the financial stability mandate to the fore, to the extent that some scholars suggest liquidity management, including the LOLR role, is in fact the key function of the central bank (e.g., Goodhart 2011). Already by the nineteenth century, central banks and their precursors had become increasingly involved in the provision of liquidity when needed to stem or head off a panic. During the GFC, several central banks’ crisis-management interventions went well beyond ELA to specific stressed institutions. Disruption of key short-term money markets and stress affecting systemically important nonbank institutions in 2008 showed that central banks would need to use multiple points of entry to address market dysfunction and limit contagion. Use of a wider range of instruments, interaction with nonbanks, and intervention in organized securities markets were not unknown in the history of central banking, but the scale, scope, and complexity of such activity during the GFC were unprecedented.
21.4.1 Scale and Timing

Whereas the task of controlling inflation has generally led central banks to be very cautious about an expansion of liquidity—this had been seen as the main task of central banks throughout the second half of the twentieth century—a crisis of the scale of the GFC required them to counter their instincts in this regard. Central banks focused instead on interest-rate differentials and other price signals much more than on quantity signals in judging the appropriate scale of liquidity support. Many of the central banks’ crisis-management actions in the GFC can be seen as designed to get interest-rate
differentials back from unwarranted levels in a sustainable manner and without suppressing evidence of unresolved insolvency. The expansion of central bank monetary liabilities during the GFC in the United States, the euro area, the United Kingdom, and Japan, for example, was wholly unprecedented. Had expansion in the monetary base been a reliable predictor of inflation, a rapid increase in consumer prices would have followed. Instead, inflation remained subdued, reflecting a sharp increase in liquidity preference and a corresponding decline in monetary velocity (e.g., Anderson, Bordo, and Duca 2017). The increased scale came both from open market purchases (of government debt and other high-quality securities) and through open-ended provision of liquidity facilities. As the cause of many—if not most—financial crises is a prior build-up of credit, relaxing liquidity too early will not correct the situation but risks fueling an even larger bubble that will be more costly to correct (see Schnabl, chapter 19 in this volume).10 The shift to accommodation becomes necessary only when the bubble has burst in a damaging manner, causing a sudden stop or crisis. Determining when conditions warranting central bank crisis-management action have emerged is not always straightforward. The collapse of the dot-com equity bubble in March 2000 is generally held not to have created crisis conditions requiring exceptional central bank activity (see Jordà, Schularick, and Taylor 2015). But the critical nature, from the middle of 2007, of the distressed asset-backed commercial paper market and its knock-on effects on the rest of the financial system may have at first been underestimated by central banks, leading to the catastrophic sudden stops of September and October 2008. (Relevant literature is summarized in Gorton and Metrick 2012.)
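The earlier observation that an unprecedented expansion of the monetary base coexisted with subdued inflation can be made concrete with the textbook equation of exchange, MV = PY. The chapter itself does not invoke this identity explicitly, and the index numbers below are purely hypothetical, but the arithmetic illustrates how a fall in velocity can fully offset a rise in base money:

```python
# Stylized illustration (hypothetical index numbers) of the equation of
# exchange, M * V = P * Y: a large expansion of the monetary base (M) need
# not raise the price level (P) if velocity (V) falls as liquidity
# preference rises, as the text argues happened during the GFC.

def price_level(m: float, v: float, y: float) -> float:
    """Price level implied by the equation of exchange: P = M * V / Y."""
    return m * v / y

# Pre-crisis benchmark: all indexes normalized to 1.
p_before = price_level(m=1.0, v=1.0, y=1.0)

# Crisis scenario: the monetary base triples, but velocity falls by
# two-thirds (banks and firms hoard liquidity); real output Y unchanged.
p_after = price_level(m=3.0, v=1.0 / 3.0, y=1.0)

print(p_before)  # 1.0
print(p_after)   # 1.0 -- no inflation despite the tripling of M
```

The same identity shows what the counterfactual in the text implies: with velocity constant, tripling the base would triple the implied price level.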
21.4.2 Scope

In the years before the GFC, some scholars argued that by channeling liquidity to where it was needed, the efficiency of the interbank market had rendered redundant the bilateral provision of liquidity of the classic Bagehot type (Goodfriend and King 1988).11 This expectation was invalidated, however, when broader liquidity facilities, as well as direct ELA to financial institutions in distress, became an important aspect of central banks’ crisis-management toolkits during the GFC (especially in late 2008) and not just for large banks. The traditional presumption that significant contagion could only come from large firms (Freixas 1999; Goodhart and Huang 1999) met the reality that in times of heightened risk aversion and uncertainty, contagion could be anticipated from even moderately sized banks not normally considered important in the system. It became evident that the concept of systemic importance is not static but varies depending on wider market conditions (Cerutti, Claessens, and McGuire 2012). In order to meet the general scramble for liquidity that characterized money markets during the GFC, central banks eased collateral requirements for all of their counterparties (rather than confining support to stressed entities) and lent at longer terms in the
interbank market. This exceptional horizontal liquidity support to the interbank market persisted for several years. Similarly, direct central bank support in the GFC was also not strictly limited to commercial banks. The US Fed offered and provided direct financial support to firms of different types; investment bank Bear Stearns and insurance firm AIG were early high-profile beneficiaries. The Fed also provided more or less direct support to financial and nonfinancial issuers of asset-backed commercial paper in a temporary program that operated at the height of the crisis.12 Technical preparations were also made for the provision of emergency liquidity to clearinghouses, although no use was made of this and no entitlement established.
21.4.3 Innovation in the Instruments of Policy As the LOLR function and financial stability policies more generally have adapted to the changing characteristics of the financial system, the tools used have been exceptionally diverse. Indeed, as nonbank financial institutions such as insurance companies and broker-dealers became more important for the functioning of the financial system and banks began to rely more heavily on wholesale funding, central banks needed to be able to offer similar types of liquidity insurance to these areas of the financial system. Therefore, in addition to lending at various maturities (not just very short term) and outright purchases of securities, central banks have also entered into more complex risk-sharing arrangements with distressed firms and other market participants. In providing support to systemically important institutions other than commercial banks, central banks had to innovate to address specific needs while remaining within their legal powers.13 The large bridge loan provided by the Fed in March 2008 for the rescue of investment bank Bear Stearns is one such example; the subsequent Fed loan to a special-purpose vehicle (Maiden Lane LLC), which acquired assets from Bear Stearns, is another example where the central bank clearly took on significant credit risk. The Swiss National Bank made an even larger risk-sharing loan to a similar special-purpose vehicle arrangement in support of the banking firm UBS later that year. Central banks also provided various forms of guarantee. The Fed’s arrangement (in cooperation with the US Treasury and the Federal Deposit Insurance Corporation) with Citigroup and Bank of America, also in late 2008, guaranteed coverage of the losses of each of those banks on a portfolio of troubled assets.
The outright monetary transactions (OMT) program of the ECB can also be seen as a powerful, albeit imprecise, form of guarantee on the provision of sufficient liquidity to the sovereign debt markets of stressed euro area countries that accepted an IMF-type macroeconomic and financial adjustment program. Indeed, market and media commentators more often cite the nonbinding London declaration of ECB President Mario Draghi (2012b) in July 2012 that the ECB was “ready to do whatever it takes to preserve the euro. And believe me, it will be enough” as the effective instrument of policy
rather than the formal adoption of the OMT program a couple of months later or the subsequent judgment of its legality by the European Court of Justice. Although weak banks have been recapitalized by direct injection of public funds, this has not often been done by the central bank, even though, under plausible conditions, this could lead to better postintervention incentives for the assisted bank (Philippon and Schnabl 2013). Other measures extensively used during the GFC aimed at repairing segments of the funding markets that were illiquid, where there was uncertainty about the value of the underlying asset, or where primary dealers had become capital-constrained. These actions drew central banks into the additional role of market makers of last resort (see, for example, Tucker 2014). The BoE’s corporate bond purchase scheme is one example of this function (Fisher 2010). Similarly, the Fed’s money-market facilities were critical to ensuring the continued functioning of short-term private-sector funding channels. The ECB also intervened in 2009 and 2011—well before it moved toward quantitative easing (QE) in 2014 and 2015—to purchase covered bonds in order to address exceptional market conditions. Swap arrangements between central banks, aimed at ensuring that banks losing access to international markets could source replacement foreign currency from their home central bank, were greatly expanded in the crisis. Indeed, several central banks did lend to their local banks in foreign currency. Two elements of the Bagehot doctrine—good collateral and penalty interest rates—were arguably downplayed, though not discarded, in the response to the GFC (Domanski, Moessner, and Nelson 2014). The relaxation of collateral standards for “normal” monetary policy lending early on in the crisis is one aspect of this (although they still remained strict enough to have largely prevented central bank credit losses to date).
And policy interest rates were sharply reduced in 2008–2009 to levels well below what would have been expected from the Taylor rule, suggesting that LOLR support was being provided at subsidized rates rather than penalty rates (Tirole 2011; Freixas and Parigi 2014). On the other hand, the simultaneous and sharp downturns in both economic activity and price inflation could be seen as a macroeconomic and monetary policy justification for lowering interest rates. The crisis also engendered innovation in how tools employed to deliver central banks’ monetary policy mandates were used. As short-term interest rates approached their perceived effective lower bound (below zero in several cases), manipulation of these interest rates, long accepted as the main mechanism for achieving the primary goal of price stability (e.g., Bernanke and Blinder 1992), had to be supplemented by other techniques when growth in economic activity and postcrisis inflation rates remained stuck at low rates (Ball et al. 2016). The dividing line between what can be considered normal monetary policy response to the low inflation and weak aggregate demand that accompanied the crisis and crisis-management policy became blurred. For example, the ECB’s securities markets program (SMP), under which it purchased the government bonds of Greece and other stressed member governments, and its OMT program were both crisis-management policies introduced to improve the transmission
of monetary policy, but their objective was not to change the monetary policy stance. Other examples are the credit-easing policies of the Fed, such as the purchases of agency mortgage-backed securities, which were intended to improve funding conditions in this important financial-market segment, but also designed to support credit activity in mortgage and housing markets. Similarly, programs specifically aimed at boosting credit creation, such as the United Kingdom’s funding for lending scheme and the ECB’s targeted long-term refinancing operations (TLTRO), were meant to provide, for suppliers of credit to the real economy, incentives similar to what the interest-rate tool provides for credit demanders. This blurring casts some doubt on the idea of a separation principle of policy instruments, whereby policy interest rates would be assigned to macroeconomic objectives, while other nonstandard liquidity provision policies are assigned to the stabilization of financial markets. Whether such a separation principle can be unambiguously defined or whether instead LOLR policy necessarily interacts and potentially conflicts with monetary policy remains a disputed topic (see Buiter 2010; Freixas and Parigi 2014). Tucker (2016) argues against segmentation of these policy regimes, suggesting instead that these functions should be analyzed with respect to their pursuit of a single (broadly defined) monetary stability regime. Some of the tools used during the GFC, such as central bank swaps, have become permanent facilities. Others have de facto been replaced with more formal mechanisms, some of them shared with agencies other than the central bank, for example, the European Stability Mechanism in the euro area. Still others will likely remain part of the crisis-management toolkit, such as those that offer liquidity support to nonbank institutions and systemically important markets.
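The Taylor-rule benchmark invoked in this discussion can be made concrete with a short sketch. The coefficients are the standard Taylor (1993) ones; the input values and the comparison rate are hypothetical illustrations, not estimates for any particular economy, chosen only to show how a near-zero crisis-era policy rate could sit well below the rule’s prescription.

```python
# Classic Taylor (1993) rule: i = r* + pi + 0.5*(pi - pi_target) + 0.5*gap.
# Coefficients follow Taylor (1993); the inputs below are hypothetical.

def taylor_rate(inflation: float, output_gap: float,
                r_star: float = 2.0, pi_target: float = 2.0) -> float:
    """Policy rate (percent) prescribed by the classic Taylor rule."""
    return r_star + inflation + 0.5 * (inflation - pi_target) + 0.5 * output_gap

# Hypothetical early-crisis conditions: inflation near target, modest slack.
prescribed = taylor_rate(inflation=2.0, output_gap=-1.0)
actual = 0.25  # a near-zero crisis-era policy rate, for comparison

print(prescribed)           # 3.5 percent prescribed by the rule
print(prescribed - actual)  # 3.25-point gap: support at sub-Taylor rates
```

A positive gap between the rule’s prescription and the actual rate is what the subsidized-rates interpretation rests on; a deep enough output-gap input would instead push the prescription below zero, illustrating the competing macroeconomic justification.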
Interventions as market maker of last resort expose the central bank to additional risks, beyond those of more traditional LOLR interventions. Furthermore, it is likely that the scope and scale of recent LOLR interventions and broader financial stability policies have added to moral hazard. Overzealous crisis management can unwittingly sow the seeds of the next crisis.
21.4.4 Effectiveness The actions of each of the five principal central banks during the GFC have come under criticism. There are scholars who believe the interventions were too early or too large or entailed too much moral hazard and too much subsidizing of private losses by the public purse (e.g., Cochrane 2011; Feldstein 2012; Reinhart 2011; Sinn 2014). And there are others who consider the actions, although large, to have been too late (as in being slow to support Northern Rock, allowing Lehman to fail, the late introduction of OMT and QE in the euro area, and Japanese deflation) (e.g., Ball 2016; Ueda 2012; Wyplosz 2016), or lacking in persistence (as in the Swiss National Bank foreign exchange purchases and the ECB’s 2011 interest-rate increases) (e.g., Brunnermeier and James 2015; Krugman 2012; Stiglitz 2016).
These ongoing debates highlight the elements of policy action on which quick decisions must be made, as well as the potential pitfalls. The first challenge is to judge the nature, severity, and scale of the overall problem in order to assess both the scale and the appropriate direction of policy action. A sudden stop reflecting only issues of domestic liquidity can be addressed through actual and promised central bank liquidity interventions that can be reversed when the market’s concerns are allayed. The interventions need to reach all segments of the market that have seized up. In their huge provision of aggregate liquidity at the height of the crisis, central banks, recognizing the limitations of the quantity theory, correctly diagnosed and responded to what was a surge in the aggregate demand for safe assets (Gorton and Ordoñez 2013). To the extent that unavoidable losses are involved, as when the true capital of significant banks has become negative, a decision about the allocation of these losses is also needed, as is government endorsement, if the consequential exposure of the public finances to losses is likely to be material (as in Cyprus, Iceland, and Ireland, to take the largest examples from the GFC).14 Even if losses on such a scale are not certain, liquidity injections may be insufficient to unblock markets that have ceased to function in the crisis. Although the liquidity injections and interest-rate reductions during the GFC were essential, it became clear by October 2008 that significant capital increases in the main banks were needed to restore market confidence. Capital increases for all major US banks were engineered by the authorities, acting decisively in a step prepared by the Fed in collaboration with the US Treasury and using US government funds (see Gorton and Metrick 2012).
This is a clear example of how collaboration between the central bank and government can be essential in crisis management, especially since the central bank cannot, from its own resources, recapitalize banks on any material scale. Although several large European banks that had lost market access were recapitalized in the same month with government funds, in the absence of pan-European fiscal resources for bank recapitalization, comparable action could not easily occur in parts of Europe where a national government faced stressed fiscal conditions. The fact that recapitalization of major European banks in the crisis was at first less comprehensive than in the United States was partly the result of this lack of common fiscal resources.
21.5 Lessons from Case Studies The history of central banking is punctuated by spectacular crisis-management actions, ranging from the end of the South Sea Bubble in 1720, through the London money-market crises of the middle and late nineteenth century so famously analyzed by Bagehot, to the ending of German hyperinflation in 1923. But the instances of crises are too sporadic and the nature of crises too diverse to make the effectiveness of central bank crisis interventions readily amenable to econometric analysis. In the past decades, central bank action has been central to the development of a large number of crises—and
not always successfully (see Caprio and Honohan 2015). The following paragraphs sketch recent cases that illustrate problems that can arise in crisis management and consider how the various facets of central banks’ explicit and implicit mandates influenced their actions, for better or worse. The case studies highlight the practical relevance of the issues discussed above, especially the problems that can be caused by inadequate preparedness resulting in short-termism and a failure to use the most effective policy tools, and the mishaps that can result from poorly articulated central bank relations with government and other relevant agencies resulting in unfavorable political economy dynamics. The actions taken or rejected by the central banks involved in these cases remain controversial today.
21.5.1 Indonesia: Banking Crisis of 1997 The Indonesian banking crisis of 1997–2001 generated the largest relative fiscal costs of any crisis on record: 57 percent of GDP (Laeven and Valencia 2012). The fundamental problem, in the words of the IMF team (brought in after the government requested assistance in October 1997), was that “in the pre-crisis period the state banks had been used as vehicles for directed lending to noncommercial ventures, and private banks as vehicles channeling deposits to the owners” (Enoch et al. 2001, 17). Management of the crisis after it broke has been widely criticized, though the criticisms have sometimes been contradictory. Although it was not the only part of government involved, and although it had limited operational, financial, and institutional independence, the central bank, Bank Indonesia (BI), had a key role in management of the crisis. Seeming at first to reflect mainly contagion from the rest of the East Asia region (Radelet and Sachs 1998), the Indonesian problems of 1997 snowballed when capital inflows stopped and began to reverse, causing a sharp depreciation in the rupiah and an economic slowdown (eventually resulting in a 14 percent GDP contraction in 1998, in contrast to the more than 7 percent annual growth of the previous years). Awareness of some weaknesses in the banking system led the IMF to conduct a wide-ranging but accelerated review of the solvency of ninety-two of the largest banks. This review concluded that thirty-six of the banks were insolvent, with balance sheets flattered by the inclusion at full value of unpaid loans that were being “evergreened” at low interest rates. (An evergreened loan is one that is consistently renewed and where repayment on the principal may only be expected over a very long horizon.) As part of the IMF program signed on October 31, 1997, it was announced that sixteen small banks would be closed, with losses to uninsured depositors.
Unfortunately, plans for strengthening the remainder of the sector were not disclosed, and because of this lack of clarity as well as fears regarding the integrity of the political regime led by Suharto, confidence in the banking system was lost. There followed seven months of instability and bank runs. BI provided extensive liquidity facilities to banks, typically without requiring collateral and instead relying
on guarantees from the shareholders of the banks. The ample liquidity fueled further declines in the rupiah, which fell to about one-sixth of its precrisis US dollar level in a matter of months. Having criticized an impractical and half-baked proposal supported by Suharto to revalue the currency upward by about 40 percent and to establish a currency board regime, BI Governor J. Soedradjad Djiwandono was dismissed in January 1998. Shortly thereafter, the government introduced a blanket bank guarantee, following which the Indonesia Bank Restructuring Agency (IBRA) was created to deal more comprehensively with the increasingly evident problem of bank insolvency. Eventually, during 1998 and 1999, amid a tense political and security situation, IBRA employed international consultants to carry out a more in-depth analysis of solvency, which uncovered deeper losses, and then triaged and arranged for the closure or recapitalization of the remaining banks. In its complexity, the Indonesian case raises numerous questions about the role of a central bank in the crisis (Enoch et al. 2001; Goldstein 2003; McLeod 2004). The early closure of some small banks without clarity on the remainder of the system was a clear mistake, especially considering that so many other banks were unsound. This displayed a lack of decisiveness and boldness. It is less clear, though, what the optimal approach would have been. Some argue that a blanket government guarantee should have been brought in right away—although that would hardly have been credible, not least because of the degree to which bank liabilities were in foreign currency. Others suggest that an open bank resolution policy would have been less disruptive. Yet others feel that bank creditors should have shared much more of the costs.
By providing open-ended liquidity support for an extended period, it is likely that BI increased the fiscal cost of the crisis by facilitating the payment by insolvent banks of very high deposit interest rates. BI’s insistence that recipient banks should conceal the ELA in their accounts can also be faulted. BI additionally put its financial independence at risk by making the liquidity advances without adequate collateral or other security, without adequate controls to make sure that the funds were needed to meet deposit withdrawals, and without an explicit indemnity from the government. BI’s operational and institutional independence was weak, however, and it appears to have been further undermined by the creation of IBRA. Indonesia’s crisis demonstrates how the credibility of the government and other authorities involved in the crisis-management responses (including international organizations) has implications for the effectiveness of central bank policy actions, especially where central bank independence is weak.
21.5.2 Argentina: Crises of 1989 and 2001 The central banking experience of Argentina from the late 1970s provides one cautionary tale after another. The story can be summarized as a sequence of policy innovations, each one designed to address an unforeseen and unmitigated side effect created by its predecessors, and each one failing in turn, often with catastrophic consequences. The
underlying pressure was generally one of fiscal excess and a long-standing practice of relying on heavy quasi-fiscal impositions on banks (that is, high reserve requirements and inflation tax) as a means of assisting government budgets. The four systemic banking crises in Argentina between 1980 and 2001 exhibit notable differences. The costly collapse in 1980 was a classic case of boom and bust following a short-lived experiment in financial liberalization, while that of 1995 was mainly a side effect of the contemporaneous “Tequila” crisis in Mexico. The major crises of 1989 and 2001 display distinctive characteristics. The first had its roots not in bank failure but in a generalized loss of confidence in economic policy. But it was greatly amplified by a quasi-fiscal compensation scheme, originally intended to limit the impact of increased reserve requirements, and which required the central bank to make open-ended payments to support the profitability of banks. By 1989, the banking system’s viability had become wholly dependent on huge injections of inflationary financing by the central bank (Beckerman 1992). As inflation accelerated to very high levels, banks increased the interest rates they offered on deposits to retain customers—eventually to 137 percent per month in June 1989. (Argentina’s annual inflation rate reached a peak of more than 14,000 percent in the first quarter of 1990.) The amounts payable to banks from the compensation scheme grew accordingly, expanding the monetary base and further fueling inflation and currency depreciation. In this way, the crisis spiraled into hyperinflation and ended only with a confiscatory substitution of long-term bonds for most bank deposits, effectively ending the compensation mechanism.
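To convey the scale of such rates, it helps to compound them. The sketch below annualizes the 137 percent-per-month figure cited above; only that monthly rate comes from the text, and the function itself is an illustrative helper.

```python
# Compounding a monthly interest rate into an annual rate: the 137% per month
# paid by Argentine banks in June 1989 (figure from the text) annualizes to
# several million percent.

def annualize(monthly_rate: float) -> float:
    """Compound a monthly rate (decimal) over 12 months into an annual rate."""
    return (1.0 + monthly_rate) ** 12 - 1.0

annual = annualize(1.37)  # 137 percent per month
print(f"{annual * 100:,.0f}% per year")  # on the order of three million percent
```

Compounding, rather than simple multiplication by twelve, is what turns a triple-digit monthly rate into a seven-digit annual one and explains how the compensation scheme’s payouts could spiral so quickly.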
After the hyperinflation, a currency board regime was introduced with a fixed exchange rate at one-for-one with the US dollar, and strict rules were imposed to limit the powers of the central bank to avoid a recurrence. This was at first successful in stabilizing monetary conditions, but by 2001, it had come to a dead end. Partly because of external shocks (crises in Russia, Brazil, and East Asia; a reduction of foreign direct investment inflows; and adverse terms of trade shocks) and partly because of fiscal excesses and labor-market inflexibility, the economy had become quite uncompetitive. This had led to a deep recession, large current account deficits, and outflows of funds that were financed by diminishing stocks of central bank foreign exchange reserves. The currency board exchange rate could not be maintained, but the legislation under which it had been set up provided no room for policy maneuver. Following a period of increasing political and economic turmoil (including the turnover of four national presidents and three central bank governors in a matter of weeks), the whole regime collapsed catastrophically. Bank deposits were frozen, followed by compulsory conversion of US-dollar-denominated assets, deposits, and other liabilities of the banks into pesos (at different rates). Meanwhile, the government defaulted on its foreign debt (Gelpern 2004). The hyperinflation, confiscatory disruption of property rights, and deep recessions associated with these crises show the weakness of central bank crisis management. In a country with more than one change in the central bank governor per annum on average between 1945 and 1990, this is perhaps not surprising. While each crisis had
diverse and complex aspects, it is evident that in both cases the central bank was insufficiently prepared for the consequences of policy rules—such as the compensation scheme and the currency board—that had been in place for years. The need for a sufficiently effective fallback plan is evidenced by the damaging consequences of the eventual resolution of both crises (de la Torre, Yeyati, and Schmukler 2003; IMF Independent Evaluation Office 2004). Years of crisis and unresolved policy imbalances had by 2001 left Argentina with a banking system that was very small (credit to the private sector only amounted to the equivalent of around 20 percent of GDP) and largely dollarized. Ultimately, the experiences in Argentina highlight how short-termism and ad hoc solutions to one crisis can create the conditions for financial or economic imbalances that cause the next crisis.
21.5.3 United Kingdom: Northern Rock 2007, Royal Bank of Scotland and Lloyds/HBOS 2008 In 2007, Northern Rock was the United Kingdom’s fifth-largest mortgage lender and was funding its operations by taking short-term loans based on securitized mortgage products. As demand for securitized financial products dried up, Northern Rock became unable to pay its obligations. The United Kingdom’s prudential regulator, the Financial Services Authority (FSA), disclosed its concerns about Northern Rock’s financial stress to the BoE in mid-August 2007. Alternatives to emergency financial support were explored for several weeks, a period during which there were differences of opinion between the BoE and the other relevant authorities, namely the FSA and HM Treasury. The impression was given that in order to limit moral hazard, the BoE was prepared to see losses fall on bank creditors. After ELA was finally requested and approved on an unprecedented scale, it was not immediately put in place. When news about it leaked before any official statement clarifying the context, it triggered a loss of confidence among the public, resulting in the first bank run in Britain in 150 years. The run was only stopped when the government introduced a blanket deposit guarantee of Northern Rock’s liabilities. In its general approach to relieving the money-market pressures of August 2007, the BoE adhered to Bagehot’s dictum and resisted suggestions to take additional measures such as lending at longer maturities, removing penalty rates, or increasing the range of acceptable collateral (Plenderleith 2012). This approach may have increased stigma associated with using the discount window facility and could even have fostered the doubts about the BoE’s willingness to support a failing bank through ELA. However, it is unlikely that an alternative liquidity strategy alone would have saved Northern Rock.
The bank run could, however, have been avoided if implementation of the ELA had been handled more speedily and with either better communication or more effective secrecy, though there were concerns at the time regarding the legality of keeping the ELA provision secret.
Although there was some subsequent criticism of the lack of clarity about responsibilities within the “tripartite system”—the BoE, the FSA, and HM Treasury—that was tasked with maintaining financial system stability (House of Commons Treasury Committee 2008), coordinated actions by the three parties eventually proved critical to quelling panic. More than a year later, in October 2008, shortly after the Lehman bankruptcy, the BoE extended sizable ELA facilities to Lloyds and HBOS (which were in the process of merging) and to the Royal Bank of Scotland (RBS). The government’s plan to recapitalize these major banks (they were among the largest in the world) provided a degree of assurance to the BoE regarding the viability of the borrowing banks. This time, secrecy was successfully maintained, and the making of the loans was not publicly disclosed for more than a year after they were first extended and ten months after the loans had been fully repaid. For its part, the government either wholly or partially indemnified the ELA loans to RBS and Lloyds/HBOS, in addition to providing large-scale bank recapitalizations. Although defended by the BoE at the time, “constructive” ambiguity for ELA interventions resulted in uncertainty about the conditions under which action would be taken and about who would be responsible for taking such actions (Hauser 2014). The BoE has since published a framework for its LOLR function arising out of its assessment of these issues; that framework significantly improves the clarity of the central bank’s responsibilities, the information provided to financial markets, and the mechanisms that hold the central bank accountable for its actions (Plenderleith 2012; Tucker 2016; Winters 2012). The ELA operations of the BoE in 2007–2008 show the importance of establishing institutional responsibilities and authorities clearly ex ante to prevent hesitation or delays in responding to crises.
Furthermore, they highlight the complexity of dealing with transparency in regard to ELA; the flow of public information on this subject must be handled with great care to ensure that the policy is effective.
21.5.4 Ireland: Sovereign Bank Loop 2008–2010 After a sustained period of export-driven growth from the mid-1990s, the Irish economy maintained rapid growth into the early years of the new millennium, but now the growth was driven by a huge expansion in bank credit for property acquisition and construction. In parallel with comparable credit bubbles in the United Kingdom, the United States, and Spain but with greater amplitude, the expansion accelerated in 2004–2006 and was mostly financed by heavy foreign borrowing (largely denominated in euros) by banks in Ireland—both domestically controlled and foreign-owned. Between the end of 2003 and early 2008, the net indebtedness of Irish banks to the rest of the world grew from just 10 percent of GDP to more than 60 percent. Balance-sheet growth at Anglo Irish Bank averaged 36 percent per annum over 1998–2007, making it one of the
largest banks, without any criticism from the supervisory arm of the Central Bank of Ireland (CBI). The real estate bubble was extreme: from 1994 to 2007, real new house prices in Ireland tripled, despite a vigorous construction boom. After property prices peaked in 2007, construction collapsed, undermining tax revenue (Honohan 2009). After the Lehman bankruptcy, the Irish banks’ liquidity, already dwindling, threatened to dry up altogether. The government, advised by the CBI, introduced a statutory blanket bank guarantee, even covering some subordinated debt. (This contrasted with the roughly contemporaneous decisions in the United Kingdom, which employed ELA and a government injection of capital, and Iceland, which, faced with proportionately much larger and unaffordable bank losses, protected only the creditors at local branches of the failing banks, allowing losses to fall on other creditors.) Over the following couple of years, sliding property prices and increasingly granular assessments of the condition of the Irish banks’ loan books revealed the extent of their losses. The economic slowdown was compounded by the impact of the global recession, with Ireland experiencing a steeper collapse in output and employment than any other euro area country except Greece. By late 2010, the Irish government—which had enjoyed an AAA rating eighteen months earlier—was faced with soaring borrowing costs and a bank run, signaling that the market recognized how much the bank guarantee was going to cost the government, on top of the sharp deterioration in the rest of the budget. It had to turn to the IMF. Had Ireland been on its own, currency depreciation would surely have been part of the resolution of this crisis. Because Ireland is a member of the euro area, however, the bank run was financed by central bank liquidity, including ELA provided by the CBI (with acquiescence of the ECB and indemnity from the minister for finance).
That ELA amounted at its peak to almost 50 percent of Ireland’s GDP. All bank depositors were paid in full and in euros; the Irish crisis did not entail currency depreciation or reliance on the inflation tax. (The ECB, fearing contagion, pressed the government to ensure that even bonds for which the guarantee had expired would be paid.) The government provided cash and promissory notes totaling more than 40 percent of GDP to recapitalize the banks; the remaining costs were borne by the shareholders and unguaranteed subordinated debt holders. However, at least one-third of this government outlay was likely to be recovered, and much of the remainder was financed at very low servicing costs, thanks to the provision of low-cost long-term loans from European intergovernmental funds and thanks also to what was eventually a very accommodating policy of the ECB. The CBI had not foreseen in 2008 that the blanket bank guarantee would so severely limit room for maneuvering as the crisis unfolded or that it could trigger state insolvency. A more active central bank would have prepared better contingency plans and ensured more flexibility. Eventually, the full repertoire of central banking liquidity policy was brought to bear and helped redress the situation (Honohan 2016). The Irish crisis thus underscores the problems that result when the authorities are not adequately prepared for crisis management.
642 Honohan, Lombardi, and St. Amand
21.5.5 Euro Area: Government Bond Purchase Policies in the Bank and Sovereign Debt Crisis, 2010–2012
The EU Treaty is written to preclude the cross-border assumption of any liability for government obligations. This no-bailout rule was tested in early May 2010, when the market price of Greek government debt slid in anticipation of default, spilling over into other sovereign debt markets. The single currency entails a high degree of interconnectedness and ease of substitutability between financial assets across the multicountry euro area, making the financial systems of individual member states highly vulnerable to spillovers and contagion from shocks in other countries. Countries with high levels of debt or weak banking systems—including Ireland, Italy, Spain, and Portugal—were experiencing rising interest rates, and firms and households were finding it harder to gain access to credit. Limited tools and a lack of clear lines of authority within the euro area for dealing with financial stability problems on such an unforeseen scale exacerbated the emerging uncertainties. Currency depreciation and monetization of national fiscal problems were no longer available to member states. This emergence of fragmentation in euro area financial markets was impairing the transmission mechanism of monetary policy, justifying ECB intervention in the secondary market for government bonds that had been issued by stressed sovereigns. Some commentators challenged this policy, the Securities Markets Programme (SMP), as undermining the no-bailout principle fundamental to the euro area’s construction, even though only primary market purchases of government bonds were explicitly precluded in the ECB statute. The euro area’s multicountry character entails a relatively complex set of official institutions for economic and financial policy, not only at the level of the euro area itself but also in each member state and at the wider level of the European Union.
The ECB’s actions to tackle the euro area bank and sovereign debt crisis had to navigate the overlapping mandates of these institutions, while complying with its extensive, but in some important respects restricted, legal powers. The crisis thus presented difficult questions about the roles and responsibilities of different levels of government (such as national versus supranational) and authorities (such as fiscal versus monetary) in crisis management.15 The ECB’s government bond-buying programs were designed to protect the institution’s independence while performing its role in crisis management. The statute establishing the ECB requires it to be independent of government, with a mandate aimed primarily at maintaining price stability and legal constraints designed to ensure monetary dominance, meaning that monetary accommodation should not act as a substitute for fiscal indiscipline (Lombardi and Moschella 2015a, 2015b).16 This institutional framework shaped the Governing Council’s decision not to announce the SMP (with purchases of Greek, Irish, and Portuguese debt) until just after the European Council had agreed (in May 2010) to establish an intergovernmental lending facility—later transformed into the European Stability Mechanism (ESM)—as a fiscal backstop. This
delay made it harder to construe the ECB’s actions as providing a backstop for sovereigns (Irwin 2013, 127). It was a reminder to other EU policymakers that the Governing Council would not be pressured into using monetary policy to compensate for a lack of fiscal action (Bastasin 2014, 194). The ECB could thus stress that it had influenced the fiscal authorities and not the other way around (Der Spiegel 2010, interview with ECB President Trichet). A similar situation emerged during 2011, when the ECB waited until it had received fiscal commitments by the governments of Italy and Spain before extending the use of the SMP to those countries. The intention to assert monetary dominance similarly played a role in the design of the OMT program, in that compliance with the conditionality of an ESM program was required for the continuation of asset purchases under that program (Coeuré 2013; Draghi 2012a). While some observers question the effectiveness of the ECB’s policy response (see, for example, Gros, Alcidi, and Giovannini 2012), its caution may have been necessary to ensure that it continues to be viewed as a legitimate, independent, supranational policymaking authority. On the other hand, if the ECB had gone down the route of other major central banks and introduced an open-ended asset purchase program earlier in the GFC, before Greek debt was seen as a troubled asset, it is possible that the Greek debt crisis, when it emerged, could have been more effectively contained with fewer sudden stops. The potential implications of this counterfactual scenario raise important questions about multiple equilibria during macrofinancial crises. The euro area crisis and the economic policy response also underscore the importance of clearly identifying the responsible authorities and establishing crisis-response mechanisms.
In the absence of these mechanisms, there is a potential for political infighting, which could exacerbate crises and result in failures or delays in policy responses.
21.6 Conclusion
As custodians of the economy’s liquidity, central banks are thrust into the public eye during a crisis. Sometimes they seem all-powerful, with the ability to decide on the life or death of stressed financial institutions. Sometimes they seem paralyzed by the constraints of a limiting mandate. At times of crisis, central banks have reached for a much larger toolbox than is normal, reflecting the true scope of their mandate, explicit or implicit, as interpreted by the community of central bankers, especially through observation of historic practice in such circumstances. The GFC has seen the use of innovative financial engineering and an expansive approach to the provision of liquidity. To be effective in stabilizing the situation, rather than aggravating it, use of such tools requires careful analytical assessment of the underlying crisis circumstances. Their successful use also calls for ensuring political legitimacy, and this has implications for communications strategy and for the maintenance of relationships with government and other relevant agencies.
Given the need to make urgent decisions in the face of rapidly evolving circumstances, the central bank in a crisis thus faces challenges of economic analysis, market judgment, and political legitimacy. As illustrated by the Ireland and Argentina cases, insufficient prior analysis can result in poor policy decisions constraining policymakers’ later ability to implement an appropriate crisis response; the UK case illustrates the pitfalls of misjudging the communications requirements if an adverse market response to crisis measures is to be avoided; while the Indonesia and euro area examples show the importance of political legitimacy and a sufficiently articulated mandate. Lacking a simple criterion such as the rate of inflation, crisis-management decisions are inevitably contested in terms of both technical success and democratic legitimacy. The potential fiscal consequences of crisis management can test the limits of the doctrine of central bank independence. By enhancing the legitimacy of the central bank’s actions, cooperation with the relevant political authority can decrease the likelihood that its reputation will be damaged. For example, better cooperation between the BoE and HM Treasury could have avoided reputational damage in the Northern Rock affair. In the euro area, inadequate mechanisms for the coordination of government and central bank crisis management surely contributed to the significant decline in trust in the ECB and other European institutions during the crisis, as captured in the Eurobarometer survey. These case studies demonstrated why, in addition to efforts to avoid incoherent or conflicting policies, the central bank needs to engage collaboratively with the fiscal and other public authorities with which it shares responsibility for dealing with a crisis.
The far-reaching scale and scope of the crisis-management measures employed by the major central banks in the GFC, as in earlier crises, confirm the need for decisiveness and boldness in such circumstances. Indeed, the crisis-management response of each of the central banks in our case studies involves policy failures caused by weakness, delay, or insufficiency in policy action. A well-prepared central bank will be better placed than others to carry out such measures successfully and without delay. While the preparation of some actions demands secrecy, a considerable degree of transparency is needed both for the effectiveness of the policies and to ensure accountability and legitimacy. In crises, even more than at other times, central banks need to explain what they are doing and show that what they are doing will advance the explicit and implicit mandates that govern good crisis management.
Notes
1. This characterization is based on the definition of systemic risk in a report to the G20 Finance Ministers and Central Bank Governors by the IMF, BIS, and FSB.
2. See Lombardi and Schembri 2016 for a discussion of the role of central banks in financial-system stability frameworks.
3. See also Cukierman, Webb, and Neyapti 1992; Grilli, Masciandaro, and Tabellini 1991.
4. Chairman Ben Bernanke of the Fed consulted with the US Treasury throughout the Fed’s operations in 2008, and in September 2008, he asked US Treasury Secretary Hank Paulson to develop a plan to request approval from Congress on the use of taxpayer funds for stabilizing the financial system, a plan that later materialized into the Troubled Asset Relief Program (TARP) (Bernanke 2015, 299).
5. Baker (2006) analyzes the emergence of consensus among G7 finance ministries and central bankers, related to shared ideas, interests, and institutional roles.
6. Larry Summers, former secretary of the US Treasury, found the “slightly Luddite premise of this paper to be largely misguided,” while Donald Kohn, soon to be vice chair of the Fed, argued against Rajan that systemic risks had decreased with greater diversification.
7. For monetary policy, successful pursuit of price stability has been found to hinge on a societal preference for low-inflation policies, emanating from the financial sector and/or the public (Goodman 1991; Hayo 1998; Issing 1993; Posen 1995). Informal arrangements between the central bank and the government have also been found to affect central bank policies (Cukierman, Webb, and Neyapti 1992; Eijffinger and de Haan 1996).
8. The terms ELA and LOLR overlap in the literature. In this chapter, we refer to LOLR in the broader sense as a function of the central bank, using ELA to refer specifically to direct loans to individual institutions by the central bank in fulfillment of this function.
9. See Tucker 2016 for a discussion of issues of governance and democratic legitimacy in the central bank’s crisis-management function.
10. The debate over whether central banks should actively seek to deflate an asset price bubble and precisely how to judge when and how vigorously monetary policy should lean against the bubble is not the topic of the present chapter (Brunnermeier and Schnabel 2016; Reinhart and Rogoff 2009; Schularick and Taylor 2012).
11. Bagehot’s dictum suggests that the central bank should lend freely, at a penalty interest rate, against good collateral to prevent a systemic crisis caused by runs on solvent but illiquid banks. As noted by Castiglionesi and Wagner (2012), penalty rates do not help if the bank may be insolvent.
12. The effectiveness of these interventions has been extensively analyzed; see, for example, Del Negro et al. 2011.
13. See, for example, Domanski, Moessner, and Nelson 2014 for a categorization of crisis-management tools employed during the GFC.
14. This issue was especially controversial in the case of Cyprus; see Zenios 2016.
15. Such questions also became important in relation to ELA in the euro area, as discussed in Honohan 2017.
16. Monetary dominance describes the situation in which the central bank sets monetary policy independently of fiscal policy decisions; monetary policy thereby creates a constraint within which fiscal policy decisions must be made. The alternative is fiscal dominance, where the fiscal authority independently decides its budget, which the monetary authority accommodates by ensuring that there is sufficient liquidity in government bond markets and seigniorage revenue to satisfy the budget expenditures (see Sargent and Wallace 1981).
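The distinction in note 16 between monetary and fiscal dominance can be made concrete with the consolidated government budget constraint. The following is a standard textbook formulation in the spirit of Sargent and Wallace (1981), not an equation taken from this chapter; the notation (deficit $G_t - T_t$, bonds $B_t$, money $M_t$, nominal interest rate $i_{t-1}$) is supplied here for illustration:

```latex
% Consolidated government budget constraint (nominal terms): the primary
% deficit plus interest due on inherited debt must be financed either by
% issuing new bonds or by issuing new money.
\[
  \underbrace{G_t - T_t}_{\text{primary deficit}} \; + \; i_{t-1} B_{t-1}
  \;=\;
  \underbrace{(B_t - B_{t-1})}_{\text{net bond issuance}}
  \; + \;
  \underbrace{(M_t - M_{t-1})}_{\text{seigniorage}}
\]
% Monetary dominance: the central bank fixes the path of M_t first, so
% seigniorage is given and the fiscal authority must adjust the deficit
% or bond issuance to satisfy the identity.
% Fiscal dominance: the deficit path is fixed first, and M_t must adjust,
% with the inflationary consequences Sargent and Wallace emphasize.
```

Under monetary dominance the right-most term is set independently of the budget, which is the sense in which monetary policy "creates a constraint within which fiscal policy decisions must be made."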
References
Acharya, Viral, Itamar Drechsler, and Philipp Schnabl. 2014. “A Pyrrhic Victory? Bank Bailouts and Sovereign Credit Risk.” Journal of Finance 69, no. 6: 2689–2739.
Allen, Franklin, and Douglas Gale. 2000. “Financial Contagion.” Journal of Political Economy 108, no. 1: 1–33.
Anderson, Richard G., Michael Bordo, and John V. Duca. 2017. “Money and Velocity during Financial Crises: From the Great Depression to the Great Recession.” Journal of Economic Dynamics and Control 81: 32–49.
Baker, Andrew. 2006. The Group of Seven: Finance Ministries, Central Banks and Global Financial Governance. London: Routledge.
Ball, Laurence M. 2016. “The Fed and Lehman Brothers.” NBER working paper 22410.
Ball, Laurence, Joseph Gagnon, Patrick Honohan, and Signe Krogstrup. 2016. What Else Can Central Banks Do? Geneva Reports on the World Economy 18. Geneva: International Center for Monetary and Banking Studies; London: Centre for Economic Policy Research.
Bank for International Settlements. 2014. Rethinking the Lender of Last Resort. BIS Papers 79.
Bastasin, Carlo. 2014. Saving Europe: Anatomy of a Dream. Washington, D.C.: Brookings Institution Press.
Bean, Charles, Matthias Paustian, Adrian Penalver, and Tim Taylor. 2010. “Monetary Policy after the Fall.” In Macroeconomic Challenges: The Decade Ahead, 2010 Economic Policy Symposium Proceedings, 267–328. Kansas City, Missouri: Federal Reserve Bank of Kansas City.
Beckerman, Paul. 1992. “Public Sector ‘Debt Distress’ in Argentina, 1988–89.” World Bank working paper 902.
Bernanke, Ben S. 2010. “Central Bank Independence, Transparency, and Accountability.” Speech. Institute for Monetary and Economic Studies International Conference, Bank of Japan, Tokyo, May.
Bernanke, Ben S. 2015. The Courage to Act: A Memoir of a Crisis and Its Aftermath. New York: W. W. Norton.
Bernanke, Ben S., and Alan S. Blinder. 1992. “The Federal Funds Rate and the Channels of Monetary Transmission.” American Economic Review 82, no. 4: 901–921.
Bhattacharjee, Arnab, and Sean Holly. 2015. “Influence, Interactions and Heterogeneity: Taking Personalities out of Monetary Policy Decision-Making.” Manchester School 83, no. 2: 153–182.
Bini Smaghi, Lorenzo. 2007. “Central Bank Independence: From Theory to Practice.” Speech. Good Governance and Effective Partnership Conference, Hungarian National Assembly, Budapest, April.
Blinder, Alan S. 1998. Central Banking in Theory and Practice. Cambridge: MIT Press.
Blinder, Alan S. 2000. “Central-Bank Credibility: Why Do We Care? How Do We Build It?” American Economic Review 90, no. 5: 1421–1431.
Blinder, Alan S. 2009. “Making Monetary Policy by Committee.” International Finance 12, no. 2: 171–194.
Blinder, Alan S. 2012. “Global Policy Perspectives: Central Bank Independence and Credibility during and after a Crisis.” In The Changing Policy Landscape, 2012 Economic Policy Symposium Proceedings, 483–691. Kansas City, Missouri: Federal Reserve Bank of Kansas City.
Blinder, Alan S., Charles A. Goodhart, Philipp M. Hildebrand, David Lipton, and Charles Wyplosz. 2003. How Do Central Banks Talk? Geneva Reports on the World Economy 3. Geneva: International Center for Monetary and Banking Studies; London: Centre for Economic Policy Research.
Bodea, Cristina, and Stefan Huemer. 2010. “Dancing Together at Arm’s Length? The Interaction of Central Banks with Governments in the G7.” ECB occasional paper 120.
Bordo, Michael D., and Christopher M. Meissner. 2016. “Fiscal and Financial Crises.” Handbook of Macroeconomics 2: 355–412.
Borio, Claudio, and Gianni Toniolo. 2011a. “One Hundred and Thirty Years of Central Bank Cooperation: A BIS Perspective.” In The Past and Future of Central Bank Cooperation, edited by Claudio Borio and Gianni Toniolo, 16–75. New York: Cambridge University Press.
Borio, Claudio, and Gianni Toniolo, eds. 2011b. The Past and Future of Central Bank Cooperation. New York: Cambridge University Press.
Brunnermeier, Markus, and Harold James. 2015. “Making Sense of the Swiss Shock.” Project Syndicate, January 17. https://www.project-syndicate.org/commentary/swiss-central-bank-stops-swiss-franc-euro-peg-by-markus-brunnermeier-and-harold-james-2015-01.
Brunnermeier, Markus K., and Lasse Heje Pedersen. 2008. “Market Liquidity and Funding Liquidity.” Review of Financial Studies 22, no. 6: 2201–2238.
Brunnermeier, Markus K., and Ricardo Reis. 2017. “A Crash Course on the Euro Crisis.” Mimeo. Princeton University and London School of Economics and Political Science.
Brunnermeier, Markus K., and Isabel Schnabel. 2016. “Bubbles and Central Banks: Historical Perspectives.” In Central Banks at a Crossroads: What Can We Learn from History? edited by Michael D. Bordo, Øyvind Eitrheim, Marc Flandreau, and Jan Qvigstad. Cambridge: Cambridge University Press.
Buiter, Willem H. 2010. “Central Banks and Financial Crises.” In Maintaining Stability in a Changing Financial System, 2008 Economic Policy Symposium Proceedings, 495–633. Kansas City, Missouri: Federal Reserve Bank of Kansas City.
Caprio, Gerard Jr., and Patrick Honohan. 2015. “Banking Crises: Those Hardy Perennials.” In The Oxford Handbook of Banking, 2nd ed., edited by Allen N. Berger, Philip Molyneux, and John O. S. Wilson, 700–720. New York: Oxford University Press.
Cargill, Thomas F., Michael M. Hutchison, and Takatoshi Ito. 1997. The Political Economy of Japanese Monetary Policy. Cambridge: MIT Press.
Castiglionesi, Fabio, and Wolf Wagner. 2012. “Turning Bagehot on His Head: Lending at Penalty Rates When Banks Can Become Insolvent.” Journal of Money, Credit and Banking 44: 201–219.
Cerutti, Eugenio, Stijn Claessens, and Patrick McGuire. 2012. “Systemic Risks in Global Banking: What Can Available Data Tell Us and What More Data Are Needed?” NBER working paper 18531.
Cochrane, John H. 2011. “Understanding Policy in the Great Recession: Some Unpleasant Fiscal Arithmetic.” European Economic Review 55: 2–30.
Coeuré, Benoît. 2013. “Outright Monetary Transactions, One Year On.” Speech. The ECB and Its OMT Programme Conference, Centre for Economic Policy Research, German Institute for Economic Research, and KfW Bankengruppe, Berlin, September 2.
Committee on Financial Services. 2009. “An Examination of the Extraordinary Efforts by the Federal Reserve Bank to Provide Liquidity in the Current Financial Crisis.” Hearing. US House of Representatives, 111th Congress, first session, no. 111-3, February 10.
Cukierman, Alex, Steven B. Webb, and Bilin Neyapti. 1992. “Measuring the Independence of Central Banks and Its Effect on Policy Outcomes.” World Bank Economic Review 6, no. 3: 353–398.
Debelle, Guy, and Stanley Fischer. 1994. “How Independent Should a Central Bank Be?” In Goals, Guidelines, and Constraints Facing Monetary Policymakers, Conference Series No. 38, edited by Jeffrey C. Fuhrer, 195–221. Boston: Federal Reserve Bank of Boston.
De la Torre, Augusto, Eduardo Levy Yeyati, and Sergio Schmukler. 2003. “Living and Dying with Hard Pegs: The Rise and Fall of Argentina’s Currency Board.” World Bank policy research working paper 2980.
Del Negro, Marco, Gauti Eggertsson, Andrea Ferrero, and Nobuhiro Kiyotaki. 2011. “The Great Escape? A Quantitative Evaluation of the Fed’s Liquidity Facilities.” Federal Reserve Bank of New York staff report 520.
Der Spiegel. 2010. “A ‘Quantum Leap’ in Governance of the Euro Zone Is Needed.” Der Spiegel Online, May 15. http://www.spiegel.de/international/europe/european-central-bank-president-jean-claude-trichet-a-quantum-leap-in-governance-of-the-euro-zone-is-needed-a-694960-3.html.
Diamond, Douglas W., and Raghuram G. Rajan. 2011. “Fear of Fire Sales, Illiquidity Seeking, and Credit Freezes.” Quarterly Journal of Economics 126, no. 2: 557–591.
Dincer, N. Nergiz, and Barry Eichengreen. 2014. “Central Bank Transparency and Independence: Updates and New Measures.” International Journal of Central Banking 10, no. 1: 189–253.
Domanski, Dietrich, Richhild Moessner, and William Nelson. 2014. “Central Banks As Lender of Last Resort: Experience during the 2007–2010 Crisis and Lessons for the Future.” In Rethinking the Lender of Last Resort, BIS Papers 79: 43–75.
Draghi, Mario. 2012a. “Introductory Statement to the Press Conference (with Q&A).” ECB, Brdo pri Kranju, October 4. https://www.ecb.europa.eu/press/pressconf/2012/html/is121004.en.html#qa.
Draghi, Mario. 2012b. “Verbatim of the Remarks Made by Mario Draghi.” Global Investment Conference, London, July 26.
Drechsler, Itamar, Thomas Drechsel, David Marques-Ibanez, and Philipp Schnabl. 2016. “Who Borrows from the Lender of Last Resort?” Journal of Finance 71, no. 5: 1933–1974.
Eijffinger, Sylvester C. W., and Jakob de Haan. 1996. “The Political Economy of Central Bank Independence.” Princeton Special Papers in International Economics 19.
Enoch, Charles, Barbara Baldwin, Olivier Frécaut, and Arto Kovanen. 2001. “Indonesia: Anatomy of a Banking Crisis.” IMF working paper 01/52.
Farhi, Emmanuel, and Jean Tirole. 2017. “Deadly Embrace: Sovereign and Financial Balance Sheets Doom Loops.” Review of Economic Studies 85, no. 3: 1781–1823.
Farvaque, Etienne, Hakim Hammadou, and Piotr Stanek. 2011. “Selecting Your Inflation Targeters: Background and Performance of Monetary Policy Committee Members.” German Economic Review 12, no. 2: 223–238.
Feldstein, Martin. 2012. “Fed Policy and Inflation Risk.” Project Syndicate, March 31. https://www.project-syndicate.org/commentary/fed-policy-and-inflation-risk.
Fisher, Paul. 2010. “The Corporate Sector and the Bank of England’s Asset Purchases.” Speech. Association of Corporate Treasurers Winter Paper, February 18.
Freixas, Xavier. 1999. “Optimal Bail Out Policy, Conditionality and Constructive Ambiguity.” Mimeo. Bank of England.
Freixas, Xavier, and Bruno M. Parigi. 2014. “Lender of Last Resort and Bank Closure Policy: A Post-Crisis Perspective.” In Oxford Handbook of Banking, 2nd ed., edited by Allen N. Berger, Philip Molyneux, and John O. S. Wilson, 474–504. New York: Oxford University Press.
Garcia-de-Andoain, Carlos, Florian Heider, Marie Hoerova, and Simone Manganelli. 2016. “Lending-of-Last-Resort Is As Lending-of-Last-Resort Does: Central Bank Liquidity Provision and Interbank Market Functioning in the Euro Area.” Journal of Financial Intermediation 28: 32–47.
Geithner, Timothy F. 2016. “Are We Safer? The Case for Updating Bagehot.” Per Jacobsson Lecture, IMF Annual Meeting, Washington, D.C., October.
Gelpern, Anna. 2004. “Systemic Bank and Corporate Distress from Asia to Argentina: What Have We Learned?” International Finance 7, no. 1: 151–168.
Geraats, Petra M. 2002. “Central Bank Transparency.” Economic Journal 112, no. 483: F532–F565.
Gerlach-Kristen, Petra. 2009. “Outsiders at the Bank of England’s MPC.” Journal of Money, Credit and Banking 41, no. 6: 1099–1115.
Gertler, Mark, Nobuhiro Kiyotaki, and Albert Queralto. 2012. “Financial Crises, Bank Risk Exposure and Government Financial Policy.” Journal of Monetary Economics 59 (S1): 17–34.
Goldstein, Morris. 2003. “IMF Structural Programs.” In Economic and Financial Crises in Emerging Market Economies, edited by Martin Feldstein, 363–437. Chicago: University of Chicago Press.
Goodfriend, Marvin. 2007. “How the World Achieved Consensus on Monetary Policy.” Journal of Economic Perspectives 21, no. 4: 47–68.
Goodfriend, Marvin, and Robert G. King. 1988. “Financial Deregulation, Monetary Policy and Central Banking.” In Restructuring Banking and Financial Services in America, edited by William S. Haraf and Rose Kushmeider. American Enterprise Institute Studies 481. Lanham: University Press of America.
Goodhart, Charles A. E. 2011. “The Changing Role of Central Banks.” Financial History Review 18, no. 2: 135–154.
Goodhart, Charles A. E., and Haizhou Huang. 1999. “A Model of the Lender of Last Resort.” IMF working paper 99/39.
Goodman, John B. 1991. “The Politics of Central Bank Independence.” Comparative Politics 23, no. 3: 329–349.
Gorton, Gary B., and Andrew Metrick. 2012. “Getting Up to Speed on the Financial Crisis: A One-Weekend-Reader’s Guide.” Journal of Economic Literature 50, no. 1: 128–150.
Gorton, Gary B., and Guillermo Ordoñez. 2013. “The Supply and Demand for Safe Assets.” NBER working paper 18732.
Grilli, Vittorio, Donato Masciandaro, and Guido Tabellini. 1991. “Political and Monetary Institutions and Public Financial Policies in the Industrial Countries.” Economic Policy 13: 341–392.
Gros, Daniel, Cinzia Alcidi, and Alessandro Giovannini. 2012. “Central Banks in Times of Crisis: The FED vs. the ECB.” CEPS policy brief 276, July.
Hauser, Andrew. 2014. “Lender of Last Resort Operations during the Financial Crisis: Seven Practical Lessons from the United Kingdom.” In Rethinking the Lender of Last Resort, BIS Papers 79, 81–92.
Hauser, Andrew. 2016. “Between Feast and Famine: Transparency, Accountability and the Lender of Last Resort.” Speech. Committee on Capital Markets Regulation Conference, The Lender of Last Resort: An International Perspective, Washington, D.C., February.
Hayo, Bernd. 1998. “Inflation Culture, Central Bank Independence and Price Stability.” European Journal of Political Economy 14: 241–263.
Hayo, Bernd, and Pierre-Guillaume Méon. 2013. “Behind Closed Doors: Revealing the ECB’s Decision Rule.” Journal of International Money and Finance 37: 135–160.
Heider, Florian, Marie Hoerova, and Cornelia Holthausen. 2009. “Liquidity Hoarding and Interbank Market Spreads: The Role of Counterparty Risk.” ECB working paper 1126.
Honohan, Patrick. 2009. “Resolving Ireland’s Banking Crisis.” Economic and Social Review 40, no. 2: 207–231.
Honohan, Patrick. 2016. “Debt and Austerity: Post-Crisis Lessons from Ireland.” Journal of Financial Stability 24: 149–157.
Honohan, Patrick. 2017. “Management and Resolution of Banking Crises: Lessons from Recent European Experience.” Peterson Institute for International Economics policy brief 17-1, Washington, D.C.
House of Commons Treasury Committee. 2008. The Run on the Rock. Fifth Report of Session 2007–08, Vol. 1, January 24.
Howarth, David. 2007. “Running an Enlarged Euro-Zone—Reforming the European Central Bank: Efficiency, Legitimacy and National Economic Interest.” Review of International Political Economy 14, no. 5: 820–841.
IMF Independent Evaluation Office. 2004. The IMF and Argentina 1991–2001. Washington, D.C.: International Monetary Fund.
Irwin, Neil. 2013. The Alchemists: Three Central Bankers and a World on Fire. New York: Penguin.
Issing, Otmar. 1993. “Central Bank Independence and Monetary Stability.” Institute of Economic Affairs occasional paper 89, London.
Issing, Otmar. 2011. “Lessons for Monetary Policy: What Should the Consensus Be?” IMF working paper 11/97.
Johnson, Juliet. 2016. Priests of Prosperity: How Central Bankers Transformed the Postcommunist World. Ithaca: Cornell University Press.
Jordà, Òscar, Moritz Schularick, and Alan M. Taylor. 2015. “Leveraged Bubbles.” Journal of Monetary Economics 76, Supplement (December): S1–S20.
Kirshner, Jonathan. 2003. “Explaining Choices about Money: Disentangling Power, Ideas, and Conflict.” In Monetary Orders: Ambiguous Economics, Ubiquitous Politics, edited by Jonathan Kirshner, 260–280. Ithaca: Cornell University Press.
Krugman, Paul. 2012. End This Depression Now! New York: W. W. Norton.
Laeven, Luc, and Fabian Valencia. 2012. “Systemic Banking Crises Database: An Update.” IMF working paper 12/163.
Lockwood, Erin. 2016. “The Global Politics of Central Banking: A View from Political Science.” The Changing Politics of Central Banking Series, Mario Einaudi Center for International Studies, Part 3, 5–16.
Lombardelli, Clare, James Proudman, and James Talbot. 2005. “Committees versus Individuals: An Experimental Analysis of Monetary Policy Decision Making.” International Journal of Central Banking 1, no. 1: 181–205.
Lombardi, Domenico, and Manuela Moschella. 2015a. “The Government Bond Buying Programmes of the European Central Bank: An Analysis of Their Policy Settings.” Journal of European Public Policy 23, no. 6: 851–870.
Lombardi, Domenico, and Manuela Moschella. 2015b. “The Institutional and Cultural Foundations of the Federal Reserve’s and ECB’s Non-Standard Policies.” Stato e Mercato 1 (April): 127–152.
Lombardi, Domenico, and Lawrence Schembri. 2016. “Reinventing the Role of Central Banks in Financial Stability.” Bank of Canada Review (Autumn): 1–11.
Lombardi, Domenico, and Pierre Siklos. 2016. “Benchmarking Macroprudential Policies: An Initial Assessment.” Journal of Financial Stability 27: 35–49.
Marcussen, Martin. 2006. “The Fifth Age of Central Banking in the Global Economy.” Frontiers of Regulation conference, University of Bath, September.
McLeod, Ross H. 2004. “Dealing with Bank System Failure: Indonesia 1997–2003.” Bulletin of Indonesian Economic Studies 40, no. 1: 95–116.
Mishkin, Frederic S. 2011. “Monetary Policy Strategy: Lessons from the Crisis.” NBER working paper 16755.
Nelson, Stephen C., and Peter J. Katzenstein. 2014. “Uncertainty, Risk, and the Financial Crisis of 2008.” International Organization 68 (Spring): 361–392.
Philippon, Thomas, and Philipp Schnabl. 2013. “Efficient Recapitalization.” Journal of Finance 68: 1–48.
Plenderleith, Ian. 2012. “Review of the Bank of England’s Provision of Emergency Liquidity Assistance 2008–09.” Report to Court of the Bank of England, October.
Posen, Adam S. 1995. “Declarations Are Not Enough: Financial Sector Sources of Central Bank Independence.” In NBER Macroeconomics Annual 1995, Vol. 10, edited by Ben S. Bernanke and Julio J. Rotemberg, 253–274. Cambridge: MIT Press.
Radelet, Steven, and Jeffrey D. Sachs. 1998. “The East Asian Financial Crisis: Diagnosis, Remedies, Prospects.” Brookings Papers on Economic Activity 1: 1–90.
Rajan, Raghuram G. 2005. “Has Financial Development Made the World Riskier?” In The Greenspan Era: Lessons for the Future, 2005 Economic Policy Symposium Proceedings, 313–369. Kansas City, Missouri: Federal Reserve Bank of Kansas City.
Reinhart, Carmen M., and Kenneth Rogoff. 2009. This Time Is Different: Eight Centuries of Financial Folly. Princeton: Princeton University Press.
Reinhart, Vincent. 2011. “A Year of Living Dangerously: The Management of the Financial Crisis in 2008.” Journal of Economic Perspectives 25, no. 1: 71–90.
Riboni, Alessandro, and Francisco J. Ruge-Murcia. 2010. “Monetary Policy by Committee: Consensus, Chairman Dominance, or Simple Majority?” Quarterly Journal of Economics 125, no. 1: 363–416.
Saccomanni, Fabrizio. 2016. “Policy Cooperation in the Euro Area in Time of Crisis: A Case of Too Little, Too Late.” In Managing Complexity: Economic Policy Cooperation after the Crisis, edited by Tamim Bayoumi, Stephen Pickford, and Paola Subacchi, 113–138. Washington, D.C.: Brookings Institution Press.
Sargent, Thomas J., and Neil Wallace. 1981. “Some Unpleasant Monetarist Arithmetic.” Federal Reserve Bank of Minneapolis Quarterly Review (Fall): 1–17.
Schularick, Moritz, and Alan M. Taylor. 2012. “Credit Booms Gone Bust: Monetary Policy, Leverage Cycles and Financial Crises, 1870–2008.” American Economic Review 102, no. 2: 1029–1061.
Sibert, Anne. 2003. “Monetary Policy Committees: Individual and Collective Reputations.” Review of Economic Studies 70, no. 3: 649–665.
Sinn, Hans-Werner. 2014. The Euro Trap: On Bursting Bubbles, Budgets, and Beliefs. New York: Oxford University Press.
Smales, Lee A., and Nick Apergis. 2016. “The Influence of FOMC Member Characteristics on the Monetary Policy Decision-Making Process.” Journal of Banking & Finance 64 (March): 216–231.
Stiglitz, Joseph E. 2016. The Euro: How a Common Currency Threatens the Future of Europe. New York: W. W. Norton.
Svensson, Lars E. O. 2009. “Flexible Inflation Targeting—Lessons from the Financial Crisis.” Speech. Towards a New Framework for Monetary Policy? Lessons from the Crisis workshop. Netherlands Bank, Amsterdam, September.
Tirole, Jean. 2011. “Illiquidity and All Its Friends.” Journal of Economic Literature 49, no. 2: 287–325.
Tucker, Paul. 2014. “The Lender of Last Resort and Modern Central Banking: Principles and Reconstruction.” In Rethinking the Lender of Last Resort, BIS Papers 79, 10–42.
Tucker, Paul. 2016. “How Can Central Banks Deliver Credible Commitment and Be ‘Emergency Institutions’?” In Central Bank Governance and Oversight Reform, edited by John H. Cochrane and John B. Taylor, 1–30. Stanford: Hoover Institution Press.
Ueda, Kazuo. 2012. “Japan’s Deflation and the Bank of Japan’s Experience with Nontraditional Monetary Policy.” Journal of Money, Credit and Banking 44, no. 1: 175–190.
Warsh, Kevin. 2014. Transparency and the Bank of England’s Monetary Policy Committee. Independent Review for the Bank of England, December.
Winters, Bill. 2012. Review of the Bank of England’s Framework for Providing Liquidity to the Banking System. Report to Court of the Bank of England, October.
Woodford, Michael. 2005. “Central Bank Communication and Policy Effectiveness.” NBER working paper 11898.
Woodford, Michael. 2009. “Convergence in Macroeconomics: Elements of the New Synthesis.” American Economic Journal: Macroeconomics 1, no. 1: 267–279.
Wyplosz, Charles. 2016. “The Six Flaws of the Eurozone.” Economic Policy 31, no. 87: 559–606.
Zenios, Stavros A. 2016. “Fairness and Reflexivity in the Cyprus Bail-In.” Empirica 43, no. 3: 579–606.
Part VII
EVOLUTION OR REVOLUTION IN POLICY MODELING?
Chapter 22
Macromodeling, Default, and Money
Charles Goodhart, Nikolaos Romanidis, Martin Shubik, and Dimitrios P. Tsomocos1
22.1 Why Mainstream Macromodeling Is Insufficient

When the Queen came to the London School of Economics to open a new academic building in November 2008, shortly after the financial crisis had struck, she turned to one of her hosts and asked, "Why did no one warn us of this?" One reason for this failure to foresee the coming financial debacle was that mainstream economic models had mostly assumed away all such financial frictions. Standard dynamic stochastic general equilibrium (DSGE) models were in essence real (real business cycle, RBC) models, with the addition of price/wage stickiness (frictions). All models involve simplification, and such simplification has both benefits, in enabling focus and a capacity to manipulate and solve, and costs, in the guise of departure from reality. The assumption of no default by any agent, as represented by the transversality condition, for example, allowed for much simplification. The whole financial sector could be eliminated from such models; all agents could borrow, or lend, at the same riskless interest rate(s), since there was no credit (counterparty) or liquidity risk; and it greatly facilitated the adoption of the "representative agent" as a modeling device (this also has the incidental property that it rules out any examination of changes in the wealth structure of an economy).
We would like to thank Vassili Bazinas, Udara Peiris, Xuan Wang, Ji Yan, and especially Nuwat Nookhwun, Pierre Siklos, Geoffrey Wood, and one anonymous referee for their helpful comments. However, all remaining errors are ours.
Under normal conditions, the treatment of finance as a "veil" and the assumption that default is constant and low may be a reasonable approximation of reality. But then, in such normal times, almost any form of simple modeling, such as a vector autoregression, will forecast as well as or better than the more complex macromodels. However, what appears clear from the historical records is that really serious disturbances to the economy, both deflation and depression on the one hand and inflation on the other, are closely connected to monetary and financial malfunctions. In particular, many of the most severe depressions have been set off by bank defaults and the concomitant collapse of asset prices. Among such occasions are the 1907 failure of the Knickerbocker Trust, the 1929–1932 failures of Creditanstalt and the Bank of United States, and the 2008 failure of Lehman Brothers. The failures and the expectations of potential failures of major financial entities have been key drivers of macroeconomic weakness throughout modern history. While the standard DSGE model does not include financial intermediaries, let alone the rare instances of their default, it does normally include (outside) money. Fluctuations in monetary growth, which may be caused in part by financial failure, can, with sticky prices, lead to depression in such models when monetary growth is abruptly cut back, and, of course, to inflation when monetary growth surges. But the inclusion of money in such a model is anomalous and internally inconsistent, as Hahn (1965) originally argued and as has been repeatedly underlined by the authors here (Dubey and Geanakoplos 1992; Shubik and Tsomocos 1992; Quint and Shubik 2013; Goodhart and Tsomocos 2011). If no one ever defaults, anyone's personal IOU will be perfectly acceptable and perfectly liquid in purchasing any good, service, or other (financial) asset from anyone else.
There is no need for any specific money, and devices such as cash-in-advance constraints, money in the utility function, and so on, are not only artificial but inconsistent with the other assumptions of the model (see, for example, Black 1970 and 2009 and the experimental work of Huber, Shubik, and Sunder 2011). Moreover, the "no default" assumption of mainstream DSGE models is internally inconsistent in other respects. There is no explicit mechanism in such models to enforce repayment (plus interest). Homo economicus in such models is supposed to be rationally selfish. So if he can default in circumstances when the benefit of so doing (i.e., more consumption) outweighs the cost, he should do so. Yet he is assumed not to do so: an internal inconsistency. In essence, if individuals, or institutions, are meant to have any strategic power whatsoever, while lending plays any role whatsoever, bankruptcy and default rules are required as a logical necessity, not just an institutional addition. The minimum addition to standard macromodels that is necessary to introduce money, liquidity, credit risk, financial intermediaries (such as banks), and financial frictions in general (and bank and other agents' failures in particular) is the possibility of default. It has been a major part of the work of Shubik, one of this chapter's authors, to model default and to show how such default affects the working of the financial system and the economy more widely (Shubik 1973; Shubik and Wilson 1977). It has also been the largest part of our analytical endeavor to extend this work toward the construction, calibration, and estimation of macromodels in which default plays a central role
and in which, therefore, monetary and banking developments have real effects (money and banks are nonneutral). The development of strategic market games (Shubik 1973; Shapley and Shubik 1977) came about in order to turn the essentially static general equilibrium model into a system with fully and properly delineated processes for dynamic adjustment. Money played a critical role in providing the extra degree of freedom that enabled the system to be defined out of equilibrium as well as in equilibrium; but money alone, unless in sufficient quantity and appropriately distributed, did not guarantee efficient market outcomes. Credit had to be introduced. The introduction of credit leads to the need to specify default conditions.2 Furthermore, if through error, intent, or chance credit cannot be repaid, the default laws keep the rules of motion well defined. Since the original introduction of default into the economic system, there has been considerable growth in the literature on bankruptcy and default, both in an open or partial equilibrium setting and in a closed or general equilibrium setting, as represented by the works of Hellwig (1981), Dubey, Geanakoplos, and Shubik (2005), Zame (1993), Geanakoplos and Zame (2010), and, in a dynamic programming setting, Geanakoplos et al. (2000). Our approach may be compared and contrasted with the classical Walras-Arrow-Debreu (W-A-D) general economic equilibrium model and its extension by Lucas and his followers (e.g., Stokey and Lucas 1989). The W-A-D model is essentially noninstitutional and static. No institutions are present in this model, and the market structure of the economy is ad hoc in the sense that the "invisible hand" brings the system to a resting point. All the real-world phenomena that pertain to credit, liquidity, default, and financial intermediation are absent from the classical paradigm.
In addition, the W-A-D model is static and does not detail the route whereby equilibrium is attained; that is, it exists only in equilibrium. Through the operation of the invisible hand, one can study the equilibrium properties of the model, provided that one such equilibrium is selected arbitrarily, because the classical model as developed by Arrow and Debreu (see Debreu 1959) manifests a great degree of nominal indeterminacy. In other words, since there are more unknowns than equations, the equilibrium specification requires additional ad hoc assumptions to be well defined. Thus, the process that leads to an equilibrium outcome is undefined and patently absent from that model. Our approach is not subject to any of the above criticisms. In particular, the minimal necessary institutions emerge as logical necessities to close our model, and the process of adjustment is an integral part of our framework. The degrees of freedom that actually exist, even in the classical model, are so many that institutions have to emerge to generate a determinate solution (e.g., Lin, Tsomocos, and Vardoulakis 2016). Unless one is prepared to make the implausible assumptions of complete financial markets, no transactions costs, and personal complete commitment (no strategic default), then money (fiat or commodity) has to be introduced as a means of payment. Money and the associated transactions technology remove the nominal indeterminacy that the classical W-A-D model (even without uncertainty) manifests. In addition, the transactions technology that we introduce underscores the importance and the real effects of liquidity in the economy and, consequently, of monetary policy. The aforementioned analysis is
incorporated in the institutional economics as it has been developed by Shubik (1999). In this framework, liquidity, credit, default, and financial intermediaries codetermine the real and the nominal variables of the economy. In our approach, there exists a genuine interaction of the real and the financial sectors of the economy. The study of monetary policy must be done in a framework where credit, money, and liquidity matter. Analysis of monetary policy and, more generally, of fiscal and regulatory policy in a nonmonetary world is akin to performing surgery on the operating table without the patient being present. The real world is uncertain as well as risky; there are unknown unknowns as well as known unknowns (as US Secretary of Defense Donald Rumsfeld said in 2002). It is too costly to hedge all eventualities, even when their probabilities may be assessed. When people can get away with not paying their debts, they will do so. So the first step in our analysis is to consider the possibility of default on the contractual obligations that an agent undertakes, which, in turn, indicates the necessity of some form of cash-in-advance constraint. The interplay of liquidity and default justifies (fiat) money as the stipulated medium of exchange. Otherwise, the mere presence of a monetary sector without any possibility of endogenous default or any other friction in equilibrium would become a veil without affecting real trade and, eventually, the final equilibrium allocation. Indeed, cash-in-advance constraints are the minimal institutional arrangement to capture a fundamental aspect of liquidity and how it interacts with default to affect the real economy. The introduction of institutions in our framework and the specific price formation mechanisms that we analyze explicitly require process-oriented models that are well defined not only in equilibrium but also out of equilibrium.
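The cash-in-advance constraint invoked above can be made concrete in one line. The sketch below is purely illustrative (the function name and numbers are ours, not the chapter's): purchases in a period are capped by real money balances, however much the agent would like to buy.

```python
def cash_in_advance_consumption(money_holdings, price, desired):
    """Cash-in-advance constraint: purchases this period cannot exceed
    real money balances m / p, whatever quantity the agent desires."""
    return min(desired, money_holdings / price)

# With m = 10 and p = 2, an agent who wants 8 units can buy only 5:
# liquidity, not preferences, binds, which is how money acquires real effects.
print(cash_in_advance_consumption(10.0, 2.0, 8.0))  # 5.0
```

In the strategic market game literature this cap is what gives fiat money its role; relaxing it requires credit, and credit, as argued above, requires default rules.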
The models that we have produced are all "playable games" in the sense that they can be implemented, for example, in a classroom, with clear and well-articulated rules of the game. This is genuinely useful, as a fully specified setup ensures that conclusions rest on explicit rules rather than on hidden assumptions. In conclusion, we argue that the introduction of institutions and of financial frictions in an economic model provides for the guidance of the economy as a loosely coupled economic system and acts as its "neural network." The inclusion of default, money, financial institutions, liquidity, and so on, allows policy to be explicitly addressed and to connect the different players, including differing generations, in a coherent fashion. It is only by constructing a mathematical institutional economics that one can study the economic system in a rigorous and analytical manner. This should be done. But if it were easy, it would already have been done. It is not easy, because in a world with default, money, and credit risk, there are numerous (evolving) institutions and financial instruments, not just a single (credit-riskless) interest rate at which everyone can borrow or lend (see, for example, Woodford 2003). Moreover, agents differ in their risk aversion and probability of default; hence there is no room for the "representative agent" simplification in a satisfactory model. To give a brief illustration of the scale and complexity of the kind of minimum model required (note that it is without housing, fiscal policy, or nonbank financial intermediaries and is a single, closed
[Figure 22.1. Description of the minimum model, where the arrows represent behavioral interactions among the various sectors and agents of the economy. The diagram connects firms, consumers, and traders (exchanging goods, bids, and information) with the bank and the central bank (loans, credits, and evaluation), the clearinghouse (payments netting and settlement adjustment), the courthouse (credit evaluation), and the stock, commodity, and goods markets.]
economy), see Figure 22.1, where the arrows represent behavioral interactions among the various sectors and agents of the economy. The complete model is far from being fully developed. Nevertheless, the work that we describe in this chapter constitutes an initial step in this direction. Finally, since we depart from the complete-markets paradigm, missing financial markets have important welfare consequences. As Geanakoplos and Polemarchakis (1986) argued, the general equilibrium with incomplete markets model leads to constrained-inefficient equilibria. Not only does the first welfare theorem not hold, as it does in the standard W-A-D model, but the second best is not obtained either. The consequence of this is that policy and government intervention may generate welfare-improving effects for consumers and firms without necessarily removing the frictions in the short run. Indeed, Goodhart et al. (2013) produce one such example whereby multiple regulations Pareto-improve the equilibrium allocation.
22.2 Modeling a Macroeconomic System with Default

The possibility of default among economic agents, companies, and, above all, financial institutions is not only pervasive but at times can have a dominating impact on macroeconomic developments. Trying to remove default from macroeconomic models means that they are at best partial and at worst, as in the years since 2007, almost completely useless. Unfortunately, the attempt to incorporate default as a central, key component of such models makes their construction far more difficult. In particular, it makes the dimensionality of the model much larger. It is no longer feasible to use the useful fiction of a representative agent. Not only do agents, in practice, vary in risk aversion (and such risk aversion may vary over time in response to events), but if all agents were required to move in lockstep as a representative agent, then either none defaults (so the model reverts to the default-free form) or they all default simultaneously, which may lead to the collapse of the model. Next, the number of sectors in the model will increase. With default, there is a twofold function for banks. First, on the liability side, bank deposits, especially when insured, are less subject to default (have a lower probability of default, or PD) than the IOUs of (most) other members of the private sector and hence come to be used as an acceptable means of payment. Second, on the asset side, banks should have a comparative advantage in assessing the creditworthiness of other agents' IOUs (borrowing proposals) and monitoring them to ensure that PD is kept lower.3 Likewise, with incomplete financial markets and asymmetric information, there is room for a large assortment of other financial intermediaries. In the standard DSGE model, there are two assets (although, as argued above, one of these, outside money, i.e., zero-interest-bearing currency, is actually redundant and should play no role).
The other asset in the model is riskless debt, which all agents can issue for borrowing or buy for saving. In equilibrium, when the model settles down, all borrowers in the final period just have enough income to pay off their entire debt with certainty, and all net savers just obtain sufficient repayment in that final period to finance their planned expenditures. Hence, there is no residual availability of funds, or equity finance, in such models. By contrast, once default is allowed to occur, a much wider menu of financial assets naturally arises, including currency, bank deposits and bank loans, debt with a wide range of credit ratings, and a whole range of derivatives, other intermediary liabilities and assets, and equity. In other words, it is default that gives a meaning to and an explanation for the wide assortment of financial assets/liabilities that we observe in practice. Put it all together, and this means that a model with default at its center will have more agents per sector, more sectors, and more assets. The problem of a potentially exploding dimensionality immediately arises (besides the additional complexity of maximizing
subject to bankruptcy costs). This can be, and has been, partially resolved by refocusing the model in each case to deal with a specific issue and omitting those agents/sectors/assets not strictly necessary for the question at hand. But this means that models with default have not yet been "general" in the sense of encompassing a complete macroeconomy for all questions. Yet good application calls for context-specific models of specific economies designed to answer specific real-world questions with a general theoretical basis that can be applied to the case at hand. A main challenge is still how to extend such general models with default (as in Shubik 1999; Dubey and Geanakoplos 2003; Tsomocos 2003; Dubey, Geanakoplos, and Shubik 2005; Goodhart, Sunirand, and Tsomocos 2006; and Lin, Tsomocos, and Vardoulakis 2016) in a dynamic setting, preserving the main properties of a parsimonious model, without the inclusion of redundant or unrealistic features. Default in the aforementioned models is introduced as an endogenous choice variable: agents weigh the marginal benefits of defaulting against its costs, which take the form either of loss of collateral or of nonpecuniary penalties, modeled by subtracting from utility a term proportional to the level of default. The initial endeavors, by de Walque et al. (2010), Leao and Leao (2006), and Iacoviello (2005), that introduced those concepts into the DSGE framework did not simultaneously take into account liquidity, agent heterogeneity, money, and default risk. Nevertheless, those models are valuable efforts in the development of a plausible explanation of the phenomena observed after the credit crisis. The interplay of liquidity and default justifies the presence of fiat money as the stipulated medium of exchange.
Otherwise, the mere presence of a monetary sector without the possibility of endogenous default or any other friction in equilibrium may become a veil without affecting real trade and, eventually, the final equilibrium allocation. Indeed, liquidity constraints, or their extreme version of cash-in-advance constraints, are the minimal institutional arrangement to capture a fundamental aspect of liquidity and how it interacts with default to affect the real economy. It is also worth noting that renegotiation is a first cousin of default. Renegotiation is often used euphemistically to tell the public that no default has taken place. There is, perhaps, an even more fundamental problem with models with default. Such default can happen at any time, from the interaction of stochastic events with agents’ risk aversion. Thus, there is no sense in which such models can ever settle down into some long-run steady-state distributions of equilibrium variables. Indeed, the underlying problem is even worse. Laws, market structures, and economic institutions are themselves subject to endogenous change as a result of experience, as witness the changes to financial regulation as a consequence of the recent crisis. Most of us find it difficult to countenance a system that does not revert to an equilibrium supported by some set of fixed social, legal, and institutional structures. Yet the reality of default and crises actually means that we, and the macroeconomy, are in the middle of a process. A reading of John Maynard Keynes suggests that he was talking about an economy tending toward equilibrium, not one that actually attained the misleading equilibrium system imposed by John Hicks. If one were to add in this context the amendments of Joseph Schumpeter on innovation, the appropriate picture at best is one of a system that may follow some
adjustment toward a perceived equilibrium, but it will be constantly knocked off course by every innovation4 and political/institutional change, such as Brexit.
22.3 Some Implications of Default as a Process

22.3.1 What Are the Costs of Default?

One rather obvious reason default has not been made a central feature of macroeconomic finance is that it is quite hard to do. Default is a binary condition: either an entity has declared bankruptcy or it has not. It is therefore difficult to fit into the standard models of continuous variables, manipulated and solved via calculus. This problem can be treated by turning it around and focusing not on the occasion of default but on the expectation of repayment (Shubik 1999, chap. 12). This is akin to the inverse of the combination of the probability of default (PD) and expected loss given default (LGD) that play such a central role in finance. If the costs of default were infinite (e.g., death for the debtor and his or her family), no one would ever borrow. If the costs of default were zero, no one would ever repay, and so no one would ever lend. So, given a positive marginal efficiency of investment and intertemporal preferences over consumption, an interior optimum must exist with positive, but not infinite, costs of default and positive borrowing and lending. Another problem of modeling default is that its costs depend on institutional conditions and social norms, which can change over time and can, indeed, vary in response to economic conditions, such as the frequency and severity of prior defaults. There are two main categories of default costs. The first is that the borrower may be required to put the ownership claim to some asset, which may be but need not always be the same asset bought with the borrowed money, in forfeit to the lender, should the borrower not meet his or her contractual repayment obligations. This is known as the collateral for the loan, and loans protected by such collateral are termed secured loans.
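Turning default around into "the expectation of repayment," as suggested above, is simple arithmetic. A minimal numerical sketch (the parameter values are illustrative, not from the chapter):

```python
def expected_repayment_rate(pd_, lgd):
    """Expected fraction of a loan repaid: the complement of the
    expected loss per unit lent, PD * LGD."""
    return 1.0 - pd_ * lgd

# A 2% probability of default with a 40% loss given default implies
# an expected repayment of about 99.2 cents per unit lent.
print(round(expected_repayment_rate(0.02, 0.40), 3))  # 0.992
```

Working with this continuous expectation, rather than with the binary event of bankruptcy itself, is what makes the problem tractable in calculus-based models.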
Normally, the market value of the collateral exceeds the value of the loan, with the difference being the required margin, or down payment, that the borrower has to make to get the initial loan. Geanakoplos (2001), Goodhart et al. (2011), and Shubik and Yao (1990) have focused primarily on this aspect of the cost of default. Naturally, the required margin will be greater the greater is the perceived risk of the collateralized asset falling (volatility) and the greater the deviation between the value of the asset to the lender, should foreclosure occur, and to the borrower, such as with houses during a housing price collapse. Thus, in this aspect of the borrower-lender relationship, the extent of down payment, margin, or leverage available is highly cyclical, with the costs of borrowing (margins, down payments) going up sharply during financial crises and busts and going down during expansionary booms. Since there is no need
for collateral in a world without default, this aspect of the cost of borrowing is omitted from standard DSGE models without financial frictions. Monetary policy aims to stabilize the macroeconomy by lowering (raising) riskless interest rates during depressions (booms), but without a better grasp of the impact on borrowing/lending decisions of the changing terms and conditions of borrowing, which generally fluctuate inversely to interest rates, it is impossible to forecast at all accurately what the net effect on private-sector behavior may be. Particularly after a financial crisis, policies that focus solely on official rates without concern for the spreads and margins facing private-sector borrowers are likely to be inefficient and insufficient. The difficulty of assessing the changing effects of the terms of borrowing is one reason focusing on quantities of lending or monetary growth may often be a better indicator of the effects of monetary policy than looking at the time path of some chosen interest rate. Besides having to give up (to the lender) such collateral as the borrower may have pledged, or as can be seized by the lender, for example, in the case of sovereign default, there are other costs to default. These depend, inter alia, on the form of the bankruptcy laws (see La Porta et al. 2000 for a useful overview). Some such costs can be nonpecuniary, such as debtors' prison, social shame, or loss of the use of an asset (house, car, etc.); some may be pecuniary, such as the present value of future exclusion from financial markets, legal costs, and the reduction in income as the bankrupt has to shift from his or her preferred role and occupation to a less preferred activity. Modeling both the time-varying collateral (margin/down payment) costs and the more direct costs of bankruptcy is an underdeveloped aspect of macroeconomics and one that needs much more attention.
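The cyclicality of margins described above can be sketched with a toy collateral rule. The rule below, in which the required haircut scales with perceived volatility, is our own illustrative assumption, not a formula from the chapter:

```python
def max_secured_loan(collateral_value, sigma, k=2.0):
    """Largest loan extended against collateral when the required haircut
    (down payment as a fraction of collateral value) is set at k multiples
    of the collateral's perceived price volatility sigma."""
    haircut = min(1.0, k * sigma)
    return (1.0 - haircut) * collateral_value

loan_calm   = max_secured_loan(100.0, sigma=0.05)  # about 90 lent per 100 of collateral
loan_crisis = max_secured_loan(100.0, sigma=0.15)  # about 70: margins tighten in the bust
assert loan_crisis < loan_calm  # leverage contracts exactly when borrowers are strained
```

The chapter's point follows immediately: the effective cost and availability of borrowing can tighten even while official interest rates are being cut.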
In much of our own work to date, we have assumed that the direct costs of bankruptcy were fixed in linear proportion to the rate of repayment failure5 (Shubik 1999, chap. 11; Quint and Shubik 2011), but more analytical and empirical work would be useful to try to explore this key element in the model.
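The linear default penalty just described yields an interior repayment choice once utility is concave. A minimal sketch under our own illustrative assumptions (log utility, one period, one loan; none of the names or numbers are from the chapter):

```python
def optimal_repayment(w, L, lam):
    """Repayment rate rho in [0, 1] chosen by a log-utility borrower with
    resources w who owes L and faces a penalty proportional (coefficient
    lam) to the unpaid amount:

        max over rho of  log(w - rho * L) - lam * (1 - rho) * L

    The first-order condition 1 / (w - rho * L) = lam gives the interior
    solution rho* = (w - 1 / lam) / L, clipped to the unit interval:
    a harsher penalty (larger lam) induces more repayment."""
    rho = (w - 1.0 / lam) / L
    return max(0.0, min(1.0, rho))

print(optimal_repayment(w=2.0, L=1.0, lam=1.0))  # 1.0: harsh penalty, full repayment
print(optimal_repayment(w=2.0, L=1.0, lam=0.6))  # roughly 0.33: partial default
print(optimal_repayment(w=2.0, L=1.0, lam=0.4))  # 0.0: penalty too weak, full default
```

The concavity of log utility against the linear penalty is what produces a smooth, interior default margin rather than an all-or-nothing corner.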
22.3.2 DSGE Models and Default

In a number of recent models (Bernanke, Gertler, and Gilchrist 1999; Carlstrom and Fuerst 1997; Christiano, Motto, and Rostagno 2010), default occurs, though in an oversimplified way. It is implicitly assumed that a fraction of entrepreneurs goes bankrupt, and this probability of bankruptcy can be made partially endogenous to the macroeconomic context. There can be both ex ante and ex post default, operating at the firm level. Some of the firms do default in equilibrium, as their return to capital is subject to both idiosyncratic and aggregate risk. However, in aggregate, the bank is assumed to be able to diversify its loan portfolio, so that it repays depositors the riskless rate of interest. Such idiosyncratic risk can affect the firm's equity through the cost channel, since the expected default probability directly affects the interest rate paid by firms. Default in this model is not, however, genuinely endogenous. Banks and firms do not choose explicitly what amount to default on. Instead, the firm chooses its capital demand and the cutoff productivity level below which it will default in order to maximize its
expected profits subject to a bank's participation constraint. Hence, by optimally raising the cutoff level, it subjects itself to a higher probability of default should the realized productivity level turn out to be low. It may, or may not, default in the end, given the realization of the shock. As it turns out, there is no default in equilibrium in the financial sector. Banks charge a default premium to borrowers so that on average they break even. Curdia and Woodford (2010) and Meh and Moran (2010) address default in a completely ad hoc fashion. In the former, impatient households default on their loans every period by an exogenous amount that is known to the banks. In the latter, in each period, a fraction of the (risk-neutral) entrepreneurs and a fraction of the (risk-neutral) bankers exit the economy, and they are replaced by new ones with zero assets. The aforementioned fractions are exogenously given. De Walque, Pierrard, and Rouabah (2010) introduce endogenous default probabilities for both firms and banks.6 Following Goodhart et al. (2006), the borrowers choose the rate of repayment of loans optimally, taking into account the penalties incurred. The penalties can be both nonpecuniary, in terms of reputation cost, and pecuniary, in terms of search costs to obtain new loans. Distortions arise as lenders price such default probability into interest rates and, in line with the previous literature, contribute to the financial accelerator. Their model is then used to assess the role of capital regulations and liquidity injections in economic and financial stability. In response to the recent crisis, in which disruption in financial intermediation generated negative consequences for the real economy, some articles (e.g., Gertler et al. 2010) have shifted focus to impose credit constraints on intermediaries rather than on nonfinancial borrowers.
22.3.3 The Functions of Liquidity

In the absence of default, it is hard to give much meaning to the concept of liquidity, since each agent can borrow, at the given riskless rate of interest, up to a limit given by his or her need to repay with certainty and is presumed not even to try to borrow beyond that limit. So up to that limit, each agent is fully liquid and can buy whatever he or she wants. Up to that limit, no agent is liquidity-constrained. The implicit assumption of mainstream DSGE models is that in equilibrium no agent is liquidity-constrained; that is, no agent is attempting to borrow so much that there is a nonzero probability of default. This also removes any significant distinction between long-term and short-term loans and treats informational services such as accounting and evaluation as free goods. If agents are perceived as capable of defaulting, then they will be able to borrow only at rising interest rates or not at all. If so, their ability to spend will be constrained by their access to income and owned liquid assets. In this latter case, agents are liquidity-constrained. As is surely obvious, in practice the scale of economic activity, the volume of goods and services traded in a period, is endogenous. By contrast, in the model of Lucas and his followers, the scale of economic activity is exogenously specified by the requirement
that each agent sells the whole of his or her endowment in each period. A corollary of the quantity theory of money in a model with default, ceteris paribus, is that increases in trading activity in a good state, due perhaps to higher productivity or lower interest rates, will result in a lower price level, given the same money supply in that state, because of variation in credit risk. Similarly, the volume of trade in asset markets affects prices and has second-order effects on the inflation rate.
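The corollary stated above is quantity-theory arithmetic, M·V = P·T. A toy check with illustrative numbers (ours, not the chapter's):

```python
def price_level(money, velocity, trade_volume):
    """Quantity-theory price level: M * V = P * T  =>  P = M * V / T."""
    return money * velocity / trade_volume

M, V = 100.0, 2.0
p_normal = price_level(M, V, trade_volume=50.0)  # 4.0
p_boom   = price_level(M, V, trade_volume=80.0)  # 2.5
# More trading activity with the same money supply must clear at a lower price level.
assert p_boom < p_normal
```

In the models discussed here, T is endogenous (agents choose how much to trade), so this channel operates, whereas in the Lucas-style setup, with the whole endowment sold each period, it is shut down by construction.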
22.3.4 Nonneutrality of Money

Given the existence of money, government monetary policy (open market operations, that is, changing the stock of bank money) typically has real effects on consumption. Similarly, government transfers of money to agents, no matter how the money is distributed across the population, also necessarily have real effects. These nonneutrality conclusions are contrary to those derived under the "representative agent" paradigm. The explanation is that in a model with default, the outcome is not necessarily Pareto-efficient, because of the distortion caused by trading via money borrowed at positive interest rates, with a wedge between the effective borrowing rate and the riskless rate. In any transactions-based model with heterogeneous agents, positive interest rates generate a margin between selling and buying prices, thus inducing nonneutral effects when monetary policy changes. When the government eases credit (by putting more money into the banks, for example, by open market operations), it facilitates borrowing, reduces interest rates, and increases real activity. In an RBC model with wage/price frictions, that is, the mainstream DSGE models, money is nonneutral in the short run but becomes neutral in the long run as the wage/price frictions dissipate. Financial frictions provide another, and probably rather longer-lived, source of nonneutrality. Financial developments and monetary policy will help to determine time-varying perceptions of the probability of default and hence risk spreads and the liquidity constraints of agents. The present euro crisis can be analyzed in the context of models in which endogenous default is possible. It cannot be adequately analyzed by using models without default. It has been the ubiquity of the latter kind of model that has led central bank governors, such as Jean-Claude Trichet and Mervyn King, to complain that macroeconomics provides little guide to the present weak financial conjuncture.
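The selling/buying wedge at the heart of this nonneutrality argument can be sketched in two lines. Assuming, as our own illustration rather than the chapter's model, that purchases are financed with money borrowed for one period at rate r:

```python
def effective_buy_price(sell_price, r):
    """If purchases must be financed with money borrowed at rate r, the
    buyer's effective price exceeds the seller's receipt by the factor
    (1 + r): the wedge through which monetary policy has real effects."""
    return sell_price * (1.0 + r)

# Cutting the borrowing rate from 5% to 1% narrows the wedge between
# buying and selling prices, changing relative prices and hence allocations.
wedge_tight = effective_buy_price(1.0, 0.05) - 1.0  # about 0.05
wedge_easy  = effective_buy_price(1.0, 0.01) - 1.0  # about 0.01
assert wedge_easy < wedge_tight
```

Because the wedge distorts every monetized transaction, a change in r is a real (allocative) intervention, not merely a nominal one: this is the sense in which money is nonneutral even without sticky prices.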
There is also another deeper sense in which finance is nonneutral. In RBC models, growth is driven by an exogenous (trend) increase in productivity, deriving from innovations, which is disturbed in the short run by a variety of shocks. In models with default, fluctuations in growth will also be driven by the willingness and ability of agents to assume risk. Virtually all projects are risky. So a very risk-averse society is likely to be safe but slow-growing. Levine has documented the positive association between financial intermediation and growth. But the rapid expansion of credit and leverage can put the economy at greater risk, as evidenced since 2000 (Taylor 2009). Society and governments can change the costs of default and the costs and availability of access to borrowing (e.g., via intermediaries) by a variety of regulatory
666 Goodhart, Romanidis, Shubik, and Tsomocos measures. What we should be trying to assess is the optimal structure of (regulatory) control over risk, by such instruments as recovery and resolution programs (RRPs), capital adequacy requirements (CARs), margin requirements, and so on, to give the best balance between growth (more risk-taking) and financial disturbances (less risk-taking). Models without default cannot even begin to address this issue; models with default at their heart can now begin to do so (Goodhart et al. 2013; Tsomocos 2003). Sadly, one notable feature of the exercises to reform and improve financial regulation in recent decades, such as Basel 1, 2, and 3 and the various bankruptcy codes, is how little they have been informed by macroeconomic theory (Goodhart 2011, chap. 16), at least until quite recently. This lacuna was, at least in part, owing to the implicit belief among macroeconomists that such matters had no, or little, macroeconomic effects. In a world without default, such a view would have been correct, but that world does not exist in practice.
22.3.5 The Unification of Macroeconomics, Microeconomics, and Finance

In recent decades, the models and the focus of attention of macroeconomists, microeconomists, and financial theorists have drifted apart. Insofar as macroeconomic models abstracted from default, their sole focus on financial matters was on the determination of the official short-term interest rate, usually via a Taylor reaction function, and thence on the determination of the riskless yield curve, via expectations of the future path of official rates. So such macroeconomists had little interest in financial intermediation, which played no role in their models. In contrast, of course, the probability of default played a major role in finance theory and in the pricing of assets. Even here, however, the existence and incidence of bankruptcy costs were often assumed away, as in the Modigliani-Miller theorem, or their wider impact on the economy, via externalities, was ignored. In a different setting, mathematical microeconomists and game theorists, as noted above, have written extensively on bankruptcy in a general equilibrium or strategic market game setting but with little connection to the macroeconomic literature, although one should regard the work of Diamond and Dybvig (1983), Morris and Shin (2004), and Smith and Shubik (2011) on bank runs as connected with macroeconomic stability. Be that as it may, a greater appreciation, especially among macroeconomists, of the importance of financial frictions in general and of default in particular may reunify these specializations. The separation, often physical, of micro- and macroeconomists in economics departments from finance theorists in business schools, with almost entirely differing agendas and models, has not been beneficial. The greater focus now on financial frictions as a key issue in money-macro analysis will have the side advantage of reunifying these three strands of the subject, which should never have diverged as far as they did.
Also note that the bankruptcy process, like innovation, does not generally involve the destruction of real assets. It involves their reorganization. The only items that are destroyed are credit arrangements. One of the deepest misunderstandings of the public, in general, is the fear of a bankruptcy process that is largely mythical. The public often appears to believe that it is the assets that are destroyed, whereas in fact a well-executed bankruptcy may be the savior of ongoing enterprises.7 The saving of General Motors from bankruptcy in the United States in the global financial crisis, for example, is presented as an act that saved more than a million jobs (including many of the upper management of GM), though it is debatable whether an orderly bankruptcy of GM with a subsequent takeover by a group led by, say, Toyota or Honda might have been better for all concerned except for the continuing GM management.
22.4 Modeling Banks: An Example of How, and How Not, to Undertake Financial Modeling

The failure to focus on financial frictions was partly responsible for a continuing, often egregious failure in most macro work to model the behavior of financial intermediaries, especially banks. If such intermediaries were part of a veil, with no real effect, why bother? Even among those who most emphasized behavioral incentives and optimization, the supply of money was often, and in most textbooks until recently, supposedly controlled by a deterministic and tautological formula, the monetary base multiplier, whereby the stock of money depended on the stock of base money and two ratios, the banks' desired reserve/deposit ratio and the public's desired currency/deposit ratio. Causation supposedly ran from the monetary base, itself set by the central bank, to the broader money supply. This accounting identity provided the analytical framework for the (otherwise) great historical book by Friedman and Schwartz (1963). But even in the decades prior to the 1970s, which had the conditions most appropriate to this formulation, with required cash, and sometimes required liquidity, ratios, the monetary base multiplier was upside down. As everyone, including prominently Milton Friedman, knew in practice, the policy choice variable of (almost all) central banks, almost all the time, was the level of the official short-term interest rate. Given that official interest rate, and other macro conditions (and their risk aversion), banks would decide on their own interest-rate terms and conditions for lending (and for offering interest-bearing deposits), with such decisions in turn often dependent on various controls applied by the authorities.
That determined credit and monetary expansion, with the banks’ and the public’s preferences for reserves and currency then determining the monetary base that the authorities had to create in order to maintain their separately decided policy choice over short-term interest rates.8
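The reversed causality can be illustrated with the multiplier arithmetic itself (a sketch; the ratio values are invented for illustration):

```python
def multiplier(reserve_ratio, currency_ratio):
    # Textbook monetary base multiplier: M = B * (1 + c) / (r + c),
    # where r is the banks' desired reserve/deposit ratio and
    # c is the public's desired currency/deposit ratio.
    return (1 + currency_ratio) / (reserve_ratio + currency_ratio)

def base_required(broad_money, reserve_ratio, currency_ratio):
    # The causality described above, run "upside down": given the policy
    # rate, banks determine broad money M, and the central bank must then
    # supply whatever base B validates the two desired ratios.
    return broad_money * (reserve_ratio + currency_ratio) / (1 + currency_ratio)

m = multiplier(0.1, 0.1)              # 1.1 / 0.2 = 5.5
b = base_required(1100.0, 0.1, 0.1)   # 1100 * 0.2 / 1.1 = 200.0
```

The identity is the same in both directions; only the direction of determination differs, which is the chapter's point.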
The concept that individual commercial banks were cash- or liquidity-constrained became even harder to sustain after the 1970s. The emergence of the Eurodollar market in the late 1960s and early 1970s and its extension into an international wholesale market for funds meant that all creditworthy banks, meaning most large banks most of the time, could raise large amounts of additional funding whenever they wanted on standard terms in such markets; that is, the elasticity of supply of such wholesale funds to any one bank was high, and banks were almost price takers in wholesale markets. Funding liquidity was replacing market (asset) liquidity. At least prior to the 1970s, when a bank treasurer was contemplating credit expansion, he had to consider how such loans might be funded. Once the wholesale markets got going, all the treasurer needed to compare was the expected risk-adjusted rate of return on credit expansion with the marginal interest rate on wholesale markets. Prior to the 1970s, the money stock (broad money) and bank lending marched in lockstep together in most countries, whereas after the 1970s, they increasingly diverged, with credit expansion growing far faster than broad money (Schularick and Taylor 2012). Bank deposits had increasingly been insured after the Great Depression of 1929–1933 in order to prevent runs; the new wholesale funding of credit expansion, via money market funds, repos, and so on, was not, but no one paid that much attention, since the creditworthiness of bank borrowers in such markets was meant to be anchored by their adherence to the Basel CARs. Nowadays, with quantitative easing (QE) and the payment of interest on bank reserves at the central bank, the liquidity requirements of banks are satiated (Greenwood et al. 2016).
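The treasurer's post-1970s comparison can be sketched as a decision rule (hypothetical rule and numbers, for illustration only; the chapter does not formalize it this way):

```python
def expand_credit(lending_rate, default_prob, loss_given_default, wholesale_rate):
    # Once wholesale funding is freely available, the only comparison that
    # matters is the expected risk-adjusted return on the marginal loan
    # versus the marginal cost of wholesale funds.
    expected_return = (1 - default_prob) * lending_rate \
        - default_prob * loss_given_default
    return expected_return > wholesale_rate

# In calm times the marginal loan is worth funding; with higher perceived
# default risk the very same loan is not.
calm = expand_credit(0.06, 0.02, 0.60, 0.03)      # True
stressed = expand_credit(0.06, 0.08, 0.60, 0.03)  # False
```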
In this context, the main constraint on bank expansion is not the availability of cash reserves but the availability of capital and the profitability of banking business. So where and how is this perfectly well-known history of banking and financial developments reflected in a modern theory of banking and money (and near-money) supply determination? The blunt answer is that, by and large, it is not. One can, indeed, say that there is at present no comprehensive theoretical analysis of banking worthy of the name. This has unfortunate consequences. At the start of the unconventional monetary expansions (QE, Credit Easing [CE], Long-Term Refinancing Operations [LTROs], etc.), after the zero lower bound on interest rates was hit, it was widely hoped that expansion of the monetary base would lead to some large-scale, if not fully commensurate, expansion in broad money and bank lending. It did not. In most key developed countries, the monetary base quadrupled; bank lending and broad money barely budged. Besides some general arm waving about a lack of demand for credit from the private sector, there has been a dearth of postmortem analyses of why the banks behaved as they did. Then, in pursuit of the closure of the stable doors from which the horses had already bolted, the authorities, especially in Europe, mandated the early achievement of sharply higher ratios of equity to (risk-weighted) assets. But what factors would determine bank willingness to raise additional equity? Admati and Hellwig (Admati et al. 2011) have shown that, especially where there is debt overhang and a perceived risk of insolvency, the benefits of additional equity go largely to the creditors, not to existing shareholders. So existing shareholders, and bank managers responsive to their shareholders, will
eschew all methods of reinforcing equity, by new issues or profit retention, as far as possible. That implies that raising required equity ratios will be achieved by deleveraging. Of course, each country's monetary authorities will try to ensure that such deleveraging will not take the form of reductions in bank lending to the private sector by their own banks in their own countries. So cross-border bank lending gets decimated (actually, decimation was only one out of ten; this is in many cases closer to nine out of ten), and so-called noncore assets are forcibly sold. This has led to the Balkanization and fragmentation of the European banking system. Meanwhile, most analytic studies of the effects of regulation on banking demonstrated that in equilibrium, a banking system with a higher equity ratio would not require a much higher interest charge on bank lending and would be much safer. All of this is perfectly true. The problem has been that the present banking system is not in an equilibrium state but is in a transitional process under conditions of severe market stress. It has long been a key theme of ours that the focus on equilibrium conditions in mainstream macro is insufficient and even at times potentially misleading. What needs to be done instead is to model financial developments as a continuing process. Rarely has this argument been so clearly manifested as in the case of banking in the aftermath of the 2008 debacle. In a series of excellent symposia reported in the Journal of Economic Perspectives, there has been an attempt to look back at the debacle of 2008. In particular, Gorton (2010) and Gorton and Metrick (2012) trace the explosion of leveraging by the sophisticated utilization of instruments such as the repo. Note that the utilization of the repo is recorded by Walter Bagehot in an appendix to his masterful work, Lombard Street (1873).
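The Admati-Hellwig debt-overhang point discussed above can be seen in a toy two-state example (all numbers are invented for illustration):

```python
def claim_values(asset_states, probs, debt_face):
    # Expected values of the creditors' and shareholders' claims when the
    # firm's assets pay off state by state and default occurs whenever
    # assets fall short of the face value of debt.
    creditors = sum(p * min(a, debt_face) for p, a in zip(probs, asset_states))
    equity = sum(p * max(a - debt_face, 0.0) for p, a in zip(probs, asset_states))
    return creditors, equity

probs, debt = [0.5, 0.5], 100.0
before = claim_values([120.0, 60.0], probs, debt)  # (80.0, 10.0)
# Inject 20 of new equity, parked in a safe asset, raising both states by 20.
after = claim_values([140.0, 80.0], probs, debt)   # (90.0, 20.0)
# Creditors gain 10, while total equity value rises by only 10 against a
# 20 cash injection; a fairly priced new claim absorbs 20, leaving the
# pre-existing shareholders 10 worse off -- exactly the creditors' gain.
```

Hence shareholders (and managers responsive to them) prefer deleveraging to new issues, as the text argues.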
Those who study finance deal with the microeconomic realities of the arbitrage among markets, laws, customs, accounting conventions, and cultures. They also deal with the high-tech micro-microeconomic problems of mechanism design (see, as a nice example, Bouchaud, Caccioli, and Farmer 2012) and with sophisticated studies in stochastic processes, which seek out even minor fleeting correlations in stock or foreign currency trading in some instances in order to trigger computerized trading. This work rarely interacts with either mainstream micro- or macroeconomics, yet it is relevant to both in the sense that, in its devotion to understanding, and profiting from, trading, it attempts to capture, at least in the short run, actual financial dynamics.
22.4.1 Role and Rationale of the Financial Sector in General Equilibrium Models

In this section, we consider how the role and rationale of the financial sector are addressed in some general equilibrium models. In Kiyotaki and Moore (1997), Fostel and Geanakoplos (2008), and Iacoviello and Neri (2010), the financial sector is not modeled. The first model is a pure real model with two
types of agents (credit-constrained and non-credit-constrained), and the non-credit-constrained agents directly lend to the credit-constrained ones. Thus, in this model, there is no financial sector. The second model subsumes the role of the financial sector in the activities of agents, whereas in the third, the banking sector is also not present. Likewise, in Bianchi and Mendoza (2010), there is no fully fledged financial sector. The neo-Keynesian models (e.g., Bernanke, Gertler, and Gilchrist 1999; Curdia and Woodford 2010) introduce a rudimentary financial sector to intermediate between borrowers and savers, introducing a spread between deposit and lending rates. Moreover, state-contingent contracts are introduced directly without intermediation (Curdia and Woodford) or through intermediation (Bernanke, Gertler, and Gilchrist) between the households in order to insure one another against both aggregate risk and the idiosyncratic risk associated with household types. Banks break even in equilibrium by charging an insurance premium to entrepreneurs. As a result, aggregate risk is fully absorbed by the entrepreneurial sector. Banks' only role is to intermediate funds in a rather mechanical fashion. For example, in Carlstrom and Fuerst (1997), the entrepreneurs are involved in producing the investment good, receiving their external financing from households via intermediaries that are referred to as capital mutual funds. The production of capital is subject to an idiosyncratic productivity shock that incorporates innovation, and capital goods are purchased at the end of the period with the assistance of intermediaries. Notable exceptions to this family of models are Christiano, Motto, and Rostagno (2010) and Meh and Moran (2010). In Christiano, Motto, and Rostagno, there is a fully integrated financial sector and competitive banks.
They receive deposits from the households and make loans to the entrepreneurs, as in Bernanke, Gertler, and Gilchrist (1999), and working capital loans to the intermediate-good firms and other banks, as in Christiano, Eichenbaum, and Evans (2005). Banks bundle transactions services with their deposit liabilities and produce transactions services with capital, labor, and bank reserves. They hold a minimum of cash reserves against household deposits, consisting of base money and central bank liquidity injections. The modeling of the banking sector becomes nontrivial, partially due to the production function for bank deposits or liquidity creation. In particular, capital, labor, and excess reserves are utilized. This gives rise to a liquidity constraint for banks and hence allows certain types of shocks, such as funding liquidity shocks and shifts in demand for excess reserves. In addition, there is also a central bank that sets the nominal interest rate according to a generalized Taylor rule. Meh and Moran (2010) extend the basic framework introduced by Bernanke, Gertler, and Gilchrist (1999). Loans are modeled in a similar fashion. However, an agency problem is posited between the bank depositors and the banking sector. Monitoring is costly, and depositors cannot monitor the quality of entrepreneurs' projects. Since losses are mostly suffered by depositors, banks lack the incentive to monitor entrepreneurs. Therefore, investors require banks to invest their own capital as well. As a result, banks that are better capitalized attract more loanable funds. Finally, in Goodhart et al. (2012), there is also a fully integrated banking system with a nontrivial role and, furthermore, a shadow banking system that during bad times triggers fire sales, which further worsen financial crises by causing further reductions in asset prices. Commercial banks are
risk-averse, take deposits, make mortgage loans, make short-term loans to households, and make repo loans to the shadow banks. They also receive money from the central bank through a discount window. Moreover, they securitize mortgage loans and sell them, and they are endowed with capital. On the other hand, shadow banks are less risk-averse, are endowed with capital, receive repo loans to buy securitized assets, and use these assets as collateral for the repo loans. All of them are profit maximizers.
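The generalized Taylor rule mentioned above takes, in its simplest textbook form, the following shape (the coefficients are the conventional illustrative 0.5 values, not those of any model cited here):

```python
def taylor_rate(neutral_real_rate, inflation, inflation_target, output_gap,
                a_pi=0.5, a_y=0.5):
    # Taylor (1993)-style reaction function for the nominal policy rate:
    # respond to the inflation gap and the output gap around a neutral rate.
    return (neutral_real_rate + inflation
            + a_pi * (inflation - inflation_target)
            + a_y * output_gap)

on_target = taylor_rate(0.02, 0.02, 0.02, 0.0)     # neutral: 0.04
overheating = taylor_rate(0.02, 0.04, 0.02, 0.01)  # tighter: 0.075
```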
22.5 Can Such a Complex System Ever Be Adequately Modeled within a General Equilibrium Framework?

Many (mathematical) economists, approaching macrodynamics from an (industrial) microeconomics viewpoint, have worried about the large number of degrees of freedom that seem to appear as soon as one tries to introduce default and money into a general equilibrium structure. As is often the case, a paradox is no paradox when examined from the appropriate stance. Following any attempt to describe the general equilibrium structure as a playable game with a monetary economy, the immediate critique is not only that it is unduly special but also that the construction could have been done in an astronomically large number of ways. We provide a framework, however, whereby we use only those institutions minimally necessary to define the economic process and allow policy experiments to be conducted. Therefore, our analysis is robust to alternative institutional arrangements, since our emphasis is not so much on the specific institutions that we introduce but rather on the pecuniary and nonpecuniary externalities that we try to rectify, given the frictions that are present in our models. For example, the cash-in-advance formulation is the simplest possible way to capture the effects of liquidity in a transactions-based model. Alternatively, we could utilize more complicated liquidity requirements, as in Espinoza and Tsomocos (2010) and Martinez and Tsomocos (2016), where not only fiat money but also commodities can be partly used for transactions purposes. So this criticism is completely correct yet completely misses the positive message, which is the need for a link between static economics on one side and economic dynamics and institutional evolution on the other.
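A minimal playable version of the cash-in-advance mechanism referred to above is the Shapley-Shubik trading post, where each good's price clears as total money bid over total quantity offered (a bare-bones sketch, not the authors' full model):

```python
def trading_post_prices(money_bids, quantities_offered):
    # Each good g trades at p_g = (total money bid for g) / (total g offered).
    # The cash-in-advance constraint is implicit: bids must be money in hand.
    return [b / q for b, q in zip(money_bids, quantities_offered)]

prices = trading_post_prices([100.0, 50.0], [20.0, 25.0])  # [5.0, 2.0]
# With bids (money) held fixed, offering more of a good lowers its price,
# the same comparative static as the quantity-theory corollary earlier.
```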
The plethora of degrees of freedom is there so that it is feasible to construct hosts of ad hoc micro-macroeconomic models that fit special niches and are able to adjust with varying flexibility to the micro- and macrodynamics to which they are applied. In other words, given each particular friction and its associated externality, we are able to address it in multiple ways without necessarily modifying our results. We consider the fact that we offer many alternatives for addressing a particular externality a strength rather than a weakness of our approach. One should be aware of all the possible alternatives and subsequently select the one that fits current objectives best. In attempting to describe the general equilibrium structure as a playable game with a monetary economy, the immediate critique of anyone who constructs a playable strategic market game is
not merely that it is so special but that the construction could have been done in an astronomically large number of ways. One may argue that general equilibrium analysis is not useful because in such large-scale models, anything can happen when initial conditions are changed. We consider this attitude a vindication of our analysis. We believe that until one knows how all the effects depend on the behavior of the agents, one cannot aim to comprehend fully any policy change. In a modern economy, one of the most important markets is the market for firms, and firms are institutions with both an aggregation of assets and an organization. A good industry specialist trying to forecast the dynamics of the firm and its viability will investigate real and financial assets and liabilities, as well as, inter alia, management and software. A checklist includes the following:
1. Labor
2. Land
3. Buildings
4. Machinery
5. Raw materials
6. Goods in process
7. Office supplies
8. Cash and receivables
9. Short-term debt
10. Long-term debt
11. Financial assets
12. Software
13. Off-balance-sheet assets such as derivatives
14. Management

Each of these items may have a time profile attached to it, involving properties such as the ages and length of service of management and the age structure of machinery. Cautious accounting calls for leaving the valuation of management and organization off the balance sheet, but one of the key tasks of a good analyst is to attach estimates to the worth of this off-balance-sheet asset.9 In general, the larger institutions are all run by committees of fiduciaries using other people's money (OPM) and are buried in a Byzantine mixture of tax laws, regulation, and accounting conventions that place considerable constraints on the manifestation of economic behavior. This observation is not a counsel of despair. It merely suggests that the diversity of time lags and organizational constraints (which are the daily concern of a good industrial organization economist and any competent financial analyst doing due diligence) does not disappear into the maw of one aggregate single-dimension dynamic programming model. Economic dynamics are highly dependent on models aggregated from the bottom up. This is not impossible, but it is an arduous task. A government in general and a central bank in particular require evidence on the macroeconomy far beyond the analogies of RBC. The challenges and the dangers of bridging the gap between theory and practice were eloquently noted in Francis Edgeworth's Oxford inaugural address in 1891:
It is worthwhile to consider why the path of applied economics is so slippery; and how it is possible to combine an enthusiastic admiration of theory with the coldest hesitation in practice. The explanation may be partially given in the words of a distinguished logician who has well and quaintly said, that if a malign spirit sought to annihilate the whole fabric of useful knowledge with the least effort and change, it would by no means be necessary that he should abrogate the laws of nature. The links of the chain of causation need not be corroded. Like effects shall still follow like causes; only like causes shall no longer occur in collocation. Every case is to be singular; every species, like the fabled Phoenix, to be unique. Now most of our practical problems have this character of singularity; every burning question is a Phoenix in the sense of being sui generis. (Edgeworth 1891, 630)
In spite of the many successful applications of mathematics to equilibrium economics, the development of economics as a science has a considerable way to go. In particular, as is well known, there is no generally acceptable theory of dynamics. Yet the whole basis of macroeconomics requires economic dynamics. In the broad sweep of the development of both pure and applied economics, techniques come and go, and fashions change. This warning by Edgeworth is not an admission of defeat but a call for us to appreciate that the use of dynamics requires an understanding of structure and a realization that control in a loosely coupled dynamic system is not the same as prediction but is the intelligent placement of constraints to prevent unpredictable dynamics from getting out of control. In other words, the manifold degrees of freedom in a modern economy require an intelligent appreciation of the missing institutional and/or political constraints that anchor the political economy of modern societal organizations. In addition, these institutions permit comparative institutional dynamics and remove the indeterminacy that follows from the classical W-A-D (Walras-Arrow-Debreu) model and its noninstitutional structure. To sum up, we can build a political economy that can be flexible enough to survive robustness checks and yet also be analytically tractable. Recently, one of this chapter's authors, with colleagues, had been considering the problems involved in incorporating innovation into a financial control structure. Two out of four projected papers were completed (Shubik and Sudderth 2011, 2012). Even in highly simplified low-dimensional models, a clean proof of the convergence of a low-dimensional dynamic programming model is difficult to obtain, and the length of time to equilibrium may be of any duration; moreover, some simulation results show extreme instability with the introduction of a banking system (Miller and Shubik 1994).
So in comparison with the standard DSGE models, models that face up to the reality of default will surely be dynamic and will generally be stochastic, but they can only approach generality and do not converge to an equilibrium. Going down this road has some severe disadvantages. For those who find abandoning both the G and the E features of their familiar DSGE models too much of a stretch, we would suggest that the best way forward is that proposed by Curdia and Woodford (2010), which is to add an ad hoc, exogenous, but time-varying credit-risk spread to the official interest rate and then proceed as before. It is a simple addition and incorporates some of the effects of financial frictions (in an ad hoc manner) into the model.
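The Curdia-Woodford shortcut amounts to little more than the following (the spread path here is invented for illustration):

```python
def borrowing_rate(policy_rate, credit_spread):
    # An ad hoc, exogenous, time-varying credit-risk spread layered on top
    # of the official rate; everything else in the DSGE model proceeds
    # as before.
    return policy_rate + credit_spread

spread_path = [0.02, 0.035, 0.05]  # hypothetical widening during a crisis
rates = [borrowing_rate(0.01, s) for s in spread_path]
```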
But it does not incorporate all such effects, being unable, for example, to account for the changing terms and conditions of borrowing, notably the cyclical path of collateral, margins, and leverage emphasized by Fostel and Geanakoplos (2008). It also fails to account for the sometimes self-amplifying interactions between the financial system and the rest of the economy; thus, weakness in, say, the housing sector can weaken the banks, and their fragility can impinge back onto the economy (and onto government finance) and so on. Finally, taking financial frictions as exogenously given provides no help or guidance whatsoever for the monetary authorities in their role as guardians of financial stability. By and large, economic theory has played little or no role to date in informing the design of financial regulation (Goodhart 2011). One of the key functions that Tsomocos and Goodhart have seen for such models is to provide a top-down approach for assessing stresses in financial markets and in helping to design regulatory reform. Our hope is that central banks will come to use such models on a regular basis, besides or instead of models that assume away default, financial intermediation, and money, just those variables that should lie at the center of their concerns. One of the consequences of the recent financial crisis was the rehabilitation of the reputation of Hyman Minsky. He had great insight into the workings of the financial system, but he never formalized these into a model that could be developed as a basis for analysis and for regulatory reform. Our own aim is not to revert only to a narrative and historical approach toward finance but to extend rigorous and mathematical modeling to encompass a wider set of real and behavioral phenomena.
22.6 Where Do We Go from Here?

We start from the premise that financial systems and markets are not static (with reversion to some constant equilibrium) but are evolving processes, subject to institutional control mechanisms that are themselves subject to sociopolitical development. The concept of mathematical institutional economics as an appropriate way to develop the dynamics of a financially controlled economy is consistent with Edgeworth's warning, with precise question formulation and an empirically based construction of ad hoc models dealing with the important specific goals of government regulation of an economy with a high level of functionally useful decentralized market activity. The flexible, but logically consistent and complete, process-oriented models of strategic market games translate, with care, into stochastic process models incorporating default and boundary constraints. In special simple examples, it is possible to obtain interior solutions and, even more rarely, analytical solutions, but in general, even the parsimonious models require approximation or simulation methods to trace the dynamics. But the fact that there is no royal road does not mean that there is no road.
Notes

1. His coauthors dedicate this paper to the memory of Martin Shubik (24/03/1926–22/08/2018), a good man, a dedicated teacher, and a great political economist.
2. Unexpected inflation results in monetization of debt obligations, but it is not equivalent to default, since each economic agent optimally chooses his own repayment rates.
3. After those occasions when (some) banks fail in the second function and hence amass nonperforming loans and then fail (or approach failure), there is usually a call to split these two functions into a narrow bank that provides payments services on retail deposits against safe (government) assets on the one hand and a lending institution financed by some other (nonpayment) set of liabilities on the other. The general idea is to increase safety, even at the expense of efficiency. The arguments, pro and con, are perennial and outside the scope of this chapter.
4. Friedrich Hayek, Alfred Marshall with the "trees and forest" analogy, and Smith (see Shubik and Eric Smith 2016) offer the same argument.
5. The parameter may be regarded as an approximation for the marginal disutility of default. A more general functional form would add little to the qualitative understanding but is mathematically considerably more complex.
6. Martinez and Tsomocos (2016) produce a related model that studies the interplay between liquidity and default.
7. The renegotiation of sovereign debt can be a euphemism for a less transparent and efficient process than a clean reorganization by bankruptcy.
8. Technically, when there are no constraints, control of the quantity of money and control of the rate of interest by the central bank give the same results, as they are dual variables. This is not so when constraints are present.
9.
Given this listing, a little contemplation of the description in these terms of a bank, a life insurance company, a family restaurant, an automobile rental firm, a large oil company, a hedge fund, or a funeral parlor immediately calls for a taxonomy of dynamic models at least as diverse as an animal kingdom stretching from whales to ants or mites. The sizes in terms of both personnel and assets, the flexibility of the institutions, and their complexity differ so considerably that it is unlikely that "one shoe fits all" when one is trying to develop economic dynamics. This is not counter to general economic theory. It merely places two sets of empirical constraints on it. It implicitly suggests that understanding the dynamics of a specific firm in the steel industry requires that one know considerable physical and financial facts about both the firm and its industry. Yet that alone is not sufficient to describe the dynamics.
References

Admati, A. R., P. M. DeMarzo, M. F. Hellwig, and P. C. Pfleiderer. 2011. "Fallacies, Irrelevant Facts, and Myths in the Discussion of Capital Regulation: Why Bank Equity Is Not Expensive." Stanford Graduate School of Business Working Paper 2065, 2010/42, October.
Bagehot, W. 1873. Lombard Street: A Description of the Money Market. London: Henry S. King.
Bernanke, B. S., M. Gertler, and S. Gilchrist. 1999. "The Financial Accelerator in a Quantitative Business Cycle Framework." Handbook of Macroeconomics 1: 1341–1393.

676 Goodhart, Romanidis, Shubik, and Tsomocos

Bianchi, J., and E. G. Mendoza. 2010. "Overborrowing, Financial Crises and 'Macro-Prudential' Taxes." NBER technical report.
Black, F. 1970. "Banking and Interest Rates in a World without Money." Journal of Bank Research 1, no. 9: 8–20.
Black, F. 2009. Business Cycles and Equilibrium. New York: John Wiley.
Bouchaud, J.-P., F. Caccioli, and D. Farmer. 2012. "Impact-Adjusted Valuation and the Criticality of Leverage." Risk 25, no. 12: 74.
Carlstrom, C. T., and T. S. Fuerst. 1997. "Agency Costs, Net Worth, and Business Fluctuations: A Computable General Equilibrium Analysis." American Economic Review 87, no. 5: 893–910.
Christiano, L. J., M. Eichenbaum, and C. L. Evans. 2005. "Nominal Rigidities and the Dynamic Effects of a Shock to Monetary Policy." Journal of Political Economy 113, no. 1: 1–45.
Christiano, L. J., R. Motto, and M. Rostagno. 2010. "Financial Factors in Economic Fluctuations." European Central Bank Working Paper 1192.
Curdia, V., and M. Woodford. 2010. "Credit Spreads and Monetary Policy." Journal of Money, Credit and Banking 42, no. s1: 3–35.
Debreu, G. 1959. Theory of Value: An Axiomatic Analysis of Economic Equilibrium. New Haven: Yale University Press.
De Walque, G., O. Pierrard, and A. Rouabah. 2010. "Financial (In)Stability, Supervision and Liquidity Injections: A Dynamic General Equilibrium Approach." Economic Journal 120, no. 549: 1234–1261.
Diamond, D. W., and P. H. Dybvig. 1983. "Bank Runs, Deposit Insurance, and Liquidity." Journal of Political Economy 91, no. 3: 401–419.
Dubey, P., and J. Geanakoplos. 1992. "The Value of Money in a Finite Horizon Economy: A Role for Banks." In Economic Analysis of Markets and Games, 407–444. Cambridge: MIT Press.
Dubey, P., and J. Geanakoplos. 2003. "Monetary Equilibrium with Missing Markets." Journal of Mathematical Economics 39, no. 5: 585–618.
Dubey, P., J. Geanakoplos, and M. Shubik. 2005. "Default and Punishment in General Equilibrium." Econometrica 73, no. 1: 1–37.
Edgeworth, F. Y. 1891. "An Introductory Lecture in Political Economy." Economic Journal 1: 625–634.
Espinoza, R., and D. P. Tsomocos. 2010. "Liquidity and Asset Prices in Monetary Equilibrium." Economic Theory 59, no. 2: 355–375.
Fostel, A., and J. Geanakoplos. 2008. "Leverage Cycles and the Anxious Economy." American Economic Review 98, no. 4: 1211–1244.
Friedman, M., and A. J. Schwartz. 1963. A Monetary History of the United States, 1867–1960. Princeton: Princeton University Press.
Geanakoplos, J. 2001. "Liquidity, Default and Crashes: Endogenous Contracts in General Equilibrium." Cowles Foundation Discussion Paper 1316, Yale University.
Geanakoplos, J. 2003. "Liquidity, Default and Crashes: Endogenous Contracts in General Equilibrium." In Advances in Economics and Econometrics: Theory and Applications, Eighth World Congress, Vol. 2, Econometric Society Monographs, 170–205.
Geanakoplos, J. 2010. "The Leverage Cycle." In NBER Macroeconomics Annual 2009, Vol. 24, 1–65. Chicago: University of Chicago Press.
Geanakoplos, J., I. Karatzas, M. Shubik, and W. Sudderth. 2000. "A Strategic Market Game with Active Bankruptcy." Journal of Mathematical Economics 34, no. 3: 359–396.
Geanakoplos, J., and H. Polemarchakis. 1986. "Existence, Regularity, and Constrained Suboptimality of Competitive Allocations When the Asset Market Is Incomplete." Uncertainty, Information and Communication: Essays in Honor of K. J. Arrow 3: 65–96.

Macromodeling, Default, and Money 677

Geanakoplos, J., W. Zame, et al. 1997. "Collateralized Security Markets." Unpublished working paper.
Geanakoplos, J., and W. Zame. 2010. "Collateral Equilibrium." Working paper, Yale University.
Gertler, M., N. Kiyotaki, et al. 2010. "Financial Intermediation and Credit Policy in Business Cycle Analysis." Handbook of Monetary Economics 3, no. 3: 547–599.
Goodhart, C. 2011. The Basel Committee on Banking Supervision: A History of the Early Years 1974–1997. Cambridge: Cambridge University Press.
Goodhart, C. A., A. K. Kashyap, D. P. Tsomocos, and A. P. Vardoulakis. 2012. "Financial Regulation in General Equilibrium." NBER technical report.
Goodhart, C. A., A. K. Kashyap, D. P. Tsomocos, A. P. Vardoulakis, et al. 2013. "An Integrated Framework for Analyzing Multiple Financial Regulations." International Journal of Central Banking 9, no. 1: 109–143.
Goodhart, C. A., P. Sunirand, and D. P. Tsomocos. 2006. "A Model to Analyse Financial Fragility." Economic Theory 27, no. 1: 107–142.
Goodhart, C. A., and D. P. Tsomocos. 2011. "The Role of Default in Macroeconomics." Institute for Monetary and Economic Studies technical report, Bank of Japan.
Goodhart, C. A., D. P. Tsomocos, and A. P. Vardoulakis. 2011. "Modeling a Housing and Mortgage Crisis." Central Banking, Analysis, and Economic Policies Book Series 15: 215–253.
Gorton, G. B. 2010. Slapped by the Invisible Hand: The Panic of 2007. Oxford: Oxford University Press.
Gorton, G., and A. Metrick. 2012. "Securitized Banking and the Run on Repo." Journal of Financial Economics 104, no. 3: 425–451.
Greenwood, R., S. G. Hanson, and J. C. Stein. 2016. "The Federal Reserve's Balance Sheet as a Financial-Stability Tool." In Designing Resilient Monetary Policy Frameworks for the Future. Jackson Hole Symposium: Federal Reserve Bank of Kansas City.
Hahn, F. H. 1965. "On Some Problems of Proving the Existence of an Equilibrium in a Monetary Economy." In The Theory of Interest Rates, edited by F. Hahn and F. Brechling, 126–135. London: Palgrave Macmillan.
Hellwig, M. F. 1981. "Bankruptcy, Limited Liability, and the Modigliani-Miller Theorem." American Economic Review 71, no. 1: 155–170.
Huber, J., M. Shubik, and S. Sunder. 2011. "Default Penalty as a Disciplinary and Selection Mechanism in Presence of Multiple Equilibria." Cowles Foundation Discussion Paper 1730, February.
Iacoviello, M. 2005. "House Prices, Borrowing Constraints, and Monetary Policy in the Business Cycle." American Economic Review 95, no. 3: 739–764.
Iacoviello, M., and S. Neri. 2010. "Housing Market Spillovers: Evidence from an Estimated DSGE Model." American Economic Journal: Macroeconomics 2, no. 2: 125–164.
Kiyotaki, N., and J. Moore. 1997. "Credit Cycles." Journal of Political Economy 105, no. 2: 211–248.
La Porta, R., F. Lopez-de-Silanes, A. Shleifer, and R. Vishny. 2000. "Investor Protection and Corporate Governance." Journal of Financial Economics 58, no. 1: 3–27.
Leao, E. R., and P. R. Leao. 2006. "Technological Innovations and the Interest Rate." Journal of Economics 89, no. 2: 129–163.
Leao, E. R., and P. R. Leao. 2007. "Modelling the Central Bank Repo Rate in a Dynamic General Equilibrium Framework." Economic Modelling 24, no. 4: 571–610.
Levine, R. 2005. "Finance and Growth: Theory and Evidence." Handbook of Economic Growth 1: 865–934.
Lin, L., D. P. Tsomocos, and A. P. Vardoulakis. 2016. "On Default and Uniqueness of Monetary Equilibria." Economic Theory 62, no. 1: 245–264.
Martinez, J. F., and D. P. Tsomocos. 2016. "Liquidity and Default in an Exchange Economy." Journal of Financial Stability 35: 192–214.
Meh, C. A., and K. Moran. 2010. "The Role of Bank Capital in the Propagation of Shocks." Journal of Economic Dynamics and Control 34, no. 3: 555–576.
Miller, H. H., and M. Shubik. 1994. "Some Dynamics of a Strategic Market Game with a Large Number of Agents." Journal of Economics 60, no. 1: 1–28.
Morris, S., and H. S. Shin. 2004. "Liquidity Black Holes." Review of Finance 8, no. 1: 1–18.
Quint, T., and M. Shubik. 2004. "Gold, Fiat and Credit: An Elementary Discussion of Commodity Money, Fiat Money and Credit." Cowles Foundation Discussion Paper 1460, May.
Quint, T., and M. Shubik. 2011. The Demonetization of Gold: Transactions and the Change in Control. Cowles Foundation, Yale University.
Quint, T., and M. Shubik. 2013. Barley, Gold and Fiat. New Haven: Yale University Press.
Schularick, M., and A. M. Taylor. 2012. "Credit Booms Gone Bust: Monetary Policy, Leverage Cycles, and Financial Crises, 1870–2008." American Economic Review 102, no. 2: 1029–1061.
Shapley, L., and M. Shubik. 1977. "Trade Using One Commodity as a Means of Payment." Journal of Political Economy 85, no. 5: 937–968.
Shubik, M. 1973. "Commodity Money, Oligopoly, Credit and Bankruptcy in a General Equilibrium Model." Economic Inquiry 11, no. 1: 24–38.
Shubik, M. 1999. The Theory of Money and Financial Institutions, Vol. 1. Cambridge: MIT Press.
Shubik, M., et al. 1999. Political Economy, Oligopoly and Experimental Games. Cheltenham, UK: Edward Elgar.
Shubik, M., and E. Smith. 2016. The Guidance of an Enterprise Economy. Cambridge: MIT Press.
Shubik, M., and W. D. Sudderth. 2011. "Cost Innovation: Schumpeter and Equilibrium, Part 1: Robinson Crusoe." Cowles Foundation Discussion Paper 1881, August.
Shubik, M., and W. D. Sudderth. 2012. "Cost Innovation: Schumpeter and Equilibrium, Part 2: Innovation and the Money Supply." Cowles Foundation Discussion Paper 1881, December.
Shubik, M., and D. P. Tsomocos. 1992. "A Strategic Market Game with a Mutual Bank with Fractional Reserves and Redemption in Gold." Journal of Economics 55, no. 2: 123–150.
Shubik, M., and C. Wilson. 1977. "The Optimal Bankruptcy Rule in a Trading Economy Using Fiat Money." Journal of Economics 37, no. 3: 337–354.
Shubik, M., and S. Yao. 1990. "The Transactions Cost of Money (A Strategic Market Game Analysis)." Mathematical Social Sciences 20, no. 2: 99–114.
Smith, D. E., and M. Shubik. 2011. "Endogenizing the Provision of Money: Costs of Commodity and Fiat Monies in Relation to the Value of Trade." Journal of Mathematical Economics 47: 508–530.
Stokey, N., and R. Lucas Jr. 1989. Recursive Methods in Economic Dynamics. Cambridge: Harvard University Press.
Taylor, J. B. 2009. Getting Off Track. Stanford: Hoover Institution Press.
Tsomocos, D. P. 2003. "Equilibrium Analysis, Banking and Financial Instability." Journal of Mathematical Economics 39, no. 5: 619–655.
Woodford, M. 2003. Interest and Prices. Princeton: Princeton University Press.
Zame, W. R. 1993. "Efficiency and the Role of Default When Security Markets Are Incomplete." American Economic Review 83, no. 5: 1142–1164.
chapter 23

Model Uncertainty in Macroeconomics

On the Implications of Financial Frictions

Michael Binder, Philipp Lieberknecht, Jorge Quintana, and Volker Wieland
23.1 Introduction

Quantitative macroeconomic models play an important role in informing policymakers at central banks and other institutions about the consequences of monetary, fiscal, and macroprudential policies. These policies in turn influence the decision-making of households and firms, the functioning of economies, the macroeconomic outcomes, and the economic welfare inherent in these outcomes. Macroeconomic modelers have, however, been criticized for failing to predict the Great Recession of 2008 and 2009, or at least for failing to provide adequate warning that global financial disruptions could trigger such a massive contraction. For example, Buiter (2009) and Krugman (2009) have questioned the usefulness of macroeconomic research conducted during the three decades preceding the global financial crisis (GFC) and have blamed academic and central bank researchers' focus on New Keynesian dynamic stochastic general equilibrium (DSGE) models for misdirecting the attention of policymakers. Such New Keynesian DSGE models account, within an intertemporal optimization-based general equilibrium framework, for forward-looking expectations formation and decision-making by market participants. At least the latter modeling element is widely deemed crucial for purposes of policy evaluation. In addition, these models typically involve a range of nominal and real rigidities as well as adjustment costs, such as habit formation, investment adjustment costs, capital utilization restrictions, frictions on wage and price adjustments, search in labor markets, and so on. Thus, they
have already augmented the optimization-based framework with some relevant market imperfections and behavioral assumptions that are commonly associated with behavioral economics. The GFC and the ensuing criticism of financial economics and macroeconomics have further inspired researchers to work on better integrating imperfections and risks associated with the financial sector into business cycle analysis. In this chapter, we review recent developments in integrating more detailed characterizations of the financial sector into New Keynesian models as typically used at central banks and other policymaking institutions for evaluating monetary policy. On the basis of this review, we then analyze the implications of these models for the design of monetary policy. To do so, we employ a comparative approach to macroeconomic modeling (Taylor and Wieland 2012; Wieland et al. 2016) that draws on a public model archive (www.macromodelbase.com). Specific questions we address in this chapter include the following: What is the role of the financial sector in these models with regard to policy transmission? Should prescriptions for monetary policy be revised in light of new findings from these models with financial frictions? Should monetary policy actively lean against asset price or credit growth? Has it been possible to improve model fit and forecasting performance by including more detailed representations of the financial sector in New Keynesian models? The remainder of the chapter is organized as follows: In section 23.2, we review some of the recent developments in structural macroeconomic modeling, specifically as they have taken place at central banks since the paradigm shift away from Cowles Commission-type macroeconomic models. In section 23.3, we turn to the implications for policy transmission of integrating financial frictions into New Keynesian models.
We consider the implications for the transmission of monetary policy measures as well as the implications for the interaction of monetary with fiscal and macroprudential policies. Section 23.4 focuses on the implications of embedding financial frictions in New Keynesian models for the formulation of robust monetary policy rules. So far, there are few studies that aim to identify policies that perform well across a range of relevant models. Yet it is important to inform policymakers about such robust strategies given the degree of uncertainty inherent in macroeconomic modeling and the prediction of policy effects. In section 23.5, we discuss issues of forecasting based on models with and without financial frictions. Section 23.6 concludes.
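The simple interest-rate rules evaluated in such cross-model robustness exercises can be written down in a few lines. The sketch below implements the classic Taylor (1993) rule with the rule's original coefficients; it is purely illustrative and is not taken from any of the models discussed in this chapter.

```python
# Illustrative sketch of the classic Taylor (1993) rule: the nominal
# policy rate responds to inflation and the output gap, with a 2%
# equilibrium real rate and a 2% inflation target (Taylor's original values).

def taylor_rule(inflation, output_gap, r_star=2.0, pi_star=2.0):
    """Nominal policy rate (percent) prescribed by the Taylor (1993) rule."""
    return r_star + inflation + 0.5 * (inflation - pi_star) + 0.5 * output_gap

# With inflation at target and a closed output gap, the rule prescribes
# the neutral nominal rate of 4 percent.
print(taylor_rule(inflation=2.0, output_gap=0.0))  # -> 4.0
```

Robustness exercises of the kind surveyed later in this chapter vary such response coefficients and the measure of economic activity and then compare the rule's performance across competing models.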
23.2 Macroeconomic Modeling and Central Banks

In this section, we briefly outline some of the history of macroeconomic modeling, with a special focus on structural models, models used at central banks, and recent developments in modeling the financial sector.1 Macroeconomic modeling has a long
history. As early as 1936, Jan Tinbergen proposed one of the first mathematical macroeconomic models.2 This seminal contribution constituted a prelude to the work of the Cowles Commission at Chicago and the Cowles Foundation at Yale in the 1940s and 1950s, as well as at the University of Pennsylvania in the 1960s, where researchers such as Trygve Haavelmo and Lawrence Klein investigated how to build and estimate structural macroeconomic models and developed cornerstones for the evolution of macroeconomics at large (Haavelmo 1944; Klein 1969). Since then, macroeconomic modeling has been characterized by competition and evolution. Models are continually being revised in light of new findings, from both theoretical and empirical perspectives. Particular emphasis has been placed on the internal consistency of economy-wide models and the development of suitable empirical benchmarks. Moreover, macroeconomic models have been adapted so as to provide solutions for new challenges faced by policymakers. In central banking, macroeconomic models were adopted only very gradually; even pioneering central banks such as the US Federal Reserve and the Bank of Canada started to use macroeconomic models as late as the 1960s.3 Most of the models employed during this phase featured backward-looking dynamics and various Keynesian elements. Dynamics were imposed using partial adjustment mechanisms and/or error-correction approaches. Expectations were appended using estimated distributed lag structures. Such models shaped the understanding of monetary policy and its effects on the real economy for a long period of time, and they still played an important role in the late 1990s, as evidenced by Svensson (1997a; 1997b), Rudebusch and Svensson (1999), and Ball (1999). In the 1970s, however, macroeconomic modeling experienced a paradigm shift.
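The backward-looking mechanics described above can be made concrete in a few lines. The sketch below is a stylized illustration only: expected inflation is a fixed distributed lag of past inflation, and a variable adjusts partially toward its target each period. The lag weights and adjustment speed are invented for illustration and are not estimates from any central bank model.

```python
# Stylized backward-looking building blocks of early central bank models:
# a distributed-lag expectation and a partial-adjustment mechanism.
# All coefficients are illustrative, not estimated.

def expected_inflation(history, weights=(0.5, 0.3, 0.2)):
    """Distributed-lag expectation: weighted average of recent inflation,
    with `history` ordered oldest first and weights on the most recent first."""
    return sum(w * pi for w, pi in zip(weights, reversed(history)))

def partial_adjustment(current, target, speed=0.25):
    """Close a fraction `speed` of the gap to `target` each period."""
    return current + speed * (target - current)

history = [4.0, 3.0, 2.0]  # past inflation, oldest first
print(expected_inflation(history))   # 0.5*2.0 + 0.3*3.0 + 0.2*4.0, about 2.7
print(partial_adjustment(2.0, 4.0))  # moves a quarter of the way: 2.5
```

In models of this type, expectations respond to policy only with a lag and only mechanically, which is precisely the feature the critiques discussed in this section called into question.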
The influential critiques by Lucas (1976), Kydland and Prescott (1977), and Sims (1980) constituted a major challenge for the traditional Keynesian style of modeling, which relied on short-run restrictions not derived directly from economic behavior. Eventually, this led to the development of a first generation of New Keynesian models, which incorporated rational expectations with regard to the forward-looking calculation of market participants' expectations as well as nominal rigidities such as staggered wage and price contracts (e.g., Anderson and Taylor 1976; Taylor 1979; Taylor 1993b). In those models, monetary policy had sustained real effects even if households and firms could anticipate the monetary policy actions. With some lag, models of this nature were adopted by central banks, including the QPM at the Bank of Canada and the FRB/US model at the Federal Reserve.4 Furthermore, the focus on forward-looking and optimizing behavior in modeling put rules rather than discretionary action at center stage in policy analysis. Also, research on monetary policy eventually moved from focusing on money growth to interest-rate rules, as emphasized in the seminal contribution by Taylor (1993a). Parallel to the New Keynesian approach, another strand of literature, building on Lucas (1976) and initiated by Kydland and Prescott (1982), led to the development of so-called real business cycle (RBC) models. These models implemented a general equilibrium framework in which decision rules are derived from constrained intertemporal
optimization problems faced by households and firms. However, the absence of nominal rigidities rendered monetary policy analysis obsolete in these models. The "New Neoclassical Synthesis," as exemplified in the work of Goodfriend and King (1997) and Rotemberg and Woodford (1997), brought together New Keynesian modeling and the RBC approach. The combination of nominal rigidities, imperfect competition, and the general equilibrium framework produced a second generation of New Keynesian models, including, among others, McCallum and Nelson (1999), Clarida, Gali, and Gertler (1999), and Walsh (2003).5 In these models, monetary policy is usually described by means of a simple Taylor-type rule relating the central bank's nominal policy rate to inflation, economic activity (as measured by output, the output gap, or output growth), and possibly other aggregate variables. The early contributions to the second generation of New Keynesian DSGE models provided an internally consistent optimizing-behavior-based framework, which allowed for real effects of monetary policy and could be used for the analysis of monetary policy strategies. However, these models did not fit the data as well as the first generation of New Keynesian models. Christiano, Eichenbaum, and Evans (2005; henceforth CEE) first proposed to extend such New Keynesian DSGE models with a combination of certain behavioral assumptions and adjustment costs that would allow such medium-size models to better match estimates of the macroeconomic effects of monetary policy obtained with vector autoregression (VAR) analysis.6 Such medium-size models from the second New Keynesian generation typically feature physical capital in firms' production function with variable capital utilization, habit formation in consumption, and various frictions such as wage stickiness and investment adjustment costs as well as price and wage indexation.
The latter frictions are assumed ad hoc; that is, they are not themselves derived from optimizing behavior. As a consequence of these modeling features, medium-size second-generation New Keynesian models exhibit not only forward-looking expectations but also significant backward-looking dynamics. Thus, they can match VAR-based measures of the macroeconomic effects of monetary policy shocks as well as the first generation of New Keynesian models. This recognition, combined with the Bayesian estimation approach proposed and implemented by Smets and Wouters (2003; 2007; henceforth SW), who estimated versions of CEE models for the euro area and US economies, led to a rapid popularization of the second-generation New Keynesian DSGE framework. Advances in computing technology made it possible to solve many of these models relatively easily and quickly. As evidenced by Coenen et al. (2012) and Vlcek and Roger (2012), variants of this second generation of New Keynesian DSGE models had been added to the modeling toolkits of many central banks and policy institutions prior to the onset of the Great Recession in 2008. Table 23.1 provides an overview of structural macroeconomic models used at selected central banks since the 1960s. Many macroeconomists see great benefit in using a model that consistently relies on deriving the whole system of equations from optimizing behavior. In the words of Del Negro and Schorfheide (2013), "The benefit of building empirical models on sound
Table 23.1 Macroeconomic Models in Selected Policy Institutions before 2008

Institution | Model name | References

Traditional Keynesian-style models
Bank of Canada | RDX1/RDX2/RDXF | Helliwell 1969
European Central Bank | Area-Wide Model (AWM) | Dieppe, Küster, and McAdam 2005
US Federal Reserve System | MIT-PENN-SSRC Model (MPS) | De Leeuw and Gramlich 1968
US Federal Reserve System | Multicountry Model (MCM) | Stevens et al. 1984

First-generation New Keynesian models
Bank of Canada | Quarterly Projection Model (QPM) | Poloz, Rose, and Tetlow 1994; Coletti et al. 1996
US Federal Reserve System | FRB-US Model | Brayton and Tinsley 1996

Second-generation New Keynesian models
Bank of Canada | Terms-of-Trade Economic Model (ToTEM) | Murchison and Rennison 2006
Bank of England | Bank of England Quarterly Model (BEQM) | Harrison et al. 2005
Bank of Japan | Japanese Economic Model (JEM) | Fujiwara et al. 2005
US Federal Reserve System | SIGMA | Erceg, Guerrieri, and Gust 2006
US Federal Reserve System | EDO Model | Edge, Kiley, and Laforte 2007
European Central Bank | New Area-Wide Model (NAWM) | Christoffel, Coenen, and Warne 2008
Norges Bank | NEMO | Brubakk et al. 2006
Sveriges Riksbank | RAMSES | Adolfson et al. 2007
theoretical foundations is that the model delivers an internally consistent interpretation of the current state and future trajectories of the economy and enables a sound analysis of policy scenarios” (61). In our view, though, this should not mean that other approaches to structural macroeconomic modeling ought to be considered “unsound,” thereby creating an environment that is hostile to a pluralism of macroeconomic modeling paradigms. Following the financial crisis and recession of 2008–2009, an important criticism of second-generation New Keynesian models was that they failed to capture the importance of the financial sector for the development of real economic activity. Against the backdrop of the Great Recession, the assumptions of frictionless and complete financial markets and the absence of financial risk in these models seemed untenable. Blanchard, Dell’Ariccia, and Mauro (2010) find that a key lesson for economists
from the GFC is that "financial intermediation matters" (205). Hence, researchers aimed to explore a variety of new modeling approaches to investigate the role of credit channels, of bank and household balance sheets, and of asset prices for the business cycle, for policy transmission mechanisms, and for forecasting performance. It is a welcome development that there is again a pluralism of structural macroeconomic modeling approaches. For example, new macrofinance models in the spirit of Brunnermeier and Sannikov (2014) feature strong endogenous feedback loops between financial and goods markets. However, these models have not yet been brought to the data to the same extent as is common in the New Keynesian literature. Another approach is agent-based modeling (ABM). ABMs also feature a detailed modeling of financial markets, studying, among other things, the interaction among traders from a granular perspective. In recent years, researchers have begun to investigate in more depth how to estimate such models (Alfarano, Lux, and Wagner 2005; Fagiolo and Roventini 2012) and how to use them for economic policy design (Dawid, Harting, and Neugart 2014; Delli Gatti and Desiderio 2015). Other approaches, such as network models, are also used to model the interactions between financial-sector agents in more detail, in particular the relationships between financial institutions, with a focus on systemic risk (compare Aldasoro, Delli Gatti, and Faia 2015; Gai, Haldane, and Kapadia 2011). Most notably, however, the criticism of macroeconomic modeling spurred many new approaches to integrating more detailed characterizations of the financial sector into New Keynesian DSGE models. In other words, the GFC led to the development of a third generation of New Keynesian DSGE models featuring financial frictions.
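The network approaches mentioned above can be illustrated with a toy default cascade on an interbank exposure matrix. The exposures and capital buffers below are invented for illustration; actual applications in this literature calibrate such objects to supervisory or market data.

```python
# Toy default-cascade sketch in the spirit of interbank network models.
# exposures[i][j] is the amount bank i is owed by bank j; a bank fails
# once its losses on claims against failed counterparties exceed its
# capital buffer. All numbers are hypothetical.

def default_cascade(exposures, capital, initially_failed):
    """Return the set of banks that fail after contagion plays out."""
    failed = set(initially_failed)
    changed = True
    while changed:
        changed = False
        for i in range(len(capital)):
            if i in failed:
                continue
            loss = sum(exposures[i][j] for j in failed)
            if loss > capital[i]:
                failed.add(i)
                changed = True
    return failed

# Bank 0 fails exogenously; bank 1's large claim on bank 0 wipes out its
# capital, while bank 2's smaller losses leave it standing.
exposures = [[0, 5, 5],
             [12, 0, 2],
             [3, 4, 0]]
capital = [8, 10, 9]
print(sorted(default_cascade(exposures, capital, [0])))  # -> [0, 1]
```

Even this minimal setup conveys the systemic-risk point of the network literature: whether an initial failure stays contained depends on the topology of exposures relative to capital, not on any single balance sheet in isolation.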
Despite the increased plurality in structural modeling paradigms, New Keynesian models continue to be the main structural modeling tool at central banks. Before documenting this prevalence of New Keynesian models at central banks in more detail, we discuss various approaches to integrating financial-sector characterizations and associated frictions into the New Keynesian framework. The most prominent type of financial friction currently employed is the one proposed by Bernanke, Gertler, and Gilchrist (1999; hereafter BGG). Even before the GFC, De Graeve (2008) had incorporated this financial accelerator in a medium-size second-generation New Keynesian model. Following BGG, entrepreneurs need to obtain loans to purchase physical capital, as their own net worth is not sufficiently high. However, in the spirit of Townsend (1979), their idiosyncratic returns are observable only by the entrepreneurs themselves. By contrast, lenders, which are assumed to be perfectly competitive financial intermediaries or banks (and thus appear only implicitly in the model), need to pay a fixed auditing cost in order to observe the returns. The resulting risky debt contract between lenders and entrepreneurs ties the external finance premium (EFP), the expected return on capital minus the risk-free rate, to the entrepreneur's net worth position. A higher net worth lowers the implied probability of the entrepreneur's default and thus decreases the required EFP. As entrepreneurial net worth is procyclical, this gives rise to a countercyclical EFP. This interaction works as an amplifying mechanism over business cycles: the so-called financial accelerator
mechanism. The BGG financial accelerator can be integrated in a New Keynesian model while leaving the core second-generation model block intact. The result is a third-generation New Keynesian DSGE model with all of the second-generation features and additional financial frictions. There are different possibilities for better integrating the financial sector in New Keynesian DSGE models. Some of them extend the BGG framework but keep the focus on financial frictions between firms and banks linked to firms' net worth. For example, Christensen and Dib (2008) consider a debt contract à la BGG written in terms of the nominal interest rate. This gives rise to an additional debt-deflation channel in the model. Carlstrom et al. (2014a) consider a privately optimal contract variant, which is indexed to the aggregate return on capital. Christiano, Motto, and Rostagno (2014) propose an extension of the BGG framework by introducing risk shocks, that is, shocks to the standard deviation of firms' idiosyncratic returns. By contrast, other approaches emphasize bank balance sheets, household balance sheets, or asset prices as important drivers of financial-sector risk. Among them, models that explicitly characterize the behavior of financial intermediaries in the New Keynesian DSGE framework have attracted a lot of attention.7 In light of the GFC, key ideas from Bernanke and Blinder (1988) and Bernanke and Gertler (1995) were resuscitated: the notion that the balance sheets of financial intermediaries, or, more specifically, bank net worth, are crucial for business cycle dynamics and policy transmission (Gambacorta and Mizen, chapter 12 in this volume). The resulting strand of literature aiming to incorporate banks into New Keynesian DSGE models is too rich to discuss fully in this chapter.
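The countercyclical external finance premium at the core of the BGG mechanism can be illustrated numerically. The linear functional form and elasticity below are illustrative stand-ins chosen for transparency; in BGG the premium emerges from the optimal debt contract rather than from an assumed formula.

```python
# Stylized external finance premium (EFP) in the spirit of BGG: the
# spread over the risk-free rate rises with entrepreneurial leverage
# (assets over net worth). The linear form and the elasticity value are
# assumptions made for illustration, not the contract-implied premium.

def external_finance_premium(assets, net_worth, elasticity=0.05):
    """EFP (as a decimal spread), increasing in leverage, zero at full equity."""
    leverage = assets / net_worth
    return elasticity * (leverage - 1.0)

# A fall in net worth raises leverage and hence the premium; this is the
# sense in which the EFP is countercyclical and amplifies downturns.
print(external_finance_premium(assets=100.0, net_worth=50.0))  # -> 0.05
print(external_finance_premium(assets=100.0, net_worth=40.0))  # higher premium
```

Because net worth falls in downturns, borrowing costs rise exactly when investment is already weak, which is the amplification logic of the financial accelerator described above.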
Notably, despite these new developments in characterizing banks' interaction with the overall economy, there is no consensus yet regarding how to incorporate banks into New Keynesian DSGE models. In the following, we list some prominent examples. Gertler and Kiyotaki (2010) and Gertler and Karadi (2011) propose a New Keynesian DSGE framework in which information asymmetries exist between banks and households. These asymmetries create a moral hazard problem. In each period, bankers are able to divert a certain fraction of household deposits. An incentive compatibility constraint rules out such behavior and gives rise to an endogenous leverage constraint linking the volume of intermediated loans to bank net worth. As bank net worth is procyclical, commercial bank intermediation of loans is procyclical as well, leading to an amplification of business cycles. In similar fashion, Meh and Moran (2010) propose a framework in which bank capital is crucial to mitigating informational asymmetries in the banking sector. They assume a double moral hazard problem between banks, entrepreneurs, and households in the spirit of Holmstrom and Tirole (1997). As a result, the capital position of the bank governs its ability to obtain deposits, such that the bank capital channel amplifies business cycles. Other authors highlight the importance of the structure of the banking market and regulatory capital requirements. Gerali et al. (2010) and Dib (2010) depart from the assumption of perfect competition in the banking sector. Banks possess some market power and act as price setters on loan rates. However, banks are also subject to
regulatory loan-to-deposit ratios or capital requirements, linking the loan rate to bank balance-sheet conditions. In these models, the financial sector has an attenuating effect on shocks that impact the economy via a change in real rates or in the value of collateral. In similar fashion, Afanasyeva and Güntner (2015) reverse the bargaining power in the traditional BGG setup. In their model, an expansionary monetary policy shock increases the leverage ratio and bank net worth, and accordingly also bank lending. This gives rise to a risk-taking channel in the transmission of monetary policy. Here, bank market power does not lead to an attenuating effect of the financial sector but preserves the financial accelerator mechanism of BGG. Overall, it is noteworthy that central banks have included New Keynesian models with financial frictions rather quickly in their toolkits. Table 23.2 provides an overview of structural macroeconomic models currently used at selected central banks and international organizations.8 New Keynesian models remain the central modeling tool used at policy institutions. Nominal rigidities, forward-looking decision-making, and the mixture of other frictions are still considered crucial for policy evaluation. By now, most central banks have variants of these models available that include some form of financial frictions. In our categorization, these are considered third-generation New Keynesian models. As shown in table 23.2, in many cases, such as in the Fed's EDO model or the Bundesbank's GEAR model, the financial sector is modeled in a reduced-form way as a time-varying risk premium over some risk-free interest rate, where the latter is usually assumed to be under the control of the central bank. Apart from a reduced-form risk premium, the most common specification for financial frictions is the financial accelerator of BGG.
Central bank models using this approach include models from the Bank of Canada, the European Central Bank (ECB), and Sveriges Riksbank. Only a few central banks and other international organizations have so far incorporated banking sectors into the structural models they use for regular policy analysis. Notable exceptions are models used at the Bank of Canada (BoC-GEM-Fin), the ECB (EAGLE-FLI), and Norges Bank (NEMO). Of course, this should not be interpreted as an indication that central bankers disregard the role of the banking sector for business cycles. Most of the macrobanking models have been developed and proposed by researchers working at central banks.9 According to Vlcek and Roger (2012), they can be interpreted as "satellite models" for central banks. While their exact mechanisms are not yet included in the core models used for regular policy analysis, they may still shape the discussion and thinking of policymakers at central banks. In other words, their influence on policy discussions is rather indirect and goes beyond what is suggested by looking exclusively at central banks' flagship models. A key lesson we draw from this survey is that the Great Recession spurred a substantial effort to improve the modeling of the financial sector and associated frictions in New Keynesian DSGE models. To date, there are many competing modeling approaches. While there is not yet a workhorse macroeconomic model including financial frictions, the most popular approach is the firm-based financial accelerator mechanism of BGG. Many central banks have incorporated this channel or reduced-form variants thereof
Table 23.2 Structural Macroeconomic Models Currently Used in Selected Policy Institutions

| Institution | Model name | Type | Financial frictions | References |
|---|---|---|---|---|
| Bank of Canada | BoC-GEM-Fin | NK-DSGE | BGG + banks | Lalonde and Muir 2007; de Resende, Dib, and Perevalov 2010 |
| Bank of Canada | ToTEM II | NK-DSGE | RP | Murchison and Rennison 2006; Dorich et al. 2013 |
| Bank of England | COMPASS | NK-DSGE | No | Burgess et al. 2013 |
| US Federal Reserve System | FRB-US | NK-PAC | RP | Brayton and Tinsley 1996; Brayton, Laubach, and Reifschneider 2014 |
| US Federal Reserve System | EDO | NK-DSGE | RP | Chung, Kiley, and Laforte 2010 |
| Deutsche Bundesbank | GEAR | NK-DSGE | RP | Stähler, Gadatsch, and Hauzenberger 2014 |
| Deutsche Bundesbank | BBK-Model | NK-DSGE | BGG | Buzaushina et al. 2011 |
| Federal Reserve Bank of Chicago | Chicago Fed DSGE | NK-DSGE | BGG | Brave et al. 2012 |
| Federal Reserve Bank of New York | FRBNY-DSGE | NK-DSGE | BGG | Del Negro et al. 2013 |
| European Central Bank | NAWM | NK-DSGE | BGG | Christoffel, Coenen, and Warne 2008; Lombardo and McAdam 2012 |
| European Central Bank | CMR | NK-DSGE | BGG | Christiano, Rostagno, and Motto 2010; Christiano, Motto, and Rostagno 2014 |
| European Central Bank | NMCM | NK (a) | No | Dieppe et al. 2011 |
| European Central Bank | EAGLE-FLI | NK-DSGE | Banks | Gomes, Jacquinot, and Pisani 2012; Bokan et al. 2016 |
| European Commission | QUEST III | NK-DSGE | RP | Ratto, Roeger, and in 't Veld 2009 |
| International Monetary Fund | GIMF-BGG | NK-DSGE | BGG | Anderson et al. 2013; Andrle et al. 2015 |
| International Monetary Fund | GIMF-BANKS | NK-DSGE | BGG + banks | Andrle et al. 2015; Benes and Kumhof 2011 |
| Banco de Portugal | PESSOA | NK-DSGE | BGG | Almeida et al. 2013 |
| Norges Bank | NEMO | NK-DSGE | BGG + banks | Brubakk et al. 2006; Brubakk and Gelain 2014 |
| OECD | OECD Fiscal | NK-DSGE | RP | Furceri and Mourougane 2010 |
| Banco de España | FiMod | NK-DSGE | RP | Stähler and Thomas 2012 |
| Sveriges Riksbank | RAMSES | NK-DSGE | BGG | Adolfson et al. 2007; 2013 |

Note: NK-DSGE is New Keynesian Dynamic Stochastic General Equilibrium. PAC is Polynomial Adjustment Cost framework. BGG denotes the financial accelerator in Bernanke, Gertler, and Gilchrist 1999. RP means some other form of (ad hoc) risk premium.
(a) The model assumes that agents don't know the full structure and parameter values of the model. This bounded rationality constitutes a major departure from the usual DSGE framework. The authors describe their model as an "optimising agent—New Keynesian model."
in their policy models, thus moving from the second to the third generation of New Keynesian DSGE models. However, this is only a start, and other financial-sector imperfections need to be considered. The core structure of New Keynesian DSGE models, as represented by the second-generation variants à la CEE and SW, has been widely used for monetary policy analysis. However, the relatively new and rich strand of macrofinancial models represents an increased degree of model uncertainty for policymakers. There is modeling uncertainty regarding the most important financial frictions and their implications for transmission mechanisms, optimal monetary policy, and forecasting performance. Model uncertainty itself is a possible explanation for the implementation lag regarding those models used regularly for policy analysis at central banks. At this point, comparative analysis can be very useful. Policymaking institutions need to compare old and new models and to evaluate the impact and interaction of policy instruments in order to design effective and robust policy strategies. In fact, macroeconomic model comparison has a long tradition in the fields of monetary and fiscal policy analysis.10 Wieland et al. (2016) state, "Central banks and international organizations have made much use of academic research on macroeconomic modelling, and they have invested staff resources in practical policy applications in large-scale comparison exercises" (1). Yet as the strand of research on financial-sector modeling in New Keynesian models is relatively young, there have been few comparative studies. Thus, in the following sections, we review and investigate some key issues regarding the transmission channels and the impact of monetary policy, fiscal policy, and macroprudential policy in these third-generation New Keynesian models relative to earlier generations.
We also discuss whether model fit and forecasting performance have improved by including more detailed representations of the financial sector in such economy-wide models.
23.3 Policy Transmission in Models with and without Financial Frictions

First, we employ a comparative approach to analyze the implications of financial frictions for policy transmission. This approach makes use of an archive of macroeconomic models (www.macromodelbase.com) that includes many novel contributions. It builds on and extends recent work on model comparison by Taylor and Wieland (2012), Wieland et al. (2012), Schmidt and Wieland (2013), and Wieland et al. (2016). We investigate to what extent assessments of the impact of monetary policy on the real economy have changed with third-generation New Keynesian models relative to earlier ones. Specifically, the role of the financial sector and associated frictions is explored. Our focus on New Keynesian models should not be interpreted as an exclusive preference for this modeling approach. Rather, much can be learned from rigorous
Model Uncertainty in Macroeconomics 689
comparison between competing modeling paradigms. Yet to make progress, models need to be useful for providing answers to the very questions that policymakers ask. This is the case for New Keynesian DSGE models. At the same time, there is criticism of DSGE models, as expressed by Blanchard (2016) and Romer (2016), and alternative modeling approaches are under development. The comparative approach pursued in the following would be well suited to exploring differences and similarities in the impact of policy measures across such models in the future.
23.3.1 Interest-Rate Policy

It is natural to start with a comparison of the impact of interest-rate shocks in traditional Keynesian-style models versus New Keynesian models. For example, the model of Rudebusch and Svensson (1999; hereafter RS99), which consists of a backward-looking IS curve, an accelerationist Phillips curve (with adaptive expectations), and an interest-rate rule, constitutes such a traditional model. The models of Taylor (1993b; hereafter TAY93), Fuhrer and Moore (1995; FM95), and Coenen, Orphanides, and Wieland (2004; COW04) belong to the first generation of New Keynesian models, featuring nominal rigidities and forward-looking rational expectations. In all models, we implement the monetary policy rule by Gerdesmeier et al. (2004) to eliminate differences stemming from model-specific rules and isolate the effect of differences in core model structures. Figure 23.1 shows the impulse responses following an expansionary monetary policy shock, that is, a decrease of the nominal interest rate by 1 percentage point. In all four models, such an interest-rate cut induces a hump-shaped response of output. The peak response in first-generation New Keynesian models occurs after two to three quarters. TAY93 and COW04 indicate quite similar magnitudes. The FM95 model features a smaller, somewhat longer-lasting response. In contrast, the largest and longest-lasting output effects occur in the traditional Keynesian-style RS99 model. This is due to the
[Figure 23.1 panels: Output, Inflation, and Interest rate impulse responses over 20 quarters; models: FM95, RS99, COW04, TAY93]
Figure 23.1 Monetary Policy Shock in Traditional Keynesian-Style Models and First-Generation New Keynesian Models Notes: Impulse responses following a decrease in the gross annualized nominal interest rate of 1 percent. All impulse responses are in percentage deviations from the nonstochastic steady state, one period is a quarter, and inflation is the annual inflation rate. The rule by Gerdesmeier et al. (2004) is used as a common monetary policy rule.
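The persistence generated by backward-looking dynamics can be illustrated with a minimal simulation in the spirit of the RS99 structure: a backward-looking IS curve, an accelerationist Phillips curve, and a Taylor-type rule. The coefficients below are illustrative choices, not the estimated Rudebusch-Svensson parameters, and the rule is a generic Taylor-type rule rather than the Gerdesmeier et al. (2004) rule used in the figure.

```python
import numpy as np

# Stylized backward-looking model in the spirit of RS99:
#   IS curve:       y_t  = a*y_{t-1} - b*(i_{t-1} - pi_{t-1})
#   Phillips curve: pi_t = pi_{t-1} + c*y_{t-1}      (accelerationist)
#   Policy rule:    i_t  = 1.5*pi_t + 0.5*y_t + e_t  (Taylor-type)
# Coefficients are illustrative, not estimated RS99 parameters.
a, b, c = 0.85, 0.10, 0.15
T = 21  # quarters
y, pi, i = np.zeros(T), np.zeros(T), np.zeros(T)
shock = np.zeros(T)
shock[0] = -1.0  # expansionary shock: 1pp cut in the policy rate

for t in range(T):
    if t > 0:
        y[t] = a * y[t-1] - b * (i[t-1] - pi[t-1])
        pi[t] = pi[t-1] + c * y[t-1]
    i[t] = 1.5 * pi[t] + 0.5 * y[t] + shock[t]

# Output rises with a hump and decays only slowly, while the
# accelerationist Phillips curve keeps inflation elevated.
print(f"peak output response {y.max():.3f} at quarter {int(np.argmax(y))}")
# -> peak output response 0.100 at quarter 1
```

Because every equation depends only on lagged variables, the one-period rate cut keeps propagating through the system long after the shock is gone, which is the mechanism behind the large, persistent RS99 responses discussed in the text.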
[Figure 23.2 panels: Output, Inflation, and Interest Rate impulse responses over 20 quarters; models: RW97, IR04, SW07, ACEL05]
Figure 23.2 Monetary Policy Shock in Second-Generation Models Notes: Impulse responses following a decrease in the gross annualized nominal interest rate of 1 percent. All impulse responses are in percentage deviations from the nonstochastic steady state, one period is a quarter, and inflation is the annual inflation rate. The rule by Gerdesmeier et al. (2004) is used as a common monetary policy rule.
pronounced backward-looking dynamics of that model. A similar picture emerges with respect to inflation: the traditional Keynesian-style model features notably large and persistent increases in inflation following expansionary monetary policy shocks. Considering both generations of models, there is substantial model uncertainty, yet less so within the class of first-generation New Keynesian models. In contrast, the second generation of New Keynesian DSGE models displays quite large variance in the quantitative effects of monetary policy shocks. Figure 23.2 compares the impact of such shocks across four different models of this generation. Early models are represented by Rotemberg and Woodford (1997; RW97) and Ireland (2004; IR04) and later variants by Altig et al. (2005; ACEL05) and Smets and Wouters (2007; SW07). The first two models are calibrated versions of the canonical small-scale New Keynesian DSGE model with just a few equations (Euler equation, forward-looking New Keynesian Phillips curve, a monetary policy rule, and money demand in IR04). The latter two models additionally include capital, habit formation, and various frictions such as investment adjustment costs. The smaller-scale models of this generation feature large responses of output and inflation on impact, where the magnitude is a multiple of the effect in models of the first generation shown in figure 23.1. Moreover, the impulse responses peak on impact, returning only slowly to their nonstochastic steady-state values afterward. As shown by CEE, this is at odds with empirical VAR evidence, which suggests that both output and inflation exhibit a hump-shaped response to a monetary policy shock. Medium-size DSGE models of the second generation induce such hump-shaped impulse responses by adding capital in the production function, investment adjustment costs, and other frictions in addition to nominal price rigidities.
The assumption of habit formation can generate a hump-shaped response in consumption by adding backward-looking components similar to the traditional Keynesian-style models and the first-generation New Keynesian models. The presence of investment adjustment costs helps to dampen the strong initial decrease of investment. Overall, the picture emerging from
this analysis suggests that monetary policy transmission works very differently in the medium-size second-generation New Keynesian models. As noted earlier, many central banks included such second-generation New Keynesian models in their suite of models prior to the Great Recession, mainly due to the improved empirical fit of these models relative to the early small-scale models. As found by Taylor and Wieland (2012), these estimated second-generation models tend to display impulse responses following monetary policy shocks that are strikingly similar to those of first-generation models such as Taylor (1993b). We replicate this result in figure 23.3, adding further model variants. We compare TAY93, SW07, ACEL05 with original timing assumptions (ACEL05) and with timing assumptions following SW (ACEL05sw), as well as Cogan et al. (2010; CCTW10). For the TAY93 model, Taylor and Wieland (2012) show that it exhibits properties very similar to those of second-generation models. The ACEL05 model is the CEE framework with two additional shocks and assumes that firms have to borrow working capital to pay the wage bill. The resulting so-called cost channel is not present in SW07. CCTW10 is the SW07 model with rule-of-thumb consumers, reestimated on US data. The rule by SW07 is used as a common monetary policy rule. As is evident in figure 23.3, monetary policy shocks imply very similar transmission to the real economy in these five models. In particular, the peak output responses are quantitatively almost identical. The speed of transition back to steady state is only somewhat larger in ACEL05. For inflation, there is some disagreement among the models regarding the timing of the peak response, but overall the picture emerging from these models is broadly in line with empirical evidence from VARs.
Next, we investigate whether different approaches to modeling the financial sector in New Keynesian DSGE models imply different transmission mechanisms. To this end, we compare seven third-generation models. Out of these, De Graeve (2008; DG08), Christiano, Rostagno, and Motto (2010; CMR10), and Christiano, Motto, and Rostagno
[Figure 23.3 panels: Output, Inflation, and Interest Rate impulse responses over 20 quarters; models: SW07, ACEL05, ACEL05sw, CCTW10, TAY93]
Figure 23.3 Monetary Policy Transmission in Medium-Size Models Notes: Impulse responses following a decrease in the gross annualized nominal interest rate of 1 percent. All impulse responses are in percentage deviations from the nonstochastic steady state, one period is a quarter, and inflation is the annual inflation rate. The rule by Smets and Wouters (2007) is used as a common monetary policy rule.
(2014; CMR14) feature the canonical BGG financial accelerator. CMR10 additionally includes a bank-funding channel.11 Banks are also modeled in Gerali et al. (2010; GNSS10, monopolistic competition), Meh and Moran (2010; MM10, double-sided moral hazard), and Gertler and Karadi (2011; GK11, moral hazard between depositors and banks). Iacoviello and Neri (2010; IN10) incorporate frictions in the housing market via household collateral constraints related to housing value. The SW model serves as a representative second-generation benchmark. Figure 23.4 shows the transmission of a monetary policy shock in these models. Third-generation models exhibit substantial quantitative differences regarding the transmission of monetary policy shocks to the real economy. Medium-size models using the canonical financial accelerator mechanism display similar peak responses of output but longer-lasting real effects and less overshooting of inflation than the SW07 model. Gerke et al. (2013) document similar findings for five third-generation New Keynesian DSGE models used by central banks in the Eurosystem. For the model by Christiano, Motto, and Rostagno (2014), Wieland et al. (2016) show that this result stems from the interaction of wage stickiness and financial frictions and conclude that additional research is warranted to investigate whether monetary policy shocks indeed induce longer-lasting effects on the real economy than previously thought. Another explanation centers on the capital-accumulation channel emphasized by Carrillo and Poilly (2013). Following a decrease in the nominal interest rate, the real interest rate decreases as well, such that households want to increase consumption. Higher aggregate demand triggers a higher demand for capital, thus increasing the price of capital. This increases the value of firm collateral and corresponds to a decrease in leverage and the external finance premium (EFP).
This, in turn, leads to higher investment and an increase of capital, giving rise to a feedback loop resulting in more persistent output effects. In stark contrast, the models by Gerali et al. (2010) and Iacoviello and Neri (2010) feature a peak output and inflation response on impact, similar to early small-scale models of the second generation—despite including all of the CEE features originally intended
[Figure 23.4 panels: Output, Inflation, and Interest Rate impulse responses over 40 quarters; models: SW07, DG08, CMR10, CMR14, IN10, GNSS10, MM10, GK11]
Figure 23.4 Monetary Policy Transmission in Third-Generation Models Notes: Impulse responses following a decrease in the gross annualized nominal interest rate of 1 percent. All impulse responses are in percentage deviations from the nonstochastic steady state, one period is a quarter, and inflation is the annual inflation rate. The rule by Smets and Wouters (2007) is used as a common monetary policy rule.
to match VAR-based impulse responses. Finally, the banking models by Meh and Moran (2010) and Gertler and Karadi (2011) imply output responses similar to those of the SW model. However, the Gertler and Karadi model has an even stronger peak effect for inflation. The differences between the second and the third generations of New Keynesian models are thus at least as pronounced as the ones between small-scale and medium-size variants of the second generation. The implications stemming from third-generation models incorporating the canonical financial accelerator mechanism seem to be broadly the same, in that they imply more persistent effects of monetary policy shocks. The models with bank-based frictions, in turn, imply stronger peak responses. Overall, the third-generation models all indicate an immediate acceleration and/or a more persistent effect of monetary policy shocks on the real economy. The heterogeneity in estimated impacts of monetary policy shocks in these models indicates increased model uncertainty in the third generation of New Keynesian DSGE models that takes into account financial frictions. Policymakers using this new set of models in policy design inherently face different implications depending on what (subset of) model(s) they are using. An important question is how to adjust monetary policy in the presence of such model uncertainty. In section 23.4, we will therefore investigate which monetary policy rules would be robust in the sense of performing fairly well across a range of models with financial frictions and whether these differ from rules performing well across a range of first- and second-generation models. Before doing so, we turn our attention to quantitative monetary policy easing, fiscal policy, and macroprudential policy and their interaction with interest-rate policy.
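The capital-accumulation feedback loop described by Carrillo and Poilly (2013) above (higher demand raises capital prices and collateral values, which lowers the EFP and stimulates investment and hence demand again) can be caricatured as a geometric amplification of an initial demand impulse. The pass-through coefficient and function name below are illustrative assumptions, not values from their calibration.

```python
# Stylized iteration of the capital-accumulation feedback loop:
# higher demand -> higher capital price -> more collateral ->
# lower external finance premium -> more investment -> higher demand.
# The pass-through coefficient is an illustrative assumption.

def feedback_output_effect(initial_impulse, pass_through, rounds=50):
    """Cumulative output effect of an impulse amplified round by round."""
    total, increment = 0.0, initial_impulse
    for _ in range(rounds):
        total += increment
        increment *= pass_through  # each round runs through the loop once
    return total

no_frictions = feedback_output_effect(1.0, pass_through=0.0)      # impulse only
with_accelerator = feedback_output_effect(1.0, pass_through=0.3)  # amplified
assert with_accelerator > no_frictions
```

With a pass-through below one, the loop converges to a finite amplification (here roughly 1/(1 - 0.3) of the impulse), which is the sense in which the accelerator strengthens and prolongs, rather than destabilizes, the transmission.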
23.3.2 The Zero Lower Bound and Unconventional Monetary Policy

Expansionary monetary policy does not need to end when interest rates reach a lower bound near or below zero. This is of particular importance in the post-GFC world of near-zero interest rates. Thus, Ball et al. (2016) state in the 2016 Geneva Report, "Short term interest rates have been near zero in advanced economies since 2009, making it difficult for central banks to cut rates further and provide needed economic stimulus. There is reason to believe that this lower bound problem will be common in years to come" (xix). Yet it should be acknowledged that the zero lower bound (ZLB), which arises from the presence of cash offering savers a zero-nominal-interest investment option, was studied by central bank researchers long before the GFC. It was already the focus of research in the mid- to late 1990s, especially once the Japanese economy started exhibiting very low interest rates starting in 1995. For example, researchers at the Federal Reserve such as Fuhrer and Madigan (1997) and Orphanides and Wieland (1998) already evaluated the effects of the lower-bound constraint on monetary policy in first-generation New Keynesian models.
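The constraint studied in this line of research amounts to truncating the policy rule at the bound. The sketch below applies the truncation to a generic Taylor-type rule; the coefficients follow the classic Taylor (1993) values, and the scenario numbers are made up for illustration.

```python
# A lower bound truncates the rate implied by a Taylor-type rule.
# Coefficients follow Taylor (1993); scenario numbers are illustrative.

def policy_rate(inflation, output_gap, r_star=2.0, pi_star=2.0, floor=0.0):
    """Annualized policy rate from a Taylor-type rule, truncated at `floor`."""
    desired = r_star + inflation + 0.5 * (inflation - pi_star) + 0.5 * output_gap
    return max(floor, desired)

normal = policy_rate(inflation=2.0, output_gap=0.0)   # rule prescribes 4.0
slump = policy_rate(inflation=0.0, output_gap=-8.0)   # rule prescribes -3.0
assert normal == 4.0 and slump == 0.0  # in the slump, the bound binds
```

Once the desired rate falls below the floor, further shocks can no longer be offset by rate cuts, which is why the papers cited above find deeper recessions and prolonged deflation when the constraint is imposed.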
Similarly, Krugman (1998) employed a simple macroeconomic model with a temporarily fixed price level to investigate causes and consequences of the ZLB. Other contributions on the zero bound using first-generation New Keynesian models included Coenen and Wieland (2003; 2004) and Coenen, Orphanides, and Wieland (2004). They showed that a sequence of negative demand and deflationary shocks can lead to a more pronounced recession and a prolonged period of deflation when the ZLB is taken into account. Subsequent work using second-generation New Keynesian models also investigated the consequences of an occasionally binding ZLB for optimal monetary policy (compare, e.g., Adam and Billi 2007; Nakov 2008; Schmidt 2013). Against the backdrop of Japan's prolonged period near the ZLB in the second half of the 1990s and early 2000s, macroeconomic research already explored the remaining policy options aside from interest-rate policy, today broadly called unconventional monetary policy. One of these options is quantitative easing (QE), that is, an expansion of the central bank's balance sheet and the monetary base by means of central bank asset purchases. The objective is the same as with interest-rate cuts, namely, to increase aggregate demand and inflation. The Bank of Japan started making use of QE in 2001. Other major central banks, including the Federal Reserve, the Bank of England, and the European Central Bank, have engaged in massive QE in the aftermath of the Great Recession. To give an example, the total size of the Federal Reserve's balance sheet increased from roughly $900 billion in 2008 to more than $4.4 trillion by 2015. For a more detailed review, see Cukierman, chapter 6 in this volume.
QE exerts an influence through signaling, confidence, real balance, and portfolio balance channels on medium- to longer-term interest rates, risk premiums, exchange rates, asset prices, and overall aggregate demand. Real balance and portfolio balance effects arise to the extent that investments in money, private bonds, and government bonds are not treated as perfect substitutes by households and firms. As a consequence, the demand for these assets is driven by relative quantities in addition to relative prices. Central bank asset purchases thus unfold real effects even without changing short-term interest rates. Related research was conducted well before the Great Recession using New Keynesian models. Orphanides and Wieland (2000) already studied optimal QE. Coenen, Orphanides, and Wieland (2004) and Coenen and Wieland (2003) explored the role of the exchange-rate channel in QE in order to stimulate inflation and growth in Japan. Auerbach and Obstfeld (2005) study the impact of QE on longer-term interest rates. Further transmission channels of QE encompass the direct influence on inflation expectations (Krugman 1998; Belke and Klose 2013) and the stimulation of bank lending (Gambacorta and Marques-Ibanez 2011; Kashyap, Stein, and Wilcox 1993). As such, both the ZLB and QE were investigated extensively already prior to the Great Recession. However, third-generation New Keynesian DSGE models that incorporate financial-market imperfections offer new microeconomic foundations for analyzing how QE works through the financial sector on the overall economy. They allow for jointly modeling the ZLB, financial frictions, and large-scale central bank asset
purchases. Doing so makes it possible to investigate and separate out different transmission channels of unconventional policies deployed after the Great Recession more effectively than in earlier New Keynesian models.12 Thus, it is not surprising that, in particular, third-generation New Keynesian models with banking frictions have been used to study QE. However, there is considerable heterogeneity with respect to the particular modeling approach. Most contributions focus on the portfolio balance effect by distinguishing short- and long-term rates, with the latter being affected by central bank asset purchases (see, for example, Chen, Cúrdia, and Ferrero 2012; Ellison and Tischbirek 2014; Carlstrom et al. 2014b). In such models, the central bank is able to reduce long-term yields by increasing demand for long-term bonds. In turn, this causes a crowding out of saving and an increase in consumption, with expansionary effects on output and inflationary pressure as the final result. A complementary approach is to assume that QE directly affects or attenuates the modeled financial frictions. As an example, Gertler and Karadi (2011; 2013) and Kühl (2016) model QE as a way of circumventing the moral hazard problem inherently existing between depositors and banks (as described above in section 23.2) by providing additional financial intermediation not subject to the financial friction. As an illustrative example, we replicate some key findings of Gertler and Karadi (2013) by considering large-scale asset purchases similar to the QE programs pursued by the Federal Reserve, shown in figure 23.5. Under a binding ZLB (mimicked by constant interest rates for four quarters), the central bank asset purchases cause a decline in long-term bond rates and the EFP.
The combination of a binding ZLB and inflationary pressure reduces the real interest rate, in turn leading to a crowding in of consumption, while the resulting increase in the asset price stimulates investment. Overall, the QE program is expansionary, with a peak increase in output of around 1 percent. The particular relevance of QE at the ZLB is visible from its expansionary effect relative to a situation without a ZLB. In such a scenario, the output effect is weaker, owing to an increase in real interest rates leading to a crowding out of consumption, which counteracts the increase in investment. Despite the more detailed characterization of the financial sector, most third-generation New Keynesian DSGE models do not explicitly model household money holdings and the monetary base. As such, the financing of QE is usually and necessarily assumed to occur through lump-sum taxes or distortionary taxes instead of money creation. As a consequence, some aspects of QE, such as the real balance channel, can only be captured indirectly, in contrast to earlier models such as Orphanides and Wieland (2000). There are some advances in explicitly modeling money creation in such models, but to date without explicit ties to QE (Jakab and Kumhof 2015). Another policy option that has been advanced as a suitable tool for managing expectations near the lower bound is so-called forward guidance (FG). Technically, there are two types of FG. One type consists simply of providing more information about the likely path of future policy rates conditional on the central bank's forecast of macroeconomic developments and a reaction function that characterizes past systematic central
[Figure 23.5 panels, in percentage deviations from steady state: QE, Output, Inflation, Nominal interest, Real interest, Asset price, Bank net worth, EFP; scenarios: ZLB (solid line) vs. no ZLB]
Figure 23.5 Quantitative Easing in Gertler and Karadi (2013) Notes: Government bond purchases (QE) by the central bank in the model by Gertler and Karadi (2013). Purchases are calibrated to a peak effect of 2.5 percent of GDP. Interest rates are kept unchanged for four periods in the ZLB scenario (solid line). EFP is the external finance premium E[R^k_{t+1} − R_{t+1}].
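The reason QE is more powerful when the bound binds can be reduced to a Fisher-equation sketch: with the nominal rate pinned, a QE-induced rise in expected inflation lowers the real rate, whereas away from the bound a rule responding more than one-for-one to inflation raises the real rate. The numbers below are illustrative assumptions, not values from the Gertler and Karadi (2013) calibration.

```python
# Fisher-equation sketch of why QE is more expansionary at the ZLB.
# All numbers are illustrative assumptions.

def real_rate(nominal, expected_inflation):
    """Ex ante real rate via the Fisher relation (percentage points)."""
    return nominal - expected_inflation

pi_e = 0.4  # assumed QE-induced rise in expected inflation (pp)

# At the ZLB the nominal rate stays pinned at zero: the real rate falls.
at_zlb = real_rate(nominal=0.0, expected_inflation=pi_e)
# Off the ZLB, a rule reacting more than one-for-one raises the real rate.
off_zlb = real_rate(nominal=1.5 * pi_e, expected_inflation=pi_e)

assert at_zlb < 0 < off_zlb  # stimulus at the bound, dampening away from it
```

This mirrors the comparison in the figure: in the no-ZLB scenario the endogenous rate response pushes real rates up and crowds out consumption, weakening the output effect of the same purchase program.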
bank reactions to these developments reasonably well. Some central banks have been publishing such forecasts for some time, in particular the central bank of Norway. Other central banks, such as the ECB, have explained their FG in the same manner. The other type of FG can be characterized as a public commitment concerning future policy rates that constitutes a deviation from the reaction function or policy rule. In practice, these two approaches are difficult to separate empirically. Owing to the explicit modeling of forward-looking behavior, New Keynesian models are a suitable framework to analyze the macroeconomic effects of such policies, as is evident in some early contributions prior to the Great Recession by Reifschneider and Williams (2000), Eggertsson and Woodford (2003), and Adam and Billi (2006; 2007), among others. Such analyses have shown that it can be beneficial to announce that the central bank will keep interest rates lower for longer in the aftermath of a period at the zero bound than would be anticipated based on a central bank reaction function that characterizes policy in normal times. In the aftermath of the Great Recession, the
Federal Reserve made use of qualitative announcements regarding its anticipated future path of the federal funds rate. For example, in January 2012, the Federal Open Market Committee stated that it "anticipates that weak economic conditions are likely to warrant exceptionally low levels of the federal funds rate for at least . . . late 2014." The effects of such actions are investigated by Campbell et al. (2012). Using a third-generation New Keynesian model, Giannoni, Patterson, and Del Negro (2016) document that the assumption of rational expectations and full information that typically governs forward-looking behavior in these models implies large real effects of FG well in excess of empirical estimates. A number of contributions have aimed to make these models consistent with weaker effects of FG. Examples of such modifications include higher discounting of the future (Giannoni, Patterson, and Del Negro 2016), heterogeneous agents and borrowing constraints (McKay, Nakamura, and Steinsson 2016), and heterogeneous beliefs (Gaballo et al. 2016). Of course, the strong expectational channel of monetary policy (announcements) has also been a feature of earlier New Keynesian models. For example, Coenen and Wieland (2004) have shown that the effectiveness of price-level targeting at the ZLB is substantially reduced when the central bank's target is not credible and market participants are learning from the data.
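The excessive power of FG under rational expectations can be seen by iterating a purely forward-looking IS curve, x_t = E_t x_{t+1} − σ(i_t − E_t π_{t+1} − r*), forward: today's output gap equals the undiscounted sum of all expected future real-rate gaps. The sketch below uses an illustrative σ and a stylized discounting modification in the spirit of the fixes cited above; both functions are my own simplification, not any cited model.

```python
# Iterating the purely forward-looking IS curve
#   x_t = E_t x_{t+1} - sigma * (i_t - E_t pi_{t+1} - r*)
# gives x_t = -sigma * sum of expected future real-rate gaps.
# sigma and the paths are illustrative assumptions.
sigma = 1.0

def output_today(real_rate_gaps):
    """Current output gap implied by a path of expected real-rate gaps."""
    return -sigma * sum(real_rate_gaps)

def output_today_discounted(real_rate_gaps, beta=0.97):
    """Same iteration with extra discounting of the future (stylized fix)."""
    return -sigma * sum(beta**j * g for j, g in enumerate(real_rate_gaps))

cut_now = output_today([-1.0] + [0.0] * 19)    # 1pp real-rate cut this quarter
cut_later = output_today([0.0] * 19 + [-1.0])  # same cut, 20 quarters ahead
assert cut_now == cut_later  # equally powerful however distant the promise

# With discounting, distant guidance is weakened.
assert output_today_discounted([0.0] * 19 + [-1.0]) < cut_now
```

The first assertion is the forward guidance puzzle in miniature: a promised cut far in the future moves output today exactly as much as a cut now, which is the counterfactually strong effect the modifications listed in the text aim to temper.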
Furthermore, the heterogeneity in modeling approaches suggests an urgent need to compare the effects of QE across models and to study which strategies would be robust to the elevated degree of model uncertainty.
23.3.3 Fiscal and Monetary Policy Interaction

When central bank rates reach a lower bound at zero or small negative values, it is often argued that it would be best to use fiscal policy to stimulate aggregate demand. Indeed, this holds in traditional Keynesian models with fixed prices and no forward-looking behavior, where fiscal stimulus has particularly strong effects when policy rates remain constant. Research with RBC models and first-generation New Keynesian models instead studied fiscal policy under rational expectations and optimizing behavior. The resulting literature in the 1980s and 1990s concluded that discretionary fiscal stimulus largely crowds out private spending and that fiscal policy is best focused on automatic stabilizers such as progressive taxes and unemployment benefits at business cycle frequencies (Taylor 2000). The advent of zero interest rates in 2008 launched a new debate on the effects of discretionary fiscal stimulus. Structural analysis that focused on second-generation New Keynesian DSGE models could easily be extended with a more detailed fiscal sector. Cogan et al. (2010) and Cwik and Wieland (2011) found multipliers below unity in
698 Binder, Lieberknecht, Quintana, and Wieland

normal times, whereas a binding ZLB for two years led to a moderate increase just above unity.13 The degree of model uncertainty about the effects of fiscal stimulus triggered some large-scale model comparison exercises by Coenen et al. (2012) and Kilponen et al. (2015), who also used various policy institutions’ models. As many of these models do not feature financial frictions (or at least did not at that time), the authors did not specifically investigate the impact of financial frictions on fiscal multipliers. Coenen et al. (2012) report a government consumption multiplier of 0.8 to 0.9 after one year for the euro area, which increases to an average of 1.5 under a binding ZLB. Kilponen et al. (2015) focus on models employed by the euro area central banks and find a multiplier between 0.7 and 0.9, while a two-year ZLB generates a multiplier of 1.3 in the euro area. Among others, Eggertsson (2011), Eggertsson and Krugman (2012), and Fernández-Villaverde (2010) have provided analysis indicating that the fiscal multiplier might be substantially higher in the presence of financial frictions, in particular when the ZLB is binding. Financial frictions may accelerate the impact of government spending on the real economy. The issue of fiscal multipliers in third-generation New Keynesian DSGE models deserves further investigation. Carrillo and Poilly (2013) argue that the financial accelerator mechanism of BGG implies a capital-accumulation channel, which significantly amplifies the output effects of stimulus. In their calibrated model,14 they find an initial multiplier of 1.28, relative to 1.04 in a model variant without financial frictions. They emphasize, however, that the effect of financial frictions is particularly large in times of a binding ZLB.
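The qualitative pattern reported in these studies (multipliers below unity under an active interest-rate rule, above unity under a binding ZLB) can be reproduced in a deliberately stripped-down setting. The sketch below is not the DSGE simulations cited above: it is the textbook New Keynesian model under perfect foresight, output enters the Phillips curve directly, and all parameters are illustrative assumptions.

```python
# Stylized sketch of the government spending multiplier in the textbook
# New Keynesian model under perfect foresight, comparing a Taylor-rule
# response with a pegged policy rate (a binding ZLB). Illustrative only.

def spending_path(pegged, stim_quarters=6, horizon=40,
                  beta=0.99, kappa=0.05, sigma=1.0, phi_pi=1.5):
    """Impact response of output y to a 1 percent spending increase.

    IS curve:       y_t = y_{t+1} - sigma*(i_t - pi_{t+1}) + (g_t - g_{t+1})
    Phillips curve: pi_t = beta*pi_{t+1} + kappa*y_t
    With `pegged=True` the nominal rate is held fixed during the stimulus
    (a binding ZLB); otherwise i = phi_pi * pi throughout.
    """
    y = [0.0] * (horizon + 1)    # terminal condition: steady state
    pi = [0.0] * (horizon + 1)
    for t in range(horizon - 1, -1, -1):
        g_now = 1.0 if t < stim_quarters else 0.0
        g_next = 1.0 if t + 1 < stim_quarters else 0.0
        dg = g_now - g_next
        if pegged and t < stim_quarters:
            y[t] = y[t + 1] + sigma * pi[t + 1] + dg   # rate unchanged
        else:
            # substitute i = phi_pi * pi into the IS curve and solve
            y[t] = (y[t + 1] + sigma * (1 - phi_pi * beta) * pi[t + 1]
                    + dg) / (1 + sigma * phi_pi * kappa)
        pi[t] = beta * pi[t + 1] + kappa * y[t]
    return y[0]

m_normal = spending_path(pegged=False)   # impact multiplier, Taylor rule
m_zlb = spending_path(pegged=True)       # impact multiplier, binding ZLB
print(round(m_normal, 2), round(m_zlb, 2))
```

Under the Taylor rule, anticipated inflation raises the real rate and crowds out private demand, so the impact multiplier is well below one; with the rate pegged, the same inflation lowers the real rate and the multiplier exceeds one. This is the mechanism behind the higher ZLB multipliers reported by Coenen et al. (2012) and Kilponen et al. (2015), stripped of financial frictions.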
As prevailing downward pressure on inflation and output prevents the central bank from increasing policy rates and thus eliminates the usual crowding out of consumption and investment, the positive feedback loop between aggregate demand and the EFP is unmitigated and leads to large and persistent effects on output. Under a ZLB binding for six quarters, Carrillo and Poilly (2013) find an initial multiplier of 2.9, whereas the long-run multiplier is even larger. Of course, this capital-accumulation channel also influences the impact of other shocks affecting the demand for capital. Uncertainty regarding the effects of fiscal policy in macrofinancial models remains large. There is a multiplicity of modeling approaches regarding the financial sector, with differing results on amplification mechanisms. To the extent that such models have been used to assess fiscal policy, they have usually featured a skeleton fiscal sector, such as exogenous government spending financed by lump-sum taxes. By contrast, the second-generation New Keynesian models used in Coenen et al. (2012) and Kilponen et al. (2015) featured a rich representation of the fiscal sector. Another source of uncertainty, as outlined by Bletzinger and Lalik (2017), is that the size of fiscal multipliers depends crucially on the modeling approach chosen for the ZLB. For third-generation New Keynesian models, Binder, Lieberknecht, and Wieland (2016) show that fiscal multipliers are not only sensitive to the specification of financial frictions and the assumed length of monetary policy accommodation in the form of a binding ZLB. Rather, fiscal multipliers also depend crucially on the monetary policy rule followed after the period at the lower bound ends. Here we illustrate the
findings of Binder, Lieberknecht, and Wieland (2016) with some simulations. We consider the models by De Graeve (2008) and Gertler and Karadi (2011) as two examples featuring different types of financial frictions (BGG and bank moral hazard). In doing so, we apply estimates of the structural parameters obtained with euro area data from Gelain (2010a) and Villa (2016), respectively. The simulated fiscal stimulus is an exogenous marginal increase in government consumption for six periods,15 with a ZLB binding for six periods as well.16 As the policy rule after the period of a binding lower bound, we use three different specifications: the canonical rule by Taylor (1993a), a variant with interest-rate smoothing and a response to output gap growth, and one with a higher coefficient on inflation.17 Figure 23.6 reports the impulse responses for a shock to government purchases relative to a scenario without fiscal stimulus.18 These simulations indicate that quantitative and qualitative results for a fiscal policy stimulus are highly sensitive to the choice of the monetary policy rule. If the monetary policy stance after the period at the ZLB is accommodating, the relatively lower interest rate translates into an acceleration of the capital-accumulation channel and hence higher multipliers. In contrast, if the central bank reacts relatively aggressively to the fiscal stimulus by hiking interest rates, the overall effect is significantly weaker. Quantitatively, there are substantial short- and medium-run differences in the output responses. The impact multipliers range from 0.62 to 1.38, while the first-year effect ranges from –0.37 to 1.57. Even within a given model, the anticipated monetary policy rule has a large effect.
Figure 23.6 Fiscal Policy Shocks in Third-Generation Models
Notes: Partial impulse responses of output, inflation, and the interest rate following an increase in government consumption by 1 percent of its nonstochastic steady-state value, shown for the DG and GK models under the smoothing, Taylor rule, and inflation-targeting specifications. A binding ZLB for six periods is generated by an exogenous contractionary shock. All impulse responses are in percentage-point deviations relative to a scenario with the contractionary shock only (i.e., without fiscal stimulus).

This analysis shows that more comparative research with macrofinancial models is needed to investigate the transmission of fiscal measures, possible interactions and trade-offs, as well as the propagation of other shocks possibly hitting the economy. A related matter is the question of fiscal consolidation. Some studies find that fiscal consolidation based on a reduction of government purchases is likely to be associated with substantial contractionary effects (Eggertsson 2011). However, Binder, Lieberknecht, and Wieland (2016) show that well-designed fiscal consolidation—consisting of a reduction of government spending coupled with a decrease in distortionary labor taxes—can be effective in mitigating contractionary effects and can even provide positive stimulus while nominal interest rates are at the ZLB.
23.3.4 Financial Stability and Macroprudential Policy

The GFC has put financial stability at the forefront of policymakers’ concerns and hastened efforts to put in place new policy instruments that would be effective in containing systemic risk in the financial sector. While banking regulation has always been the first line of defense, there has been a conceptual shift from a microprudential to a macroprudential approach. Hanson, Kashyap, and Stein (2011) provide the following definition: “A micro-prudential approach is one in which regulation is partial equilibrium in its conception and aimed at preventing the costly failure of individual financial institutions. By contrast, a ‘macro-prudential’ approach recognizes the importance of general equilibrium effects, and seeks to safeguard the financial system as a whole” (1). Nowadays, policymakers frequently argue that macroprudential policy should actively seek to manage the supply of credit—relative to underlying trends in economic activity—throughout the business cycle so as to reduce its impact on macroeconomic volatility and curb the potential for financial disruptions (see Turner 2010; Hanson, Kashyap, and Stein 2011). The emphasis on moderating the financial cycle is linked to the view that excessive leverage on the part of financial intermediaries not only makes them individually more vulnerable to external shocks but also raises the risk of a systemic event. In general, intermediaries’ individual contribution to systemic financial risk is thought not to be internalized, thus generating an externality that provides room for policy to play a positive role (see, for instance, Faia and Schnabel 2015). This view has influenced the international reforms that constitute the Basel III framework, which sets out new capital and liquidity standards for financial institutions (Basel Committee on Banking Supervision 2010).
Yet research on how best to operate macroprudential instruments in policy practice is still in its infancy, at least compared to the vast literature on the effects of monetary and fiscal policy and the design of appropriate policy rules in those areas. A consensus has yet to emerge in terms of operational objectives, target variables, instruments, transmission mechanisms, and institutional structures. There is still no generally agreed-upon definition of financial stability, much less a consensus on how to measure it (Borio et al. 2011). No single target variable has been widely accepted as essential for ensuring the operational effectiveness of macroprudential policy (Angelini, Neri, and Panetta 2011). There is still relatively little understanding of the effectiveness and precise functioning of macroprudential instruments (Financial Stability Board 2009). The interactions between macroprudential and monetary policies have yet to be
completely worked out (Nier et al. 2013). Moreover, there is no generally appropriate institutional arrangement for financial stability supervision (Brockmeijer et al. 2011). At present, policymakers face the challenge of integrating financial stability concerns into their policy frameworks in an efficient and robust manner. This is greatly complicated by a still-limited understanding of the precise relationship between the financial sector and the macroeconomy. Providing a comprehensive account of this issue is beyond the scope of this chapter.19 At the start of our analysis, it is useful to consider the following key issues concerning the analysis and design of macroprudential policy as laid out by the German Council of Economic Experts (2014). First, the time dimension—vis-à-vis the cross-sectional dimension—of financial stability management should be developed further so as to moderate the procyclical, amplifying role of the financial sector in macroeconomic dynamics. Second, bank balance sheets play a central role in managing both financial stability risks and the macroeconomic business cycle; hence, there is a need to further develop and deploy instruments that act directly on bank balance sheets. Finally, understanding the interaction between macroprudential and monetary policy is of prime importance in designing an efficient policy framework. In line with these observations, we illustrate some of the complications for macroprudential policy that arise from model uncertainty. To this end, we employ three of the third-generation New Keynesian DSGE models presented above. These models are amenable to this type of analysis because they incorporate sufficient detail on financial intermediaries to define macroprudential policy in a precise manner. Specifically, we consider regulatory requirements that concern banks’ balance sheets.
Furthermore, as DSGE models, they account directly for general equilibrium effects present in the interaction between the financial sector, regulatory authorities, and the macroeconomy, as emphasized by Hanson, Kashyap, and Stein (2011). Finally, the definition of macroprudential policy we use accords well with key instruments that have been employed by regulatory authorities and analyzed in the financial literature on macroprudential policy, namely, guidelines for and requirements on variables of banks’ and financial institutions’ balance sheets (Financial Stability Board 2009). Granting direct control over these and other variables to the macroprudential authority allows it to exert a strong influence on factors such as leverage, maturity, and liquidity mismatches; concentration of risks; and moral hazard problems—among others. Specifically, we incorporate the capital-to-assets (CTA), loan-to-value (LTV), and loan-to-deposits (LTD) ratios as macroprudential instruments into the models of GNSS10, MM10, and GK11. The CTA ratio is defined as the minimum ratio of capital (net worth) to the real value of assets that banks must satisfy. The LTV ratio is the maximum permissible value for the real value of a bank loan divided by the expected real value of the loan’s collateral. Finally, the LTD ratio is the maximum ratio between banks’ assets and deposits that is allowed. In each case, the macroprudential authority is able to influence the amount of leverage and/or loss-absorbing capacity of
the banking sector—and thereby the supply of credit—by varying the instrument in question.20 We consider the effects of a restrictive macroprudential policy shock that aims to reduce the level of credit relative to GDP, that is, the credit gap. In doing so, we assume that the macroprudential authority and the central bank act independently of each other. Figure 23.7 presents the impulse response functions of output, inflation, the credit gap, and the monetary policy rate following a 1 percent tightening of the macroprudential instrument indicated in each column, that is, an increase of 1 percent for the CTA ratio and a decrease of 1 percent for the LTV and LTD ratios.21 Note that in all cases, this exogenous variation in policy has the expected—and desired—effect of reducing the credit gap. The magnitude of the fall, however, is highly model-specific. In the case of the CTA ratio, the decrease in the credit gap is fairly modest in GNSS10 and MM10, yet it is substantial for GK11. In the case of the LTV ratio, each model implies a different dynamic response of the credit gap. Finally, for the LTD ratio, it is the MM10 model that stands out and exhibits a strong and long-lived decline in the credit gap following the macroprudential shock. The differences in the impulse responses displayed in figure 23.7 reflect both the specific parameterization of each model and the transmission mechanisms through which policy takes effect. For the GNSS10 model, which features monopolistic competition in the banking sector, the CTA and LTD ratios have little effect, because banks are able to fund themselves at very low costs. As a result, there is only a modest pass-through to interest-rate spreads. The LTV ratio, on the other hand, has more substantial effects. It directly affects quantities as it links the supply of credit to the value of the investment project.
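The three instruments follow directly from the definitions given above. The sketch below is a hypothetical illustration of those definitions only: the bank balance-sheet figures and the regulatory limits are invented for the example, and the function is not part of any of the models' code.

```python
# Minimal sketch of the three balance-sheet instruments as defined in
# the text (CTA, LTV, LTD). Bank data and limits below are hypothetical.

def macroprudential_check(capital, assets, loan, collateral_value,
                          deposits, cta_min, ltv_max, ltd_max):
    """Compute the three ratios and flag whether each requirement holds.

    CTA: minimum capital (net worth) relative to the value of assets.
    LTV: maximum loan value relative to expected collateral value.
    LTD: maximum ratio of bank assets to deposits (the definition used
         in the text).
    """
    cta = capital / assets
    ltv = loan / collateral_value
    ltd = assets / deposits
    return {
        "cta": cta, "cta_ok": cta >= cta_min,
        "ltv": ltv, "ltv_ok": ltv <= ltv_max,
        "ltd": ltd, "ltd_ok": ltd <= ltd_max,
    }

# Hypothetical bank: capital 8, assets 100, a loan of 70 against
# collateral worth 100, funded by 80 of deposits. Hypothetical limits:
# CTA >= 0.08, LTV <= 0.8, LTD <= 1.2.
report = macroprudential_check(8, 100, 70, 100, 80, 0.08, 0.8, 1.2)
print(report)
```

In this example the bank satisfies the CTA and LTV requirements but violates the LTD limit, so a tightening of any one instrument forces it to shed assets, raise capital, or attract deposits, which is precisely the channel through which the simulated shocks restrain the supply of credit.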
In the case of the MM10 model, all instruments work through their impact on the equilibrium level of market leverage, which results from the double moral hazard problem between households, banks, and entrepreneurs. Therefore, it is not surprising that the most powerful macroprudential instrument is the LTD ratio. It affects both sides of the problem: between households and banks and between banks and entrepreneurs. The CTA and LTV ratios, meanwhile, exhibit less of an impact as they only alter the asymmetry between banks and entrepreneurs. Finally, recall that the model of GK11 features a moral hazard problem between households and banks. Analogous to the simulation of MM10, it turns out that the more potent instrument in the GK11 model is the LTV ratio, which directly links the supply of credit to the expected value of the investment project. The CTA and LTD ratios have a much milder impact on the credit gap since they do little—at the margin—to alter bankers’ incentive to divert household funds to their personal accounts, thus leaving banks’ cost of funds largely unaffected. It is clear from figure 23.7 that if the macroprudential authority wishes to reduce the credit gap, it is faced with considerable uncertainty concerning the quantitative and qualitative responses following an exogenous change in the policy instrument. Furthermore, the policymaker must take into account the effect its policy will have on overall economic activity. For the GNSS10 and MM10 models, employing the most
Figure 23.7 Macroprudential Policy Shock
Notes: Impulse responses of output, inflation, the credit gap, and the interest rate in the GNSS, MM, and GK models following a tightening of the macroprudential instrument (capital-to-assets, loan-to-value, or loan-to-deposits; column title) of 1 percent. All impulse responses are in percentage deviations from the nonstochastic steady state, one period is a quarter, and inflation is the annual inflation rate. The rule by Orphanides and Wieland (2013) is used as a common monetary policy rule. Macroprudential instruments are assumed to follow an AR(1) process with an autoregressive coefficient of 0.9.
effective policy instrument (the LTV and LTD ratios, respectively) to restrain the credit gap would also generate significant reductions in output. In the GK11 model, however, the most effective instrument—the LTV ratio—actually increases output in the short run. This is due to a strong initial increase in the value of capital. In this model, the tighter LTV ratio reduces the incentives of bankers to divert funds and makes it more profitable for households to invest through banks. This exerts a positive effect on private
investment. Following the LTV ratio shock, it is the latter effect that dominates in the short run. Next, we consider the perspective of the monetary authority. In all of these simulations, the central bank is assumed to implement the policy rule of Orphanides and Wieland (2013). The resulting policy paths, however, vary significantly across all models for all macroprudential instruments. Not only the response of output to the macroprudential policy shock but also the response of inflation differs substantially across models. Macroprudential policy creates new sources of risk for the central bank, and the presence of model uncertainty complicates the conduct of monetary policy, even without assuming that the central bank responds directly to financial stability concerns. This serves to highlight the importance of assessing the complementarity (or substitutability) of monetary and macroprudential policy. Finally, note that the implications for monetary policy will differ depending on whether the macroprudential authority follows a rules-based framework or there is strategic interaction and coordination between the two policymakers. This simple exercise serves to illustrate some of the complexities inherent in designing and implementing an efficient macrofinancial stability framework. Given the high degree of model uncertainty, it is key to draw on diverse modeling approaches when analyzing the implications of a greater focus on financial stability in policymaking. The third generation of New Keynesian models can play a useful role in this regard.
23.4 Robust Monetary Policy Rules in Models with and without Financial Frictions

Clearly, model uncertainty represents a serious challenge for central banks when aiming to identify the transmission mechanisms of monetary policy. In this section, we consider one approach for evaluating the implications of model uncertainty for the design of monetary policy. Specifically, a policy rule could be called robust to model uncertainty if it performs well across a range of relevant models (see McCallum 1988). At the heart of this kind of analysis lies the realization that the uncertainties surrounding the true structure of the economy are substantial. Consequently, the models policymakers can use to design their policies are at best a rough approximation of the true data-generating process they face. In response to the challenges faced in understanding causal effects in the economy, coupled with the necessity of forming quantitative predictions about the impact of their actions, policymakers would be well advised to look for guidance from a large set of relevant models, ideally encompassing a range of modeling strategies and paradigms.
Here we apply this approach to the new generation of financial frictions models. In particular, we analyze whether a direct and systematic response by the central bank to financial-sector variables improves performance within a set of models of the euro area economy.22 The policymaker is assumed to face a finite set of relevant models M. One of the models could be treated as the “true model” of the economy, but it is impossible to know ex ante which. The models differ in the assumed economic structure, in the estimation method and/or data sample used in estimation, and along other possible dimensions.23 This scenario captures the essence of the practice at policymaking institutions, which rely on a range of models to inform their policy discussions and actions. Adalid et al. (2005), for instance, compare rules optimized in backward-looking models with those from forward-looking models (which include models from the first and second New Keynesian DSGE generations) developed by staff of the Eurosystem, and they find several cases of explosive dynamics.24 Orphanides and Wieland (2013) consider a set of eleven euro area models (including models from all New Keynesian DSGE generations as well as one traditional Keynesian-style model) and find many instances when rules optimized, for example, in a New Keynesian model generate explosive dynamics in a traditional model with primarily backward-looking dynamics. Also, they find that policy rules optimized to perform well in one model may induce multiple equilibria in other models. Earlier studies such as those by Levin, Wieland, and Williams (2003) and Levin and Williams (2003) find similar results for models of the US economy. In sum, a recurrent finding in the literature is that model-specific optimized policies are not robust to model uncertainty and can lead to substantial welfare losses.
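The explosive-dynamics finding can be made concrete with two deliberately simple, hypothetical one-equation models of inflation dynamics (these are illustrative constructions, not the euro area models cited above, and the parameter values are assumptions).

```python
# Hypothetical illustration: a rule coefficient that works in one model
# can destabilize another. Backward-looking model (illustrative):
#   pi_t = pi_{t-1} + delta * y_{t-1},   y_t = -gamma * (phi - 1) * pi_t
# so under the rule i = phi * pi inflation follows
#   pi_t = (1 - delta * gamma * (phi - 1)) * pi_{t-1}.

def is_stable_backward(phi, delta=0.5, gamma=2.0):
    """Stability of the backward-looking model under i = phi * pi:
    the autoregressive root must lie inside the unit circle."""
    root = 1.0 - delta * gamma * (phi - 1.0)
    return abs(root) < 1.0

def is_determinate_forward(phi):
    """Taylor principle: a purely forward-looking model is determinate
    whenever the rule responds more than one-for-one to inflation."""
    return phi > 1.0

# A moderate rule is fine in both models; an aggressive rule that the
# forward-looking model happily accepts explodes in the backward one.
for phi in (1.5, 4.0):
    print(phi, is_determinate_forward(phi), is_stable_backward(phi))
```

With delta * gamma = 1, the backward-looking model is stable only for 1 < phi < 3, so a rule with phi = 4 that remains determinate in the forward-looking model generates explosive oscillations here. This is the one-dimensional analogue of the cross-model instabilities documented by Adalid et al. (2005) and Orphanides and Wieland (2013).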
In this context, Kuester and Wieland (2010) apply Bayesian model averaging as a means of designing policies for the euro area. They find that rules obtained in this manner are fairly robust to model uncertainty. Essentially, the strategy consists in having the policymaker mix the models considered relevant. This is done by attributing a certain weight to each individual model—which may reflect the policymaker’s subjective beliefs or can be estimated from the data25—and finding the common policy rule that minimizes the weighted average of model-specific loss functions. Formally, the model-averaging rule is obtained by choosing the parameters of the rule {ρ, α, β, β′, h} such that they solve the following optimization problem:

\[
\min_{\{\rho,\,\alpha,\,\beta,\,\beta',\,h\}} \; L = \sum_{m=1}^{M} \omega_m \left[ \mathrm{Var}_m(\pi) + \mathrm{Var}_m(y) + \mathrm{Var}_m(\Delta i) \right]
\]
\[
\text{s.t.}\quad i_t = \rho\, i_{t-1} + \alpha\, E_t(\pi_{t+h}) + \beta\, E_t(y_{t+h}) + \beta'\, E_t(y_{t+h} - y_{t+h-4}),
\]
\[
0 = E_t\!\left[ f^m\!\left(z_t,\, x_t^m,\, x_{t+1}^m,\, x_{t-1}^m,\, \theta^m\right) \right] \quad \forall m \in M, \tag{1}
\]

and there exists a unique and stable equilibrium ∀m ∈ M.26
The variables in the policy rule are expressed in percent deviations from their nonstochastic steady-state values: i denotes the quarterly annualized nominal interest rate, π annual inflation, y the output gap, and h ∈ {0, 2, 4} the central bank’s forecast horizon. The last line denotes the structure of each model m ∈ M, which is a function of each model’s parameters, θ^m, model-specific variables, x^m, and variables common across all models, z.27 Note that the central bank must take the models (including parameter values) as given. They serve as constraints. This specification follows Orphanides and Wieland (2013) in considering policies under commitment to a simple rule while abstracting from the ZLB constraint. The class of rules considered here has been found to be more robust under model uncertainty than more complicated rules that respond to a greater number of variables (see Levin, Wieland, and Williams 1999). Furthermore, such simple rules are transparent and easily explained to the public.28 The values ω_m ≥ 0 are the weights associated with each model. We consider only rules that induce a unique and stable equilibrium, because we think both unstable and multiple equilibria are undesirable from a policy perspective. The first case necessarily violates the central bank’s mandate of price stability, and the second gives rise to sunspot shocks that are unrelated to economic fundamentals, thus generating additional volatility in macroeconomic variables.29 The performance criterion is an ad hoc loss function in the tradition of Tinbergen (1952) and Theil (1958) that relates closely to standard central bank mandates and policy practices. It depends on the variances of annual inflation, the output gap, and the change in the interest rate. This is different from studies such as Schmitt-Grohé and Uribe (2007) that emphasize the direct use of households’ utility in policy evaluation exercises.
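The mechanics of the weighted-loss minimization described above can be sketched in a few lines. In the sketch below, each "model" is a toy quadratic loss surface standing in for the model-specific variances (a purely hypothetical stand-in, since the chapter's models are far richer), the forecast horizon h is ignored, and the common rule is found by grid search over (ρ, α, β).

```python
# Sketch of the model-averaging computation: find the common rule that
# minimizes a weighted average of model-specific losses. The two "models"
# below are hypothetical quadratic loss surfaces, not real DSGE models.

import itertools

def loss_model_a(rho, alpha, beta):
    # toy model favoring an aggressive inflation response
    return (alpha - 2.0) ** 2 + (beta - 0.5) ** 2 + (rho - 0.5) ** 2 + 1.0

def loss_model_b(rho, alpha, beta):
    # toy model favoring inertia and a moderate inflation response
    return (alpha - 1.2) ** 2 + beta ** 2 + (rho - 1.0) ** 2 + 1.0

def model_averaging_rule(weights, grid):
    """Grid search for the common rule minimizing the weighted loss."""
    models = (loss_model_a, loss_model_b)
    best, best_loss = None, float("inf")
    for rho, alpha, beta in itertools.product(grid, repeat=3):
        loss = sum(w * m(rho, alpha, beta) for w, m in zip(weights, models))
        if loss < best_loss:
            best, best_loss = (rho, alpha, beta), loss
    return best, best_loss

grid = [round(0.1 * k, 1) for k in range(0, 21)]     # 0.0, 0.1, ..., 2.0
rule, avg_loss = model_averaging_rule((0.5, 0.5), grid)
print(rule, round(avg_loss, 3))
```

With equal weights, the averaged rule splits the difference between the two models' preferred inflation responses, and its weighted loss is lower than that of either model's own optimum imposed as the common rule, which is the basic appeal of the Kuester and Wieland (2010) approach.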
However, in the context of model uncertainty, there exist good reasons for keeping the performance criteria constant across all models. First, not all models admit a utility-based loss function, as they may not be derived from microeconomic optimization problems. Second, as Wieland and Wolters (2013) point out, “a utility-based welfare function can be extremely model specific. Paez-Farrell (2014) shows that different theories of inflation persistence can result in an observationally equivalent Phillips curve, but imply different loss functions and lead to different policy prescriptions. Therefore, optimal simple rules based on structural loss functions are not robust to model uncertainty” (310).30 Finally, Blanchard, Erceg, and Lindé (2016) provide additional arguments for giving more weight to ad hoc loss functions when evaluating policies. Most relevant to our analysis is their observation that the assumptions of models similar to SW (that is, models in which households perfectly share consumption risk and in which all variations in labor take place at the intensive margin) are likely to underestimate the costs of large output gaps, so that the structural utility functions in these models may not give sufficient weight to such costs. As a baseline, we use equal weights on the different models.31 Concerning the components of the loss function, the following comments are in order. Following Woodford (2011), the objective function of the representative household in a small-scale
New Keynesian DSGE model can be approximated with a quadratic function of inflation and the output gap, where the relative weights are determined by the structure of the model. We set the weight on the output gap equal to the weight on inflation following the analysis of Debortoli et al. (2016), who show that for a standard second-generation model,32 this loss function approximates the representative household’s objective function, conditional on the central bank behaving optimally. The change in the policy instrument is included following Kuester and Wieland (2010) so as to rule out policies that would imply frequent and large changes that differ greatly from usual policymaking practice.33 Previous work concerning robust policies with euro area models has found that models with predominantly backward-looking elements (in which inflation tends to be very persistent) tend to favor rules that are forecast-based with high response coefficients on inflation and moderate inertia in the policy rate. Forward-looking models, in contrast, tend to favor outcome-based rules with more inertia and more moderate reactions to inflation (see Adalid et al. 2005; Orphanides and Wieland 2013). Levin, Wieland, and Williams (2003) derive a robust benchmark rule for the US economy that features a unity coefficient on the lagged interest rate and moderate responses to the forecast for one-year-ahead inflation and the current output gap. Here we focus on the implications of new macrofinancial models and compare them to models where financial frictions are not of great relevance for the macroeconomy. Thus, we ask, what does the relevance of third-generation New Keynesian DSGE models—vis-à-vis earlier generations—imply for robust policies in the euro area? A thorough analysis of this issue is provided by Afanasyeva et al. (2016), who employ a large set of macroeconomic models estimated for the euro area.
This set of models is presented in table 23.3, along with the corresponding reference papers and the policymaking institutions where they were developed or those to which the authors had a close affiliation at the time of publication. The set of models considered in Afanasyeva et al. (2016) encompasses several generations of New Keynesian models and could be thought to capture the degree of model uncertainty present in the euro area. All models are estimated. They cover a broad range of frictions proposed in the literature.34 Table 23.4 reports the model-averaging rules computed with this set of models.35 The macroeconomic models with financial frictions prescribe a notably weaker response to both inflation and the output gap than do earlier-generation models. The response to output gap growth, on the other hand, is stronger relative to the earlier-generation models. The smoothing parameter is close to unity in all cases. Finally, while the model-averaging policy for the models with financial frictions is outcome/nowcast-based, it is forecast-based for the earlier-generation models. While we have just considered one set of rules, these characteristics survive changes in the loss function weights, changes in the model weights, changes in the model set, and a reduction in the number of policy parameters. That said, the most fundamental difference between the two sets of models concerns their prescribed responses to inflation and the output gap. This difference carries over to the model-averaging policy obtained from the full model
708 Binder, Lieberknecht, Quintana, and Wieland

Table 23.3 Set of Euro Area Models from Afanasyeva et al. 2016

      Label          Reference                                                   Institution

Early-generations models
   1  EA_AWM05       Dieppe, Küster, and McAdam 2005                             ECB
   2  EA_CW05fm      Coenen and Wieland 2005, Fuhrer-Moore staggered contracts   ECB
   3  EA_CW05ta      Coenen and Wieland 2005, Taylor staggered contracts         ECB
   4  G3_CW03        Coenen and Wieland 2003                                     ECB
   5  EA_SW03        Smets and Wouters 2003                                      ECB
   6  EA_QUEST3      Ratto, Roeger, and in’t Veld 2009                           EC

Third-generation models
   7  EA_GE10        Gelain 2010a*                                               ECB
   8  EA_GNSS10      Gerali et al. 2010                                          Banca d’Italia
   9  EA_QR14        Quint and Rabanal 2014                                      IMF
  10  EA_CFOP14poc   Carlstrom et al. 2014a, privately optimal contract          Federal Reserve System**
  11  EA_CFOP14bgg   Carlstrom et al. 2014a, Bernanke, Gertler, and Gilchrist
                     1999 contract                                               Federal Reserve System**
  12  EA_CFOP14cd    Carlstrom et al. 2014a, Christensen and Dib 2008 contract   Federal Reserve System**

Note: * For the ECB working paper version of Gelain 2010a, see Gelain 2010b. ** For the estimation of these models using euro area data, see Afanasyeva et al. 2016.
set. Thus, the new models with financial frictions matter even if models from all generations and prior to the GFC are taken into account. Figure 23.8 provides an indication of the reasons underlying this finding. It reports the average impulse response functions of inflation and the output gap across models given a one-percentage-point monetary policy shock. To this end, common policy rules are used in each model. We consider two different rules that have been found to provide a good description of the conduct of monetary policy in the euro area before the Great Recession, namely, the rules of Gerdesmeier and Roffia (2004) and Orphanides and Wieland (2013). The figure shows that on average, monetary policy causes notably larger effects in the models with financial frictions than in earlier-generation models, consistent with the finding from section 23.3. For a one-percentage-point unanticipated increase in the monetary policy rate, the reduction in the output gap and inflation is roughly twice as
Table 23.4 Model-Averaging Policy Rules

Rule                Interest lag (ρ)   Inflation (α)   Output gap (β)   Output gap growth (β̃)   h
All models          0.983              0.255           0.138            0.524                    0
Early generations   0.984              1.158           0.986            0.041                    4
Third generation    1.030              0.062           0.032            0.625                    0

Note: h denotes the forecast horizon of the rule.
[Figure 23.8 appears here. It contains four panels arranged in two columns, titled “Gerdesmeier and Roffia (2004) rule” and “Orphanides and Wieland (2013) rule.” Each column shows the responses of inflation and the output gap over a horizon of 0 to 20 quarters, plotting the early-generations mean and the third-generations mean.]
Figure 23.8 Contractionary Monetary Policy Shock of One Percentage Point Notes: Average impulse response following a tightening of the monetary policy rate of one percentage point. All impulse responses are in percentage deviations from the nonstochastic steady state, one period is a quarter, and inflation is the annual inflation rate. The column titles indicate the monetary policy rules assumed in each case.
large on impact in the financial frictions group than in the earlier-generations models. The greater effects of monetary policy can be traced back to the amplification resulting from the financial accelerator mechanism. It is present in five of the six third-generation models in the set considered. Due to the greater effectiveness of monetary policy (on average) in the models with financial frictions, the systematic response of policy to inflation and the output gap need not be as pronounced as in the earlier-generation models, lest it risk destabilizing the macroeconomy. Similarly, a monetary policy causing larger effects—all else being equal—leads to less persistent inflation dynamics (also visible in the figure), which in turn allow for more moderate and less preemptive actions by the central bank. By contrast, the backward-looking dynamics in some of the earlier-generation models tend to favor forward-looking policies with a strong reaction to inflation (on this point, see Adalid et al. 2005; Orphanides and Wieland 2013). In sum, including the models with financial frictions in the policymaker’s model set has important implications for the model-averaging policies that would tend to deliver more robust performance under model uncertainty about the euro area economy. It has been argued for some time that monetary policy should work to reduce the risk of economy-wide fluctuations that emanate from the financial sector. The findings discussed so far suggest that this would require a stronger policy response to output growth. However, it would seem natural to respond directly to financial variables such as credit growth or asset price growth. A policy that incorporates such a direct response to financial variables is often referred to as “leaning against the wind” (LAW).
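The cross-model averaging behind figure 23.8 can be sketched in a few lines. The impulse responses below are stylized geometric decays with hypothetical impact and persistence values standing in for the estimated models' responses; only the averaging step mirrors the procedure in the text:

```python
import numpy as np

horizons = np.arange(21)  # quarters 0..20, as in figure 23.8

def irf(impact, persistence, h=horizons):
    """Stylized geometric-decay impulse response: impact * persistence**h."""
    return impact * persistence ** h

# Hypothetical output-gap responses to a one-percentage-point tightening
early_generation = [irf(-0.10, 0.85), irf(-0.12, 0.80), irf(-0.08, 0.90)]
third_generation = [irf(-0.22, 0.75), irf(-0.25, 0.70), irf(-0.20, 0.78)]

# Group means, as plotted in figure 23.8
early_mean = np.mean(early_generation, axis=0)
third_mean = np.mean(third_generation, axis=0)

# On impact, the financial-frictions group responds roughly twice as much
amplification = third_mean[0] / early_mean[0]
```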
The third-generation New Keynesian models provide a natural environment for testing the implications of such policies, because they incorporate financial-market imperfections and financial variables such as credit and asset prices. Afanasyeva et al. (2016) define a set of financial variables common across all the models considered. The central bank’s optimization problem under model averaging is then written as

\min_{\{\rho,\,\alpha,\,\beta,\,\tilde{\beta},\,h,\,j\}} \; L_m = \operatorname{Var}_m(\pi) + \operatorname{Var}_m(y) + \operatorname{Var}_m(\Delta i)

\text{s.t.} \quad i_t = \rho\, i_{t-1} + \alpha\, E_t(\pi_{t+h}) + \beta\, E_t(y_{t+h}) + \tilde{\beta}\, E_t\big(g^{j}_{t+h}\big)

0 = E_t\big[\, f^m\big(z_t,\, x^m_t,\, x^m_{t+1},\, x^m_{t-1},\, \theta^m\big)\big] \quad \forall\, m \in M,

where the variables are as in equation 1, g^j is the jth element of

g = \big(\text{real credit},\ \text{real credit growth},\ \text{real credit/GDP},\ \text{external finance premium},\ \text{leverage},\ \text{asset prices}\big)',

and there exists a unique and stable equilibrium.36
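Numerically, a problem of this kind is typically attacked by searching over the rule coefficients and evaluating each candidate rule in every model. The following is a minimal sketch of such a model-averaging grid search, with hypothetical quadratic loss surfaces standing in for the solved models (in practice, each evaluation requires solving the model and computing the unconditional variances):

```python
import itertools
import numpy as np

# Hypothetical model-specific loss surfaces: each maps rule coefficients
# (rho, alpha, beta) to a loss value. These are invented for illustration.
def loss_m1(rho, alpha, beta):
    # A model preferring a strong inflation and output-gap response
    return (rho - 0.9) ** 2 + (alpha - 1.2) ** 2 + (beta - 0.9) ** 2

def loss_m2(rho, alpha, beta):
    # A model preferring a near-unit smoothing and weak responses
    return (rho - 1.0) ** 2 + (alpha - 0.1) ** 2 + (beta - 0.1) ** 2

models = [loss_m1, loss_m2]
weights = [0.5, 0.5]  # equal model weights

# Grid search over candidate coefficients, minimizing the average loss
grid = np.linspace(0.0, 1.5, 31)  # step 0.05
best = min(
    itertools.product(grid, grid, grid),
    key=lambda c: sum(w * m(*c) for w, m in zip(weights, models)),
)
```

With quadratic surfaces, the averaged optimum lies between the model-specific optima, which is the sense in which a model-averaging rule compromises across the model set.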
Note that relative to equation 1, the monetary policy rule has been restricted by eliminating output gap growth from the rule; the coefficient β̃ now multiplies the financial variable instead. This restriction yields significant computational advantages, because otherwise the number of policy coefficients, and with it the computation time, would increase. Furthermore, rules that respond only to inflation and the output gap perform substantially worse than rules that also include output growth. Thus, restricting the output growth coefficient to zero enhances the possibility of beneficial stabilization effects from LAW, because a response to credit growth or asset price growth may make up for the missing response to output growth. This analysis has substantial value added, because the LAW literature mostly focuses on calibrated models, shock-specific analyses, or analyses that concentrate on just one financial variable. In our analysis, all models have been estimated on euro area data, the class of rules considered assumes full commitment (that is, it must hold under all shocks), and the policymaker can choose which financial variable to react to. The results are reported in table 23.5. Columns 4–8 show the optimal policy rule coefficients and forecast horizons under LAW and no LAW (i.e., β̃ = 0). Additionally, column 3 reports the percent reduction in the loss function and the implied inflation premium (IIP) achieved by LAW relative to no LAW, while column 2 reports the percent reduction in the loss function relative to the rules that include output growth.37 Note that the value of the loss function under LAW cannot be higher than under no LAW. This is because the no-LAW policy rule is nested in the LAW rule. In other words, under LAW, the central bank can always choose to set the LAW coefficient, β̃, equal to zero and thus cannot be worse off than under no LAW.
The IIP is defined as the reduction in the standard deviation of inflation under the no-LAW policy that is necessary to match the loss achieved under LAW.38 It does not correspond to the actual improvement in price stability. Rather, it is meant to provide a more intuitive measure of the overall reduction in loss. The main message that emerges from table 23.5 is that including one of the financial variables in the rule with lagged interest rate, inflation, and the output gap reduces the loss function only a little. In other words, the LAW policy rule does not improve stability very much relative to the no-LAW policy. This is not because there is no room for improvement. Indeed, adding output growth to the no-LAW policy would substantially improve performance. This can be seen from column 2, which indicates substantial percent increases in loss when comparing the LAW policy rule to the (nonnested) rule with output growth in addition to the output gap and inflation. If the rule with output growth were extended to include financial variables, the potential for performance improvement would not be larger than in the LAW/no-LAW comparison reported here. The optimized coefficients on the financial variables in the LAW policy rules are typically very small. Even so, this message need not be taken as a mortal blow to the concern about credit and asset price booms and the recommendations for a leaning-against-asset-price or leaning-against-credit-growth monetary policy. Rather, it could also be taken as an indication that the modeling assumptions of the models with financial frictions need to
Table 23.5 Model-Specific LAW Policies

Model                      Gain wrt output      Gain (%)     Interest   Inflation   Output    LAW coefficient (β̃)     h
                           gap growth rule (%)  [IIP]        lag (ρ)    (α)         gap (β)

EA_GE10 with LAW           −10.1                0.1 [0.16]   1.0829     0.0029      0.0092    −0.0003 (Real credit)    2
EA_GE10 without LAW        −10.2                             1.0836     0.0034      0.0092                             2
EA_GNSS10 with LAW         −2.5                 0.1 [0.01]   1.1211     1.1933      0.6123    −0.0007 (Leverage)       0
EA_GNSS10 without LAW      −2.6                              1.1275     1.1977      0.6135                             0
EA_QR14 with LAW           −0.8                 0.8 [0.33]   1.2216     0.8205      0.3678    0.0442 (Real credit)     0
EA_QR14 without LAW        −1.6                              1.3251     0.9218      0.4035                             0
EA_CFOP14poc with LAW      −21.7                0.7 [0.25]   0.8609     0.2358      0.1845    0.1630 (Credit growth)   4
EA_CFOP14poc without LAW   −22.5                             0.8042     0.2415      0.0798                             4
EA_CFOP14bgg with LAW      −20.5                0.6 [0.24]   0.8644     0.2307      0.1837    0.1567 (Credit growth)   4
EA_CFOP14bgg without LAW   −21.2                             0.8087     0.2384      0.0819                             4
EA_CFOP14cd with LAW       −19.0                0.5 [0.21]   0.8659     0.2269      0.1885    0.1477 (Credit growth)   4
EA_CFOP14cd without LAW    −19.5                             0.8130     0.2416      0.0965                             4
be revisited. First, the greater effectiveness of monetary policy with financial frictions seems somewhat at odds with the experience of the GFC. It was not the case that central banks were able to jump-start the economy with rapid interest-rate cuts. The recession has had lasting effects. Second, the assumption of fully credible commitment to the policy rule together with rational and homogeneous expectations formation by market participants may attribute too many self-stabilizing properties to the macroeconomy. In other words, the expectations channel in these models may overstate the extent of control central banks can exert over aggregate demand. In this regard, one could adapt the models for analysis under learning and imperfect credibility or heterogeneous expectations formation and revisit the question of LAW versus no LAW. Third, it may be important to consider nonlinearities in the financial cycle. For example, Filardo and Rungcharoenkitkul (2016) suggest that LAW policies may deliver greater performance improvements in a model in which the financial cycle is modeled as a nonlinear Markov regime-switching process. Having previously argued against the use of model-specific policy rules, we take up the issue of model averaging next. The model averaging rule39 under LAW is given as
i_t = 1.0708\, i_{t-1} + 0.0141\, \pi_t + 0.0455\, y_t - 0.0009\, \text{real credit}_t \qquad (2)
In this rule, the LAW coefficient is essentially zero. The smoothing parameter is close to unity, the coefficients on inflation and the output gap are quite small, and the rule responds to current outcomes. This is all similar to the third-generation model averaging rule reported in table 23.4, except that output growth is missing from the rule. In the absence of a significant response to output growth, this model averaging policy would not be very robust relative to performance in earlier-generation models.
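Read as an operational reaction function, equation 2 amounts to a one-line rule. The transcription below uses the printed coefficients, with all variables measured in percent or percentage points:

```python
def law_model_averaging_rule(i_lag, inflation, output_gap, real_credit):
    """Model-averaging rule under LAW, equation 2:

    i_t = 1.0708*i_{t-1} + 0.0141*pi_t + 0.0455*y_t - 0.0009*credit_t
    """
    return (1.0708 * i_lag + 0.0141 * inflation
            + 0.0455 * output_gap - 0.0009 * real_credit)
```

Note that the smoothing coefficient exceeds unity, so the rule is super-inertial: past policy is more than fully carried forward, and the small contemporaneous responses cumulate over time rather than acting through a static target rule.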
23.5 Forecasting Performance of Models with and without Financial Frictions

Finally, we turn our attention to another central use of macroeconomic modeling at central banks: forecasting.40 As policymakers aim to achieve their mandate, which may include stable prices, stable growth and employment, and even financial stability, they need to obtain an assessment of the likely course of macroeconomic developments. Any action they take is predicated on a view of the way events will unfold in response to their actions—or inaction. In this regard, macroeconomic models are an essential ingredient of the policymaker’s toolbox. They can provide quantitative measures of the likely effects of changes in policy. Similarly, policymakers must constantly confront the challenge of responding to a changing economic environment. Thus, they need models to construct a probability distribution for the future
path of relevant economic variables in order to react in a timely fashion to the perceived balance of risks.41 While both structural and nonstructural macroeconomic models can be used for forecasting, policymakers by no means use them in a “mechanical” way. Rather, model-based forecasts represent key inputs in a collaborative process taking place within established organizational frameworks. As Wieland and Wolters (2013) state:42 [A central bank’s] staff forecast is a judgmental projection that is derived from quantitative data, qualitative information, different economic models, and various forecasting techniques. The forecast does not result from a mechanical run of any large-scale macroeconometric model; nor is it derived directly by add-factoring any such model. Instead, it relies heavily on the expertise of sector specialists and senior advisers. (254)
The forecasts resulting from such analysis are best understood as expert, rather than model-based, forecasts. Central banks typically have flagship structural models such as the Federal Reserve’s FRB/US model or the Bank of Canada’s ToTEM II model. By contrast, nonstructural models are employed as “satellite models,” which serve to cross-check and adjust the main model’s forecast, particularly in areas where it is most likely to be deficient. Sims (2002), for instance, surveys the forecasting practices at four major central banks43 and concludes, “Some [satellite] models produce regular forecasts that are seen by those involved in the policy process, but none except the primary model have regular well-defined roles in the process” (3). As an illustration of the relationship between central banks’ main (structural) models and satellite (typically nonstructural) models, table 23.6 presents two well-documented examples of forecasting analyses carried out at major central banks.44 Central banks can also use structural models to develop a clear economic narrative associated with their medium-run outlook. These models bring to bear an explicit account of market participants’ forward-looking expectations and how they influence decision-making in response to changes in policy. They allow for economic interpretation of macroeconomic fluctuations, as well as welfare-based evaluation and optimal design of policies.45 Here we focus on structural models’ forecasting performance and review some of the evidence supporting the contribution of financial frictions to the empirical fit of macroeconomic models.
In particular, we comment on their contribution to macroeconomic models’ forecasting performance in light of the recent financial crisis.46 In contrast to the previous section, here we study models of the US economy, in part because many articles on forecast evaluation have used US data.47 We aim to make five main points: (i) model-based forecasts, if conditioned on appropriate data, can compete with professional forecasts; (ii) model uncertainty implies that there is no single, preferred model in terms of forecasting performance; (iii) the empirical fit of second-generation New Keynesian DSGE models used at central banks
Table 23.6 Satellite Models at Central Banks

ECB. Framework/economic report: Broad Macroeconomic Projection Exercise and Macroeconomic Projection Exercise. Central models: NAWM and NMCM. Satellite models: DSGE, VAR, SVAR, BVAR, GVAR, DFM, VECM, ARIMA, bottom-up (ADL-based) model, ALI model, and bridge and mixed-frequency models. Reference: ECB (2016).

BoE. Framework/economic report: Quarterly Inflation Report. Central model: COMPASS. Satellite models: modified COMPASS models, SVAR, BVAR, VECM, ARMA, PTM, BSM, DSGE, and a suite of “statistical” models. Reference: Burgess et al. (2013).

Note: SVAR is structural VAR, BVAR is Bayesian VAR, GVAR is global VAR, DFM is dynamic factor model, VECM is vector error correction model, AR(I)MA is autoregressive (integrated) moving average model, ADL is autoregressive distributed lag model, ALI is area-wide leading indicator, PTM is posttransformation model, and BSM is balance sheet model.
can be improved by extending them with explicit financial frictions; (iv) evidence on the forecasting performance of models with financial frictions relative to second-generation models supports the view that these frictions can play an important role in generating accurate forecasts; and (v) looking forward, central banks are likely to improve forecasting performance by addressing model uncertainty through some form of model averaging. The first three issues relate to point forecasts. In practice, however, central banks may be interested in using distribution forecasts from several models. This dimension of forecasting practices is considered in the final two points, which are developed below. With regard to the first point, Wieland and Wolters (2011) provide a systematic comparison of forecasting performance for a set of six macroeconomic models relative to “expert” or “professional” forecasts (as proxied by the Survey of Professional Forecasters and the Federal Reserve’s Greenbook) over the previous five US recessions. The set of models includes a Bayesian VAR (BVAR) with three observables, a small-scale New Keynesian model with predominantly backward-looking elements, and four New Keynesian models encompassing the first and second generations, varying in size from three to eleven observables. Wieland and Wolters work with historical data vintages to ensure comparability between model and professional forecasts. They show that although model-based forecasts are produced with a markedly smaller information set than that of the professional forecasters, their root mean squared errors (RMSEs) are very similar when model forecasts are initialized at expert nowcasts. Concerning the issue of model uncertainty (point ii), two results are worth stressing. First, Wieland and Wolters document a substantial degree of heterogeneity between the model-based forecasts, which varies over time and is roughly equal to that present
716 Binder, Lieberknecht, Quintana, and Wieland in expert forecasts. They conclude that model uncertainty, as measured by their set of models, can account for the diversity of forecasts within expert forecasters: [W]hile we can only speculate about the sources of disagreement among expert forecasters, the extent of disagreement among our six model forecasts can be traced to differences in modelling assumptions, different data coverage and different estimation methods. These three sources of disagreement are found to be sufficient to generate an extent of heterogeneity that is similar to the heterogeneity observed among expert forecasts. . . . As a consequence of these findings, we would argue that it is not necessary to take recourse to irrational behavior or perverse incentives in order to explain the dynamics of expert forecast diversity. Rather, this diversity may largely be due to model uncertainty and belief updating in a world where the length of useful data series is limited by structural breaks.” (2011, 275).
Second, the authors find that no single model consistently outperforms the group but that the models’ mean forecast actually tends to outperform all individual models. Viewed through the lens of Bayesian model averaging, this result should not come as a surprise. When every model is misspecified, no single model can be expected to systematically dominate the others, provided every model in the set is sufficiently detailed. Rather, different models will prevail in forecasting performance depending on the relative importance of the frictions affecting the economy, which may vary over time, and each model’s ability to capture them and correctly identify the relevant shocks. This suggests that in forecasting, as in policy robustness, the policymaker may also find it beneficial to make use of a range of models, a point we come back to below.
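The gain from averaging can be illustrated with synthetic forecasts whose errors are imperfectly correlated across models. The forecast set and error variances below are invented for illustration, not the Wieland and Wolters data:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 400
actual = rng.normal(0.0, 1.0, T)  # realized values of the target variable

# Six hypothetical model forecasts: each equals the truth plus an
# idiosyncratic error, stand-ins for a model set of the kind in the text
forecasts = [actual + rng.normal(0.0, 0.8, T) for _ in range(6)]
mean_forecast = np.mean(forecasts, axis=0)

def rmse(f, a):
    """Root mean squared forecast error."""
    return np.sqrt(np.mean((f - a) ** 2))

individual_rmses = [rmse(f, actual) for f in forecasts]
combined_rmse = rmse(mean_forecast, actual)
# With independent errors, averaging K forecasts cuts the error variance
# by a factor of K, so the mean forecast beats every individual model.
```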
Villa, on the other hand, also works with the core SW model but finds on the basis of Bayesian factor analysis that the best fit is achieved by incorporating a financial sector as in Gertler and Karadi (2011)—even improving on the SW plus BGG specification. There are other studies that explore alternative specifications. In sum, there is some empirical evidence that financial frictions improve the fit of medium-scale structural models on US and euro area data in recent history. Yet in reviewing these results, it is worth noting that the above-cited studies were carried out employing exclusively nonfinancial data in model estimation. Thus, the improved fit of models with financial frictions cannot be attributed to additional shocks and/or observables.
Regarding the issue of financial frictions models’ forecasting performance (point iv), it is worth noting that the findings of De Graeve (2008) and Christensen and Dib (2008) were available to the central banking community well before the financial crisis of 2007–2009.48 Thus, their implications could be taken into account. Yet Lindé, Smets, and Wouters (2016) argue that such models were not used in the routine conduct of monetary policy: “Pre-crisis DSGE models typically neglected the role of financial frictions. This additional transmission mechanism was considered non-vital for forecasting output and inflation during the great moderation period, and by Occam’s razor arguments this mechanism was typically left out” (52). However, as is well known by now, the baseline precrisis New Keynesian DSGE models failed to predict the large contraction in GDP and the fall in inflation that took place at the end of 2008.49 In figure 23.9, we have reproduced figures 2.13 and 2.14 from Del Negro and Schorfheide (2013), which show that the fourth-quarter 2008 realizations of output and inflation (dashed lines) fall outside the predictive density (gray bands) generated by the SW model. This is often considered a significant failure of the workhorse DSGE model in its usefulness for the conduct of monetary policy. Note that the average Blue Chip forecast (diamonds) comes much closer to the actual data. However, this may well be due to the larger information set available to the Blue Chip forecasters. So how well could the financial frictions version of a medium-size New Keynesian DSGE model such as the CEE and SW models have fared in this context? This question is addressed by Del Negro and Schorfheide (2013). They conduct the following counterfactual exercise.
They estimate a version of the SW model with financial frictions as in BGG (hereafter SW + BGG) with historical data vintages and conditioning the fourth-quarter 2008 forecast on current information about the Baa ten-year Treasury spread (which is taken to be the empirical counterpart of the EFP) and the federal funds rate.

Figure 23.9 Fourth-Quarter 2008 SW Predictive Density Notes: This figure reproduces figures 2.13 and 2.14 from Del Negro and Schorfheide 2013, which show the predictive density for fourth-quarter 2008 US GDP and inflation (gray bands) generated by the SW model, the realizations of output and inflation (dashed lines), and the corresponding average Blue Chip forecast (gray diamonds). All variables are in percent.

Recall that Blue Chip forecasts for fourth-quarter GDP are produced in the month of January,
at the end of which the advance release of fourth-quarter GDP is published. By then, the full fourth-quarter trajectory of the spread has already been observed. This timing allows the model forecasts of all other variables to be conditioned on the realized level of the spread, as well as the policy interest rate. Importantly, the models and methods used in this exercise were available to central banks before the Great Recession. The result is striking and is presented in figure 23.10, which reproduces the same figures from Del Negro and Schorfheide (2013) as before.

Figure 23.10 Fourth-Quarter 2008 SW + BGG with Spread and FFR Predictive Density Notes: This figure reproduces figures 2.13 and 2.14 from Del Negro and Schorfheide 2013, which show the predictive density for fourth-quarter 2008 US GDP and inflation (gray bands) generated by the SW + BGG model (while conditioning on the Baa ten-year Treasury spread and the federal funds rate), the realizations of output and inflation (dashed lines), and the corresponding average Blue Chip forecast (gray diamonds). All variables are in percent.

Figure 23.11 Baa Ten-Year Treasury Spread Notes: This figure shows the Baa ten-year Treasury spread from 1987 to 2010 in percent per annum. NBER recession dates are indicated in gray. Data are in quarterly terms.

It is clear that model forecasts for both
output and inflation come much closer to the actual realization, with output falling inside the predictive density.50 Del Negro and Schorfheide write: [The SW + BGG] model produces about the same forecast as Blue Chip for 2008:Q4. Unlike Blue Chip forecasters, the agents in the laboratory DSGE economy have not seen the Fed Chairman and the Treasury Secretary on television painting a dramatically bleak picture of the U.S. economy. Thus, we regard it as a significant achievement that the DSGE model forecasts and the Blue Chip forecasts are both around –1.3%. More importantly, we find this to be convincing evidence on the importance of using appropriate information in forecasting with structural models. (2013, 123)
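The conditioning idea can be illustrated with a deliberately simple toy example: a two-variable Gaussian VAR(1) in the spread and output growth, with hypothetical coefficients, in which the forecast for output growth is conditioned on an observed spread path by taking the conditional mean of each one-step predictive distribution. This is only a stylized stand-in for Del Negro and Schorfheide's DSGE-based procedure:

```python
import numpy as np

# Stylized VAR(1): x_t = [spread_t, output_growth_t],
# x_t = A x_{t-1} + e_t, e_t ~ N(0, Sigma). Coefficients are hypothetical;
# a high spread predicts lower output growth.
A = np.array([[0.8, 0.0],
              [-0.5, 0.5]])
Sigma = np.array([[0.04, -0.01],
                  [-0.01, 0.25]])

def conditional_forecast(x0, spread_path):
    """Forecast output growth, conditioning on an observed spread path.

    At each step, form the one-step Gaussian predictive distribution,
    condition on the realized spread (hard conditioning), and iterate
    on the resulting conditional mean.
    """
    x = np.asarray(x0, dtype=float)
    out = []
    for s in spread_path:
        mean = A @ x
        # Conditional mean of output growth given the spread surprise
        adj = Sigma[1, 0] / Sigma[0, 0] * (s - mean[0])
        x = np.array([s, mean[1] + adj])
        out.append(x[1])
    return np.array(out)

# A spread spike (as in 2008:Q4) pulls the output-growth forecast down
baseline = conditional_forecast([0.2, 0.5], [0.2, 0.2, 0.2])
stressed = conditional_forecast([0.2, 0.5], [1.5, 1.5, 1.5])
```

Two channels are at work, mirroring the discussion in the text: the spread enters the conditional mean directly through the VAR dynamics, and the negative error covariance turns a spread surprise into bad news about contemporaneous output growth.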
This result demonstrates the superior forecasting performance of the SW + BGG model, relative to the SW model, during the GFC. It supports financial frictions as a feature that contributes positively to the forecasting performance of structural models. Note, however, that conditioning on the Baa ten-year Treasury spread is crucial in generating this forecast. Figure 23.11 plots the spread from 1988 to 2010 with the NBER-dated recessions marked in gray. The graph shows that the spread peaks at an extraordinarily high level in the fourth quarter of 2008. Thus, conditioning on this variable serves to incorporate into the information set of the model the level of financial-market distress that hit the economy following the Lehman Brothers bankruptcy in September 2008. This allows the model to correctly interpret the configuration of data as foreshadowing a deep contraction in economic activity due to binding financial constraints. The standard SW model, in contrast, is unable to accurately forecast the economic contraction of the fourth quarter of 2008 because it omits variables and shocks related to financial frictions. This misspecification of the SW model is examined in detail by Lindé, Smets, and Wouters (2016). In estimating their model, they account explicitly for the ZLB on interest rates and derive the shocks implied during the Great Recession. They conclude that the model necessitates a “cocktail of extremely unlikely shocks” (2016, 2187) to explain the recession. Further, they show that these shocks—which mainly relate to risk premium and investment-specific technology shocks—are markedly non-Gaussian and highly correlated with the Baa-Aaa and term spreads. This finding lends additional support to the view that financial shocks played an important role during the period. Lindé, Smets, and Wouters (2016) also estimate a version of the SW model with financial frictions, as in Christiano, Motto, and Rostagno (2008). 
Essentially, this model includes the Christensen and Dib (2008) contract plus a working capital channel as in the CEE model, as well as a Markov-switching process which affects the elasticity of the EFP to entrepreneurial net worth. Lindé, Smets, and Wouters (2016) find that the shocks driving the contraction in output growth, as implied by the extended model, were “huge negative shocks in net worth and/or risk premiums” (55).51 In line with the findings of
720 Binder, Lieberknecht, Quintana, and Wieland Del Negro and Schorfheide (2013), this version of the model is also able to generate, using the appropriate data vintage, a predictive density for fourth-quarter 2008 GDP that encompasses the realized observation.52 Thus, it is not surprising that the SW model’s forecasting performance significantly deteriorates during the Great Recession. One might ask why flagship models at central banks did not already give more weight to financial frictions when BGG conceptualized this framework within a New Keynesian model. The reason may simply be that although the model with financial frictions outperforms the SW model during the GFC, this is not the case over longer time spans, as we would expect given the findings of Wieland and Wolters (2011) mentioned above. Figure 23.12 compares the forecasting performance of four models for two different time periods. Specifically, we recursively estimate three different second- generation New Keynesian DSGE models and a BVAR model over rolling windows of historical data vintages and compute the RMSE from one to eight period-ahead forecasts. The time periods under consideration are 1996–2006 and 2007–2009. The models are the Rotemberg and Woodford (1997; RW97) model, which features
[Figure 23.12 panels: output and inflation RMSEs for 1996–2006 (left) and 2007–2009 (right), forecast horizons one through eight, with lines for the BVAR, RW97, BGG99, and BGG99 + Spread models.]
Figure 23.12 RMSE Comparison Notes: This figure shows the RMSEs for output and inflation from one- to eight-period-ahead forecasts for four models: a Bayesian VAR, the RW97 model, and two versions of the BGG99 model (BGG99 and BGG99 + Spread). Statistics are computed by recursively estimating the models over rolling windows of historical (quarterly) data vintages. Column titles indicate the sample under consideration in each case.
Model Uncertainty in Macroeconomics 721
no financial frictions, and two versions of the BGG99 model (BGG99 and BGG99 + Spread). The first two models and the BVAR have as observables output, inflation, and the federal funds rate, while the BGG99 + Spread model has the Baa ten-year Treasury spread as an additional observable and incorporates a financial shock. The panels on the right-hand side, which refer to models’ forecasting performance during the GFC, confirm what might be expected after reviewing the results of Del Negro and Schorfheide (2013) and Lindé, Smets, and Wouters (2016). Namely, the model that performs best during this period is the BGG99 + Spread model (long dashed line), which achieves the minimum RMSE in both output and inflation for all forecast horizons except the one-quarter-ahead output forecast. Note that we have not exploited the real-time observability of the spread, which would likely improve the model’s performance further. Between the RW97 (dash-dotted line) and BGG99 (solid line) models, there is no clear difference in terms of output forecasts, but the RW97 model outperforms with regard to inflation. During this period, the BVAR is the worst-performing model in terms of forecasting output but is as accurate as the BGG99 + Spread model in terms of inflation. Finally, the comparison between BGG99 and BGG99 + Spread makes clear that financial frictions do not improve models’ forecasting performance unless they are supplemented with additional shocks/observables. By contrast, the panels on the left-hand side in figure 23.12 show that both versions of the BGG99 model underperform relative to the RW97 model in the decade leading up to the GFC. The RW97 model, in turn, underperforms relative to the BVAR in terms of output and achieves roughly the same degree of accuracy in terms of inflation. Thus, the relative performance of the models with and without financial frictions and relative to nonstructural models varies over time.
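The RMSE-by-horizon exercise behind figure 23.12 can be sketched with a toy forecasting model. Below, an AR(1) stands in for the DSGE and BVAR models; the model is re-estimated at each forecast origin on data up to that origin only (mimicking recursive estimation on historical vintages), and squared forecast errors are accumulated by horizon. All numbers are illustrative.

```python
import numpy as np

# Toy version of the rolling-window RMSE exercise (illustrative numbers,
# with an AR(1) standing in for the structural models).
rng = np.random.default_rng(1)

T = 200
y = np.zeros(T)
for t in range(1, T):                      # "true" data: an AR(1) process
    y[t] = 0.7 * y[t - 1] + rng.normal(scale=0.5)

H = 8                                      # horizons 1..8, as in the figure
first_origin = 100                         # start of the evaluation sample
errors = [[] for _ in range(H)]

for origin in range(first_origin, T - H):
    # Re-estimate on data up to the forecast origin only, mimicking the
    # use of historical data vintages.
    s = y[:origin]
    phi_hat = (s[:-1] @ s[1:]) / (s[:-1] @ s[:-1])
    f = y[origin - 1]
    for h in range(1, H + 1):              # iterated h-step-ahead forecasts
        f = phi_hat * f
        errors[h - 1].append(y[origin + h - 1] - f)

rmse = [float(np.sqrt(np.mean(np.square(e)))) for e in errors]
print("RMSE by horizon:", [round(r, 3) for r in rmse])
```

As in the figure, accuracy deteriorates with the forecast horizon; comparing such RMSE profiles across models and subsamples is what reveals the state-dependent value of financial frictions.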
No single model is consistently better than the alternatives. Consequently, each model should be given due consideration in designing and implementing policy. As regards second- and third-generation New Keynesian DSGE models, their relative performance over time is documented by Del Negro, Hasegawa, and Schorfheide (2016). They find that the SW + BGG model outperforms the SW model during the early-2000s dot-com recession and during the Great Recession. During these periods, the spread is high relative to previous periods, as indicated previously in figure 23.8. At other times, the contrary is true, and it is the SW model that outperforms. As for the final point (point v), we may surmise from the preceding discussion that incorporating financial frictions into structural models can lead to better forecasting performance. However, this result is conditional on financial constraints being particularly relevant for macroeconomic dynamics at that time. Thus, third-generation models are unlikely to outperform second-generation models in an unconditional sense. Essentially, the specter of model uncertainty remains present. Dismissing earlier-generation models would be ill advised. Rather, new methods are called for to explicitly address the issue of model uncertainty as it pertains to forecasting. In this regard, the work of Del Negro, Hasegawa, and Schorfheide (2016)—which follows in the spirit of Leamer (1978) and Kapetanios, Labhard, and Price (2006)—offers a
promising approach to dealing with model uncertainty. The authors develop a “dynamic pooling strategy,” which allows the policymaker to mix the predictive densities of two different models by means of weighted averaging. This is similar to Bayesian model averaging but assumes that the model space is incomplete and allows for the model weights to be data-dependent and time-varying. They show that such a specification is able to perform better on average than either of the individual models, while allowing the weight on the better-performing model to be increased quickly in real-time forecasting. This procedure could be extended to a larger set of models. It could incorporate the suggestions of Lindé, Smets, and Wouters (2016) on explicitly accounting for nonlinearities and including Markov-switching processes for key financial parameters, both of which should lead to more accurate forecasts. Finally, the dynamic pooling strategy could be combined with a dynamic optimization of the central bank’s policy rule as developed in section 23.4 to further insure against model uncertainty. If properly implemented, these avenues represent promising tools for dealing with model uncertainty in real time and could result in better policy performance going forward.
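A stripped-down version of such a prediction pool can be sketched as follows. Two hypothetical predictive densities are combined with a weight updated each period from realized predictive likelihoods; the simple update-plus-shrinkage rule is an illustrative stand-in for the full state-space scheme of Del Negro, Hasegawa, and Schorfheide (2016), and all densities and data are invented for the example.

```python
import math
import random

# Illustrative two-model prediction pool: weights are data-dependent and
# time-varying, and shrinkage toward 0.5 keeps the pool able to move quickly.
random.seed(2)

def norm_pdf(y, loc, scale):
    """Density of a normal distribution."""
    z = (y - loc) / scale
    return math.exp(-0.5 * z * z) / (scale * math.sqrt(2.0 * math.pi))

# Model A (no financial frictions) always predicts normal growth; model B
# (financial frictions) switches to predicting a contraction after period 50.
def density_a(y, t):
    return norm_pdf(y, 0.5, 1.0)

def density_b(y, t):
    return norm_pdf(y, 0.5 if t < 50 else -2.0, 1.5)

# Realized "output growth": normal times, then a crisis.
data = [random.gauss(0.5, 1.0) for _ in range(50)] + \
       [random.gauss(-2.0, 1.5) for _ in range(50)]

w = 0.5                      # weight on model A in the pooled density
weights = []
for t, y in enumerate(data):
    p_a, p_b = density_a(y, t), density_b(y, t)
    # Pooled one-step-ahead predictive density: w * p_a + (1 - w) * p_b.
    w = w * p_a / (w * p_a + (1 - w) * p_b)   # update from realized likelihoods
    w = 0.9 * w + 0.1 * 0.5                   # shrinkage keeps the pool agile
    weights.append(w)

print(f"average weight on model A, periods 1-50:   {sum(weights[:50]) / 50:.2f}")
print(f"average weight on model A, periods 61-100: {sum(weights[60:]) / 40:.2f}")
```

In this stylized run, the pool leans on the tighter no-frictions density in normal times and shifts quickly toward the frictions model once the crisis regime begins, which is the behavior the dynamic pooling strategy is designed to deliver.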
23.6 Conclusion
Central banks have a long history of using macroeconomic models. Internally consistent and theoretically anchored structural macroeconomic models that perform reasonably well in matching empirical observations constitute a powerful tool for monetary policy analysis. In particular, such macroeconomic models yield sensible quantitative and qualitative measures of the economy’s current state and of the likely future outcomes of changes in central bank decision making. During the 2000s, a second generation of New Keynesian DSGE models was developed that fulfilled these criteria. At the same time, methods for model solution and estimation improved substantially, making it feasible to estimate such models and use them for real-time forecasting. These models could be used to provide answers to some typical questions of monetary policymakers. As a result, models in the vein of Christiano, Eichenbaum, and Evans (2005) and Smets and Wouters (2003; 2007) were quickly added to the toolkits of central banks for the evaluation of monetary policy. However, the GFC highlighted the need to investigate the role of financial intermediation in business cycles and policy transmission in further detail. Building on earlier research on financial frictions such as Bernanke, Gertler, and Gilchrist (1999), New Keynesian DSGE models have been extended to account for financial-market imperfections. Several new approaches toward integrating more detailed characterizations of the financial sector led to a third generation of New Keynesian DSGE models. While the firm-based financial frictions mechanism in the spirit of Bernanke, Gertler, and Gilchrist (1999) is by now well established, more work is still needed on how to include frictions in the banking sector of macroeconomic models.
Central bank modelers have already taken steps to include such models with financial frictions in the policy process. If anything, model uncertainty has become an even greater concern for policymaking. Comparative analysis of different third-generation New Keynesian DSGE models indicates a great deal of heterogeneity. The transmission of monetary and fiscal policy is quite different across models that include different types of financial frictions. Furthermore, there is some evidence that monetary policy has stronger effects in the presence of financial frictions due to amplification effects. In particular, these models imply stronger peak responses and/or more persistent effects of monetary policy shocks on the real economy. Against this background, it is important to compare the implications of different models for policy design and to search for policy rules that perform well across a range of models. To this end, we used model averaging techniques, focusing on models estimated for the euro area. We found that the models with financial frictions that we considered prescribe a weaker response to both inflation and the output gap but a stronger response to output gap growth than earlier-generation models. This is likely due to the stronger effects of monetary policy, on average, in the financial frictions models. They allow the systematic response to inflation and the output gap to be less pronounced. However, we found no strong evidence that reacting to financial-sector variables—LAW—leads to substantial gains in terms of more stable macroeconomic dynamics in the models considered. This finding may well be due to some particular modeling choices. The greater effectiveness of monetary policy with financial frictions seems somewhat at odds with the experience of the GFC. It was not the case that central banks were able to jump-start the economy with rapid interest-rate cuts.
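The search for a rule that performs well across models can be sketched with two illustrative backward-looking models (not the chapter's estimated DSGE models): a grid of Taylor-rule coefficients is evaluated in each model, and the rule minimizing the average loss across models is selected. All coefficients and loss weights below are made up for the example.

```python
import numpy as np

# Illustrative robust-rule search via model averaging: grid-search Taylor-rule
# coefficients minimizing the average quadratic loss across two toy models
# that differ in the strength of policy transmission.
rng = np.random.default_rng(3)
shocks = rng.normal(scale=0.5, size=(500, 2))   # common demand/supply shocks

def simulate_loss(phi_pi, phi_y, kappa, beta_i, shocks):
    """Simulate one model under i = phi_pi*pi + phi_y*y and return the
    loss Var(pi) + 0.5*Var(y) + 0.1*Var(di)."""
    pi = y = i_prev = 0.0
    pis, ys, dis = [], [], []
    for e_d, e_s in shocks:
        y = 0.6 * y - beta_i * (i_prev - pi) + e_d   # IS curve, lagged real rate
        pi = 0.7 * pi + kappa * y + e_s              # Phillips curve
        i = phi_pi * pi + phi_y * y                  # policy rule
        dis.append(i - i_prev)
        i_prev = i
        pis.append(pi)
        ys.append(y)
    return np.var(pis) + 0.5 * np.var(ys) + 0.1 * np.var(dis)

models = [dict(kappa=0.1, beta_i=0.2),   # weak policy transmission
          dict(kappa=0.3, beta_i=0.8)]   # strong transmission ("frictions")

best = None
for phi_pi in np.arange(1.1, 3.1, 0.2):
    for phi_y in np.arange(0.0, 1.1, 0.1):
        avg = np.mean([simulate_loss(phi_pi, phi_y, shocks=shocks, **m)
                       for m in models])
        if best is None or avg < best[0]:
            best = (float(avg), float(phi_pi), float(phi_y))

print(f"robust rule: phi_pi={best[1]:.1f}, phi_y={best[2]:.1f}, "
      f"avg loss={best[0]:.3f}")
```

Averaging the loss over models, rather than optimizing within a single model, is what insures the chosen rule against picking coefficients that work only under one transmission mechanism.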
Furthermore, the policy rules we found to perform well may depend too much on the expectations channel of policy and thus on fully credible commitment by the policymaker and rational, homogeneous expectations by market participants. It would be useful to revisit the question of LAW policies under learning and imperfect credibility and with nonlinearities in the financial sector (Filardo and Rungcharoenkitkul 2016). Finally, we reviewed the contribution of financial frictions to models’ forecasting performance in light of the recent financial crisis. Previous research showed that the empirical fit of New Keynesian DSGE models can be improved by adding financial frictions. Moreover, when conditioned on appropriate data, models with financial frictions can play a key role in generating accurate forecasts, in particular when financial shocks are a key driving force for the business cycle. More specifically, enhancing the Smets and Wouters (2007) model with the financial accelerator mechanism of Bernanke, Gertler, and Gilchrist (1999) and conditioning on the Baa ten-year Treasury spread lead to superior forecasting performance during the Great Recession relative to a version without financial frictions. However, no single model variant outperforms in terms of forecasting performance over longer time spans. Against the backdrop of model uncertainty, this suggests that third-generation models are unlikely to outperform second-generation models in an unconditional sense. Some form of model averaging thus represents a promising approach to generating appropriate forecasts.
Looking forward, structural macroeconomic models will remain essential tools for policymakers. However, as in the past, these models need to be constantly adapted in line with observed economic developments in order to better address the challenges policymakers face in an evolving economic environment. The speedy adoption of New Keynesian DSGE models featuring financial frictions in the toolkit of central banks is to be welcomed. Still, central banks would appear well advised to increase the diversity of modeling approaches they employ and to take greater account of macroeconomic model competition as well as recent criticisms of the New Keynesian DSGE framework. Considering competing modeling paradigms in macroeconomics, such as models with heterogeneous agents, agent-based models, or new macrofinance models, seems advisable. In combination with further development of New Keynesian DSGE models, such a pluralistic approach to macroeconomic modeling is likely to yield a more robust framework for future monetary policy analysis.
Notes
1. As discussed by Fukac and Pagan 2011, coexisting with structural (“interpretative”) models have been reduced-form (“summative”) models. While one of the main uses of summative models has been the calculation of forecasts, they have also been employed to obtain improved insight on specific aspects of economic theory that are of relevance for central banks. See, for example, Binder, Chen, and Zhang 2010 for the use of global vector autoregressions to examine the overshooting of exchange rates in response to unanticipated changes in monetary policy and Schorfheide, Song, and Yaron 2014 for the use of Bayesian state space models to decompose the origins of risk premiums. While this is beyond the scope of this chapter, there are useful ongoing debates about the relative merits of different approaches in macroeconomic modeling aiming to strike a good trade-off between consistency with economic theory, adequacy in capturing the data, and relevance for macroeconomic policy (see, for example, Caballero 2010; Chari, Kehoe, and McGrattan 2009; Pesaran and Smith 2011).
2. See Dhaene and Barten 1989 for a detailed review of the Tinbergen model. The model consists of twenty-four structural equations with Keynesian elements and was built to assess whether the Netherlands should leave the gold standard.
3. The Bank of Canada adopted the so-called RDX1 model and later versions RDX2 and RDXF (Helliwell 1969), while the Federal Reserve System started using the MPS and the MCM model (De Leeuw and Gramlich 1968).
4. The Bank of Canada replaced the RDXF model in the early 1990s with the QPM (Coletti et al. 1996; Poloz, Rose, and Tetlow 1994). At the Federal Reserve, the MPS was replaced with the FRB/US model, an econometric model with explicit treatment of private-sector expectations (Brayton and Tinsley 1996).
5. The terminology of model generations used here follows Orphanides and Wieland 2013 and Wieland et al. 2016.
6.
Their framework was already widely circulated in working-paper format in 2001.
7. Alternative means to incorporate financial frictions are borrowing constraints faced by households or including the financial accelerator mechanism in housing markets. For a combination of both of these, see, for example, Iacoviello and Neri 2010; Quint and Rabanal 2014.
8. We draw on published research papers, working papers, or other publicly available documentation. In particular, the table is based on Wieland et al. 2016; 2012, Coenen et al. 2012, Kilponen et al. 2015, and Lindé, Smets, and Wouters 2016, who provide discussions of models used at policy institutions that complement our discussion here. By no means do we claim to provide a complete overview.
9. For example, the publication list of the Board of Governors of the Federal Reserve System staff in 2015–2016 includes many research papers dealing with financial-sector issues.
10. For a detailed review of the history of macroeconomic model comparison, see Wieland et al. 2016 and the references therein.
11. In their model, the bank optimization problem implies a link between bank lending and the conditions faced in the market for funding, which in turn depend on households’ liquidity demand.
12. The suitability of models with financial frictions to explain the period after the Great Recession is also supported by the somewhat improved empirical fit, as documented in Del Negro, Giannoni, and Schorfheide 2015; Villa 2016; Lindé, Smets, and Wouters 2016; and others, a point to which we return in section 23.5.
13. Cogan et al. 2010 extend the SW model with rule-of-thumb consumers, while Cwik and Wieland 2011 compare the SW03 model to TAY93, the ECB’s Area-Wide Model, the EU-QUEST III model, and a medium-scale model developed at the IMF.
14. The calibration follows Bernanke, Gertler, and Gilchrist 1999 and Christiano, Eichenbaum, and Rebelo 2011.
15. The increase in government consumption is unanticipated by agents, but the time path of fiscal stimulus is fully revealed upon implementation in the initial period.
16. Considering a marginal increase in government consumption ensures that the length of the ZLB phase is not altered.
17. The policy rules used are i_t = 1.5π^Q_t + 0.5 · 4y_t, i_t = 0.8i_{t−1} + (1 − 0.8)π^Q_t + (1 − 0.8) · 0.5 · 4(y_t − y_{t−1}), and i_t = 10π^Q_t, respectively. Here i is the quarterly annualized nominal interest rate, π^Q is the quarterly annualized inflation rate, and y is the output gap.
18. Partial impulse responses are defined as x_t = x^f_t − x^0_t, with x^f_t being the percent deviation response to the shock that drives the economy to the ZLB, combined with fiscal stimulus, while x^0_t is the response to the recessionary shock only. We thus compare the scenario of a binding ZLB and accompanying fiscal stimulus to a scenario of a binding ZLB only.
19. For a detailed account of the relevant literature, see the substantive reviews of Galati and Moessner 2013; Bank of England 2011; German Council of Economic Experts 2014.
20. For substantive discussions of the mechanisms through which these instruments take effect, see Hanson, Kashyap, and Stein 2011; Bank of England 2011; Nier et al. 2013. For a precise description of how these instruments are incorporated into the models, see Binder et al. 2017.
21. In each case, the macroprudential instrument is modeled as an AR(1) process with an autoregressive coefficient equal to 0.9.
22. We have made all of the models considered available at www.macromodelbase.com.
23. For simplicity, we abstract from uncertainty due to mismeasurement of macroeconomic variables. For a treatment of this issue within the current framework, see Orphanides and Wieland 2013.
24. Specifically, the forward-looking models considered in this study are Coenen and Wieland 2005 from the first generation and SW from the second generation.
25. For algorithms aimed at an optimal choice of weights, see Kuester and Wieland 2010; Del Negro, Hasegawa, and Schorfheide 2016.
26. For a full description of the numerical strategy employed to solve this problem, see Afanasyeva et al. 2016.
27. Note that we include the annualized nominal interest rate, annual inflation, and the output gap in the common variables.
28. Additionally, this class of rules contains several benchmark rules as special cases. For the performance of benchmarks such as the Taylor rule, the Gerdesmeier et al. 2004 rule, and the first-difference rule of Orphanides and Wieland 2013, see Afanasyeva et al. 2016.
29. Since we work with the first-order approximation of each model, multiple equilibria and indeterminacy are equivalent terms in our application.
30. See Levin and Williams 2003 for a similar argument.
31. Afanasyeva et al. 2016 show that the results presented here continue to hold under a set of different weights.
32. Specifically, the model used in their analysis is the SW model.
33. Alternatively, one could eliminate the Var_m(Δi) term from the loss function and add a restriction that Var_m(Δi) ≤ σ², where σ² is the variance of interest-rate changes observed in the data. This specification would yield similar results, as shown, for example, in Wieland et al. 2013, but carries computational disadvantages.
34. Most of the third-generation New Keynesian DSGE models in the set, as well as the SW model, have been described in section 23.2. For brevity of exposition, we refer the reader to Afanasyeva et al.
2016 for a description of the remaining models and for a presentation of the euro area estimation of all the versions of the EA_CFOP14 model, which the authors estimate.
35. Specifically, the table reproduces the “normalized loss Bayesian rules” in Afanasyeva et al. 2016. In deriving these rules, the weights in the policymaker’s problem have been set such that the minimum level of each model’s loss function is normalized to unity. This choice of weights prevents “loss outliers” from excessively influencing the results (a problem identified in Kuester and Wieland 2010 and Adalid et al. 2005) and results in a flatter distribution of model loss increases.
36. Again, see Afanasyeva et al. 2016 for the computational algorithm employed in finding these coefficients.
37. There is no general agreement on whether rules are best evaluated through absolute or percent changes in models’ loss functions. Here we present both statistics, since we think their implications coincide, which allows us to avoid taking a stand on this issue. For a discussion, see Kuester and Wieland 2010; Levin and Williams 2003.
38. For the general definition and justification of this concept, see Kuester and Wieland 2010.
39. In deriving this rule, we set equal weights on all models.
40. For brevity of exposition, we do not deal with the methodological aspects of the topic but focus instead on one issue: financial frictions models’ contribution to forecasting performance. See Wieland et al. 2013 and Del Negro and Schorfheide 2013 for thorough expositions on the subject.
41. For empirical evidence documenting that policymakers do indeed adjust their policies in response to economic forecasts, see Wieland and Wolters 2013.
42. The original text deals with the Federal Reserve’s forecasts, but the statement holds true for other central banks as well.
43. Specifically, the Federal Reserve, the ECB, the Bank of England, and the Riksbank.
44. In contrast to central banks’ flagship models, descriptions of satellite models are typically not available to the general public.
45. For specific expositions on these issues, see European Central Bank 2016; Burgess et al. 2013; Alessi et al. 2014.
46. For applications to fiscal policy, see Coenen et al. 2012; Wieland et al. 2013.
47. For papers dealing with real-time forecasting in the context of the euro area, see Coenen, Levin, and Wieland 2005; Christoffel, Warne, and Coenen 2010; Smets, Warne, and Wouters 2014.
48. The dates of publication of some of the cited papers are misleading, since they had been circulated as working papers earlier.
49. See Wieland et al. 2013; Del Negro and Schorfheide 2013; Lindé, Smets, and Wouters 2016.
50. In subsequent work, the authors show that the third-generation model’s forecast can be improved even further; see Del Negro et al. 2015.
51. These results are in line with earlier work by Christiano, Motto, and Rostagno 2014, although the set of financial shocks is different between the models.
52. However, it should be noted that this exercise employs methods that were not available before the crisis. Interestingly, Lindé, Smets, and Wouters 2016 also document that estimating the SW plus financial frictions model with nonlinear methods results in higher price-stickiness parameter values and a flatter Phillips curve, which helps account for the sluggishness of inflation during the Great Recession.
References
Adalid, R., G. Coenen, P. McAdam, and S. Siviero. 2005. “The Performance and Robustness of Interest-Rate Rules in Models of the Euro Area.” International Journal of Central Banking 1, no. 1: 95–132.
Adam, K., and R. M. Billi. 2006. “Optimal Monetary Policy under Commitment with a Zero Bound on Nominal Interest Rates.” Journal of Money, Credit and Banking 38, no. 7: 1877–1905.
Adam, K., and R. M. Billi. 2007. “Discretionary Monetary Policy and the Zero Lower Bound on Nominal Interest Rates.” Journal of Monetary Economics 54, no. 3: 728–752.
Adolfson, M., S. Laséen, L. Christiano, M. Trabandt, and K. Walentin. 2013. “Ramses II—Model Description.” Sveriges Riksbank occasional paper 12.
Adolfson, M., S. Laséen, J. Lindé, and M. Villani. 2007. “RAMSES: A New General Equilibrium Model for Monetary Policy Analysis.” Sveriges Riksbank Economic Review 2: 5–40.
Afanasyeva, E., M. Binder, J. Quintana, and V. Wieland. 2016. “Robust Monetary and Fiscal Policies under Model Uncertainty.” MACFINROBODS working paper 11.4.
Afanasyeva, E., and J. Güntner. 2015. “Lending Standards, Credit Booms, and Monetary Policy.” Hoover Institution economics working paper 15115.
Aldasoro, I., D. Delli Gatti, and E. Faia. 2015. “Bank Networks: Contagion, Systemic Risk and Prudential Policy.” SAFE working paper 87.
Alessi, L., E. Ghysels, L. Onorante, R. Peach, and S. Potter. 2014. “Central Bank Macroeconomic Forecasting during the Global Financial Crisis: The European Central Bank and Federal Reserve Bank of New York Experiences.” Journal of Business & Economic Statistics 32, no. 4: 483–500.
Alfarano, S., T. Lux, and F. Wagner. 2005. “Estimation of Agent-Based Models: The Case of an Asymmetric Herding Model.” Computational Economics 26, no. 1: 19–49.
Almeida, V., G. L. de Castro, R. M. Félix, P. Júlio, and J. R. Maria. 2013. “Inside PESSOA—A Detailed Description of the Model.” Banco de Portugal working papers w201316.
Altig, D. E., L. J. Christiano, M. Eichenbaum, and J. Lindé. 2005. “Firm-Specific Capital, Nominal Rigidities and the Business Cycle.” CEPR discussion paper 4858.
Anderson, D., B. L. Hunt, M. Kortelainen, M. Kumhof, D. Laxton, D. V. Muir, S. Mursula, and S. Snudden. 2013. “Getting to Know GIMF: The Simulation Properties of the Global Integrated Monetary and Fiscal Model.” IMF working paper 13/55.
Anderson, T. W., and J. B. Taylor. 1976. “Some Experimental Results on the Statistical Properties of Least Squares Estimates in Control Problems.” Econometrica 44, no. 6: 1289–1302.
Andrle, M., M. Kumhof, D. Laxton, and D. V. Muir. 2015. “Banks in the Global Integrated Monetary and Fiscal Model.” IMF working paper 15/150.
Angelini, P., S. Neri, and F. Panetta. 2011. “Monetary and Macroprudential Policies.” Bank of Italy economic working paper 801.
Auerbach, A. J., and M. Obstfeld. 2005. “The Case for Open-Market Purchases in a Liquidity Trap.” American Economic Review 95, no. 1: 110–137.
Ball, L. M. 1999. “Policy Rules for Open Economies.” In Monetary Policy Rules, edited by J. B. Taylor, 127–156. Chicago: University of Chicago Press.
Ball, L., J. Gagnon, P. Honohan, and S. Krogstrup. 2016. “What Else Can Central Banks Do?” Geneva Reports on the World Economy 18: i–110.
Bank of England. 2011.
“Instruments of Macroprudential Policy: A Discussion Paper.”
Basel Committee on Banking Supervision. 2010. Basel III: International Framework for Liquidity Risk Measurement, Standards and Monitoring. Basel: Bank for International Settlements.
Belke, A., and J. Klose. 2013. “Modifying Taylor Reaction Functions in the Presence of the Zero-Lower-Bound—Evidence for the ECB and the Fed.” Economic Modelling 35 (C): 515–527.
Benes, J., and M. Kumhof. 2011. “Risky Bank Lending and Optimal Capital Adequacy Regulation.” IMF working paper 11/130. http://www.imf.org/external/pubs/cat/longres.aspx?sk=24900.0.
Bernanke, B. S., and A. S. Blinder. 1988. “Credit, Money, and Aggregate Demand.” American Economic Review 78, no. 2: 435–439.
Bernanke, B. S., and M. Gertler. 1995. “Inside the Black Box: The Credit Channel of Monetary Policy Transmission.” Journal of Economic Perspectives 9, no. 4: 27–48.
Bernanke, B. S., M. Gertler, and S. Gilchrist. 1999. “The Financial Accelerator in a Quantitative Business Cycle Framework.” In Handbook of Macroeconomics, Vol. 1, edited by J. B. Taylor and M. Woodford, 1341–1393. Amsterdam: Elsevier.
Binder, M., Q. Chen, and X. Zhang. 2010. “On the Effects of Monetary Policy Shocks on Exchange Rates.” CESifo working paper 3162.
Binder, M., P. Lieberknecht, J. Quintana, and V. Wieland. 2017. “Robust Macroprudential and Regulatory Policies under Model Uncertainty.” MACFINROBODS working paper 11.5.
Binder, M., P. Lieberknecht, and V. Wieland. 2016. “Fiscal Multipliers, Fiscal Sustainability and Financial Frictions.” MACFINROBODS working paper 6.4.
Blanchard, O. 2016. “Do DSGE Models Have a Future?” Peterson Institute for International Economics policy brief 16-11.
Blanchard, O., G. Dell’Ariccia, and P. Mauro. 2010. “Rethinking Macroeconomic Policy.” Journal of Money, Credit and Banking 42: 199–215.
Blanchard, O., C. Erceg, and J. Lindé. 2016. “Jump-Starting the Euro Area Recovery: Would a Rise in Core Fiscal Spending Help the Periphery?” NBER Macroeconomics Annual 2016 31: 1–99.
Bletzinger, T., and M. Lalik. 2017. “The Impact of Constrained Monetary Policy on the Fiscal Multipliers on Output and Inflation.” European Central Bank working paper 2019.
Bokan, N., A. Gerali, S. Gomes, P. Jacquinot, and M. Pisani. 2016. “EAGLE-FLI: A Macroeconomic Model of Banking and Financial Interdependence in the Euro Area.” European Central Bank working paper 1923.
Borio, C., et al. 2011. “Toward an Operational Framework for Financial Stability: ‘Fuzzy’ Measurement and Its Consequences.” Central Banking, Analysis, and Economic Policies Book Series 15: 63–123.
Brave, S., J. R. Campbell, J. D. M. Fisher, and A. Justiniano. 2012. “The Chicago Fed DSGE Model.” Federal Reserve Bank of Chicago working paper 2012-02.
Brayton, F., and P. Tinsley. 1996. “A Guide to FRB/US: A Macroeconomic Model of the United States.” Federal Reserve System finance and economics discussion series 1996-42.
Brayton, F., T. Laubach, and D. L. Reifschneider. 2014. “The FRB/US Model: A Tool for Macroeconomic Policy Analysis.” Federal Reserve System notes 2014-04-03.
Brockmeijer, J., M. Moretti, J. Osinski, N. Blancher, J. Gobat, N. Jassaud, C. H. Lim, et al. 2011. “Macroprudential Policy: An Organizing Framework.” IMF background paper 1–32.
Brubakk, L., and P. Gelain. 2014. “Financial Factors and the Macroeconomy—A Policy Model.” Norges Bank staff memo 2014/10.
Brubakk, L., T. A. Husebø, J. Maij, K. Olsen, and M. Østnor. 2006.
“Finding NEMO: Documentation of the Norwegian Economy Model.” Norges Bank staff memo 2006/6.
Brunnermeier, M. K., and Y. Sannikov. 2014. “A Macroeconomic Model with a Financial Sector.” American Economic Review 104, no. 2: 379–421.
Buiter, W. 2009. “The Unfortunate Uselessness of Most ‘State of the Art’ Academic Monetary Economics.” VOXeu.org, March 6, 2009. https://voxeu.org/article/macroeconomics-crisis-irrelevance.
Burgess, S., E. Fernandez-Corugedo, C. Groth, R. Harrison, F. Monti, K. Theodoridis, and M. Waldron. 2013. “The Bank of England’s Forecasting Platform: COMPASS, MAPS, EASE and the Suite of Models.” Bank of England working paper 471.
Buzaushina, A., M. Hoffmann, M. Kliem, M. Krause, and S. Moyen. 2011. “An Estimated DSGE Model of the German Economy.” Mimeo. Deutsche Bundesbank.
Caballero, R. J. 2010. “Macroeconomics after the Crisis: Time to Deal with the Pretense-of-Knowledge Syndrome.” Journal of Economic Perspectives 24, no. 4: 85–102.
Campbell, J. R., C. L. Evans, J. D. Fisher, and A. Justiniano. 2012. “Macroeconomic Effects of Federal Reserve Forward Guidance.” Brookings Papers on Economic Activity 43, no. 1: 1–80.
Carlstrom, C. T., T. S. Fuerst, A. Ortiz, and M. Paustian. 2014a. “Estimating Contract Indexation in a Financial Accelerator Model.” Journal of Economic Dynamics and Control 46 (C): 130–149.
Carlstrom, C. T., T. S. Fuerst, and M. Paustian. 2014b. “Targeting Long Rates in a Model with Segmented Markets.” Federal Reserve Bank of Cleveland working paper 1419.
Carrillo, J., and C. Poilly. 2013. “How Do Financial Frictions Affect the Spending Multiplier during a Liquidity Trap?” Review of Economic Dynamics 16, no. 2: 296–311.
Chari, V. V., P. J. Kehoe, and E. R. McGrattan. 2009. “New Keynesian Models: Not Yet Useful for Policy Analysis.” American Economic Journal: Macroeconomics 1, no. 1: 242–266.
Chen, H., V. Cúrdia, and A. Ferrero. 2012. “The Macroeconomic Effects of Large-Scale Asset Purchase Programmes.” Economic Journal 122, no. 564: 289–315.
Christensen, I., and A. Dib. 2008. “The Financial Accelerator in an Estimated New Keynesian Model.” Review of Economic Dynamics 11, no. 1: 155–178.
Christiano, L. J., M. Eichenbaum, and C. L. Evans. 2005. “Nominal Rigidities and the Dynamic Effects of a Shock to Monetary Policy.” Journal of Political Economy 113, no. 1: 1–45.
Christiano, L., M. Eichenbaum, and S. Rebelo. 2011. “When Is the Government Spending Multiplier Large?” Journal of Political Economy 119, no. 1: 78–121.
Christiano, L., R. Motto, and M. Rostagno. 2008. “Shocks, Structures or Monetary Policies? The Euro Area and US after 2001.” Journal of Economic Dynamics and Control 32, no. 8: 2476–2506.
Christiano, L., R. Motto, and M. Rostagno. 2010. “Financial Factors in Economic Fluctuations.” European Central Bank working paper 1192.
Christiano, L. J., R. Motto, and M. Rostagno. 2014. “Risk Shocks.” American Economic Review 104, no. 1: 27–65.
Christoffel, K., G. Coenen, and A. Warne. 2008. “The New Area-Wide Model of the Euro Area: A Micro-Founded Open-Economy Model for Forecasting and Policy Analysis.” European Central Bank working paper 0944.
Christoffel, K., A. Warne, and G. Coenen. 2010. “Forecasting with DSGE Models.” European Central Bank working paper 1185.
Chung, H., M. T. Kiley, and J.-P. Laforte. 2010. “Documentation of the Estimated, Dynamic, Optimization-Based (EDO) Model of the U.S.
Economy: 2010 Version.” Federal Reserve System finance and economics discussion series 2010-29. Clarida, R., M. Gertler, and J. Gali. 1999. “The Science of Monetary Policy: A New Keynesian Perspective.” Journal of Economic Literature 37, no. 4: 1661–1707. Coenen, G., A. Orphanides, and V. Wieland. 2004. “Price Stability and Monetary Policy Effectiveness When Nominal Interest Rates Are Bounded at Zero.” B.E. Journal of Macroeconomics 4, no. 1: 1–25. Coenen, G., C. J. Erceg, C. Freedman, D. Furceri, M. Kumhof, R. Lalonde, D. Laxton, et al. 2012. “Effects of Fiscal Stimulus in Structural Models.” American Economic Journal: Macroeconomics 4, no. 1: 22–68. Coenen, G., A. Levin, and V. Wieland. 2005. “Data Uncertainty and the Role of Money As an Information Variable for Monetary Policy.” European Economic Review 49, no. 4: 975–1006. Coenen, G., and V. Wieland. 2003. “The Zero-Interest-Rate Bound and the Role of the Exchange Rate for Monetary Policy in Japan.” Journal of Monetary Economics 50, no. 5: 1071–1101. Coenen, G., and V. W. Wieland. 2004. “Exchange-Rate Policy and the Zero Bound on Nominal Interest Rates.” American Economic Review 94, no. 2: 80–84. Coenen, G., and V. Wieland. 2005. “A Small Estimated Euro Area Model with Rational Expectations and Nominal Rigidities.” European Economic Review 49, no. 5: 1081–1104. Cogan, J. F., T. Cwik, J. B. Taylor, and V. Wieland. 2010. “New Keynesian versus Old Keynesian Government Spending Multipliers.” Journal of Economic Dynamics and Control 34, no. 3: 281–295.
Model Uncertainty in Macroeconomics 731 Coletti, D., B. Hunt, D. Rose, and R. Tetlow. 1996. “The Bank of Canada’s New Quarterly Projection Model, Part 3: The Dynamic Model, QPM.” Bank of Canada technical report 75. Cwik, T., and V. Wieland. 2011. “Keynesian Government Spending Multipliers and Spillovers in the Euro Area. Economic Policy 26, no. 67: 493–549. Dawid, H., P. Harting, and M. Neugart. 2014. “Economic Convergence: Policy Implications from a Heterogeneous Agent Model.” Journal of Economic Dynamics and Control 44 (C): 54–80. Debortoli, D., J. Kim, J. Linde, and R. Nunes. 2016. “Designing a Simple Loss Function for the Fed: Does the Dual Mandate Make Sense?” Korea University Institute of Economic Research discussion paper 1601. De Graeve, F. 2008. “The External Finance Premium and the Macroeconomy: US Post-WWII Evidence.” Journal of Economic Dynamics and Control 32, no. 11: 3415–3440. De Leeuw, F., and E. M. Gramlich. 1968. “The Federal Reserve–MIT Economic Model. Federal Reserve Bulletin, January. Delli Gatti, D., and S. Desiderio. 2015. “Monetary Policy Experiments in an Agent-Based Model with Financial Frictions.” Journal of Economic Interaction and Coordination 10, no. 2: 265–286. Del Negro, M. D., S. Eusepi, M. Giannoni, A. M. Sbordone, A. Tambalotti, M. Cocci, R. B. Hasegawa, and M. H. Linder. 2013. “The FRBNY DSGE model.” Federal Reserve Bank of New York staff report 647. Del Negro, M., M. P. Giannoni, and F. Schorfheide. 2015. “Inflation in the Great Recession and New Keynesian Models.” American Economic Journal: Macroeconomics 7, no. 1: 168–196. Del Negro, M., R. B. Hasegawa, and F. Schorfheide. 2016. “Dynamic Prediction Pools: An Investigation of Financial Frictions and Forecasting Performance.” Journal of Econometrics 192, no. 2: 391–405. Del Negro, M., and F. Schorfheide. 2013. “DSGE Model-Based Forecasting.” Handbook of Economic Forecasting 2: 57–140. De Resende, C., A. Dib, and N. Perevalov. 2010. 
“The Macroeconomic Implications of Changes in Bank Capital and Liquidity Requirements in Canada: Insights from the BoC-GEM-FIN.” Bank of Canada discussion paper 10-16. Dhaene, G., and A. P. Barten. 1989. “When It All Began.” Economic Modelling 6, no. 2: 203–219. Dib, A. 2010. “Banks, Credit Market Frictions, and Business Cycles.” Bank of Canada staff working paper 10-24. Dieppe, A., A. González Pandiella, S. Hall, and A. Willman. 2011. “The ECB’s New Multi- Country Model for the Euro Area: NMCM— with Boundedly Rational Learning Expectations.” European Central Bank working paper 1316. Dieppe, A., K. Küster, and P. McAdam. 2005. “Optimal Monetary Policy Rules for the Euro Area: An Analysis Using the Area Wide Model. Journal of Common Market Studies 43, no. 3: 507–537. Dorich, J., M. K. Johnston, R. R. Mendes, S. Murchison, and Y. Zhang. 2013. “ToTEM II: An Updated Version of the Bank of Canada’s Quarterly Projection Model.” Bank of Canada technical report 100. Edge, R. M., M. T. Kiley, and J.-P. Laforte. 2007. “Documentation of the Research and Statistics Division’s Estimated DSGE Model of the U.S. Economy: 2006 Version.” Federal Reserve System finance and economics discussion series 2007-53.
732 Binder, Lieberknecht, Quintana, and Wieland Eggertsson, G. B. 2011. “What Fiscal Policy Is Effective at Zero Interest Rates?” NBER Macroeconomics Annual 2010 25: 59–112. Eggertsson, G. B., and P. Krugman. 2012. “Debt, Deleveraging, and the Liquidity Trap: A Fisher-Minsky-Koo Approach.” Quarterly Journal of Economics 127, no. 3: 1469–1513. Eggertsson, G. B., and M. Woodford. 2003. “The Zero Bound on Interest Rates and Optimal Monetary Policy.” Brookings Papers on Economic Activity 34, no. 1: 139–235. Ellison, M., and A. Tischbirek. 2014. “Unconventional Government Debt Purchases As a Supplement to Conventional Monetary Policy.” Journal of Economic Dynamics and Control 43 (C): 199–217. Erceg, C. J., L. Guerrieri, and C. Gust. 2006. “SIGMA: A New Open Economy Model for Policy Analysis.” International Journal of Central Banking 2, no. 1: 1–50. European Central Bank. 2016. “A Guide to the Eurosystem/ ECB Staff Macroeconomic Projection Exercises.” European Central Bank technical report. Fagiolo, G., and A. Roventini. 2012. “Macroeconomic Policy in DSGE and Agent-Based Models.” Revue de l’OFCE 5: 67–116. Faia, E., and Schnabel, I. 2015. “The Road from Micro-to Macro-Prudential Regulation.” In Financial Regulation: A Transatlantic Perspective, edited by Ester Faia, Andreas Hackethal, Michael Haliassos, and Katja Langenbucher, 3–22. Cambridge: Cambridge University Press. Fernández-Villaverde, J. 2010. “Fiscal Policy in a Model with Financial Frictions.” American Economic Review 100, no. 2: 35–40. Filardo, A., and P. Rungcharoenkitkul. 2016. “A Quantitative Case for Leaning against the Wind.” BIS working paper 594. Financial Stability Board. 2009. “Addressing Financial System Procyclicality: A Possible Framework.” Note for the FSF Working Group on Market and Institutional Resilience, Bank for International Settlements. Fuhrer, J. C., and B. F. Madigan. 1997. “Monetary Policy When Interest Rates Are Bounded at Zero.” Review of Economics and Statistics 79, no. 4: 573–585. 
Fuhrer, J. C., and G. R. Moore. 1995. “Monetary Policy Trade-Offs and the Correlation b etween Nominal Interest Rates and Real Output.” American Economic Review 85, no. 1: 219–239. Fujiwara, I., N. Hara, Y. Hirose, and Y. Teranishi. 2005. “The Japanese Economic Model (JEM).” Monetary and Economic Studies 23, no. 2: 61–142. Fukac, M., and A. Pagan. 2011. “Structural Macro- Econometric Modeling in a Policy Environment.” In Handbook of Empirical Economics and Finance, edited by A. Ullah and D. Giles, 215–246. Boca Raton, FL: Chapman and Hall. Furceri, D., and A. Mourougane. 2010. “The Effects of Fiscal Policy on Output: A DSGE Analysis.” OECD Economics Department Working Papers, No. 770, OECD Publishing, Paris. https://doi.org/10.1787/5kmfp4z3njg0-en. Gaballo, G., E. Mengus, B. Mojon, and P. Andrade. 2016. “Forward Guidance and Heterogenous Beliefs.” Society for Economic Dynamics meeting paper 1182. Gai, P., A. Haldane, and S. Kapadia. 2011. “Complexity, Concentration and Contagion.” Journal of Monetary Economics 58, no. 5: 453–470. Galati, G., and R. Moessner. 2013. “Macroprudential Policy—A Literature Review.” Journal of Economic Surveys 27, no. 5: 846–878. Gambacorta, L., and D. Marques-Ibanez. 2011. “The Bank Lending Channel: Lessons from the Crisis.” Economic Policy 26, no. 66: 135–182. Gelain, P. 2010a. “The External Finance Premium in the Euro Area: A Dynamic Stochastic General Equilibrium Analysis.” North American Journal of Economics and Finance 21, no. 1: 49–7 1.
Model Uncertainty in Macroeconomics 733 Gelain, P. 2010b. “The External Finance Premium in the Euro Area: A Useful Indicator for Monetary Policy?” European Central Bank working paper 1171. Gerali, A., S. Neri, L. Sessa, and F. M. Signoretti. 2010. “Credit and Banking in a DSGE Model of the Euro Area.” Journal of Money, Credit and Banking 42, series 1: 107–141. Gerdesmeier, D., et al. 2004. “Empirical Estimates of Reaction Functions for the Euro Area.” Swiss Journal of Economics and Statistics 140, no. 1: 37–66. Gerke, R., M. Jonsson, M. Kliem, M. Kolasa, P. Lafourcade, A. Locarno, K. Makarski, and P. McAdam. 2013. “Assessing Macro-Financial Linkages: A Model Comparison Exercise.” Economic Modelling 31: 253–264. German Council of Economic Experts. 2014. “Annual Report 2014/15.” Technical report. Gertler, M., and P. Karadi. 2011. A Model of Unconventional Monetary Policy.” Journal of Monetary Economics 58, no. 1: 17–34. Gertler, M., and P. Karadi. 2013. “QE 1 vs. 2 vs. 3 . . . : A Framework for Analyzing Large-Scale Asset Purchases As a Monetary Policy Tool.” International Journal of Central Banking 9, no. 1: 5–53. Gertler, M., and N. Kiyotaki. 2010. “Financial Intermediation and Credit Policy in Business Cycle Analysis.” In Handbook of Monetary Economics, Vol. 3, edited by B. M. Friedman and M. Woodford, 547–599. San Diego: Elsevier. Giannoni, M., C. Patterson, and M. D. Negro. 2016. “The Forward Guidance Puzzle.” Society for Economic Dynamics meeting paper 143. Gomes, S., P. Jacquinot, and M. Pisani. 2012. “The EAGLE: A Model for Policy Analysis of Macroeconomic Interdependence in the Euro Area.” Economic Modelling 29, no. 5: 1686–1714. Goodfriend, M., and R. King. 1997. “The New Neoclassical Synthesis and the Role of Monetary Policy.” NBER Macroeconomics Annual 12: 231–296. Haavelmo, T. 1944. “The Probability Approach in Econometrics.” Econometrica 12: iii–115. Hanson, S. G., A. K. Kashyap, and J. C. Stein. 2011. 
“A Macroprudential Approach to Financial Regulation.” Journal of Economic Perspectives 25, no. 1: 3–28. Harrison, R., K. Nikolov, M. Quinn, G. Ramsay, A. Scott, and R. Thomas. 2005. “The Bank of England Quarterly Model.” Bank of England technical report. Helliwell, J. F. 1969. The Structure of RDX1. Ottawa: Bank of Canada. Holmstrom, B., and J. Tirole. 1997. “Financial Intermediation, Loanable Funds, and the Real Sector.” Quarterly Journal of Economics 112, no. 3: 663–691. Iacoviello, M., and S. Neri. 2010. “Housing Market Spillovers: Evidence from an Estimated DSGE Model.” American Economic Journal: Macroeconomics 2, no. 2: 125–164. Ireland, P. N. 2004. “Money’s Role in the Monetary Business Cycle.” Journal of Money, Credit and Banking 36, no. 6: 969–983. Jakab, Z., and M. Kumhof. 2015. “Banks Are Not Intermediaries of Loanable Funds—and Why This Matters.” Bank of England working paper 529. Kapetanios, G., V. Labhard, and S. Price. 2006. “Forecasting Using Predictive Likelihood Model Averaging.” Economics Letters 91, no. 3: 373–379. Kashyap, A. K., J. C. Stein, and D. W. Wilcox. 1993. “Monetary Policy and Credit Conditions: Evidence from the Composition of External Finance.” American Economic Review 83, no. 1: 78–98. Kilponen, J., M. Pisani, S. Schmidt, V. Corbo, T. Hlédik, J. Hollmayr, S. Hurtado, et al. 2015. “Comparing Fiscal Multipliers across Models and Countries in Europe.” European Central Bank working paper 1760.
734 Binder, Lieberknecht, Quintana, and Wieland Klein, L. R. 1969. “Estimation on Interdependent Systems in Macroeconometrics.” Econometrica 37, no. 2: 171–192. Krugman, P. R. 1998. “It’s Baaack: Japan’s Slump and the Return of the Liquidity Trap.” Brookings Papers on Economic Activity 29, no. 2: 137–206. Krugman, P. 2009. “Paul Krugman’s London Lectures. Dismal Science: The Nobel Laureate Speaks on the Crisis in the Economy and in Economics.” Economist, June 11, 2009. Kuester, K., and V. Wieland. 2010. “Insurance Policies for Monetary Policy in the Euro Area.” Journal of the European Economic Association 8, no. 4: 872–912. Kühl, M. 2016. “The Effects of Government Bond Purchases on Leverage Constraints of Banks and Non-Financial Firms.” Deutsche Bundesbank discussion paper 38. Kydland, F. E., and E. C. Prescott. 1977. “Rules Rather Than Discretion: The Inconsistency of Optimal Plans.” Journal of Political Economy 85, no. 3: 473–491. Kydland, F. E., and E. C. Prescott. 1982. “Time to Build and Aggregate Fluctuations.” Econometrica 50, no. 6: 1345–1370. Lalonde, R., and D. Muir. 2007. “The Bank of Canada’s Version of the Global Economy Model (BoC-GEM).” Bank of Canada technical report 98. Leamer, E. E. 1978. Specification Searches: Ad Hoc Inference with Nonexperimental Data, Vol. 53. New York: John Wiley. Levin, A. T., V. Wieland, and J. Williams. 1999. “Robustness of Simple Monetary Policy Rules under Model Uncertainty.” In Monetary Policy Rules, edited by J. B. Taylor, 263–318. Chicago: University of Chicago Press. Levin, A., V. Wieland, and J. C. Williams. 2003. “The Performance of Forecast-Based Monetary Policy Rules under Model Uncertainty.” American Economic Review 93, no. 3: 622–645. Levin, A. T., and J. C. Williams. 2003. “Robust Monetary Policy with Competing Reference Models.” Journal of Monetary Economics 50, no. 5: 945–975. Lindé, J., F. Smets, and R. Wouters. 2016. “Challenges for Central Banks’ Macro Models.” In Handbook of Macroeconomics, Vol. 
2, edited by J. B. Taylor and H. Uhlig, 2185–2262. Amsterdam: Elsevier. Lombardo, G., and P. McAdam. 2012. “Financial Market Frictions in a Model of the Euro Area.” Economic Modelling 29, no. 6: 2460–2485. Lucas, R. J. 1976. “Econometric Policy Evaluation: A Critique.” Carnegie-Rochester Conference Series on Public Policy 1, no. 1: 19–46. McCallum, B. T. 1988. Robustness Properties of a Rule for Monetary Policy.” In Carnegie- Rochester Conference Series on Public Policy 29: 173–203. McCallum, B. T., and E. Nelson. 1999. “An Optimizing IS-LM Specification for Monetary Policy and Business Cycle Analysis.” Journal of Money, Credit and Banking 31, no. 3: 296–316. McKay, A., E. Nakamura, and J. Steinsson. 2016. “The Power of Forward Guidance Revisited.” American Economic Review 106, no. 10: 3133–3158. Meh, C. A., and K. Moran. 2010. “The Role of Bank Capital in the Propagation of Shocks.” Journal of Economic Dynamics and Control 34, no. 3: 555–576. Murchison, S., and A. Rennison. 2006. “ToTEM: The Bank of Canada’s New Quarterly Projection Model.” Bank of Canada technical report 97. Nakov, A. 2008. “Optimal and Simple Monetary Policy Rules with Zero Floor on the Nominal Interest Rate.” International Journal of Central Banking 4, no. 2: 73–127. Nier, E., H. Kang, T. Mancini, H. Hesse, F. Columba, R. Tchaidze, and J. Vandenbussche. 2013. “The Interaction of Monetary and Macroprudential Policies.” IMF background paper.
Model Uncertainty in Macroeconomics 735 Orphanides, A., and V. Wieland. 1998. “Price Stability and Monetary Policy Effectiveness When Nominal Interest Rates Are Bounded at Zero.” Federal Reserve System finance and economics discussion series 35. Orphanides, A., and V. Wieland. 2000. “Efficient Monetary Policy Design Near Price Stability.” Journal of the Japanese and International Economies 14, no. 4: 327–365. Orphanides, A., and V. Wieland. 2013. “Complexity and Monetary Policy.” International Journal of Central Banking 9, no. 1: 167–204. Paez-Farrell, J. 2014. “Resuscitating the Ad Hoc Loss Function for Monetary Policy Analysis.” Economics Letters 123, no. 3: 313–317. Pesaran, H. M., and R. P. Smith. 2011. “Beyond the DSGE Straitjacket.” CESifo working paper 3447. Poloz, S., D. Rose, and R. Tetlow. 1994. “The Bank of Canada’s New Quarterly Projection Model (QPM): An Introduction.” Bank of Canada Review (Autumn): 23–38. Quint, D., and P. Rabanal. 2014. “Monetary and Macroprudential Policy in an Estimated DSGE Model of the Euro Area.” International Journal of Central Banking 10, no. 2: 169–236. Ratto, M., W. Roeger, and J. in’t Veld. 2009. “QUEST III: An Estimated Open-Economy DSGE Model of the Euro Area with Fiscal and Monetary Policy.” Economic Modelling 26, no. 1: 222–233. Reifschneider, D., and J. C. Williams. 2000. “Three Lessons for Monetary Policy in a Low- Inflation Era.” Journal of Money, Credit and Banking 32, no. 4: 936–966. Romer, P. 2016. “The Trouble with Macroeconomics.” Speech. Commons Memorial Lecture, Omicron Delta Epsilon Society, January 5. Rotemberg, J., and M. Woodford. 1997. “An Optimization-Based Econometric Framework for the Evaluation of Monetary Policy.” NBER Macroeconomics Annual 12: 297–361. Rudebusch, G., and L. E. Svensson. 1999. “Policy Rules for Inflation Targeting.” In Monetary Policy Rules, edited by J. B. Taylor, 203–262. Chicago: University of Chicago Press. Schmidt, S. 2013. 
“Optimal Monetary and Fiscal Policy with a Zero Bound on Nominal Interest Rates.” Journal of Money, Credit and Banking 45, no. 7: 1335–1350. Schmidt, S., and V. Wieland. 2013. The New Keynesian Approach to Dynamic General Equilibrium Modeling: Models, Methods and Macroeconomic Policy Evaluation, Vol. 1 of Handbook of Computable General Equilibrium Modeling, edited by P. Dixon and D. Jorgenson, 1439–1512. Amsterdam: Elsevier. Schmitt-Grohé, S., and M. Uribe. 2007. “Optimal Simple and Implementable Monetary and Fiscal Rules.” Journal of Monetary Economics 54, no. 6: 1702–1725. Schorfheide, F., D. Song, and A. Yaron. 2014. “Identifying Long-Run Risks: A Bayesian Mixed- Frequency Approach.” NBER working paper 20303. Sims, C. A. 1980. “Macroeconomics and Reality.” Econometrica 48, no. 1: 1–48. Sims, C. 2002. “The Role of Models and Probabilities in the Monetary Policy Process.” Brookings Papers on Economic Activity 33, no. 2: 1–62. Smets, F., A. Warne, and R. Wouters. 2014. “Professional Forecasters and Real- Time Forecasting with a DSGE Model.” International Journal of Forecasting 30, no. 4: 981–995. Smets, F., and R. Wouters. 2003. “An Estimated Dynamic Stochastic General Equilibrium Model of the Euro Area.” Journal of the European Economic Association 1, no. 5: 1123–1175. Smets, F., and R. Wouters. 2007. “Shocks and Frictions in US Business Cycles: A Bayesian DSGE Approach.” American Economic Review 97, no. 3: 586–606. Stähler, N., N. Gadatsch, and K. Hauzenberger. 2014. “Getting into GEAR: German and the Rest of Euro Area Fiscal Policy during the Crisis.” Evidence-Based Economic Policy Conference, German Economic Association, Hamburg.
736 Binder, Lieberknecht, Quintana, and Wieland Stähler, N., and C. Thomas. 2012. “FiMod—A DSGE Model for Fiscal Policy Simulations.” Economic Modelling 29, no. 2: 239–261. Stevens, G. V., R. Berner, P. Clark, E. Hernandez-Cata, H. Howe, and S. Kwack. 1984. “The U.S. Economy in an Interdependent World: A Multicountry Model.” Federal Reserve System technical report. Svensson, L. E. O. 1997a. “Inflation Forecast Targeting: Implementing and Monitoring Inflation Targets.” European Economic Review 44, no. 6: 1111–1146. Svensson, L. E. O. 1997b. “Optimal Inflation Targets, ‘Conservative’ Central Banks, and Linear Inflation Contracts.” American Economic Review 87, no. 1: 98–114. Taylor, J. B. 1979. “Estimation and Control of a Macroeconomic Model with Rational Expectations.” Econometrica 47, no. 5: 1267–1286. Taylor, J. B. 1993a. “Discretion versus Policy Rules in Practice.” Carnegie-Rochester Conference Series on Public Policy 39, no. 1: 195–214. Taylor, J. B. 1993b. Macroeconomic Policy in a World Economy. New York: W. W. Norton. Taylor, J. B. 2000. “Reassessing Discretionary Fiscal Policy.” Journal of Economic Perspectives 14, no. 3: 21–36. Taylor, J. B., and V. Wieland. 2012. “Surprising Comparative Properties of Monetary Models: Results from a New Model Database.” Review of Economics and Statistics 94, no. 3: 800–816. Theil, H. 1958. Economic Forecasts and Policy. Amsterdam: North-Holland. Tinbergen, J. 1936. “An Economic Policy for 1936.” In Jan Tinbergen, Selected Papers, edited by L. K. L. H. Klaassen, and H. Witteveen, 37–84. Amsterdam: North-Holland. Tinbergen, J. 1952. On the Theory of Economic Policy. Amsterdam: North-Holland. Townsend, R. M. 1979. “Optimal Contracts and Competitive Markets with Costly State Verification.” Journal of Economic Theory 21, vol. 2: 265–293. Turner, A. 2010. “What Do Banks Do, What Should They Do and What Public Policies Are Needed to Ensure Best Results for the Real Economy?” Lecture. Cass Business School. Villa, S. 2016. 
“Financial Frictions in the Euro Area and the United States: A Bayesian Assessment.” Macroeconomic Dynamics 20, no. 5: 1313–1340. Vlcek, J., and S. Roger. 2012. “Macrofinancial Modeling at Central Banks: Recent Developments and Future Directions.” IMF working paper 12/21. Walsh, C. E. 2003. Monetary Theory and Policy, 2nd ed., Vol. 1.Cambridge: MIT Press. Wieland, V., and M. Wolters. 2013. “Forecasting and Policy Making.” Handbook of Economic Forecasting 2: 239–325. Wieland, V., E. Afanasyeva, M. Kuete, and J. Yoo. 2016. “New Methods for Macro–Financial Model Comparison and Policy Analysis.” In Handbook of Macroeconomics, Vol., edited by J. B. Taylor and H. Uhlig, 1241–1319. Amsterdam: Elsevier. Wieland, V., T. Cwik, G. J. Müller, S. Schmidt, and M. Wolters. 2012. “A New Comparative Approach to Macroeconomic Modeling and Policy Analysis.” Journal of Economic Behavior & Organization 83, no. 3: 523–541. Wieland, V., and M. H. Wolters. 2011. “The Diversity of Forecasts from Macroeconomic Models of the US Economy.” Economic Theory 47, nos. 2–3: 247–292. Woodford, M. 2011. Interest and Prices: Foundations of a Theory of Monetary Policy. Princeton: Princeton University Press.
chapter 24
What Has Publishing Inflation Forecasts Accomplished? Central Banks and Their Competitors

Pierre L. Siklos
24.1 Introduction

Perhaps one of the most important elements in efforts by central banks around the world to become more transparent has been their willingness to provide the public with numerical expressions of the economic outlook. Moreover, since these same central banks have staked a considerable part of their reputation on their ability to maintain low and stable inflation rates, there has naturally been considerable emphasis on forecasts of inflation. This is, of course, especially true in so-called inflation-targeting economies because their remit is explicitly based on a numerical objective. Nevertheless, a case can also be made that a similar explanation holds for many other central banks, either because price stability is mandated or because there is a general understanding, often publicized by central banks themselves, of what inflation rate is being aimed for.
I am grateful to the Hoover Institution and the Bank of Japan’s Institute for Monetary and Economic Studies for their financial support. I am also grateful to several central banks, the Bank for International Settlements, and the Bank of Japan for their assistance in providing me with some of the data collected for this study. Earlier versions of this chapter were presented at an invited session of the 8th International Conference on Computational and Financial Econometrics (Pisa), the Japanese Centre for Economic Research, the Bank of Japan, the University of Stellenbosch, and the Zurich Workshop on the Economics of Central Banking. I am also grateful to two anonymous referees for comments on earlier drafts.
738 Siklos

The behavior of inflation worldwide over the past decade or so, especially since the onset of the global financial crisis (GFC), has arguably been the focus of intense debate, particularly in monetary policy circles. The profession as well as central bankers remain divided over whether the dynamics of inflation are reasonably well explained by a link to past and anticipated future inflation together with the degree of economic slack (e.g., see Mavroeidis, Plagborg-Møller, and Stock 2014). Indeed, the factors believed to drive inflation have also waxed and waned over the years, with exchange rates, changes in liquidity, the choice of policy regimes, and, more recently, economic slack or weak wage growth among the principal candidates used to explain low inflation and, by implication, expectations that inflation will continue to remain low. A common feature in much of the extant empirical work that attempts to capture the role of expectations is the reliance on point forecasts, such as those published by professional forecasters, collected from surveys of households or firms, or produced by public institutions or agencies and, more recently, by central banks. Typically, however, studies continue to rely on forecasts from a single source or emphasize the importance of point forecasts. This can be useful when attempting to understand the forecasting process, as in Capistrán and Timmermann (2008). However, unless policymakers consider forecasts from a wide variety of sources, a more representative interpretation of dispersion about the outlook for inflation may well be ignored. Put differently, a focus on point forecasts can mislead the public into believing that there is greater consensus about future inflation than is actually the case. Indeed, a variety of explanations have been proposed for the formation of expectations.
These range from Mankiw and Reis’s (2002) sticky information hypothesis, where information about macroeconomic fundamentals spreads slowly (see also Carroll 2003), to Sims’s (2015) rational inattention, where agents do not always react to new information that is observed. Irrespective of the hypothesis in question, forecasters will disagree about the inflation outlook. This chapter investigates the evolution of inflation forecast disagreement and, in particular, what happens when central bank forecasts serve as the benchmark. In doing so, I highlight the importance of moving beyond the mere reporting of point forecasts. Additional insights about the role of monetary policy can also be obtained by looking at the dispersion of forecasts.1 Traditionally, some mean or median serves as the obvious benchmark. However, if information is indeed sticky, forecasters suffer from inattention, or the public simply relies on central bank forecasts because it is convenient to do so or in the belief that they are perhaps more likely to be accurate over time, then the benchmark against which forecast disagreement is evaluated will matter. Indeed, the possibility that the public coordinates its forecasts with those of the central bank has been used, at least in theory, to suggest that transparency need not always be beneficial (see Morris and Shin 2002).2 Even if this outcome is unlikely to emerge in practice (Svensson 2006), an analysis of disagreement may help identify some of the differences in dispersion between private sector and household forecasts and those of the central bank. Given the variety of explanations for the formation of expectations, the spread of inflation forecasts published by central banks ought to have considerable impact,
particularly if they are credible. Moreover, paralleling the growing availability of central bank forecasts has been the rise in the availability of macroeconomic and financial data. These developments raise questions about whether forecasts should lead to more or less disagreement about the inflation outlook. Dovern (2015), for example, claims, “it is surprising to observe high degrees of disagreement across agents when it comes to forecasting the future state of the economy” (16). However, it is far from clear why disagreement should not persist or even rise, given the range of theoretical explanations of expectations formation, let alone different interpretations of the weight and value of different information produced by central banks. Other explanations include substantial changes in the credibility of central banks over time (e.g., see Bordo and Siklos 2016) and varying opinions about the quality of central banks’ forecasting record (e.g., Bank of England Independent Evaluation Office 2015; Stockton 2012; IMF Independent Evaluation Office 2013; Ng and Wright 2013; Alessi et al. 2014, just to name a few).3 This chapter considers short-term sources of inflation forecast disagreement in nine advanced economies and takes account of the role of domestic versus global factors among other determinants. Although the focus is on the evolution of inflation forecast disagreement, other contributions are also made. First, forecast disagreement should be evaluated relative to several benchmarks, not only the mean of all available forecasts, as is traditionally the case.4 Second, a set of indicators of the content of central bank announcements is considered, inspired by widely used metrics in the psychology and political science literatures.
The aim is to illustrate their use and potential importance as well as to point out that there exists considerable scope to improve the quantification of the qualitative element of monetary policy. Third, I adapt an idea from the model confidence set approach (Hansen, Lunde, and Nason 2011) to obtain a quasi-confidence interval for inflation forecast disagreement. Finally, some forecasters may change their outlook, especially for data that are frequently revised (e.g., the output gap). Accordingly, an indicator of the impact of such revisions is also considered as a separate determinant of forecast disagreement. The main findings can be summarized as follows. Estimates of disagreement can be highly sensitive to the chosen benchmark, and central banks need not always be the benchmark of choice. The range of forecast disagreement can be high even when the level of disagreement is low. Hence, some form of uncertainty, perhaps economic policy or monetary policy uncertainty, may well be an additional factor at play. There is comparatively little evidence, other than perhaps among professional forecasters, that forecasts are strongly coordinated with those of the central bank. The evidence that central bank communication influences forecast disagreement is not strong. The quality of central bank communication may be one proximate explanation. However, there is little guidance about how best to measure the content of central bank press releases in particular. Finally, at least over the period considered, that is, covering the end of the Great Moderation and the GFC, there is consistent evidence that global factors impact forecast disagreement.
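The sensitivity of measured disagreement to the chosen benchmark can be made concrete with a minimal sketch. The measure below (root mean squared deviation of a cross-section of forecasts from a benchmark) and the forecast figures are purely illustrative assumptions, not the chapter’s actual estimator or data.

```python
import numpy as np

def disagreement(forecasts, benchmark):
    """Root mean squared deviation of a one-period cross-section
    of forecasts from a chosen scalar benchmark."""
    f = np.asarray(forecasts, dtype=float)
    return float(np.sqrt(np.mean((f - benchmark) ** 2)))

# Hypothetical one-year-ahead inflation forecasts, in percent
panel = [1.8, 2.0, 2.1, 2.3, 1.6]   # private sector panel
cb_forecast = 2.0                    # central bank's published forecast

d_mean = disagreement(panel, np.mean(panel))  # traditional benchmark
d_cb = disagreement(panel, cb_forecast)       # central bank as benchmark
```

Because the cross-sectional mean minimizes this deviation by construction, `d_mean` can never exceed `d_cb`; the gap between the two offers a rough gauge of how far the panel sits from the central bank’s own outlook, which is one way of reading the benchmark sensitivity emphasized above.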
Section 24.2 briefly considers the connection between inflation forecasts and monetary policy. Section 24.3 summarizes the relevant literature. Section 24.4 provides a working definition of forecast disagreement and describes the sources of data. Section 24.5 outlines the methodology used to investigate the determinants of inflation forecast disagreement. Section 24.6 describes the empirical results. Section 24.7 concludes.
24.2 Forecasts and Monetary Policy

Forecast accuracy is often established as a condition that holds on average over a particular sample period as opposed to each and every period of time.5 Yet many forecasts, especially ones that are published by public agencies, professional forecasters, and central banks, face a trade-off between accuracy and the underpinnings of such forecasts, that is, their logical consistency and plausibility. Otherwise, forecasts risk being poorly communicated, with deleterious consequences for the reputation of the forecaster (e.g., see Drehmann and Juselius 2013; Zellner 2002). It comes as no surprise, then, that “judgment” almost always plays a role. This is particularly true of forecasts published by central banks. Therefore, while useful, the exclusive emphasis on forecast accuracy can be misplaced.6 Questions have also been raised about the desirability of anchoring expectations to some inflation goal (e.g., inter alia, Coibion and Gorodnichenko 2015; Bernanke 2007; Constâncio 2015; Kamada, Nakajima, and Nishiguchi 2015; Strohsal and Winkelmann 2015).7 Therefore, investigating forecast disagreement ought also to provide some insights into forecasters’ sensitivity to short-term shocks. For example, whereas central banks might “look through” the effects of one-off supply or uncertainty shocks (e.g., oil prices or economic policy uncertainty), households and even professional forecasters might not (e.g., Lahiri, Peng, and Sheng 2015; Reifschneider and Tulip 2007; Boero, Smith, and Wallis 2015; Glas and Hartmann 2015). Once again, one is led to examining forecast disagreement for additional clues. Admittedly, a difficulty here is that the horizon over which these expectations apply plays an important role. Monetary policy still acts with long and variable lags.
Although central bankers tend to think that current policy decisions can take up to two years to take full effect, some view the appropriate horizon as much longer.8 I return to this issue below as data limitations hamper our ability to estimate measures of forecast disagreement beyond the short term, especially when cross-country comparisons are made. The role and importance of forecast disagreement are also underscored by the fact that it is one of the components in an indicator of economic policy uncertainty (e.g., see Baker, Bloom, and Davis 2016).9 Nevertheless, because the scope of economic policy uncertainty is vast, lately there has been an attempt to narrow the focus to monetary policy alone (Husted, Rogers, and Sun 2016). One source of disagreement over the outlook stems from the different weights forecasters might apply to global versus domestic shocks.
What Has Publishing Inflation Forecasts Accomplished? 741 This has led some research to place relatively greater emphasis on global over other determinants (inter alia, see Borio and Filardo 2007; Ciccarelli and Mojon 2010). While realized inflation across many parts of the world may look similar, leading in some cases to deceptively similar point inflation forecasts (see, however, below), in reality, differences in the stance of monetary policy, owing in no small part to the spread of unconventional monetary policy, are arguably much larger (e.g., Carney 2015a). Divergences in the effective stance of monetary policy could become more apparent from an analysis of forecast disagreement over time. Since 2008 in the United States, and later in several other advanced economies, the dramatic fall in central bank policy rates has also meant that the outlook is incompletely communicated via only forecasts. Instead, there has been increased emphasis and importance placed on what central bankers say (or write; see Blinder et al. 2008). Unlike point forecasts or the “fan charts” that have become the staple of many central bank monetary policy reports, there is greater room for interpretation about the meaning and intent of central banks’ statements that accompany their decisions and deliberations. As a result, there may be scope for disagreement in the inflation outlook to be influenced by communication from the monetary authority when there is little prospect for change in the policy rate or in the central bank’s estimate of the future outlook for inflation.
24.3 Related Literature on Forecast Disagreement As noted in section 24.1, forecast disagreement can be traced in part to different beliefs about how agents process information and the incentives to revise their expectations in light of new information.10 At least two rather different theoretical views on the subject are the sticky information and rational inattention interpretations of expectations formation. Andrade and Le Bihan (2013), for example, reject the rational inattention hypothesis but also find some difficulty with the sticky information model.11 In addition, a hybrid model that combines various hypotheses of expectations formation also fails to fit the data well. Clearly, there continues to be scope for improving our understanding of what drives inflation expectations in particular. Nevertheless, it is also clear that the lack of theoretical consensus is also reflected in how forecast disagreement is measured (also see Mokinski, Sheng, and Yang 2015 for a recent survey). Glas and Hartmann (2015) consider the tension between forecast disagreement and uncertainty and rely on data from the Survey of Professional Forecasters (SPF) conducted by the European Central Bank (ECB) to show that rising inflation uncertainty typically precedes a deterioration of forecasting performance, while disagreement is primarily determined by the state of the macroeconomy. Bachmann, Elstner, and Sims (2013) also report, based on the German Ifo12 Survey of Business Climate, that forecast errors are correlated with forecast dispersion and, unlike some studies to be
cited below, conclude that uncertainty and disagreement may be treated as proxies for each other. More recently, there has been a shift in emphasis toward the behavior of inflation forecasts obtained from household surveys and qualitative forecasts more generally (e.g., see Maag 2009; Dräger and Lamla 2016). At a theoretical level, the link between uncertainty and disagreement has also attracted considerable interest.13 Lahiri, Peng, and Sheng (2015) make the case that uncertainty represents one element of disagreement, but it is not the only one. They then develop what they refer to as theoretically sound measures of disagreement and uncertainty but assume that forecast errors are stationary. The empirical evidence is far from reaching a consensus on the time-series properties of forecast errors.14 Nevertheless, the authors are correct in pointing out that there exists a common component across forecasts or forecast errors. Data availability and data structure also play a role in the ability to separately measure forecast uncertainty and disagreement as argued in Boero, Smith, and Wallis (2015).15 They conclude, using data from the Bank of England's Survey of External Forecasters, that forecast disagreement can be a useful proxy for uncertainty but only when these measures of disagreement exhibit large changes. Clements and Galvão (2014) propose a distinction between ex ante and ex post measures of uncertainty, the latter being determined by realized data while the former is determined by models or probabilistic considerations, and conclude, for example, that ex ante uncertainty tracks ex post uncertainty well when the forecast horizon is short. Jurado, Ludvigson, and Ng (2015) also consider the dispersion versus uncertainty concepts with the objective of empirically identifying, for the United States, salient uncertainty "events." As far as the sample covered by the present study is concerned, the most important such event is the GFC of 2007–2009. 
Nevertheless, they also find that uncertainty rises in recessions as well as when the forecast horizon lengthens. The authors interpret uncertainty as the common latent factor among individual measures of uncertainty. Surveying empirical studies that examine varieties of forecasts, it is still the case that research tends to rely primarily on US data. This is not surprising given the number of published forecasts and the span of available data that can be brought to bear on the problem being investigated (also see below). Typically, investigators will resort to the SPF or forecasts published by the Federal Reserve (i.e., Greenbook or Federal Open Market Committee [FOMC]) and occasionally Blue Chip or a survey (e.g., University of Michigan Survey). While accuracy and efficiency of forecasts continue to be the focus of some studies (e.g., Chang and Hanson 2015; IMF Independent Evaluation Office 2013), attention has turned in recent years to asking whether the data are also informative about the forecasters’ knowledge about how monetary policy functions and whether their forecasts are consistent with core economic relations (e.g., a Taylor rule, a New Keynesian Phillips curve). Studies in this vein include that by Carvalho and Nechio (2014), who report that households’ expectations are consistent with a simple Taylor rule type of specification but not some of the other macroeconomic relations they examine. Dräger (2015) considers
the case of Swedish households, while Dräger, Lamla, and Pfajfar (2016) explore a range of US-based forecasts and find that households' forecasts are less consistent with theoretical propositions from macroeconomics than ones produced by professional forecasters. Kamada, Nakajima, and Nishiguchi (2015), relying on Japanese data, report that expectations differ according to whether the household is or is not well informed, based on an opinion survey conducted by the Bank of Japan. Dovern (2015) attempts to go beyond a unidimensional analysis of forecast disagreement by proposing a multivariate version. However, his analysis is restricted to the SPF and Consensus Economics forecasts. Moreover, the multivariate measure assumes that forecasters effectively use the same model. Finally, it is unclear whether a multivariate indicator of this kind actually provides new insights about the evolution of forecast disagreement over time, at least compared to existing estimates (e.g., see Siklos 2013). Other themes that have attracted attention recently include the role that news plays in influencing expectations. Bauer (2015), for example, uses the Blue Chip and SPF forecasts to estimate their sensitivity to macroeconomic news and concludes that a policy of targeting inflation contributes to reducing the volatility of inflation expectations and, therefore, represents an effective anchoring device.16 Anchoring of inflation expectations among households in Japan is the focus of the studies by Kamada, Nakajima, and Nishiguchi (2015) and Nishiguchi, Nakajima, and Imabukko (2014). These studies report that central bank announcements (e.g., the introduction of quantitative and qualitative easing, or QQE) can shift the distribution of expectations toward the announced objective.17 Central bank communication also figures prominently in Dräger, Lamla, and Pfajfar (2016). 
They rely on this signal as a device to ascertain how households versus professionals respond to such signals. The authors conclude that while households' expectations formation processes are generally less consistent with macroeconomic theory than those employed by professional forecasters, efforts by central banks to become more transparent have narrowed the gap between forecasters. Strohsal, Melnick, and Nautz (2015) and Strohsal and Winkelmann (2015) also consider the anchoring issue and the role of news effects and reach the strong conclusion, in the US case, that inflation expectations have been almost "perfectly" anchored since 2004. Other than in 2008, the central banks investigated18 were able to control inflation expectations. Coibion and Gorodnichenko (2015) argue that the anchoring of expectations is a double-edged sword, since a sharp change in, say, commodity prices (e.g., oil) ought to lead to a revision of expectations, even if only for a short time; otherwise, the risk of deflation in an already low inflation environment would be much greater. Hence, the US economy, where the 2008–2009 financial crisis originated, may well have benefited from the absence of perfect expectations anchoring among households.19 Nevertheless, it is also possible that this conclusion holds only for US data, as the evidence for Japan in Nishiguchi, Nakajima, and Imabukko (2014) and Eurozone data reported in Bachmann, Elstner, and Sims (2013) would seem to contradict the claim made by Coibion and Gorodnichenko (2015).
The anchoring of inflation expectations is also relevant for monetary policy more generally, especially since an increasing number of central banks drove their policy rates to, or near, the zero or effective lower bound (ZLB or ELB)20 and introduced additional monetary policy loosening measures generally referred to as quantitative easing (QE). In particular, the current constellation of global economic slack and persistently low inflation21 would seem to offer the opportunity for monetary policy to become "irresponsible" by permitting observed expectations of inflation to rise so that they are safely away from deflation (e.g., see Woodford 2012). Others counter that a central bank's reputation is at stake, and consequently, an unanchoring of expectations might result (e.g., Koo 2009). As noted above, the goal of anchoring inflation expectations is possibly the most cherished one among central bankers next to the maintenance of financial stability, especially following the events since 2007. It is important to stress, as noted in section 24.1, that there is no consensus on the horizon over which expectations ought to be anchored. What about the influence of central bank forecasts on the forecasts of others? As noted above, the theoretical predictions of Morris and Shin (2002) are difficult to test directly and, in any case, have been criticized by others as unrealistic (e.g., Svensson 2006). The literature that investigates the performance of central bank forecasts is generally restricted to the US experience for reasons already stated, though studies referred to earlier for the United Kingdom, Sweden, and the Eurozone, notably, are now also available. Other than Siklos (2013), only Hubert (2015) provides recent evidence for Sweden, the United Kingdom, Canada, Japan, and Switzerland.22
24.4 Measuring Forecast Disagreement, Uncertainty, and Data As noted above, there is no universally agreed-upon measure of inflation forecast disagreement. Inflation is defined here as the annual rate of change in a consumer price index (CPI) as this is the purchasing power indicator for which forecasts are most frequently published.23 The results reported below consider the squared deviations measure.24 Let d_{th}^{j} represent forecast disagreement at time t, over a forecast of horizon h, for economy j. Then

d_{th}^{j} = \frac{1}{N_{j}-1} \sum_{i=1}^{N_{j}} \left( F_{ith}^{j} - \bar{F}_{gth}^{j} \right)^{2},   (1)

where F is the inflation forecast, N_{j} is the number of forecasts, i identifies the forecast, and \bar{F}_{gth}^{j} represents the mean forecast value across forecasting groups g (to be defined in greater detail below) in economy j. Equation 1 is a measure of forecast dispersion. The standard deviation associated with d_{th}^{j} is then normalized to produce values that range between 0 and 1. Accordingly, the following expression is reported as

d_{th}^{j} = \sqrt{ \frac{1}{N_{j}-1} \sum_{i=1}^{N_{j}} \left( F_{ith}^{j} - \bar{F}_{gth}^{j} \right)^{2} },   (2)

suitably normalized.25 Cross-economy comparisons are then more easily made. Forecasts can be grouped in a variety of ways to generate forecast combinations. These include ones prepared by central banks (CB), forecasts conducted among households and businesses (S), a set of forecasts (P) by public agencies (i.e., Organization for Economic Cooperation and Development [OECD], International Monetary Fund [IMF], Consensus), as well as a group consisting of professional (PR) forecasts (e.g., Consensus, SPF). Mean values of d are then calculated for each economy j in the data set. Grouping forecasts is likely to be useful for a variety of reasons. For example, some of the data used in this study are projections, and others are actual forecasts. Moreover, the assumptions and models (whether of the implicit or explicit variety) used to generate inflation forecasts are also likely to differ across the available sources. Not to be forgotten is the considerable evidence that favors simple forecast combinations over other forms of forecast aggregation or even forecasts by specific forecasters (e.g., see Timmermann 2006). Of particular interest in this study is the case where the benchmark is the forecast published by the central bank, that is, F_{CB,th}^{j}. The potential connection between disagreement and uncertainty was raised earlier. Lahiri, Peng, and Sheng (2015; also see references therein) outline the requirements under which measures of disagreement can underestimate forecast uncertainty or, rather, the conditions under which the two move in parallel with each other. Boero, Smith, and Wallis (2015) cover much the same ground and report that forecast disagreement and uncertainty are reasonable proxies for each other, especially when the former exhibits large and frequent changes. Among the difficulties researchers encounter when attempting to disentangle the two concepts is that most forecasts are aggregated. 
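To make the dispersion measure concrete, equation 2 and the normalization to the unit interval can be sketched in a few lines (a minimal illustration; the function names and sample forecasts are mine, not the chapter's):

```python
import numpy as np

def disagreement(forecasts, benchmark=None):
    """Dispersion of inflation forecasts around a benchmark (equation 2).

    forecasts : point forecasts F_ith for one economy, horizon, and period.
    benchmark : the central tendency F-bar_gth; defaults to the mean across
                forecasters, but a central bank forecast can be substituted.
    """
    f = np.asarray(forecasts, dtype=float)
    if benchmark is None:
        benchmark = f.mean()
    n = f.size
    return np.sqrt(np.sum((f - benchmark) ** 2) / (n - 1))

def normalize(series):
    """Rescale a series of disagreement estimates to the [0, 1] range
    so that cross-economy comparisons are easier."""
    s = np.asarray(series, dtype=float)
    return (s - s.min()) / (s.max() - s.min())

# Hypothetical one-year-ahead forecasts from four sources:
d = disagreement([2.0, 2.3, 1.8, 2.1])
```

Passing a central bank forecast as the benchmark reproduces the case, highlighted above, where dispersion is measured around F_{CB,th}^{j} rather than around the overall mean.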
Even forecasts that have a probabilistic element can be classified into arbitrary bins. Hence, the precise information needed to measure a “pure” form of uncertainty is unobserved. For these reasons, the present study focuses on disagreement as proxied by a measure of dispersion around different candidates for some central tendency.26 The sampling frequency of the raw data ranges from monthly to semiannual forecasts. The focus is on the short-term inflation outlook, namely, the one-year-ahead horizon, although the precise horizon can be a little longer depending on the data source (see below). Hence, h in equation 2 is set to 1. Moreover, forecasts are either of the fixed event (e.g., a forecast for inflation for a particular calendar year) or a fixed horizon (e.g., one quarter or one year ahead) variety. Hence, many of the forecasts used below end up covering a period of up to two calendar years. Since most monetary policy actions are believed to take about two years to influence economic activity, including inflation, the time horizon is appropriate for an analysis of forecast disagreement. Fixed event data are converted into a fixed horizon using a simple procedure.27 All raw data, however, were converted to the quarterly frequency to facilitate estimation of the macroeconomic determinants of equation 2. Data at the monthly or semiannual
frequencies are converted to quarterly data via quadratic-match averaging.28 Finally, survey data need to be converted from index form into an inflation rate. Two approaches are often employed, namely, the regression and probability methods. The former is associated with the work of Pesaran (1985; 1987), while the latter is best known from the work of Carlson and Parkin (1975). Both techniques are used, and the mean of the two resulting series serves as the proxy for inflation expectations or forecasts from the relevant households and firms data (see also Siklos 2013). Ideally, I would also have liked to collect data on medium-term to long-term inflation forecasts for all of the nine economies considered here. However, other than Consensus forecasts and some forecasts from the SPF, it proved to be impossible to obtain a complete, let alone comparable, set of forecasts five to ten years out for all of the forecasting groups analyzed here. Nine economies are examined, five of which explicitly target inflation. The inflation-targeting countries are Australia (Reserve Bank of Australia, RBA), Canada (Bank of Canada, BoC), New Zealand (Reserve Bank of New Zealand, RBNZ), Sweden (Sveriges Riksbank, SR or Riksbank), and the United Kingdom (Bank of England, BoE). 
The remaining economies are not considered to have adopted inflation targeting, though all of them aim for price stability and have even indicated a numerical objective that they aim to meet as a means of guiding inflationary expectations: the Eurozone (European Central Bank, ECB), Japan (Bank of Japan, BoJ), Switzerland (Swiss National Bank, SNB), and the United States (Federal Reserve System, Fed).29 The full sample, before any data transformations are applied, is first quarter 1999 to fourth quarter 2014.30 Macroeconomic and financial time series were obtained from the International Financial Statistics CD-ROM (June–August 2015 editions), the Bank for International Settlements (BIS), the OECD release data and revisions database (http://stats.oecd.org/mei/default.asp?rev=1), and the databases of the individual central banks. They include an output gap, both domestic and global, obtained by applying a Hodrick-Prescott (H-P) filter to the log of real GDP;31 the price gap, namely, the difference between observed and core inflation rates;32 and an indicator of commodity price inflation (i.e., inflation in oil prices; Brent crude price per barrel). Finally, to capture the possible effects of uncertainty, the VIX is added.33 Table 24.1 updates details about the number and types of forecasts that were also the basis of the results in Siklos (2013). A total of eighty-three forecasts from a variety of sources are used. The majority of them (forty-three) are from professionals or various international institutions such as the IMF (the World Economic Outlook [WEO]) and the OECD. Professional forecasts include the mean forecast from Consensus, forecasts collected from the Economist, as well as the SPF for the United States and the Eurozone. All nine central banks in the data set publish inflation forecasts.34 More than a third of the forecasts (twenty-seven) are obtained from households and firms surveys. 
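The Carlson-Parkin probability method mentioned above can be sketched as follows. This is an illustrative implementation under the standard normality assumption; the function name and the indifference-band width delta are my own choices, and in practice delta is calibrated (for example, so that mean expectations match mean realized inflation over the sample):

```python
from statistics import NormalDist

def carlson_parkin(share_up, share_down, delta=0.5):
    """Back out mean expected inflation from qualitative survey shares
    using the Carlson-Parkin (1975) probability method.

    Assumes individual expectations are N(mu, sigma) and that a respondent
    answers "no change" when expected inflation lies inside the
    indifference band (-delta, +delta). The delta used here is purely
    illustrative; it is normally calibrated to the data.
    """
    z = NormalDist().inv_cdf
    z_down = z(share_down)        # standardized threshold: (-delta - mu) / sigma
    z_up = z(1.0 - share_up)      # standardized threshold: ( delta - mu) / sigma
    sigma = 2.0 * delta / (z_up - z_down)
    mu = delta - sigma * z_up
    return mu

# 60 percent expect prices to rise, 10 percent expect them to fall:
pi_e = carlson_parkin(0.60, 0.10)
```

When the "up" and "down" shares are equal, the implied mean expectation is zero, as the symmetry of the normal distribution requires.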
Short-term forecasts are important, since they can signal the emergence of some underlying shift in the credibility of monetary policy. Blinder et al. (2008) point out that “short-run” communication, such as the release of an inflation forecast, is likely to have
Table 24.1 Numbers of Forecasts and Forecast Types

Economy           Total   Survey type   Central bank
Australia           8          3             1
Canada              7          1             1*
Eurozone            9          3             1
Japan              12          6             2**
New Zealand         7          1             1
Sweden              8          3             1
Switzerland         6          1             1
United Kingdom     12          5             3***
United States      14          4             2****
Totals             83         27            13

Note: * BoC's baseline forecast. ** Two versions of the BoJ Monetary Policy Committee forecast. *** BoE unconditional and conditional forecasts as well as BoE staff forecasts. **** Greenbook and FOMC forecasts. Professional and public forecasts are the remaining forecasts. The latter are forecasts published by government or international agencies.
a wide variety of effects, and this could be revealed in the behavior of forecast disagreement. Moreover, short-term forecasts are precisely those that are likely to be the focus of transparent central banks. Finally, since the study aims to determine the sensitivity of expectations to central bank announcements, a focus on the one-year horizon is warranted. When central banks loosened monetary policy in response to the Great Recession, verbal forms of communication became increasingly important as a means of signaling the stance of monetary policy. This is also highlighted by the fact that even if the stance of monetary policy has changed, policy rates in many advanced economies hardly changed at all. At the same time, language was carefully crafted to signal policy easing in the foreseeable future or at least until there are clear signs that the state of the economy improves and, with it, risks of inflation exceeding its objective.35 In such an environment, the distinction between positive and negative news announcements is germane to a study of forecast disagreement. Of course, incorporating the narrative element of monetary policy is not new (e.g., see Romer and Romer 1989; 2004), but the manner in which these effects are incorporated does differ across studies (e.g., see Lombardi, Siklos, and St. Amand 2019). I evaluate the content of monetary policy announcements by developing a proxy for the tone of central bank communication. I evaluate the conduct of monetary policy by generating a vector of variables that quantifies the content of the language used in central bank press releases. Typically, researchers have attempted to interpret whether the monetary authority aims to tighten or loosen policy based on a reading and subsequent
assessment of the bias in central bank announcements, typically turning these evaluations into binary dummy variables. Instead, I use the DICTION 6.0 algorithm (see Hart, Childers, and Lind 2013) to quantify the tone of communication emanating from central banks.36 Alternatively, DICTION can be viewed as an algorithm that collects ideas that are expressed in a document. Next, I compute an idea density indicator of the number of ideas per, say, ten words.37 To remain consistent with the relevant literature on central bank communication, I refer to the idea density measure as an indicator of the content of central bank communication. The DICTION algorithm transforms a collection of words (i.e., "bags of words") into a numerical indicator of the content or tenor of the document. This relies on a dictionary of words that convey meaning along various dimensions. There is no unique way of doing so, and indeed, the selection of words used to create such indicators can be crucial (e.g., see Loughran and McDonald 2016). In fact, a large number of algorithms exist, all claiming to accurately quantify the content of documents. Assuming that consumers of inflation forecasts are varied, from the general public to the specialist, it is likely that the words used, especially by central banks, will be a mix of everyday and more specialized words. Expressions and terms are grouped into two categories: positive and negative.38 A positive tone signals current and anticipated improvements in economic conditions, and this is also interpreted as signaling higher future inflation. In contrast, a negative tone implies a weakening of economic conditions, and this is viewed as being conducive to the central bank expecting lower future inflation. The numerical value associated with the tone of central bank announcements (i.e., press releases and minutes if these are published) is based on the press release that accompanies central bank announcements of the policy rate setting. 
DICTION calculates the relative frequency with which words used in central bank communication are consistent with the chosen categories that are associated with either positive or negative sentiment about overall economic conditions. The words are drawn from a dictionary of more than ten thousand words. There is always a subjective element in the measurement of the content of any document, and central bank press releases and committee minutes, where available, are no exception. However, evaluating the tone of central bank announcements is likely made easier since central bankers are well known to carefully weigh the usage or removal of specific terms.39
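The mechanics of a dictionary-based tone score can be illustrated as follows. The word lists below are invented stand-ins: DICTION's actual dictionaries contain more than ten thousand words and are proprietary, so this sketch shows only the general idea of a relative-frequency tone indicator, not the measure used in this chapter:

```python
# Illustrative word lists only; DICTION's dictionaries are far larger.
POSITIVE = {"growth", "strengthened", "improving", "robust", "expansion"}
NEGATIVE = {"weakness", "deteriorated", "slowdown", "downside", "subdued"}

def net_tone(press_release: str) -> float:
    """Net tone of a central bank press release: relative frequency of
    positive minus negative terms, scaled by document length."""
    words = press_release.lower().split()
    pos = sum(w.strip(".,;:") in POSITIVE for w in words)
    neg = sum(w.strip(".,;:") in NEGATIVE for w in words)
    return (pos - neg) / max(len(words), 1)

tone = net_tone("Growth has strengthened, although downside risks remain.")
```

A positive score is read as signaling an improving outlook, and hence higher expected inflation, while a negative score signals the reverse, in line with the mapping described above.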
24.5 Econometric Specification Once forecast disagreement is estimated, it is natural to ask what kind of economic phenomena can explain its evolution over time. It is hypothesized that forecast disagreement can be explained by a few common determinants. Some are domestic, while others are considered global. In addition, it is conceivable that the level of forecast disagreement is influenced by the level of inflation or its forecasted value. The simplest way to proxy this is by relying on a mean forecast, either across forecast sources or across economies. I consider
the mean inflation forecast across the various available sources as well as an estimate that extracts the first principal component of these same forecasts. The same approaches are used to generate a common global factor based on forecasts across countries. Accordingly, the specification used to estimate the determinants of inflation forecast disagreement is written as
d_{t+h|t}^{j} = \alpha^{j} + \beta_{0} \Gamma_{t}^{j,D} + \beta_{1} \Gamma_{t}^{j,G} + \eta_{t+h|t}^{j},   (3)
where d is defined above (see equation 1) and \Gamma^{j,D} and \Gamma^{j,G} are, respectively, the vectors of domestic (D) and global (G) determinants of inflation forecast disagreement.40 The subscript t+h|t indicates that disagreement is based on the horizon t+h, where h = 1, that is, for a fixed horizon period of one year ahead conditional on information at time t. Note also that since a global component is included for each j, forecast disagreement in each economy is a function of both domestic and international determinants. As noted above, domestic factors include variables likely to impact not only forecasts but also forecast disagreement, including an indicator of uncertainty (here the VIX), oil prices, and a measure of the qualitative content of central bank press releases announcing the stance of monetary policy, as measured by their tone. When the domestic or global common factors are estimated via the mean or principal components with factor loadings that are constant,41 the common factor is obtained from the following estimated relationship:
\Gamma_{t}^{j,k} = p_{t}^{j,k} v_{t}^{j,k} + e_{t}^{j,k},   (4)

where \Gamma has been defined, k = D, G, p_{t}^{j,k} represents the factor loadings based on the time-series vector v_{t}^{j,k}, and e_{t}^{j,k} is a residual term.
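A sketch of the two ways of extracting the common factor in equation 4, the cross-sectional mean and the first principal component with constant loadings, follows; the data are simulated, and the chapter's estimation details may differ:

```python
import numpy as np

def common_factor(forecasts, method="mean"):
    """Common component of a T x N panel of inflation forecasts:
    either the cross-sectional mean or the first principal component
    of the standardized panel (constant factor loadings)."""
    X = np.asarray(forecasts, dtype=float)
    if method == "mean":
        return X.mean(axis=1)
    # First principal component: project the standardized data onto the
    # leading right-singular vector (the vector of loadings).
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    _, _, vt = np.linalg.svd(Z, full_matrices=False)
    loadings = vt[0]
    # Sign convention: make loadings predominantly positive, matching
    # the positive first-factor loadings reported for all nine economies.
    if loadings.sum() < 0:
        loadings = -loadings
    return Z @ loadings

rng = np.random.default_rng(0)
f = rng.normal(2.0, 0.3, size=(40, 5))  # hypothetical 40 quarters x 5 forecasters
gamma = common_factor(f, method="pca")
```

Either series can then serve as the domestic or global regressor \Gamma in equation 3.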
24.6 Empirical Results Figure 24.1 and figure 24.2 show CPI headline inflation rates as well as two estimates of one-year-ahead forecasts of inflation. In figure 24.1, inflation forecasts are proxied by the overall mean of all available forecasts for each one of the nine economies in the study (see table 24.1). In figure 24.2, the proxy for one-year-ahead inflation rates shown is the first principal component of inflation forecasts. Generally, estimates reveal that a single factor explains more than 70 percent of the variation in inflation forecasts (results not shown). However, for Australia, the United Kingdom, Sweden, and the United States, either two or three factors were estimated, although the additional factors explain less than 20 percent of the variation in available forecasts. Moreover, in all nine economies, the factor loadings for the first principal component are all positive. Due to the unbalanced nature of the data set for inflation forecasts, for the sake of comparison, the sample shown in both figures is from 2002 on, although forecasts begin in 1999.
Figure 24.1 Headline Inflation and the Mean of Inflation Forecasts
[Nine panels: Australia, Canada, Eurozone, Japan, New Zealand, Sweden, Switzerland, United Kingdom, and the United States; quarterly, 2002–2014.]
Notes: Inflation is the annualized log change in CPI headline and core (core excludes energy, food, and indirect taxes). The mean of all available forecasts is used.
Figure 24.2 Headline Inflation and the First Principal Component of Inflation Forecasts
[Nine panels as in figure 24.1; quarterly, 2002–2014.]
Notes: The first principal component is estimated over an unbalanced panel of all available inflation forecasts. Estimation is based on principal factors, with the number of factors estimated via the Kaiser-Guttman approach. Headline inflation is defined as in figure 24.1. For the United States, the full sample excludes the Greenbook forecasts and forecasts from the Federal Reserve Bank of Atlanta survey; for the United Kingdom, full-sample estimates exclude the Monetary Policy Committee staff estimate.
Both sets of forecasts respond to observed inflation, although estimates generated by the factor model follow observed inflation somewhat more closely. More often than not, inflation forecasts also move contemporaneously with headline inflation rates. However, there are a few instances, most notably around the GFC of 2008–2009, when forecasts lag behind inflation for a short period of time. Finally, there is little visual evidence that forecasts deviate persistently from observed inflation. Of course, by combining a wide range of forecasts, I am exploiting the well-known benefits of combining forecasts. Since there is a longer sample when mean inflation forecasts are used, the econometric evidence presented relies on mean inflation forecasts for the common factor. The conclusions reached are unaffected when the principal components proxy is employed. Relying on the small sample correction for the widely used Diebold-Mariano (1995) test, I am unable to reject the null that central bank forecasts are the same as the other forecast combinations considered for Australia, Japan, New Zealand, and Sweden. For Canada and the Eurozone, the only rejections are obtained vis-à-vis households and firms forecasts, with central bank forecasts proving to be superior. In the case of Switzerland, the United Kingdom, and the United States, there are frequent rejections, with the exceptions of public forecasts for the United Kingdom and households and firms forecasts for the United States. In the case where the null is rejected, central bank forecasts are deemed superior to the rest (results not shown). Hence, it is far from universally the case that reliance on central bank forecasts, based on their average accuracy, is the preferred choice. Figure 24.3 plots the cumulative revisions in the growth rate of the output gap. As noted above, output data are frequently revised and may well have an impact on forecast disagreement. Each quarterly vintage is revised every quarter. 
I constructed an output gap using the HP filter with a smoothing parameter of 100,000 and then calculated its growth rate on an annualized basis. The change in the output gap across vintages is then due to revisions in the data, which are summed over time. It is immediately observed from figure 24.3 that in most economies, revisions are positive pre-GFC; that is, successive revisions tended to enlarge the estimated output gap over time. In contrast, revisions turn negative post-GFC and may well be one reason unconventional monetary policies (e.g., QE) were introduced. Revisions begin to rise once again in most cases (Australia, Japan, and the Eurozone are three exceptions) beginning around 2012. It is conceivable, then, that (some) forecasters revise their forecasts based on this kind of new information, leading to changes in forecast disagreement. Figures 24.4 and 24.5 provide graphical evidence of the evolution of inflation forecast disagreement based on two different aggregations of the data set. Figure 24.4 provides an estimate of the mean of d (see equation 2) for each of the nine economies in the sample. Moreover, a quasi-confidence interval is also provided by estimating the range of estimates of forecast disagreement. Recall that there is no a priori reason to believe that the only benchmark is the overall mean inflation forecast, the traditional choice in these exercises. Since in most cases we do not observe the model(s)
[Figure 24.3 appears here: nine panels of accumulated quarterly-vintage data revisions (Australia, Canada, the Eurozone, Japan, New Zealand, Sweden, Switzerland, the United Kingdom, and the United States), 2000–2014.]
What Has Publishing Inflation Forecasts Accomplished? 753
Figure 24.3 Real-Time Revisions over Time
Note: The output gap is first estimated using an HP filter (smoothing parameter of 100,000). The change in growth rate of the output gap for quarterly vintages since 1999 (where available) is then evaluated and cumulated over time.
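The gap construction described in the note can be sketched as follows. The penalized-least-squares form of the HP filter is used with the text's smoothing parameter of 100,000; the data vintages below are synthetic stand-ins for the actual real-time series, which are not reproduced here:

```python
import numpy as np

def hp_trend(y, lam=100_000.0):
    """Hodrick-Prescott trend: tau = (I + lam * D'D)^{-1} y, where D is the
    (T-2) x T second-difference operator and lam = 100,000 as in the text."""
    y = np.asarray(y, float)
    T = y.size
    D = np.zeros((T - 2, T))
    for i in range(T - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]
    return np.linalg.solve(np.eye(T) + lam * (D.T @ D), y)

def output_gap(log_gdp, lam=100_000.0):
    """Output gap as the deviation of (100 x log) output from its HP trend."""
    return log_gdp - hp_trend(log_gdp, lam)

# toy example: revision of the gap between two data vintages of the same series,
# cumulated over time as in figure 24.3
rng = np.random.default_rng(1)
base = 100.0 * np.log(np.cumprod(1.0 + rng.normal(0.005, 0.01, 80)) * 100.0)
vintage_old = base
vintage_new = base + rng.normal(0.0, 0.1, 80)   # a later vintage revises the data
gap_rev = output_gap(vintage_new) - output_gap(vintage_old)
cumulative_revision = np.cumsum(gap_rev)
```

A useful sanity check on the filter is that it reproduces a linear trend exactly, since the second-difference penalty of a linear series is zero.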
used by the individuals or groups that provide inflation forecasts, and since all forecasters acknowledge that the forecasts of others, together with some judgment, influence how their expectations are formed, a total of six separate versions of inflation forecast disagreement were estimated. The resulting quasi-confidence interval is akin to the model confidence set approach but, in the present case, without explicit knowledge of the range of model specifications. The evidence suggests considerable variation in inflation forecast disagreement across economies and over time. Indeed, in most cases, there tend to be brief periods of sharply higher levels of disagreement followed, typically, by longer periods of low disagreement. Greater disagreement over the inflation outlook is clearly visible in almost every economy around the time of the GFC. However, there are other periods of higher forecast disagreement not directly associated with a financial crisis. For example, in the United States, there is rising disagreement in the period leading up to the tech bubble of 2001 and again during the phase when the Federal Reserve gradually tightens monetary policy until 2005. Similarly, the sharp fall in disagreement after the GFC is reversed during the period when QE policies are introduced by the Fed (i.e., QE2) in
[Figure 24.4 appears here: nine panels (Australia, Canada, the Eurozone, Japan, New Zealand, Sweden, Switzerland, the United Kingdom, and the United States), each plotting inflation forecast disagreement on a 0.0–1.0 scale over 1999–2014.]
Figure 24.4 Quasi-Confidence Intervals for Inflation Forecast Disagreement
Notes: Estimates based on equation 2 applied to all forecast groupings (all except central bank, central bank only, professionals, public, and households and firms forecasts). For details of each grouping, see the main text (an appendix is also available upon request).
[Figure 24.5 appears here: four panels (Central banks, Professional, Public, Survey), each plotting global inflation forecast disagreement on a 0.0–1.0 scale over 1999–2014.]
Figure 24.5 Inflation Forecast Disagreement: Global
Note: See notes to figure 24.4. Equation (2) is used to calculate disagreement by grouping different sources of forecasts across all of the nine economies in the study.
2010. In Japan, forecast disagreement actually exceeds levels reached during the GFC when Governor Kuroda launches QQE in 2013. Notice, too, that there is rising inflation forecast disagreement in the early 2000s, over the period when the BoJ is in the midst of QE1. Other illustrations of the impact of monetary policy decisions on forecast disagreement include the SNB's decision to target the Swiss franc's exchange rate in 2011 and the RBA's change of course in the setting of its policy rate (the cash rate) beginning in November 2011. Clearly, no matter how appropriate these unconventional policies are, there are obvious limits to central banks' ability to manage their impact on forecast disagreement. Whether better communication would have moderated some of these effects is as yet unclear (see below). It is also worthwhile remarking on the behavior of the quasi-confidence intervals. These tend to be wider the higher the level of forecast disagreement. Nevertheless, it is interesting to observe that in the United States, there is the appearance of a narrow range around the short-term inflation outlook at the height of the GFC. This is not the case in the other economies considered. In other words, US forecasters were in accord about the high level of disagreement about the inflation outlook. Similar to the point estimates, the quasi-confidence intervals can grow wider or narrower very quickly. Whether the bands shown can be likened to an estimate of forecast uncertainty is
unclear. Nevertheless, it should be pointed out that low levels of disagreement need not imply narrow quasi-confidence bands, as the examples of New Zealand, Australia, Canada, and Sweden demonstrate. It is interesting that for the Eurozone, the results are similar but not identical to those of López-Pérez (2016), who studies uncertainty in the ECB's SPF. The quasi-confidence intervals in figure 24.4 reveal that uncertainty rises as early as 2000, not 2001, but does decline in 2004. I also find a rise in 2005 and 2006 not reported in the same study. Thereafter, the results mirror those reported by López-Pérez. Figure 24.5 plots forecast disagreement on a global scale. Instead of the quasi-confidence bands, the point estimates for four different benchmarks, namely central banks, professional forecasters, forecasts of public agencies, and households and firms forecasts, are shown alongside the conventional approach of using the mean of all forecasts as the benchmark. The term global is used here because inflation forecasts across all nine economies are averaged (a similar result is obtained when the first principal component is used as the global inflation forecast proxy). The motivation, as discussed earlier, is the view that inflation has increasingly taken on a global dimension in recent years. It is clear that the selection of the benchmark has a considerable impact on forecast disagreement. Conventional calculations of forecast disagreement come closest to ones generated based on professional forecasts, while the least coherent benchmark is the one based on central bank forecasts. Households, firms, and public forecasts represent intermediate cases. Like their domestic counterparts (see figure 24.4), all show a rise in forecast disagreement around the time of the GFC, but whereas overall forecast disagreement has fallen sharply since then, forecast disagreement relative to the central bank benchmark remains elevated.
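Equation 2 itself is not reproduced in this excerpt, but the benchmark dependence just described can be illustrated with a generic dispersion-around-a-benchmark measure (root mean squared deviation of individual forecasts from a chosen benchmark series); the data below are synthetic:

```python
import numpy as np

def disagreement(forecasts, benchmark):
    """Dispersion of individual inflation forecasts around a benchmark series.
    forecasts: (T, N) array of N forecasters over T periods.
    benchmark: (T,) series, e.g. the mean of all forecasts or the central bank's.
    Returns a (T,) root-mean-squared-deviation series, one value per period."""
    dev = forecasts - benchmark[:, None]
    return np.sqrt(np.mean(dev**2, axis=1))

rng = np.random.default_rng(2)
T, N = 64, 12
common = 2.0 + 0.5 * np.sin(np.linspace(0, 6, T))       # shared inflation outlook
f = common[:, None] + rng.normal(0.0, 0.3, (T, N))      # individual forecasts
cb = common + 0.4                                       # an offset central bank view

d_mean = disagreement(f, f.mean(axis=1))    # conventional benchmark: overall mean
d_cb = disagreement(f, cb)                  # central-bank benchmark
```

Because the cross-sectional mean minimizes squared deviations period by period, any other benchmark, such as the central bank's own forecast, can only raise measured disagreement in this metric, which is one mechanical reason the choice of benchmark matters.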
Estimates of forecast disagreement also demonstrate a common response to the events of the early 2000s, when the tech bubble burst and concerns were expressed about the potential for deflation spreading outside Japan. Tables 24.2 and 24.3 present estimates based on equation 3. Both ask whether a few common determinants can be found that drive inflation forecast disagreement as a function of different benchmarks. To economize on space, three benchmarks are chosen: all forecasts excluding ones from the central bank, central bank forecasts, and households and firms forecasts. It is difficult to find common determinants of disagreement, but this is not surprising if different forecasts are thought to be driven by different information sets. Nevertheless, differentials between domestic and global output gaps, the gap between headline and core inflation, and average forecasts of inflation relative to a global mean forecast of inflation come closest to what could be termed common driving factors of inflation forecast disagreement.42 It is interesting to note that gaps in headline versus core inflation have relatively small effects on forecast disagreement. The only consistent exceptions are Australia and the Eurozone, where the gap raises disagreement in the former but reduces it in
Table 24.2a Determinants of Forecast Disagreement: All Forecasts Excluding Central Bank Forecasts

| Dep. Var. | AUS | CAN | CHE | EUR | GBR | JPN | NZL | SWE | USA |
|---|---|---|---|---|---|---|---|---|---|
| Constant | –0.061 (0.180) | 0.095 (0.173) | 0.160 (0.170) | 0.334 (0.098)* | 0.253 (0.266) | 0.775 (0.093)* | –0.084 (0.095) | 0.351 (0.143)** | –0.282 (0.155)*** |
| GAPQ-GLOBALGAP | 0.063 (0.023)* | –0.026 (0.049) | 0.053 (0.018)* | 0.161 (0.041)* | 0.098 (0.036)* | –0.008 (0.037) | 0.001 (0.018) | 0.022 (0.032) | –0.094 (0.037)* |
| PGAP | 0.181 (0.056)* | –0.042 (0.032) | –0.113 (0.091) | –0.124 (0.035)* | 0.009 (0.052) | –0.119 (0.060)*** | –0.012 (0.035) | 0.014 (0.032) | 0.080 (0.051) |
| MEAN-F_MEAN | 0.271 (0.218) | –0.132 (0.083) | –0.025 (0.117) | –0.067 (0.076) | 0.181 (0.096)*** | 0.241 (0.060)* | 0.235 (0.058)* | 0.271 (0.091)* | 0.306 (0.138)** |
| POS1D-NEG1D | 0.075 (0.215) | –0.055 (0.339) | 0.304 (0.828) | –0.292 (0.227) | –0.489 (0.270)*** | –0.094 (0.092) | 0.435 (0.204)** | –0.268 (0.273) | 0.205 (0.219) |
| OILPRICEPCH | –0.000 (0.001) | –0.001 (0.001) | 0.001 (0.002) | –0.000 (0.001) | 0.001 (0.001) | 0.002 (0.001)*** | 0.001 (0.001) | 0.001 (0.001) | –0.003 (0.002) |
| VIX | –0.001 (0.003) | 0.009 (0.002)* | 0.002 (0.004) | 0.003 (0.002) | 0.146 (0.004)* | 0.001 (0.003) | 0.005 (0.003)*** | 0.010 (0.004)* | 0.004 (0.004) |
| Real-time revisions | –0.015 (0.028) | –0.007 (0.017) | –0.18 (0.024) | –0.032 (0.013)** | –0.040 (0.016)** | –0.028 (0.016)*** | –0.017 (0.011) | 0.001 (0.012) | 0.016 (0.348) |
| R-squared | 0.409 | 0.504 | 0.390 | 0.641 | 0.707 | 0.415 | 0.427 | 0.353 | 0.420 |
| F-statistic | 4.749 | 6.969 | 4.108 | 12.266 | 16.563 | 4.860 | 5.119 | 3.734 | 4.970 |
| Prob(F-stat) | 0.004 | 0.000 | 0.001 | 0.000 | 0.000 | 0.000 | 0.003 | 0.000 | 0.000 |

Note: The title refers to the benchmark used to calculate disagreement (equation 2). Least squares estimation with heteroscedasticity- and autocorrelation-consistent (HAC) standard errors. The full sample is 2001Q1–2014Q4 when central bank communication variables are included, otherwise 1999Q1–2014Q4. * Statistically significant at the 1% level; ** at the 5% level; *** at the 10% level. GAPQ is the domestic output gap; GLOBALGAP is the global output gap; PGAP is the headline versus core inflation differential; MEAN is the arithmetic mean of all domestic inflation forecasts; F_MEAN is the global inflation forecast; POS1D-NEG1D is the differential between positive and negative sentiment in central bank press releases issued following a policy rate decision (times 10); OILPRICEPCH is the rate of change (annualized) in the United Kingdom Brent price of crude per barrel; VIX is the Chicago Board Options Exchange (CBOE) implied volatility of the S&P 500 index. Real-time revisions are the accumulated revisions in the growth rate of the output gap. AUS = Australia, CAN = Canada, CHE = Switzerland, EUR = Eurozone, GBR = United Kingdom, JPN = Japan, NZL = New Zealand, SWE = Sweden, USA = United States.
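The table notes specify least squares with heteroscedasticity- and autocorrelation-consistent (HAC) standard errors. A minimal Newey-West implementation on synthetic data, rather than the chapter's series, looks like this (the lag length of 4 is an assumption, a natural default for quarterly data):

```python
import numpy as np

def ols_hac(y, X, lags=4):
    """OLS with Newey-West (Bartlett-kernel) HAC standard errors, the kind of
    heteroscedasticity- and autocorrelation-consistent inference used for
    disagreement regressions. X should include a constant column."""
    y, X = np.asarray(y, float), np.asarray(X, float)
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    u = y - X @ beta
    g = X * u[:, None]                       # score contributions x_t * u_t
    S = g.T @ g
    for j in range(1, lags + 1):
        w = 1.0 - j / (lags + 1.0)           # Bartlett weights
        G = g[j:].T @ g[:-j]
        S += w * (G + G.T)
    cov = XtX_inv @ S @ XtX_inv
    return beta, np.sqrt(np.diag(cov))

# toy regression with serially correlated errors
rng = np.random.default_rng(3)
T = 200
x = rng.normal(size=T)
u = np.zeros(T)
for t in range(1, T):
    u[t] = 0.6 * u[t - 1] + rng.normal()     # AR(1) disturbance
y = 1.0 + 0.5 * x + u
X = np.column_stack([np.ones(T), x])
beta, se = ols_hac(y, X, lags=4)
```

The coefficient estimates are the ordinary OLS ones; only the standard errors change, which is why HAC inference matters for the significance stars reported in the tables.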
Table 24.2b Determinants of Forecast Disagreement: Central Bank Forecasts

| Dep. Var. | AUS | CAN | CHE | EUR | GBR | JPN | NZL | SWE | USA |
|---|---|---|---|---|---|---|---|---|---|
| Constant | –0.013 (0.313) | 0.211 (0.270) | 0.256 (0.177) | 0.375 (0.097)* | 0.005 (0.146) | 0.709 (0.113)* | 0.062 (0.099) | 0.234 (0.139)*** | –0.251 (0.144)*** |
| GAPQ-GLOBALGAP | 0.065 (0.059) | –0.132 (0.084) | 0.029 (0.018) | 0.181 (0.041)* | 0.048 (0.020)** | –0.017 (0.044) | –0.022 (0.019) | 0.033 (0.031) | –0.075 (0.034)** |
| PGAP | 0.302 (0.110)* | 0.035 (0.050) | –0.019 (0.091) | –0.115 (0.034)* | 0.033 (0.029) | –0.080 (0.071) | –0.032 (0.037) | 0.005 (0.031) | 0.067 (0.047) |
| MEAN-F_MEAN | 0.163 (0.415) | –0.253 (0.141)*** | 0.205 (0.117)*** | –0.076 (0.076) | 0.090 (0.053)*** | 0.377 (0.073)* | 0.204 (0.060)* | 0.137 (0.088) | 0.254 (0.128)** |
| POS1D-NEG1D | –0.091 (0.461) | 0.401 (0.480) | 0.793 (0.848) | –0.312 (0.225) | –0.228 (0.148) | 0.016 (0.109) | 0.292 (0.212) | –0.216 (0.264) | 0.247 (0.203) |
| OILPRICEPCH | –0.001 (0.002) | –0.004 (0.001)* | –0.000 (0.002) | 0.000 (0.008) | 0.000 (0.005) | 0.002 (0.001) | 0.002 (0.001) | –0.001 (0.001) | 0.001 (0.002) |
| VIX | 0.005 (0.006) | 0.007 (0.004)** | 0.006 (0.004) | –0.002 (0.002) | 0.012 (0.002)* | 0.005 (0.004) | 0.004 (0.003) | 0.008 (0.004)** | 0.003 (0.004) |
| Real-time revisions | –0.011 (0.066) | –0.033 (0.024) | –0.150 (0.023) | –0.035 (0.013) | –0.018 (0.089)** | –0.012 (0.018) | –0.038 (0.011)* | 0.002 (0.012) | 0.030 (0.017)*** |
| R-squared | 0.475 | 0.623 | 0.334 | 0.610 | 0.788 | 0.560 | 0.410 | 0.202 | 0.480 |
| F-statistic | 3.106 | 7.095 | 2.649 | 10.719 | 25.443 | 8.347 | 4.777 | 1.734 | 6.327 |
| Prob(F-stat) | 0.018 | 0.000 | 0.025 | 0.000 | 0.000 | 0.000 | 0.000 | 0.123 | 0.000 |

Note: See note to table 24.2a.
Table 24.2c Determinants of Forecast Disagreement: Households and Firms Forecasts

| Dep. Var. | AUS | CAN | CHE | EUR | GBR | JPN | NZL | SWE | USA |
|---|---|---|---|---|---|---|---|---|---|
| Constant | –0.208 (0.169) | 0.070 (0.162) | –0.152 (0.307) | 0.310 (0.095)* | 0.162 (0.288) | 0.681 (0.085)* | 0.051 (0.145) | 0.442 (0.162)* | –0.364 (0.126)* |
| GAPQ-GLOBALGAP | 0.036 (0.022)*** | –0.026 (0.044) | 0.073 (0.039)*** | 0.184 (0.040)* | 0.145 (0.039)* | 0.005 (0.034) | 0.001 (0.028) | 0.017 (0.036) | –0.090 (0.030)* |
| PGAP | 0.151 (0.053)* | 0.005 (0.029) | –0.248 (0.138)*** | –0.135 (0.034)* | –0.058 (0.056) | –0.092 (0.054)*** | 0.105 (0.054)*** | 0.015 (0.036) | 0.069 (0.041)** |
| MEAN-F_MEAN | 0.392 (0.205)*** | –0.073 (0.075) | –0.187 (0.176) | –0.062 (0.074) | 0.239 (0.104)** | 0.225 (0.054)* | –0.127 (0.088) | 0.379 (0.103)* | 0.365 (0.112)* |
| POS1D-NEG1D | 0.198 (0.202) | –0.028 (0.327) | –0.423 (13.04) | –0.242 (0.221) | –0.473 (0.292) | –0.066 (0.084) | 0.271 (0.310) | –0.214 (0.309) | 0.268 (0.177) |
| OILPRICEPCH | 0.000 (0.001) | –0.002 (0.001)** | 0.001 (0.003) | 0.001 (0.001) | 0.002 (0.001) | 0.001 (0.001) | –0.003 (0.001)** | 0.001 (0.002) | 0.000 (0.002) |
| VIX | –0.001 (0.003) | 0.007 (0.002)* | –0.002 (0.006) | –0.002 (0.002) | 0.013 (0.004)* | 0.001 (0.003) | 0.014 (0.004)* | 0.009 (0.004)** | 0.004 (0.003) |
| Real-time revisions | –0.003 (0.026) | –0.012 (0.015) | 0.075 (0.041)*** | –0.033 (0.013)* | –0.034 (0.017)*** | –0.021 (0.014) | 0.033 (0.017)** | 0.013 (0.014) | 0.022 (0.015) |
| R-squared | 0.384 | 0.454 | 0.281 | 0.602 | 0.663 | 0.462 | 0.384 | 0.385 | 0.579 |
| F-statistic | 4.277 | 5.352 | 1.287 | 10.369 | 13.515 | 5.899 | 4.272 | 4.298 | 9.448 |
| Prob(F-stat) | 0.001 | 0.000 | 0.300 | 0.000 | 0.000 | 0.000 | 0.001 | 0.001 | 0.000 |

Note: See note to table 24.2a.
Table 24.3a Determinants of Forecast Disagreement: Maximum Levels of Disagreement

| Dep. Var. | AUS-MAX | CAN-MAX | CHE-MAX | EUR-MAX | GBR-MAX | JPN-MAX | NZL-MAX | SWE-MAX | USA-MAX |
|---|---|---|---|---|---|---|---|---|---|
| Constant | –0.117 (0.197) | 0.171 (0.195) | 0.248 (0.195) | 0.363 (0.104)* | 0.402 (0.329) | 1.039 (0.108)* | –0.011 (0.147) | 0.442 (0.160)* | –0.237 (0.166) |
| GAPQ-GLOBALGAP | 0.074 (0.025)* | –0.033 (0.055) | 0.058 (0.021)* | 0.180 (0.044)* | 0.119 (0.045)* | –0.011 (0.044) | 0.007 (0.029) | 0.027 (0.036) | –0.087 (0.039)** |
| PGAP | 0.228 (0.061)* | –0.050 (0.035) | –0.090 (0.105) | –0.103 (0.037)* | –0.003 (0.064) | –0.105 (0.070) | 0.045 (0.055) | 0.014 (0.035) | 0.097 (0.055)*** |
| MEAN-F_MEAN | 0.187 (0.238) | –0.177 (0.094)*** | –0.015 (0.135) | –0.090 (0.081) | 0.152 (0.119) | 0.263 (0.070)* | 0.108 (0.090) | 0.276 (0.101)* | 0.337 (0.148)** |
| POS1D-NEG1D | 0.170 (0.235) | –0.119 (0.383) | 0.168 (0.951) | –0.325 (0.242) | –0.535 (0.333) | –0.348 (0.108)* | 0.374 (0.316) | –0.263 (0.305) | 0.193 (0.232) |
| OILPRICEPCH | –0.000 (0.001) | –0.001 (0.001) | –0.000 (0.002) | –0.000 (0.001) | 0.001 (0.001) | 0.000 (0.001) | –0.002 (0.001) | 0.000 (0.001) | –0.000 (0.002) |
| VIX | 0.004 (0.004) | 0.011 (0.003)* | 0.004 (0.004) | 0.000 (0.003) | 0.015 (0.005)* | 0.000 (0.004) | 0.012 (0.004)* | 0.010 (0.004)** | 0.004 (0.004) |
| Real-time revisions | 0.010 (0.030) | –0.018 (0.019) | –0.037 (0.027) | –0.034 (0.014)** | –0.057 (0.020)* | –0.004 (0.018)** | 0.020 (0.017) | 0.006 (0.014) | 0.014 (0.020) |
| R-squared | 0.481 | 0.550 | 0.392 | 0.610 | 0.657 | 0.478 | 0.311 | 0.294 | 0.419 |
| F-statistic | 6.352 | 8.397 | 4.138 | 10.724 | 13.141 | 6.275 | 3.094 | 2.849 | 4.951 |
| Prob(F-stat) | 0.000 | 0.000 | 0.001 | 0.000 | 0.000 | 0.000 | 0.001 | 0.014 | 0.000 |

Note: See note to table 24.2a. The benchmark is all forecasts. MAX is the highest level of disagreement at time t in each economy. See figure 24.4.
Table 24.3b Determinants of Forecast Disagreement: Mean Levels of Disagreement

| Dep. Var. | AUS-MEAN | CAN-MEAN | CHE-MEAN | EUR-MEAN | GBR-MEAN | JPN-MEAN | NZL-MEAN | SWE-MEAN | USA-MEAN |
|---|---|---|---|---|---|---|---|---|---|
| Constant | –0.100 (0.151) | 0.116 (0.166) | 0.180 (0.155) | 0.335 (0.097)* | 0.262 (0.242) | 0.737 (0.087)* | –0.018 (0.093) | 0.315 (0.125)** | –0.281 (0.148)*** |
| GAPQ-GLOBALGAP | 0.051 (0.019)* | –0.025 (0.047) | 0.047 (0.016)* | 0.172 (0.041)* | 0.101 (0.033)* | 0.002 (0.035) | –0.006 (0.018) | 0.018 (0.028) | –0.093 (0.035)* |
| PGAP | 0.158 (0.047)* | –0.037 (0.030) | –0.106 (0.083) | –0.122 (0.034)* | –0.012 (0.047) | –0.095 (0.056)*** | 0.008 (0.035) | 0.009 (0.028) | 0.080 (0.049) |
| MEAN-F_MEAN | 0.246 (0.183) | –0.148 (0.080)*** | –0.019 (0.107) | –0.0733 (0.0754) | 0.164 (0.087)*** | 0.221 (0.055)* | 0.165 (0.057)* | 0.225 (0.079)* | 0.308 (0.132)** |
| POS1D-NEG1D | 0.109 (0.180) | –0.090 (0.327) | 0.239 (0.755) | –0.293 (0.225) | –0.461 (0.245)*** | –0.152 (0.086)*** | 0.365 (0.201)*** | –0.227 (0.238) | 0.220 (0.209) |
| OILPRICEPCH | –0.000 (0.001) | –0.001 (0.001) | 0.000 (0.002) | 0.000 (0.001) | 0.001 (0.001) | 0.001 (0.001) | 0.000 (0.001) | 0.000 (0.001) | –0.000 (0.002) |
| VIX | 0.000 (0.003) | 0.009 (0.002)* | 0.002 (0.003) | –0.005 (0.002) | 0.012 (0.004)* | 0.001 (0.003) | 0.006 (0.003)** | 0.009 (0.003)* | 0.004 (0.004) |
| Real-time revisions | –0.004 (0.023) | –0.012 (0.016) | –0.019 (0.022) | –0.032 (0.013)** | –0.038 (0.014)* | –0.028 (0.015)*** | –0.010 (0.011) | 0.001 (0.011) | 0.018 (0.018) |
| R-squared | 0.429 | 0.535 | 0.375 | 0.629 | 0.696 | 0.445 | 0.356 | 0.335 | 0.456 |
| F-statistic | 5.145 | 7.896 | 3.853 | 11.601 | 15.666 | 5.500 | 3.785 | 3.460 | 5.746 |
| Prob(F-stat) | 0.000 | 0.000 | 0.002 | 0.000 | 0.000 | 0.000 | 0.002 | 0.004 | 0.000 |

Note: See note to table 24.3a.
Table 24.3c Determinants of Forecast Disagreement: Minimum Levels of Disagreement

| Dep. Var. | AUS-MIN | CAN-MIN | CHE-MIN | EUR-MIN | GBR-MIN | JPN-MIN | NZL-MIN | SWE-MIN | USA-MIN |
|---|---|---|---|---|---|---|---|---|---|
| Constant | –0.020 (0.066) | 0.086 (0.153) | 0.085 (0.107) | 0.307 (0.095)* | 0.038 (0.132) | 0.500 (0.071)* | 0.011 (0.059) | 0.249 (0.081)* | –0.330 (0.122)* |
| GAPQ-GLOBALGAP | 0.015 (0.008)*** | –0.023 (0.043) | 0.027 (0.011)** | 0.179 (0.040)* | 0.054 (0.018)* | –0.028 (0.028) | –0.013 (0.011) | 0.001 (0.018) | –0.085 (0.029)* |
| PGAP | 0.056 (0.020)* | –0.008 (0.028) | –0.109 (0.058)*** | –0.137 (0.034)* | 0.013 (0.026) | –0.095 (0.045)** | 0.017 (0.022) | 0.006 (0.018) | 0.060 (0.040) |
| MEAN-F_MEAN | 0.111 (0.080) | –0.082 (0.073) | –0.001 (0.074) | –0.060 (0.074) | 0.100 (0.048)** | 0.227 (0.045)* | 0.051 (0.036) | 0.177 (0.051)* | 0.297 (0.109)* |
| POS1D-NEG1D | 0.044 (0.078) | –0.103 (0.300) | 0.438 (0.524) | –0.250 (0.221) | –0.194 (0.134) | –0.024 (0.070) | 0.224 (0.126)*** | –0.194 (0.155) | 0.261 (0.173) |
| OILPRICEPCH | –0.000 (0.003) | –0.001 (0.0008)*** | 0.001 (0.001) | 0.001 (0.001) | 0.000 (0.004) | 0.001 (0.001) | 0.000 (0.001) | 0.000 (0.001) | 0.006 (0.002) |
| VIX | –0.000 (0.001) | 0.006 (0.002)* | 0.001 (0.002) | –0.002 (0.002) | 0.009 (0.002)* | 0.003 (0.003) | 0.004 (0.002)** | 0.003 (0.002) | 0.003 (0.003) |
| Real-time revisions | –0.012 (0.010) | –0.012 (0.015) | –0.001 (0.015) | –0.031 (0.013)** | –0.018 (0.008)** | –0.018 (0.012) | –0.010 (0.007) | 0.003 (0.007) | 0.026 (0.015)† |
| R-squared | 0.353 | 0.433 | 0.293 | 0.599 | 0.741 | 0.478 | 0.358 | 0.311 | 0.599 |
| F-statistic | 3.743 | 5.247 | 2.659 | 10.244 | 19.599 | 6.279 | 3.822 | 3.092 | 8.696 |
| Prob(F-stat) | 0.003 | 0.000 | 0.022 | 0.000 | 0.000 | 0.000 | 0.002 | 0.001 | 0.000 |

Note: See note to table 24.3a.
the latter. Central banks, of course, are fond of stressing that even if the control of headline inflation is their mandate, operationally core inflation is more relevant, especially in the short run. The reason is that changes in core inflation allow the monetary authority to look past temporary supply-side factors that need not be reflected in inflation expectations. Nevertheless, there is no reason, a priori, for forecasters to react in the same fashion across different economies. Indeed, as shown, for example, in table 24.2, forecast disagreement is consistently affected by the inflation gap (PGAP) in only two economies. There is only a smattering of reaction to this variable in a few other economies. These results could conceivably also reflect the relative success of some central banks at communicating threats to meeting their operational inflation objective. A rise in domestic output relative to global economic conditions raises forecast disagreement except in the United States, where it has the opposite effect unless the benchmark is a central bank forecast. Turning to the global variables, oil price inflation has effectively no impact on forecast disagreement other than a modest reduction in Canada when households and firms forecasts serve as the benchmark. The failure of oil prices to significantly impact disagreement is not surprising, since forecasters, including households and firms, typically view these prices as having a fairly immediate but possibly short-run impact on inflation. Of course, forecasters may still disagree about the pass-through effects, although these should emerge via the uncertainty around the point estimates of forecast disagreement.
The VIX, a commonly used indicator of stock-market volatility that is also sometimes interpreted as a measure of uncertainty,43 has a statistically significant impact on disagreement in relatively few cases, and even then the effect appears to be economically small. If the VIX is a proxy for uncertainty, then the estimates of disagreement appear to be very nearly of the pure variety.44 Of course, the addition of real-time revisions may also represent another form of uncertainty that can influence inflation forecasts. Interestingly, cumulative real-time revisions tend to consistently reduce forecast disagreement in the Eurozone, the United Kingdom, and Japan. In contrast, these same revisions raise disagreement in the United States, where, arguably, these types of observations are more readily available. Nevertheless, in several of the oldest inflation-targeting economies (Australia, Canada, Sweden, and, to a slightly lesser extent, New Zealand), forecast disagreement is not seen as being affected by data revisions. This could reflect the anchoring of inflation expectations, although the evidence here is only suggestive. Forecast disagreement also tends to rise when the domestic inflation outlook is higher than average global inflation forecasts. To the extent that central banks are concerned about sources of the unanchoring of inflation expectations, the gap between domestic and global inflation forecasts may represent one such source. Finally, central bank communication has little influence on forecast disagreement, with the exception of the United Kingdom, where it does serve to reduce disagreement. In contrast, RBNZ communication is seen to increase forecast disagreement in
New Zealand. If more were expected from central bank communication, then the results could perhaps also be explained by the sampling frequency or the manner in which the content of central bank announcements is proxied. The last point is an important one, and the results presented here at least serve as a reminder that evaluating the content of central bank communication is fraught with difficulty. In any event, the objective is to highlight that communication can matter. Future research will determine more carefully what role this plays in explaining forecasts and forecast disagreement.45 It may also be the case that central bank communication can influence inflation expectations but not the extent to which different forecasters see the short-term inflation outlook.46 One could interpret the result that central bank communication does not increase forecast disagreement as a communications success. In table 24.3, the mean of various proxies of disagreement and the maximum or minimum levels of disagreement (see figure 24.4) are instead regressed against the same determinants used to generate the results shown in table 24.2. In general, it appears that the same variables broadly drive the entire distribution of estimates of inflation forecast disagreement. Therefore, to the extent that the quasi-confidence intervals shown are informative about uncertainty over the inflation outlook, differences of opinion do not appear to stem from a shift of focus to different determinants, at least among the ones considered here. Finally, figure 24.6 serves as another reminder that findings about short-term inflation forecast disagreement can be highly sensitive to the benchmark used to generate estimates of d as well as illustrating that in spite of a global component to inflation, disagreement can differ substantially across countries.
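The communication variable POS1D-NEG1D is described in the table notes as the differential between positive and negative sentiment in central bank press releases. One crude way such a proxy can be built is a dictionary-based count; the word lists below are illustrative assumptions, not the chapter's actual lexicon:

```python
# illustrative tone lexicons (assumed for this sketch, not the chapter's)
POSITIVE = {"strengthened", "improved", "solid", "robust", "expansion"}
NEGATIVE = {"weakened", "deteriorated", "subdued", "risks", "contraction"}

def sentiment_differential(text):
    """Share of positive words minus share of negative words in a statement,
    a simple bag-of-words proxy for the tone of a press release."""
    words = [w.strip(".,;:").lower() for w in text.split()]
    n = len(words)
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return (pos - neg) / n if n else 0.0

release = "Growth strengthened and conditions improved."
score = sentiment_differential(release)   # 2 positive hits out of 5 words
```

Any such scheme is sensitive to the lexicon and to negation, which is part of why, as the text notes, evaluating the content of central bank communication is fraught with difficulty.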
To illustrate, I begin with a reminder that based on estimates shown in figure 24.4, it may be useful to characterize inflation forecast disagreement as operating in one of two states, namely, a high or a low disagreement state. Hence, it seems appropriate to estimate a Markov-switching model where, in most cases, disagreement is determined by a constant and a regime-dependent AR(1) term. The resulting smoothed probabilities of being in a high disagreement state are plotted in figure 24.6 for the same three cases reported in table 24.2.47 Three results are apparent from these figures. First, when the benchmark relies on the central bank’s forecast, professional forecasts can yield similar estimates of being in the high disagreement state. However, there are at least two notable exceptions. Inflation forecast disagreement remains in the high state for the United States, while a central bank benchmark gives the impression that forecasters are in the low disagreement state. Japan’s experience is essentially the reverse. Second, it is also clear that disagreement vis-à-vis the central bank or vis-à-vis households and firms forecasts can yield sharply different interpretations about the likelihood of being in a high disagreement state. Finally, even if there is a significant global element driving inflation forecast disagreement, largely driven by common shocks (e.g., the GFC), the domestic component remains important, and all three parts of the figure suggest that divergences in views across the economies examined persist, whether economies are in crisis conditions or not.
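The figure notes describe Markov-switching models estimated by maximum likelihood with regime-specific AR(1) terms. The sketch below illustrates only the filtering step that produces state probabilities, using a simpler state-dependent-mean specification with parameters fixed by assumption rather than estimated:

```python
import numpy as np

def hamilton_filter(y, mu, sigma, P):
    """Filtered probabilities for a two-state Markov-switching model with
    state-dependent mean mu[s] and std sigma[s]; P[i, j] = Pr(s_t = j | s_{t-1} = i).
    Parameters are taken as given; in practice they are estimated by MLE."""
    y = np.asarray(y, float)
    n = 2
    # initialize at the ergodic (stationary) distribution of the chain
    A = np.vstack([np.eye(n) - P.T, np.ones(n)])
    prob = np.linalg.lstsq(A, np.append(np.zeros(n), 1.0), rcond=None)[0]
    out = np.empty((y.size, n))
    for t, yt in enumerate(y):
        pred = prob @ P                                    # one-step prediction
        dens = np.exp(-0.5 * ((yt - mu) / sigma) ** 2) / sigma
        post = pred * dens
        prob = post / post.sum()                           # Bayes update
        out[t] = prob
    return out

# synthetic disagreement series: low state, a persistent high episode, low again
rng = np.random.default_rng(4)
y = np.concatenate([rng.normal(0.2, 0.05, 40),   # low-disagreement regime
                    rng.normal(0.8, 0.05, 20),   # high-disagreement regime
                    rng.normal(0.2, 0.05, 40)])
P = np.array([[0.95, 0.05], [0.10, 0.90]])
probs = hamilton_filter(y, mu=np.array([0.2, 0.8]), sigma=np.array([0.05, 0.05]), P=P)
p_high = probs[:, 1]    # filtered probability of the high-disagreement state
```

The smoothed probabilities plotted in figure 24.6 additionally run a backward pass over these filtered probabilities, but the two-regime interpretation is the same.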
[Figure 24.6a appears here: panels of smoothed probabilities of the high-disagreement state for each of the nine economies, 2000–2014.]
Figure 24.6a Markov-Switching Model of Forecast Disagreement: Central Bank Benchmark
Notes: Forecast disagreement is estimated via Markov switching (BHHH optimization method, Huber-White standard errors, Knuth random number generator for starting values) with a constant and with regime-specific AR(1) variables, except for Australia, Canada, Switzerland, New Zealand, and Sweden, where the AR(1) is not regime-specific and provided a better fit. The title of the figure indicates the benchmark used to calculate forecast disagreement.
[Figure 24.6b appears here: panels of smoothed probabilities of the high-disagreement state for each of the nine economies, 2000–2014.]
Figure 24.6b Markov-Switching Model of Forecast Disagreement: Professional Forecasters Benchmark
Note: See note to figure 24.6a. For Canada, Switzerland, and Sweden, the non-regime-specific AR(1) model provided a better fit.
[Figure 24.6c appears here: panels of smoothed probabilities of the high-disagreement state for each of the nine economies, 2000–2014.]
Figure 24.6c Markov-Switching Model of Forecast Disagreement: Households and Firms Forecasts Benchmark
Notes: See note to figure 24.6a. For Canada, Switzerland, the Eurozone, and the United States, only a constant is included. For Sweden, an AR(2) non-regime-specific model provided a better fit.
24.7 Conclusion

This chapter has examined the evolution of disagreement over the short-term inflation outlook in nine advanced economies over the decade and a half beginning around 2000. The chapter focuses on how disagreement is largely shaped by the benchmark against which this concept is evaluated and on the role of potential shocks to the inflation process, such as the GFC. Overall, forecast disagreement is highly sensitive to the group of forecasters chosen as the benchmark, a feature largely neglected in the relevant literature and important in view of the growing number of forecasts published by central banks.

Several other conclusions are also drawn. The evidence reaffirms the power of forecast combinations to deliver superior forecasts. Next, and not surprisingly, the GFC led to a spike in inflation forecast disagreement that was short-lived. Nevertheless, there were other periods when forecast disagreement rose sharply in some economies but not in others. Estimation of a quasi-confidence interval for forecast disagreement finds that variation around a mean level of disagreement is high when disagreement reaches a high state. Nevertheless, the relationship is not a straightforward one, as there are instances when the range of disagreement is high even when the average level of disagreement is low. If the quasi-confidence intervals represent a measure of forecast uncertainty, then low and high levels of forecast disagreement can coexist with high levels of uncertainty. While there is a global component in forecast disagreement that is empirically relevant, the domestic determinants appear to be of first-order importance. More important, there appear to be relatively few indications that forecasts are coordinated with those of central banks. Also, central bank communication appears to play a relatively minor role in explaining forecast disagreement, but this could be interpreted as a success for the monetary authorities.
Moreover, data revisions are also found to affect forecast disagreement. Finally, forecast disagreement can reasonably be seen as a variable that operates in two regimes, namely high- and low-disagreement regimes.

A number of extensions and unresolved questions remain. If central bank communication is thought, a priori, to be a separate and significant determinant of disagreement, the present study may not have measured it precisely enough, or the quarterly sampling frequency may be too coarse to capture its significance properly.48 Alternatively, the communication variable is proxied reasonably well, certainly no worse than an output gap, but is related to levels of expected inflation rather than to disagreement over the inflation outlook. It would also be worthwhile to explore asymmetries in forecast disagreement, such as whether recessions versus recoveries lead forecasters to focus on different determinants.49 Inflations versus disinflations (and, possibly, deflations) could represent another source of asymmetry. Finally, given the differences in the behavior of forecast disagreement across the nine economies examined, there is scope for asking whether forecast disagreement is an indicator of the degree of perceived cross-economy divergence in monetary policy. These extensions are left for future research.
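The chapter's point about the power of forecast combinations can be illustrated with a toy simulation. Everything below is invented for illustration (it is not the chapter's data, measure, or estimation): several biased, noisy forecasters are averaged with equal weights, and the combination's root-mean-squared error is compared with the best individual forecaster's. Idiosyncratic errors partially cancel, so the combination typically wins.

```python
import random

def rmse(errors):
    """Root-mean-squared error of a list of forecast errors."""
    return (sum(e * e for e in errors) / len(errors)) ** 0.5

random.seed(42)

# Hypothetical setup: a "true" inflation series and three forecasters,
# each with its own bias and idiosyncratic noise level.
true_inflation = [2.0 + 0.5 * random.gauss(0, 1) for _ in range(200)]

forecasters = []
for bias, noise in [(0.3, 0.4), (-0.2, 0.5), (0.1, 0.6)]:
    forecasters.append([x + bias + random.gauss(0, noise) for x in true_inflation])

# Equal-weight combination: the cross-forecaster mean at each date.
combined = [sum(f[t] for f in forecasters) / len(forecasters)
            for t in range(len(true_inflation))]

individual_rmses = [
    rmse([f[t] - true_inflation[t] for t in range(len(true_inflation))])
    for f in forecasters
]
combined_rmse = rmse([combined[t] - true_inflation[t]
                      for t in range(len(true_inflation))])

# The combined RMSE comes in below even the best individual forecaster.
print(min(individual_rmses), combined_rmse)
```

Because the three biases partly offset and the noise terms are independent, the averaged forecast's error variance is well below any single forecaster's, which is the mechanism behind the combination result the chapter reaffirms.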
Notes

1. An alternative approach to communicating forecast uncertainty is via the estimation of density forecasts. This is a separate literature that space limitations prevent me from directly addressing. However, see Rossi 2014.
2. The notion that the forecasts of others, not just those published by central banks, may have macroeconomic effects is not new (e.g., see Townsend 1983).
3. An important consideration in the evaluation of all forecasts, but perhaps especially central bank forecasts, is whether performance is different in the expansionary phase of the business cycle. The answer, at least for the United States, seems to be in the affirmative (e.g., see Tien, Sinclair, and Gamber 2015).
4. The median is another option, of course. However, examination of the individual forecasts suggests that the mean and the median are not statistically different from each other in the overwhelming majority of cases.
5. While some forecasters can be persistently better than others, it is never the case that one forecaster routinely dominates the others through time. Indeed, forecasts based on household or firm surveys can do quite well even if the individuals surveyed seemingly have comparatively little expertise (e.g., see Tetlock 2005).
6. It is well known, for example, that the mean squared prediction errors (MSPEs) of the best and the worst forecasts need not be statistically significantly different from each other. Hence, forecasting accuracy is not the only characteristic of a forecast that matters.
7. One should not underestimate the importance central bankers place on anchoring inflation expectations.
For example, Yellen (2015) argues, “the presence of well-anchored inflation expectations greatly enhances a central bank’s ability to pursue both of its objectives—namely, price stability and full employment.” Kuroda (2015) goes so far as to suggest that “If Japan can successfully overcome deflation and re-anchor inflation expectations, as it is now in the process of doing, this will represent a major step in monetary policy history not only in Japan but also around the globe.” Draghi (2014) also refers to the importance of anchoring in the context of a decline in eurozone inflation: “The firm anchoring of inflation expectations is critical under any circumstances, as it ensures that temporary movements in inflation do not feed into wages and prices and hence become permanent. But it is even more critical in the circumstances we face today.” Carney (2015b) explains why an anchor matters in the following terms: “Those expectations matter as they feed into the wage and price setting processes that ultimately determine inflation. That is why central banks are keenly alert to the possibility that low inflation could de-anchor medium-term inflation expectations, increasing the persistence of inflation.”
8. There is no precise definition of the “long run,” but Janet Yellen, in the January 16, 2009 FOMC transcripts, is quoted as saying that the time horizon is longer than six years.
9. Indeed, forecast dispersion has occasionally also been mentioned as a possible leading indicator of an impending economic slowdown or a looming crisis (Mackintosh 2015). Indicators of economic policy uncertainty, however, consider only one source of forecast dispersion (e.g., the Survey of Professional Forecasters [SPF]).
10. Siklos (2013) provides a brief survey of the literature dealing with forecast disagreement and its measurement (see also section 24.3 below) that is up to date through 2010. Below I focus on the evidence and issues that have received attention since then.
11. They also consider the “noisy” information model, wherein forecasters update their outlook subject to imperfect access to the relevant data.
12. The Institute for Economic Research at the University of Munich.
13. This development has partly been encouraged, as noted earlier, by the creation of an index of economic policy uncertainty. See Baker, Bloom, and Davis (2016) and http://www.policyuncertainty.com.
14. However, such a conclusion is more likely to be reached for professional and central bank forecasts than for households’ forecasts of inflation. See, for example, Ng and Wright 2013; IMF Independent Evaluation Office 2013; and Mavroeidis, Plagborg-Møller, and Stock 2014 for reviews of the issues.
15. For example, details about the probabilistic structure that underlies forecasters’ outlooks play a role (i.e., the definition of the bins, or categories, that expectations fall into).
16. To the extent that more individuals’ expectations become anchored, this ought to reduce forecast disagreement. See Badarinza and Buchmann 2009 for evidence from the Eurozone.
17. Shortly after the appointment of Governor Haruhiko Kuroda at the Bank of Japan, the central bank announced its determination to meet a 2 percent inflation target. The original intention was to achieve the target within two years. However, the sharp drop in oil prices in 2014 forced the bank to delay meeting this objective.
18. The economies examined were the United States, the Eurozone, the United Kingdom, and Sweden.
19. Their evidence is primarily based on the University of Michigan survey data. A general problem with some survey data is that inflation is often not clearly defined, and participants must often respond with a forecast to the nearest integer.
20. This was in recognition that, in a few instances, some policy rates have become negative.
21. This is the “dog that didn’t bark” (International Monetary Fund 2013) that continues to preoccupy policymakers.
22. That study only examines how central bank forecasts influence Consensus forecasts. Siklos (2013) considers their influence on a broader set of forecasts.
Aruoba and Schorfheide (2015) also indicate that the issue of whether private agents’ and central bank forecasts can have the appearance of being coordinated is an important one for our understanding of the dynamics of inflation.
23. This potentially adds some “noise” to the interpretation of the results for at least two reasons. First, some central banks (e.g., the Fed) may target a different price index (e.g., the personal consumption expenditures or PCE index), while all central banks tend to be more concerned with a measure of core inflation (i.e., one that typically strips out food, energy, and indirect taxes). Since judgment also influences forecasts, the various benchmarks considered may not be perfectly comparable.
24. The measure used here comes closest to the one used in Lahiri and Sheng 2008, while the transformation applied yields a version of the normalized absolute deviation of forecasts implemented in Banternghansa and McCracken 2009. Forecast disagreement is sometimes also evaluated, for example, by calculating the interquartile range of forecasts (e.g., Mankiw, Reis, and Wolfers 2003; Capistrán and Timmermann 2008). The indicator used here has the virtue of retaining all the available information.
25. There is, of course, no unique normalization, but the estimates of $d^{j}_{t,h}$ discussed below are bounded between [0, 1] (i.e., using the transform $(d - d_{\min})/(d_{\max} - d_{\min})$).
26. In an earlier version, some proxies for forecast uncertainty were generated for US data, and these support the contention of Boero, Smith, and Wallis (2015), who examine UK data. Lahiri, Peng, and Sheng (2015) and Boero, Smith, and Wallis (2015) show that the variance of (density) forecasts is the sum of average individual uncertainty and a measure of the dispersion, or disagreement, between individual density forecasts. The first element is generally unobserved.
27. Consider a monthly forecast of inflation ($\pi$) for calendar year $t$, released in month $m$ (with quarterly data, we replace twelve months with four quarters per year). Denote such a forecast as $\pi^{FE}_{m,t}$, where FE refers to the fixed-event nature of the forecast. Hence, a forecast for the fixed event one year ahead would be written $\pi^{FE}_{m,t+1}$. The transformation from FE to FH, where FH represents a fixed-horizon forecast, is $\pi^{FH}_{m,t} = [(13 - m)/12]\,\pi^{FE}_{m,t} + [(m - 1)/12]\,\pi^{FE}_{m,t+1}$.
28. Essentially, this fits a local quadratic polynomial for each observation of the low-frequency series. This polynomial is then used to fill in the missing observations at the higher frequency.
29. The ECB aims for inflation in the harmonized index of consumer prices (HICP) of “a year-on-year increase . . . for the euro area of below 2%.” See https://www.ecb.europa.eu/mopo/strategy/pricestab/html/index.en.html. The BoJ’s objective has changed over time, although its mandate has been to achieve some form of price stability. The 2 percent objective has been made more explicit, however, since the appointment of Governor Haruhiko Kuroda in 2013. See https://www.boj.or.jp/en/announcements/release_2013/k130122b.pdf. The SNB defines price stability as “a rise in consumer prices of less than 2% per year.” See http://www.snb.ch/en/iabout/snb/id/snb_tasks. Finally, the Fed, under Chairman Ben Bernanke, declared in 2012 that “inflation at the rate of 2% (as measured by the annual change in the price index for personal consumption expenditures, or PCE) is most consistent over the longer run with the Federal Reserve’s statutory mandate.” See http://www.federalreserve.gov/faqs/money_12848.htm. Nevertheless, many have argued that a 2 percent medium-term objective was effectively adopted under Alan Greenspan’s chairmanship of the FOMC.
30. A separate appendix is available on request; Siklos 2016 also provides the details.
31. Both the standard two-sided gap and a one-sided gap were estimated. The empirical results rely on the one-sided gap measure. The global output gap measure is constructed as in Borio and Filardo 2007. Since the chapter was written, Hamilton (2018) has argued that none of the variants of the H-P filter adequately fulfills its objective. Future research may well conclude that the output gap has more, or less, of an impact on forecast disagreement if it is measured in the manner proposed by Hamilton.
32. It is common for central banks to make the distinction between the two inflation indicators, especially as they are not likely to respond to supply-side shocks unless these feed into expectations, while demand-side shocks typically elicit a response.
33. The data were obtained from http://www.cboe.com/micro/vix/historical.aspx. I also considered, where available, the economic policy uncertainty index (i.e., for the United States, Europe, Canada, and Japan) due to Baker, Bloom, and Davis 2016, and the results are largely the same as the ones described below.
34. The RBA forecasts were excluded from Siklos 2013, where a total of seventy-four forecasts were used. Hence, the number of forecasts has risen by approximately 12 percent relative to that study.
35. For example, see Filardo and Hofmann 2014 for a critical review of forward guidance practices.
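The mechanics in notes 25 and 27 can be sketched in a few lines of code. This is an illustrative implementation of the fixed-event-to-fixed-horizon weighting and the min-max normalization; the function names (`fixed_horizon`, `minmax`) are mine, not the chapter's, and the numbers are invented examples.

```python
def fixed_horizon(month, fe_current, fe_next):
    """Convert two fixed-event (calendar-year) forecasts into one
    fixed-horizon (year-ahead) forecast, using the weighting in note 27:
    pi_FH = ((13 - m)/12) * pi_FE(t) + ((m - 1)/12) * pi_FE(t+1)."""
    if not 1 <= month <= 12:
        raise ValueError("month must be in 1..12")
    return (13 - month) / 12 * fe_current + (month - 1) / 12 * fe_next

def minmax(series):
    """Min-max normalization from note 25: maps a disagreement series
    onto [0, 1] via (d - d_min) / (d_max - d_min)."""
    lo, hi = min(series), max(series)
    return [(d - lo) / (hi - lo) for d in series]

# In January (m = 1) all weight falls on the current-year fixed event;
# by December (m = 12) almost all weight is on next year's fixed event.
print(fixed_horizon(1, 2.0, 3.0))   # 2.0
print(fixed_horizon(12, 2.0, 3.0))  # 2/12 + 33/12, roughly 2.917
print(minmax([0.2, 0.5, 1.1]))      # endpoints map to 0.0 and 1.0
```

The weights sum to one for every month, so the fixed-horizon series is always a convex combination of the two adjacent fixed-event forecasts.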
36. There exist several algorithms that attempt to evaluate the content of documents. Others that have been used by economists include Wordscores, Leximancer, and General Inquirer, to name three.
37. The choice of ten words is arbitrary, and the selection of ideas that are grouped together is also dictated by the chosen language quantification algorithm. As noted by Loughran and McDonald (2016), there is no unique normalization. However, it is essential to normalize because text length (and complexity) can vary considerably.
38. By default, DICTION classifies words in a document according to the following characteristics: certainty, a collection of words indicating resoluteness; optimism, language that endorses a position or a concept; activity, with words suggesting ideas or stances being implemented and avoiding inertia; realism, meant to inform the reader of tangible results or recognizable facts; and commonality, language that draws attention to common values or positions in a text. Examples of “positive” words include consensus and successful; negative words include apprehension and detrimental. Lombardi, Siklos, and St. Amand (2019) use a modified classification but provide more examples of words that can be construed as positive or negative.
39. One of many examples that come to mind is the Fed dropping the word patient from its press releases in mid-2015.
40. It is possible, at the quarterly frequency, that some forecasters (e.g., central banks) have information for some variables at time t. However, even in this case, since forecasts involve judgment, it is not necessarily the case that the most current information will affect inflation forecasts. In any case, as a sensitivity test, I also estimate equation 3 with the right-hand-side variables entering with a lag, and the conclusions are unaffected.
Although there is some serial correlation in inflation forecast disagreement, it is modest and often statistically insignificant (results not shown).
41. Given the length of the sample, I also experimented with a time-varying common factor estimated in a rolling fashion, with windows ranging from two to five years in length, and the results were little changed.
42. As noted above, the specifications reflect a preference for parsimony, partly due to the sampling frequency of the data. Nevertheless, the conclusions are broadly similar if more conventional variables (e.g., term spread, asset prices, nominal effective exchange rates) are used instead and the variables expressed as differentials enter individually.
43. Bekaert, Hoerova, and Lo Duca (2013) find that the VIX can be decomposed into an uncertainty component as well as a risk-aversion component. Moreover, they conclude that there is a strong empirical connection between how loose monetary policy is and the VIX. Of course, precrisis, loose monetary policy was a harbinger of higher inflation, but since the crisis this no longer appears to be the case, since financial stability considerations loom larger.
44. I also included a dummy variable for unconventional monetary policy actions in the United States, and this variable proved statistically insignificant in the overwhelming majority of cases. The dummies, set to 1 when a policy action is announced and 0 otherwise, account for the launch of QE1 (November 2008–March 2009), the launch of QE2 (November 2010), and the tapering of bond purchases in 2014.
45. An anonymous referee has correctly pointed out that netting the positive and negative elements of tone may also have played a role in the results obtained. This is no doubt correct, but the estimated specification partly reflects the need to conserve degrees of freedom.
In any event, netting out the two elements of tone can also be justified by noting that most central bank statements are likely to contain both positive and negative words.
Also, central banks may be biased away from the use of negative words for fear of spreading undue alarm, especially in financial markets. Finally, the relative importance of the change in tone may also be influenced by the chosen benchmark. One could have evaluated instead the change in tone relative, say, to the mean of positive and negative characteristics in central bank announcements obtained in a pre-GFC sample. Alternatively, one might have included a measure of the similarity of central bank press releases over time. However, it is unclear whether the level or the variance of forecast disagreement would be affected. See Ehrmann and Talmi 2016. These kinds of extensions are also left for future research.
46. When some of the other variants of the communication variable shown in table 24.2 are used, there are a few other cases where the variable is found to be statistically significant (Switzerland, the United Kingdom, and Sweden).
47. There may be more than one Markov-switching model to describe forecast disagreement regimes. However, we do know from the large literature on inflation persistence that it is a function of the policy regime in place. Since the AR(1) parameter is a summary indicator of inflation persistence, it is reasonable to assume that this parameter differs by regime.
48. Hubert and Labondance (2017) report that the content of central bank statements influences financial-market expectations over and above the effects of monetary policy decisions and central banks’ forecasts.
49. Equation 3 was also estimated by replacing oil price inflation (largely insignificant in the vast majority of estimated regressions) with a recession dummy (NBER for the United States, CEPR for the Eurozone, C. D. Howe Institute for Canada, and the Economic Cycle Research Institute’s dating scheme for the remaining economies considered).
Some evidence that forecast disagreement is higher during recessions is found for Canada, the Eurozone, Sweden, and the United States, but not for the other economies examined.
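Note 47's idea, that disagreement follows an AR(1) whose mean and persistence differ between a high and a low regime governed by a Markov chain, can be illustrated with a small simulation. All parameter values below are invented for illustration; the chapter estimates such a model rather than simulating it, and the function name is mine.

```python
import random

random.seed(0)

def simulate_two_regime_ar1(n, params, p_stay):
    """Simulate a disagreement series d_t following an AR(1) whose mean,
    persistence, and shock volatility differ by regime, with the regime
    s_t in {0 (low), 1 (high)} evolving as a two-state Markov chain."""
    (mu_lo, rho_lo, sig_lo), (mu_hi, rho_hi, sig_hi) = params
    state, d = 0, mu_lo
    path, states = [], []
    for _ in range(n):
        # Switch regime with probability 1 - p_stay[state].
        if random.random() > p_stay[state]:
            state = 1 - state
        mu, rho, sig = (mu_lo, rho_lo, sig_lo) if state == 0 else (mu_hi, rho_hi, sig_hi)
        d = mu + rho * (d - mu) + random.gauss(0, sig)
        path.append(d)
        states.append(state)
    return path, states

# Illustrative parameters: the low-disagreement regime has a low mean and
# modest persistence; the high regime has a higher mean and persistence,
# in the spirit of note 47's regime-dependent AR(1) parameter.
path, states = simulate_two_regime_ar1(
    2000,
    params=[(0.2, 0.3, 0.05), (0.8, 0.9, 0.10)],
    p_stay=[0.95, 0.90],
)

mean_lo = sum(d for d, s in zip(path, states) if s == 0) / states.count(0)
mean_hi = sum(d for d, s in zip(path, states) if s == 1) / states.count(1)
print(round(mean_lo, 2), round(mean_hi, 2))
```

Average disagreement within the high regime comes out clearly above its low-regime counterpart, which is the kind of separation a Markov-switching estimator exploits when classifying periods into the two regimes.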
References

Alessi, L., E. Ghysels, L. Onorante, R. Peach, and S. Potter. 2014. “Central Bank Macroeconomic Forecasting during the Global Financial Crisis: The European Central Bank and the Federal Reserve Bank of New York Experiences.” ECB working paper 1688, July.
Andrade, P., and H. Le Bihan. 2013. “Inattentive Professional Forecasters.” Journal of Monetary Economics 60: 967–982.
Aruoba, S. B., and F. Schorfheide. 2015. “Inflation during and after the Zero Lower Bound.” Federal Reserve Bank of Kansas City Economic Policy Symposium, August.
Bachmann, R., S. Elstner, and E. R. Sims. 2013. “Uncertainty and Economic Activity: Evidence from Business Survey Data.” American Economic Journal: Macroeconomics 5, no. 2: 217–249.
Badarinza, C., and M. Buchmann. 2009. “Inflation Perceptions and Expectations in the Euro Area.” ECB working paper 1088, September.
Baker, S., N. Bloom, and S. Davis. 2016. “Measuring Economic Policy Uncertainty.” Quarterly Journal of Economics 131, no. 4: 1593–1636.
Bank of England Independent Evaluation Office. 2015. “Evaluating Forecast Performance,” November.
Banternghansa, C., and M. McCracken. 2009. “Forecast Disagreement among FOMC Members.” Federal Reserve Bank of St. Louis working paper 2009-059A, December.
Bauer, M. 2015. “Inflation Expectations and the News.” International Journal of Central Banking (March): 1–40.
Bekaert, G., M. Hoerova, and M. Lo Duca. 2013. “Risk, Uncertainty, and Monetary Policy.” Journal of Monetary Economics 60 (October): 771–788.
Bernanke, B. 2007. “Inflation Expectations and Inflation Forecasting.” Speech, NBER Monetary Economics Workshop, Summer Institute, Cambridge, July 10.
Blinder, A. S., M. Ehrmann, M. Fratzscher, J. de Haan, and D.-J. Jansen. 2008. “Central Bank Communication and Monetary Policy: A Survey of Theory and Evidence.” Journal of Economic Literature 46, no. 4: 910–945.
Boero, G., J. Smith, and K. Wallis. 2015. “The Measurement and Characteristics of Professional Forecasters’ Uncertainty.” Journal of Applied Econometrics 30 (November–December): 1029–1046.
Bordo, M., and P. Siklos. 2016. “Central Bank Credibility: An Historical and Quantitative Exploration.” In Central Banks at a Crossroads: What Can We Learn from History?, edited by M. D. Bordo, Ø. Eitrheim, M. Flandreau, and J. F. Qvigstad, 62–144. New York: Cambridge University Press.
Borio, C., and A. Filardo. 2007. “Globalization and Inflation: New Cross-Country Evidence on the Global Determinants of Inflation.” BIS working paper 227.
Capistrán, C., and A. Timmermann. 2008. “Disagreement and Biases in Inflation Expectations.” Journal of Money, Credit and Banking 41 (March–April): 365–396.
Carlson, J., and M. Parkin. 1975. “Inflation Expectations.” Economica 42: 123–138.
Carney, M. 2015a. “Inflation in a Globalized World.” Speech. Federal Reserve Bank of Kansas City Economic Policy Symposium: Inflation Dynamics and Monetary Policy, August 29.
Carney, M. 2015b. “Writing the Path Back to Target.” Speech. Advanced Manufacturing Research Centre, Sheffield, March.
Carroll, C. D. 2003. “Macroeconomic Expectations of Households and Professional Forecasters.” Quarterly Journal of Economics 118: 269–288.
Carvalho, C., and F. Nechio. 2014. “Do People Understand Monetary Policy?” Journal of Monetary Economics 66: 108–123.
Chang, A., and T. Hanson. 2015. “The Accuracy of Forecasts Prepared for the Federal Open Market Committee.” Federal Reserve Board finance and economics discussion series working paper 2015-062, July.
Ciccarelli, M., and B. Mojon. 2010. “Global Inflation.” Review of Economics and Statistics 92 (August): 524–535.
Clements, M., and A. B. Galvão. 2014. “Measuring Macroeconomic Uncertainty: U.S. Inflation and Output Growth.” University of Reading discussion paper 2014-4, May.
Coibion, O., and Y. Gorodnichenko. 2015. “Is the Phillips Curve Alive and Well After All? Inflation Expectations and the Missing Deflation.” American Economic Journal: Macroeconomics 7, no. 1: 197–232.
Constâncio, V. 2015. “Understanding Inflation Dynamics and Monetary Policy.” Speech. Federal Reserve Bank of Kansas City Economic Policy Symposium: Inflation Dynamics and Monetary Policy, August 29.
Diebold, F., and R. Mariano. 1995. “Comparing Predictive Accuracy.” Journal of Business and Economic Statistics 13: 253–263.
Dovern, J. 2015. “A Multivariate Analysis of Forecast Disagreement: Confronting Models of Disagreement with Survey Data.” European Economic Review 80: 18–35.
Dräger, L. 2015. “Inflation Perceptions and Expectations in Sweden—Are Media Reports the Missing Link?” Oxford Bulletin of Economics and Statistics 77, no. 5: 681–700.
Dräger, L., and M. Lamla. 2016. “Disagreement à la Taylor: Evidence from Survey Microdata.” Scandinavian Journal of Economics 85 (June): 84–111.
Dräger, L., M. Lamla, and D. Pfajfar. 2016. “Are Survey Expectations Theory-Consistent? The Role of Central Bank Communication and News.” European Economic Review 85: 84–111.
Draghi, M. 2014. “Monetary Policy in a Prolonged Period of Low Inflation.” Speech. ECB Forum on Central Banking, Sintra, May.
Drehmann, M., and M. Juselius. 2013. “Evaluating Early Warning Indicators of Banking Crises: Satisfying Policy Requirements.” BIS working paper 421, August.
Ehrmann, M., and J. Talmi. 2016. “Starting from a Blank Page? Semantic Similarity in Central Bank Communication and Market Volatility.” Bank of Canada working paper 2016-37, July.
Filardo, A., and B. Hofmann. 2014. “Forward Guidance at the Zero Lower Bound.” BIS Quarterly Review (March): 37–53.
Glas, A., and M. Hartmann. 2015. “Inflation Uncertainty, Disagreement and Monetary Policy: Evidence from the ECB Survey of Professional Forecasters.” Heidelberg University, February.
Hamilton, J. D. 2018. “Why You Should Never Use the Hodrick-Prescott Filter.” Review of Economics and Statistics 100 (December): 831–843.
Hansen, P., A. Lunde, and J. Nason. 2011. “The Model Confidence Set.” Econometrica 79 (March): 453–497.
Hart, R., J. Childers, and C. Lind. 2013. Political Tone: How Leaders Talk and Why. Chicago: University of Chicago Press.
Hubert, P. 2015. “Do Central Bank Forecasts Influence Private Agents? Forecasting Performance versus Signals.” Journal of Money, Credit and Banking 47 (June): 771–789.
Hubert, P., and F. Labondance. 2017. “Central Bank Statements and Policy Expectations.” Bank of England staff working paper 648.
Husted, L., J. Rogers, and B. Sun. 2016. “Measuring Monetary Policy Uncertainty: The Federal Reserve, January 1985–January 2016.” IFDP Notes, April 11. https://www.federalreserve.gov/econresdata/notes/ifdp-notes/2016/measuring-monetary-policy-uncertainty-the-federal-reserve-january-1985-january-2016-20160411.html.
IMF Independent Evaluation Office. 2013. “IMF Forecasts: Process, Quality, and Country Perspectives,” February.
International Monetary Fund. 2013. “The Dog That Didn’t Bark: Has Inflation Been Muzzled or Was It Just Sleeping?” World Economic Outlook, chap. 3, April.
Jurado, K., S. C. Ludvigson, and S. Ng. 2015. “Measuring Uncertainty.” American Economic Review 105: 1177–1216.
Kamada, K., J. Nakajima, and S. Nishiguchi. 2015. “Are Household Expectations Anchored in Japan?” Bank of Japan working paper 15-E-8, July.
Koo, R. C. 2009. The Holy Grail of Macroeconomics. New York: John Wiley.
Kuroda, H. 2015. “What We Know and What We Do Not Know about Inflation Expectations.” Speech. Economic Club of Minnesota, April.
Lahiri, K., H. Peng, and X. Sheng. 2015. “Measuring Uncertainty of a Combined Forecast and Some Tests for Forecaster Heterogeneity.” CESifo working paper 5468, August.
Lahiri, K., and X. Sheng. 2008. “Evolution of Forecast Disagreement in a Bayesian Learning Model.” Journal of Econometrics 144: 325–340.
Lombardi, D., P. L. Siklos, and S. St. Amand. 2019. “Asset Price Spillovers from Unconventional Monetary Policy: A Global Empirical Perspective.” International Journal of Central Banking (forthcoming).
López-Pérez, V. 2016. “Measures of Macroeconomic Uncertainty for the ECB’s Survey of Professional Forecasters.” Equilibrium 11, no. 1: 9–41.
Loughran, T., and B. McDonald. 2016. “Textual Analysis in Accounting and Finance: A Survey.” Journal of Accounting Research 54 (September): 1187–1230.
Maag, T. 2009. “On the Accuracy of the Probability Method for Quantifying Beliefs about Inflation.” KOF working paper 230, Zurich.
Mackintosh, J. 2015. “There Might Be Trouble Ahead.” Financial Times, October 15.
Mankiw, N. G., and R. Reis. 2002. “Sticky Information versus Sticky Prices: A Proposal to Replace the New Keynesian Phillips Curve.” Quarterly Journal of Economics 117: 1295–1328.
Mankiw, N. G., R. Reis, and J. Wolfers. 2003. “Disagreement about Inflation Expectations.” NBER working paper 9796, June.
Mavroeidis, S., M. Plagborg-Møller, and J. Stock. 2014. “Empirical Evidence on Inflation Expectations in the New Keynesian Phillips Curve.” Journal of Economic Literature 52 (March): 124–188.
Mokinski, F., X. Sheng, and J. Yang. 2015. “Measuring Disagreement in Qualitative Expectations.” Journal of Forecasting 34 (August): 405–426.
Morris, S., and H. Shin. 2002. “Social Value of Public Information.” American Economic Review 92, no. 5: 1521–1534.
Ng, S., and J. Wright. 2013. “Facts and Challenges from the Great Recession for Forecasting and Macroeconomic Modeling.” Journal of Economic Literature 51 (December): 1120–1154.
Nishiguchi, S., J. Nakajima, and K. Imakubo. 2014. “Disagreement in Households’ Inflation Expectations and Its Evolution.” Bank of Japan Review 2014-E-1, March.
Pesaran, H. 1985. “Formation of Inflation Expectations in British Manufacturing Industries.” Economic Journal 95: 948–975.
Pesaran, H. 1987. The Limits to Rational Expectations. Oxford: Basil Blackwell.
Reifschneider, D., and P. Tulip. 2007. “Gauging the Uncertainty of the Economic Outlook from Historical Forecasting Errors.” FEDS working paper 2007-60, November.
Romer, C. D., and D. H. Romer. 1989. “Does Monetary Policy Matter? A New Test in the Spirit of Friedman and Schwartz.” In NBER Macroeconomics Annual, edited by O. Blanchard and S. Fischer, 121–170. Cambridge: MIT Press.
Romer, C. D., and D. H. Romer. 2004. “A New Measure of Monetary Policy Shocks: Derivation and Implications.” American Economic Review 94 (September): 1055–1084.
Rossi, B. 2014. “Density Forecasts in Economics, Forecasting and Policymaking.” ICREA-Universitat Pompeu Fabra working paper.
Siklos, P. 2013. “Sources of Disagreement in Inflation Forecasts.” Journal of International Economics 90: 218–231.
Siklos, P. L. 2016. “Forecast Disagreement and the Inflation Outlook: New International Evidence.” Bank of Japan IMES discussion paper 2016-E-3, March.
Sims, C. 2015. “Rational Inattention and Monetary Economics.” In Handbook of Monetary Policy, forthcoming. Amsterdam: Elsevier.
Stockton, D. 2012. “Review of the Monetary Policy Committee’s Forecasting Capability.” Report to the Court of the Bank of England, October.
Strohsal, T., R. Melnick, and D. Nautz. 2015. “The Time-Varying Degree of Inflation Expectations Anchoring.” Freie Universität Berlin, May.
Strohsal, T., and L. Winkelmann. 2015. “Assessing the Anchoring of Inflation Expectations.” Journal of International Money and Finance 50: 33–48.
Svensson, L. 2006. “Social Value of Public Information: Comment—Morris and Shin Is Actually Pro-Transparency, Not Con.” American Economic Review 96, no. 1: 448–452.
Tetlock, P. 2005. Expert Political Judgment: How Good Is It? How Can We Know? Princeton: Princeton University Press.
Tien, P.-L., T. Sinclair, and E. Gamber. 2015. “Do Fed Forecast Errors Matter?” Georgetown University working paper, November.
Timmermann, A. 2006. “Forecast Combinations.” In Handbook of Economic Forecasting, edited by G. Elliott, C. W. J. Granger, and A. Timmermann, 135–196. Amsterdam: Elsevier.
Townsend, R. 1983. “Forecasting the Forecasts of Others.” Journal of Political Economy 91 (August): 546–588.
Woodford, M. 2012. “Monetary Policy Accommodation at the Interest-Rate Lower Bound.” In The Changing Policy Landscape, 185–288. Jackson Hole Economic Policy Symposium. Kansas City: Federal Reserve Bank of Kansas City.
Yellen, J. 2015. “Normalizing Monetary Policy: Prospects and Perspectives.” Speech. The New Normal Monetary Policy, Federal Reserve Bank of San Francisco research conference, March.
Zellner, A. 2002. “Comment on the State of Macroeconomic Forecasting.” Journal of Macroeconomics 24 (December): 499–502.
Index
Abenomics 252
aggregate demand 178, 179, 194, 196, 197, 275, 344, 363, 633, 635, 692, 694, 697, 698, 713, 728
aggregate expenditure 347
aggregate income 341
aggregate inflation 412
aggregate production 583
aggregate risk 663, 670
aggregate supply 169, 196, 197, 213, 226, 275, 340
Asian crisis 20, 359, 466, 486, 565, 651
asset-backed securities 190, 315, 384, 599
asset bubbles 179, 489
asset-market 255, 357, 365
bail in 23, 461, 468, 471, 474, 475, 478, 510, 512, 513, 526, 527, 579, 590–592, 602, 605, 606, 608, 609, 652
bail out 22, 23, 62, 85, 111, 186–188, 190, 191, 361, 365, 461, 462, 474, 504, 505, 510–513, 527, 529, 533, 534, 536, 539–544, 546–548, 566, 579, 580, 586, 590, 594, 599, 607, 608, 612, 618, 629, 645, 648
balance sheet
  balance-sheet asset 672
  balance-sheet composition 469
  balance-sheet expansion 556–558
  balance sheet policies 392
  balance sheet recession 566, 580, 582
bank
  bank capital 112, 169, 370, 371, 381–384, 395, 397, 400, 402–405, 470, 471, 473, 478, 538, 545, 678, 685, 731, 734
  bank capital channel 370, 381, 382, 471, 685
  bank credit 146, 186, 187, 262, 372, 383, 440, 473, 633, 640
  bank creditors 629, 637, 639
  bank crisis 20, 22, 552, 556, 562, 568, 580, 600, 618, 620, 621, 625, 628, 631, 635, 638, 644
  bank cryptocurrencies 34
  bank debt 370, 383, 512, 513
bank deposit 123, 379, 385, 403, 531, 568, 574, 638, 660, 668, 670 bank depositors 641, 670 bank failure 20, 457, 465, 466, 486, 504, 505, 507–513, 515, 525, 526, 545, 546, 586, 587, 600, 617, 638 bank lending 113, 359, 370, 374, 380, 381, 383–386, 392, 395, 396, 399–403, 464, 471, 514, 610, 668, 669, 686, 694, 725, 728, 732 bank lending channel 370, 380, 381, 383–386, 402, 403, 732 bank liabilities 134, 383, 395, 531, 637 bank regulation 34, 191, 457, 462, 478–480, 502, 503, 505, 506, 508, 526, 543, 617 bank regulator 445, 502, 540, 541, 628 bank reserves 131, 132, 134, 137, 139, 146, 184, 200, 319, 440, 442, 443, 447, 448, 471, 486, 668, 670 bank resolution 468, 512, 587, 589, 591, 605, 608, 610, 614, 617, 618, 622, 637 banking banking assets 181 banking crisis 73, 92, 111, 112, 126, 358, 476, 479, 502, 504–507, 509, 511, 520–522, 527–530, 533–535, 540, 544, 546, 548, 615–617, 636, 638, 647–650, 775 banking industry 346, 380, 400, 402, 404, 514, 519, 524, 589 banking institution 396, 602 banking sector 61, 83, 95, 112, 113, 117, 118, 120, 138, 140, 146, 380, 396, 458, 464, 483, 487, 505, 510, 519, 523, 526, 553, 566, 568, 570, 576, 597, 606, 610, 616, 670, 685, 686, 702, 722 banking services 515 banking supervision 61, 63, 69, 71, 82, 90, 92, 95, 125, 396, 458, 460, 463, 466, 472, 478–480, 483, 486–489, 492, 501, 503, 546, 586, 614, 617, 677, 700, 728 banking supervisors 493, 524 banking supervisory 487, 488, 493, 526
banking system 109, 110, 112, 121, 123, 125, 126, 169, 171, 173, 174, 177, 181, 185, 188, 346, 369–371, 392, 396, 397, 441–443, 446, 450, 453, 460, 470, 501, 504, 514, 526, 529–533, 535, 544, 548, 568, 571, 578, 587, 590, 598, 599, 602, 606, 607, 619, 629, 636, 638, 639, 642, 652, 669, 670, 673 banking union 172, 181, 188, 371, 396, 459, 464, 477, 479, 503, 601, 612, 614, 615, 617, 618 banknote 31, 107, 135, 138, 143–147, 149–152, 154, 156, 157, 164, 167, 168, 436, 438, 445, 446 Bank of America 632 Bank of Canada 30, 33, 238, 249, 253, 255, 257, 264, 271–274, 324, 368, 391, 603, 650, 681, 683, 686, 687, 714, 724, 731, 733–735, 746, 775 Bank of Central African States 324, 325 Bank of England 34, 35, 39, 56–58, 64, 84, 88, 99, 107, 110, 123–125, 131, 132, 171, 217, 233, 250, 251, 254, 284, 293, 311, 312, 318, 324, 326–329, 368, 391, 394, 400, 402–404, 438, 440, 443, 448, 450–453, 458, 466, 476, 483, 488, 493, 497, 500, 501, 503, 510, 526, 527, 581, 591, 607, 609, 612, 616, 627, 648, 649, 651, 652, 683, 687, 694, 725, 727–729, 733, 739, 742, 746, 773, 775, 776 BOE 18, 39, 41, 42, 154, 168, 171, 172, 182, 233, 235, 236, 238, 240, 242, 244, 246, 248, 250, 260–262, 312, 318–322, 326, 329, 394, 395, 399, 438–443, 445, 446, 448, 450, 483, 487, 488, 490–494, 496, 497, 500, 591, 594, 598, 603, 609, 627, 633, 639, 640, 644, 715, 746, 747 Bank of Finland 453, 594, 610 Bank of International Settlements 361, 365 Bank of Israel 293 Bank of Italy 84, 90, 500, 728 Bank of Japan 557 Bank of Spain 360, 367 bankruptcy 149, 204, 211, 215, 348, 356, 397, 513, 541, 624, 627, 640, 641, 656, 657, 661–663, 666, 667, 675–678, 719 Basel 113, 114, 361, 396, 397, 400, 458–463, 466–474, 476–480, 490, 503, 507–509, 545, 586, 587, 599–601, 611, 614, 624, 625, 668, 677, 700, 728 Basel Accord 458, 466 Basel committee 114, 396, 458, 463, 467, 470, 473, 478, 503, 586, 587, 600, 601, 614, 677, 700, 728
Basel III 396, 728 Basel requirements 461, 473, 545 Basel standard 397, 459, 467, 469, 478 Bayesian 51, 54, 56, 58, 250, 393, 405, 682, 705, 715, 716, 720, 722, 724, 726, 735, 736, 775 Bayesian DSGE 405, 735 Bayesian model 58, 705, 716, 722 Bayesian VAR 250, 393, 715, 720 BBK-Model 687 benchmark 27, 257, 346, 395, 416, 418, 424, 681, 692, 707, 726, 738, 739, 745, 752, 756, 757, 760, 763–768, 770, 773 Bernanke 33, 34, 56, 83, 84, 87, 175, 176, 187, 190, 191, 215–218, 224, 239, 242, 251, 264, 265, 268–271, 282, 284, 285, 346, 359, 366, 398, 400, 458, 475, 478, 490, 540, 546, 552, 555, 556, 559, 578, 579, 581, 626–628, 633, 644–646, 651, 663, 670, 675, 684, 685, 687, 708, 722, 723, 725, 728, 740, 771, 774 business cycle 20, 84, 86, 90, 93, 146, 205, 224, 337, 367, 387, 399, 404, 405, 407, 418–421, 426, 429, 430, 547, 553, 555, 557, 559, 561, 563, 565, 567, 569, 571, 573, 575, 577, 579, 581, 583, 584, 655, 675–677, 680, 681, 684–686, 697, 700, 701, 722, 723, 728, 731, 733–735, 769 business cycle dynamics 685 business-cycle fluctuations 146 business cycle theory 20, 90, 553, 555, 557, 559, 561, 563, 565, 567, 569, 571, 573, 575, 577, 579, 581, 583 business standards 506, 519, 520
capital capital-accumulation 692, 698, 699 capital investment 449 capital market 113, 174, 176, 177, 185, 186, 188, 354, 355, 384, 393, 403, 444, 445, 447, 448, 451, 453, 459, 469, 514, 553, 565, 649 capital ratio 112, 113, 120, 180, 185, 361, 374, 383, 461, 467, 471, 473, 474, 477, 507, 592, 611 capital regulation 88, 113, 366, 367, 382, 400, 401, 405, 458, 479, 537, 664, 675 capital requirement 370, 371, 383, 397, 458, 460, 466–478, 480, 490, 504, 507, 509, 537–539, 543, 545, 596, 604, 685, 686 capital utilization 679, 682
central bank central bank announcements 739, 743, 747, 748, 764, 773 central bank asset 240–242, 253, 557, 694, 695 central bank autonomy 86 central bank balance sheet 140, 153, 160, 551, 552, 556, 558, 578, 606 central bank committee 41, 285, 327, 627 central bank communication 28, 84, 87, 189, 231–233, 235, 237, 239, 241, 243, 245, 247–251, 253, 255, 261, 263, 265–267, 269, 271, 273, 275, 277, 279, 283, 285, 286, 288, 315, 323, 324, 327, 328, 390, 559, 652, 739, 743, 747, 748, 757, 763, 764, 768, 774, 775 central bank credibility 235, 247, 252, 255, 327, 646, 774 central bank credit 633 central bank crisis 20, 22, 552, 556, 562, 568, 580, 620, 621, 625, 628, 631, 635, 638, 644 central bank crisis management 20, 22, 552, 556, 562, 568, 580, 620, 625, 638, 644 central bank cryptocurrencies 34 central bank decision 57, 303, 622, 626, 629, 722 central bank evaluation 659 central bank forecast 27, 328, 330, 738, 739, 744, 752, 756–758, 763, 769, 770, 775 central bank functions 489 central bank governance 58, 60–62, 64, 65, 82, 84, 91, 93, 109, 119, 500, 651 central bank governor 63, 68, 69, 71, 74, 85, 89, 94, 123, 190, 291, 325, 467, 638, 644, 665 central bank independence 59–61, 63, 65, 67, 69, 71–73, 75, 77, 78, 81, 83, 85–96, 121, 125, 126, 134, 149, 151, 156, 159, 170, 183, 220–222, 484, 494, 499, 500, 502, 559, 637, 644, 646, 648–651 central bank insolvency 134, 147, 149, 151, 157, 158, 168, 169 central bank interest rate 204, 553–555, 562, 577 central bank intervention 161, 511 central bank involvement 69, 71, 95, 457, 460, 465, 608 central bank laws 86, 499, 500, 502 central bank legislation 68 central bank liabilities 134 central bank liquidity 470, 568, 569, 629, 635, 641, 648, 670 central bank liquidity injections 670 central bank loans 446
central bank mandates 119, 327, 706 central bank model 482, 686 central bank objectives 292 central bank operations 440, 441, 469, 470 central bank period 257, 258 central bank policy 27, 254, 360, 365, 366, 415, 484, 486, 620, 625–628, 637, 645, 741 central bank policymaking 625 central bank politics 56 central bank portfolio 446, 449 central bank profits 629 central bank purchases 161, 241, 250, 415, 443 central bank reaction 696 central bank reform 88 central bank responsibility 483, 594 central bank solvency 151, 156, 157, 169 central bank statements 236, 772, 773, 775 central bank supervision 486 central bank supervisory 465 central bank transparency 18, 34, 89, 232, 254, 288, 291, 292, 323, 327, 328, 648, 649 central banker 18, 24, 28, 33, 41, 57–59, 62, 67, 82–84, 89, 92, 94, 184, 190, 205, 245, 287–290, 293, 304, 328, 330, 331, 365, 407, 444, 472, 476, 484, 485, 490, 556, 559, 622, 624–626, 643, 645, 650, 686, 738, 740, 741, 744, 748, 769 central bank liquidity interventions 635 central bank of Hungary 293 central bank of Ireland 641 central bank of Norway 293, 696 central bank of the Netherlands 440 Citigroup 632 commercial bank 30, 116, 120, 346, 400, 401, 403, 436, 437, 439–442, 444, 447–450, 482, 483, 486, 532, 535, 536, 553, 561, 566, 568, 572, 574, 578, 632, 668, 670, 685 communication 24, 28, 40, 41, 84, 87, 91, 105, 189, 192, 231–235, 237–239, 241–243, 245–251, 253–255, 259–261, 263, 265–267, 269, 271, 273, 275, 277, 279, 283, 285–288, 290, 291, 293, 312, 315, 323, 324, 327, 328, 363, 368, 370, 388, 390, 391, 474, 523, 524, 545, 551, 558, 559, 627, 639, 643, 644, 652, 676, 739, 741, 743, 746–748, 755, 757, 763, 764, 768, 773–775 corporate bond 155, 176, 177, 216, 257, 259, 261, 326, 384, 446, 633 corporate debt 387, 401 corporate finance 404, 527
corporate governance 99–102, 116, 121, 124, 125, 520, 677 corporate insolvency 513 corporate loans 373 corporate savings 567, 580 Cournot 34, 347, 356, 366, 369 credit credit-easing 392, 634 credit growth 186, 187, 316, 473, 474, 561, 562, 572, 576, 577, 580, 589, 596, 680, 710–712 credit growth limits 589 credit rating 539, 660 credit risk 185, 372, 380, 392, 403, 465, 507, 518, 632, 645, 656, 658, 665, 673 credits 140, 155, 440, 659 creditor 22, 23, 135, 137, 153, 181, 189, 348, 354, 382, 461, 475, 509, 511–513, 533, 588, 590, 593, 602, 608, 612, 613, 629, 637, 639, 641, 668 crisis crisis avoidance 21, 586, 587, 589–591, 593, 595, 597, 599, 601, 603, 605–607, 609–613, 615, 617 crisis intervention 621, 635 crisis management 20–22, 24, 25, 468, 477, 482, 483, 491, 493, 499, 500, 551–553, 555–563, 565–571, 573–577, 579–581, 583, 586, 587, 589, 590, 597, 616, 619, 620, 622, 625–631, 633–638, 641, 642, 644, 645 crisis resolution 611 crisis response 643, 644 Czech National Bank 293, 391
debt crisis 73, 74, 133, 168, 172, 176, 314, 315, 317, 392, 405, 488, 551, 552, 575, 578, 600, 642, 643 debt-deflation 685 default-risk-free bonds 206 deposit 20, 123, 140, 143, 152, 175, 181, 184, 191, 200, 313, 314, 346, 369, 371–373, 377, 379, 380, 382, 389, 393, 396–398, 401–404, 415, 441, 459–461, 463, 464, 466, 467, 470, 479, 480, 529–534, 537, 538, 541, 543, 546, 547, 574, 587–590, 592, 593, 598, 603, 604, 607, 612–614, 616–618, 628, 632, 637, 639, 667, 670, 676 deposit insurance 20, 181, 191, 380, 396, 459–461, 464, 466, 467, 470, 479, 480, 529, 531–534, 537, 538, 541, 543, 546, 547, 587, 589, 590, 592, 603, 604, 607, 612–614, 616–618, 632, 676
deposit insurer 463, 588, 628 deposit interest rate 175, 184, 314, 346, 369, 372, 373, 379, 393, 396, 402, 574, 637 deposit market 379, 397, 401 depository institutions 593, 614 deposit reserves 441 deposit spreads 574 depreciation 73, 154, 155, 250, 339, 341, 342, 345, 441, 570, 571, 629, 636, 638, 641, 642 depression 178, 179, 185, 282, 358, 359, 437, 453, 483, 534, 535, 563, 576, 578, 581, 646, 650, 656, 663, 668 deregulation 18, 21, 380, 385, 452, 457, 460, 476, 501, 563, 649 digital currency 29–31, 33, 34 disproportional weights 41 domestic monetary expansion 561 domestic monetary institutions 502 domestic monetary policies 68 Draghi 176, 189, 191, 315, 632, 643, 648, 769, 775 DSGE 26, 27, 245, 250, 254, 367, 405–407, 418, 434, 435, 655, 656, 660, 661, 663–665, 673, 677, 679, 682, 684–695, 697, 698, 701, 705, 707, 714–717, 719–724, 726, 729–733, 735, 736 dynamic index 61, 74, 80 dynamic programming 657, 672, 673 dynamic response 702
economic reality-based monetary policy 221 ERMP 221, 222 effective lower bound 24, 30, 232, 238, 248, 255, 264, 633, 744 emerging market economies 33, 445, 462, 466, 559, 563, 565, 616, 649 endogeneity 240, 505–508, 515, 525 endogeneity paradigm 505, 507, 508, 515 endogeneity problem 506, 507, 525 Eurex 249 euro area crisis 403, 624, 643 euro area overnight interbank rate 313 EONIA 313, 314, 326, 400 Eurobarometer 644 euro crisis 359, 361, 647, 665 Eurocurrency 440 Eurodollar 258, 283, 668 Euro Interbank Offered Rate 389 EURIBOR 389, 390, 400 European Banking Authority 382, 401, 503, 510, 514, 526, 604, 612, 614, 616
European Central Bank 61, 131, 133, 171, 190, 191, 204, 233, 256, 266, 285, 288, 289, 312, 324, 328, 360, 392, 401, 402, 441, 459, 487, 500, 501, 503, 556, 557, 559, 565, 578–580, 590, 608, 614, 624, 648, 650, 676, 683, 686, 687, 694, 727–733, 741, 746, 773 ECB 23, 24, 61, 70, 77, 133, 151, 152, 154, 155, 158, 166–168, 171, 172, 174–179, 181–183, 187–189, 191, 204, 212, 217, 233, 238–240, 242, 244, 245, 249–252, 254, 261, 262, 284, 289, 293, 306, 312–315, 324, 325, 328, 329, 392, 393, 398–405, 441–443, 449, 451, 459, 464, 487, 488, 490, 492, 493, 495, 496, 500, 503, 504, 524, 590, 595, 596, 599, 600, 604–606, 610–613, 615, 616, 624, 625, 627, 632, 633, 641–644, 646–649, 686, 696, 708, 715, 727, 728, 732, 741, 746, 771, 773, 775 European crisis 131, 580 euro reserves 155 Eurostat 364, 366, 557, 558, 572 euro stoxx 262 euro trap 618, 651 exchange rate exchange-rate channel 242, 249, 250, 694 exchange-rate crisis 77 exchange rate dynamics 615 exchange-rate income risk 151 exchange rate mechanism 115, 318 ERM 115 exchange-rate policy 262, 730 exchange-rate regime 77, 85, 87, 583 exchange-rate risk 157 exchange-rate shock 61, 73, 77, 80 exchange-rate stability 622 exchange-rate system 361 exchange rate targeters 306, 307 exchange rate targeting 332, 333, 441 exchange rate volatility 90 expansionary effect 695 expansionary policies 172, 239 expansionary fiscal policies 178 expansionary monetary policy 174, 178, 187, 188, 566, 576, 686, 689, 690, 693 expectations hypothesis 202, 224–226, 237, 416 EH 202–208, 218, 223, 416, 417, 420, 429 external competitiveness 178, 359, 360 external debt 446 external effects 536 external finance 347, 684, 696, 710, 731–733 external financing 670
external resources 590 external shocks 638, 700 external stakeholders 622, 623 external target 114, 115 external value 114, 619
federal funds rate 177, 210, 211, 226, 234–237, 259, 264, 267–269, 277–283, 317, 318, 400, 403, 406, 415, 697, 717, 718, 721 Federal Open Market Committee 39, 175, 206, 234, 263, 316, 464, 697, 742, 774 FOMC 39, 41, 42, 53–58, 175, 206, 208–211, 216, 219, 221, 224, 226, 234–238, 251, 254, 255, 259, 263, 264, 266–274, 276, 277, 279–284, 316, 318, 326, 464, 651, 742, 747, 769, 771, 773, 774 Federal Reserve Bank 58, 91, 94, 169, 170, 191, 224–227, 252, 253, 255, 256, 263, 266, 326, 329, 402, 437, 452, 466, 480, 481, 499, 527, 546–548, 557, 582, 614, 625, 646–648, 651, 677, 687, 729, 731, 773, 774, 777 Fed 61, 84, 94, 154, 161, 172, 173, 175–181, 185, 187, 189, 199, 200, 204, 210, 211, 215–221, 223, 225, 234, 236–241, 251, 260–262, 264, 266, 269, 270, 273, 276, 278, 282, 312, 316–318, 326, 404–406, 434, 436, 437, 441, 443, 449, 458, 475, 552, 555, 557, 559, 565, 611, 624, 627, 629, 632, 634, 635, 644–646, 648, 649, 687, 719, 728, 729, 731, 746, 753, 770–772, 777 Federal Reserve Bank of Atlanta 546, 548 Federal Reserve Bank of Boston 647 Federal Reserve Bank of Chicago 480, 687, 729 Federal Reserve Bank of Cleveland 729 Federal Reserve Bank of Dallas 263, 266 Federal Reserve Bank of Kansas 169, 170, 225, 227, 253, 256, 329, 614, 625, 646, 647, 651, 677, 773, 774, 777 Federal Reserve Bank of Minneapolis 548 Federal Reserve Bank of New York 252, 255, 402, 437, 452, 499, 648, 731 Federal Reserve Bank of Richmond 225, 452, 547 Federal Reserve Bank of San Francisco 191, 227, 252, 481, 527, 582, 777 Federal Reserve Bank of St. Louis 58, 91, 94, 225, 226, 252, 253, 452, 773 Federal Reserve Board 225, 365, 405, 458, 539 Federal Reserve bond 223, 251
Federal Reserve System 82, 132, 255, 263, 266, 316, 541, 547, 614, 683, 687, 708, 724, 725, 729–731, 734, 736, 746 financial financial asset 172, 187, 190, 471, 472, 532, 567, 580, 642, 660, 672 financial cycle 21, 472, 473, 552, 577, 578, 581, 595, 606, 615, 700, 713 financial data 739 financial deepening 190, 583 financial deregulation 380, 452, 457, 460, 476, 501, 563, 649 financial development 209, 312, 322, 404, 476, 481, 483, 531, 651, 665, 668, 669 financial disruptions 679, 700 financial disturbances 666 financial econometrics 737 financial economics 142, 224, 253, 401, 404, 435, 479, 480, 545–547, 677, 680 financial engineering 643 financial firms 362, 387, 474, 480, 515, 519, 520, 522, 523, 539, 541, 621, 734 financial friction 25–27, 479, 655, 656, 658, 663, 665–667, 673, 674, 679, 680, 684–688, 692–695, 698, 699, 704, 705, 707, 708, 710, 711, 713–717, 719–728, 730–732, 736 financial independence 169, 637 financial industry 527, 572 financial innovation 380, 384, 386, 463, 477, 502, 507, 515 financial instability 91, 459, 520, 616, 678 financial institution 23, 28–30, 83, 112, 116, 126, 166, 182, 186, 200, 291, 347, 354, 367, 377–380, 387, 390, 398, 403, 404, 462, 463, 474, 479, 485, 486, 490, 492, 494, 495, 498, 505, 524, 535, 539, 543, 551, 557, 561–563, 565, 566, 579, 586–589, 596, 601, 603, 607, 608, 616, 627–629, 631, 632, 643, 658, 660, 678, 684, 700, 701 financial instruments 436, 478, 658 financial intermediary 82, 109, 250, 384, 385, 388, 394, 398, 407, 408, 413, 459, 530, 656, 658, 660, 667, 684, 685, 700, 701 financial intermediation 91, 172, 402, 404, 451, 453, 480, 514, 648, 657, 664–666, 674, 677, 684, 695, 722, 733 financial market 25–27, 31, 70, 92, 111, 136, 172, 174, 177, 179, 185, 186, 188–190, 204, 205, 207, 216, 217, 221, 225, 232, 233, 236–241, 243, 245, 247, 248, 250, 252–254, 260, 263,
264, 270, 274–276, 285, 287, 289, 290, 292, 308, 319, 321, 324, 328, 359, 369, 377, 380, 381, 385, 386, 391, 392, 396, 398, 399, 417, 445, 447, 448, 451–453, 458, 494, 495, 501, 502, 526, 535, 539, 547, 552, 553, 555, 556, 562–566, 570, 572–576, 580, 583, 595, 596, 598, 604, 622, 626–628, 634, 640, 642, 657, 659, 660, 663, 674, 683, 684, 694, 697, 710, 719, 722, 734, 773 financial modeling 667 financial policy 91, 172, 182, 368, 397, 491, 493, 627, 642, 649 financial product 364, 380, 462, 474, 518, 520, 595, 639 financial regulation 18, 90, 91, 93, 143, 192, 367, 463, 475, 479, 480, 488, 500, 526–528, 539, 543, 545, 548, 616–618, 661, 666, 674, 677, 733, 735 financial regulators 528 financial risk 486, 535, 683, 700 financial sector 28, 73, 84, 93, 127, 131, 406, 407, 457, 458, 462, 465, 472, 474, 476, 477, 490, 499, 500, 519, 521, 522, 526, 535, 547, 561, 566, 572, 573, 585, 595, 596, 613, 615, 616, 621, 645, 651, 655, 658, 664, 669, 670, 680, 683–686, 688, 691, 694, 695, 698, 700, 701, 705, 710, 716, 722, 723, 725, 729 financial services 18, 35, 41, 117, 190, 458, 478, 488, 499, 500, 502, 503, 515, 517, 545, 622, 630, 639, 647, 649 financial shock 67, 73, 622, 719, 721, 723, 727 financial stability financial stability management 498, 701 financial stability measure 499, 597, 610 financial stability objective 116, 292, 494 financial stability policy 343, 617, 623, 634 financial stability responsibilities 117, 120, 483, 490, 492 financial stability risks 492 financial system 17, 18, 21, 23, 27, 61, 89, 90, 109, 120, 140, 171, 172, 180, 182, 185, 190, 191, 337, 346, 348, 351, 369, 394, 400, 401, 436, 444, 445, 449, 450, 472–474, 476, 479, 480, 486–493, 498, 501, 514, 527, 534, 535, 539, 541, 543, 551, 585, 586, 588–590, 598, 602, 608, 611, 621, 625, 631, 632, 640, 642, 644, 645, 647, 656, 674, 700, 732 financial variable 287, 290, 337, 351, 393, 421, 430, 710, 711
financial crisis Global Financial Crisis 60, 88, 171, 173, 175, 177, 179, 181, 183, 185, 187, 189, 191, 192, 253, 263, 292, 301, 312, 323, 359, 368, 380–382, 384, 385, 391, 392, 395, 457, 463, 482, 489, 520, 585, 593, 615, 620, 667, 679, 728, 738, 773 great financial crisis 232, 246, 607 great recession 92, 232, 246, 251, 646, 647, 679, 682, 683, 686, 691, 694–696, 708, 718–721, 723, 725, 727, 731, 747, 776 financial liberalization 466, 476, 580, 638 fintech 28, 32, 33, 35, 459 fiscal fiscal austerity 361 fiscal authority 133, 134, 136, 138, 139, 141, 143–145, 147–149, 151–155, 157–159, 161–165, 167, 168, 183, 416, 620, 624, 628, 629, 643, 645 fiscal burden 93, 131, 133–145, 147–149, 151–155, 157–159, 161, 163–165, 167, 169, 616 fiscal crisis 131, 146, 161, 166, 168, 170, 552 fiscal deficits 599 fiscal expansion 178, 360 fiscal policy 84, 85, 88, 133, 167, 168, 178, 182, 188, 416, 449, 451, 485, 606, 611, 645, 658, 688, 693, 697–700, 723, 727, 732, 735, 736 fiscal redistribution 164 fiscal shocks 67, 73 fiscal smoothing 146 fiscal spending 729 fiscal stimulus 73, 82, 90, 93, 131, 697–699, 725, 730 forex 171, 173–175, 188, 189 forex interventions 171, 173–175, 189 forex market 174, 188 forex purchases 174 forex reserves 174 forward guidance 173, 175, 177, 188, 199, 202, 203, 224, 225, 232, 233, 235, 236, 238, 248, 251–255, 257, 259, 264, 279, 288, 312, 326, 327, 370, 387, 390, 405, 557, 695, 729, 732–734, 771, 775
GDP deflator 365 GDP growth 237, 313, 316, 319, 330, 374, 552, 559, 572 GDP targeting 194, 212–214, 225 global financial 23, 60, 171, 173, 175, 177, 179, 181, 183, 185, 187, 189, 191, 192, 253, 263, 292, 301, 312, 323, 359, 361, 368, 380–382, 384, 385, 391, 392, 395, 457, 463, 473, 476, 479, 481, 482, 489, 520, 525, 551, 585, 593, 615, 620, 667, 679, 728, 738, 773
global financial deregulation 457 global financial development 476, 481 global financial disruptions 679 global financial imbalances 525 global financial system 23, 473, 479, 551 Goodhart’s law 576 government bond 30, 31, 33, 133–135, 137, 141, 150, 156, 159–162, 166–168, 215, 217, 239, 249, 254, 257, 258, 260, 261, 319, 398, 401, 408, 413, 416, 437, 438, 442, 446, 449, 450, 513, 557, 562, 568, 570, 574–576, 580, 633, 642, 645, 650, 694, 696, 734 government budget 148, 638 government debt 74, 111, 113, 142, 173, 176, 347, 406, 413, 416, 445, 446, 449, 488, 513, 552, 557, 575, 590, 599, 631, 642, 732 government security 131, 137, 176, 437, 444, 449, 450, 531, 544 Great Depression 178, 179, 185, 282, 358, 359, 453, 483, 534, 535, 576, 578, 581, 646, 668 Great Moderation 26, 34, 188, 212, 488, 552, 559, 625, 717, 739 Greek bonds 166 Greek crises 24, 172, 187, 643 Greek government debt 513, 599, 642 Greenspan 21, 57, 204, 219, 226, 264, 266, 268–271, 282, 285, 355, 367, 490, 556, 573, 581, 595, 606, 614, 616, 651, 771 gross return 156 gross yield 408 growth coefficient 711 growth dynamics 552 growth effect 571, 576 growth limits 589 growth models 344 growth prospects 595 growth target 200, 213 growth theory 570
helicopter money 82, 173, 178, 188, 579 Hodrick-Prescott filter 746, 775 hyperinflation 71, 134, 149, 168, 169, 188, 197, 437, 635, 638
inflation expectations 124, 146, 195, 196, 200–202, 209, 231–235, 241–247, 250–254, 256, 258, 260, 268, 275, 289, 317, 357, 360, 368, 694, 741, 743, 744, 746, 763, 764, 769, 773–776
inflation forecast 115, 243, 254, 736, 738–757, 763–765, 767–769, 771–773, 775–777 inflation forecasting 251, 774 inflation rate 62, 63, 70, 71, 77, 80, 81, 85–87, 96, 118, 175, 179, 191, 196, 198–200, 212, 219, 231, 238, 242, 244, 246, 248, 258, 276, 277, 282, 316, 365, 410, 412, 423, 426, 484, 485, 558, 559, 577, 633, 638, 665, 689–692, 703, 709, 725, 737, 746, 749, 752 inflation target 33, 39, 94, 115, 118, 123, 133, 151, 173, 179, 184, 188, 190–192, 203, 212, 213, 219, 232, 233, 243–247, 251, 252, 254, 272, 311, 318, 360, 488, 556, 559, 560, 576, 578, 579, 595, 736, 770 inflation-targeting 58, 73, 75–77, 79, 80, 98, 219, 233, 248, 265, 306, 308, 482, 488, 558, 559, 561, 577, 578, 594, 737, 746, 763 Ingves 45–52, 64, 91 interbank offered rate 209, 389, 520 interbank rate 202–204, 313 interest-rate pass-through 369, 371, 374, 391, 392, 395, 396
Japanese bubble 555, 570 Japanese deflation 634 Japanese financial crisis 565 judicial bodies 40 judicial court 39, 40, 53, 54
Kuroda 755, 769–771, 775
Lehman Brothers 172, 176, 186, 189, 204, 284, 313, 317, 359, 392, 402, 510, 541, 544, 586, 588, 590, 593, 607, 608, 616, 624, 627, 646, 656, 719 liability 32, 109, 131, 132, 134, 136, 138, 147–150, 153, 157–160, 162, 163, 166–168, 189, 353, 370, 380–383, 389, 393, 395, 396, 398, 405, 408, 413, 445, 469, 504, 510–513, 530, 531, 534, 540, 567, 576, 579, 601, 608, 609, 613, 621, 631, 637–639, 642, 660, 670, 672, 675, 677 long bond 406, 407, 414, 416, 417, 423, 425, 426, 429, 430, 434 long debt 416, 429, 430 long rate 237, 240, 343, 417, 435, 729 long-term long-term assets 241, 247, 469, 530 long-term bond 162, 203, 216–218, 237, 241, 345, 414–416, 423, 429, 638, 695
long-term debt 216, 217, 415, 416, 672 long-term deflation 246 long-term equilibrium 205, 352, 570 long-term inflation expectations 245, 317 long-term inflation forecasts 746 long-term interest rates 174, 216, 224, 231, 233, 241, 256, 260, 316, 317, 330, 394, 557
macrobanking 686 macroeconomic macroeconomic conditions 67, 354 macroeconomic consequences 245 macroeconomic data 321 macroeconomic determinants 745 macroeconomic developments 319, 331, 492, 660, 695, 713 macroeconomic disturbances 331 macroeconomic dynamics 701, 721, 723, 736 macroeconomic economists 193 macroeconomic effects 251, 253, 256, 327, 666, 682, 696, 729, 730, 769 macroeconomic expectations 774 macroeconomic factors 64 macroeconomic fluctuations 403, 714 macroeconomic forecasting 728, 773, 777 macroeconomic forecasts 310–312, 330, 464 macroeconomic implications 731 macroeconomic indicators 63 macroeconomic model 25, 232, 247, 311, 330, 404, 660, 666, 671, 679–683, 686–688, 694, 707, 713–715, 722, 724, 725, 729, 736 macroeconomic modelers 679 macroeconomic objective 62, 118, 331 macroeconomic outcomes 25, 62, 233, 331, 679 macroeconomic policy 84, 131, 191, 310, 338, 484, 724, 729, 732, 735, 736 macroeconomic policymaking 625 macroeconomic research 679, 694 macroeconomic risks 242, 615 macroeconomic shocks 61, 62, 65, 67, 80, 82, 83 macroeconomic stability 484, 666 macroeconomic stabilization 486 macroeconomic theory 178, 666, 743 macroeconomic trends 64 macroeconomic uncertainty 774, 776 macroeconomic variables 25, 63, 245, 259, 319, 337, 499, 706, 726 macroeconomic volatility 700 macrofinance 226, 421, 429, 435, 684, 724
macrofinancial 581, 619–621, 623–625, 627, 629, 631, 633, 635, 637, 639, 641, 643, 645, 647, 649, 651, 688, 697–699, 704, 707, 733, 736 macroprudential macroprudential agency 492, 493, 496, 498 macroprudential authority 17, 701, 702, 704 macroprudential consensus 483, 485, 487, 489, 491, 493–495, 497–499, 501 macroprudential crisis 493 macroprudential instrument 27, 355, 361–363, 365, 500, 700–704, 725 macroprudential interventions 606 macroprudential judgments 496, 497 macroprudential market 495 macroprudential objective 18, 492 macroprudential oversight 491, 492 macroprudential policies 27, 35, 292, 365, 366, 397, 400, 402, 480, 489, 494, 501, 546, 650, 679, 680, 728, 734 macroprudential policy 35, 352, 367, 458, 472, 474, 475, 479, 480, 482, 489–491, 494, 496, 500, 501, 537, 546, 547, 615, 626, 688, 693, 700–704, 728, 729, 732, 735 macroprudential regulation 17, 20, 131, 173, 180, 183, 188–190, 396, 458–460, 462–464, 468, 472, 474, 475, 528, 529, 531, 533–537, 539, 541–545, 547 macroprudential responsibility 488, 489, 492, 493 macroprudential shock 702 macroprudential stability 24, 483, 486, 488, 489, 491, 493–495, 498, 589 macroprudential supervision 24, 61, 182, 486, 491, 498, 588 macroprudential supervisory 495 management 17, 20–25, 58, 91, 93, 100–103, 111, 113, 116, 121, 124, 180, 181, 192, 232, 246, 359, 385, 386, 396, 397, 401, 404, 405, 442, 446, 449–451, 457, 460, 468, 469, 477, 482–484, 488, 489, 498, 500, 505, 513, 522–524, 527, 551–553, 555–563, 565–571, 573–577, 579–583, 585–587, 589–591, 593, 595, 597, 599, 601, 603, 605, 607, 609–611, 613, 615, 617, 619, 620, 622, 625, 626, 630, 633–636, 638, 641, 642, 644, 645, 649, 651, 667, 672, 701 management decisions 644 management intervention 620, 630 management policy 597, 620, 627–629, 633 marginal 24, 143, 146, 195, 196, 204, 205, 314, 340, 346, 362–364, 373, 374, 398, 407, 412, 418, 423, 515–517, 530, 552, 568, 570, 576, 580, 599, 661, 662, 668, 675, 699, 725 marginal cost of financing 146
marginal cost of funding 373 marginal costs of regulation 516 marginal efficiency of capital 195, 204, 570 marginal efficiency of investment 407, 662 microprudential 182, 190, 396, 458–460, 468, 473, 474, 477, 485–494, 498, 533, 534, 537, 538, 588, 606, 622, 623, 628, 700 microprudential bank 628 microprudential banking 486–489 microprudential policies 491 microprudential regulation 190, 396, 458–460, 468, 486, 533, 534, 537, 538 microprudential regulator 491, 498 microprudential requirements 477 microprudential stability 486, 491, 494 microprudential supervision 485–487, 489, 493, 498, 588, 623 microprudential supervisor 494, 623 microprudential supervisory 493 model model comparison 688, 698, 725, 733, 736 model competition 724 model description 139, 727 model diversity 26 model economies 193 model economy 407, 408, 432 model estimation 716 model features 407, 689, 690 model featuring 393 model forecasts 715, 716, 718, 719 model generation 559 model generations 724 model innovations 384 model simulations 237 model solution 722 model specification 716, 753 model structures 689 model uncertainty 25, 26, 28, 681, 683, 685, 688–691, 693, 695, 697–699, 701, 703– 707, 710, 711, 713–7 17, 719, 721–723, 725, 727–729, 731, 733–735 moderation 26, 34, 174, 188, 212, 488, 552, 559, 574, 625, 717, 739 monetary monetary aggregate 115, 211, 306, 307, 332, 333, 355, 484, 559, 578 monetary analysis 313 monetary authority 18, 25, 26, 39, 40, 69, 84, 94, 125, 149, 329, 343, 344, 402, 451, 501, 645, 669, 674, 704, 741, 747, 763, 768 monetary base 146, 189, 559, 560, 562, 578, 631, 638, 667, 668, 694, 695, 697
monetary change 381 monetary committee 498 monetary community 300, 306, 325 monetary consequences 447 monetary constitution 125 monetary cycles 86 monetary dynamics 169 monetary economics 34, 86–90, 93, 133, 144, 169, 191, 224, 225, 251, 252, 255, 256, 286, 327, 365, 367, 399, 400, 404, 435, 444, 480, 546, 547, 582, 618, 649, 650, 677, 727, 729, 730, 732–735, 773, 774, 776 monetary economists 484 monetary economy 170, 671, 677 monetary equilibria 678 monetary equilibrium 160, 676 monetary expansion 179, 551, 560, 561, 563, 566, 571, 572, 579, 580, 667, 668 monetary growth 656, 663 monetary models 736 monetary policies 57, 62, 68, 83, 173, 174, 178, 187, 188, 191, 232, 240, 242, 251–253, 260, 261, 283, 286, 290, 291, 394, 405, 470, 497, 552, 557, 575, 576, 579, 582, 700, 730, 752 monetary policy monetary policy analysis 26, 330, 682, 688, 724, 727, 735 monetary policy announcement 236, 240, 259, 290, 321, 325, 747 monetary policy authority 59, 62, 83, 233 monetary policy committee 39–41, 43, 45, 47, 49, 51, 53, 55, 57, 58, 117, 235, 248, 318, 329, 488, 603, 606, 648, 651, 652, 747, 751, 776 MPC 39, 42, 45, 51, 53, 56–58, 117, 235, 236, 248, 318–322, 326, 328, 488, 493, 494, 496, 497, 649 monetary policy communications 391 monetary policy conference 403 monetary policy crisis 551, 552, 555, 558, 560, 562, 563, 565, 566, 568–571, 573–576 monetary policy decision 240, 271, 285, 288, 289, 308, 312, 316, 319–322, 325, 326, 330, 331, 369, 498, 579, 623, 646, 650, 651, 755, 773 monetary policy forecast 740 monetary policy institutions 59, 60, 69, 70 monetary policy instrument 95, 172, 173, 175, 188, 189, 392 monetary policy objective 115, 301, 307, 308, 310, 311, 316, 318, 329, 605 monetary policy regime 64, 205
monetary policy shock 224, 240, 242, 260, 264, 275, 276, 280–282, 345, 682, 686, 689–693, 708, 709, 728, 776 monetary policy strategy 231, 250, 254, 290, 306, 308, 310, 311, 316, 318, 326, 650 monetary policy transmission 87, 331, 369, 381, 394, 401, 405, 560, 691, 692, 697 monetary policy transparency 218, 220, 223, 226, 288, 289, 292–294, 301–307, 323, 324, 328, 329 monetary policymakers 68, 82, 218, 287, 628, 647, 722 monetary policymaking 55, 56, 82, 88, 126, 188, 327 monetary regime 84, 94 monetary responsibilities 488 monetary sector 658, 661 monetary shock 275–282, 284, 285, 402 monetary sovereignty 169 monetary stability 18, 64, 65, 67, 93, 95, 109, 112, 114, 116, 117, 121, 224, 363, 452, 483, 486–489, 491–494, 496, 498, 598, 615, 625, 629, 634, 650 monetary strategy 82, 193 monetary target 93, 125, 360, 361, 502 monetary targeters 306, 307 monetary targeting 502 monetary theory 201, 285, 452, 581, 736 monetary transactions 315, 392, 632, 647 monetary transmission 290, 310, 312, 369–371, 373, 391, 393, 394, 396, 397, 400, 401, 403–405, 445, 448, 471, 646 monetary transmission disturbances 310, 312 monetary transmission mechanism 373, 391, 394, 403, 404, 445, 448, 471 monetary transmission policy 400 monetary transparency 223 monetary union 74, 91, 126, 131, 300, 306, 312, 315, 324, 325, 579 monetary velocity 631 monetization 83, 85, 642, 675 money money demand 202, 690 money-macro 666 money supply 31, 92, 123, 194, 195, 198–200, 202, 213, 215, 216, 219, 222, 225, 330, 440, 486, 494, 553, 558, 559, 561, 578, 579, 665, 667, 678 money supply growth 123, 330, 558, 559, 578, 579 money supply targeting 194, 199, 222 monopolist 144, 346, 347, 411
monopoly 105, 110, 138, 144, 145, 340, 347, 355, 377, 411, 429, 438, 460, 483, 571
moral hazard 62, 105, 106, 125, 126, 137, 461, 462, 465, 475, 486, 504, 505, 510, 511, 513, 529, 533, 534, 537–539, 542, 543, 586, 619, 622, 628, 634, 639, 685, 692, 695, 699, 701, 702
mortgage-backed securities 173, 176, 278, 317, 394, 634
mortgage-servicing 360
New Keynesian DSGE framework 682, 685, 724
New Keynesian DSGE generations 705
New Keynesian DSGE model 245, 679, 682, 684–686, 688–695, 697, 698, 701, 707, 714, 717, 720–724, 726
New Keynesian literature 684
New Keynesian model 34, 184, 190, 191, 201, 203, 680–691, 693–698, 704, 705, 707, 710, 715, 720, 730, 731
New Keynesian modeling 682
new neoclassical synthesis 682, 733
new policy instrument 406, 700
nominal
nominal GDP 194, 212–215, 221, 225
nominal GDP targeting 194, 212, 213, 225
nominal GDP targets 214
nominal government bonds 168
nominal government liability 138
nominal growth target 213
nominal income growth 146, 152
nominal interest rate 133, 139, 140, 145, 148, 151, 156, 157, 191, 195, 198, 202, 212, 219, 261, 360, 409, 556, 670, 685, 689–692, 697, 700, 706, 725–727, 730, 732, 734, 735
nominal money supply 194, 202
nominal reserves debt 150
open economy 90, 91, 121, 125, 174, 344, 728, 730, 732, 735
open market 17, 31, 39, 160, 169, 175, 176, 206, 210, 211, 226, 234, 263, 265, 315, 316, 436–453, 464, 486, 578, 624, 631, 665, 697, 742, 774
open market operations 17, 31, 160, 169, 170, 176, 210, 211, 226, 265, 315, 436–453, 486, 578, 665
open market policy 450–452
open market purchases 437, 439, 624, 631, 728
open market sales 450
open market sterilization 448
output
aggregate output 404
output cycle 215
output effect 689, 692, 695, 698
output expansion 373
output gap 60, 124, 203, 215, 219, 220, 330, 423, 425, 428, 682, 699, 706–713, 723, 725, 726, 739, 746, 752, 756, 757, 768, 771
output growth 198, 212–215, 223, 325, 682, 710, 711, 713, 719, 774
output-inflation 92
output losses 546, 566
output stabilization 266
output volatility 625
overcompensation 431
overestimation 566
overinvestment 358, 552–555, 564, 565, 569, 577, 579, 580, 582
ownership 29, 100, 124, 341, 352–354, 466, 512, 662
pass-through 347, 369–371, 373–375, 377, 379, 380, 390–392, 394–396, 399, 401–403, 405, 763
policy
policy analysis 26, 27, 226, 311, 330, 628, 681, 682, 686, 688, 722, 724, 727, 729, 730, 732, 733, 735, 736
policy evaluation 583, 679, 686, 706, 734, 735
policy failures 83, 644
policy framework 244, 289, 306–308, 323–325, 328, 330, 332, 338, 344, 441, 447, 501, 626, 677, 701
policy imbalances 639
policy-induced 384
policy instrument 17, 95, 171–173, 175, 182, 188, 189, 209, 213, 231, 283, 291, 330, 338, 352, 392, 406, 439, 443, 448, 472, 479, 623, 634, 688, 700, 702, 703, 707
policy objective 115, 200, 212, 220, 243, 288, 291, 292, 301, 307, 308, 310, 311, 316, 318, 329, 397, 468, 485, 486, 489, 597, 605
policy rule 84, 94, 170, 218–222, 225, 330, 416, 422, 424, 559, 561, 562, 583, 628, 639, 680, 689–693, 696, 698–700, 703–706, 708, 709, 711, 713, 722, 723, 725, 728, 731, 734–736
policy transmission 87, 191, 331, 369, 381, 394, 400, 401, 405, 560, 680, 684, 685, 688, 691, 692, 697, 722, 728
policy transparency 218, 220, 223, 226, 288, 289, 292–294, 301–308, 310, 311, 313, 315, 319, 320, 323, 324, 328, 329, 331
policy uncertainty 249, 739, 740, 769–771, 773, 775
policymaker 17, 18, 26–28, 33, 42, 55, 57, 61, 62, 64–68, 71, 73, 77, 82, 85, 86, 172, 195, 197, 198, 200, 201, 203, 204, 208–210, 212–215, 218–222, 287, 289, 290, 311, 321, 362, 373, 406, 407, 434, 537, 540, 558, 620, 628, 643, 644, 647, 679–681, 686, 688, 689, 693, 700–702, 704, 705, 710, 711, 713, 714, 716, 722–724, 726, 727, 738, 770
policymaking 55, 56, 65, 82, 85, 88, 119, 126, 188, 193, 194, 208, 215, 218–220, 222, 287, 288, 327, 406, 622, 623, 625, 627, 643, 680, 688, 704, 705, 707, 723, 776
policy-tightening 271
political economy 34, 57–59, 61, 65, 82, 87–89, 91–94, 124, 126, 191, 224, 225, 327, 368, 479, 500, 502, 546, 548, 581–583, 615, 636, 645, 647–650, 673, 676–678, 730, 734, 777
political economy characteristics 61
political economy dynamics 636
political economy framework 65, 82
political economy model 61
political legitimacy 624, 643, 644
political science 42, 53, 55, 57, 58, 64, 87, 90–92, 500, 502, 647, 650, 739
political transparency 288, 289, 310, 311, 318, 324, 329
post-Bretton Woods 68, 625
post-crisis 27, 119, 179, 180, 187, 189, 283, 288–293, 301, 303–305, 307, 308, 310–315, 317, 319, 321, 323, 325, 327, 329, 331, 333, 394–396, 399, 468, 475, 476, 478, 482, 492, 494–496, 504, 505, 527, 633, 648, 649
post-crisis period 283, 288, 292, 304, 308, 311, 313, 314, 323, 399
post-crisis reforms 478
post-crisis regulation 527
post-crisis regulatory reforms 180
post-Keynesian economics 405
post-Lehman period 314
potential output 201, 203, 213–215, 219, 223, 225, 316, 566
price-level 33, 140, 212, 222, 697
price-level targeting 33, 212, 222, 697
proportionality 341, 505, 506, 508, 513–518, 526
prudential 61, 83, 92, 182, 366, 395, 396, 401, 404, 457–471, 473, 475–477, 479–481, 489, 490, 493, 503, 504, 508, 521, 598, 616, 639, 727
prudential policy 366, 400, 458, 475, 476, 479, 616, 727
prudential regulation 94, 182, 395, 401, 404, 457–463, 465–471, 473, 475–481, 499, 735
prudential regulator 459, 461, 463, 639
prudential requirements 396, 504
prudential standards 521
prudential supervision 458–460, 462, 463, 465, 467, 475, 548
prudential supervisor 61, 92, 464, 465, 475
public
public debt 63, 95, 131, 133, 134, 137, 139, 142, 147, 148, 158, 161, 169, 451, 452, 574, 576
public deficits 95
public expenditure 451, 574
public financial policies 91, 649
public institution 137, 738
public insurance mechanism 570
public investment 570, 576
public legitimacy 627
public liability 148, 159, 167
public sector 99, 101, 182, 451, 562, 568, 574, 579, 587, 589, 594, 598, 599, 646
public spending 115, 167, 565
quantitative
quantitative assessments 136
quantitative economics 367
quantitative effect 416, 690
quantitative evaluation 648
quantitative evidence 395
quantitative predictions 704
quantitative restrictions 178
quantitative easing 64, 134, 155, 170, 171, 173, 192, 212, 215, 237, 238, 251, 253–255, 270, 290, 399, 416, 443, 451, 452, 470, 494, 499, 597, 599, 633, 668, 694, 696, 744
QE 27, 33, 34, 64, 77, 134, 155, 161, 170–177, 188, 189, 212, 215–218, 220–222, 226, 237–242, 256, 260–262, 270, 273, 280, 283, 314, 319, 416, 435, 443, 450, 494–499, 599, 600, 606, 633, 634, 668, 694–697, 733, 744, 752, 753
quasi-confidence 739, 752, 753, 755, 756, 764, 768
Rajan 386, 404, 621, 625, 645, 648, 651
rate of growth 198, 200, 610
rate of inflation 187, 195–198, 203, 208, 213, 218, 219, 289, 441, 484, 485, 498, 644
rate of substitution 418
rate of unemployment 73, 175, 316
real business cycle 84, 655, 681
real debt 136, 566
real effect 242, 344, 400, 407, 410, 429, 544, 657, 665, 667, 681, 682, 692, 694, 697
real estate bubble 565, 580, 641
real estate crash 337, 361, 362
real estate market 338, 348, 356, 358, 361, 362, 490, 562, 564, 579
real GDP 73, 74, 76, 79, 81, 194, 196, 214, 241, 242, 313, 316, 319, 534–536, 544, 559, 746
real growth 560, 571, 572, 578
real income 194, 340–342, 371, 580
real interest rate 21, 139, 142, 152, 157, 195, 219, 231, 232, 234, 245, 250, 258, 275, 276, 282, 339, 344–348, 354, 359, 365, 484, 552, 695
reform 70, 86, 88, 120, 124, 150, 168, 172, 179–182, 188, 192, 449, 451, 492, 502, 504–506, 509, 520, 524, 525, 527, 547, 548, 610, 614, 615, 617, 651, 666, 674
reformers 172
reforming 82, 92, 121, 127, 610, 616, 650
regulatory reform 171–173, 179–182, 187, 188, 191, 502, 504, 505, 509, 674
regulatory regime 504–513, 515–517, 519, 521, 523, 525, 527
regulatory requirements 462, 463, 513, 516, 517, 701
repurchase agreement 139, 166, 440, 442, 450
reserve
reserve policy 125
reserve position 267, 268, 441, 447, 448, 450, 451
reserve requirement 173, 185, 188, 383, 474, 596, 638
reserve resources 451
reserve system 82, 132, 255, 263, 266, 316, 442, 541, 547, 614, 683, 687, 708, 724, 725, 729–731, 734, 736, 746
Reserve Bank of Australia 254, 328, 582, 746, 755, 771
Reserve Bank of India 625
Reserve Bank of New Zealand 62, 100, 107, 117–120, 123, 126, 210, 212, 233, 248, 257, 265, 293, 301, 304, 324, 391, 591, 597, 609, 612, 613, 617, 618, 746, 763
Riksbank 40, 45, 48, 50–52, 56, 90, 238, 257, 284, 293, 391, 445, 607, 683, 686, 687, 727, 746
SR 746
risk
risk capital requirements 467
risk management 91, 180, 396, 404, 459, 468, 470, 476, 522–524, 610, 621
risk market 386
risk measurement 728
risk-neutral 371, 664
risk premium 202, 237, 250, 348, 354, 381, 398, 429, 434, 471, 686, 687, 694, 719, 724
risk-weight 120, 458, 467–471, 474, 476, 477, 507, 545, 586, 592, 611, 668
robust monetary policy 680, 704, 734
Royal Bank of Scotland 25, 639, 640
secular stagnation 177, 183, 192, 552, 576, 583
securities 17, 31, 32, 35, 122, 131, 132, 136, 161, 164, 166, 173, 176, 177, 185, 186, 190, 195, 200, 202, 205, 211, 215–218, 239, 241, 242, 275, 278, 290, 291, 315, 317, 359, 371–373, 384–386, 394, 398, 401, 436–446, 448–450, 452, 461, 463, 466, 467, 469, 471, 473–475, 477, 479, 480, 487, 531, 535, 544, 548, 567, 572, 592, 599, 609, 610, 624, 630–632, 634
seigniorage 93, 134, 143–150, 152–154, 156, 157, 159, 162, 164, 167, 169, 620, 645
short-term
short-term assets 241
short-term bond 165, 203, 415
short-term government bonds 160, 408
short-term government debt 176, 347
short-term government securities 531
short-term inflation expectations 242, 244, 247
short-term inflation forecast 764
short-term inflation outlook 745, 755, 764, 768
short-term interest rates 31, 173, 176, 202–208, 211, 223, 224, 226, 233, 237, 252, 257, 313, 390, 415, 417, 484, 490, 693
short-term liabilities 413, 469, 530
short-term loans 202, 639, 664, 671
short-term policy 176, 178, 183, 184, 233, 237, 370, 434, 471
short-term private bonds 134
short-term repo 477
short-term responsiveness 471
short-term security 202, 205, 216
short-term shocks 740
short-term yields 205
short-term ZLB 184
social convention 623, 626
social costs 504, 510–512, 525
social preferences 64
social security 136
social trust 87
social value 328, 776, 777
social welfare 65, 66, 118, 478
solvency 24, 134, 139, 148, 151–153, 155–157, 161, 163, 164, 168, 169, 472, 536, 598, 614, 636, 637
stagflation 198, 484, 488, 548
stagnation 82, 177, 183, 192, 552, 566, 576, 583
stakeholder 103, 511, 514, 515, 526, 592, 622–624, 629
stock market 111, 219, 239, 252–254, 368, 383, 449, 451, 562, 564, 565, 573, 579, 581, 586, 595, 597, 763
term premium 163, 202, 203, 205–208, 217, 237, 241, 247, 260, 261, 406, 407, 409, 411, 413, 415–419, 421–435
term premium peg 407, 416, 430–434
time-varying risk effects 434
time-varying risk premium 434, 686
too big to fail 113, 461, 509, 540, 547, 586, 588, 590, 618
total factor productivity 407, 569, 581
unconventional channels 395
unconventional monetary policies 167, 173, 232, 239, 241, 242, 247, 249–253, 260, 261, 283, 286, 288, 290, 291, 391–394, 396, 405, 406, 557, 575, 576, 695, 752, 755
variability 406, 407, 409, 411, 413, 415, 417–419, 421–426, 429, 431, 434, 435
variance 24, 240, 257, 276, 279, 280, 284, 285, 378, 465, 484, 485, 690, 726, 771, 773
vector autoregression 241, 404, 656, 682, 724
VAR 242, 244, 249, 250, 257, 259, 264, 275, 278, 280, 281, 393, 405, 682, 690, 705, 710, 715, 720, 726, 757–762
welfare-improving 659
welfare-increasing 429
welfare-reducing 434
Yellen 219, 378, 399, 464, 472, 481, 769, 777
zero-coupon 136
zero-interest 568
zero-interest-rate 566–569, 730
zero lower bound 171, 191, 192, 248, 251, 252, 255, 256, 263, 329, 370, 583, 668, 693, 727, 773, 775
ZLB 33, 171–178, 183, 184, 188–190, 248, 261–264, 268–271, 274, 276, 277, 282, 283, 285, 370, 693–700, 706, 719, 725, 744
zero-nominal-interest 693