Cheng-Few Lee • John C. Lee Editors
Handbook of Financial Econometrics and Statistics
With 281 Figures and 490 Tables
Editors Cheng-Few Lee Department of Finance and Economics, Rutgers Business School Rutgers, The State University of New Jersey Piscataway, NJ, USA and Graduate Institute of Finance National Chiao Tung University Hsinchu, Taiwan John C. Lee Center for PBBEF Research North Brunswick, NJ, USA
ISBN 978-1-4614-7749-5
ISBN 978-1-4614-7750-1 (eBook)
ISBN 978-1-4614-7751-8 (print and electronic bundle)
DOI 10.1007/978-1-4614-7750-1
Springer New York Heidelberg Dordrecht London

Library of Congress Control Number: 2014940762

© Springer Science+Business Media New York 2015

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. Exempted from this legal reservation are brief excerpts in connection with reviews or scholarly analysis or material supplied specifically for the purpose of being entered and executed on a computer system, for exclusive use by the purchaser of the work. Duplication of this publication or parts thereof is permitted only under the provisions of the Copyright Law of the Publisher's location, in its current version, and permission for use must always be obtained from Springer. Permissions for use may be obtained through RightsLink at the Copyright Clearance Center. Violations are liable to prosecution under the respective Copyright Law.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

While the advice and information in this book are believed to be true and accurate at the date of publication, neither the authors nor the editors nor the publisher can accept any legal responsibility for any errors or omissions that may be made. The publisher makes no warranty, express or implied, with respect to the material contained herein.

Printed on acid-free paper

Springer is part of Springer Science+Business Media (www.springer.com)
Preface
Financial econometrics and statistics have become very important tools for empirical research in both finance and accounting. Econometric methods are important tools for conducting research in asset pricing, corporate finance, options and futures, and financial accounting. Important econometric methods used in this research include single equation multiple regression, simultaneous regression, panel data analysis, time series analysis, spectral analysis, non-parametric analysis, semi-parametric analysis, GMM analysis, and other methods. Portfolio theory and management research have used different statistical distributions, such as the normal distribution, stable distribution, and log-normal distribution. Options and futures research have used the binomial distribution, log-normal distribution, non-central chi-square distribution, Poisson distribution, and others. Auditing research has used sampling survey techniques to determine the sampling error and non-sampling error for auditing.

Based upon our years of experience working in the industry, teaching classes, conducting research, writing textbooks, and editing journals on the subject of financial econometrics and statistics, this handbook will review, discuss, and integrate theoretical, methodological, and practical issues of financial econometrics and statistics.

There are 99 chapters in this handbook. Chapter 1 presents an introduction to financial econometrics and statistics and shows how readers can use this handbook. The following chapters, which have been contributed by accredited authors, can be classified by the following 14 topics:

i. Financial Accounting (Chapters 2, 9, 10, 61, 97)
ii. Mutual Funds (Chapters 3, 24, 25, 68, 88)
iii. Microstructure (Chapters 4, 44, 96, 99)
iv. Corporate Finance (Chapters 5, 21, 30, 38, 42, 46, 60, 63, 75, 79, 95)
v. Asset Pricing (Chapters 6, 15, 22, 28, 34, 36, 39, 45, 47, 50, 81, 85, 87, 93)
vi. Options (Chapters 7, 32, 37, 55, 65, 84, 86, 90, 98)
vii. Portfolio Analysis (Chapters 8, 26, 35, 53, 67, 73, 80, 83)
viii. Risk Management (Chapters 11, 13, 16, 17, 23, 27, 41, 51, 54, 72, 91, 92)
ix. International Finance (Chapters 12, 40, 43, 59, 69)
x. Event Study (Chapter 14)
xi. Methodology (Chapters 18, 19, 20, 29, 31, 33, 49, 52, 56, 57, 58, 62, 74, 76, 77, 78, 82, 89)
xii. Banking Management (Chapter 64)
xiii. Pension Funds (Chapter 66)
xiv. Futures and Index Futures (Chapters 48, 70, 71, 94)

In addition to this classification, based upon the keywords of Chapters 2–99, we classify the information into (a) finance and accounting topics and (b) methodology topics. This information can be found in Chapter 1 of this handbook.

In the preparation of this handbook, we would first like to thank the members of the advisory board and the contributors to this handbook. In addition, we appreciate the extensive help from our editor, Mr. Brian Foster, our research assistants, Tzu Tai and Lianne Ng, and our secretary, Ms. Miranda Mei-Lan Luo. Finally, we would like to thank the Wintek Corporation and APEX International Financial Engineering Res. & Tech. Co. Ltd. for the financial support that allowed us to write this book.

There are undoubtedly some errors in the finished product, both typographical and conceptual. We invite readers to send suggestions, comments, criticisms, and corrections to the author Professor Cheng F. Lee at the Department of Finance and Economics, Rutgers University, at the email address lee@business.rutgers.edu.

December 2012
Cheng-Few Lee John C. Lee
Advisory Board
Ivan Brick, Rutgers, The State University of New Jersey, USA
Stephen Brown, New York University, USA
Charles Q. Cao, Penn State University, USA
Chun-Yen Chang, National Chiao Tung University, Taiwan
Wayne Ferson, Boston College, USA
Lawrence R. Glosten, Columbia University, USA
Martin J. Gruber, New York University, USA
Hyley Huang, Wintek Corporation, Taiwan
Richard E. Kihlstrom, University of Pennsylvania, USA
E. H. Kim, University of Michigan, USA
Robert McDonald, Northwestern University, USA
Ehud I. Ronn, The University of Texas at Austin, USA
About the Editors
Cheng-Few Lee is a Distinguished Professor of Finance at Rutgers Business School, Rutgers University, and was chairperson of the Department of Finance from 1988 to 1995. He has also served on the faculty of the University of Illinois (IBE Professor of Finance) and the University of Georgia. He has maintained academic and consulting ties in Taiwan, Hong Kong, China, and the United States for the past four decades. He has been a consultant to many prominent groups, including the American Insurance Group, the World Bank, the United Nations, The Marmon Group Inc., Wintek Corporation, and Polaris Financial Group.

Professor Lee founded the Review of Quantitative Finance and Accounting (RQFA) in 1990 and the Review of Pacific Basin Financial Markets and Policies (RPBFMP) in 1998, and serves as managing editor for both journals. He was also a co-editor of the Financial Review (1985–1991) and the Quarterly Review of Economics and Finance (1987–1989). In the past 39 years, Dr. Lee has written numerous textbooks ranging in subject matter from financial management to corporate finance, security analysis and portfolio management to financial analysis, planning and forecasting, and business statistics. In addition, he edited two popular books, Encyclopedia of Finance (with Alice C. Lee) and Handbook of Quantitative Finance and Risk Management (with Alice C. Lee and John Lee). Dr. Lee has also published more than 200 articles in more than 20 different journals in finance, accounting, economics, statistics, and management. Professor Lee was ranked the most published finance professor worldwide during the period 1953–2008.

Professor Lee was the intellectual force behind the creation of the new Masters of Quantitative Finance program at Rutgers University. This program began in 2001 and has been ranked as one of the top ten quantitative finance programs in the United States. These top ten programs are located at Carnegie Mellon University, Columbia University, Cornell University, New York University, Princeton University, Rutgers University, Stanford University, University of California at Berkeley, University of Chicago, and University of Michigan.

John C. Lee is a Microsoft Certified Professional in Microsoft Visual Basic and Microsoft Excel VBA. He has a Bachelor and Masters degree in accounting from the University of Illinois at Urbana-Champaign. John has worked over 20 years in both the business and technical fields as an accountant, auditor, systems analyst, and business software developer. He is the author of the book on how to use MINITAB and Microsoft Excel to do statistical analysis, which is a companion text to Statistics of Business and Financial Economics, 2nd and 3rd editions, of which he is one of the co-authors. In addition, he has also coauthored the textbooks Financial Analysis, Planning and Forecasting, 2nd edition (with Cheng F. Lee and Alice C. Lee) and Security Analysis, Portfolio Management, and Financial Derivatives (with Cheng F. Lee, Joseph Finnerty, Alice C. Lee, and Donald Wort). John has been a Senior Technology Officer at the Chase Manhattan Bank and Assistant Vice President at Merrill Lynch. Currently, he is the Director of the Center for PBBEF Research.
Contents
Volume 1

1 Introduction to Financial Econometrics and Statistics .... 1
Cheng-Few Lee and John C. Lee
2 Experience, Information Asymmetry, and Rational Forecast Bias .... 63
April Knill, Kristina L. Minnick, and Ali Nejadmalayeri
3 An Appraisal of Modeling Dimensions for Performance Appraisal of Global Mutual Funds .... 101
G. V. Satya Sekhar
4 Simulation as a Research Tool for Market Architects .... 121
Robert A. Schwartz and Bruce W. Weber
5 Motivations for Issuing Putable Debt: An Empirical Analysis .... 149
Ivan E. Brick, Oded Palmon, and Dilip K. Patro
6 Multi-Risk Premia Model of US Bank Returns: An Integration of CAPM and APT .... 187
Suresh Srivastava and Ken Hung
7 Nonparametric Bounds for European Option Prices .... 207
Hsuan-Chu Lin, Ren-Raw Chen, and Oded Palmon
8 Can Time-Varying Copulas Improve the Mean-Variance Portfolio? .... 233
Chin-Wen Huang, Chun-Pin Hsu, and Wan-Jiun Paul Chiou
9 Determinations of Corporate Earnings Forecast Accuracy: Taiwan Market Experience .... 253
Ken Hung and Kuo-Hao Lee
10 Market-Based Accounting Research (MBAR) Models: A Test of ARIMAX Modeling .... 279
Anastasia Maggina
11 An Assessment of Copula Functions Approach in Conjunction with Factor Model in Portfolio Credit Risk Management .... 299
Lie-Jane Kao, Po-Cheng Wu, and Cheng-Few Lee
12 Assessing Importance of Time-Series Versus Cross-Sectional Changes in Panel Data: A Study of International Variations in Ex-Ante Equity Premia and Financial Architecture .... 317
Raj Aggarwal and John W. Goodell
13 Does Banking Capital Reduce Risk? An Application of Stochastic Frontier Analysis and GMM Approach .... 349
Wan-Jiun Paul Chiou and Robert L. Porter
14 Evaluating Long-Horizon Event Study Methodology .... 383
James S. Ang and Shaojun Zhang
15 The Effect of Unexpected Volatility Shocks on Intertemporal Risk-Return Relation .... 413
Kiseok Nam, Joshua Krausz, and Augustine C. Arize
16 Combinatorial Methods for Constructing Credit Risk Ratings .... 439
Alexander Kogan and Miguel A. Lejeune
17 Dynamic Interactions Between Institutional Investors and the Taiwan Stock Returns: One-Regime and Threshold VAR Models .... 485
Bwo-Nung Huang, Ken Hung, Chien-Hui Lee, and Chin W. Yang
18 Methods of Denoising Financial Data .... 519
Thomas Meinl and Edward W. Sun
19 Analysis of Financial Time Series Using Wavelet Methods .... 539
Philippe Masset
20 Composite Goodness-of-Fit Tests for Left-Truncated Loss Samples .... 575
Anna Chernobai, Svetlozar T. Rachev, and Frank J. Fabozzi
21 Effect of Merger on the Credit Rating and Performance of Taiwan Security Firms .... 597
Suresh Srivastava and Ken Hung
22 On-/Off-the-Run Yield Spread Puzzle: Evidence from Chinese Treasury Markets .... 617
Rong Chen, Hai Lin, and Qianni Yuan
23 Factor Copula for Defaultable Basket Credit Derivatives .... 639
Po-Cheng Wu, Lie-Jane Kao, and Cheng-Few Lee
24 Panel Data Analysis and Bootstrapping: Application to China Mutual Funds .... 657
Win Lin Chou, Shou Zhong Ng, and Yating Yang
25 Market Segmentation and Pricing of Closed-End Country Funds: An Empirical Analysis .... 669
Dilip K. Patro
Volume 2

26 A Comparison of Portfolios Using Different Risk Measurements .... 707
Jing Rung Yu, Yu Chuan Hsu, and Si Rou Lim
27 Using Alternative Models and a Combining Technique in Credit Rating Forecasting: An Empirical Study .... 729
Cheng-Few Lee, Kehluh Wang, Yating Yang, and Chan-Chien Lien
28 Can We Use the CAPM as an Investment Strategy?: An Intuitive CAPM and Efficiency Test .... 751
Fernando Gómez-Bezares, Luis Ferruz, and Maria Vargas
29 Group Decision-Making Tools for Managerial Accounting and Finance Applications .... 791
Wikil Kwak, Yong Shi, Cheng-Few Lee, and Heeseok Lee
30 Statistics Methods Applied in Employee Stock Options .... 841
Li-jiun Chen and Cheng-der Fuh
31 Structural Change and Monitoring Tests .... 873
Cindy Shin-Huei Wang and Yi Meng Xie
32 Consequences for Option Pricing of a Long Memory in Volatility .... 903
Stephen J. Taylor
33 Seasonal Aspects of Australian Electricity Market .... 935
Vikash Ramiah, Stuart Thomas, Richard Heaney, and Heather Mitchell
34 Pricing Commercial Timberland Returns in the United States .... 957
Bin Mei and Michael L. Clutter
35 Optimal Orthogonal Portfolios with Conditioning Information .... 977
Wayne E. Ferson and Andrew F. Siegel
36 Multifactor, Multi-indicator Approach to Asset Pricing: Method and Empirical Evidence .... 1003
Cheng-Few Lee, K. C. John Wei, and Hong-Yi Chen
37 Binomial OPM, Black–Scholes OPM, and Their Relationship: Decision Tree and Microsoft Excel Approach .... 1025
John C. Lee
38 Dividend Payments and Share Repurchases of US Firms: An Econometric Approach .... 1061
Alok Bhargava
39 Term Structure Modeling and Forecasting Using the Nelson-Siegel Model .... 1093
Jian Hua
40 The Intertemporal Relation Between Expected Return and Risk on Currency .... 1105
Turan G. Bali and Kamil Yilmaz
41 Quantile Regression and Value at Risk .... 1143
Zhijie Xiao, Hongtao Guo, and Miranda S. Lam
42 Earnings Quality and Board Structure: Evidence from South East Asia .... 1169
Kin-Wai Lee
43 Rationality and Heterogeneity of Survey Forecasts of the Yen-Dollar Exchange Rate: A Reexamination .... 1195
Richard Cohen, Carl S. Bonham, and Shigeyuki Abe
44 Stochastic Volatility Structures and Intraday Asset Price Dynamics .... 1249
Gerard L. Gannon
45 Optimal Asset Allocation Under VaR Criterion: Taiwan Stock Market .... 1277
Ken Hung and Suresh Srivastava
46 Alternative Methods for Estimating Firm's Growth Rate .... 1293
Ivan E. Brick, Hong-Yi Chen, and Cheng-Few Lee
47 Econometric Measures of Liquidity .... 1311
Jieun Lee
48 A Quasi-Maximum Likelihood Estimation Strategy for Value-at-Risk Forecasting: Application to Equity Index Futures Markets .... 1325
Oscar Carchano, Young Shin (Aaron) Kim, Edward W. Sun, Svetlozar T. Rachev, and Frank J. Fabozzi
49 Computer Technology for Financial Service .... 1341
Fang-Pang Lin, Cheng-Few Lee, and Huimin Chung
50 Long-Run Stock Return and the Statistical Inference .... 1381
Yanzhi Wang
Volume 3

51 Value-at-Risk Estimation via a Semi-parametric Approach: Evidence from the Stock Markets .... 1399
Cheng-Few Lee and Jung-Bin Su
52 Modeling Multiple Asset Returns by a Time-Varying t Copula Model .... 1431
Long Kang
53 Internet Bubble Examination with Mean-Variance Ratio .... 1451
Zhidong D. Bai, Yongchang C. Hui, and Wing-Keung Wong
54 Quantile Regression in Risk Calibration .... 1467
Shih-Kang Chao, Wolfgang Karl Härdle, and Weining Wang
55 Strike Prices of Options for Overconfident Executives .... 1491
Oded Palmon and Itzhak Venezia
56 Density and Conditional Distribution-Based Specification Analysis .... 1509
Diep Duong and Norman R. Swanson
57 Assessing the Performance of Estimators Dealing with Measurement Errors .... 1563
Heitor Almeida, Murillo Campello, and Antonio F. Galvao
58 Realized Distributions of Dynamic Conditional Correlation and Volatility Thresholds in the Crude Oil, Gold, and Dollar/Pound Currency Markets .... 1619
Tung-Li Shih, Hai-Chin Yu, Der-Tzon Hsieh, and Chia-Ju Lee
59 Pre-IT Policy, Post-IT Policy, and the Real Sphere in Turkey .... 1647
Ahmed Hachicha and Cheng-Few Lee
60 Determination of Capital Structure: A LISREL Model Approach .... 1669
Cheng-Few Lee and Tzu Tai
61 Evidence on Earning Management by Integrated Oil and Gas Companies .... 1685
Raafat R. Roubi, Hemantha Herath, and John S. Jahera Jr.
62 A Comparative Study of Two Models SV with MCMC Algorithm .... 1697
Ahmed Hachicha, Fatma Hachicha, and Afif Masmoudi
63 Internal Control Material Weakness, Analysts Accuracy and Bias, and Brokerage Reputation .... 1719
Li Xu and Alex P. Tang
64 What Increases Banks Vulnerability to Financial Crisis: Short-Term Financing or Illiquid Assets? .... 1753
Gang Nathan Dong and Yuna Heo
65 Accurate Formulas for Evaluating Barrier Options with Dividends Payout and the Application in Credit Risk Valuation .... 1771
Tian-Shyr Dai and Chun-Yuan Chiu
66 Pension Funds: Financial Econometrics on the Herding Phenomenon in Spain and the United Kingdom .... 1801
Mercedes Alda García and Luis Ferruz
67 Estimating the Correlation of Asset Returns: A Quantile Dependence Perspective .... 1829
Nicholas Sim
68 Multi-criteria Decision Making for Evaluating Mutual Funds Investment Strategies .... 1857
Shin Yun Wang and Cheng-Few Lee
69 Econometric Analysis of Currency Carry Trade .... 1877
Yu-Jen Wang, Huimin Chung, and Bruce Mizrach
70 Evaluating the Effectiveness of Futures Hedging .... 1891
Donald Lien, Geul Lee, Li Yang, and Chunyang Zhou
71 Analytical Bounds for Treasury Bond Futures Prices .... 1909
Ren-Raw Chen and Shih-Kuo Yeh
72 Rating Dynamics of Fallen Angels and Their Speculative Grade-Rated Peers: Static vs. Dynamic Approach .... 1945
Huong Dang
73 Creation and Control of Bubbles: Managers Compensation Schemes, Risk Aversion, and Wealth and Short Sale Constraints .... 1983
James S. Ang, Dean Diavatopoulos, and Thomas V. Schwarz
74 Range Volatility: A Review of Models and Empirical Studies .... 2029
Ray Yeutien Chou, Hengchih Chou, and Nathan Liu
75 Business Models: Applications to Capital Budgeting, Equity Value, and Return Attribution .... 2051
Thomas S. Y. Ho and Sang Bin Lee
Volume 4

76 VAR Models: Estimation, Inferences, and Applications .... 2077
Yangru Wu and Xing Zhou
77 Model Selection for High-Dimensional Problems .... 2093
Jing-Zhi Huang, Zhan Shi, and Wei Zhong
78 Hedonic Regression Models .... 2119
Ben J. Sopranzetti
79 Optimal Payout Ratio Under Uncertainty and the Flexibility Hypothesis: Theory and Empirical Evidence .... 2135
Cheng-Few Lee, Manak C. Gupta, Hong-Yi Chen, and Alice C. Lee
80 Modeling Asset Returns with Skewness, Kurtosis, and Outliers .... 2177
Thomas C. Chiang and Jiandong Li
81 Does Revenue Momentum Drive or Ride Earnings or Price Momentum? .... 2217
Hong-Yi Chen, Sheng-Syan Chen, Chin-Wen Hsin, and Cheng-Few Lee
82 A VG-NGARCH Model for Impacts of Extreme Events on Stock Returns .... 2263
Lie-Jane Kao, Li-Shya Chen, and Cheng-Few Lee
83 Risk-Averse Portfolio Optimization via Stochastic Dominance Constraints .... 2281
Darinka Dentcheva and Andrzej Ruszczynski
84 Implementation Problems and Solutions in Stochastic Volatility Models of the Heston Type .... 2303
Jia-Hau Guo and Mao-Wei Hung
85 Stochastic Change-Point Models of Asset Returns and Their Volatilities .... 2317
Tze Leung Lai and Haipeng Xing
86 Unspanned Stochastic Volatilities and Interest Rate Derivatives Pricing .... 2337
Feng Zhao
87 Alternative Equity Valuation Models .... 2401
Hong-Yi Chen, Cheng-Few Lee, and Wei K. Shih
88 Time Series Models to Predict the Net Asset Value (NAV) of an Asset Allocation Mutual Fund VWELX .... 2445
Kenneth D. Lawrence, Gary Kleinman, and Sheila M. Lawrence
89 Discriminant Analysis and Factor Analysis: Theory and Method .... 2461
Lie-Jane Kao, Cheng-Few Lee, and Tzu Tai
90 Implied Volatility: Theory and Empirical Method .... 2477
Cheng-Few Lee and Tzu Tai
91 Measuring Credit Risk in a Factor Copula Model .... 2495
Jow-Ran Chang and An-Chi Chen
92 Instantaneous Volatility Estimation by Nonparametric Fourier Transform Methods .... 2519
Chuan-Hsiang Han
93 A Dynamic CAPM with Supply Effect Theory and Empirical Results .... 2535
Cheng-Few Lee, Chiung-Min Tsai, and Alice C. Lee
94 A Generalized Model for Optimum Futures Hedge Ratio .... 2561
Cheng-Few Lee, Jang-Yi Lee, Kehluh Wang, and Yuan-Chung Sheu
95 Instrumental Variables Approach to Correct for Endogeneity in Finance .... 2577
Chia-Jane Wang
96 Application of Poisson Mixtures in the Estimation of Probability of Informed Trading .... 2601
Emily Lin and Cheng-Few Lee
97 CEO Stock Options and Analysts' Forecast Accuracy and Bias .... 2621
Kiridaran Kanagaretnam, Gerald J. Lobo, and Robert Mathieu
98 Option Pricing and Hedging Performance Under Stochastic Volatility and Stochastic Interest Rates .... 2653
Charles Cao, Gurdip S. Bakshi, and Zhiwu Chen
99 The Le Châtelier Principle of the Capital Market Equilibrium .... 2701
Chin W. Yang, Ken Hung, and Matthew D. Brigida

Author Index .... 2709
Subject Index .... 2749
Reference Index .... 2769
Contributors
Shigeyuki Abe Faculty of Policy Studies, Doshisha University, Kyoto, Japan Raj Aggarwal University of Akron, Akron, OH, USA Mercedes Alda García Facultad de Economía y Empresa, Departamento de Contabilidad y Finanzas, Universidad de Zaragoza, Zaragoza, Spain Heitor Almeida University of Illinois at Urbana-Champaign, Champaign, IL, USA James S. Ang Department of Finance, College of Business, Florida State University, Tallahassee, FL, USA Augustine C. Arize Texas A & M University-Commerce, Commerce, TX, USA Zhidong D. Bai KLAS MOE & School of Mathematics and Statistics, Northeast Normal University, Changchun, China Department of Statistics and Applied Probability, National University of Singapore, Singapore, Singapore Gurdip S. Bakshi Department of Finance, College of Business, University of Maryland, College Park, MD, USA Turan G. Bali McDonough School of Business, Georgetown University, Washington, DC, USA Alok Bhargava School of Public Policy, University of Maryland, College Park, MD, USA Carl S. Bonham College of Business and Public Policy, University of Alaska Anchorage, Anchorage, AK, USA Ivan E. Brick Department of Finance and Economics, Rutgers, The State University of New Jersey, Newark/New Brunswick, NJ, USA Matthew D. Brigida Department of Finance, Clarion University of Pennsylvania, Clarion, PA, USA Murillo Campello Cornell University, Ithaca, NY, USA
Charles Cao Department of Finance, Smeal College of Business, Penn State University, University Park, PA, USA Oscar Carchano Department of Financial Economics, University of Valencia, Valencia, Spain Jow-Ran Chang National Tsing Hua University, Hsinchu City, Taiwan Shih-Kang Chao Ladislaus von Bortkiewicz Chair of Statistics, C.A.S.E. – Center for Applied Statistics and Economics, Humboldt-Universität zu Berlin, Berlin, Berlin, Germany An-Chi Chen KGI Securities Co. Ltd., Taipei, Taiwan Hong-Yi Chen Department of Finance, National Central University, Taoyuan, Taiwan Li-jiun Chen Department of Finance, Feng Chia University, Taichung City, Taiwan Li-Shya Chen Department of Statistics, National Cheng-Chi University, Taipei City, Taiwan Ren-Raw Chen Graduate School of Business Administration, Fordham University, New York, NY, USA Rong Chen Department of Finance, Xiamen University, Xiamen, China Sheng-Syan Chen National Central University, Zhongli City, Taiwan Zhiwu Chen School of Management, Yale University, New Haven, USA Anna Chernobai Department of Finance, M.J. Whitman School of Management, Syracuse University, Syracuse, NY, USA Thomas C. Chiang Department of Finance, Drexel University, Philadelphia, PA, USA Wan-Jiun Paul Chiou Department of Finance and Law College of Business Administration, Central Michigan University, Mount Pleasant, MI, USA Chun-Yuan Chiu National Chiao-Tung University, Taiwan, Republic of China Institute of Information Management, National Chiao Tung University, Taiwan, Republic of China Hengchih Chou Department of Shipping and Transportation Management, National Taiwan Ocean University, Keelung, Taiwan Ray Yeutien Chou Institute of Economics, Academia Sinica and National Chiao Tung University, Taipei, Taiwan Win Lin Chou Department of Economics and Finance, City University of Hong Kong, Hong Kong, China
Huimin Chung Graduate Institute of Finance, National Chiao Tung University, Hsinchu, Taiwan Michael L. Clutter Warnell School of Forestry and Natural Resources, University of Georgia, Athens, GA, USA Richard Cohen University of Hawaii Economic Research Organization and Economics, University of Hawaii at Manoa, Honolulu, HI, USA Tian-Shyr Dai National Chiao-Tung University, Taiwan, Republic of China Huong Dang University of Canterbury, Christchurch, New Zealand Darinka Dentcheva Department of Mathematical Sciences, Stevens Institute of Technology, Hoboken, NJ, USA Dean Diavatopoulos Finance, Villanova University, Villanova, PA, USA Gang Nathan Dong Columbia University, New York, NY, USA Diep Duong Department of Business and Economics, Utica College, Utica, NY, USA Frank J. Fabozzi EDHEC Business School, EDHEC Risk Institute, Nice, France Luis Ferruz Facultad de Economía y Empresa, Departamento de Contabilidad y Finanzas, Universidad de Zaragoza, Zaragoza, Spain Wayne E. Ferson University of Southern California, Los Angeles, CA, USA Cheng-der Fuh Graduate Institute of Statistics, National Central University, Zhongli City, Taiwan Antonio F. Galvao University of Iowa, Iowa City, IA, USA Gerard L. Gannon Deakin University, Burwood, VIC, Australia Fernando Gómez-Bezares Universidad de Deusto, Bilbao, Spain John W. Goodell College of Business Administration, University of Akron, Akron, OH, USA Hongtao Guo Bertolon School of Business, Salem State University, Salem, MA, USA Jia-Hau Guo Institution of Finance, College of Management, National Chiao Tung University, Hsinchu, Taiwan Manak C. Gupta Temple University, Philadelphia, PA, USA Ahmed Hachicha Department of Economic Development, Faculty of Economics and Management of Sfax, University of Sfax, Sfax, Tunisia Fatma Hachicha Department of Finance, Faculty of Economics and Management of Sfax, Sfax, Tunisia
Chuan-Hsiang Han Department of Quantitative Finance, National Tsing Hua University, Hsinchu, Taiwan, Republic of China Wolfgang Karl Härdle Ladislaus von Bortkiewicz Chair of Statistics, C.A.S.E. – Center for Applied Statistics and Economics, Humboldt-Universität zu Berlin, Berlin, Berlin, Germany Lee Kong Chian School of Business, Singapore Management University, Singapore, Singapore Richard Heaney Accounting and Finance, The University of Western Australia, Perth, Australia Yuna Heo Rutgers Business School, Rutgers, The State University of New Jersey, Newark-New Brunswick, NJ, USA Hemantha Herath Department of Accounting, Faculty of Business, Brock University, St. Catharines, ON, Canada Thomas S. Y. Ho Thomas Ho Company Ltd, New York, NY, USA Der-Tzon Hsieh Department of Economics, National Taiwan University, Taipei, Taiwan Chin-Wen Hsin Yuan Ze University, Zhongli City, Taiwan Chun-Pin Hsu Department of Accounting and Finance, York College, The City University of New York, Jamaica, NY, USA Yu Chuan Hsu National Chi Nan University, Nantou, Taiwan Jian Hua Baruch College (CUNY), New York, NY, USA Bwo-Nung Huang National Chung-Cheng University, Minxueng Township, Chiayi County, Taiwan Chin-Wen Huang Department of Finance, Western Connecticut State University, Danbury, CT, USA Jing-Zhi Huang Smeal College of Business, Penn State University, University Park, PA, USA Yongchang C. Hui School of Mathematics and Statistics, Xi'an Jiaotong University, Xi'an, China Ken Hung Division of International Banking & Finance Studies, Texas A&M International University, Laredo, TX, USA Mao-Wei Hung College of Management, National Taiwan University, Taipei, Taiwan John S. Jahera Jr. Department of Finance, College of Business, Auburn University, Auburn, AL, USA
Kiridaran Kanagaretnam Schulich School of Business, York University, Toronto, ON, Canada Long Kang Department of Finance, Antai College of Economics and Management, Shanghai Jiao Tong University, Shanghai, China The Options Clearing Corporation and Center for Applied Economics and Policy Research, Indiana University, Bloomington, IN, USA Lie-Jane Kao Department of Finance and Banking, Kainan University, Taoyuan, ROC, Taiwan Young Shin (Aaron) Kim College of Business, Stony Brook University, Stony Brook, NY, USA Gary Kleinman Montclair State University, Montclair, NJ, USA April Knill The Florida State University, Tallahassee, FL, USA Alexander Kogan Rutgers Business School, Rutgers, The State University of New Jersey, Newark–New Brunswick, NJ, USA Rutgers Center for Operations Research (RUTCOR), Piscataway, NJ, USA Joshua Krausz Yeshiva University, New York, NY, USA Wikil Kwak University of Nebraska at Omaha, Omaha, NE, USA Tze Leung Lai Stanford University, Stanford, CA, USA Miranda S. Lam Bertolon School of Business, Salem State University, Salem, MA, USA Kenneth D. Lawrence New Jersey Institute of Technology, Newark, NJ, USA Sheila M. Lawrence Rutgers, The State University of New Jersey, New Brunswick, NJ, USA Alice C. Lee State Street Corp., USA Cheng-Few Lee Department of Finance and Economics, Rutgers Business School, Rutgers, The State University of New Jersey, Piscataway, NJ, USA Graduate Institute of Finance, National Chiao Tung University, Hsinchu, Taiwan Chia-Ju Lee College of Business, Chung Yuan University, Chungli, Taiwan Chien-Hui Lee National Kaohsiung University of Applied Sciences, Kaohsiung, Taiwan Geul Lee University of New South Wales, Sydney, Australia Heeseok Lee Korea Advanced Institute of Science and Technology, Yuseong-gu, Daejeon, South Korea
Jang-Yi Lee Tunghai University, Taichung, Taiwan Jieun Lee Economic Research Institute, Bank of Korea, Seoul, South Korea John C. Lee Center for PBBEF Research, North Brunswick, NJ, USA Kin-Wai Lee Division of Accounting, Nanyang Business School, Nanyang Technological University, Singapore, Singapore Kuo-Hao Lee Department of Finance, College of Business, Bloomsburg University of Pennsylvania, Bloomsburg, PA, USA Sang Bin Lee Hanyang University, Seong-Dong-Ku, Seoul, Korea Miguel A. Lejeune George Washington University, Washington, DC, USA Jiandong Li Chinese Academy of Finance and Development (CAFD) and Central University of Finance and Economics (CUFE), Beijing, China Chan-Chien Lien Treasury Division, E.SUN Commercial Bank, Taipei, Taiwan Donald Lien The University of Texas at San Antonio, San Antonio, TX, USA Si Rou Lim National Chi Nan University, Nantou, Taiwan Emily Lin St. John's University, New Taipei City, Taiwan Fang-Pang Lin National Center for High Performance Computing, Hsinchu, Taiwan Hai Lin School of Economics and Finance, Victoria University of Wellington, Wellington, New Zealand Hsuan-Chu Lin Graduate Institute of Finance and Banking, National ChengKung University, Tainan, Taiwan Nathan Liu Department of Finance, Feng Chia University, Taichung, Taiwan Gerald J. Lobo C.T. Bauer College of Business, University of Houston, Houston, TX, USA Anastasia Maggina Business Consultant/Research Scientist, Avlona, Attikis, Greece Afif Masmoudi Department of Mathematics, Faculty of Sciences of Sfax, Sfax, Tunisia Philippe Masset École Hôtelière de Lausanne, Le Chalet-à-Gobet, Lausanne 25, Switzerland Robert Mathieu School of Business and Economics, Wilfrid Laurier University, Waterloo, ON, Canada Bin Mei Warnell School of Forestry and Natural Resources, University of Georgia, Athens, GA, USA
Thomas Meinl Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany Kristina L. Minnick Bentley University, Waltham, MA, USA Heather Mitchell RMIT University, Melbourne, VIC, Australia Bruce Mizrach Department of Economics, Rutgers, The State University of New Jersey, New Brunswick, NJ, USA Kiseok Nam Yeshiva University, New York, NY, USA Ali Nejadmalayeri Department of Finance, Oklahoma State University, Oklahoma, OK, USA Shou Zhong Ng Hong Kong Monetary Authority, Hong Kong, China Oded Palmon Department of Finance and Economics, Rutgers Business School – Newark and New Brunswick, Piscataway, NJ, USA Dilip K. Patro RAD, Office of the Comptroller of the Currency, Washington, DC, USA Robert L. Porter Department of Finance School of Business, Quinnipiac University, Hamden, CT, USA Svetlozar T. Rachev Department of Applied Mathematics and Statistics, College of Business, Stony Brook University, SUNY, Stony Brook, NY, USA FinAnalytica, Inc, New York, NY, USA Vikash Ramiah School of Economics, Finance and Marketing, RMIT University, Melbourne, Australia Raafat R. Roubi Department of Accounting, Faculty of Business, Brock University, St. Catharines, ON, Canada Andrzej Ruszczynski Department of Management Science and Information Systems, Rutgers, The State University of New Jersey, Piscataway, NJ, USA G.V. Satya Sekhar Department of Finance, GITAM Institute of Management, GITAM University, Visakhapatnam, Andhra Pradesh, India Robert A. Schwartz Zicklin School of Business, Baruch College, CUNY, New York, NY, USA Thomas V. Schwarz Stetson University, DeLand, FL, USA Yuan-Chung Sheu National Chiao-Tung University, Hsinchu, Taiwan Yong Shi University of Nebraska at Omaha, Omaha, NE, USA Chinese Academy of Sciences, Beijing, China Zhan Shi Smeal College of Business, Penn State University, University Park, PA, USA
Tung-Li Shih Department of Hospitality Management, Ming Dao University, Changhua Peetow, Taiwan Wei K. Shih Bates White Economic Consulting, Washington, DC, USA Andrew F. Siegel University of Washington, Seattle, WA, USA Nicholas Sim School of Economics, University of Adelaide, Adelaide, SA, Australia Ben J. Sopranzetti Rutgers, The State University of New Jersey, Newark, NJ, USA Suresh Srivastava University of Alaska Anchorage, Anchorage, AK, USA Jung-Bin Su Department of Finance, China University of Science and Technology, Nankang, Taipei, Taiwan Edward W. Sun KEDGE Business School and BEM Management School, Bordeaux, France Norman R. Swanson Department of Economics, Rutgers, The State University of New Jersey, New Brunswick, NJ, USA Tzu Tai Department of Finance and Economics, Rutgers, The State University of New Jersey, Piscataway, NJ, USA Alex P. Tang Morgan State University, Baltimore, MD, USA Stephen J. Taylor Lancaster University Management School, Lancaster, UK Stuart Thomas RMIT University, Melbourne, VIC, Australia Chiung-Min Tsai Central Bank of the Republic of China (Taiwan), Taipei, Taiwan, Republic of China Maria Vargas Universidad de Zaragoza, Zaragoza, Spain Itzhak Venezia School of Business, The Hebrew University, Jerusalem, Israel Bocconi University, Milan, Italy Chia-Jane Wang Manhattan College, Riverdale, NY, USA Cindy Shin-Huei Wang CORE, Université Catholique de Louvain and FUNDP, Académie Louvain, Louvain-la-Neuve, Belgium Department of Quantitative Finance, National Tsing Hua University, Hsinchu City, Taiwan Kehluh Wang Graduate Institute of Finance, National Chiao Tung University, Hsinchu, Taiwan Shin Yun Wang National Dong Hwa University, Shou-Feng, Hualien, Taiwan
Weining Wang Ladislaus von Bortkiewicz Chair of Statistics, C.A.S.E. – Center for Applied Statistics and Economics, Humboldt-Universität zu Berlin, Berlin, Berlin, Germany Yanzhi Wang Yuan Ze University, Taiwan Department of Finance, College of Management, National Taiwan University, Taipei, Taiwan Yu-Jen Wang Graduate Institute of Finance, National Chiao Tung University, Hsinchu, Taiwan Bruce W. Weber Lerner College of Business and Economics, University of Delaware, Newark, DE, USA K. C. John Wei Hong Kong University of Science and Technology, Kowloon, Hong Kong Wing-Keung Wong Department of Economics, Hong Kong Baptist University, Kowloon, Hong Kong Po-Cheng Wu Department of Finance and Banking, Kainan University, Taoyuan, ROC, Taiwan Yangru Wu Rutgers Business School – Newark and New Brunswick, Rutgers, The State University of New Jersey, New Brunswick, NJ, USA Zhijie Xiao Department of Economics, Boston College, Chestnut Hill, MA, USA Yi Meng Xie School of Business and Administration, Beijing Normal University, Beijing, China Department of Economics, University of Southern California, Los Angeles, CA, USA Haipeng Xing SUNY at Stony Brook, Stony Brook, NY, USA Li Xu Washington State University, Richland, WA, USA Chin W. Yang Clarion University of Pennsylvania, Clarion, PA, USA National Chung Cheng University, Chia-yi, Taiwan Li Yang University of New South Wales, Sydney, Australia Yating Yang Graduate Institute of Finance, National Chiao Tung University, Hsinchu, Taiwan Shih-Kuo Yeh Department of Finance, National Chung Hsing University, Taichung 402, Taiwan, Republic of China Hai-Chin Yu Department of International Business, Chung Yuan University, Chungli, Taiwan
Jing Rung Yu National Chi Nan University, Nantou, Taiwan Qianni Yuan Department of Finance, Xiamen University, Xiamen, China Kamil Yilmaz College of Administrative Sciences and Economics, Koç University, Istanbul, Turkey Shaojun Zhang School of Accounting and Finance, Faculty of Business, Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong Feng Zhao The University of Texas at Dallas, Richardson, TX, USA Wei Zhong Wang Yanan Institute for Studies in Economics and Department of Statistics, School of Economics, Xiamen University, Xiamen, China Chunyang Zhou Shanghai Jiaotong University, Shanghai, China Xing Zhou Rutgers Business School – Newark and New Brunswick, Rutgers, The State University of New Jersey, New Brunswick, NJ, USA
1
Introduction to Financial Econometrics and Statistics Cheng-Few Lee and John C. Lee
Contents
1.1 Introduction .... 2
1.2 Financial Econometrics .... 4
1.2.1 Single Equation Regression Methods .... 4
1.2.2 Simultaneous Equation Models .... 5
1.2.3 Panel Data Analysis .... 6
1.2.4 Alternative Methods to Deal with Measurement Error .... 6
1.2.5 Time Series Analysis .... 7
1.2.6 Spectral Analysis .... 7
1.3 Financial Statistics .... 8
1.3.1 Important Statistical Distributions .... 8
1.3.2 Principle Components and Factor Analysis .... 8
1.3.3 Nonparametric and Semi-parametric Analyses .... 8
1.3.4 Cluster Analysis .... 8
1.4 Applications of Financial Econometrics .... 9
1.5 Applications of Financial Statistics .... 10
1.6 Overall Discussion of Papers in this Handbook .... 10
1.7 Summary and Conclusion Remarks .... 17
Appendix 1: Brief Abstracts and Keywords for Chapters 2 to 99 .... 18
References .... 58
C.-F. Lee (*)
Department of Finance and Economics, Rutgers Business School, Rutgers, The State University of New Jersey, Piscataway, NJ, USA
Graduate Institute of Finance, National Chiao Tung University, Hsinchu, Taiwan
e-mail: [email protected]

J.C. Lee
Center for PBBEF Research, North Brunswick, NJ, USA
e-mail: [email protected]

C.-F. Lee, J. Lee (eds.), Handbook of Financial Econometrics and Statistics, DOI 10.1007/978-1-4614-7750-1_1, © Springer Science+Business Media New York 2015
Abstract
The main purposes of this introduction chapter are (i) to discuss important financial econometrics and statistics methods which have been used in finance and accounting research and (ii) to present an overview of the 98 chapters which follow in this handbook. Sections 1.2 and 1.3 briefly review and discuss financial econometrics and statistics. Sections 1.4 and 1.5 discuss applications of financial econometrics and statistics. Section 1.6 first classifies the 98 chapters into 14 groups in accordance with subjects and topics; it then classifies the keywords from each chapter into two groups: finance and accounting topics and methodology topics. Overall, this chapter gives readers of this handbook a guideline for applying the handbook to their research.
1.1 Introduction
Financial econometrics and statistics have become very important tools for empirical research in both finance and accounting. Econometric methods are important tools for asset pricing, corporate finance, options and futures, and financial accounting research. Important econometric methods used in this research include single equation multiple regression, simultaneous regression, panel data analysis, time-series analysis, spectral analysis, nonparametric analysis, semiparametric analysis, GMM analysis, and other methods. Portfolio theory and management research have used different statistical distributions, such as the normal distribution, stable distribution, and log-normal distribution. Options and futures research have used the binomial distribution, log-normal distribution, non-central chi-square distribution, Poisson distribution, and others. Auditing research has used sampling survey techniques to determine the sampling error and non-sampling error for auditing. Risk management research has used copula distributions and other distributions.

Section 1.1 is the introduction. Section 1.2 discusses financial econometrics. In this section, we have six subsections. These subsections include single equation regression methods, simultaneous equation models, panel data analysis, alternative methods to deal with measurement error, time-series analysis, and spectral analysis. In the next section, Sect. 1.3, we discuss financial statistics. Within financial statistics, we discuss four subtopics: statistical distributions; principal components and factor analysis; nonparametric, semiparametric, and GMM analyses; and cluster analysis. After exploring these topics, we discuss the applications of financial econometrics and financial statistics in Sects. 1.4 and 1.5. In Sect. 1.6, we provide an overview of all papers included in this handbook in accordance with the subjects and methodologies used in the papers. Finally, in Sect. 1.7, we summarize all the chapters in this handbook and add our concluding remarks.
As mentioned previously, Sect. 1.2 covers the topic of financial econometrics. We divide this section into six subsections. Within Sect. 1.2.1, we talk about single equation regression methods. We discuss some important issues related to single equation regression methods, including heteroskedasticity, specification error, measurement error, the skewness and kurtosis effect, nonlinear regression and the Box-Cox transformation, structural change, the Chow test and moving Chow test, threshold regression, the generalized fluctuation test, Probit and Logit regression for credit risk analysis, Poisson regression, and fuzzy regression. The next subsection, Sect. 1.2.2, analyzes simultaneous equation models. Within the realm of simultaneous equation models, we discuss the two-stage least squares estimation (2SLS) method, the seemingly unrelated regression (SUR) method, the three-stage least squares estimation (3SLS) method, the disequilibrium estimation method, and the generalized method of moments (GMM). In Sect. 1.2.3, we study panel data analysis, in which we go over the fixed effect model, the random effect model, and the clustering effect. The next subsection, Sect. 1.2.4, explores alternative methods to deal with measurement error. The alternative methods we look over in this section include the LISREL model, the multifactor and multi-indicator (MIMIC) model, the partial least squares method, and the grouping method. After we discuss alternative methods to deal with measurement error, we examine time-series analysis in Sect. 1.2.5. We include in our section about time-series analysis some important models, including ARIMA, ARCH, GARCH, fractional GARCH, and combined forecasting. In Sect. 1.2.6, we look into spectral analysis.

In the following section, Sect. 1.3, we discuss financial statistics, along with four subsequent subtopics. In our first subsection, Sect. 1.3.1, we discuss some important statistical distributions. This subsection will look into the different types of distributions that are used in statistics, including the binomial and Poisson distributions, normal distribution, log-normal distribution, chi-square and non-central chi-square distributions, Wishart distribution, symmetric and non-symmetric stable distributions, and other known distributions. Then, we talk about principal components and factor analysis in Sect. 1.3.2. In the following subsection, Sect. 1.3.3, we examine nonparametric, semi-parametric, and GMM analyses. The last subsection, Sect. 1.3.4, explores cluster analysis.

After discussing financial econometrics, we explore the applications of this topic in different types of financial and accounting field research. In Sect. 1.4, we describe these applications, including asset-pricing research, corporate finance research, financial institution research, investment and portfolio research, option pricing research, futures and hedging research, mutual fund research, hedge fund research, microstructure, earnings announcements, real option research, financial accounting, managerial accounting, auditing, term structure modeling, credit risk modeling, and trading cost/transaction cost modeling. We also discuss applications of financial statistics in different types of financial and accounting field research. Section 1.5 will include these applications in asset-pricing research, investment and portfolio research, credit risk management research, market risk research, operational risk research, option pricing research, mutual fund research, hedge fund research, value-at-risk research, and auditing.
1.2 Financial Econometrics
1.2.1 Single Equation Regression Methods
There are important issues related to single equation regression estimation methods. They are (a) Heteroskedasticity, (b) Specification error, (c) Measurement error, (d) Skewness and kurtosis effect, (e) Nonlinear regression and Box-Cox transformation, (f) Structural change, (g) Chow test and moving Chow test, (h) Threshold regression, (i) Generalized fluctuation test, (j) Probit and Logit regression for credit risk analysis, (k) Poisson regression, and (l) Fuzzy regression. These issues are briefly discussed as follows:
(a) Heteroskedasticity – White (1980) and Newey and West (1987) are two important papers discussing how heteroskedasticity tests can be performed. The latter paper discusses heteroskedasticity when there are serial correlations. (A robust-regression sketch follows this list.)
(b) Specification error – Specification error occurs when a variable is missing from a regression analysis. To test for the existence of specification error, we can refer to the papers by Thursby (1985), Fok et al. (1996), Cheng and Lee (1986), and Maddala et al. (1996).
(c) Measurement error – The measurement error problem arises when an independent variable in a regression analysis is measured imprecisely. Papers by Lee and Jen (1978), Kim (1995, 1997, 2010), Miller and Modigliani (1966), and Lee and Chen (2012) have explored how measurement error methods can be applied to finance research. Lee and Chen have discussed alternative errors-in-variables estimation methods and their application in finance research.
(d) Skewness and kurtosis effect – Skewness and kurtosis are two important measures for analyzing stock return variation. Papers by Lee (1976a), Sears and Wei (1988), and Lee and Wu (1985) discuss the skewness and kurtosis issue in asset pricing.
(e) Nonlinear regression and Box-Cox transformation – Nonlinear regression and the Box-Cox transformation are important tools for finance, accounting, and urban economics research. Papers by Lee (1976, 1977), Lee et al. (1990), Frecka and Lee (1983), and Liu (2006) have discussed how nonlinear regression and Box-Cox transformation techniques can be used to improve the specification of finance and accounting research. Kau and Lee (1976) and Kau et al. (1986) have explored how the Box-Cox transformation can be used in the empirical study of urban structure.
(f) Structural change – Papers by Yang (1989) and Lee et al. (2011b, 2013) have discussed how the structural change model can be used to improve the empirical study of dividend policy and the issuance of new equity.
(g) Chow test and moving Chow test – Chow (1960) has proposed a dummy variable approach to examine the existence of structural change in regression analysis. Zeileis et al. (2002) have developed software programs to perform the Chow test and other structural change models, which have been frequently used in finance and economic research.
(h) Threshold regression – Hansen (1996, 1997, 1999, 2000a, 2000b) has explored the issue of threshold regressions and their applications in detecting structural change in regression.
(i) Generalized fluctuation test – Kuan and Hornik (1995) have discussed how the generalized fluctuation test can be used to test for structural change in regression.
(j) Probit and Logit regression for credit risk analysis – Probit and Logit regressions are frequently used in credit risk analysis. Ohlson (1980) used accounting ratios and macroeconomic data to do credit risk analysis. Shumway (2001) has used accounting ratios and stock rates of return for credit risk analysis in terms of Probit and Logit regression techniques. Most recently, Hwang et al. (2008, 2009) and Cheng et al. (2010) have discussed Probit and Logit regression for credit risk analysis by introducing nonparametric and semi-parametric techniques into this kind of regression analysis.
(k) Poisson regression – Lee and Lee (2012) have discussed how Poisson regression can be used to investigate the relationship between multiple directorships, corporate ownership, and firm performance.
(l) Fuzzy regression – Shapiro (2005), Angrist and Lavy (1999), and Van Der Klaauw (2002) have discussed how fuzzy regression can be performed. This method has the potential to be used in finance and accounting research.
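As a brief, hedged illustration of item (a), the following Python sketch estimates a simple market-model regression with White-type and Newey-West-type standard errors using the statsmodels library; the data, variable names, and lag length are hypothetical, and this is only one of several ways such robust inference can be carried out.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    market_excess = rng.normal(0.0, 0.04, size=250)   # hypothetical market excess returns
    stock_excess = 0.002 + 1.1 * market_excess + rng.normal(0.0, 0.02, size=250)

    X = sm.add_constant(market_excess)                           # adds the intercept column
    ols_fit = sm.OLS(stock_excess, X).fit()                      # classical OLS standard errors
    white_fit = sm.OLS(stock_excess, X).fit(cov_type="HC0")      # White-type robust errors
    nw_fit = sm.OLS(stock_excess, X).fit(cov_type="HAC",
                                         cov_kwds={"maxlags": 4})  # Newey-West-type HAC errors

    print(ols_fit.bse, white_fit.bse, nw_fit.bse)                # compare the standard errors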
1.2.2 Simultaneous Equation Models
In this section, we will discuss alternative methods to deal with simultaneous equation models. They are (a) the two-stage least squares (2SLS) estimation method, (b) the seemingly unrelated regression (SUR) method, (c) the three-stage least squares (3SLS) estimation method, (d) the disequilibrium estimation method, and (e) the generalized method of moments.
(a) Two-stage least squares (2SLS) estimation method – Lee (1976a) has applied this method to the market model; Miller and Modigliani (1966) have used 2SLS to study the cost of capital for the utility industry; Chen et al. (2007) have discussed the two-stage least squares (2SLS) method for investigating corporate governance. (A brief 2SLS sketch follows this list.)
(b) Seemingly unrelated regression (SUR) method – Seemingly unrelated regression has frequently been used in economic and financial research. Lee and Zumwalt (1981) have discussed how the seemingly unrelated regression method can be applied in asset-pricing determination.
(c) Three-stage least squares (3SLS) estimation method – Chen et al. (2007) have discussed how the three-stage least squares (3SLS) estimation method can be applied in corporate governance research.
(d) Disequilibrium estimation method – Mayer (1989), Martin (1990), Quandt (1988), Amemiya (1974), and Fair and Jaffee (1972) have discussed how alternative disequilibrium estimation methods can be performed. Tsai (2005), Sealey (1979), and Lee et al. (2011a) have discussed how the disequilibrium estimation method can be applied in asset-pricing tests and banking management analysis.
(e) Generalized method of moments – Hansen (1982) and Hamilton (1994, ▶ Chap. 14) have discussed how the GMM method can be performed. Chen et al. (2007) have used the two-stage least squares (2SLS), three-stage least squares (3SLS), and GMM methods to investigate corporate governance.
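To make the two-stage idea of item (a) concrete, here is a minimal Python sketch that carries out 2SLS "by hand" with OLS in both stages on simulated data; the instrument and variable names are hypothetical, and in practice the second-stage standard errors reported by OLS would need the usual 2SLS correction.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 500
    z = rng.normal(size=n)                     # hypothetical instrument
    u = rng.normal(size=n)                     # common shock creating endogeneity
    x = 0.8 * z + u + rng.normal(size=n)       # endogenous regressor
    y = 1.0 + 2.0 * x + u + rng.normal(size=n)

    # Stage 1: project the endogenous regressor on the instrument
    stage1 = sm.OLS(x, sm.add_constant(z)).fit()
    x_hat = stage1.fittedvalues

    # Stage 2: regress y on the fitted values from stage 1
    stage2 = sm.OLS(y, sm.add_constant(x_hat)).fit()
    print("2SLS slope estimate:", stage2.params[1])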
1.2.3 Panel Data Analysis
In this section, we will discuss important issues related to panel data analysis. They are (a) the fixed effect model, (b) the random effect model, and (c) the clustering effect model. Three well-known textbooks by Wooldridge (2010), Baltagi (2008), and Hsiao (2003) have discussed the applications of panel data in finance, economics, and accounting research. We now discuss the fixed effect, random effect, and clustering effect in panel data analysis.
(a) Fixed effect model – Chang and Lee (1977) and Lee et al. (2011a) have discussed the role of the fixed effect model in panel data analysis of dividend research. (A small fixed-effect sketch follows this list.)
(b) Random effect model – Arellano and Bover (1995) have explored the random effect model and its role in panel data analysis. Chang and Lee (1977) have applied both the fixed effect and random effect models to investigate the relationship between price per share, dividend per share, and retained earnings per share.
(c) Clustering effect model – Papers by Thompson (2011), Cameron et al. (2006), and Petersen (2009) review the clustering effect model and its impact on panel data analysis.
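A minimal sketch of the fixed effect model in item (a), assuming a simple simulated firm-year panel: the within (entity-demeaning) transformation removes firm-specific intercepts before OLS is applied. The column names and panel dimensions are hypothetical.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    firms, years = 50, 10
    firm_effect = np.repeat(rng.normal(size=firms), years)   # unobserved firm heterogeneity
    x = rng.normal(size=firms * years)
    y = 1.5 * x + firm_effect + rng.normal(size=firms * years)

    df = pd.DataFrame({"firm": np.repeat(np.arange(firms), years), "x": x, "y": y})

    # Within transformation: subtract each firm's mean from x and y
    demeaned = df[["x", "y"]] - df.groupby("firm")[["x", "y"]].transform("mean")

    fe_fit = sm.OLS(demeaned["y"], demeaned["x"]).fit()   # no constant needed after demeaning
    print("Fixed-effect slope estimate:", fe_fit.params["x"])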
1.2.4 Alternative Methods to Deal with Measurement Error
In this section, we will discuss alternative methods of dealing with measurement error problems. They are (a) the LISREL model, (b) the multifactor and multi-indicator (MIMIC) model, (c) the partial least squares method, and (d) the grouping method.
(a) LISREL model – Papers by Titman and Wessels (1988), Chang (1999), Chang et al. (2009), and Yang et al. (2010) have described the LISREL model and how it can resolve the measurement error problems of finance research.
(b) Multifactor and multi-indicator (MIMIC) model – Chang et al. (2009) and Wei (1984) have applied the multifactor and multi-indicator (MIMIC) model in capital structure and asset-pricing research.
(c) Partial least squares method – Papers by Core (2000), Ittner et al. (1997), and Lambert and Larcker (1987) have applied the partial least squares method to deal with measurement error problems in accounting research.
(d) Grouping method – Papers by Lee (1973), Chen (2011), Lee and Chen (2013), Lee (1977b), Black et al. (1972), Blume and Friend (1973), and Fama and MacBeth (1973) analyze the grouping method and how it deals with the measurement error problem in capital asset-pricing tests. (A stylized grouping sketch appears below.)
There are other errors-in-variables methods, such as (i) the classical method, (ii) the instrumental variable method, (iii) the mathematical programming method, (iv) the maximum likelihood method, (v) the GMM method, and (vi) the Bayesian statistics method. Lee and Chen (2012) have discussed all of the above-mentioned methods in detail.
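A stylized Python sketch of the grouping idea in item (d), under the assumption that observations can be ranked by a variable whose noise is independent of the measurement error (as when portfolios are formed on prior-period estimates): averaging within groups reduces the attenuation bias of naive OLS. All names and values are hypothetical.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(8)
    n, true_beta = 2000, 1.0
    x_true = rng.normal(size=n)                # unobserved true regressor
    x_obs = x_true + rng.normal(size=n)        # observed with measurement error
    z = x_true + rng.normal(size=n)            # ranking variable; its noise is independent of x_obs's error
    y = true_beta * x_true + rng.normal(scale=0.5, size=n)

    # Naive OLS on the mismeasured regressor is attenuated toward zero
    naive = sm.OLS(y, sm.add_constant(x_obs)).fit().params[1]

    # Grouping method: rank on z, average within 20 groups, then regress the group means
    df = pd.DataFrame({"x": x_obs, "y": y, "z": z}).sort_values("z")
    df["group"] = np.repeat(np.arange(20), n // 20)
    means = df.groupby("group")[["x", "y"]].mean()
    grouped = sm.OLS(means["y"], sm.add_constant(means["x"])).fit().params["x"]

    print("Naive OLS slope:", round(naive, 3), " Grouped slope:", round(grouped, 3))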
1.2.5 Time Series Analysis
In this section, we will discuss important models in time-series analysis. They are (a) ARIMA, (b) ARCH, (c) GARCH, (d) fractional GARCH, and (e) combined forecasting. Two well-known textbooks by Anderson (1994) and Hamilton (1994) have discussed the issues related to time-series analysis. Myers (1991) discusses ARIMA's role in time-series analysis; Lien and Shrestha (2007) discuss ARCH and its impact on time-series analysis; Lien (2010) discusses GARCH and its role in time-series analysis; and Leon and Vaello-Sebastia (2009) further investigate GARCH in a model called fractional GARCH. Granger and Newbold (1973, 1974) and Granger and Ramanathan (1984) have theoretically developed combined forecasting methods. Lee et al. (1986) have applied combined forecasting methods to forecast market beta and accounting beta. Lee and Cummins (1998) have shown how to use combined forecasting methods to perform cost of capital estimation.
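As a hedged illustration of items (a)–(c), the sketch below fits an ARIMA model with statsmodels and a GARCH(1,1) model with the arch package (assumed to be installed) to a simulated return series; the model orders and the data are purely illustrative.

    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA
    from arch import arch_model   # assumes the 'arch' package is available

    rng = np.random.default_rng(3)
    returns = rng.normal(0.0, 1.0, size=1000)   # placeholder for daily percentage returns

    # ARIMA(1, 0, 1) for the conditional mean of the series
    arima_fit = ARIMA(returns, order=(1, 0, 1)).fit()
    print(arima_fit.params)

    # GARCH(1, 1) for the conditional variance of the series
    garch_fit = arch_model(returns, mean="Constant", vol="Garch", p=1, q=1).fit(disp="off")
    print(garch_fit.params)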
1.2.6 Spectral Analysis
Anderson (1994), Chacko and Viceira (2003), and Heston (1993) have discussed how spectral analysis can be performed. Heston (1993) and Bakshi et al. (1997) have applied spectral analysis in the evaluation of option pricing.
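A minimal sketch of one spectral tool, the periodogram, applied to a simulated return series with scipy; the series, the cycle amplitude, and the sampling frequency are hypothetical.

    import numpy as np
    from scipy.signal import periodogram

    rng = np.random.default_rng(4)
    t = np.arange(1000)
    # Hypothetical daily return series with a weak 20-day cycle plus noise
    returns = 0.005 * np.sin(2 * np.pi * t / 20) + rng.normal(0.0, 0.01, size=1000)

    freqs, power = periodogram(returns, fs=1.0)   # fs=1 -> frequency in cycles per day
    dominant = freqs[np.argmax(power[1:]) + 1]    # skip the zero frequency
    print("Dominant cycle length (days):", 1.0 / dominant)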
1.3 Financial Statistics

1.3.1 Important Statistical Distributions
In this section, we will discuss different statistical distributions. They are (a) the binomial distribution, (b) the Poisson distribution, (c) the normal distribution, (d) the log-normal distribution, (e) the Chi-square distribution, and (f) the non-central Chi-square distribution. Two well-known papers by Cox et al. (1979) and Rendleman and Bartter (1979) have used binomial, normal, and lognormal distributions to develop option pricing models. The following notes some well-known studies of these different statistical distributions. Black and Scholes (1973) have used the lognormal distribution to derive their option pricing model, and Aitchison and Brown (1973) is a well-known book investigating the lognormal distribution. Schroder (1989) has derived an option pricing model in terms of the non-central Chi-square distribution. Fama (1971) has used stable distributions to investigate the distribution of stock rates of return. Chen and Lee (1981) have derived the statistical distribution of the Sharpe performance measure and found that the Sharpe performance measure can be described by a Wishart distribution.
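To illustrate how the lognormal and binomial distributions enter option pricing, the sketch below prices a European call with the Black-Scholes formula and with a Cox-Ross-Rubinstein-style binomial tree; the parameter values are hypothetical, and the two prices should be close for a large number of steps.

    import numpy as np
    from scipy.stats import norm, binom

    S, K, r, sigma, T = 100.0, 105.0, 0.03, 0.2, 1.0   # hypothetical option inputs

    # Black-Scholes call price (lognormal terminal stock price)
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    bs_call = S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

    # Cox-Ross-Rubinstein binomial tree: risk-neutral expectation of the payoff
    n = 500
    dt = T / n
    u = np.exp(sigma * np.sqrt(dt))
    d = 1.0 / u
    p = (np.exp(r * dt) - d) / (u - d)        # risk-neutral up-move probability
    j = np.arange(n + 1)                      # number of up moves
    terminal = S * u**j * d**(n - j)          # terminal stock prices
    payoff = np.maximum(terminal - K, 0.0)
    crr_call = np.exp(-r * T) * np.sum(binom.pmf(j, n, p) * payoff)

    print(round(bs_call, 4), round(crr_call, 4))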
1.3.2 Principal Components and Factor Analysis
Anderson’s (2003) book entitled “An Introduction to Multivariate Statistical Analysis” discusses principal components and factor analysis in detail. Chen and Shimerda (1981), Pinches and Mingo (1973), Lee et al. (1989), and Kao and Lee (2012) discuss how principal components and factor analyses can be used to do finance and accounting research.
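A minimal sketch of principal component extraction from a simulated panel of asset returns using scikit-learn; the data, the number of underlying factors, and the number of components retained are all assumptions for illustration.

    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(5)
    # Hypothetical panel: 500 days of returns on 20 assets driven by 2 common factors
    factors = rng.normal(size=(500, 2))
    loadings = rng.normal(size=(2, 20))
    returns = factors @ loadings + rng.normal(0.0, 0.5, size=(500, 20))

    pca = PCA(n_components=5)
    scores = pca.fit_transform(returns)        # principal component scores (500 x 5)
    print(pca.explained_variance_ratio_)       # variance share captured by each component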
1.3.3 Nonparametric and Semi-parametric Analyses
Ait-Sahalia and Lo (2000) and Hutchison et al. (1994) have discussed how nonparametric methods can be used in risk management and derivative securities evaluation. Hwang et al. (2010) and Hwang et al. (2007) have used semi-parametric methods to conduct credit risk analysis.
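A small illustration of one nonparametric tool of the kind used in this literature: a Gaussian kernel density estimate of a simulated return distribution with scipy; the bandwidth rule (Scott's rule by default) and the data are illustrative.

    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(6)
    returns = 0.01 * rng.standard_t(df=5, size=2000)   # fat-tailed placeholder returns

    kde = gaussian_kde(returns)                 # Gaussian kernel, Scott's rule bandwidth
    grid = np.linspace(returns.min(), returns.max(), 200)
    density = kde(grid)                         # estimated density over the grid
    print("Estimated density at a zero return:", kde(0.0)[0])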
1.3.4 Cluster Analysis
Detailed procedures for using cluster analysis to find groups in data can be found in the textbook by Kaufman and Rousseeuw (1990). Brown and Goetzmann (1997) have applied cluster analysis in mutual fund research.
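To show how cluster analysis might group mutual funds by their return characteristics, the sketch below applies k-means to hypothetical fund-level features; k-means is only one of the partitioning methods covered by Kaufman and Rousseeuw (1990), and the feature choices are assumptions.

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(7)
    # Hypothetical features for 100 funds: mean return, volatility, and market beta
    features = np.column_stack([
        rng.normal(0.08, 0.03, 100),
        rng.normal(0.15, 0.05, 100),
        rng.normal(1.00, 0.30, 100),
    ])

    kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(features)
    print("Funds per cluster:", np.bincount(kmeans.labels_))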
1.4 Applications of Financial Econometrics
In this section, we will briefly discuss how different methodologies of financial econometrics can be applied to the topics of finance and accounting.
(a) Asset-Pricing Research – Methodologies used in asset-pricing research include (1) Heteroskedasticity, (2) Specification error, (3) Measurement error, (4) Skewness and kurtosis effect, (5) Nonlinear regression and Box-Cox transformation, (6) Structural change, (7) Two-stage least squares estimation (2SLS) method, (8) Seemingly unrelated regression (SUR) method, (9) Three-stage least squares estimation (3SLS) method, (10) Disequilibrium estimation method, (11) Fixed effect model, (12) Random effect model, (13) Clustering effect model of panel data analysis, (14) Grouping method, (15) ARIMA, (16) ARCH, (17) GARCH, (18) Fractional GARCH, and (19) Wishart distribution.
(b) Corporate Finance Research – Methodologies used in corporate finance research include (1) Heteroskedasticity, (2) Specification error, (3) Measurement error, (4) Skewness and kurtosis effect, (5) Nonlinear regression and Box-Cox transformation, (6) Structural change, (7) Probit and Logit regression for credit risk analysis, (8) Poisson regression, (9) Fuzzy regression, (10) Two-stage least squares estimation (2SLS) method, (11) Seemingly unrelated regression (SUR) method, (12) Three-stage least squares estimation (3SLS) method, (13) Fixed effect model, (14) Random effect model, (15) Clustering effect model of panel data analysis, and (16) GMM analysis.
(c) Financial Institution Research – Methodologies used in financial institution research include (1) Heteroskedasticity, (2) Specification error, (3) Measurement error, (4) Skewness and kurtosis effect, (5) Nonlinear regression and Box-Cox transformation, (6) Structural change, (7) Probit and Logit regression for credit risk analysis, (8) Poisson regression, (9) Fuzzy regression, (10) Two-stage least squares estimation (2SLS) method, (11) Seemingly unrelated regression (SUR) method, (12) Three-stage least squares estimation (3SLS) method, (13) Disequilibrium estimation method, (14) Fixed effect model, (15) Random effect model, (16) Clustering effect model of panel data analysis, and (17) Semi-parametric analysis.
(d) Investment and Portfolio Research – Methodologies used in investment and portfolio research include (1) Heteroskedasticity, (2) Specification error, (3) Measurement error, (4) Skewness and kurtosis effect, (5) Nonlinear regression and Box-Cox transformation, (6) Structural change, (7) Probit and Logit regression for credit risk analysis, (8) Poisson regression, and (9) Fuzzy regression.
(e) Option Pricing Research – Methodologies used in option pricing research include (1) ARIMA, (2) ARCH, (3) GARCH, (4) Fractional GARCH, (5) Spectral analysis, (6) Binomial distribution, (7) Poisson distribution, (8) normal distribution,
(9) log-normal distribution, (10) Chi-square distribution, (11) non-central Chi-square distribution, and (12) Nonparametric analysis.
(f) Futures and Hedging Research – Methodologies used in futures and hedging research include (1) Heteroskedasticity, (2) Specification error, (3) Measurement error, (4) Skewness and kurtosis effect, (5) Nonlinear regression and Box-Cox transformation, (6) Structural change, (7) Probit and Logit regression for credit risk analysis, (8) Poisson regression, and (9) Fuzzy regression.
(g) Mutual Fund Research – Methodologies used in mutual fund research include (1) Heteroskedasticity, (2) Specification error, (3) Measurement error, (4) Skewness and kurtosis effect, (5) Nonlinear regression and Box-Cox transformation, (6) Structural change, (7) Probit and Logit regression for credit risk analysis, (8) Poisson regression, (9) Fuzzy regression, and (10) Cluster analysis.
(h) Credit Risk Modeling – Methodologies used in credit risk modeling include (1) Heteroskedasticity, (2) Specification error, (3) Measurement error, (4) Skewness and kurtosis effect, (5) Nonlinear regression and Box-Cox transformation, (6) Structural change, (7) Two-stage least squares estimation (2SLS) method, (8) Seemingly unrelated regression (SUR) method, (9) Three-stage least squares estimation (3SLS) method, (10) Disequilibrium estimation method, (11) Fixed effect model, (12) Random effect model, (13) Clustering effect model of panel data analysis, (14) ARIMA, (15) ARCH, (16) GARCH, and (17) Semi-parametric analysis.
(i) Other Applications – Financial econometrics also provides important tools for research in (1) Trading cost/transaction cost modeling, (2) Hedge fund research, (3) Microstructure, (4) Earnings announcements, (5) Real option research, (6) Financial accounting, (7) Managerial accounting, (8) Auditing, and (9) Term structure modeling.
1.5 Applications of Financial Statistics
Financial statistics is an important tool for research in (1) Asset-pricing research, (2) Investment and portfolio research, (3) Credit risk management research, (4) Market risk research, (5) Operational risk research, (6) Option pricing research, (7) Mutual fund research, (8) Hedge fund research, (9) Value-at-risk research, and (10) Auditing research.
1.6 Overall Discussion of Papers in this Handbook
In this section, we classify the 98 papers (Chaps. 2–99) presented in Appendix 1 in accordance with (A) chapter titles and (B) keywords.
(A) Classification in terms of chapter titles
Based on chapter titles, we classify 98 chapters into the following 14 topics: (i) Financial Accounting (▶ Chaps. 2, 9, 10, 61, 97) (ii) Mutual Funds (▶ Chaps. 3, 24, 25, 68, 88) (iii) Microstructure (▶ Chaps. 4, 44, 47, 96) (iv) Corporate Finance (▶ Chaps. 5, 21, 30, 38, 42, 46, 60, 63, 75, 79, 95) (v) Asset Pricing (▶ Chaps. 6, 15, 22, 28, 34, 36, 39, 45, 50, 81, 85, 87, 93, 99) (vi) Options (▶ Chaps. 7, 32, 37, 55, 65, 84, 86, 90, 98) (vii) Portfolio Analysis (▶ Chaps. 8, 26, 35, 53, 67, 73, 80, 81, 83) (viii) Risk Management (▶ Chaps. 11, 13, 16, 17, 23, 27, 41, 51, 54, 72, 91, 92) (ix) International Finance (▶ Chaps. 12, 40, 43, 59, 69) (x) Event Study (▶ Chap. 14) (xi) Methodology (▶ Chaps. 18, 19, 20, 29, 31, 33, 46, 49, 52, 56, 57, 58, 62, 74, 76, 77, 78, 82, 89) (xii) Banking Management (▶ Chap. 64) (xiii) Pension Funds (▶ Chap. 66) (xiv) Futures and Index Futures (▶ Chaps. 48, 70, 71, 94) (B) Keywords classification Based on the keywords in Appendix 1, we classify these keywords into two groups: (i) finance and accounting topics and (ii) methodology topics. The number behind each keyword is the chapter it is associated with. (i) Finance and Accounting Topics Abnormal earnings (87), Accounting earnings (87), Activity-based costing system (27), Agency costs (5, 97), Aggregation bias (43), Analyst experience (2), Analyst forecast accuracy (63), Analysts’ forecast accuracy (97), Analysts’ forecast bias (63, 97), Arbitrage pricing theory (APT) (6, 7, 36, 81), Asset (93), Asset allocation (45), Asset allocation fund (88), Asset pricing (34, 81), Asset return predictability (76), Asset returns (52), Asset-pricing returns (96), Asymmetric information (5), Asymmetric mean reversion (15), Asymmetric stochastic volatility (62), Asymmetric volatility response (15), Balanced scorecard (29), Bank capital (13), Bank holding companies (13), Bank risks (13), Bank stock return (6), Banks (12), Barrier option (65), Basket credit derivatives (23), Behavioral finance (55, 66, 73), Bias (57), Bias reduction (92), Bid-ask spreads (96, 99), Binomial option pricing model (37), Black-Scholes model (7, 90), Black-Sholes option pricing model (37), Board structure (42), Bond ratings (89), Bottom-up capital budgeting (75), Bounded complexity (85), Bounds (71), Brier score (72), Brokerage reputation (63), Business cycle (67), Business models (75), Business performance evaluation (29), Business value of firm, Buy-and-hold return (50), Calendar-time (50), Calendar-time portfolio approach (14), Call option (37), Capital asset-pricing model (CAPM) (6, 25, 28, 36, 81, 93), Capital budgeting (75, 29), Capital markets (25), Capital structure (5, 60), Carry trade (69), Case-Shiller home price indices (19), CEO compensation (97), CEO stock options (97), Change of measure (30), Cheapestto-deliver bond (71), Chicago board of trade, (71), Cholesky decomposition (23), Closed-end Funds (25), Comparative financial systems (12), Composite trapezoid rule (51), Comprehensive earnings (87), Compromised solution (89),
Compustat database (38), Compound sum method (46), Conditioning information (35), Constant/dynamic hedging (44), Contagious effect (11), Corner portfolio (45), Corporate earnings (9), Corporate finance (5), Corporate merger (21), Corporate ownership structure (42), Corporate policies (38), Corporation regulation (9), Correlated defaults (11), Cost of capital (93), Country funds (25), Credit rating (21), Credit rating (27), Credit risk (27, 65, 91), Credit risk index (27), Credit risk rating (16), Credit VaR (91), Creditworthiness (16), Cumulative abnormal return (50), Cumulative probability distribution (45), Currency market (58), Cyberinfrastructure (49), Daily realized volatility (40), Daily stock price (82), Debt maturity (64), Delivery options (71), Delta (45), Demand (33), Demonstration effect (17), Deterioration of bank asset quality (64), Determinants of capital structure (60), Discount cash flow model (46), Discretionary accruals (61), Discriminant power (89), Disposition effect (22), Dividends (38, 65, 79), Domestic investment companies (17), Double exponential smoothing (88), Duality (83), Dynamics (67), Earning management (61), Earnings change (10), Earnings level (10), Earnings quality (42), Earnings surprises (81), Economies of scale (21), Ederington hedging effectiveness (70), Effort allocation (97), Effort aversion (55), EGB2 distribution (80), Electricity (33), Empirical Bayes (85), Empirical corporate finance (95), Employee stock option (30), Endogeneity (38, 95), Endogeneity of variables (13), Endogenous supply (93), Equity valuation models (87), Equity value (75), European option (7), European put (5), Evaluation (34), Evaluation of funds (3), Exactly identified (93), Exceedance correlation (52), Exchange rate (43, 59), Executive compensation schemes (55), Exercise boundary (30), Expected market risk premium (15), Expected stock return (80), Expected utility (83), Experimental control (4), Experimental economics (4), Extreme events (67), Fallen angel (72), Finance panel data (24), Financial analysts (2), Financial crisis (64), Financial institutions (12), Financial leverage (75), Financial markets (12), Financial modeling (3), Financial planning and forecasting (87), Financial ratios (21), Financial returns (62), Financial service (49), Financial simulation (49), Financial statement analysis (87), Financial strength (16), Firm and time effects (24), Firm Size (9), Firm’s performance score (21), Fixed operating cost (75), Flexibility hypothesis (79), Foreign exchange market (40), Foreign investment (17), Fourier inversion (84), Fourier transform (19), Free cash flow hypothesis (79), Frequentist segmentation (85), Fund management (53), Fundamental analysis (87), Fundamental asset values (73), Fundamental transform (84), Futures hedging (70), Gamma (45), Generalized (35), Generalized autoregressive conditional heteroskedasticity (51), Global investments (3), Gold (58), Green function (84), Grid and cloud computing (49), Gross return on investment (GRI) (75), Group decision making (29), Growth option (75), Growth rate (46), Hawkes process (11), Heavy-tailed data (20), Hedge ratios (98), Hedging (98), Hedging effectiveness (94), Hedging performance (98), Herding (66), Herding towards book-to-market factor (66), Herding towards momentum factor (66), Herding towards size factor (66), Herding towards the market (66), High end computing (49), High-dimensional data (77), Higher moments (80), High-frequency data (40), High-order moments (57),
Historical simulation (45), Housing (78), Illiquidity (30), Imitation (66), Implied standard deviation (ISD) (90), Implied volatility (32, 90), Impression management (61), Impulse response (76), Incentive options (55), Income from operations (61), Independence screening (77), Index futures (44), Index options (32), Inflation targeting (59), Information asymmetry (2, 96), Information content (92), Information content of trades (76), Information technology (49), Informational efficiency (76), Instantaneous volatility (92), Institutional investors (17), Insurance (20), Intangible assets (38), Interest rate risk (6), Interest rate volatility (86), Internal control material weakness (63), Internal growth rate (46), Internal rating (16), International capital asset pricing model (ICAPM) (25, 40), Internet bubble (53), Intertemporal riskreturn relation (15), Intraday returns (44), Investment (67), Investment equations (57), Investment risk taking (97), Investment strategies (68), Investment style (68), Issuer default (23), Issuer-heterogeneity (72), Kernel pricing (7), Laboratory experimental asset markets (73), Lead-lag relationship (17), Lefttruncated data (20), Legal traditions (12), Limited dependent variable model (99), Liquidity (22, 99), Liquidity risk (64), Local volatility (92), Logical analysis of data (16), Logical rating score (16), Long run (59), Long-run stock return (50), Lower bound (7), Management earnings (9), Management entrenchment (5), Management myopia (5), Managerial effort (55), Market anomalies (44), Market efficiency (28, 73), Market microstructure (4, 96), Market model (99), Market perfection (79), Market performance measure (75), Market quality (76), Market segmentation (25), Market uncertainties (67), Market-based accounting research (10), Markov property (72), Martingale property (94), Micro-homogeneity (43), Minimum variance hedge ratio (94), Mis-specified returns (44), Momentum strategies (81), Monetary policy shock (59), Mutual funds (3), NAV of a mutual fund (88), Nelson-Siegel curve (39), Net asset value (25), Nonrecurring items (61), Net present value (NPV) (75), Oil (58), Oil and gas industry (61), OLS hedging strategy (70), On-/off-the-run yield spread (22), Online estimation (92), Operating earnings (87), Operating leverage (75), Operational risk (20), Opportunistic disclosure management (97), Opportunistic earnings management (97), Optimal hedge ratio (94), Optimal payout ratio (79), Optimal portfolios (35), Optimal tradeoffs (29), Option bounds (7), Option prices (32), Option pricing (49, 65), Option pricing model (90), Optional bias (2), Options on S&P 500 index futures (90), Oracle property (77), Order imbalance (96), Out-of-sample return (8), Out-of-the-money (7), Output (59), Overconfidence (55), Overidentifying restrictions (95), Payout policy (79), Pension funds (66), Percent effective spread (99), Performance appraisal (3), Performance evaluation (8), Performance measures (28), Performance values (68), Persistence (44), Persistent change (31), Poison put (5), Political cost (61), Portfolio management (3, 35, 70), Portfolio optimization (8, 83), Portfolio selection (26), Post-earnings-announcement drift (81), Post-IT policy (59), Predicting returns (35), Prediction of price movements (3), Pre-IT policy (59), Preorder (16), Price impact (99), Price indexes (78), Price level (59),
Price on earnings model (10), Pricing (78), Pricing performance (98), Probability of informed trading (PIN) (96), Property (78), Property rights (12), Put option (37), Put-call parity (37), Quadratic cost (93), Quality options (71), Random number generation (49), Range (74), Rank dependent utility (83), Rating migration (72), Rational bias (2), Rational expectations (43), Real estate (78), Real sphere (59), Realized volatility (74), Recurrent event (72), Recursive (85), Reflection principle (65), Regime-switching hedging strategy (70), Registered trading firms (17), Relative value of equity (75), Research and development expense (61), Restrictions (59), Retention option (75), Return attribution (75), Return models (10), Reverse-engineering (16), Risk (83), Risk adjusted performance (3), Risk aversion (55), Risk management (41, 49, 67, 74, 80), Risk measurement (26), Risk premium (80), Risk-neutral pricing (32), Robust estimation (41), S&P 500 index (7), Sarbanes-Oxley act (63), SCAD penalty (77), Scale-by-scale decomposition (19), Seasonality (33), Semi-log (78), Sentiment (30), Shape parameter (82), Share prices (59), Share repurchases (38), Sharpe ratios (35, 53), Short run (59), Short selling (26), Short-term financing (64), Signaling hypothesis (79), Sigma (37), Smile shapes (32), Smooth transition (74), Special items (61), Speculative bubbles (73), Spot price (33), Stationarity (10), Statistical learning (77), Stochastic discount factors (25, 35), Stochastic interest rates (98), Stochastic order (83), Stochastic volatility (44, 84, 85, 92, 98), Stock market overreaction (15), Stock markets (67), Stock option (65), Stock option pricing (98), Stock price indexes (62), Stock/futures (44), Strike price (55), Structural break (31), Subjective value (30), Substantial price fluctuations (82), Sustainable growth rate, synergy (21), Synthetic utility value (68), Systematic risk (3, 79), TAIEX (45), Tail risk (67), Timberland investments (34), Time-varying risk (25), Timevarying risk aversion (40), Time-varying volatility (15), Timing options (71), Tobin’s model (99), Top-down capital budgeting (75), Total risk (79), Tournament (73), Trade direction (96), Trade turnover industry (9), Transaction costs (99), Transfer pricing (29), Treasury bond futures (71), Trend extraction (18), Trust (12), Turkish economy (59), U.S. stocks (52), Ultrahigh-dimensional data (77), Unbiasedness (43), Uncertainty avoidance (12), Uncovered interest parity (69), Unexpected volatility shocks (15), Unsystematic risk (3), Utility-based hedging strategy (70), VaR-efficient frontier (45), Variability percentage adjustment (21), Visual Basic for applications (37), Volatility index (VIX) (92), Volatility (37, 80), Volatility co-persistence (44), Volatility daily effect (92), Volatility dependencies (62), Volatility feedback effect (15), Weak efficiency (43), Weak instruments (95), Wealth transfer (75), Write-downs (61), Yaari’s dual utility (83), Yield curve (39), Zero-investment portfolio (50). (ii) Methodology Topics A mixture of Poisson distribution (98), Analyst estimation (ANOVA) (2, 28), Analytic hierarchy process (29), Analysis of variance (19), Anderson-Darling statistic (20), Anderson-Rubin statistic (95), ANST-GARCH model (asymmetric nonlinear smooth transition- GARCH model) (15), Approximately normal distribution (28), ARCH (41, 44), ARCH models (32), ARX-GARCH (autoregressive (AR) mean process with exogenous (X)
variables- GARCH model) (85), Asset-pricing tests (35), Asset-pricing regression (24) Asymmetric dependence (52), Asymptotic distribution (44), Autocovariance (99), Autoregression (62), Autoregressive conditional jump intensity (82), Autoregressive model (88), Autoregressive moving average with exogenous variables (10), Autoregressive parameters (44), Bankruptcy prediction (27), Bayesian updating (2), Binomial distribution (28), Block bootstrap (56), Block granger causality (17), Bootstrap (8, 50), Bootstrap test (14), Bootstrapped critical values (24), Boundary function (31), Box-Cox (78), Bubble test (31), Change-point models (85), Clayton copula (8, 11), Cluster standard errors (24), Custering effect (79), Co-integration (76), Co-integration breakdown test (31), Combination of forecasts (88), Combinatorial optimization (16), Combined forecasting (87), Combining forecast (27), Complex logarithm (84), Conditional distribution (56), Conditional market model (50), Conditional skewness (80), Conditional value-atrisk (26, 83), Conditional variance (70), Conditional variance estimates (44), Contemporaneous jumps (85), Contingency tables (28), Contingent claim model (75), Continuous wavelet transform (19), Cook’s distance (63), Copula (8, 41, 67, 74, 91), Correction method (92), Correlation (67, 73), Correlation analysis (99), CoVar (54), Covariance decomposition (72), Cox-Ingersoll-Ross (CIR) model (22, 71), Cross-sectional and time-series dependence (42), CUSUM squared test (31), Data-mining (16), Decision trees (37), Default correlation (23, 91), Dickey-Fuller test (10), Dimension reduction (77), Discrete wavelet transform (19), Discriminant analysis (89), Distribution of underlying asset (7), Double clustering (24), Downside risk model (26), Dynamic conditional correlation (52, 58, 74), Dynamic factor model (11), Dynamic random-effects models (38), Econometric methodology (38), Econometric modeling (33), Econometrics (12), Error component two-stage least squares (EC2SLS) (12), Error in variable problem (60, 96), Estimated cross-sectional standard deviations of betas (66), Event study methodology (5, 50), Ex ante probability (82), Excess kurtosis (44), Exogeneity test (95), Expectation–maximization (EM) algorithm (96), Expected return distribution (45), Explanatory power (89), Exponential trend model (88), Extended Kalman filtering (86), Factor analysis (68, 89), Factor copula (23, 91), Factor model (39, 50), Fama-French three-factor model (14), Feltham and Ohlson model (87), Filtering methods (19), Fixed effects (57, 63, 79), Forecast accuracy (2, 9), Forecast bias (2), Forecasting complexity (97), Forecasting models (27), Fourier transform method (92), Frank copula (11), GARCH (8, 40, 41, 96), GARCH hedging strategy (70), GARCH models (48, 52), GARCH-in-mean (40), GARCH-jump model (82), GARJI model (82), Gaussian copula (8), Generalized correlations (77), Generalized hyperbolic distribution (94), Generalized method of moments (13), Gibbs sampler (62), Generalized least square (GLS) (36), Generalized method of moments (GMM) (5, 25, 43, 57, 95), Generalized two-stage least squares (G2SLS) (12), Goal programming (89), Goodness-of-fit test (82, 20), Granger-causality test (76), Gumbel copula (8, 11), Hazard model (72),
Heath-Jarrow-Morton model (86), Hedonic models (78), Heston (84), Heterogeneity (43), Heteroskedasticity (57), Hidden Markov models (85), Hierarchical clustering with K-Means approach (30), Hierarchical system (68), Huber estimation (66), Hyperparameter estimation (85), Hypothesis testing (53), Ibottson’s RATS (50), Infinitely divisible models (48), Instrumental variable (IV) estimation (95, 57), Johnson’s Skewness-adjusted t-test (14), Joint-normality assumption (94), Jones (1991) model (61), Jump detection (18), Jump dilution model (30), Jump process (56), Kalman filter (66), Kolmogorov-Smirnov statistic (20), Kupiec’s proportion of failures test (48), Large-scale simulations (14), Latent variable (60), Least squares (78), Likelihood maximization (32), Linear filters (18), Linear trend model (88), LISREL approach (36, 60), Locally linear quantile regression (54), Logistic smooth transition regression model (69), Logit regression (27), Log-likelihood function (99), Lognormal (65), Long-horizon event study (14), Long memory process (31, 32), Loss distribution (20), Loss function (51), MAD model (26), Matching procedure (63), Mathematical optimization (55), MATLAB (90), Maximum likelihood (35, 38, 52), Maximum likelihood estimation (MLE) (36, 44, 71), Maximum sharp measure (94), Markov Chain Monte Carlo (MCMC) (62, 69, 85), Mean-variance ratio (53), Measurement error (36, 57), Method of maximum likelihood (51), Method of moments (35), A comparative study of two models SV with MCMC algorithm (62), Microsoft Excel (37), Multiple indicator multiple causes (MIMIC) (36), Minimum generalized semi-invariance (94), Minimum recording threshold (20), Minimum value of squared residuals (MSE loss function) (10), Minimum variance efficiency (35), Misspecification (44), Model formulation (38), Model selection (56, 77), Monitoring fluctuation test (31), Monte Carlo simulation (11, 23, 32, 49, 57), Moving average method (88), Moving estimates processes (79), MSE (62), Multifactor diffusion process (56), Multifactor multi-indicator approach (36), Multiple criteria and multiple constraint linear programming (29), Multiple criteria decision making (MCDM) (68), Multiple indicators and multiple causes (MIMIC) model (60), Multiple objective programming (26), Multiple regression (6, 9), Multi-resolution analysis (19), Multivariate technique (89), Multivariate threshold autoregression model (17), MV model (26), Nonlinear filters (18), Nonlinear Kalman filter (22), Nonlinear optimization (38), Non-normality (41), Nonparametric (7), Nonparametric density estimation (86), Nonparametric tests (28), Normal copula (11), Normal distribution (45), Ohlson model (87), Order flow models (4), Ordered logit (27), Ordered probit (27), Ordinary least-squares regression (63, 73), Ordinary least-squares (OLS) (90, 39, 95, 36), Orthogonal factors (6), Outlier (33), Out-of-sample forecasts (56), Panel data estimates (12, 40, 38), Panel data regressions (42, 2), Parametric approach (51), Parametric bootstrap (35), Partial adjustment (93), Partial linear model (54), Penalized leastsquares (77), Prediction test (31), Principle component analysis (89, 91), Principle component factors (21), Probability density function (27), Quantile autoregression (QAR) (41), Quadratic trend model (88), Quantile dependence (67), Quantile regression (41, 54, 67), Quasi-maximum likelihood (22),
Quasi-maximum likelihood estimation strategy (48), Random walk models (4), Rank regressions (63), Realized distribution (58), Rebalancing model (26), Recursive filters (85), Recursive programming (37), Reduced-form model (23, 93), Regime-switch model (69), Regression models (85, 78), Revenue surprises rotation-corrected angle (84), Ruin probability (20), Seemingly unrelated regressions (SUR) (40, 93), Semi-parametric approach (51), Semiparametric model (54), Serial correlation (44), Shrinkage (77), Simple adjusted formula (84), Simulations (55), Simultaneous equations (60, 93, 95, 87), Single clustering (24), Single exponential smoothing (88), Skewed generalized student’s t (51), Skewness (57), Specification test (56), Spectral analysis (19), Standard errors in finance panel data (30), Standardized Z (21), State-space model (39, 66), Static factor model (11), Stepwise discriminant analysis (89), Stochastic dominance (83), Stochastic frontier analysis (13), Structural change model (79), Structural equation modeling (SEM) (60), Structural VAR (17), Student’s t-copula (8, 52), Student’s t-distribution (62), Seemingly unrelated regression (SUR) (43), Survey forecasts (43), Survival analysis (72), SVECM models (59), Stochastic volatility (SVOL) (62), Tail dependence (52), Taylor series expansion (90), t-Copula (11), Tempered stable distribution (48), Term structure (32, 39, 71), Term structure modeling (86), Time-series analysis (18, 34, 41), Timeheterogeneity (72), Time-series and cross-sectional effects (12), Time-varying covariate (72), Time-varying dependence (8), Time-varying parameter (34), Time-varying rational expectation hypothesis (15), Trading simulations (4), Two-sector asset allocation model (45), Two-stage estimation (52), Two-stage least square (2SLS) (95), Two-way clustering method of standard errors (42), Unbounded autoregressive moving average model (88), Unconditional coverage test (51), Unconditional variance (70), Uniformly most powerful unbiased test (53), Unit root tests (10), Unit root time series (31), Unweighted GARCH (44), Value-at-risk (VAR) (45, 54, 83, 20, 26, 41, 48, 51, 76), Variable selection (77), Variance decomposition (76), Variance estimation (70), Variance reduction methods (32), Variance-gamma process (82), VG-NGARCH model (82), Visual Basic for applications (VBA) (37), Volatility forecasting (74), Volatility regime switching (15), Volatility threshold (58), Warren and Shelton model (87), Wavelet (18), Wavelet filter (19), Weighted GARCH, (44), Weighted least-squares regression (14), Wilcoxon rank test (21), Wilcoxon two-sample test (9), Wild-cluster bootstrap (24), and Winter’s method (88).
1.7 Summary and Concluding Remarks
This chapter has discussed important financial econometrics and statistics methods which have been used in finance and accounting research. In addition, this chapter has presented an overview of the 98 chapters included in this handbook. Section 1.2, “Financial Econometrics,” has six subsections: single equation regression methods, simultaneous equation models, panel data analysis, alternative methods to deal with measurement error, time-series analysis, and
spectral analysis. Section 1.3, “Financial Statistics,” has four subsections, which review and discuss important statistical distributions, principal components and factor analysis, nonparametric and semi-parametric analyses, and cluster analysis. In Sect. 1.4, “Applications of Financial Econometrics,” we briefly discuss how different methodologies of financial econometrics can be applied to the topics of finance and accounting. These applications include asset-pricing research, corporate finance research, financial institution research, investment and portfolio research, option pricing research, futures and hedging research, mutual fund research, and credit risk modeling. Section 1.5, “Applications of Financial Statistics,” states that financial statistics is an important tool for conducting research in the areas of (1) asset-pricing research, (2) investment and portfolio research, (3) credit risk management research, (4) market risk research, (5) operational risk research, (6) option pricing research, (7) mutual fund research, (8) hedge fund research, (9) value-at-risk research, and (10) auditing. Section 1.6 is an “Overall Discussion of Papers in this Handbook”; it classifies the 98 chapters into 14 groups in accordance with chapter titles and keywords.
Appendix 1: Brief Abstracts and Keywords for Chapters 2 to 99

Chapter 2: Experience, Information Asymmetry, and Rational Forecast Bias

This chapter uses a Bayesian model of updating forecasts in which the bias in the forecast endogenously determines how the forecaster’s own estimates weigh into the posterior beliefs. The model used in this chapter predicts a concave relationship between accuracy in the forecast and the posterior weight that is put on the forecaster’s self-assessment. This chapter then uses a panel regression to test the analytical findings and finds that an analyst’s experience is indeed concavely related to the forecast error.
Keywords: Financial analysts, Forecast accuracy, Information asymmetry, Forecast bias, Bayesian updating, Panel regressions, Rational bias, Optional bias, Analyst estimation, Analyst experience
Chapter 3: An Overview of Modeling Dimensions for Performance Appraisal of Global Mutual Funds (Mutual Funds)

This paper examines various performance models derived by financial experts across the globe. A number of studies have been conducted to examine the investment performance of mutual funds in developed capital markets. The measurement of the performance of financial instruments is basically dependent on three important models derived independently by Sharpe, Jensen, and Treynor. All three models are based on the assumptions that (1) all investors are averse to risk, and are single-period expected utility of terminal wealth maximizers, (2) all investors have
identical decision horizons and homogeneous expectations regarding investment opportunities, (3) all investors are able to choose among portfolios solely on the basis of expected returns and variance of returns, (4) all transaction costs and taxes are zero, and (5) all assets are infinitely divisible. Overall, this paper examines nine alternative mutual fund measures. The method used in this kind of research is regression analysis.
Keywords: Financial modeling, Mutual funds, Performance appraisal, Global investments, Evaluation of funds, Portfolio management, Systematic risk, Unsystematic risk, Risk adjusted performance, Prediction of price movements
Chapter 4: Simulation as a Research Tool for Market Architects

This chapter uses simulation to gain insights into trading and market structure topics by two statistical methods, together with experimental design and careful controls over experimental parameters such as the instructions given to participants. The first method is discrete event simulation with the model of computer-generated trade order flow that we describe in Sect. 3. To create a realistic, but not ad hoc, market background, we use draws from a log-normal returns distribution to simulate changes in a stock’s fundamental value, or P*. The model uses price-dependent Poisson distributions to generate a realistic flow of computer-generated buy and sell orders whose intensity and supply-demand balance vary over time. The order flow fluctuations depend on the difference between the current market price and the P* value. In Sect. 4, we illustrate the second method, which is experimental control used to create groupings of participants in our simulations that have the same trading “assignment.” The result is the ability to make valid comparisons of traders’ performance in the simulations.
Keywords: Trading simulations, Market microstructure, Order flow models, Random walk models, Experimental economics, Experimental control
Chapter 5: Motivations for Issuing Putable Debt: An Empirical Analysis

This paper is the first to examine the motivations for issuing putable bonds in which the embedded put option is not contingent upon a company-related event. We find that the market favorably views the issue announcement of these bonds that we refer to as bonds with European put options or European putable bonds. This response is in contrast to the response documented by the literature to other bond issues (straight, convertible, and most studies examining poison puts), and to the response documented in the current paper to the issue announcements of poison-put bonds. Our results suggest that the market views issuing European putable bonds as helping mitigate security mispricing. Our study is an application of important statistical methods in corporate finance, namely, Event Studies and the use of General Method of Moments for cross-sectional regressions.
Keywords: Agency costs, Asymmetric information, Corporate finance, Capital structure, Event study methodology, European put, General method of moments, Management myopia, Management entrenchment, Poison put
Chapter 6: Multi Risk-Premia Model of U.S. Bank Returns: An Integration of CAPM and APT

Interest rate sensitivity of bank stock returns has been studied using an augmented CAPM: a multiple regression model with market returns and interest rate as independent variables. In this chapter, we test an asset-pricing model in which the CAPM is augmented by three orthogonal factors which are proxies for the innovations in inflation, maturity risk, and default risk. The methodologies used in this chapter are multiple regression and factor analysis.
Keywords: CAPM, APT, Bank stock return, Interest rate risk, Orthogonal factors, Multiple regression
Chapter 7: Non-parametric Bounds for European Option Prices

This chapter derives a new nonparametric lower bound and provides an alternative interpretation of Ritchken’s (1985) upper bound to the price of the European option. In a series of numerical examples, our new lower bound is substantially tighter than previous lower bounds. This is prevalent especially for out-of-the-money (OTM) options where the previous lower bounds perform badly. Moreover, we show that our bounds can be derived from histograms which are completely nonparametric in an empirical study. We first construct histograms from realizations of S&P 500 index returns following Chen et al. (2006), calculate the dollar beta of the option and expected payoffs of the index and the option, and eventually obtain our bounds. We discover violations in our lower bound and show that those violations present arbitrage profits. In particular, our empirical results show that out-of-the-money calls are substantially overpriced (violate the lower bound). The methodologies used in this chapter are nonparametric methods, the option pricing model, and histogram methods.
Keywords: Option bounds, Nonparametric, Black-Scholes model, European option, S&P 500 index, Arbitrage, Distribution of underlying asset, Lower bound, Out-of-the-money, Kernel pricing
Chapter 8: Can Time-Varying Copulas Improve Mean-Variance Portfolio?

This chapter evaluates whether constructing a portfolio using time-varying copulas yields superior returns under various weight updating strategies. Specifically, minimum-risk portfolios are constructed based on various copulas and the Pearson
correlation, and a 250-day rolling window technique is adopted to derive a sequence of time-varied dependences for each dependence model. Using daily data of the G-7 countries, our empirical findings suggest that portfolios using time-varying copulas, particularly the Clayton dependence, outperform those constructed using Pearson correlations. The above results still hold under different weight updating strategies and portfolio rebalancing frequencies. The methodologies used in this chapter are copulas, GARCH, Student’s t-copula, Gumbel copula, Clayton copula, time-varying dependence, portfolio optimization, and bootstrap.
Keywords: Copulas, Time-varying dependence, Portfolio optimization, Bootstrap, Out-of-sample return, Performance evaluation, GARCH, Gaussian copula, Student’s t-copula, Gumbel copula, Clayton copula
Chapter 9: Determinations of Corporate Earnings Forecast Accuracy: Taiwan Market Experience

This chapter examines the accuracy of earnings forecasts with the following test methodologies. Multiple regression models are used to examine the effect of six factors: firm size, market volatility, trading volume turnover, corporate earnings variances, type of industry, and experience. If the two sample groups are related, the Wilcoxon two-sample test is used to determine the relative earnings forecast accuracy. Readers are referred to the chapter appendix for methodological issues such as sample selection, variable definition, the regression model, and the Wilcoxon two-sample test.
Keywords: Multiple regression, Wilcoxon two-sample test, Corporate earnings, Forecast accuracy, Management earnings, Firm size, Corporation regulation, Volatility, Trade turnover, Industry
Chapter 10: Market-Based Accounting Research (MBAR) Models: A Test of ARIMAX Modeling

This study uses standard models such as earnings level and earnings changes, among others. Models that fit better to the data drawn from companies listed on the Athens Stock Exchange have been selected employing autoregressive integrated moving average with exogenous variables (ARIMAX) models. Models I (price on earnings model), II (returns on change in earnings divided by beginning-of-period price and prior period), V (returns on change in earnings over opening market value), VII (returns deflated by lag of 2 years on earnings over opening market value), and IX (differenced-price model) have statistically significant coefficients of explanatory variables. These models rely on backward-looking information instead of the forward-looking information assessed in the recent literature. The methodologies used in this chapter are the price on earnings model, return models, autoregressive moving average with exogenous variables (ARIMAX), minimum value of squared residuals (MSE loss function), and the Dickey-Fuller test.
Keywords: Market-based accounting research (MBAR), Price on earnings model, Earnings level, Earnings change, Return models, Autoregressive moving average with exogenous variables (ARIMAX), Minimum value of squared residuals (MSE loss function), Unit root tests, Stationarity, Dickey-Fuller test
Chapter 11: An Assessment of Copula Functions Approach in Conjunction with Factor Model in Portfolio Credit Risk Management

This study uses a mixture of the dynamic factor model of Duffee (1999) and a contagious effect in the specification of a Hawkes process, a class of counting processes which allows intensities to depend on the timing of previous events (Hawkes 1971). Using the mixture factor-contagious-effect model, Monte Carlo simulation is performed to generate default times of two hypothesized firms. The goodness-of-fit of the joint distributions based on the copula functions most often used in the literature, including the Normal, t-, Clayton, Frank, and Gumbel copulas, is assessed against the simulated default times. It is demonstrated that as the contagious effect increases, the goodness-of-fit of the joint distribution functions based on copula functions decreases, which highlights a deficiency of the copula function approach.
Keywords: Static factor model, Dynamic factor model, Correlated defaults, Contagious effect, Hawkes process, Monte Carlo simulation, Normal copula, t-copula, Clayton copula, Frank copula, Gumbel copula
Chapter 12: Assessing Importance of Time-Series Versus Cross-Sectional Changes in Panel Data: A Study of International Variations in Ex-Ante Equity Premia and Financial Architecture

This chapter uses simultaneous equation modeling and the Hausman test to determine whether to report fixed- or random-effects estimates. We first report random-effects estimates based on the estimation procedure of Baltagi (Baltagi 1981; Baltagi and Li 1994, 1995). We consider the error component two-stage least squares (EC2SLS) estimator of Baltagi and Li (1995) to be more efficient than the generalized two-stage least squares (G2SLS) estimator of Balestra and Varadharajan-Krishnakumar (1987). For our second estimation procedure, for comparative purposes, we use the dynamic panel modeling estimates recommended by Blundell and Bond (1998). We employ the model of Blundell and Bond (1998), as these authors argue that their estimator is more appropriate than the Arellano and Bond (1991) model for smaller time periods relative to the size of the panels. We also use this two-step procedure with the first lag of the dependent variable as an independent variable, reporting the robust standard errors of Windmeijer (2005). Thus, our two different panel estimation techniques place differing emphasis on cross-sectional and time-series effects, with the Baltagi-Li
estimator emphasizing cross-sectional effects and the Blundell-Bond estimator emphasizing time-series effects. Keywords: Panel data estimates, Time-series and cross-sectional effects, Econometrics, Financial institutions, Banks, Financial markets, Comparative financial systems, Legal traditions, Uncertainty avoidance, Trust, Property rights, Error component two-stage least squares (EC2SLS), The generalized two-stage least squares (G2SLS)
Chapter 13: Does Banking Capital Reduce Risk? An Application of Stochastic Frontier Analysis and GMM Approach

This chapter employs stochastic frontier analysis to create a new type of instrumental variable. The unrestricted frontier model determines the highest possible profitability based solely on the book value of assets employed. We develop a second frontier based on the level of bank holding company capital as well as the amount of assets. The implication of using the unrestricted model is that we are measuring the unconditional inefficiency of the banking organization. This chapter applies generalized method of moments (GMM) regression to avoid the problem caused by departure from normality. To control for the impact of size on a bank’s risk-taking behavior, the book value of assets is considered in the model. The relationship between the variables specifying bank behavior and the use of equity is analyzed by GMM regression.
Keywords: Bank capital, Generalized method of moments, Stochastic frontier analysis, Bank risks, Bank holding companies, Endogeneity of variables
Chapter 14: Evaluating Long-Horizon Event Study Methodology

This chapter examines the performance of more than 20 different testing procedures that fall into two categories. First, the buy-and-hold benchmark approach uses a benchmark to measure the abnormal buy-and-hold return for every event firm, and tests the null hypothesis that the average abnormal return is zero. Second, the calendar-time portfolio approach forms a portfolio in each calendar month consisting of firms that have had an event within a certain time period prior to the month, and tests the null hypothesis that the intercept is zero in the regression of monthly portfolio returns against the factors in an asset-pricing model. This chapter also evaluates the performance of the bootstrapped Johnson’s skewness-adjusted t-test. This computation-intensive procedure is considered because the distribution of long-horizon abnormal returns tends to be highly skewed to the right. The bootstrapping method uses repeated random sampling to measure the significance of relevant test statistics. Due to the nature of random sampling, the resultant measurement of significance varies each time such a procedure is used. We also evaluate simple nonparametric tests, such as the Wilcoxon signed-rank test or Fisher’s sign test, which are free from random sampling variation.
Keywords: Long-horizon event study, Johnson’s Skewness-adjusted t-test, Weighted least-squares regression, Bootstrap test, Calendar-time portfolio approach, Fama-French three-factor model, Johnson’s skewness-adjusted t-statistic, Large-scale simulations
Chapter 15: Effect of Unexpected Volatility Shocks on Intertemporal Risk-Return Relation

This chapter employs the ANST-GARCH model that is capable of capturing the asymmetric volatility effect of a positive and negative return shock. The key feature of the model is the regime-shift mechanism that allows a smooth, flexible transition of the conditional volatility between different states of volatility persistence. The regime-switching mechanism is governed by a logistic transition function that changes values depending on the level of the previous return shock. With a negative (positive) return shock, the conditional variance process is described as a high (low)-persistence-in-volatility regime. The ANST-GARCH model describes the heteroskedastic return dynamics more accurately and generates better volatility forecasts.
Keywords: Intertemporal risk-return relation, Unexpected volatility shocks, Time-varying rational expectation hypothesis, Stock market overreaction, Expected market risk premium, Volatility feedback effect, Asymmetric mean reversion, Asymmetric volatility response, Time-varying volatility, Volatility regime switching, ANST-GARCH model
Chapter 16: Combinatorial Methods for Constructing Credit Risk Ratings This chapter uses a novel method, the Logical Analysis of Data (LAD), to reverse-engineer and construct credit risk ratings that represent the creditworthiness of financial institutions and countries. LAD is a data-mining method based on combinatorics, optimization, and Boolean logic that utilizes combinatorial search techniques to discover various combinations of attribute values that are characteristic of the positive or negative character of observations. The proposed methodology is applicable in the general case of inferring an objective rating system from archival data, given that the rated objects are characterized by vectors of attributes taking numerical or ordinal values. The proposed approaches are shown to generate transparent, consistent, self-contained, and predictive credit risk rating models, closely approximating the risk ratings provided by some of the major rating agencies. The scope of applicability of the proposed method extends beyond the rating problems discussed in this study, and it can be used in many other contexts where ratings are relevant. This study also uses multiple linear regression to derive the logical rating scores.
Keywords: Credit risk rating, Reverse-engineering, Logical analysis of data, Combinatorial optimization, Data-mining, Creditworthiness, Financial strength, Internal rating, Preorder, Logical rating score
Chapter 17: Dynamic Interactions in the Taiwan Stock Exchange: A Threshold VAR Model This chapter constructs a six-variable VAR model (including NASDAQ returns, TSE returns, NT/USD returns, net foreign purchases, net domestic investment companies (dic) purchases, and net registered trading firms (rtf) purchases) to examine: (i) the interaction among three types of institutional investors, particularly to test whether net foreign purchases lead net domestic purchases by dic and rtf (the so-called demonstration effect); (ii) whether net institutional purchases lead market returns or vice versa; and (iii) whether the corresponding lead-lag relationship is positive or negative. Readers are well advised to refer to the chapter appendix for a detailed discussion of the unrestricted VAR model, the structural VAR model, and the threshold VAR analysis. The methodologies used in this chapter are the multivariate threshold autoregression model, structural VAR, and block Granger causality. Keywords: Demonstration effect, Multivariate threshold autoregression model, Foreign investment, Lead-lag relationship, Structural VAR, Block Granger causality, Institutional investors, Domestic investment companies, Registered trading firms, Qualified foreign institutional investors
Chapter 18: Methods of Denoising Financial Data This chapter uses denoising analysis, which poses new challenges for financial data mining due to the irregularities and roughness observed in financial data, particularly for instantaneously collected, massive amounts of tick-by-tick data from financial markets used for information analysis and knowledge extraction. Inefficient decomposition of the systematic pattern (the trend) and the noise of financial data will lead to erroneous conclusions, since the irregularities and roughness of the financial data make the application of traditional methods difficult. The methodologies used in this chapter are linear filters, nonlinear filters, time-series analysis, trend extraction, and wavelets. Keywords: Jump detection, Linear filters, Nonlinear filters, Time-series analysis, Trend extraction, Wavelet
Chapter 19: Analysis of Financial Time Series Using Wavelet Methods This chapter presents a set of tools that allow gathering information about the frequency components of a time series. In the first step, we discuss spectral
analysis and filtering methods. Spectral analysis can be used to identify and to quantify the different frequency components of a data series. Filters make it possible to capture specific components (e.g., trends, cycles, seasonalities) of the original time series. In the second step, we introduce wavelets, which are relatively new tools in economics and finance. They take their roots from filtering methods and Fourier analysis, but overcome most of the limitations of these two methods. Their principal advantages derive from (i) combining information from both the time domain and the frequency domain and (ii) their flexibility, as they do not make strong assumptions concerning the data-generating process for the series under investigation. Keywords: Filtering methods, Spectral analysis, Fourier transform, Wavelet filter, Continuous wavelet transform, Discrete wavelet transform, Multi-resolution analysis, Scale-by-scale decomposition, Analysis of variance, Case-Shiller home price indices
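As a rough illustration of the scale-by-scale logic described above, the sketch below decomposes a simulated return series with the discrete wavelet transform; it assumes the PyWavelets (pywt) package and uses an arbitrary db4 wavelet with three decomposition levels.

```python
# Illustrative multi-resolution decomposition of a toy return series (PyWavelets assumed).
import numpy as np
import pywt

prices = np.cumsum(np.random.default_rng(0).normal(0, 1, 512)) + 100   # toy price series
returns = np.diff(np.log(prices))

# Discrete wavelet decomposition: one approximation (trend) + three detail levels.
coeffs = pywt.wavedec(returns, wavelet='db4', level=3)

total_energy = sum(np.sum(c ** 2) for c in coeffs)
for i, c in enumerate(coeffs):
    label = 'approximation' if i == 0 else f'detail level {len(coeffs) - i}'
    print(f'{label:15s} energy share: {np.sum(c ** 2) / total_energy:.3f}')

# Scale-by-scale trend extraction: zero the detail coefficients and reconstruct.
trend = pywt.waverec([coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]],
                     wavelet='db4')[:len(returns)]
```

The energy shares act as a simple analysis-of-variance across scales, while the reconstruction with zeroed details recovers the smooth trend component.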
Chapter 20: Composite Goodness-of-Fit Tests for Left Truncated Loss Sample This chapter derives the exact formulae for several goodness-of-fit statistics that should be applied to loss models with left-truncated data where the fit of a distribution in the right tail of the distribution is of central importance. We apply the proposed tests to real financial losses, using a variety of distributions fitted to operational loss and the natural catastrophe insurance claims data. The methodologies discussed in this chapter are goodness-of-fit tests, loss distribution, ruin probability, value-at-risk, Anderson-Darling statistic, Kolmogorov-Smirnov statistic. Keywords: Goodness-of-fit tests, Left-truncated data, Minimum recording threshold, Loss distribution, Heavy-tailed data, Operational risk, Insurance, Ruin probability, Value-at-risk, Anderson-Darling statistic, Kolmogorov-Smirnov statistic
Chapter 21: Effect of Merger on the Credit Rating and Performance of Taiwan Security Firms This chapter identifies and defines variables for merger synergy analysis followed by principal component factor analysis, variability percentage adjustment, and performance score calculation. Finally, the Wilcoxon signed-rank test is used for hypothesis testing. We extract principal component factors from a set of financial ratios. The percentage of variability explained and the factor loadings are adjusted to get a modified average weight for each financial ratio. This weight is multiplied by the standardized Z value of the variable and summed over a set of variables to get a firm’s performance score. Performance scores are used to rank the firms. The statistical significance of the difference in pre- and post-merger ranks is tested using the Wilcoxon signed-rank test.
Keywords: Corporate merger, Financial ratios, Synergy, Economies of scale, Credit rating, Variability percentage adjustment, Principal component factors, Firm’s performance score, Standardized Z, Wilcoxon signed-rank test
Chapter 22: On-/Off-the-Run Yield Spread Puzzle: Evidence from Chinese Treasury Markets This chapter uses the on-/off-the-run yield spread to describe the “on-/off-the-run yield spread puzzle” in Chinese treasury markets. To explain this puzzle, we introduce a latent factor in the pricing of Chinese off-the-run government bonds and use this factor to model the yield difference between Chinese on-the-run and off-the-run issues. We use the nonlinear Kalman filter approach to estimate the model. The methodologies used in this chapter are the CIR model, the nonlinear Kalman filter, and quasi-maximum likelihood estimation. Keywords: On-/off-the-run yield spread, Liquidity, Disposition effect, CIR model, Nonlinear Kalman filter, Quasi-maximum likelihood
Chapter 23: Factor Copula for Defaultable Basket Credit Derivatives This chapter uses a factor copula approach to evaluate basket credit derivatives with issuer default risk and demonstrate its application in a basket credit linked note (BCLN). We generate the correlated Gaussian random numbers by using the Cholesky decomposition, and then the correlated default times are determined from these random numbers and the reduced-form model. Finally, the fair BCLN coupon rate is obtained by the Monte Carlo simulation. We also discuss the effect of issuer default risk on BCLN. We show that the effect of issuer default risk cannot be accounted for thoroughly by considering the issuer as a new reference entity in the widely used one factor copula model, in which constant default correlation is often assumed. A different default correlation between the issuer and the reference entities affects the coupon rate greatly and must be taken into account in the pricing model. Keywords: Factor copula, Issuer default, Default correlation, Reduced-form model, Basket credit derivatives, Cholesky decomposition, Monte Carlo simulation
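A stylized sketch of the simulation steps described above (Cholesky-correlated Gaussians mapped into default times through a constant-intensity reduced-form model) is given below; the hazard rates, correlation, discount rate, and first-to-default payoff are illustrative assumptions, not the chapter's BCLN calibration.

```python
# Illustrative Gaussian-copula default-time simulation via Cholesky decomposition.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)
hazard = np.array([0.02, 0.03, 0.04, 0.05])     # assumed constant default intensities
rho = 0.3                                       # assumed common pairwise correlation
n_names, n_sims, maturity = len(hazard), 100_000, 5.0

corr = np.full((n_names, n_names), rho) + (1 - rho) * np.eye(n_names)
L = np.linalg.cholesky(corr)

z = rng.standard_normal((n_sims, n_names)) @ L.T        # correlated Gaussians
u = norm.cdf(z)                                         # copula uniforms
default_times = -np.log(1 - u) / hazard                 # invert 1 - exp(-h * t)

# Expected discounted loss on a first-to-default basket, zero recovery, unit notional.
first_default = default_times.min(axis=1)
ftd_loss = np.exp(-0.03 * first_default) * (first_default <= maturity)
print('First-to-default expected discounted loss:', ftd_loss.mean())
```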
Chapter 24: Panel Data Analysis and Bootstrapping: Application to China Mutual Funds This chapter estimates double- and single-clustered standard errors by wild-cluster bootstrap procedure. To obtain the wild bootstrap samples in each cluster, we reuse the regressors (X), but modify the residuals by transforming the OLS residuals with weights which follow the popular two-point distribution suggested by Mammen (1993) and others. We then
compare them with other estimates in a set of asset-pricing regressions. The comparison indicates that bootstrapped standard errors from double clustering outperform those from single clustering. They also suggest that bootstrapped critical values are preferred to standard asymptotic t-test critical values to avoid misleading test results. Keywords: Asset-pricing regression, Bootstrapped critical values, Cluster standard errors, Double clustering, Firm and time effects, Finance panel data, Single clustering, Wild-cluster bootstrap
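The following simplified Python sketch illustrates the wild-cluster bootstrap idea with Mammen's two-point weights on a simulated one-regressor firm panel; it is a schematic single-clustering version, not the chapter's double-clustering implementation.

```python
# Schematic wild-cluster bootstrap (clustering by firm) with Mammen two-point weights.
import numpy as np

rng = np.random.default_rng(0)
n_firms, n_years = 50, 10
firm = np.repeat(np.arange(n_firms), n_years)
x = rng.normal(size=n_firms * n_years)
y = 0.5 * x + rng.normal(size=n_firms * n_years) + rng.normal(size=n_firms)[firm]

def ols_slope(x, y):
    X = np.column_stack([np.ones_like(x), x])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    return beta, y - X @ beta

beta_hat, resid = ols_slope(x, y)

# Mammen's two-point distribution: mean zero, variance one.
a, b = -(np.sqrt(5) - 1) / 2, (np.sqrt(5) + 1) / 2
p = (np.sqrt(5) + 1) / (2 * np.sqrt(5))

boot_betas = []
for _ in range(999):
    w = np.where(rng.random(n_firms) < p, a, b)[firm]    # one weight per cluster (firm)
    y_star = beta_hat[0] + x * beta_hat[1] + resid * w   # regenerate y with perturbed residuals
    boot_betas.append(ols_slope(x, y_star)[0][1])

print('OLS slope:', beta_hat[1], 'wild-cluster bootstrap SE:', np.std(boot_betas, ddof=1))
```

Bootstrapped critical values can be read off the empirical distribution of the resampled slopes rather than from the asymptotic t distribution.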
Chapter 25: Market Segmentation and Pricing of Closed-End Country Funds: An Empirical Analysis This chapter finds that for closed-end country funds, the international CAPM can be rejected for the underlying securities (NAVs) but not for the share prices. This finding indicates that country fund share prices are determined globally, whereas the NAVs reflect both global and local prices of risk. Cross-sectional variations in the discounts or premiums for country funds are explained by the differences in the risk exposures of the share prices and the NAVs. Finally, this chapter shows that the share price and NAV returns exhibit predictable variation, and country fund premiums vary over time due to time-varying risk premiums. The chapter employs Generalized Method of Moments (GMM) to estimate stochastic discount factors and examines if the price of risk of closed-end country fund shares and NAVs is identical. Keywords: Capital markets, Country funds, CAPM, Closed-end funds, Market segmentation, GMM, Net asset value, Stochastic discount factors, Time-varying risk, International asset pricing
Chapter 26: A Comparison of Portfolios Using Different Risk Measurements This study uses three different risk measurements: the Mean-Variance model, the Mean Absolute Deviation model, and the Downside Risk model. Short selling is also taken into account, since it is an important strategy that can bring a portfolio much closer to the efficient frontier by improving its risk-return trade-off. Therefore, six portfolio rebalancing models, including the MV model, the MAD model, and the Downside Risk model, with/without short selling, are compared to determine which is the most efficient. All models simultaneously consider the criteria of return and risk measurement. Meanwhile, when short selling is allowed, the models also consider minimizing the proportion of short selling. Therefore, multiple objective programming is employed to transform multiple objectives into a single objective in order to obtain a compromise solution. An example is used to perform a simulation, and the results indicate that the MAD model, incorporating short selling, has the highest market value and lowest risk.
Keywords: Portfolio selection, Risk measurement, Short selling, MV model, MAD model, Downside risk model, Multiple objective programming, Rebalancing model, Value-at-risk, Conditional value-at-risk
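To make the MAD formulation concrete, the sketch below casts a no-short-selling MAD portfolio problem as a linear program solved with scipy.optimize.linprog; the simulated return scenarios and the target-return constraint are illustrative assumptions, not the chapter's rebalancing models.

```python
# Mean Absolute Deviation (MAD) portfolio selection as a linear program (sketch).
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)
T, n = 120, 4                                  # return scenarios, number of assets
R = rng.normal(0.01, 0.05, size=(T, n))        # toy return scenarios
mu = R.mean(axis=0)
target = mu.mean()                             # require at least the average asset return

# Decision vector [w_1..w_n, d_1..d_T]; minimize the average absolute deviation.
c = np.concatenate([np.zeros(n), np.ones(T) / T])

# |(r_t - mu)'w| <= d_t  ->  two inequality blocks; plus -mu'w <= -target.
A_dev = R - mu
A_ub = np.vstack([np.hstack([A_dev, -np.eye(T)]),
                  np.hstack([-A_dev, -np.eye(T)]),
                  np.hstack([-mu, np.zeros(T)])[None, :]])
b_ub = np.concatenate([np.zeros(2 * T), [-target]])

A_eq = np.hstack([np.ones(n), np.zeros(T)])[None, :]    # weights sum to one
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
              bounds=[(0, None)] * (n + T), method='highs')
print('MAD-optimal weights:', np.round(res.x[:n], 3))
```

Allowing short selling would relax the non-negativity bounds on the weights and add an objective term penalizing the short proportion, in line with the multiple objective programming described above.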
Chapter 27: Using Alternative Models and a Combining Technique in Credit Rating Forecasting: An Empirical Study This chapter first utilizes the ordered logit and the ordered probit models. Then, we use the ordered logit combining method to weight different techniques’ probability measures, as described in Kamstra and Kennedy (1998), to form the combining model. The samples consist of firms in the TSE and the OTC market and are divided into three industries for analysis. We consider financial variables, market variables, as well as macroeconomic variables and estimate their parameters for out-of-sample tests. By means of the Cumulative Accuracy Profile, the Receiver Operating Characteristic, and the McFadden measure, we assess the goodness-of-fit and the accuracy of each prediction model. The performance evaluations are conducted to compare the forecasting results, and we find that the combining technique does improve the predictive power. Keywords: Bankruptcy prediction, Combining forecast, Credit rating, Credit risk, Credit risk index, Forecasting models, Logit regression, Ordered logit, Ordered probit, Probability density function
Chapter 28: Can We Use the CAPM as an Investment Strategy?: An Intuitive CAPM and Efficiency Test The aim of this chapter is to check whether certain playing rules, based on the undervaluation concept arising from the CAPM, could be useful as investment strategies and can therefore be used to beat the Market. If such strategies work, we will be provided with a useful tool for investors; otherwise, we will obtain a test whose results will be connected with the efficient Market hypothesis (EMH) and with the CAPM. The methodology used is both intuitive and rigorous: analyzing how many times we beat the Market with different strategies, in order to check whether, when we beat the Market, this happens by chance. Keywords: ANOVA, Approximately normal distribution, Binomial distribution, CAPM, Contingency tables, Market efficiency, Nonparametric tests, Performance measures
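A minimal sketch of this counting logic is shown below: if a strategy beats the Market in a given number of periods, a binomial test indicates whether that win rate could plausibly arise by chance; the counts used are hypothetical, and the example assumes SciPy 1.7 or later for binomtest.

```python
# Counting how often a strategy beats the Market and testing against pure chance.
from scipy.stats import binomtest

n_periods, n_wins = 120, 74          # hypothetical monthly comparisons versus the Market
result = binomtest(n_wins, n_periods, p=0.5, alternative='greater')
print(f'win rate {n_wins / n_periods:.2f}, p-value under pure chance: {result.pvalue:.4f}')
```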
Chapter 29: Group Decision Making Tools for Managerial Accounting and Finance Applications This chapter adopts an Analytic Hierarchy Process (AHP) approach to solve various accounting or finance problems such as developing a business performance evaluation system and developing a banking performance evaluation system. AHP uses
a hierarchical schema to incorporate nonfinancial and external performance measures. Our model has a broader set of measures that can examine external and nonfinancial performance as well as internal and financial performance. While AHP is one of the most popular multiple-goal decision-making tools, the Multiple Criteria and Multiple Constraint (MC2) Linear Programming approach can also be used to solve group decision-making problems such as transfer pricing and capital budgeting problems. The methodologies used in this chapter are the Analytic Hierarchy Process, multiple criteria and multiple constraint linear programming, and balanced scorecard and business performance evaluation. Keywords: Analytic hierarchy process, Multiple criteria and multiple constraint linear programming, Business performance evaluation, Activity-based costing system, Group decision making, Optimal trade-offs, Balanced scorecard, Transfer pricing, Capital budgeting
Chapter 30: Statistics Methods Applied in Employee Stock Options This study provides model-based and compensation-based approaches to price the subjective value of employee stock options (ESOs). In the model-based approach, we consider a utility-maximizing model in which the employee allocates his wealth among the company stock, the market portfolio, and a risk-free bond, and then derive ESO formulae that take into account illiquidity and sentiment effects. By using the method of change of measure, the derived formulae resemble those for market values but with altered parameters. To calculate the compensation-based subjective value, we group employees by hierarchical clustering with the K-Means approach and back out the option value in an equilibrium competitive employment market. Further, we test illiquidity and sentiment effects on ESO values by running regressions that account for the problem of standard errors in finance panel data. Keywords: Employee stock option, Sentiment, Subjective value, Illiquidity, Change of measure, Hierarchical clustering with K-Means approach, Standard errors in finance panel data, Exercise boundary, Jump diffusion model
Chapter 31: Structural Change and Monitoring Tests This chapter focuses on various structural change and monitoring tests for a class of widely used time-series models in economics and finance, including I(0), I(1), I(d) processes and the co-integration relationship. In general, structural change tests can be categorized into two types: one is the classical approach to testing for structural change, which employs retrospective tests using a historical data set of a given length; the other is the fluctuation-type test in a monitoring scheme, which means that, given a history period for which a regression relationship is known to be stable,
we then test whether incoming data are consistent with the previously established relationship. Several structural change tests, such as CUSUM squared tests, the QLR test, the prediction test, the multiple break test, bubble tests, co-integration breakdown tests, and the monitoring fluctuation test, are discussed in this chapter, and we further illustrate the details and usefulness of these tests. Keywords: Co-integration breakdown test, Structural break, Long memory process, Monitoring fluctuation test, Boundary function, CUSUM squared test, Prediction test, Bubble test, Unit root time series, Persistent change
Chapter 32: Consequences of Option Pricing of a Long Memory in Volatility This chapter uses conditionally heteroskedastic time-series models to describe the volatility of stock index returns. Volatility has a long memory property in the most general models, so that the autocorrelations of volatility decay at a hyperbolic rate; contrasts are made with popular short memory specifications whose autocorrelations decay more rapidly at a geometric rate. Options are valued for ARCH volatility models by calculating the discounted expectations of option payoffs under an appropriate risk-neutral measure. Monte Carlo methods provide the expectations. The speed and accuracy of the calculations are enhanced by two variance reduction methods, which use antithetic and control variables. The economic consequences of a long memory assumption about volatility are documented by comparing implied volatilities for option prices obtained from short and long memory volatility processes. Keywords: ARCH models, Implied volatility, Index options, Likelihood maximization, Long memory, Monte Carlo, Option prices, Risk-neutral pricing, Smile shapes, Term structure, Variance reduction methods
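The variance-reduction idea can be conveyed with a very small Python sketch of risk-neutral Monte Carlo valuation using antithetic variates; for brevity it assumes constant volatility, whereas the chapter's models would replace sigma with an ARCH or long-memory volatility path.

```python
# Risk-neutral Monte Carlo call valuation with antithetic variates (constant-vol sketch).
import numpy as np

def mc_call_price(S0, K, r, sigma, T, n_paths=100_000, seed=0):
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths // 2)
    z = np.concatenate([z, -z])                          # antithetic pairs
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
    payoff = np.maximum(ST - K, 0.0)
    return np.exp(-r * T) * payoff.mean()

print(mc_call_price(S0=100, K=100, r=0.03, sigma=0.2, T=1.0))
```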
Chapter 33: Seasonal Aspects of Australian Electricity Market This chapter develops econometric models for seasonal patterns in both price returns and proportional changes in demand for Australian electricity. Australian electricity spot prices differ considerably from equity spot prices in that they contain an extremely rapid mean reversion process. The electricity spot price can increase to the market cap price of AUD$12,500 per megawatt hour (MWh) and revert back to a mean level (AUD$30) within a half-hour interval. This has implications for derivative pricing and risk management. We also model extreme spikes in the data. Our study identifies both seasonality effects and dramatic price reversals in the Australian electricity market. The pricing seasonality effects include time-of-day, day-of-week, monthly, and yearly effects. There is also evidence of seasonality in demand for electricity. Keywords: Electricity, Spot price, Seasonality, Outlier, Demand, Econometric modeling
Chapter 34: Pricing Commercial Timberland Returns in the United States This chapter uses both parametric and nonparametric approaches to evaluate private- and public-equity timberland investments in the United States. Private-equity timberland returns are proxied by the NCREIF Timberland Index, whereas public-equity timberland returns are proxied by the value-weighted returns on a dynamic portfolio of the US publicly traded forestry firms that had or have been managing timberlands. Static estimations of the capital asset-pricing model and Fama-French three-factor model are obtained by ordinary least squares, whereas dynamic estimations are obtained by state-space specifications with the Kalman filter. In estimating the stochastic discount factors, linear programming is used. Keywords: Alternative asset class, Asset pricing, Evaluation, Fama-French three-factor model, Nonparametric analysis, State-space model, Stochastic discount factor, Timberland investments, Time series, Time-varying parameter
Chapter 35: Optimal Orthogonal Portfolios with Conditioning Information This chapter derives and characterizes optimal orthogonal portfolios in the presence of conditioning information in the form of a set of lagged instruments. In this setting, studied by Hansen and Richard (1987), the conditioning information is used to optimize with respect to the unconditional moments. We present an empirical illustration of the properties of the optimal orthogonal portfolios. The methodology in this chapter includes regression and maximum likelihood parameter estimation, as well as method of moments estimation. We form maximum likelihood estimates of nonlinear functions as the functions evaluated at the maximum likelihood parameter estimates. Keywords: Asset-pricing tests, Conditioning information, Minimum variance efficiency, Optimal portfolios, Predicting returns, Portfolio management, Stochastic discount factors, Generalized method of moments, Maximum likelihood, Parametric bootstrap, Sharpe ratios
Chapter 36: Multi-factor, Multi-indicator Approach to Asset Pricing: Method and Empirical Evidence This chapter uses a multifactor, multi-indicator approach to test the capital asset-pricing model (CAPM) and the arbitrage pricing theory (APT). This approach is able to solve the measurement problem of the market portfolio in testing the CAPM, and it is also able to directly test the APT by linking the common factors to the macroeconomic indicators. We propose a MIMIC approach to test the CAPM and APT. The beta estimated from the MIMIC model by allowing measurement error on the market
portfolio does not significantly improve the OLS beta, while the MLE estimator does a better job than the OLS and GLS estimators in the cross-sectional regressions because the MLE estimator takes care of the measurement error in beta. Therefore, the measurement error problem on beta is more serious than that on the market portfolio. Keywords: Capital asset-pricing model, CAPM, Arbitrage pricing theory, Multifactor multi-indicator approach, MIMIC, Measurement error, LISREL approach, Ordinary least square, OLS, General least square, GLS, Maximum likelihood estimation, MLE
Chapter 37: Binomial OPM, Black-Scholes OPM and Their Relationship: Decision Tree and Microsoft Excel Approach This chapter will first demonstrate how Microsoft Excel can be used to create the Decision Trees for the Binomial Option Pricing Model. At the same time, this chapter will discuss the Binomial Option Pricing Model in a less mathematical fashion. All the mathematical calculations will be taken care of by the Microsoft Excel program that is presented in this chapter. Finally, this chapter also uses the Decision Tree approach to demonstrate the relationship between the Binomial Option Pricing Model and the Black-Scholes Option Pricing Model. Keywords: Binomial option pricing model, Decision trees, Black-Scholes option pricing model, Call option, Put option, Microsoft Excel, Visual Basic for applications, VBA, Put-call parity, Sigma, Volatility, Recursive programming
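The chapter builds its decision trees in Microsoft Excel and VBA; purely as a cross-check of the same recursion, the sketch below implements the Cox-Ross-Rubinstein binomial backward induction in Python alongside the Black-Scholes formula, with illustrative parameters, showing how the binomial price converges to the Black-Scholes price as the number of steps grows.

```python
# CRR binomial recursion for a European call versus the Black-Scholes formula (sketch).
import numpy as np
from math import exp, sqrt, log
from scipy.stats import norm

def binomial_call(S0, K, r, sigma, T, n=500):
    dt = T / n
    u, d = exp(sigma * sqrt(dt)), exp(-sigma * sqrt(dt))
    p = (exp(r * dt) - d) / (u - d)                        # risk-neutral up probability
    prices = S0 * u ** np.arange(n, -1, -1) * d ** np.arange(0, n + 1)
    values = np.maximum(prices - K, 0.0)                   # terminal payoffs
    for _ in range(n):                                     # backward induction
        values = exp(-r * dt) * (p * values[:-1] + (1 - p) * values[1:])
    return values[0]

def black_scholes_call(S0, K, r, sigma, T):
    d1 = (log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S0 * norm.cdf(d1) - K * exp(-r * T) * norm.cdf(d2)

print(binomial_call(100, 100, 0.05, 0.2, 1.0),
      black_scholes_call(100, 100, 0.05, 0.2, 1.0))
```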
Chapter 38: Dividend Payments and Share Repurchases of U.S. Firms: An Econometric Approach This chapter uses the econometric methodology to deal with the dynamic interrelationships between dividend payments and share repurchases and investigate endogeneity of certain explanatory variables. Identification of the model parameters is achieved in such models by exploiting the cross-equation restrictions on the coefficients in different time periods. Moreover, the estimation entails using nonlinear optimization methods to compute the maximum likelihood estimates of the dynamic random-effects models and to test statistical hypotheses using likelihood ratio tests. This study also highlights the importance of developing comprehensive econometric models for these interrelationships. It is common in finance research to spell out “specific hypotheses” and conduct empirical research to investigate the validity of the hypotheses. Keywords: Compustat database, Corporate policies, Dividends, Dynamic random-effects models, Econometric methodology, Endogeneity, Maximum likelihood, Intangible assets, Model formulation, Nonlinear optimization, Panel data, Share repurchases
Chapter 39: Term Structure Modeling and Forecasting Using the Nelson-Siegel Model In this chapter, we illustrate some recent developments in yield curve modeling by introducing a latent factor model called the dynamic Nelson-Siegel model. This model not only provides good in-sample fit, but also produces superior out-of-sample performance. Beyond the Treasury yield curve, the model can also be useful for other assets such as corporate bonds and volatility. Moreover, the model also suggests generalized duration components corresponding to the level, slope, and curvature risk factors. The dynamic Nelson-Siegel model can be estimated via a one-step procedure, like the Kalman filter, which can also easily accommodate other variables of interest. Alternatively, we could estimate the model through a two-step process by fixing one parameter and estimating with ordinary least squares. The model is flexible and capable of replicating a variety of yield curve shapes: upward sloping, downward sloping, humped, and inverted humped. Forecasting the yield curve is achieved through forecasting the factors, and we can impose either a univariate autoregressive structure or a vector autoregressive structure on the factors. Keywords: Term structure, Yield curve, Factor model, Nelson-Siegel curve, State-space model
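The two-step estimation mentioned above can be sketched as follows: fix the decay parameter and recover the level, slope, and curvature factors by ordinary least squares on the Nelson-Siegel loadings; the lambda value and the toy yield cross-section below are illustrative assumptions.

```python
# Two-step Nelson-Siegel: fixed decay parameter, factors recovered by OLS (sketch).
import numpy as np

def ns_loadings(maturities, lam=0.0609):
    """Nelson-Siegel level/slope/curvature loadings for maturities in months."""
    m = np.asarray(maturities, dtype=float)
    slope = (1 - np.exp(-lam * m)) / (lam * m)
    curvature = slope - np.exp(-lam * m)
    return np.column_stack([np.ones_like(m), slope, curvature])

maturities = [3, 6, 12, 24, 36, 60, 84, 120]               # months
X = ns_loadings(maturities)

# Toy cross-section of yields; in practice this is one date's observed yield curve.
yields = np.array([1.0, 1.1, 1.3, 1.7, 2.0, 2.4, 2.6, 2.8])
beta, *_ = np.linalg.lstsq(X, yields, rcond=None)
print('level, slope, curvature factors:', np.round(beta, 3))
print('fitted curve:', np.round(X @ beta, 2))
```

Repeating this regression date by date produces factor time series on which a univariate or vector autoregression can be fitted for forecasting.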
Chapter 40: The Intertemporal Relation Between Expected Return and Risk On Currency The literature has so far focused on the risk-return trade-off in equity markets and ignored alternative risky assets. This chapter examines the presence and significance of an intertemporal relation between expected return and risk in the foreign exchange market. This chapter tests the existence and significance of a daily risk-return trade-off in the FX market based on the GARCH, realized, and range volatility estimators. Our empirical analysis relies on the maximum likelihood estimation of the GARCH-in-mean models, as described in Appendix A. We also use seemingly unrelated regressions (SUR) and panel data estimation to investigate the significance of a time-series relation between expected return and risk on currency. Keywords: GARCH, GARCH-in-mean, Seemingly unrelated regressions (SUR), Panel data estimation, Foreign exchange market, ICAPM, High-frequency data, Time-varying risk aversion, Daily realized volatility
Chapter 41: Quantile Regression and Value-at-Risk This chapter studies quantile regression (QR) estimation of Value-at-Risk (VaR). VaRs estimated by the QR method display some nice properties. In this chapter, different QR models in estimating VaRs are introduced. In particular, VaR estimation based on quantile regression of the QAR models, Copula models,
ARCH models, GARCH models, and the CaViaR models is systematically introduced. Comparing the proposed QR method with traditional methods based on distributional assumptions, the QR method has the important property that it is robust to non-Gaussian distributions. Quantile estimation is only influenced by the local behavior of the conditional distribution of the response near the specified quantile. As a result, the estimates are not sensitive to outlier observations. Such a property is especially attractive in financial applications since many financial data like, say, portfolio returns (or log returns), are usually not normally distributed. To highlight the importance of the QR method in estimating VaR, we apply the QR techniques to estimate VaRs in International Equity Markets. Numerical evidence indicates that QR is a robust estimation method for VaR. Keywords: ARCH, Copula, GARCH, Non-normality, QAR, Quantile regression, Risk management, Robust estimation, Time series, Value-at-risk
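As a simplified illustration of quantile-regression-based VaR, the sketch below regresses returns on the lagged absolute return at the 5% quantile (a static, non-recursive relative of the CaViaR specification) using statsmodels; the fat-tailed return series is simulated.

```python
# One-step-ahead 5% VaR from a simple quantile regression (statsmodels assumed).
import numpy as np
import statsmodels.api as sm
from statsmodels.regression.quantile_regression import QuantReg

rng = np.random.default_rng(7)
r = rng.standard_t(df=5, size=1000) * 0.01                 # toy fat-tailed daily returns

y = r[1:]                                                  # return at t
X = sm.add_constant(np.abs(r[:-1]))                        # lagged absolute return
res = QuantReg(y, X).fit(q=0.05)                           # 5% conditional quantile

var_next = res.params @ np.array([1.0, abs(r[-1])])        # forecast for t+1 (as a return)
print('5% one-day VaR forecast (return units):', var_next)
```

Because the quantile estimate depends only on the local behavior of the conditional distribution near the 5% quantile, it is not driven by extreme outliers in the way a variance-based estimator can be.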
Chapter 42: Earnings Quality and Board Structure: Evidence from South East Asia Using a sample of listed firms in Southeast Asian countries, this chapter examines the association between board structure and corporate ownership structure in affecting earnings quality. The econometric method employed is regressions of panel data. In a panel data setting, I address both cross-sectional and time-series dependence. Following Gow et al. (2010), I employ the two-way clustering method where the standard errors are clustered by both firm and year in my regressions of panel data. Keywords: Earnings quality, Board structure, Corporate ownership structure, Panel data regressions, Cross-sectional and time-series dependence, Two-way clustering method of standard errors
Chapter 43: Rationality and Heterogeneity of Survey Forecasts of the Yen-Dollar Exchange Rate: A Reexamination This chapter examines the rationality and diversity of industry-level forecasts of the yen-dollar exchange rate collected by the Japan Center for International Finance. We compare three specifications for testing rationality: the “conventional” bivariate regression, the univariate regression of a forecast error on a constant and other information set variables, and an error correction model (ECM). We extend the analysis of industry-level forecasts to a SUR-type structure using an innovative GMM technique (Bonham and Cohen 2001) that allows for forecaster cross-correlation due to the existence of common shocks and/or herd effects. Our GMM tests of micro-homogeneity uniformly reject the hypothesis that forecasters exhibit similar rationality characteristics. Keywords: Rational expectations, Unbiasedness, Weak efficiency, Micro-homogeneity, Heterogeneity, Exchange rate, Survey forecasts, Aggregation bias, GMM, SUR
Chapter 44: Stochastic Volatility Structures and Intra-day Asset Price Dynamics This chapter uses conditional volatility estimators as special cases of a general stochastic volatility structure. The theoretical asymptotic distribution of the measurement error process for these estimators is considered for particular features observed in intraday financial asset price processes. Specifically, I consider the effects of (i) induced serial correlation in returns processes, (ii) excess kurtosis in the underlying unconditional distribution of returns, (iii) market anomalies such as market opening and closing effects, and (iv) failure to account for intraday trading patterns. These issues are considered with applications in option pricing/trading strategies and the constant/dynamic hedging frameworks in mind. The methodologies used in this chapter are ARCH, maximum likelihood method, and unweighted GARCH. Keywords: ARCH, Asymptotic distribution, Autoregressive parameters, Conditional variance estimates, Constant/dynamic hedging, Excess kurtosis, Index futures, Intraday returns, Market anomalies, Maximum likelihood estimates, Misspecification, Mis-specified returns, Persistence, Serial correlation, Stochastic volatility, Stock/futures, Unweighted GARCH, Volatility co-persistence
Chapter 45: Optimal Asset Allocation Under VaR Criterion: Taiwan Stock Market This chapter examines the riskiness of the Taiwan stock market by determining the VaR from the expected return distribution generated by historical simulation. Value-at-risk (VaR) measures the worst expected loss over a given time horizon under normal market conditions at a specific level of confidence. VaR is determined by the left tail of the cumulative probability distribution of expected returns. Our result indicates the cumulative probability distribution has a fatter left tail, compared with the left tail of a normal distribution. This implies a riskier market. We also examined a two-sector asset allocation model subject to a target VaR constraint. The VaR-efficient frontier of the TAIEX traded stocks recommended, mostly, a corner portfolio. Keywords: Value-at-risk, Asset allocation, Cumulative probability distribution, Normal distribution, VaR-efficient frontier, Historical simulation, Expected return distribution, Two-sector asset allocation model, Delta, gamma, Corner portfolio, TAIEX
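A minimal sketch of VaR by historical simulation, contrasted with a normal-distribution benchmark in the spirit of the fat-left-tail comparison above, is given below using simulated daily returns.

```python
# VaR by historical simulation versus a normal-distribution benchmark (toy data).
import numpy as np

returns = np.random.default_rng(11).standard_t(df=4, size=2500) * 0.012   # toy daily returns
confidence = 0.99
var_hist = -np.quantile(returns, 1 - confidence)                  # loss exceeded on 1% of days
var_normal = -(returns.mean() + returns.std(ddof=1) * -2.3263)    # z at the 1% level

print(f'Historical-simulation VaR: {var_hist:.4f}   Normal VaR: {var_normal:.4f}')
```

With fat-tailed returns the historical-simulation VaR typically exceeds the normal benchmark, mirroring the riskier-than-normal left tail reported for the Taiwan market.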
Chapter 46: Alternative Methods for Estimating Firm’s Growth Rate The most common valuation model is the dividend growth model. The growth rate is found by taking the product of the retention rate and the return on equity. Less well understood are the basic assumptions of this model. In this chapter, we demonstrate that the model makes strong assumptions regarding the financing mix of the firm. In addition,
we discuss several methods suggested in the literature on estimating growth rates and analyze whether these approaches are consistent with the use of a constant discount rate to evaluate the firm’s assets and equity. The literature has also suggested estimating the growth rate by using the average percentage change method, the compound-sum method, and/or regression methods. We demonstrate that the average percentage change is very sensitive to extreme observations. Moreover, on average, the regression method yields similar but somewhat smaller estimates of the growth rate compared to the compound-sum method. We also discuss the inferred method suggested by Gordon and Gordon (1997) to estimate the growth rate. Advantages, disadvantages, and the interrelationship among these estimation methods are also discussed in detail. Keywords: Compound-sum method, Discount cash flow model, Growth rate, Internal growth rate, Sustainable growth rate
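The estimation methods compared above can be illustrated with a short sketch using a hypothetical dividend series; it computes the compound-sum, log-linear regression, and average-percentage-change growth rates side by side.

```python
# Growth-rate estimation: compound-sum, regression, and average percentage change.
import numpy as np

dividends = np.array([1.00, 1.06, 1.15, 1.18, 1.30, 1.35])   # hypothetical annual dividends
n = len(dividends) - 1

# Compound-sum method: solve D_n = D_0 * (1 + g)^n.
g_compound = (dividends[-1] / dividends[0]) ** (1 / n) - 1

# Regression method: the slope of ln(D_t) on t gives ln(1 + g).
t = np.arange(len(dividends))
slope = np.polyfit(t, np.log(dividends), 1)[0]
g_regression = np.exp(slope) - 1

# Average percentage change method (sensitive to extreme observations).
g_average = np.mean(dividends[1:] / dividends[:-1] - 1)

print(f'compound-sum: {g_compound:.4f}  regression: {g_regression:.4f}  average %: {g_average:.4f}')
```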
Chapter 47: Econometric Measures of Liquidity A security is liquid to the extent that an investor can trade significant quantities of the security quickly, at or near the current market price, and bearing low transaction costs. As such, liquidity is a multidimensional concept. In this chapter, I review several widely used econometrics- or statistics-based measures that researchers have developed to capture one or more dimensions of a security’s liquidity (e.g., the limited dependent variable model (Lesmond et al. 1999) and the autocovariance of price changes (Roll 1984)). These alternative proxies have been designed to be estimated using either low-frequency or high-frequency data, so I discuss four liquidity proxies that are estimated using low-frequency data and two proxies that require high-frequency data. Low-frequency measures permit the study of liquidity over relatively long time horizons; however, they do not reflect actual trading processes. To overcome this limitation, high-frequency liquidity proxies are often used as benchmarks to determine the best low-frequency proxy. In this chapter, I find that estimates from the effective tick measure perform best among the four low-frequency measures tested. Keywords: Liquidity, Transaction costs, Bid-ask spread, Price impact, Percent effective spread, Market model, Limited dependent variable model, Tobin’s model, Log-likelihood function, Autocovariance, Correlation analysis
Chapter 48: A Quasi-Maximum Likelihood Estimation Strategy for Value-at-Risk Forecasting: Application to Equity Index Futures Markets The chapter uses a GARCH model and a quasi-maximum likelihood estimation strategy to investigate equity index futures markets. We present the first empirical evidence for the validity of the ARMA-GARCH model with tempered stable innovations to estimate 1-day-ahead value-at-risk in futures markets for the S&P 500, DAX, and Nikkei. We also provide empirical support that GARCH
models based on the normal innovations appear not to be as well suited as infinitely divisible models for predicting financial crashes. In our empirical analysis, we forecast 1 % value-at-risk in both spot and futures markets using normal and tempered stable GARCH models following a quasi-maximum likelihood estimation strategy. In order to determine the accuracy of forecasting for each specific model, backtesting using Kupiec’s proportion of failures test is applied. Keywords: Infinitely divisible models, Tempered stable distribution, GARCH models, Value-at-risk, Kupiec’s proportion of failures test, Quasi-maximum likelihood estimation strategy
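The backtesting step can be sketched compactly: Kupiec's proportion-of-failures likelihood-ratio test compares the observed VaR violation rate with the nominal rate; the violation count used below is hypothetical.

```python
# Kupiec's proportion-of-failures (POF) backtest for VaR forecasts (sketch).
import numpy as np
from scipy.stats import chi2

def kupiec_pof(violations, n_obs, p=0.01):
    """LR statistic and p-value for H0: true violation rate equals p."""
    x = violations
    pi_hat = x / n_obs
    log_lik_null = (n_obs - x) * np.log(1 - p) + x * np.log(p)
    # Guard the log term when there are zero violations.
    log_lik_alt = (n_obs - x) * np.log(1 - pi_hat) + (x * np.log(pi_hat) if x > 0 else 0.0)
    lr = -2 * (log_lik_null - log_lik_alt)
    return lr, 1 - chi2.cdf(lr, df=1)

# Example: 250 out-of-sample days with 6 observed 1% VaR violations.
print(kupiec_pof(violations=6, n_obs=250, p=0.01))
```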
Chapter 49: Computer Technology for Financial Service This chapter examines the core computing competence for financial services. Securities trading is one of the few business activities where a few seconds of processing delay can cost a company a fortune. Grid and Cloud computing are briefly described, and we present how the underlying algorithms for financial analysis can take advantage of a Grid environment. Monte Carlo simulation, one of the most widely practiced algorithms, is used in our case study for option pricing and risk management. The various distributed computational platforms are carefully chosen to demonstrate the performance issues for financial services. Keywords: Financial service, Grid and cloud computing, Monte Carlo simulation, Option pricing, Risk management, Cyberinfrastructure, Random number generation, High end computing, Financial simulation, Information technology
Chapter 50: Long-Run Stock Return and the Statistical Inference This chapter introduces the long-run stock return methodologies and their statistical inference. The long-run stock return is usually computed by using a holding strategy of more than 1 year and up to 5 years. Two categories of long-run return methods are illustrated in this chapter: the event-time approach and the calendar-time approach. The event-time approach includes cumulative abnormal return, buy-and-hold abnormal return, and abnormal returns around earnings announcements. In the former two methods, it is recommended to apply the empirical distribution (from the bootstrapping method) to examine the statistical inference, whereas the last one uses the classical t-test. In addition, the benchmark selections in the long-run return literature are introduced. Moreover, the calendar-time approach contains mean monthly abnormal return, factor models, and Ibbotson’s RATS, which could be tested by time-series volatility. Keywords: Long-run stock return, Buy-and-hold return, Factor model, Event-time, Calendar-time, Cumulative abnormal return, Ibbotson’s RATS, Conditional market model, Bootstrap, Zero-investment portfolio
Chapter 51: Value-at-Risk Estimation via a Semi-Parametric Approach: Evidence from the Stock Markets This study utilizes the parametric approach (GARCH-based models) and the semiparametric approach of Hull and White (1998) (HW-based models) to estimate the Value-at-Risk (VaR) and to evaluate the estimation accuracy for eight stock indices in European and Asian stock markets. The measures of accuracy include the unconditional coverage test of Kupiec (1995) as well as two loss functions, the quadratic loss function and unexpected loss. As to the parametric approach, the parameters of the generalized autoregressive conditional heteroskedasticity (GARCH) model are estimated by the method of maximum likelihood, and the quantiles of an asymmetric distribution like the skewed generalized student’s t (SGT) can be solved by the composite trapezoid rule. Sequentially, the VaR is evaluated by the framework proposed by Jorion (2000). Turning to the semi-parametric approach of Hull and White (1998), before performing the traditional historical simulation, the raw return series is scaled by a volatility ratio where the volatility is estimated by the same procedure as in the parametric approach. Keywords: Value-at-risk, Semi-parametric approach, Parametric approach, Generalized autoregressive conditional heteroskedasticity, Skewed generalized student’s t, Composite trapezoid rule, Method of maximum likelihood, Unconditional coverage test, Loss function
Chapter 52: Modeling Multiple Asset Returns by a Time-Varying t Copula Model This chapter illustrates a framework to model joint distributions of multiple asset returns using a time-varying Student’s t copula model. We model marginal distributions of individual asset returns by a variant of GARCH models and then use a Student’s t copula to connect all the margins. To build a time-varying structure for the correlation matrix of the t copula, we employ a dynamic conditional correlation (DCC) specification. We illustrate the two-stage estimation procedures for the model and apply the model to 45 major US stocks returns selected from nine sectors. As it is quite challenging to find a copula function with a very flexible parameter structure to account for different dependence features among all pairs of random variables, our time-varying t copula model tends to be a good working tool to model multiple asset returns for risk management and asset allocation purposes. Our model can capture time-varying conditional correlation and some degree of tail dependence, while it also has the limitations of featuring symmetric dependence and an inability to generate high tail dependence when being used to model a large number of asset returns. Keywords: Student’s t copula, GARCH models, Asset returns, U.S. stocks, Maximum likelihood, Two-stage estimation, Tail dependence, Exceedance correlation, Dynamic conditional correlation, Asymmetric dependence
Chapter 53: Internet Bubble Examination with Mean-Variance Ratio This chapter illustrates the superiority of the mean-variance ratio (MVR) test over the traditional Sharpe ratio (SR) test by applying both tests to analyze the performance of the S&P 500 index and the NASDAQ 100 index after the bursting of the Internet bubble in the 2000s. This shows the superiority of the MVR test statistic in revealing short-term performance and, in turn, enables investors to make better decisions in their investments. The methodologies used in this chapter are the mean-variance ratio, the Sharpe ratio, hypothesis testing, and the uniformly most powerful unbiased test. Keywords: Mean-variance ratio, Sharpe ratio, Hypothesis testing, Uniformly most powerful unbiased test, Internet bubble, Fund management
Chapter 54: Quantile Regression in Risk Calibration This chapter uses the CoVaR (Conditional VaR) framework to obtain accurate information on the interdependency of risk factors. The basic technical elements of CoVaR estimation are two levels of quantile regression: one on market risk factors and the other on an individual risk factor. Tests on the functional form of the two-level quantile regression reject linearity. A flexible semiparametric modeling framework for CoVaR is proposed. A partial linear model (PLM) is analyzed. When the technique is applied to stock data covering the crisis period, the PLM outperforms during the crisis, as justified by the backtesting procedures. Moreover, using the data on global stock markets indices, the analysis on the marginal contribution of risk (MCR), defined as the local first order derivative of the quantile curve, sheds some light on the source of the global market risk. Keywords: CoVaR, Value-at-risk, Quantile regression, Locally linear quantile regression, Partial linear model, Semi-parametric model
Chapter 55: Strike Prices of Options for Overconfident Executives This chapter uses Monte Carlo simulation to investigate the impacts of managerial overconfidence on the optimal strike prices of executive incentive options. Although it has been shown that, optimally, managerial incentive options should be awarded in-the-money, in practice most firms award them at-the-money. We show that the optimal strike prices of options granted to overconfident executives are directly related to their overconfidence level, and that this bias brings the optimal strike prices closer to the institutionally prevalent at-the-money prices. The Monte Carlo simulation procedure uses a Mathematica program to find the optimal effort by managers and the optimal (for stockholders) contract parameters. An expanded discussion of the simulations, including the choice of the functional forms and the calibration of the parameters, is provided.
Keywords: Overconfidence, Managerial effort, Incentive options, Strike price, Simulations, Behavioral finance, Executive compensation schemes, Mathematica optimization, Risk aversion, Effort aversion
Chapter 56: Density and Conditional Distribution Based Specification Analysis This chapter uses density and conditional distribution based analysis to carry out consistent specification testing and model selection among multiple diffusion processes. In this chapter, we discuss advances to this literature introduced by Corradi and Swanson (2005), who compare the cumulative distribution (marginal or joint) implied by a hypothesized null model with corresponding empirical distributions of observed data. In particular, parametric specification tests in the spirit of the conditional Kolmogorov test of Andrews (1997) that rely on block bootstrap resampling methods in order to construct test critical values are discussed. The methodologies used in this chapter are continuous time simulation methods, single process specification testing, multiple process model selection, multifactor diffusion processes, the block bootstrap, and jump processes. Keywords: Multifactor diffusion process, Specification test, Out-of-sample forecasts, Conditional distribution, Model selection, Block bootstrap, Jump process
Chapter 57: Assessing the Performance of Estimators Dealing with Measurement Errors This chapter describes different procedures to deal with measurement error in linear models and assesses their performance in finite samples using Monte Carlo simulations and data on corporate investment. We consider the standard instrumental variables approach proposed by Griliches and Hausman (1986) as extended by Biorn (2000) [OLS-IV], the Arellano and Bond (1991) instrumental variable estimator, and the higher-order moment estimator proposed by Erickson and Whited (2000, 2002). Our analysis focuses on characterizing the conditions under which each of these estimators produces unbiased and efficient estimates in a standard “errors in variables” setting. In the presence of fixed effects, under heteroscedasticity, or in the absence of a very high degree of skewness in the data, the EW estimator is inefficient and returns biased estimates for mismeasured and perfectly measured regressors. In contrast to the EW estimator, IV-type estimators (OLS-IV and AB-GMM) easily handle individual effects, heteroskedastic errors, and different degrees of data skewness. The IV approach, however, requires assumptions about the autocorrelation structure of the mismeasured regressor and the measurement error. We illustrate the application of the different estimators using empirical investment models. Keywords: Investment equations, Measurement error, Monte Carlo simulations, Instrumental variables, GMM, Bias, Fixed effects, Heteroscedasticity, Skewness, High-order moments
Chapter 58: Realized Distributions of Dynamic Conditional Correlation and Volatility Thresholds in the Crude Oil, Gold, and Dollar/Pound Currency Markets This chapter proposes a modeling framework for the study of co-movements in price changes among crude oil, gold, and dollar/pound currencies that are conditional on volatility regimes. Methodologically, we extend the Dynamic Conditional Correlation (DCC) multivariate GARCH model to examine the volatility and correlation dynamics depending on the variances of price returns involving a threshold structure. The results indicate that the periods of market turbulence are associated with an increase in co-movements in commodity (gold and oil) prices. The results imply that gold may act as a safe haven against major currencies when investors face market turmoil. Keywords: Dynamic conditional correlation, Volatility threshold, Realized distribution, Currency market, Gold, Oil
Chapter 59: Pre-IT Policy, Post-IT Policy, and the Real Sphere in Turkey We estimate two SVECM (structural vector error correction) models for the Turkish economy based on imposing short-run and long-run restrictions, in order to examine the behavior of the real sphere in the pre-IT policy (before inflation-targeting adoption) and post-IT policy (after inflation-targeting adoption) periods. The impulse responses reveal that an expansionary interest rate policy shock leads to a decrease in the price level, a fall in output, an appreciation of the exchange rate, and an improvement in share prices in the very short run for most of the pre-IT period. Keywords: SVECM models, Turkish economy, Short run, Long run, Restrictions, Inflation targeting, Pre-IT policy, Post-IT policy, Share prices, Exchange rate, Monetary policy shock, Output, Price level, Real sphere
Chapter 60: Determination of Capital Structure: A LISREL Model Approach In this chapter, we employ structural equation modeling (SEM) in the LISREL system to solve the measurement error problems in the analysis of the determinants of capital structure and find the important factors consistent with capital structure theory by using data from 2002 to 2010. The purpose of this chapter is to investigate whether the influences of accounting factors on capital structure change and whether the important factors are consistent with the previous literature. The methodologies discussed in this chapter are structural equation modeling (SEM), the multiple indicators and multiple causes (MIMIC) model, the LISREL system, simultaneous equations, and SEM with the confirmatory factor analysis (CFA) approach.
Keywords: Capital structure, Structural equation modeling (SEM), Multiple indicators and multiple causes (MIMIC) model, LISREL system, Simultaneous equations, Latent variable, Determinants of capital structure, Error in variable problem
Chapter 61: Evaluating the Effectiveness of Futures Hedging This chapter examines the Ederington hedging effectiveness (EHE) comparisons between the unconditional OLS hedge strategy and other conditional hedge strategies. It is shown that the OLS hedge strategy outperforms most of the optimal conditional hedge strategies when EHE is used as the hedging effectiveness criterion. Before concluding that the OLS hedge is better than the others, however, we need to understand under what circumstances this result is derived. We explain why OLS is the best hedge strategy under the EHE criterion in most cases, and how most conditional hedge strategies are judged as inferior to the OLS hedge strategy by an EHE comparison. Keywords: Futures hedging, Portfolio management, Ederington hedging effectiveness, Variance estimation, Unconditional variance, Conditional variance, OLS hedging strategy, GARCH hedging strategy, Regime-switching hedging strategy, Utility-based hedging strategy
Chapter 62: Evidence on Earning Management by Integrated Oil and Gas Companies This chapter uses the Jones (1991) model, which projects the expected level of discretionary accruals, and demonstrates a specific test methodology for the detection of earnings management in the oil and gas industry. This study utilizes several parametric and nonparametric statistical methods to test for such earnings management. By comparing actual versus projected accruals, we are able to compute the total unexpected accruals. We also correlate unexpected total accruals with several difficult-to-manipulate indicators that reflect a company’s level of activities. Keywords: Earning management, Jones (1991) model, Discretionary accruals, Income from operations, Nonrecurring items, Special items, Research and development expense, Write-downs, Political cost, Impression management, Oil and gas industry
Chapter 63: A Comparative Study of Two Models SV with MCMC Algorithm This chapter examines two asymmetric stochastic volatility models used to describe the volatility dependencies found in most financial returns. The first is the autoregressive stochastic volatility model with Student’s t-distribution (ARSV-t), and the second is the basic Svol of JPR (1994). In order to estimate these models, our analysis is based on the Markov Chain Monte Carlo (MCMC) method. Therefore,
the techniques used are the Metropolis-Hastings algorithm (Hastings 1970) and the Gibbs sampler (Casella and George 1992; Gelfand and Smith 1990; Gilks et al. 1993). The empirical results concerning the Standard and Poor’s 500 composite index (S&P 500), CAC 40, Nasdaq, Nikkei, and Dow Jones stock price indexes reveal that the ARSV-t model provides a better performance than the Svol model in terms of the mean squared error (MSE) and the maximum likelihood function. Keywords: Autoregression, Asymmetric stochastic volatility, MCMC, Metropolis-Hastings, Gibbs sampler, Volatility dependencies, Student’s t-distribution, SVOL, MSE, Financial returns, Stock price indexes
Chapter 64: Internal Control Material Weakness, Analysts’ Accuracy and Bias, and Brokerage Reputation This chapter uses the Ordinary Least-Squares (OLS) methodology in the main tests to examine the impact of internal control material weaknesses (ICMW hereafter) on sell-side analysts. We match our ICMW firms with non-ICMW firms based on industry, sales, and assets. We re-estimate the models using the rank regression technique to assess the sensitivity of the results to the underlying functional form assumption made by OLS. We use Cook’s distance to test for outliers. Keywords: Internal control material weakness, Analyst forecast accuracy, Analyst forecast bias, Brokerage reputation, Sarbanes-Oxley act, Ordinary least squares regressions, Rank regressions, Fixed effects, Matching procedure, Cook’s distance
Chapter 65: What Increases Banks’ Vulnerability to Financial Crisis: Short-Term Financing or Illiquid Assets? This chapter applies Logit and OLS econometric techniques to analyze the Federal Reserve Y-9C report data. We show that short-term financing is a response to the adverse economic shocks rather than a cause of the recent crisis. The likelihood of financial crisis actually stems from the illiquidity and low creditworthiness of the investment. Our results are robust to endogeneity concerns when we use a difference-in-differences (DiD) approach with the Lehman bankruptcy in 2008 proxying for an exogenous shock. Keywords: Financial crisis, Short-term financing, Debt maturity, Liquidity risk, Deterioration of bank asset quality
Chapter 66: Accurate Formulae for Evaluating Barrier Options with Dividends Payout and the Application in Credit Risk Valuation This chapter approximates the discrete dividend payout by a stochastic continuous dividend yield, so the post-dividend stock price process can be approximated by another log-normally diffusive stock process with a stochastic continuous payout ratio up to the ex-dividend date. Accurate approximate analytical pricing
formulae for barrier options are derived by repeatedly applying the reflection principle. Besides, our formulae can be applied to extend the applicability of the first passage model – a branch of structural credit risk models. The fall in the stock price due to the dividend payout in the option pricing problem is analogous to selling the firm’s assets to finance the loan repayment or dividend payout in the first passage model. Thus, our formulae can evaluate vulnerable bonds or equity values given that the firm’s future loan/dividend payments are known. Keywords: Barrier option, Option pricing, Stock option, Dividend, Reflection principle, Lognormal, Credit risk
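To convey the reflection-principle idea in its simplest setting, the sketch below prices a down-and-out call with constant volatility, no dividends, and a barrier below both the spot and the strike; the chapter's formulae additionally handle the stochastic dividend-yield approximation, which is omitted here.

```python
# Down-and-out call via the reflection principle (constant volatility, no dividends).
from math import log, sqrt, exp
from scipy.stats import norm

def bs_call(S, K, r, sigma, T):
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm.cdf(d1) - K * exp(-r * T) * norm.cdf(d2)

def down_and_out_call(S0, K, H, r, sigma, T):
    """Subtract the value attributed to the 'reflected' paths that breach the barrier."""
    assert H <= min(S0, K), "formula assumes the barrier lies below spot and strike"
    reflected = (H / S0) ** (2 * r / sigma**2 - 1) * bs_call(H**2 / S0, K, r, sigma, T)
    return bs_call(S0, K, r, sigma, T) - reflected

print(down_and_out_call(S0=100, K=100, H=90, r=0.03, sigma=0.25, T=1.0))
```

The same knock-out valuation logic carries over to the first passage credit model, where the barrier plays the role of the default boundary.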
Chapter 67: Pension Funds: Financial Econometrics on the Herding Phenomenon in Spain and the United Kingdom This chapter uses the estimated cross-sectional standard deviations of betas to analyze whether managers’ behavior enhances the existence of herding phenomena and the impact of Spanish and UK pension fund investment on market efficiency. We also estimate the betas with an econometric technique less applied in the financial literature: state-space models and the Kalman filter. Additionally, in order to obtain a robust estimation, we apply the Huber estimator. Finally, we apply several models and study the existence of herding toward the market, size, book-to-market, and momentum factors. Keywords: Herding, Pension funds, State-space models, Kalman filter, Huber estimation, Imitation, Behavioral finance, Estimated cross-sectional standard deviations of betas, Herding toward the market, Herding toward size factor, Herding toward book-to-market factor, Herding toward momentum factor
Chapter 68: Estimating the Correlation of Asset Returns: A Quantile Dependence Perspective This chapter uses the Copula Quantile-on-Quantile Regression (C-QQR) approach to construct the correlation between the conditional quantiles of stock returns. This new approach of estimating correlation utilizes the idea that the condition of a stock market is related to its return performance, particularly to the conditional quantile of its return, as the lower return quantiles reflect a weak market while the upper quantiles reflect a bullish one. The C-QQR approach uses the copula to generate a regression function for modeling the dependence between the conditional quantiles of the stock returns under consideration. It is estimated using a two-step quantile regression procedure, where in principle, the first step is implemented to model the conditional quantile of one stock return, which is then related in the second step to the conditional quantile of another return. Keywords: Stock markets, Copula, Correlation, Quantile regression, Quantile dependence, Business cycle, Dynamics, Risk management, Investment, Tail risk, Extreme events, Market uncertainties
Chapter 69: Multi-criteria Decision Making for Evaluating Mutual Funds Investment Strategies This chapter uses criteria measurements to evaluate investment styles and treats the evaluation as a multiple-criteria decision-making (MCDM) problem. To achieve this objective, we first employ factor analysis to extract independent common factors from those criteria. Second, we construct an evaluation frame using a hierarchical system composed of the extracted common factors and the evaluation criteria, and then derive the relative weights of the considered criteria. Third, the synthetic utility value corresponding to each investment style is aggregated from the weights and performance values. Finally, we compare the results with empirical data and find that the MCDM model predicts the rate of return. Keywords: Investment strategies, Multiple Criteria Decision Making (MCDM), Hierarchical system, Investment style, Factor analysis, Synthetic utility value, Performance values
Chapter 70: Econometric Analysis of Currency Carry Trade This chapter investigates carry trade strategy in the currency markets whereby investors fund positions in high interest rate currencies by selling low interest rate currencies to earn the interest rate differential. In this chapter, we first provide an overview of the risk and return profile of currency carry trade; second, we introduce two popular models, the regime-switch model and the logistic smooth transition regression model, to analyze carry trade returns because the carry trade returns are highly regime dependent. Finally, an empirical example is illustrated. Keywords: Carry trade, Uncovered interest parity, Markov chain Monte Carlo, Regime-switch model, Logistic smooth transition regression model
Chapter 71: Analytical Bounds for Treasury Bond Futures Prices This study employs a maximum likelihood estimation technique presented by Chen and Scott (1993) to estimate the parameters for two-factor Cox-Ingersoll-Ross models of the term structure. Following the estimation, the factor values are solved for by matching the short rate with the cheapest-to-deliver bond price. Then, upper bounds and lower bounds for Treasury bond futures prices can be calculated. This study first shows that the popular preference-free, closed-form cost of carry model is an upper bound for the Treasury bond futures price. It then derives analytical lower bounds for the futures price under one- and two-factor Cox-Ingersoll-Ross models of the term structure. Keywords: Treasury bond futures, Delivery options, Cox-Ingersoll-Ross models, Bounds, Maximum likelihood estimation, Term structure, Cheapest-to-deliver bond, Timing options, Quality options, Chicago Board of Trade
Chapter 72: Rating Dynamics of Fallen Angels and Their Speculative Grade-Rated Peers: Static Versus Dynamic Approach This study adopts the survival analysis framework (Allison 1984) to examine issuer-heterogeneity and time-heterogeneity in the rating migrations of fallen angels (FAs) and their speculative grade-rated peers (FA peers). Cox’s hazard model is considered the pre-eminent method to estimate the probability that an issuer survives in its current rating grade at any point in time t over the time horizon T. In this study, estimation is based on two Cox’s hazard models, including a proportional hazard model (Cox 1972) and a dynamic hazard model. The first model employs a static estimation approach and timeindependent covariates, whereas the second uses a dynamic estimation approach and time-dependent covariates. To allow for any dependence among rating states of the same issuer, the marginal event-specific method (Wei et al. 1989) was used to obtain robust variance estimates. For validation purpose, the Brier score (Brier 1950) and its covariance decomposition (Yates 1982) were applied to assess the forecast performance of estimated models in forming time-varying survival probability estimates for issuers out-of-sample. Keywords: Survival analysis, Hazard model, Time-varying covariate, Recurrent event, Brier score, Covariance decomposition, Rating migration, Fallen angel, Markov property, Issuer-heterogeneity, Time-heterogeneity
Chapter 73: Creation and Control of Bubbles: Managers Compensation Schemes, Risk Aversion, and Wealth and Short Sale Constraints This chapter takes an alternative approach of inquiry – that of using laboratory experiments – to study the creation and control of speculative bubbles. The following three factors are chosen for analysis: the compensation scheme of portfolio managers, wealth and supply constraints, and the relative risk aversion of traders. Under a short investment horizon induced by a tournament compensation scheme, speculative bubbles are observed in markets of speculative traders and in mixed markets of conservative and speculative traders. The primary method of analysis is to use live subjects in a laboratory setting to generate original trading data, which are compared to their fundamental values. Standard statistical techniques are used to supplement analysis in explaining the divergence of asset prices from their fundamental values. Keywords: Speculative bubbles, Laboratory experimental asset markets, Fundamental asset values, Tournament, Market efficiency, Behavioral finance, Ordinary least squares regression, Correlation
Chapter 74: Range Volatility: A Review of Models and Empirical Studies In this chapter, we survey the significant developments in range-based volatility models, from the simple random walk model up to the conditional autoregressive range (CARR) model. For the extension to range-based multivariate volatilities, we cover several recently developed approaches, such as the dynamic conditional correlation (DCC) model, the double smooth transition conditional correlation (DSTCC) GARCH model, and the copula method. Finally, we introduce different approaches to building a bias-adjusted realized range in order to obtain a more efficient estimator. Keywords: Range, Volatility forecasting, Dynamic conditional correlation, Smooth transition, Copula, Realized volatility, Risk management
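For intuition on range-based estimators, here is the classic Parkinson high-low estimator, one of the simplest members of this family (simulated data, not from the chapter):

```python
import numpy as np

def parkinson_vol(high, low, periods_per_year=252):
    """Parkinson range estimator: daily variance = mean(ln(H/L)^2) / (4 ln 2)."""
    hl = np.log(np.asarray(high) / np.asarray(low))
    return np.sqrt((hl ** 2).mean() / (4 * np.log(2)) * periods_per_year)

rng = np.random.default_rng(3)
close = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, 250)))
high = close * np.exp(np.abs(rng.normal(0, 0.006, 250)))   # crude stand-in for intraday highs
low = close * np.exp(-np.abs(rng.normal(0, 0.006, 250)))   # and lows
print(f"annualized range-based volatility ~ {parkinson_vol(high, low):.2%}")
```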
Chapter 75: Business Models: Applications to Capital Budgeting, Equity Value, and Return Attribution This chapter describes a business model in a contingent claim modeling framework and then provides three applications of the model. First, the chapter determines the optimal capital budgeting decision in the presence of fixed operating costs and shows how the fixed operating cost should be accounted for in an NPV calculation. Second, the chapter determines the equity value, the growth option, and the retention option as building blocks of the primitive firm value; using a sample of firms, it illustrates a method for comparing the equity values of firms in the same business sector. Third, the chapter relates the change in revenue to the change in equity value, showing how the combination of operating leverage and financial leverage may affect firm valuation and risk. Keywords: Bottom-up capital budgeting, Business model, Capital budgeting, Contingent claim model, Equity value, Financial leverage, Fixed operating cost, Gross return on investment (GRI), Growth option, Market performance measure, NPV, Operating leverage, Relative value of equity, Retention option, Return attribution, Top-down capital budgeting, Wealth transfer
Chapter 76: VAR Models: Estimation, Inferences, and Applications This chapter provides a brief overview of the basic vector autoregression (VAR) approach, focusing on model estimation and statistical inference. VAR models have been used extensively in finance and economic analysis. Applications of VAR models in several finance areas are discussed, including asset pricing, international finance, and market microstructure. It is shown that such an approach provides a powerful tool for studying financial market efficiency, stock return predictability, exchange rate dynamics, the information content of stock trades, and market quality.
Keywords: VAR, Granger-causality test, Impulse response, Variance decomposition, Co-integration, Asset return predictability, Market quality, Information content of trades, Informational efficiency
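A compact sketch of the basic VAR workflow (estimation, Granger causality, impulse responses, variance decomposition) using statsmodels on simulated return and order-flow series; the data and lag choice are illustrative only:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(4)
T = 400
ret = rng.normal(0, 1, T)
flow = 0.4 * np.roll(ret, 1) + rng.normal(0, 1, T)   # order flow responds to lagged returns
data = pd.DataFrame({"ret": ret, "flow": flow})

res = VAR(data).fit(maxlags=5, ic="aic")             # lag order chosen by AIC
print(res.test_causality("flow", ["ret"], kind="f").summary())  # does ret Granger-cause flow?
irf = res.irf(10)                                    # impulse responses (irf.plot() to visualize)
fevd = res.fevd(10)                                  # variance decomposition (fevd.summary() to print)
```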
Chapter 77: Model Selection for High-Dimensional Problems This chapter introduces penalized least squares, which seek to keep important predictors in a model, while penalizing coefficients associated with irrelevant predictors. As such, under certain conditions, penalized least squares can lead to a sparse solution for linear models and achieve asymptotic consistency in separating relevant variables from irrelevant ones. We then review independence screening, a recently developed method for analyzing ultrahigh-dimensional data where the number of variables or parameters can be exponentially larger than the sample size. Independence screening selects relevant variables based on certain measures of marginal correlations between candidate variables and the response. Finally, we discuss and advocate multistage procedures that combine independence screening and variable selection and that may be especially suitable for analyzing high-frequency financial data. Keywords: Model selection, Variable selection, Dimension reduction, Independence screening, High-dimensional data, Ultrahigh-dimensional data, Generalized correlations, Penalized least squares, Shrinkage, Statistical learning, SCAD penalty, Oracle property
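A toy version of the two-stage idea described above, combining marginal-correlation screening with an L1-penalized (lasso) fit; it uses the lasso rather than the SCAD penalty, and the screening size n/log(n) is a common convention rather than the chapter's choice:

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(5)
n, p = 200, 1000                                     # far more candidate predictors than observations
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:5] = [3, -2, 1.5, 1, -1]                       # only five relevant variables
y = X @ beta + rng.normal(size=n)

# Stage 1: independence screening by absolute marginal correlation with the response
Xs = (X - X.mean(0)) / X.std(0)
corr = np.abs(Xs.T @ ((y - y.mean()) / y.std()) / n)
keep = np.argsort(corr)[-int(n / np.log(n)):]

# Stage 2: penalized least squares (lasso, penalty chosen by cross-validation) on the screened set
fit = LassoCV(cv=5).fit(X[:, keep], y)
print("selected columns:", sorted(keep[fit.coef_ != 0]))
```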
Chapter 78: Hedonic Regression Models This chapter examines three different hedonic specifications: the linear, semi-log, and Box-Cox transformed hedonic models, and applies them to real estate data. It also discusses recent innovations related to hedonic models and how these models are being used in contemporary studies. The chapter provides a basic overview of the nature and variety of hedonic empirical pricing models employed in the economics literature, explores the history of hedonic modeling, and summarizes the field’s utility-theory-based, microeconomic foundations. It also discusses common problems associated with hedonic modeling and potential solutions. Keywords: Hedonic models, Regression, Real estate, Box-Cox, Pricing, Price indexes, Semi-log, Least squares, Housing, Property
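A small sketch of the three hedonic functional forms on simulated housing data; the attribute names and coefficients below are made up for illustration:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(6)
n = 500
df = pd.DataFrame({"sqft": rng.uniform(800, 4000, n),
                   "beds": rng.integers(1, 6, n),
                   "age": rng.uniform(0, 60, n)})
df["price"] = np.exp(11 + 0.0004 * df.sqft + 0.05 * df.beds - 0.004 * df.age
                     + rng.normal(0, 0.15, n))

linear = smf.ols("price ~ sqft + beds + age", data=df).fit()            # linear hedonic
semilog = smf.ols("np.log(price) ~ sqft + beds + age", data=df).fit()   # semi-log hedonic
y_bc, lam = stats.boxcox(df["price"].to_numpy())                        # Box-Cox transform of price
boxcox_fit = smf.ols("y ~ sqft + beds + age", data=df.assign(y=y_bc)).fit()
print(f"Box-Cox lambda = {lam:.2f}, semi-log R^2 = {semilog.rsquared:.3f}")
```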
Chapter 79: Optimal Payout Ratio Under Uncertainty and the Flexibility Hypothesis: Theory and Empirical Evidence We theoretically extend DeAngelo and DeAngelo’s (2006) proposition on optimal payout policy in terms of the flexibility dividend hypothesis. We also
introduce growth rate, systematic risk, and total risk variables into the theoretical model. We use US panel data from 1969 to 2009 to empirically investigate the impact of growth rate, systematic risk, and total risk on the optimal payout ratio within a fixed-effects model. Furthermore, we implement the moving estimates process to find the empirical breakpoint of the structural change in the relationship between the payout ratio and risk, and confirm that the empirical breakpoint is not different from our theoretical breakpoint. Our theoretical model and empirical results can therefore be used to identify whether the flexibility hypothesis or the free cash flow hypothesis should be used to determine dividend policy. Keywords: Dividends, Payout policy, Optimal payout ratio, Flexibility hypothesis, Free cash flow hypothesis, Signaling hypothesis, Fixed effect, Clustering effect, Structural change model, Moving estimates processes, Systematic risk, Total risk, Market perfection
Chapter 80: Modeling Asset Returns with Skewness, Kurtosis, and Outliers This chapter uses an exponential generalized beta distribution of the second kind (EGB2) to model the returns on 30 Dow-Jones industrial stocks. The model accounts for stock return characteristics, including fat tails, peakedness (leptokurtosis), skewness, clustered conditional variance, and leverage effect. The goodness-of-fit statistic provides supporting evidence in favor of EGB2 distribution in modeling stock returns. The EGB2 distribution used in this chapter is a four parameter distribution. It has a closed-form density function, and its higher-order moments are finite and explicitly expressed by its parameters. The EGB2 distribution nests many widely used distributions such as normal distribution, log-normal distribution, Weibull distribution, and standard logistic distribution. Keywords: Expected stock return, Higher moments, EGB2 distribution, Risk management, Volatility, Conditional skewness, Risk premium
Chapter 81: Does Revenue Momentum Drive or Ride Earnings or Price Momentum? This chapter performs dominance tests to show that revenue surprises, earnings surprises, and prior returns each lead to significant momentum returns that cannot be fully explained by the others, suggesting that each conveys some exclusive and unpriced information content. Also, the joint implications of revenue surprises, earnings surprises, and prior returns are underestimated by investors, particularly when the information variables point in the same direction. Momentum
cross-contingencies are observed in that momentum profits driven by firm fundamental information depend positively on the accompanying firm market information, and vice versa. A three-way combined momentum strategy may offer a monthly return as high as 1.44%. Keywords: Earnings surprises, Momentum strategies, Post-earnings-announcement drift, Revenue surprises
Chapter 82: A VG-NGARCH Model for Impacts of Extreme Events on Stock Returns This chapter compares two types of GARCH models, namely, the VG-NGARCH and the GARCH-jump model with autoregressive conditional jump intensity, i.e., the GARJI model, to make inferences about log stock returns when there are irregular, substantial price fluctuations. The VG-NGARCH model imposes a nonlinear asymmetric structure on the conditional shape parameters in a variance-gamma process, which describes the arrival rates for news with different degrees of influence on price movements, and provides an ex ante probability for the occurrence of large price movements. On the other hand, the GARJI model, a mixed GARCH-jump model proposed by Chan and Maheu (2002), adopts two independent autoregressive processes to model the variances corresponding to moderate and large price movements, respectively. Keywords: VG-NGARCH model, GARCH-jump model, Autoregressive conditional jump intensity, GARJI model, Substantial price fluctuations, Shape parameter, Variance-gamma process, Ex ante probability, Daily stock price, Goodness-of-fit
Chapter 83: Risk-Averse Portfolio Optimization via Stochastic Dominance Constraints This chapter presents a new approach to portfolio selection based on stochastic dominance. The portfolio return rate in the new model is required to stochastically dominate a random benchmark. We formulate optimality conditions and duality relations for these models and construct equivalent optimization models with utility functions. Two different formulations of the stochastic dominance constraint, primal and inverse, lead to two dual problems which involve von Neumann–Morgenstern utility functions for the primal formulation and rank dependent (or dual) utility functions for the inverse formulation. We also discuss the relations of our approach to value-at-risk and conditional value-at-risk. Keywords: Portfolio optimization, Stochastic dominance, Stochastic order, Risk, Expected utility, Duality, Rank dependent utility, Yaari’s dual utility, Value-at-risk, Conditional value-at-risk
Chapter 84: Implementation Problems and Solutions in Stochastic Volatility Models of the Heston Type This chapter compares three major approaches to solving the numerical instability problem inherent in the fundamental solution of the Heston model. We use the fundamental transform method proposed by Lewis to reduce the number of variables from two to one and to separate the payoff function from the calculation of the Green function for option pricing. We show that the simple adjusted-formula method is much simpler than the rotation-corrected angle method of Kahl and Jäckel and also greatly superior to the direct integration method of Shaw when computing time is taken into consideration. Keywords: Heston, Stochastic volatility, Fourier inversion, Fundamental transform, Complex logarithm, Rotation-corrected angle, Simple adjusted formula, Green function
Chapter 85: Stochastic Change-Point Models of Asset Returns and Their Volatilities This chapter considers two time-scales and uses the “short” time-scale to define GARCH dynamics and the “long” time-scale to incorporate parameter jumps. This leads to a Bayesian change-point ARX-GARCH model, whose unknown parameters may undergo occasional changes at unspecified times and can be estimated by explicit recursive formulas when the hyperparameters of the Bayesian model are specified. Efficient estimators of the hyperparameters of the Bayesian model can be developed. The empirical Bayes approach can be applied to the frequentist problem of partitioning the time series into segments under sparsity assumptions on the change-points. Keywords: ARX-GARCH, Bounded complexity, Contemporaneous jumps, Change-point models, Empirical Bayes, Frequentist segmentation, Hidden Markov models, Hyperparameter estimation, Markov chain Monte Carlo, Recursive filters, Regression models, Stochastic volatility
Chapter 86: Unspanned Stochastic Volatilities and Interest Rate Derivatives Pricing This chapter first reviews the recent literature on unspanned stochastic volatilities (USV) documented in the interest rate derivatives markets. USV refers to the volatility factors implied in interest rate derivatives prices that have little correlation with the yield curve factors. We then present the result of Li and Zhao (2006) that a sophisticated DTSM without the USV feature can have serious difficulties in hedging caps and cap straddles, even though it captures bond yields well. Furthermore, at-the-money straddle hedging errors are highly correlated with cap-implied volatilities and can explain a large fraction of the hedging errors of all caps and straddles across moneyness and maturities. We also present a multifactor
term structure model with stochastic volatility and jumps, from Jarrow et al. (2007), that yields a closed-form formula for cap prices. The three-factor stochastic volatility model with Poisson jumps can price interest rate caps well across moneyness and maturity. The econometric methods in this chapter include extended Kalman filtering, maximum likelihood estimation with latent variables, the local polynomial method, and nonparametric density estimation. Keywords: Term structure modeling, Interest rate volatility, Heath-Jarrow-Morton model, Nonparametric density estimation, Extended Kalman filtering
Chapter 87: Alternative Equity Valuation Models This chapter examines alternative equity valuation models and their ability to forecast future stock prices. We use a simultaneous equations estimation technique to investigate the stock price forecasting ability of Ohlson’s model, Feltham and Ohlson’s model, and Warren and Shelton’s (1971) model. Moreover, we use the combined forecasting methods proposed by Granger and Newbold (1973) and Granger and Ramanathan (1984) to form combined stock price forecasts from the individual models. Finally, we examine whether comprehensive earnings can provide incremental price-relevant information beyond net income. Keywords: Ohlson model, Feltham and Ohlson model, Warren and Shelton model, Equity valuation models, Simultaneous equations estimation, Fundamental analysis, Financial statement analysis, Financial planning and forecasting, Combined forecasting, Comprehensive earnings, Abnormal earnings, Operating earnings, Accounting earnings
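The Granger-Ramanathan style of forecast combination mentioned above amounts to regressing realized values on the individual model forecasts and using the fitted coefficients as weights; a simulated sketch in which the two "models" are generic stand-ins, not the Ohlson or Warren-Shelton models:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
T = 120
actual = rng.normal(50, 5, T)                 # realized stock prices
f1 = actual + rng.normal(0, 2, T)             # forecast from model 1
f2 = actual + rng.normal(1, 3, T)             # forecast from model 2 (biased, noisier)

X = sm.add_constant(np.column_stack([f1, f2]))
w = sm.OLS(actual, X).fit().params            # intercept plus combination weights
combined = X @ w

mse = lambda f: np.mean((actual - f) ** 2)
print({"model1": mse(f1), "model2": mse(f2), "combined": mse(combined)})
```

The combined forecast typically has a lower mean squared error than either individual forecast, which is the rationale for combination.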
Chapter 88: Time Series Models to Predict the Net Asset Value (NAV) of an Asset Allocation Mutual Fund VWELX This research examines the use of various time-series models to predict the total net asset value (NAV) of an asset allocation mutual fund. The first set of models comprises smoothing methods: simple exponential smoothing, double exponential smoothing, and Winter’s method. The second set consists of trend models developed using regression estimation: a linear trend model, a quadratic trend model, and an exponential trend model. The third method is a moving average method. The fourth set of models is based on the Box-Jenkins methodology, including an autoregressive model, a moving average model, and an unbounded autoregressive moving average model. Keywords: NAV of a mutual fund, Asset allocation fund, Combination of forecasts, Single exponential smoothing, Double exponential smoothing, Winter’s method, Linear trend model, Quadratic trend model, Exponential trend model, Moving average method, Autoregressive model, Moving average model, Unbounded autoregressive moving average model
Chapter 89: Discriminant Analysis and Factor Analysis: Theory and Method This chapter discusses three multivariate techniques in detail: discriminant analysis, factor analysis, and principal component analysis. In addition, the stepwise discriminant analysis by Pinches and Mingo (1973) is improved using a goal programming technique. These methodologies are applied to determine useful financial ratios and the subsequent bond ratings. The analysis shows that stepwise discriminant analysis is not an efficient solution: it is outperformed by the hybrid approach using the goal programming technique, which provides a compromise solution that jointly maximizes the two objectives, namely explanatory power and discriminant power. Keywords: Multivariate technique, Discriminant analysis, Factor analysis, Principal component analysis, Stepwise discriminant analysis, Goal programming, Bond ratings, Compromised solution, Explanatory power, Discriminant power
Chapter 90: Implied Volatility: Theory and Empirical Method This chapter reviews the different theoretical methods used to estimate the implied standard deviation and shows how implied volatility can be estimated in empirical work. The OLS method for estimating the implied standard deviation is first introduced, and the formulas derived by applying a Taylor series expansion to the Black-Scholes option pricing model are also described. Three approaches to estimating implied volatility are derived from one, two, and three options, respectively. Because these formulas involve remainder terms, their accuracy depends on how close the underlying asset price is to the present value of the option’s exercise price. The formula utilizing three options is more accurate than the other two approaches. In this chapter, we use call options on S&P 500 index futures in 2010 and 2011 to illustrate how MATLAB can be used to deal with the issue of convergence in estimating the implied volatility of futures options. Keywords: Implied volatility, Implied standard deviation (ISD), Option pricing model, MATLAB, Taylor series expansion, Ordinary least-squares (OLS), Black-Scholes model, Options on S&P 500 index futures
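Independent of the Taylor-expansion formulas discussed above, implied volatility can also be backed out numerically by root-finding on the pricing model; a minimal plain Black-Scholes example in Python (the quote and contract terms below are hypothetical, and the chapter itself works with futures options in MATLAB):

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def bs_call(S, K, T, r, sigma):
    d1 = (np.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

def implied_vol(price, S, K, T, r):
    # find the sigma that equates the model price with the observed market price
    return brentq(lambda s: bs_call(S, K, T, r, s) - price, 1e-6, 5.0)

print(f"ISD = {implied_vol(61.5, S=1300, K=1300, T=0.25, r=0.01):.2%}")
```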
Chapter 91: Measuring Credit Risk in a Factor Copula Model This chapter uses a new approach to estimate the future credit risk of a target portfolio based on the CreditMetrics™ framework of J.P. Morgan. However, we adopt the factor copula perspective and bring the principal component analysis concept into the factor structure to construct a more appropriate dependence structure among credits. In order to examine the proposed method, we use real market data rather than simulated data. We also develop a convenient tool for risk analysis,
especially for banking loan businesses. The results show that assuming normally distributed dependence structures indeed leads to an underestimation of risk. In contrast, our proposed method better captures the features of risk and clearly exhibits fat-tail effects, even when the factors are assumed to be normally distributed. Keywords: Credit risk, Credit VaR, Default correlation, Copula, Factor copula, Principal component analysis
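For intuition, a one-factor Gaussian copula simulation of portfolio defaults, the simplest member of the factor copula family; the exposures, default probabilities, and factor loading below are arbitrary assumptions, and the chapter's PCA-based factor structure is not reproduced:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(8)
n_obligors, n_sims = 100, 50_000
pd_i = np.full(n_obligors, 0.02)            # 2% one-year default probability per loan
exposure = np.full(n_obligors, 1.0)         # unit exposure, zero recovery for simplicity
rho = 0.25                                  # loading on the common systematic factor

threshold = norm.ppf(pd_i)                  # default barrier for the latent variable
M = rng.standard_normal((n_sims, 1))                        # common factor draws
Z = rng.standard_normal((n_sims, n_obligors))               # idiosyncratic shocks
X = np.sqrt(rho) * M + np.sqrt(1 - rho) * Z                 # one-factor Gaussian copula
losses = (X < threshold) @ exposure                         # portfolio loss in each scenario

print(f"mean loss {losses.mean():.2f}, 99% credit VaR {np.quantile(losses, 0.99):.1f}")
```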
Chapter 92: Instantaneous Volatility Estimation by Nonparametric Fourier Transform Methods This chapter conducts some simulation tests to justify the effectiveness of the Fourier transform method. Malliavin and Mancino (2009) proposed a nonparametric Fourier transform method to estimate the instantaneous volatility under the assumption that the underlying asset price process is a semi-martingale. Two correction schemes are proposed to improve the accuracy of volatility estimation. By means of these Fourier transform methods, some documented phenomena such as volatility daily effect and multiple risk factors of volatility can be observed. Then, a linear hypothesis between the instantaneous volatility and VIX derived from Zhang and Zhu (2006) is investigated. Keywords: Information content, Instantaneous volatility, Fourier transform method, Bias reduction, Correction method, Local volatility, Stochastic volatility, VIX, Volatility daily effect, Online estimation
Chapter 93: A Dynamic CAPM with Supply Effect: Theory and Empirical Results This chapter first theoretically extends Black’s CAPM, and then uses price, dividend per share, and earnings per share to test the existence of supply effect with US equity data. A simultaneous equation system is constructed through a standard structural form of a multi-period equation to represent the dynamic relationship between supply and demand for capital assets. The equation system is exactly identified under our specification. Then, two hypotheses related to supply effect are tested regarding the parameters in the reduced-form system. The equation system is estimated by the Seemingly Unrelated Regression (SUR) method, since SUR allows one to estimate the presented system simultaneously while accounting for the correlated errors. Keywords: CAPM, Asset, Endogenous supply, Simultaneous equations
Chapter 94: A Generalized Model for Optimum Futures Hedge Ratio This chapter proposes the generalized hyperbolic distribution as the joint log-return distribution of the spot and futures. Using the parameters of this distribution, we derive several of the most widely used optimal hedge ratios: minimum variance,
maximum Sharpe measure, and minimum generalized semivariance. To estimate these optimal hedge ratios, we first write down the log-likelihood functions for symmetric hyperbolic distributions and then estimate the parameters by maximizing the log-likelihood functions. Using these MLE parameters for the generalized hyperbolic distributions, we obtain the minimum variance hedge ratio and the optimal Sharpe hedge ratio. Also, based on the MLE parameters and a numerical method, we can calculate the minimum generalized semivariance hedge ratio. Keywords: Optimal hedge ratio, Generalized hyperbolic distribution, Martingale property, Minimum variance hedge ratio, Minimum generalized semivariance, Maximum Sharpe measure, Joint-normality assumption, Hedging effectiveness
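The minimum variance hedge ratio named above has a simple moment-based counterpart, h* = Cov(spot, futures) / Var(futures), which is a useful benchmark for the MLE-based ratios; a quick sketch on simulated returns:

```python
import numpy as np

rng = np.random.default_rng(9)
T = 1000
f = rng.normal(0, 0.012, T)                      # futures log-returns
s = 0.9 * f + rng.normal(0, 0.005, T)            # spot log-returns, correlated with futures

h_mv = np.cov(s, f)[0, 1] / np.var(f, ddof=1)    # minimum variance hedge ratio
hedged = s - h_mv * f
effectiveness = 1 - hedged.var(ddof=1) / s.var(ddof=1)   # fraction of spot variance removed
print(f"h* = {h_mv:.3f}, hedging effectiveness = {effectiveness:.2%}")
```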
Chapter 95: Instrument Variable Approach to Correct for Endogeneity in Finance This chapter reviews the instrumental variables (IV) approach to endogeneity from the point of view of a finance researcher who is implementing instrumental variable methods in empirical studies. This chapter is organized into two parts. Part I discusses the general procedure of the instrumental variable approach, including Two-Stage Least Square (2SLS) and Generalized Method of Moments (GMM), the related diagnostic statistics for assessing the validity of instruments, which are important but not used very often in finance applications, and some recent advances in econometrics research on weak instruments. Part II surveys corporate finance applications of instrumental variables. We found that the instrumental variables used in finance studies are usually chosen arbitrarily, and very few diagnostic statistics are performed to assess the adequacy of IV estimation. The resulting IV estimates thus are questionable. Keywords: Endogeneity, OLS, Instrumental variable (IV) estimation, Simultaneous equations, 2SLS, GMM, Overidentifying restrictions, Exogeneity test, Weak instruments, Anderson-Rubin statistic, Empirical corporate finance
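A bare-bones 2SLS illustration of the endogeneity problem described above (simulated data; the instrument here is valid by construction, which is exactly what is hard to guarantee in real corporate finance applications):

```python
import numpy as np

def two_sls(y, x_endog, x_exog, z):
    """Manual 2SLS: project the endogenous regressor on instruments, then OLS on fitted values."""
    W = np.column_stack([x_exog, z])
    fitted = W @ np.linalg.lstsq(W, x_endog, rcond=None)[0]        # first stage
    X2 = np.column_stack([x_exog, fitted])
    return np.linalg.lstsq(X2, y, rcond=None)[0]                   # second stage

rng = np.random.default_rng(10)
n = 2000
z = rng.normal(size=n)                     # instrument: relevant, excluded from the outcome equation
u = rng.normal(size=n)                     # common shock creating endogeneity
x = 0.8 * z + u + rng.normal(size=n)
y = 1.0 + 2.0 * x - 1.5 * u + rng.normal(size=n)
const = np.ones((n, 1))

ols = np.linalg.lstsq(np.column_stack([const, x]), y, rcond=None)[0]
iv = two_sls(y, x[:, None], const, z[:, None])
print(f"true slope 2.0, OLS slope {ols[1]:.2f} (biased), 2SLS slope {iv[1]:.2f}")
```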
Chapter 96: Application of Poisson Mixtures in the Estimation of Probability of Informed Trading This research first discusses the evolution of the probability of informed trading in the finance literature. Motivated by asymmetric effects, e.g., in returns and trading volume in up and down markets, this study modifies the Poisson mixture model underlying the probability of informed trading proposed by Easley et al. (1996) by allowing different arrival rates for informed buys and informed sells. By applying the expectation–maximization (EM) algorithm to estimate the parameters of the model, we derive a set of equations for maximum likelihood estimation; these equations are encoded in a SAS macro utilizing SAS/IML for implementation of the methodology.
Keywords: Probability of informed trading (PIN), Expectation–maximization (EM) algorithm, A mixture of Poisson distribution, Asset-pricing returns, Order imbalance, Information asymmetry, Bid-ask spreads, Market microstructure, Trade direction, Errors in variables
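As a stripped-down illustration of the EM machinery involved, here is a two-component Poisson mixture fitted to daily buy counts only; this is not the full buy/sell structure of the PIN model, and all numbers are simulated:

```python
import numpy as np
from scipy.stats import poisson

def em_two_poisson(counts, n_iter=200):
    """EM for a mixture of two Poissons, e.g. 'no-news' vs 'information-event' trading days."""
    counts = np.asarray(counts)
    pi, lam = 0.5, np.array([counts.mean() * 0.5, counts.mean() * 1.5])
    for _ in range(n_iter):
        w1 = pi * poisson.pmf(counts, lam[1])            # E-step: component likelihoods
        w0 = (1 - pi) * poisson.pmf(counts, lam[0])
        resp = w1 / (w0 + w1)                            # responsibility of the event regime
        pi = resp.mean()                                 # M-step: mixing weight
        lam[1] = (resp * counts).sum() / resp.sum()      # arrival rate on event days
        lam[0] = ((1 - resp) * counts).sum() / (1 - resp).sum()
    return pi, lam

rng = np.random.default_rng(11)
event = rng.uniform(size=500) < 0.3
buys = np.where(event, rng.poisson(90, 500), rng.poisson(40, 500))
print(em_two_poisson(buys))            # roughly (0.3, [40, 90])
```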
Chapter 97: CEO Stock Options and Analysts’ Forecast Accuracy and Bias This chapter uses ordinary least squares estimation to investigate the relations between CEO stock options and analysts’ earnings forecast accuracy and bias. Our OLS models relate forecast accuracy and forecast bias (the dependent variables) to CEO stock options (the independent variable) and controls for earnings characteristics, firm characteristics, and forecast characteristics. In addition, the models include controls for industry and year. We use four measures of options: new options, existing exercisable options, existing unexercisable options, and total options (sum of the previous three), all scaled by total number of shares outstanding, and estimate two models for each dependent variable, one including total options and the other including new options, existing exercisable options, and existing unexercisable options. We also use both contemporaneous as well as lagged values of options in our main tests. Keywords: CEO stock options, Analysts’ forecast accuracy, Analysts’ forecast bias, CEO compensation, Agency costs, Investment risk taking, Effort allocation, Opportunistic earnings management, Opportunistic disclosure management, Forecasting complexity
Chapter 98: Option Pricing and Hedging Performance Under Stochastic Volatility and Stochastic Interest Rates This chapter first develops an implementable closed-form option model that admits both stochastic volatility and stochastic interest rates and that is parsimonious in the number of parameters. Based on the model, both delta-neutral and single-instrument minimum variance hedging strategies are derived analytically. Using S&P 500 option prices, we then compare the pricing and hedging performance of this model with that of three existing ones that, respectively, allow for (i) constant volatility and constant interest rates (the Black-Scholes), (ii) constant volatility but stochastic interest rates, and (iii) stochastic volatility but constant interest rates. Overall, incorporating stochastic volatility and stochastic interest rates produces the best performance in pricing and hedging, with the remaining pricing and hedging errors no longer systematically related to contract features. The second-best performer in the horse race is the stochastic volatility model, followed by the stochastic interest rates model and then by the Black-Scholes model. Keywords: Stock option pricing, Stochastic volatility, Stochastic interest rates, Hedge ratios, Hedging, Pricing performance, Hedging performance
Chapter 99: The Le Châtelier Principle of the Capital Market Equilibrium This chapter provides a theoretical underpinning for the problem posed by the Investment Company Act. The Le Châtelier principle is well known in thermodynamics: a system tends to adjust itself to a new equilibrium as far as possible. In capital market equilibrium, added constraints on the portfolio weight of each stock can lead to inefficiency, manifested in a right-shifted efficient frontier. According to the empirical study, the potential loss can amount to millions of dollars, coupled with a higher risk-free rate and greater transaction and information costs. Keywords: Markowitz model, Efficient frontiers, With constraints, Without constraints, Le Châtelier principle, Thermodynamics, Capital market equilibrium, Diversified mutual funds, Quadratic programming, Investment Company Act
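The Le Châtelier effect described above can be seen numerically by tracing the minimum-variance frontier with and without per-stock weight caps; a small quadratic-programming sketch via SLSQP, with random toy inputs and an illustrative 25% cap:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(12)
n = 8
mu = rng.uniform(0.05, 0.12, n)                          # expected returns
A = rng.normal(size=(n, n))
cov = A @ A.T / n * 0.02                                 # positive-definite covariance matrix

def frontier_sd(target, cap):
    """Std dev of the minimum-variance portfolio with a target mean and a per-asset cap."""
    cons = [{"type": "eq", "fun": lambda w: w.sum() - 1},
            {"type": "eq", "fun": lambda w: w @ mu - target}]
    res = minimize(lambda w: w @ cov @ w, np.full(n, 1 / n),
                   bounds=[(0, cap)] * n, constraints=cons, method="SLSQP")
    return np.sqrt(res.fun) if res.success else np.nan

targets = np.linspace(mu.min(), mu.max(), 15)
unconstrained = [frontier_sd(t, 1.00) for t in targets]   # long-only, no extra cap
constrained = [frontier_sd(t, 0.25) for t in targets]     # 25% cap per stock shifts the frontier right
print(np.round(np.array([unconstrained, constrained]), 4))
```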
References Aitchison, J., & Brown, J. A. C. (1973). Lognormal distribution, with special reference to its uses in economics. London: Cambridge University Press. Ait-Sahalia, Y., & Lo, A. W. (2000). Nonparametric risk management and implied risk aversion. Journal of Econometrics, 94, 9–51. Amemiya, T. (1974). A note on a fair and Jaffee model. Econometrica, 42, 759–762. Anderson, T. W. (1994). The statistical analysis of time series. New York: Wiley-Interscience. Anderson, T. W. (2003). An introduction to multivariate statistical analysis. New York: WileyInterscience. Angrist, J. D., & Lavy, V. (1999). Using Maimonides’ rule to estimate the effect of class size on scholastic achievement. Quarterly Journal of Economics, 14(2), 533–575. Arellano, M., & Bover, O. (1995). Another look at the instrumental variable estimation of errorcomponents models. Journal of Econometrics, 68(1), 29–51. Bakshi, G., Cao, C., & Chen, Z. (1997). Empirical performance of alternative option pricing models. Journal of Finance, 52(5), 2003–2049. Baltagi, B. (2008). Econometric analysis of panel data (4th ed.). Chichester: Wiley. Black, F., & Scholes, M. (1973). The pricing of options and corporate liabilities. Journal of Political Economy, 81(3), 637–654. Black, F., Jensen, M. C., & Scholes, M. (1972). The capital asset pricing model: Some empirical tests. In M. C. Jensen (Ed.), Studies in the theory of capital markets. New York: Praeger. Blume, M. E., & Friend, I. (1973). A new look at the capital asset pricing model. Journal of Finance, 28, 19–33. Brown, S. J., & Goetzmann, W. N. (1997). Mutual fund styles. Journal of Financial Economics, 43(3), 373–399. Cameron, A. C., Gelbach, J. B., & Miller, D. L. (2006). Robust inference with multiway clustering (Working Paper). Chacko, G., & Viceira, L. M. (2003). Spectral GMM estimation of continuous-time processes. Journal of Econometrics, 116(1), 259–292. Chang, C. F. (1999). Determinants of capital structure and management compensation: The partial least squares approach. Ph.D. dissertation, Rutgers University. Chang, H. S., & Lee, C. F. (1977). Using pooled time-series and cross section data to test the firm and time effects in financial analysis. Journal of Financial and Quantitative Analysis, 12, 457–471.
Chang, C., Lee, A. C., & Lee, C. F. (2009). Determinants of capital structure choice: A structural equation modeling approach. Quarterly Review of Economic and Finance, 49(2), 197–213. Chen, H. Y. (2011). Momentum strategies, dividend policy, and asset pricing test. Ph.D. dissertation, Rutgers University. Chen, S. N., & Lee, C. F. (1981). The sampling relationship between Sharpe’s performance measure and its risk proxy: Sample size, investment horizon and market conditions. Management Science, 27(6), 607–618. Chen, K. H., & Shimerda, T. A. (1981). An empirical analysis of useful finance ratios. Financial Management, 10(1), 51–60. Chen, W. P., Chung, H., Lee, C. F., & Liao, W. L. (2007). Corporate governance and equity liquidity: Analysis of S&P transparency and disclosure rankings. Corporate Governance: An International Review, 15(4), 644–660. Cheng, D. C., & Lee, C. F. (1986). Power of alternative specification errors tests in identifying misspecified market models. The Quarterly Review of Economics and Business, 16(3), 6–24. Cheng, K. F., Chu, C. K., & Hwang, R. C. (2010). Predicting bankruptcy using the discrete-time semiparametric hazard model. Quantitative Finance, 10, 1055–1066. Chow, G. C. (1960). Tests of equality between sets of coefficients in two linear regressions. Econometrica, 28, 591–605. Core, J. E. (2000). The directors’ and officers’ insurance premium: An outside assessment of the quality of corporate governance. Journal of Law, Economics, and Organization, 16(2), 449–477. Cox, J. C., Ross, S. A., & Rubinstein, M. (1979). Option pricing: A simplified approach. Journal of Financial Economics, 7(3), 229–263. Fair, R. C., & Jaffee, D. M. (1972). Methods of estimation of market in disequilibrium. Econometrica, 40, 497–514. Fama, E. F. (1971). Parameter estimates for symmetric stable distributions. Journal of the American Statistical Association, 66, 331–338. Fama, E. F., & MacBeth, J. D. (1973). Risk, return, and equilibrium: Empirical tests. Journal of Political Economy, 81, 607–636. Fok, R. C. W., Lee, C. F., & Cheng, D. C. (1996). Alternative specifications and estimation methods for determining random beta coefficients: Comparison and extensions. Journal of Financial Studies, 4(2), 61–88. Frecka, T. J., & Lee, C. F. (1983). Generalized financial ratio adjustment processes and their implications. Journal of Accounting Research, 21, 308–316. Gordon, J., & Gordon, M. (1997). The fiinite horizon expected return model. The Financial Analyst Journal, 53, 52–61. Granger, C. W. J., & Newbold, P. (1973). Some comments on the evaluation of economic forecasts. Applied Economics, 5(1), 35–47. Granger, C. W. J., & Newbold, P. (1974). Spurious regressions in econometrics. Journal of Econometrics, 2, 111–120. Granger, C. W. J., & Ramanathan, R. (1984). Improved methods of combining forecasts. Journal of Forecasting, 3, 197–204. Hamilton, J. D. (1994). Time series analysis. Princeton: Princeton University Press. Hansen, L. P. (1982). Large sample properties of generalized method of moments estimators. Econometrica, 50(4), 1029–1054. Hansen, B. E. (1996). Inference when a nuisance parameter is not identified under the null hypothesis. Econometrica, 64(2), 413–430. Hansen, B. E. (1997). Approximate asymptotic p-values for structural change tests. Journal of Business and Economic Statistics, 15, 60–67. Hansen, B. E. (1999). Threshold effects in non-dynamic panels: Estimation, testing, and inference. Journal of Econometrics, 93, 345–368. Hansen, B. E. (2000a). 
Sample splitting and threshold estimation. Econometrica, 68(3), 575–603. Hansen, B. E. (2000b). Testing for structural change in conditional models. Journal of Econometrics, 97, 93–115.
Heston, S. L. (1993). A closed-form solution for options with stochastic volatility with applications to bond and currency options. Review of Financial Studies, 6(2), 327–343. Hsiao, C. (2003). Analysis of panel data (2nd ed.). Cambridge MA: Cambridge University Press. Hutchinson, J. M., Lo, A. W., & Poggio, T. (1994). A nonparametric approach to pricing and hedging derivative securities via learning networks. Journal of Finance, 49(3), 851–889. Hwang, R. C., Cheng, K. F., & Lee, J. C. (2007). A semiparametric method for predicting bankruptcy. Journal of Forecasting, 26, 317–342. Hwang, R. C., Wei, H. C., Lee, J. C., & Lee, C. F. (2008). On prediction of financial distress using the discrete-time survival model. Journal of Financial Studies, 16, 99–129. Hwang, R. C., Cheng, K. F., & Lee, C. F. (2009). On multiple-class prediction of issuer crediting ratings. Journal of Applied Stochastic Models in Business and Industry, 25, 535–550. Hwang, R. C., Chung, H., & Chu, C. K. (2010). Predicting issuer credit ratings using a semiparametric method. Journal of Empirical Finance, 17(1), 120–137. Ittner, C. D., Larcker, D. F., & Rajan, M. V. (1997). The choice of performance measures in annual bonus contracts. Accounting Review, 72(2), 231–255. Jones, J. (1991). Earnings management during import relief investigations. Journal of Accounting Research, 29, 193–228. Kao, L. J., & Lee, C. F. (2012). Alternative method for determining industrial bond ratings: theory and empirical evidence. International Journal of Information Technology & Decision Making, 11(6), 1215–1236. Kau, J. B., & Lee, C. F. (1976). Functional form, density gradient and the price elasticity of demand for housing. Urban Studies, 13(2), 181–192. Kau, J. B., Lee, C. F., & Sirmans, C. F. (1986). Urban econometrics: Model developments and empirical results. Greenwich: JAI Press. Research in Urban Economics, Vol. 6. Kaufman, L., & Rousseeuw, P. J. (1990). Finding groups in data: An introduction to cluster analysis (9th ed.). New York: Wiley-Interscience. Kim, D. (1995). The errors in the variables problem in the cross-section of expected stock returns. Journal of Finance, 50(5), 1605–1634. Kim, D. (1997). A reexamination of firm size, book-to-market, and earnings price in the crosssection of expected stock returns. Journal of Financial and Quantitative Analysis, 32(4), 463–489. Kim, D. (2010). Issues related to the errors-in-variables problems in asset pricing tests. In Handbook of quantitative finance and risk management (Part V, Chap. 70, pp. 1091–1108). Kuan, C. M., & Hornik, K. (1995). The generalized fluctuation test: A unifying view. Econometric Reviews, 14, 135–161. Lambert, R., & Larcker, D. (1987). An analysis of the use of accounting and market measures of performance in executive compensation contracts. Journal of Accounting Research, 25, 85–125. Lee, C. F. (1973). Errors-in-variables estimation procedures with applications to a capital asset pricing model. Ph.D. dissertation, The State University of New York at Buffalo. Lee, C. F. (1976a). A note on the interdependent structure of security returns. Journal of Financial and Quantitative Analysis, 11, 73–86. Lee, C. F. (1976b). Functional form and the dividend effect of the electric utility industry. Journal of Finance, 31(5), 1481–1486. Lee, C. F. (1977a). Functional form, skewness effect and the risk-return relationship. Journal of Financial and Quantitative Analysis, 12, 55–72. Lee, C. F. (1977b). Performance measure, systematic risk and errors-in-variable estimation method. 
Journal of Economics and Business, 39, 122–127. Lee, A. (1996). Cost of capital and equity offerings in the insurance industry. Ph.D. dissertation, The University of Pennsylvania. Lee, C. F., & Chen, H. Y. (2013). Alternative errors-in-variable models and their application in finance research (Working Paper). Rutgers University. Lee, A. C., & Cummins, J. D. (1998). Alternative models for estimating the cost of capital for property/casualty insurers. Review of Quantitative Finance and Accounting, 10(3), 235–267.
Lee, C. F., & Jen, F. C. (1978). Effects of measurement errors on systematic risk and performance measure of a portfolio. Journal of Financial and Quantitative Analysis, 13(2), 299–312. Lee, K. W., & Lee, C. F. (2012). Multiple directorships, corporate ownership, firm performance (Working Paper). Rutgers University. Lee, C. F., & Wu, C. C. (1985). The impacts of kurtosis on risk stationarity: Some empirical evidence. Financial Review, 20(4), 263–269. Lee, C. F., & Zumwalt, J. K. (1981). Associations between alternative accounting profitability measures and security returns. Journal of Financial and Quantitative Analysis, 16, 71–93. Lee, C. F., Newbold, P., Finnerty, J. E., & Chu, C. C. (1986). On accounting-based, market-based and composite-based beta predictions: Method and implications. Financial Review, 21, 51–68. Lee, C. F., Wei, K. C. J., & Bubnys, E. L. (1989). The APT versus the multi-factor CAPM: Empirical evidence. Quarterly Review of Economics and Business, 29, 6–16. Lee, C. F., Wu, C. C., & Wei, K. C. J. (1990). Heterogeneous investment horizon and capital asset pricing model: Theory and implications. Journal of Financial and Quantitative Analysis, 25, 361–376. Lee, C. F., Tsai, C. M., & Lee, A. C. (2011a). Asset pricing with disequilibrium price adjustment: Theory and empirical evidence. Quantitative Finance. Lee, C. F., Gupta, M. C., Chen, H. Y., & Lee, A. C. (2011b). Optimal payout ratio under uncertainty and the flexibility hypothesis: Theory and empirical evidence. Journal of Corporate Finance, 17(3), 483–501. Lee, C. F., Chien, C. C., Lin, H. C., Lin, Y. C., & Lin, F. C. (2013). Tradeoff between reputation concerns and economic dependence for auditors – Threshold regression approach (Working Paper). Rutgers University. Leon, A., & Vaello-Sebastia, A. (2009). American GARCH employee stock option valuation. Journal of Banking and Finance, 33(6), 1129–1143. Lien, D. (2010). A note on the relationship between the variability of the hedge ratio and hedging performance. Journal of Futures Markets, 30(11), 1100–1104. Lien, D., & Shrestha, K. (2007). An empirical analysis of the relationship between hedge ratio and hedging horizon using wavelet analysis. Journal of Futures Markets, 27(2), 127–150. Liu, B. (2006). Two essays in financial econometrics: I: Functional forms and pricing of country funds. II: The term structure model of inflation risk premia. Ph.D. dissertation, Rutgers University. Maddala, G. S., Rao, C. R., & Maddala, G. S. (1996). Handbook of statistics 14: Statistical methods in finance. Amsterdam: Elsevier Science & Technology. Martin, C. (1990). Corporate borrowing and credit constraints: Structural disequilibrium estimates for the U.K. Review of Economics and Statistics, 72(1), 78–86. Mayer, W. J. (1989). Estimating disequilibrium models with limited a priori price-adjustment information. Journal of Econometrics, 41(3), 303–320. Miller, M. H., & Modigliani, F. (1966). Some estimates of the cost of capital to the utility industry, 1954-7. American Economic Review, 56(3), 333–391. Myers, R. J. (1991). Estimating time-varying optimal hedge ratios on futures markets. Journal of Futures Markets, 11(1), 39–53. Newey, W. K., & West, K. D. (1987). A simple, positive semi-definite, heteroskedasticity and autocorrelation consistent covariance matrix. Econometrica, 55(3), 703–708. Ohlson, J. S. (1980). Financial ratios and the probabilistic prediction of bankruptcy. Journal of Accounting Research, 19, 109–131. Petersen, M. A. (2009).
Estimating standard errors in finance panel data sets: Comparing approaches. Review of Financial Studies, 22, 435–480. Pinches, G. E., & Mingo, K. A. (1973). A multivariate analysis of industrial bond ratings. Journal of Finance, 28(1), 1–18. Quandt, R. E. (1988). The econometrics of disequilibrium. New York: Basil Blackwell Inc.
Rendleman, R. J., Jr., & Barter, B. J. (1979). Two-state option pricing. Journal of Finance, 24, 1093–1110. Rubinstein, M. (1994). Implied binomial trees. Journal of Finance, 49, 771–818. Schroder, M. (1989). Computing the constant elasticity of variance option pricing formula. Journal of Finance, 44(1), 211–219. Sealey, C. W., Jr. (1979). Credit rationing in the commercial loan market: Estimates of a structural model under conditions of disequilibrium. Journal of Finance, 34, 689–702. Sears, R. S., & John Wei, K. C. (1988). The structure of skewness preferences in asset pricing model with higher moments: An empirical test. Financial Review, 23(1), 25–38. Shapiro, A. F. (2005). Fuzzy regression models (Working Paper). Penn State University. Shumway, T. (2001). Forecasting bankruptcy more accurately: A simple hazard model. The Journal of Business, 74, 101–124. Thompson, S. B. (2011). Simple formulas for standard errors that cluster by both firm and time. Journal of Financial Economics, 99(1), 1–10. Thursby, J. G. (1985). The relationship among the specification error tests of Hausman, Ramsey and Chow. Journal of the American Statistical Association, 80(392), 926–928. Titman, S., & Wessels, R. (1988). The determinants of capital structure choice. Journal of Finance, 43(1), 1–19. Tsai, G. M. (2005). Alternative dynamic capital asset pricing models: Theories and empirical results. Ph.D. dissertation, Rutgers University. Van Der Klaauw, W. (2002). Estimating the effect of financial aid offers on college enrollment: A regression-discontinuity approach. International Economic Review, 43(4), 1249–1287. Wei, K. C. (1984). The arbitrage pricing theory versus the generalized intertemporal capital asset pricing model: Theory and empirical evidence. Ph.D. dissertation, University of Illinois at Urbana-Champaign. White, H. (1980). A heteroskedasticity-consistent covariance matrix estimator and a direct test for heteroskedasticity. Econometrica, 48(4), 817–838. Wooldridge, J. M. (2010). Econometric analysis of cross section and panel data (2nd ed.). Cambridge, MA: The MIT Press. Yang, C. C. (1989). The impact of new equity financing on firms’ investment, dividend and debtfinancing decisions. Ph.D. dissertation, The University of Illinois at Urbana-Champaign. Yang, C. C., Lee, C. F., Gu, Y. X., & Lee, Y. W. (2010). Co-determination of capital structure and stock returns – A LISREL approach: An empirical test of Taiwan stock markets. Quarterly Review of Economic and Finance, 50(2), 222–233. Zeileis, A., Leisc, F., Hornik, K., & Kleiber, C. (2002). Strucchange: An R package for testing for structural change in linear regression models. Journal of Statistical Software, 7, 1–38.
2 Experience, Information Asymmetry, and Rational Forecast Bias
April Knill, Kristina L. Minnick, and Ali Nejadmalayeri
Contents
2.1 Introduction ..... 64
2.2 Theoretical Design ..... 67
2.3 Empirical Method ..... 71
2.4 Data ..... 75
2.5 Empirical Findings ..... 78
2.6 Robustness ..... 95
2.7 Conclusions ..... 96
Appendix 1: Proofs ..... 97
The Objective Function ..... 97
Proof of Proposition 1 ..... 97
Proof of Proposition 2 ..... 98
Appendix 2: Alternate Samples ..... 98
Appendix 3: Alternate Proxies for Information Asymmetry for Table 2.5 ..... 98
References ..... 99
Abstract
We use a Bayesian model of forecast updating in which the bias in the forecast endogenously determines how the forecaster’s own estimates are weighted in the posterior beliefs. Our model predicts a concave relationship between forecast accuracy and the posterior weight placed on the forecaster’s self-assessment.
A. Knill (*)
The Florida State University, Tallahassee, FL, USA
e-mail: [email protected]
K.L. Minnick
Bentley University, Waltham, MA, USA
e-mail: [email protected]
A. Nejadmalayeri
Department of Finance, Oklahoma State University, Oklahoma, OK, USA
e-mail: [email protected]
C.-F. Lee, J. Lee (eds.), Handbook of Financial Econometrics and Statistics, DOI 10.1007/978-1-4614-7750-1_2, © Springer Science+Business Media New York 2015
We then use a panel regression to test our analytical findings and find that an analyst’s experience is indeed concavely related to the forecast error. This study examines whether it is ever rational for analysts to post biased estimates and how information asymmetry and analyst experience factor into the decision. Using a construct where analysts wish to minimize their forecasting error, we model forecasted earnings when analysts combine private information with consensus estimates to determine the optimal forecast bias, i.e., the deviation from the consensus. We show that the analyst’s rational bias increases with information asymmetry but is concavely related to experience. Novice analysts post estimates similar to the consensus, but as they become more experienced and develop private information channels, their estimates become biased as they deviate from the consensus. Highly seasoned analysts, who have superior analytical skills and valuable relationships, need not post biased forecasts. Keywords
Financial analysts • Forecast accuracy • Information asymmetry • Forecast bias • Bayesian updating • Panel regressions • Rational bias • Optimal bias • Analyst estimation • Analyst experience
2.1 Introduction
Extant evidence suggests an intimate link between an analyst’s experience and her forecasting performance. Analysts who are experienced and highly specialized often forecast better than others (Clement and Tse 2005; Bernhardt et al. 2006). One way they do so is by posting an optimistic bias (Mest and Plummer 2003; Gu and Xue 2007). Novice analysts with limited resources tend to herd with others, which results in almost no bias (Bernhardt et al. 2006). In theory, superior forecasters produce better estimates either by resolving information asymmetry or by offering a better assessment. Lim (2001) suggests that analysts can improve forecast accuracy by strategically biasing their forecasts upwards, which placates management, and in essence purchases additional information.1 Analysts with
Footnote 1: Beyer (2008) argues that, even without incentives to appease management, analysts may still post forecasts that exceed median earnings because managers can manipulate earnings upward to prevent falling short of earnings forecasts. Moreover, Conrad et al. (2006) find support for the idea that analysts’ “... recommendation changes are ‘sticky’ in one direction, with analysts reluctant to downgrade.” Evidence also indicates that analysts rarely post sell recommendations for a stock, suggesting that losing a firm’s favor can be viewed as a costly proposition. At the extreme, firms even pursue legal damages for an analyst’s unfavorable recommendations. In a 2001 congressional hearing, the president and chief executive officer of the Association for Investment Management and Research told the US House of Representatives Committee on Financial Services, Capital Markets Subcommittee, that “... In addition to pressures within their firms, analysts can also be, and have been, pressured by the executives of corporate issuers to issue favorable reports and recommendations. Regulation Fair Disclosure notwithstanding, recent history ... has shown that companies retaliate against analysts who issue ‘negative’ recommendations by denying them direct access to company executives and to company-sponsored events that are important research tools. Companies have also sued analysts personally, and their firms, for negative coverage....”
Footnote 2: See also Han et al. (2001).
long histories of examining firms in a particular industry can also offer a unique perspective, and they post biased estimates to signal their superior ability (Bernhardt et al. 2006). Of course, analysts do not indefinitely and indiscriminately bias forecasts to appease the firm or signal their ability. This is mainly because analysts can learn from the forecasts of other analysts (Chen and Jiang 2006). By incorporating information from other forecasts, analysts can improve the accuracy of their own forecasts without posting biased estimates. The important question then is: given the analyst's own assessment ability and her efficacy in procuring private information versus using consensus information, how should she construct an optimal forecast? Does that optimal forecast ever include bias, and how does information asymmetry affect this decision? We address these questions by analytically and empirically examining how an analyst's experience and information asymmetry affect her forecasting. In so doing, we also account for the role of the consensus estimate in an analyst's forecast.
We begin by modeling the problem of optimal forecasting. To be specific, we combine key features of current rational forecasting models by Lim (2001) and Chen and Jiang (2006). As in Chen and Jiang (2006), analysts in our model form rational (i.e., minimum squared error) forecasts by weighing both public and private information.2 Following Lim (2001), our analysts post rational forecasts that deviate from the consensus to purchase private information from managers. Motivated by Lim (2001) and Bernhardt et al. (2006), we also allow analysts to post biased forecasts because they have more expertise than the consensus. The novelty of our approach is that we directly model how information asymmetry and analyst experience combine to affect the purchase of private information for use in the forecast deviation. We are also able to model how analysts with different levels of experience, i.e., novice, moderately experienced, and highly seasoned analysts, construct their forecasts.
2 See also Han et al. (2001).
We analytically derive the optimal deviation from the consensus as one that minimizes the mean squared error while allowing for rational Bayesian updating based on public and private knowledge. Our analysis shows that even in a rational forecast framework, analysts' forecast deviation depends on the observed consensus deviation of other analysts. When analysts observe others deviating from the consensus (especially those with more experience), they gain enough insight to avoid posting a large deviation themselves. Our results confirm the findings of both Bernhardt et al. (2006) and Chen and Jiang (2006) – that analysts can rationally herd. In the presence of the informative consensus, analysts choose to herd with each other, rather than post estimates that are biased. Our theory suggests that the likelihood of posting biased estimates, conditional on the consensus, is significantly influenced by the analyst's ability to process
information. Consistent with Hong et al. (2000), we show that novice analysts essentially herd. Without either honed analytical abilities or valuable relationships, these analysts follow the premise that the consensus is more accurate than their own private information. We next show that a moderately experienced analyst relies more on her sources inside the firm than on her analytical skills. This manifests itself as a biased forecast since she must appease management to tap into those sources. This link between experience and deviation is not monotonic. Highly seasoned analysts do not purchase as much additional information (or they purchase the information at a reduced price), either because they possess superior analytical skills or because their valuable relationships with firms afford them beneficial information without the optimistic forecast. This preferential treatment is akin to that which is afforded to companies in relationship lending with banks (Petersen and Rajan 1994). Indeed, Carey et al. (1998) argue that the relationship lending of banks can also be ascribed to some nonbank financial intermediaries. Although the firms that an analyst covers are not necessarily financial, the same relationship could certainly exist and is based on information asymmetry.
We further demonstrate that as the analyst-firm information asymmetry increases, so does the bias. Similar to the results found in Mest and Plummer (2003), analysts find private channels and analytical ability valuable mitigating factors when faced with information asymmetry. Moderately experienced analysts, who begin to tap into reliable private channels without the valuable relationships that might afford preferential treatment, post larger deviations with the hope of ascertaining better information. This suggests that both information asymmetry and experience interactively affect the way analysts balance public and private information to form forecasts. Our model also shows that the effect of information asymmetry and analyst experience on rational deviation depends on the dispersion of and the correlation between public and private signals. The quality of private information channels, the informativeness of consensus, and the connectedness of public and private signals significantly affect how analysts form forecasts.
To examine the validity of our analytical findings, we empirically investigate how analyst experience and information asymmetry affect forecast deviation from the consensus.3 Our empirical results confirm our theoretical predictions that rational bias (i.e., deviation from the consensus) is concavely related to analyst experience and positively associated with information asymmetry. Novice analysts and highly seasoned analysts post forecasts with smaller bias, while moderately seasoned analysts post estimates that deviate further from the consensus. Moderately seasoned analysts can benefit from a positive bias if they have the confidence to separate from the herd and access reliable private sources of information.
3 Clement and Tse (2005) are the closest to our analysis; however, while they admit that the observed link between inexperience and herding can be a complex issue that might have roots other than just career concerns, they do not provide detailed insight into what drives this complexity and how it develops.
These results are stronger for earlier forecasts versus later ones. As analysts become highly seasoned, with external information networks and superior skills in forecasting, they find appeasing management to purchase information less useful (or less costly in the same way that relationship lending affords firms cheaper capital). As one might expect, when information asymmetry increases, rational deviation from the consensus also increases. The degree of information asymmetry faced by seasoned analysts has a significant positive effect on the forecast deviation (information asymmetry also affects novice analysts but not as extensively). This study contributes to a growing literature on rational bias in analyst forecasting. Motivated by recent works by Bernhardt et al. (2006), Beyer (2008), Chen and Jiang (2006), Lim (2001), and Mest and Plummer (2003), we take an integrated modeling approach to arrive at a rational forecast bias, which is based on Bayesian updating of public consensus and endogenously acquired private information. Our approach is unique in that the forecast bias affects how public and private information are combined. Specifically, analysts can use bias to improve forecast accuracy by purchasing private information but can also learn from other analysts by following the consensus. Our analytical findings, which are empirically confirmed, show that unlike behavioral models, analysts can rationally herd. Unlike signaling models, seasoned analysts do not always post biased estimates. It also shows the value of the relationships that both moderately experienced and highly seasoned analysts leverage to gain reliable private information. These results are of practical interest because they provide evidence that analysts can and may optimally bias their earnings estimates and that this optimal bias differs across both analyst experience and information asymmetry. This knowledge may be useful to brokerage houses for training purposes and/or evaluation of analyst performance, particularly across industries with different levels of information asymmetry.
2.2 Theoretical Design
Recent studies show that earnings forecasts reflect public information and private assessments (Boni and Womack 2006; Chen and Jiang 2006; Lim 2001; Ramnath 2002). As Bernhardt et al. (2006) note, analysts’ forecasts of public information partly relate to the consensus, common information, and unanticipated market-wide shocks. We thus model analyst earnings forecasts using two components of earnings information: common and idiosyncratic, with some uncertainty about each component. In doing so, we also assume, as does Lim (2001), that the idiosyncratic component directly relates to the private information analysts can obtain from the firm by posting an optimistic view of the firm’s prospects (Nutt et al. 1999). Our typical analyst also observes a previously posted forecast whereby the analyst can learn about both the common and idiosyncratic components of earnings by incorporating previously disclosed information. Since we assume analysts are Bayesians, they can learn from previous forecasts as an alternative to posting a positive
deviation to purchase private signals from the firm. Since these prior estimates partially reflect private information, the analysts may heavily weigh them into their assessment, depending on the perceived information asymmetry and experience of the previous analysts.4
In this section, we model analyst earnings forecasting in the presence of forecast uncertainty. Following recent studies (Lim 2001; Ramnath 2002), our analyst has an unconditional estimate, E = X + e, about earnings, X ~ N(0, σ^2), with some uncertainty, e ~ N(0, τ_e^{-2}). As in Chen and Jiang (2006), our analyst also observes a noisy consensus forecast, E_c = X + e_c, with a consensus uncertainty, e_c ~ N(0, τ_c^{-2}). As a Bayesian, our analyst forms a conditional forecast, F = wE + (1 − w)E_c, by combining her unconditional estimate with the consensus. The optimal weight in her conditional forecast minimizes her squared forecast error.5
As in Lim (2001), optimal forecasting, however, is endogenous to private information acquisition. Our analyst's forecast precision relates to the private information analysts can obtain from the firm by posting an optimistic view of the firm's prospects. That is to say, while forecast precision, τ_e, consists of a self-accuracy component, τ_0, reflecting the analyst's expertise, the forecast precision can be improved by τ(b) through placating managers by posting positively biased, b, forecasts. The marginal precision per bias, ∂τ/∂b, reflects the analyst's information asymmetry. This is because an analyst faced with greater information asymmetry should derive a larger marginal benefit from a biased forecast. Since the conditional forecast partially reflects private information, the analyst's optimal rational forecast (and forecast bias) depends on the analyst's expertise and information asymmetry. As noted before, the objective of the analyst is to minimize her squared forecast error, F − X:

min_{w|b} E[(F − X)^2] = min_{w|b} E[(wE + (1 − w)E_c − X)^2],   (2.1)

where E[·] is the expectation operator. Let us assume that the analyst's estimate and the consensus are correlated, E[e e_c] = ρ(τ_e τ_c)^{-1}. This correlation reflects the extent to which the analyst uses common private channels. The analyst's objective, Eq. 2.1, can be rewritten as

min_{w|b} [ w^2 (τ_0 + τ(b))^{-2} + (1 − w)^2 τ_c^{-2} + 2ρw(1 − w)(τ_0 + τ(b))^{-1} τ_c^{-1} ].   (2.2)

4 Here, we focus only on the case of one-period sequential forecasting. However, we believe that the main implications of our model hold true for a multi-period sequential forecasting setting. Since we assume that the probabilistic characteristics of different components are known and analysts can gauge each other's experience and the amount of information asymmetry perfectly, there would be no incentive to deviate from posting commensurate optimal, rational forecasts. If expert analysts intentionally deviate from their optimal forecasts, no other analyst can compensate for their experience or information asymmetry [for more discussion, see Trueman (1990)].
5 For more details on Bayesian methods of inference and decision making, see Winkler (1972).
The optimal weight which solves the aforementioned objective function is

w = [(τ_0 + τ(b))^2 − ρτ_c(τ_0 + τ(b))] / [(τ_0 + τ(b))^2 + τ_c^2 − 2ρτ_c(τ_0 + τ(b))].   (2.3)
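To make the weighting rule concrete, the short Python sketch below minimizes the objective in Eq. 2.2 on a grid of weights and checks the result against the closed form in Eq. 2.3 as reconstructed above. The parameter values are purely illustrative choices of ours, not values taken from the chapter.

import numpy as np

def objective(w, tau_0, tau_b, tau_c, rho):
    """Squared forecast error of Eq. 2.2 for weight w (variance of w*e + (1-w)*e_c)."""
    tau_e = tau_0 + tau_b  # analyst's conditional precision, tau_0 + tau(b)
    return (w**2) / tau_e**2 + ((1 - w)**2) / tau_c**2 + 2 * rho * w * (1 - w) / (tau_e * tau_c)

def optimal_weight(tau_0, tau_b, tau_c, rho):
    """Closed-form minimizer of Eq. 2.2, i.e., Eq. 2.3."""
    tau_e = tau_0 + tau_b
    num = tau_e**2 - rho * tau_c * tau_e
    den = tau_e**2 + tau_c**2 - 2 * rho * tau_c * tau_e
    return num / den

# Illustrative parameter values (not taken from the chapter)
tau_0, tau_b, tau_c, rho = 1.2, 0.4, 1.0, 0.3

grid = np.linspace(-1.0, 2.0, 200001)
w_grid = grid[np.argmin(objective(grid, tau_0, tau_b, tau_c, rho))]
w_star = optimal_weight(tau_0, tau_b, tau_c, rho)
print(f"grid search: {w_grid:.4f}   closed form (Eq. 2.3): {w_star:.4f}")

Both lines of output agree, confirming that the closed form is the minimizer of the quadratic objective for any admissible parameter choice.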
Proposition 1 Assuming positively biasing forecasts have increasing but diminishing returns, i.e., ∂τ/∂b > 0 and ∂^2τ/∂b^2 < 0, the optimal weight on the analyst's own unconditional estimate is concavely related to the analyst's expertise (i.e., self-accuracy, τ_0). For analysts with self-accuracy less (more) than −τ(b) + 0.5ρ^{-1}τ_c, the optimal weight on the analyst's own unconditional estimate increases (decreases) with expertise. At the optimal weight, the analyst's conditional precision, τ_0 + τ(b), equals 0.5ρ^{-1}τ_c.
Proof See Appendix 1.
As the analyst's expertise, τ_0, increases, her unconditional estimate precision rises relative to the consensus estimate precision. The analyst would then gain more accuracy by placing more weight on her own assessment, consistent with the findings of Chen and Jiang (2006). Forecast bias, however, affects how the analyst's own unconditional estimate weighs into the optimal forecast. As forecast bias increases, the analyst relies more on her own assessment because a larger bias improves precision through more privately acquired information. However, this reliance on private information is limited. When the precision of private information reaches a certain limit, 0.5ρ^{-1}τ_c, the analyst starts to reduce her reliance on her own assessment. Since the consensus contains information, an analyst does not need to rely solely on private information to improve her overall accuracy. Interestingly, the threshold on the reliance on private information is inversely related to the correlation between the analyst's own and consensus precisions. When the signals from the consensus and the analyst are highly correlated, the analyst need not post largely biased forecasts to improve her accuracy. At low correlations, however, the analyst almost exclusively relies on her own assessment and uses private channels heavily by posting biased forecasts. As Mest and Plummer (2003) show, when uncertainty about the company is high, management becomes a more important source of information. This confirms Bernhardt et al.'s (2006) contention that signal "correlatedness" affects analyst forecasting.
Proposition 1 has an interesting testable implication. As noted, following previous studies (Lim 2001; Chen and Jiang 2006), our analysts minimize the squared error to arrive at the optimal bias. In a cross section of analysts, this implies that as long as all analysts follow the squared-error-minimization rule, then the implication of Proposition 1 holds empirically: there would be a concave relation between experience and the weight an analyst places on her own
unconditional assessment. However, since the weighted average of self-assessment and consensus belief is theoretically identical to the forecast error, this then also implies that in the cross section, analysts' forecast errors and their experience are also related concavely.
Note that our analysts choose how to combine their own unconditional forecast and the consensus forecast to arrive at their reported forecast. They do so by accounting for the inherent error in each of these forecasts and choose a weight that minimizes the squared forecast error: the difference between the reported forecast (i.e., the conditional forecast) and the observed earnings. So for a novice analyst, the choice is what weight to put on her unconditional forecast, knowing that the forecasts reported by other analysts embody much more experience and perhaps less information asymmetry. In the extreme case, where the novice analyst has no confidence in her own assessment, the optimal weight for her unconditional forecast is zero. She will fully herd. A highly seasoned analyst does the same thing: she also chooses the weight she puts on her assessment vis-à-vis the consensus. In the alternate extreme case, where the highly seasoned analyst has utmost confidence in her assessment, she puts 100 % weight on her unconditional forecast. Moderately seasoned analysts thus fall somewhere in between; the weight they put on their own unconditional forecasts is between zero and one. In such a setting, all analysts arrive at their own squared-error-minimizing forecast. From the outset, however, as econometricians, we can only observe their reported conditional forecast. This means that we can only focus on the implications of analysts' squared-error minimization by exploiting cross-sectional differences in the data. As noted, the observed error is equal to the weighted average of the unconditional forecast and the consensus belief. This implies that the observed error is directly linked with the unobservable weight. However, from Proposition 1, we know that the unobservable optimal weight is concavely related to experience, which, in turn, implies that, in the cross section of analysts, the observed forecast error is concavely linked with experience as well.
Proposition 2 Assuming positively biasing forecasts have increasing but
diminishing returns, i.e., ∂τ/∂b > 0 and ∂^2τ/∂b^2 < 0, the optimal weight on an analyst's own unconditional estimate increases monotonically with information asymmetry (i.e., the marginal accuracy for bias, ∂τ/∂b).
Proof See Appendix 1.
As the efficacy of the analyst's private information acquisition increases, that is, as ∂τ/∂b rises, the analyst gains more precision with every cent of bias. More resourceful analysts, such as those employed by large investment houses, either have better private channels or can gain more from the same channels (see, e.g., Chen and Jiang 2006; Clement and Tse 2005). As such, these analysts' estimates become more accurate more rapidly as they bias their forecasts to purchase information from the firm. From Proposition 1, we know that at the optimal weight, the sum of the analyst's own estimate and consensus precision is constant.
As information asymmetry increases, i.e., marginal precision per bias rises, a larger bias is needed to obtain the optimal weight of the analyst’s estimate and the consensus. Interestingly, as an analyst becomes more experienced, her optimal weight on her own estimate becomes less sensitive to information asymmetry.
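As a rough numerical companion to Propositions 1 and 2, the sketch below traces how the optimal weight of Eq. 2.3 (as reconstructed above) responds to the analyst's own precision and to the extra precision purchased through bias. The parameter values are again illustrative choices of ours, and the printout shows qualitative shapes only, not the exact thresholds derived in the propositions.

def optimal_weight(tau_e, tau_c, rho):
    # Eq. 2.3 with tau_e = tau_0 + tau(b), the analyst's conditional precision
    num = tau_e**2 - rho * tau_c * tau_e
    den = tau_e**2 + tau_c**2 - 2 * rho * tau_c * tau_e
    return num / den

tau_c, rho = 1.0, 0.3  # illustrative consensus precision and signal correlation

# Proposition 1 flavor: the weight first rises and then falls as own precision grows
for tau_e in (0.5, 1.0, 2.0, 4.0, 8.0, 16.0):
    print(f"tau_e = {tau_e:5.1f}  ->  w = {optimal_weight(tau_e, tau_c, rho):.3f}")

# Proposition 2 flavor: for a fixed self-accuracy tau_0, a larger precision gain tau(b)
# (easier to obtain when the marginal precision per bias is high) pulls more weight
# onto the analyst's own estimate
tau_0 = 0.8
for tau_gain in (0.0, 0.2, 0.5, 1.0):
    print(f"tau(b) = {tau_gain:.1f}  ->  w = {optimal_weight(tau_0 + tau_gain, tau_c, rho):.3f}")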
2.3 Empirical Method
We empirically test selected comparative statics to draw conclusions about the validity of our analytical results. We focus our attention on analytical predictions that result directly from the endogeneity of private information acquisition. Specifically, we test three of our model's implications: (1) whether analyst experience and forecast deviation are concavely related, (2) whether information asymmetry and analyst forecast deviation are positively related, and (3) whether there is interplay between the effects of experience and information asymmetry.
We follow Chen and Jiang (2006) and empirically define bias as the difference between an analyst's forecast and the consensus estimate. We calculate consensus using only the most recent estimate from each analyst and include only those estimates that are less than 90 days old (Lim 2001).
We first examine the impact of analyst experience on forecast deviation. As analysts become more experienced, they post a more positive (i.e., increasing) deviation to improve their information gathering, resulting in better forecasts. Proposition 1 from the theoretical model suggests that this relationship is nonlinear. Highly seasoned analysts achieve forecast accuracy on their own without relying on procured information, or they possess valuable relationships with management that do not necessitate purchasing information, causing this nonlinearity. Empirically, we expect the relationship between deviation and experience to be concave (i.e., a positive coefficient on experience and a negative coefficient on experience squared). To test our contentions, we estimate the following OLS panel regression:

DFC_{i,t} = α_0 + η_0 Experience_{i,t} + η_1 Experience_{i,t}^2 + β_0 NumRevisions_{i,t} + β_1 SameQuarter_{i,t} + β_2 Reg.FD_{i,t} + Φ_0 X_{t−4} + Φ_1 I + Φ_2 T + ε,   (2.4)

where DFC is the Deviation from the consensus for analyst i in quarter t. We have several analyst-specific controls: Experience, Experience squared, and NumRevisions. Following our analytical model, analysts with more experience should be better at forecasting earnings and should have better channels of private information. Following Leone and Wu (2007), we measure Experience in two ways: using an absolute measure (the natural log of the quarters of experience) and a relative measure (the natural log of the quarters of experience divided by the
average of this measure for analysts following firms in the same industry). We include Experience^2 to test for a nonlinear relationship between Deviation and Experience. To control for varying degrees of accuracy due to intertemporal Bayesian updating, we include NumRevisions, which is the number of times the analyst revises her estimate. Studies such as Givoly and Lakonishok (1979) suggest that a revision indicates that new information has been acquired (i.e., purchased). This suggests a positive relationship between the number of revisions and the resulting deviation; thus we expect the coefficient, β_0, to be positive. Same quarter is an indicator variable for horizon value, where the variable is equal to one if the estimate occurs in the same quarter as the actual earnings report is released and zero otherwise. Including Same quarter can help us understand whether analysts are more likely to be optimistic early in the period than late in the period (Mikhail et al. 1997; Clement 1999; Clement and Tse 2005).6 We expect the coefficient on this variable, β_1, to be negative. Reg. FD is an indicator variable equal to one if the quarter date is after Reg. FD was passed (October 23, 2000), and zero otherwise.7 Although the extant literature is divided on whether or not Reg. FD has decreased the information asymmetry for analysts, if information asymmetry has decreased and more public information is available, analysts will be less likely to purchase private information, and the coefficient on Reg. FD, β_2, would be negative. Indeed, Zitzewitz (2002) shows that although private information has decreased post Reg. FD, the amount of public information has improved. Both Brown et al. (2004) and Irani (2004) show that information asymmetry decreased following Reg. FD.8
We control for firm effects through the inclusion of firm-specific variables such as Brokerage reputation, Brokerage size, Accruals, Intangible assets, and Return st. dev. (vector X in Eq. 2.4). Following Barber et al. (2000), we use the total number of companies a brokerage house follows per year as a proxy for brokerage size. Brokerage house reputation may play an integral role in access to private information (Agrawal and Chen 2012). To calculate broker reputation, we start with the Carter and Manaster (1990), Carter et al. (1998), and Loughran and Ritter (2004) rankings. When a firm goes public, the prospectus lists all of the firms that are in the syndicate, along with their shares. More prestigious underwriters are listed higher in the underwriting section. Based upon where the underwriting brokerage firm is listed, it is assigned a value of 0–9, where 9 is the highest ranking. As Carter and Manaster (1990) suggest that prestigious financial institutions provide a lower level of risk (i.e., lower information asymmetry), we include control variables for these characteristics of the firm and expect the relationships to be negative.9
6 Horizon value and the NumRevisions are highly correlated at 65 %. We therefore orthogonalize horizon value in the equation to ensure that multicollinearity is not a problem between these two variables.
7 http://www.sec.gov/rules/final/33-7881.htm.
8 See Lin and Yang (2010) for a study of how Reg. FD affects analyst forecasts of restructuring firms.
9 Brokerage reputation and Brokerage size are highly correlated at 67 %. We therefore orthogonalize brokerage reputation in the equation to ensure that multicollinearity is not a problem between these two variables.
As suggested by Bannister and Newman (1996) and Dechow et al. (1998), companies that consistently manage their earnings are easier to forecast. We include Accruals (in $ millions) to control for the possibility that earnings are easier to forecast for companies that manage their earnings, resulting in less information "bought" from the company.10 We expect the coefficient on this control variable to be negative. We include Intangible assets based on the fact that companies with greater levels of intangible assets are more difficult to forecast given the uncertainty of future performance (Hirschey and Richardson 2004).11 Thus, we expect the coefficient to be positive. We include Return standard deviation to proxy for firm risk. More volatile firms are less likely to voluntarily issue public disclosures, making it necessary for analysts to purchase private information (Waymire 1986). The standard deviation is defined as the monthly standard deviation of stock returns over a calendar year. All firm-level variables are lagged one year (i.e., four quarters). We include both industry (one-digit SIC code) and time indicators to control for industries/times where it is easier to forecast (see, e.g., Kwon (2002) and Hsu and Chiao (2010) on how analyst accuracy differs across industries).12 Finally, a formal fixed effects treatment around analysts is taken to ensure that standard errors are not understated.
Our theoretical analysis also examines the effect of information asymmetry on forecast deviation. In an environment with high information asymmetry, analysts without adequate, reliable resources need to post a positive deviation to access information, as shown in Proposition 2. We expect a positive coefficient on Information asymmetry. We test this relationship by estimating the following OLS panel regression:

DFC_{i,t} = α_0 + λ_0 InformationAsymmetry_{i,t} + β_0 NumRevisions_{i,t} + β_1 SameQuarter_{i,t} + β_2 Reg.FD_{i,t} + Φ_0 X_{t−4} + Φ_1 I + Φ_2 T + ε,   (2.5)

where t is the quarter in which we measure deviation and i denotes the ith analyst. We define Information asymmetry three different ways: (1) the inverse of analyst coverage, i.e., 1/(number of brokerage houses following the firm); (2) the standard deviation of the company's forecasts (102); and (3) relative firm size, i.e., the difference between the firm's assets and the quarterly median assets for the industry. These definitions are constructed such that the direction of the expected marginal coefficient is congruent with that of information asymmetry.
10 Following Stangeland and Zheng (2007), we measure Accruals as income before extraordinary items (Data #237) minus cash flow from operations, where cash flow from operations is defined as net cash flow from operating activities (Data #308) minus extraordinary items and discontinued operations (Data #124).
11 Following Hirschey and Richardson (2004), we calculate Intangibles as intangible assets to total assets (Data 33/Data #6).
12 As an alternate proxy for industry fixed effects, Fama-French 12-industry classifications (Fama and French 1997) are used. Results using these proxies are available upon request.
The first definition, the inverse of analyst coverage, is based on evidence that the level of financial analyst coverage affects how efficiently the market processes information (Bhattacharya 2001). Analyst coverage proxies for the amount of public and private information available for a firm and, therefore, captures a firm’s information environment (Zhang 2006). The second definition, Forecast dispersion, is supported in papers such as Krishnaswami and Subramaniam (1998) and Thomas (2002), among others. Lastly, we use Relative firm size as a proxy for information asymmetry. This is equally supported by the literature, including but not limited to Petersen and Rajan (1994) and Sufi (2007). The firm-specific control variables are the same as in Eq. 2.4, and a fixed effects treatment around analysts is taken. As previously explained, our model allows for the interaction of experience and information asymmetry; the concavity of the relationship between experience and Deviation may change at different levels of information asymmetry. Thus, we estimate Eq. 2.4 for analysts with both high and low levels of information asymmetry (i.e., based on relation to the industry-quarter median forecast dispersion). We expect the coefficients on Experience and Experience squared to increase when there is more information asymmetry. Similarly, we demonstrate that experience affects the link between information asymmetry and deviation. While the deviation of the novice and highly seasoned analysts is only marginally affected by information asymmetry, the deviation of the moderately experienced analyst is highly affected by information asymmetry. Since Proposition 2 suggests that information asymmetry will affect analysts differently based on their experience, we segment our sample into three groups: novice, moderately experienced, and highly seasoned analysts. To form these experience groups, we create quarterly terciles of analysts based on their experience. The bottom third comprises the novice analysts; the middle third, moderately seasoned analysts; and the top third, highly seasoned analysts. We examine the model in Eq. 2.5 separately for our three experience subsamples. The coefficient on information asymmetry should be larger for experienced analysts. It is possible that a resolution of idiosyncratic uncertainty makes deviation insensitive to both analyst experience and information asymmetry. Regulations such as Reg. FD were enacted to reduce the uncertainty around companies (De Jong and Apilado 2008). If Reg. FD reduced firm-specific uncertainty more than common uncertainty, then the coefficients on Experience, Experience squared, and information asymmetry should be smaller after Reg. FD. In other words, if Reg. FD was effective in “leveling the playing field” for analysts with regard to preferential treatment of some analysts over others, the implications of this model should no longer exist. Extant evidence suggests that by prohibiting exclusive private communication of pertinent information, Reg. FD would cause overall earnings uncertainty to decline (Baily et al. 2003). In Eqs. 2.4 and 2.5, we simply control for any effect that Reg. FD might have. By using interactive terms, however, we can explore this relationship more fully by offering a comparison of the relationship between Experience/ Information asymmetry and Deviation both with and without Reg. FD. If most of
the decrease in uncertainty comes from improving the common component of earnings (or equivalently, reducing the amount of private information), then the enactment of Reg. FD should make any remaining private information even more valuable and should lead to an increase in the importance of information asymmetry on deviation. Further, this effect should be disparate based on analyst experience. Specifically, experienced analysts (moderately experienced more so than highly seasoned) should see a drop in their deviation from consensus based on an increase in information asymmetry since their private information is no longer quite so private, i.e., there is less information to buy. Novice analysts, on the other hand, will likely be relatively unaffected since they depend mostly on consensus anyway. In short, since the intent of the regulation is to even the playing field for analysts, the effect of Reg. FD on the impact of experience could be a reduction in its importance. This would especially be the case with relative experience. If Reg. FD achieved what it set out to achieve, we should see a reduction (i.e., flattening) of the concavity of the experience relationship with deviation from consensus.
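For readers who want to trace the general shape of this design, the following is a minimal sketch of how a specification like Eq. 2.4 could be estimated in Python with statsmodels. The column names, the data file, and the use of clustered standard errors are our own illustrative assumptions rather than the chapter's actual code or data; the C(analyst) term stands in for the analyst fixed effects treatment described above.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical analyst-firm-quarter panel with one row per forecast.
df = pd.read_csv("forecasts.csv")

formula = (
    "dfc ~ experience + I(experience**2) + num_revisions + same_quarter + reg_fd"
    " + broker_reputation + broker_size + accruals + intangible_assets + return_std"
    " + C(industry) + C(quarter) + C(analyst)"  # industry, time, and analyst effects
)

res = smf.ols(formula, data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["analyst"]}
)
print(res.params.filter(like="experience"))

# Eq. 2.5 replaces the experience terms with an information asymmetry proxy, e.g.
# "dfc ~ info_asymmetry + num_revisions + same_quarter + reg_fd + ..."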
2.4 Data
Consistent with Brown and Sivakumar (2003) and Doyle et al. (2003), we define earnings as the First Call reported actual earnings per share.13 Our sample includes companies based in the United States with at least two analysts, consists of 266,708 analyst-firm-quarter forecasts from 1995 to 2007, and includes all analysts' revisions. Variables are winsorized at the 1 % level to ensure that results are not biased by outliers. First Call data is well suited for examining analyst revisions because most of the analysts' estimates in the data have the date that they were published by the broker. These revisions are reflected daily, which aids in understanding the changes in deviation based on changes in the information environment. One limitation of First Call data is that it identifies only brokerage houses, not individual analysts. Following Leone and Wu (2007), we make the assumption that for each firm quarter there is only one analyst in each brokerage house following the firm. It is noteworthy that this biases our study against finding a nonlinear impact of analyst experience.
Panel A of Table 2.1 shows the summary statistics of the variables used in our analysis. The average Deviation from the consensus is 5.16¢. However, there is wide dispersion, from −54.67¢ to 112.50¢, as compared to an average Forecast error of 3.79¢, with a range of −131¢ to 72¢. The analysts in our sample have on average 12.71 quarters of Experience (the most experienced analyst has 13 years of experience, and the least experienced analyst has no prior experience following the firm). Analysts revise their estimates 2.49 times per quarter. The companies they
13 As a robustness test, we use I/B/E/S data. Results may be found in Appendix 2.
Table 2.1 Data characteristics

[Panel A. Summary statistics (mean, median, standard deviation, minimum, and maximum) for the variables used in the analysis: DFC (¢), Quarters follow, Experience, Relative experience, Analyst coverage^{-1}, Forecast dispersion (¢), Relative firm size, Forecast error (¢), Same quarter, Broker reputation, Broker size, Reg. FD, Number of revisions, Accruals, Intangible assets, and Return std. dev. Sources are First Call, COMPUSTAT, CRSP, Carter and Manaster (1990), the Securities and Exchange Commission, and own calculations; N = 266,708 for all variables except Forecast dispersion (N = 253,673) and Relative firm size (N = 217,126). Panel B. Pairwise correlations among the same variables.]

Analyst information comes from First Call for the term 1995–2007. Company information comes from Compustat/CRSP. DFC is the bias, defined as the difference between an analyst's forecast and the consensus estimate, where we calculate consensus using only the most recent estimate from each analyst and include only those estimates that are less than 90 days old. Quarters follow is the number of quarters a brokerage firm has been following a company. Experience is the natural log of the quarters of experience. Relative experience is the natural log of the quarters of experience divided by the average of this measure for analysts following firms in the same industry. Analyst coverage^{-1} is equal to 1/(number of brokerage houses following the firm), and forecast dispersion is the standard deviation of the company's forecasts (102). Relative firm size is the difference between the firm's assets and the quarterly median assets for the industry. Forecast error is the difference between the estimate and the actual earnings. Same quarter is a dummy variable equal to one if the estimate is in the same quarter as the actual and zero otherwise. Brokerage reputation is a ranking of brokerage reputation, where 0 is the worst and 9 is the best. Brokerage size is the natural log of the number of companies per quarter a brokerage house follows. Reg. FD is an indicator variable that takes on a value of 1 if Reg. FD is in effect and 0 otherwise. NumRevisions is the total number of revisions the analyst made during the quarter for the quarter-end earnings. Accruals are income before extraordinary items minus cash flow from operations, where cash flow from operations is defined as net cash flow from operating activities minus extraordinary items and discontinued operations. Intangible assets are intangible assets to total assets. Return std. dev. is the standard deviation of the 12-month returns. Panel A shows the univariate statistics and Panel B shows the pairwise correlations. Asterisks in Panel A represent statistical significance relative to zero. Bolded numbers in Panel B represent 5 % or 1 % significance.
follow have a Return standard deviation of 13 %, Intangibles of 14 %, and Accruals of $0.44 million on average.
Panel B of Table 2.1 shows the correlation matrix for the variables used in the analysis. There are notable significant relations among the variables: both Forecast error and Experience are significantly related to Deviation from consensus. This relation actually foreshadows one of the main results of the paper, which is that private information can be used to decrease forecast error. The correlation found in this table, 0.89, is not a problem econometrically because fitted values are used in the specification found in Table 2.5. The only other correlations that would be considered a problem econometrically are for variables not used in the same specification, i.e., Experience and Relative experience.
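To fix ideas on how the deviation from consensus is built from such a detail file, here is a minimal pandas sketch. The file name and column names are hypothetical, and the code follows only the rules stated above (each analyst's most recent estimate, restricted to estimates less than 90 days old); finer details of the chapter's construction are not reproduced.

import pandas as pd

# Hypothetical First Call-style detail file: one row per estimate with columns
# firm, fiscal_quarter, analyst, forecast_date, estimate.
est = pd.read_csv("first_call_detail.csv", parse_dates=["forecast_date"])

def consensus_asof(group: pd.DataFrame, asof: pd.Timestamp) -> float:
    """Mean of each analyst's most recent estimate that is less than 90 days old as of `asof`."""
    live = group[(group["forecast_date"] <= asof) &
                 (asof - group["forecast_date"] < pd.Timedelta(days=90))]
    latest = live.sort_values("forecast_date").groupby("analyst").tail(1)
    return latest["estimate"].mean()

def add_dfc(group: pd.DataFrame) -> pd.DataFrame:
    """Deviation from consensus (DFC) for every forecast in one firm-quarter."""
    group = group.copy()
    group["dfc"] = [
        # whether the analyst's own outstanding estimate should be excluded from her
        # consensus is not spelled out here; this simple version leaves it in
        row.estimate - consensus_asof(group, row.forecast_date)
        for row in group.itertuples()
    ]
    return group

est = est.groupby(["firm", "fiscal_quarter"], group_keys=False).apply(add_dfc)
print(est[["firm", "fiscal_quarter", "analyst", "forecast_date", "dfc"]].head())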
2.5 Empirical Findings
Table 2.2 presents the results of our estimation from Eq. 2.4, which examines how analyst experience affects analyst deviation from the consensus. The coefficients on our control variables all exhibit the expected signs. There is a negative relation between the horizon value (Same quarter) and Deviation, which suggests that analysts are more likely to be optimistic early in the period than late in the period. Supporting the contentions of Carter and Manaster (1990), there is a negative relationship between the brokerage characteristics – Reputation and Size – and Deviation. There is also a negative relationship between Reg. FD and Deviation, suggesting that the enactment of Reg. FD and its mandatory indiscriminate information revelation have made forecasting easier. NumRevisions is positively related to Deviation; revisions are made when valuable information content is received (Givoly and Lakonishok 1979), suggesting that private information is paid for through incremental deviation of analysts' forecasts over time. We find that higher Accruals lead to less deviation from the consensus. Firms that actively manage earnings are easier to forecast; thus analysts do not have to purchase private information. The positive sign on Intangible assets is expected, as more intangible assets make forecasting more difficult, necessitating the procurement of private information. Finally, as the standard deviation of Returns increases, future firm performance is more difficult to predict.
Turning to our variables of interest, we find that there is a highly significant positive relationship between Experience and Deviation, as evidenced by a 0.829¢ increase in deviation for every additional unit of experience (specification 1) and a 0.581¢ increase in deviation for every additional unit of relative experience (specification 2).14 The results in specifications (3) and (4) confirm our contention that this relationship is not linear. The squared Experience variable
14 Inasmuch as the Experience variable is transformed using the natural logarithm, one unit of experience is approximately equal to two quarters of experience. For tractability, we refer to this as a unit in the empirical results.
Table 2.2 Impact of experience on analyst forecasting

[Columns 1–4: all estimates; columns 5–6: early estimates (longer horizon); columns 7–8: late estimates (shorter horizon); columns 9–10: estimates in a low information asymmetry environment; columns 11–12: estimates in a high information asymmetry environment. Rows report coefficients and standard errors (in brackets) on Experience, Experience^2, Relative experience, RelExp^2, Same quarter, NumRevisions, Reg. FD, Broker reputation, Broker size, Accruals, Intangible assets, Return std. dev., and a constant, along with firm, industry, and time fixed effects, the number of observations, the number of brokerage houses, and R-squared.]

Table 2.2 presents the results of our estimation from Eq. 2.4. Experience is the natural log of the number of quarters the analyst has followed the firm for which forecast error is calculated. Relative experience is the analyst experience scaled by the average analyst experience (same industry). Information asymmetry (IA) is proxied based on the forecast dispersion – the standard deviation of estimates. Same quarter is a dummy variable equal to one if the estimate is in the same quarter as the actual and zero otherwise. Same quarter is orthogonalized (on NumRevisions) to ensure that multicollinearity is not a problem between these two variables. NumRevisions is the total number of revisions the analyst made during the quarter for the quarter-end earnings. Reg. FD is an indicator variable that takes on a value of 1 if Reg. FD is in effect and 0 otherwise. X is a vector of firm-specific variables including Broker reputation, Broker size, Accruals, Intang. assets, and Return std. dev. Brokerage reputation is a ranking of brokerage reputation, where 0 is the worst and 9 is the best. Brokerage size is the natural log of the number of companies per quarter a brokerage house follows. Brokerage reputation is orthogonalized (on brokerage size) to ensure that multicollinearity is not a problem between these two variables. Accruals are the accrued revenue/liabilities utilized for earnings smoothing. Intang. assets are the covered firm's intangible assets value relative to its total assets. Return std. dev. is the standard deviation of the covered firm's return. I is a vector of one-digit SIC industry dummies. T is a vector of time dummies. A formal fixed effects treatment around analysts is employed. Standard errors are reported in brackets. *, **, and *** indicate significance levels of 10 %, 5 %, and 1 %, respectively (two-tailed test). In columns 1–4 we use the full sample. In columns 5–8, we use the median value of horizon value to divide the firms into early (columns 5–6) and late estimates (7–8) groups to perform the analysis. In columns 9–12, we use the median value of forecast dispersion to divide the firms into low information asymmetry (columns 9–10) and high information asymmetry (11–12) groups to perform the analysis.
(both definitions) is significantly and negatively related to Deviation (−0.595¢ in specification 3 for Experience and −1.735¢ in specification 4 for Relative experience). Confirming the analytical results, the empirical results suggest that there exists a trade-off between public and private information. The results suggest that when analysts first begin to follow a firm and gain experience, they are more likely to post small deviations, indicating that they weigh public information (i.e., previous forecasts and the consensus) more heavily than private information. A possible explanation is that they have not yet acquired the preferred access to information that long-term relationships afford experienced analysts or the analytical expertise to wade through noisy information. As analysts gain confidence in their analytical ability and relationships with key personnel within the firm, however, they are more likely to post large deviations to gain private information. Highly seasoned analysts put almost no weight on the consensus in creating their earnings forecasts and rely almost exclusively on their own private information. This information costs them very little, either because they have honed their analytical ability so well that they do not need the information or because the cost of said information is reduced due to their preferential relationship with the firm.
If our hypotheses are true, we would expect that analysts are more optimistic early in the period and less so (and perhaps even pessimistic) later in the period (shortly before earnings are announced).15 In order to test this, we segment our sample by the median horizon value into two categories, early estimates (i.e., longer horizon) and late estimates (i.e., shorter horizon). The results are substantially stronger for earlier estimates as compared to late estimates. The coefficient on the Experience variable is 6.751 when the horizon value is long and 1.945 when there is a short horizon value (specification 5 vs. 7). The coefficients on the squared Experience variable are also more extreme for the earlier analysts (−1.341 vs. −0.360), showing that the concavity of the function is more pronounced early in the forecasting period and less so later. This suggests that the distinction between moderately experienced analysts and novice/highly experienced analysts is sharper earlier in the estimation period. Essentially, for every additional quarter of experience, there is a 5.410¢ increase in deviation when the horizon is long and a 1.585¢ increase in deviation when the horizon is short. Results for Relative experience are qualitatively identical, albeit with a reduced difference between early and late. Results are qualitatively identical when we divide horizon into terciles and define early as the highest tercile and late as the lowest. We can conclude from these results that analysts are indeed more optimistic earlier in the estimation period.
Next, we rerun the model in Eq. 2.4 on low and high information asymmetry subsamples (using forecast dispersion and segmenting at the median), shown in Table 2.2, columns 9–12. As Proposition 2 suggests, the empirical results indicate that the impact of experience on optimal deviation is smaller for analysts when information asymmetry is low. One additional quarter of Experience increases the
15 We are grateful to an anonymous referee for this point.
analyst deviation by 0.950¢ (specification 9) in a low information asymmetry environment versus 4.745¢ (specification 11) in a high information asymmetry environment. One additional unit of Relative experience increases the analyst Deviation by 1.781¢ (specification 10) in a low information asymmetry environment versus 5.572¢ (specification 12) in a high information asymmetry environment. We note once again that this relationship is not linear, as evidenced by the statistically significant negative squared term. This implies that moderately experienced analysts post higher deviations from consensus than do their less and more experienced colleagues.
Table 2.3 provides alternate specifications around the Reg. FD control variable. Specifications (1) through (4) exclude the Reg. FD control to ensure that results are not reliant on its inclusion. Overall, results from Table 2.2 remain consistent. That said, we note that Relative experience in Specification (2) is no longer statistically significant. The nonlinearity of this proxy for Experience is substantiated in Specification (4), albeit muted; this is likely because we are no longer controlling for the impact of Reg. FD on the deviation from consensus, which we would expect to affect Relative experience more than Experience (i.e., highly seasoned analysts no longer benefit from preferred relationships through better access to information), operating through the concavity of the function. In other words, without controlling for Reg. FD independently, the "average" impact of the proxy (i.e., before and after Reg. FD) is muted.
Specifications (4) through (8) explore further the effect Reg. FD has on the relationship between Experience and Deviation. We first note that the nonlinearity of the relationship between Experience and forecast deviation (Deviation) is intact, indicating that leveling the playing field through Reg. FD has not (completely) changed how forecasts are fundamentally linked with the information. Focusing on the more potent results, we look to the Relative experience results (Specifications 7 and 8), and we note that Reg. FD is effective in reducing the private information component, which in turn reduces the importance of relative experience. Empirically, this translates into a reduction in the concavity of the impact of Relative experience; the "leveling of the playing field" flattens out that curve. We see this in the Reg. FD interaction terms: specifically, a positive and significant marginal effect on (Reg. FD * Relative experience) and a negative and significant marginal effect on (Reg. FD * Relative experience^2). More concretely stated, when access to private information is reduced, the relationship-based competitive advantage of relative experience – preferential access to private information – is taken away. These results highlight a distinction between precision and the value of preferential relationships between veteran analysts and management of the firm.
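For concreteness, the interaction design just described could be estimated along the following lines (a sketch only: the data file, column names, and the clustered-error choice are our own illustrative assumptions, not the chapter's code). The post-Reg. FD curvature of the experience–deviation relation is the baseline squared-term coefficient plus its Reg. FD interaction.

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("forecasts.csv")  # hypothetical analyst-firm-quarter panel

formula = (
    "dfc ~ rel_experience + I(rel_experience**2)"
    " + reg_fd + reg_fd:rel_experience + reg_fd:I(rel_experience**2)"
    " + num_revisions + same_quarter + broker_reputation + broker_size"
    " + accruals + intangible_assets + return_std + C(industry) + C(quarter) + C(analyst)"
)
res = smf.ols(formula, data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["analyst"]}
)

# Pre-FD curvature: coefficient on I(rel_experience**2).
# Post-FD curvature: that coefficient plus the one on reg_fd:I(rel_experience**2).
print(res.params)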
Since all analysts gain analytical expertise over time following the firm (though the marginal effect of this would seem to wane for the most experienced, thus explaining the maintained nonlinearity), while only some may gain preferential treatment from the firm, one could argue that Experience is a better proxy for the analytical precision from the model and that Relative experience is a better proxy for the preferential treatment provided to veteran analysts who possess valuable relationships with top management of the firms they are covering (i.e., management will provide private
Table 2.3 Time impact of Reg. FD on the relationship between experience/relative experience and deviation

[Eight specifications with DFC (deviation from consensus) as the dependent variable. Rows report coefficients and standard errors (in brackets) on Experience, Experience^2, Relative experience, Relative experience^2, Reg. FD, and the interactions Reg. FD * Experience, Reg. FD * Experience^2, Reg. FD * Relative experience, and Reg. FD * Relative experience^2, together with firm, industry, and time fixed effects; each specification uses 266,708 observations from 130 brokerage houses, with a model R-squared of 0.06.]

Table 2.3 provides alternate specifications around the Reg. FD control variable. Experience is the natural log of the number of quarters the analyst has followed the firm for which forecast error is calculated. Relative experience is the analyst experience scaled by the average analyst experience (same industry). Reg. FD is an indicator variable that takes on a value of 1 if Reg. FD is in effect and 0 otherwise. Control variables included in the analysis but not shown above include same quarter, NumRevisions, Reg. FD, Broker reputation, Broker size, Accruals, Intang. assets, and Return std. dev. Same quarter is a dummy variable equal to one if the estimate is in the same quarter as the actual and zero otherwise. Same quarter is orthogonalized (on NumRevisions) to ensure that multicollinearity is not a problem between these two variables. NumRevisions is the total number of revisions the analyst made during the quarter for the quarter-end earnings. Brokerage reputation is a ranking of Brokerage reputation, where 0 is the worst and 9 is the best. Brokerage size is the natural log of the number of companies per quarter a brokerage house follows. Brokerage reputation is orthogonalized (on brokerage size) to ensure that multicollinearity is not a problem between these two variables. Accruals are the accrued revenue/liabilities utilized for earnings smoothing. Intang. assets are the covered firm's intangible assets value relative to its total assets. Return standard deviation is the standard deviation of the covered firm's return. I is a vector of one-digit SIC industry dummies. T is a vector of time dummies. A formal fixed effects treatment around analysts is employed. Standard errors are reported in brackets. *, **, and *** indicate significance levels of 10 %, 5 %, and 1 %, respectively (two-tailed test).
2
Experience, Information Asymmetry, and Rational Forecast Bias
85
In short, Reg. FD reduces the importance of the competitive advantage of preferential treatment (proxied by Relative experience) while retaining the importance of analytical ability (proxied by Experience). This is seen empirically in the insignificant change in the concavity of Experience and a decrease in the concavity of Relative experience. In this light, we see that the impact of Reg. FD is what we would expect, both with regard to Experience (seen in Specifications 5 and 6) and Relative experience.

Panel A of Table 2.4 shows the results of Eq. 2.5, which examines directly how Information asymmetry affects Deviation, controlling for firm, analyst, and regulatory factors. Specifications (1) through (3) look at the effects of Information asymmetry on the full sample of analysts, using the inverse of Analyst coverage, Forecast dispersion, and Relative firm size, respectively. The first proxy (specification 1) is the inverse of Analyst coverage. As predicted in the theoretical section, Information asymmetry is positively related to Deviation. With fewer resources at their disposal, analysts post greater deviations as a means to purchase private information. This supplements the marginally valuable public information available, leading to more accurate forecasts. When information asymmetry is measured by the inverse of Analyst coverage, a one-unit increase in the information asymmetry measure leads to a 0.630¢ additional Deviation. When using Forecast dispersion as the proxy, a one-unit increase in the information asymmetry measure leads to a 0.630¢ increase in Deviation. Lastly, using Relative firm size as the proxy for information asymmetry, a one-unit increase in the information asymmetry measure leads to a 0.006¢ additional Deviation. From the multivariate analysis, it is evident that Forecast dispersion is the proxy that best explains Deviation from the consensus: the specifications where information asymmetry is proxied by Forecast dispersion have approximately twice the predictive power of the analyst coverage and size proxies (R-squared of 0.15 vs. 0.06 and 0.08, respectively).

In specifications (4)–(6) of Table 2.4, we include indicator variables for the three experience terciles (suppressing the constant term in the model) to examine whether information asymmetry affects analysts in a different manner based on experience. We first note that the marginal effect on Information asymmetry remains positive, as in the first three specifications. Looking to the marginal effect for novice analysts – the least experienced third of analysts in a given quarter – we see that they find sticking closer to the herd a better alternative (Zhou and Lai 2009). The cost of acquiring precise information for an inexperienced analyst who lacks the necessary resources would be prohibitively large, i.e., the analyst would need to post an outrageously optimistic (positively deviated) forecast. The marginal effect for moderately experienced analysts – the middle third in experience in a given quarter – suggests that they post deviations that are more positive than those of the average analyst in an effort to purchase valuable private information. The marginal effect for the tercile of highly seasoned analysts suggests that they post a deviation with a smaller magnitude. This nonlinear relationship exists regardless of the proxy for information asymmetry.
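As a rough illustration of how the three information asymmetry proxies and the quarter-by-quarter experience terciles described above could be constructed from a panel of estimates, consider the sketch below. The DataFrame and every column name in it are hypothetical, not the authors' data layout.

```python
# Hypothetical construction of the three information asymmetry proxies and the
# experience terciles (novice / moderately experienced / highly seasoned).
import pandas as pd

df = pd.read_csv("analyst_panel.csv")  # hypothetical input

# Proxy 1: inverse of the number of analysts covering the firm in the quarter
df["inv_coverage"] = 1.0 / df.groupby(["firm", "qtr"])["analyst"].transform("nunique")

# Proxy 2: forecast dispersion = standard deviation of estimates for the firm-quarter
df["dispersion"] = df.groupby(["firm", "qtr"])["estimate"].transform("std")

# Proxy 3: relative firm size = firm size minus the industry-quarter median size
df["rel_size"] = df["firm_size"] - df.groupby(["sic1", "qtr"])["firm_size"].transform("median")

# Experience terciles within each quarter
df["exp_tercile"] = df.groupby("qtr")["experience"].transform(
    lambda s: pd.qcut(s, 3, labels=["novice", "moderate", "seasoned"]).astype(str)
)
print(df[["inv_coverage", "dispersion", "rel_size", "exp_tercile"]].head())
```

Indicator variables built from exp_tercile can then replace the constant in a specification of the Table 2.4 type, and a Wald test on the equality of their coefficients mirrors the comparisons reported at the bottom of Panel A.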
Controlling for the information asymmetry in the estimation environment, these results fall in line with our prediction of nonlinearity for Experience and our conjecture that highly seasoned analysts possess both superior analytical abilities, which would suggest that perhaps they have to purchase less information to achieve precision, and valuable relationships, which allow them to purchase information at reduced "costs" (i.e., deviation from consensus).
Table 2.4 Impact of information asymmetry on analyst forecasting

(Panel A reports coefficient estimates and standard errors for specifications (1)–(6), with DFC (deviation from consensus) as the dependent variable, together with Wald tests of equality across the experience indicators; Panel B reports the experience-based subsample estimates and Hausman tests. The full set of estimates is not reproduced here.)

Table 2.4 shows the results of Eq. 2.5. We create three groups of experience: the bottom third is the novice analysts, the middle third are the moderately seasoned analysts, and the top third are the highly seasoned analysts. Information asymmetry is defined in three ways: Analyst coverage^-1, Forecast dispersion, and Relative firm size. Analyst coverage^-1 is the inverse of the number of analysts covering the firm. Forecast dispersion is the standard deviation of estimates. Relative firm size is equal to the covered firm size minus industry-quarter median firm size (in $billions). Control variables included in the analysis but not shown above include Same quarter, NumRevisions, Reg. FD, Broker reputation, Broker size, Accruals, Intang. assets, and Return std. dev. Same quarter is a dummy variable equal to one if the estimate is in the same quarter as the actual and zero otherwise. Same quarter is orthogonalized (on NumRevisions) to ensure that multicollinearity is not a problem between these two variables. NumRevisions is the total number of revisions the analyst made during the quarter for the quarter-end earnings. Brokerage reputation is a ranking of brokerage reputation, where 0 is the worst and 9 is the best. Brokerage size is the natural log of the number of companies per quarter a brokerage house follows. Brokerage reputation is orthogonalized (on brokerage size) to ensure that multicollinearity is not a problem between these two variables. Accruals are the accrued revenue/liabilities utilized for earnings smoothing. Intang. assets are the covered firm's intangible assets value relative to its total assets. Return standard deviation is the standard deviation of the covered firm's return. I is a vector of one-digit SIC industry dummies. T is a vector of time dummies. A formal fixed effects treatment around analysts is employed. Panel A Specifications 1–3 use the full sample and do not suppress the constant. Panel A Specifications 4–6 suppress the constant so we can include the three experience indicator variables. To test whether the coefficients on the experience indicator variables are statistically different, we use a Wald test for significance. Panel B shows the results of the same analysis using experience-based subsamples. We use a Hausman estimation to test whether there is statistical significance between each experience estimation. Standard errors are reported in brackets. *, **, and *** indicate significance levels of 10 %, 5 %, and 1 %, respectively (two-tailed test)
To ensure that the marginal effects are significantly different, we perform Wald tests to confirm that the differences between the coefficients are statistically significant. As can be seen at the bottom of Panel A, the differences are statistically significant in every case.

In Panel B, we examine experience-based subsamples to see more intricately how experience affects the impact of Information asymmetry on Deviation. Results confirm that Experience affects the link between Information asymmetry and Deviation. The moderately experienced analyst is the most affected by an increase in information asymmetry, regardless of the definition of information asymmetry. It is interesting to note that the results for Analyst coverage and Forecast dispersion are much stronger than those for Relative firm size. We use a Hausman test of statistical significance and find that the experience estimations are all statistically different. These empirical results nicely complement the theoretical predictions of our model. Taken collectively, the results suggest that analysts' rational forecast deviation from the consensus is a result of analysts maximizing their objective function of minimizing error while taking into consideration the information environment in which they operate.

To provide evidence that the deviation from the consensus discussed in the rest of the paper is indeed rational, we test whether an analyst posting an estimate with the optimal deviation from consensus, empirically represented by the fitted value of either Eq. 2.4 or Eq. 2.5, achieves a lower forecast error (defined as estimate minus actual earnings). Here, a negative association between fitted Deviation from consensus (DFC*) and Forecast error (FE) is indicative of optimal forecasting. Chen and Jiang (2006) argue that a positive (negative) association between observed Deviation from consensus (DFC) and Forecast error (FE) is indicative of an analyst overweighting (underweighting) her own assessment. This is because if analysts follow a Bayesian updating rule and minimize squared forecast error, there should be no link between observed deviation from consensus and forecast error. They find that analysts, on average, overweight their own beliefs, and thus the link between observed deviation from consensus and forecast error is positive. In this paper, we extend Chen and Jiang's model to allow for rational biasing of forecasts, similar to Lim (2001). As such, we find that there should be a concave relationship between Experience and Deviation from consensus. If our theory is correct and if our empirical model of deviation from consensus truly captures the essence of rational forecasting that occurs among analysts, the fitted Deviation from consensus should be devoid of analysts' overconfidence about their own information processing. In fact, if fitted Deviation reflects the tendency toward more rational forecasting, we should find a negative relationship between fitted Deviation and Forecast error. Looking to the results, which are found in Table 2.5, Panel A, we see that this is indeed what we find. In all specifications, the fitted Deviation (DFC*) is negatively related with Forecast error. All specifications lead to a significant decline in the forecast error of analyst estimation.
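The two-stage logic behind this test can be sketched as follows. The column names and the deliberately simplified specification are our own assumptions for illustration, not the paper's exact estimation.

```python
# Hypothetical two-stage sketch: stage 1 fits the deviation-from-consensus model,
# stage 2 regresses forecast error on the fitted ("optimal") deviation and its square.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("analyst_panel.csv")  # hypothetical input

stage1 = smf.ols(
    "dfc ~ experience + I(experience**2) + reg_fd + num_revisions "
    "+ broker_rep + broker_size + accruals + intang + ret_std + C(sic1) + C(qtr)",
    data=df,
).fit()
df["dfc_star"] = stage1.fittedvalues  # fitted (optimal) deviation from consensus

stage2 = smf.ols("forecast_error ~ dfc_star + I(dfc_star**2)", data=df).fit()
print(stage2.params)  # a negative coefficient on dfc_star is the pattern described above
```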
Table 2.5 Optimal deviation from consensus and forecast error

(Panel A reports coefficient estimates and standard errors for specifications (1)–(5), with forecast error as the dependent variable and the fitted deviation DFC* (and, where included, DFC*2) among the regressors; Panel B reports the fitted values of DFC* for novice, moderately experienced, and highly seasoned analysts. The full set of estimates is not reproduced here.)

Table 2.5 Panel A presents the results of regressing forecast error on the regressors specified in each column. Experience is the natural log of the number of quarters the analyst has followed the firm for which forecast error is calculated. Relative experience is the analyst experience scaled by the average analyst experience (same industry). NumRevisions is the total number of revisions the analyst made during the quarter for the quarter-end earnings. Reg. FD is an indicator variable that takes on a value of 1 if Reg. FD is in effect and 0 otherwise. Controls are a vector of firm-specific variables including Same quarter, Broker reputation, Broker size, Accruals, Intang. assets, and Return std. dev. Same quarter is a dummy variable equal to one if the estimate is in the same quarter as the actual and zero otherwise. Same quarter is orthogonalized (on NumRevisions) to ensure that multicollinearity is not a problem between these two variables. Brokerage reputation is a ranking of brokerage reputation, where 0 is the worst and 9 is the best. Brokerage size is the natural log of the number of companies per quarter a brokerage house follows. Brokerage reputation is orthogonalized (on brokerage size) to ensure that multicollinearity is not a problem between these two variables. Accruals are the accrued revenue/liabilities utilized for earnings smoothing. Intang. assets are the covered firm's intangible assets value relative to its total assets. Return standard deviation is the standard deviation of the covered firm's return. I is a vector of one-digit SIC industry dummies. T is a vector of time dummies. Panel B presents the fitted values of DFC from each specification for each experience level. A formal fixed effects treatment around analysts is employed. Standard errors are reported in brackets. *, **, and *** indicate significance levels of 10 %, 5 %, and 1 %, respectively (two-tailed test)
Table 2.6 Alternate samples

(Coefficient estimates and standard errors for specifications (1)–(12), with DFC (deviation from consensus) as the dependent variable, are not reproduced here.)

Analyst information comes from First Call (except in Specifications (10)–(12)). Company information comes from Compustat/CRSP. Experience is the natural log of the number of quarters the analyst has followed the firm for which forecast error is calculated. Relative experience is the analyst experience scaled by the average analyst experience (same industry). Same quarter is a dummy variable equal to one if the estimate is in the same quarter as the actual and zero otherwise. Same quarter is orthogonalized (on NumRevisions) to ensure that multicollinearity is not a problem between these two variables. Forecast dispersion is the standard deviation of estimates. X is a vector of firm-specific variables including Broker reputation, Broker size, Accruals, Intang. assets, and Return std. dev. Brokerage reputation is a ranking of brokerage reputation, where 0 is the worst and nine is the best. Brokerage size is the natural log of the number of companies per quarter a brokerage house follows. Brokerage reputation is orthogonalized (on brokerage size) to ensure that multicollinearity is not a problem between these two variables. Accruals are the accrued revenue/liabilities utilized for earnings smoothing. Intang. assets are the covered firm's intangible assets value relative to its total assets. Return standard deviation is the standard deviation of the covered firm's return. I is a vector of one-digit SIC industry dummies. T is a vector of time dummies. Specifications (1)–(3) include only original estimates (i.e., no revisions). Specifications (4)–(6) include only firms with one analyst following the firm. Specifications (7)–(9) use random effects. Specifications (10)–(12) use I/B/E/S data. A formal fixed effects treatment around analysts is employed unless otherwise noted. Standard errors are reported in brackets. *, **, and *** indicate significance levels of 10 %, 5 %, and 1 %, respectively. In the sample derived from I/B/E/S data, the reported number of brokerage houses is actually the number of analysts
Table 2.7 Alternate proxies for information asymmetry

(The table reports coefficient estimates and standard errors for specifications (1)–(6), with forecast error as the dependent variable and the fitted deviation DFC (and, where included, DFC squared) among the regressors; Panel A uses Analyst coverage^-1 and Panel B uses Relative firm size as the information asymmetry proxy. The full set of estimates is not reproduced here. Standard errors are reported in brackets. *, **, and *** indicate significance levels of 10 %, 5 %, and 1 %, respectively.)
Specifically, we find that a 1¢ increase in bias leads to a decrease in Forecast error ranging from a low of 1.029¢ (specification 3) to 1.081¢ (specification 1), depending upon the specification. Considering that the sample average Forecast error is 3.79¢, this is a significant decrease, suggesting that posting such deviations is indeed rational. Given the high marginal effect, and considering the nonlinearity of the impact of experience on deviation from consensus, we further test a squared term of the fitted deviation from consensus variable. It would make sense that continually adding a penny to one's estimate would, at some point, cease to increase the precision of the resulting estimate. In doing so, we see that the linear effect is reduced considerably: the reduction in forecast error falls to less than half a penny per penny of bias on average, and this reduction shrinks with every marginal penny of bias.

Finally, we examine the fitted values of Deviation from consensus for the different terciles of analyst experience, shown in Table 2.5, Panel B. For all five specifications, we see evidence of nonlinearity in these fitted values. Once again, the moderately experienced analyst is found to have the highest "optimal" deviation. This confirms nicely both the theoretical and previous empirical results and helps us to come full circle.
2.6 Robustness
To check for consistency in the results, we reexamine specifications (3) and (4) from Table 2.2 and specification (2) from Table 2.4 on subsamples, altering our empirical methodology and using a different data source. We analyze two subsamples: (1) excluding revisions and (2) including only firms covered by one analyst. We alter our empirical methodology by using random effects as opposed to the fixed effects treatment used in the base specification of the paper. Lastly, we alter our data source by using I/B/E/S data. Because the main empirical tests use First Call data, which only identifies brokerage houses, one potential concern is that we might not be able to control for analysts' fixed effects effectively. Given our specifications and the variables we include in the model, we find that there are only 130 unique brokerage houses. Since I/B/E/S reports analysts rather than brokerage houses, we use their data and find that results hold when individual analyst fixed effects (7,708 unique analysts) are included. Due to the nature of the I/B/E/S data, we cannot, however, control for broker reputation and size. Results are qualitatively identical across all robustness tests and are included in Appendix 2.

For brevity, we only use one information asymmetry proxy in Table 2.5. We choose forecast dispersion as our proxy for information asymmetry since it provides the best model fit (i.e., Model R2) of the three proxies. To show that our results are robust to the other two information asymmetry proxies, we rerun Table 2.5 using these alternate proxies. Our results are qualitatively identical and are shown in Appendix 3.

In results not reported, we examine three final robustness tests. To address any concerns about the power of our empirics based on the number of observations, we examine the subsample of manufacturing firms. While this subsample is less than
half of the original, the economic and statistical significance of coefficients remains, providing support for our empirical results. We include alternate industry dummies using Fama-French industry classifications (SIC codes are used for classification in the base specifications). Results once again remain. Finally, to address any concerns about sample selection problems and the dependence of the power of our empirics on the number of brokerage houses, we run the estimations without Broker reputation (the variable whose inclusion reduces our sample to 130 houses), which leaves 315 brokerage houses. Our results remain. These results are available upon request. The congruence of the results from both the included and unreported robustness tests provides further credence to the conclusions of this study.
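A schematic of how these robustness re-estimations might be organized is sketched below. The subsample definitions, the column names, and the mixed-model stand-in for the random-effects variant are illustrative assumptions, not the authors' code.

```python
# Hypothetical robustness loop: re-estimate a baseline specification on alternate
# samples, then fit a random-intercept (random effects style) variant.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("analyst_panel.csv")  # hypothetical input
base = "dfc ~ experience + I(experience**2) + reg_fd + num_revisions + C(sic1) + C(qtr)"

subsamples = {
    "original estimates only": df[df["num_revisions"] == 0],
    "single-analyst firms": df[df["n_analysts"] == 1],
    "manufacturing firms": df[df["sic1"].isin([2, 3])],  # one-digit SIC 2-3
}
for label, sub in subsamples.items():
    res = smf.ols(base, data=sub).fit()
    print(label, round(res.params["experience"], 3), round(res.rsquared, 3))

# Random-intercept model per brokerage house as a stand-in for random effects
re_fit = smf.mixedlm("dfc ~ experience + I(experience**2) + reg_fd",
                     data=df, groups=df["broker_id"]).fit()
print(re_fit.summary())
```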
2.7 Conclusions
To understand the exact role experience and information asymmetry play in forming a rational deviation, we offer an integrated framework in which an optimum forecast is derived from weighing public and private information, and in which the resulting forecast bias (i.e., deviation from the consensus) improves accuracy through the acquisition of private information. We study how information asymmetry and analyst experience affect the efficacy of the feedback from bias on private information acquisition. Our construct enables a realistic optimization, where rational forecast bias emerges as an outcome of a trade-off between public and private information while minimizing forecast squared error.

We show both analytically and empirically that this trade-off and the resulting rational deviation are determined by experience and information asymmetry. While the optimal bias and information asymmetry are monotonically positively related, the optimal bias and experience are concavely linked. Moderately experienced analysts find it optimal to purchase private information with a positively biased deviation from the consensus. The superior analytical skills and access to private information (at a lower cost) of highly seasoned analysts lead them to optimally rely on private information. We use Reg. FD as an experiment to further illuminate the relationship between experience and forecast bias and to make a distinction between access to private information and analytical expertise. We find that, as we would expect, the biases of experienced analysts, who weigh private information more heavily, are affected more. The extent to which information asymmetry and analyst experience play a role in determining the analyst's bias is directly affected by the dispersion of the common and idiosyncratic signals of all analysts as well as the extent of their correlation with each other.

Our results may help to paint a clearer picture of the types of information produced by analysts. Institutions may be able to use these results to design continuing education programs and to make hiring/firing decisions. Our results can also help to explain why and how changes in the information environment caused by regulatory changes (e.g., Reg. FD), index inclusions, and public disclosures affect the role of analysts as information processors.
Appendix 1: Proofs

The Objective Function

Given that the analyst's forecast is a weighted average of the analyst's unconditional estimate and the consensus, $F = wE + (1 - w)E_c$, the objective function can be expressed as

$$\min_{w \mid b}\; \Big[\, w^2 (t_0 + t(b))^{-2} + (1 - w)^2\, t_c^{-2} + 2 r w (1 - w)(t_0 + t(b))^{-1} t_c^{-1} \Big] \tag{2.6}$$
The first-order condition then is

$$2 w (t_0 + t(b))^{-2} - 2 (1 - w)\, t_c^{-2} + 2 r (1 - w)(t_0 + t(b))^{-1} t_c^{-1} - 2 r w (t_0 + t(b))^{-1} t_c^{-1} = 0$$

By collecting terms, we then have

$$w \left\{ (t_0 + t(b))^{-2} + t_c^{-2} - 2 r (t_0 + t(b))^{-1} t_c^{-1} \right\} = t_c^{-2} - r (t_0 + t(b))^{-1} t_c^{-1}$$

This means that the optimal weight is

$$w = \frac{(t_0 + t(b))^{2} - r\, t_c (t_0 + t(b))}{(t_0 + t(b))^{2} + t_c^{2} - 2 r\, t_c (t_0 + t(b))} \tag{2.7}$$
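As a quick check on this derivation, the following SymPy sketch (our own verification, not part of the chapter) confirms that the first-order condition of Eq. 2.6 yields the weight in Eq. 2.7; the symbol x stands in for $t_0 + t(b)$.

```python
# Symbolic verification that minimizing Eq. 2.6 over w gives the weight in Eq. 2.7.
import sympy as sp

w, r, t_c, x = sp.symbols('w r t_c x', positive=True)  # x = t_0 + t(b)
objective = w**2 * x**-2 + (1 - w)**2 * t_c**-2 + 2*r*w*(1 - w) * x**-1 * t_c**-1
w_star = sp.solve(sp.diff(objective, w), w)[0]

# Eq. 2.7, i.e. the solution after multiplying numerator and denominator by x**2 * t_c**2
eq_2_7 = (x**2 - r*t_c*x) / (x**2 + t_c**2 - 2*r*t_c*x)
print(sp.simplify(w_star - eq_2_7))  # prints 0, so the two expressions coincide
```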
Proof of Proposition 1

By taking the derivative of Eq. 2.7 with respect to $t_0$, we have

$$\frac{\partial w}{\partial t_0} = \frac{2 t_c^{2} (t_0 + t(b)) - 2 r\, t_c (t_0 + t(b))^{2} - r\, t_c^{3}}{\big[(t_0 + t(b))^{2} + t_c^{2} - 2 r\, t_c (t_0 + t(b))\big]^{2}} \tag{2.8}$$

Clearly, since the denominator of $\partial w / \partial t_0$ is positive, the sign is only a function of the numerator. This implies that the sign changes when the numerator, $2 t_c (t_0 + t(b)) - 2 r (t_0 + t(b))^{2} - r t_c^{2}$, is at its maximum. To find the maximum, we solve for the $t_0$ that satisfies the first-order condition of the numerator. The first-order condition yields $t_c - 2 r (t_0 + t(b)) = 0$. Thus, at the optimal weight, $t_0 + t(b) = 0.5\, r^{-1} t_c$.
Proof of Proposition 2

By taking the derivative of Eq. 2.7 with respect to bias, we have

$$\frac{\partial w}{\partial b} = \frac{\Big[ 2 t_c^{2} (t_0 + t(b)) - 2 r\, t_c (t_0 + t(b))^{2} - r\, t_c^{3} \Big] \dfrac{\partial t}{\partial b}}{\big[(t_0 + t(b))^{2} + t_c^{2} - 2 r\, t_c (t_0 + t(b))\big]^{2}} \tag{2.9}$$

Clearly, since the denominator of $\partial w / \partial b$ is positive, the sign is only a function of the numerator. This implies (1) that, since $\partial t / \partial b$ is positive, the optimal weight is monotonically increasing with $\partial t / \partial b$, or information asymmetry, and (2) that the optimal weight is nonlinearly, concavely related to private information precision. Since the first term in the numerator is a quadratic function of the analyst's own precision, the maximum of the function is the point at which the numerator changes sign. This point, however, is exactly the same point at which $\partial w / \partial t_0$ is maximized. For biases at which $t_0 + t(b)$ falls below $0.5\, r^{-1} t_c$, so long as bias increases, so does the optimal weight.
Appendix 2: Alternate Samples

See Table 2.6.
Appendix 3: Alternate Proxies for Information Asymmetry for Table 2.5

See Table 2.7. Appendix 3 presents the results of regressing forecast error on the regressors specified in each column. Inf. asymmetry is analyst coverage in Panel A and relative firm size in Panel B. Same quarter is a dummy variable equal to one if the estimate is in the same quarter as the actual and zero otherwise. Same quarter is orthogonalized (on NumRevisions) to ensure that multicollinearity is not a problem between these two variables. Controls is a vector of firm-specific variables including broker reputation, broker size, accruals, intang. assets, and return std. dev. Brokerage reputation is a ranking of brokerage reputation, where 0 is the worst and 9 is the best. Brokerage size is the natural log of the number of companies per quarter a brokerage house follows. Brokerage reputation is orthogonalized (on brokerage size) to ensure that multicollinearity is not a problem between these two variables. Accruals are the accrued revenue/liabilities utilized for earnings smoothing. Intang. assets are the covered firm's intangible assets value relative to its total assets. Return standard deviation is the standard deviation of the covered firm's return. I is a vector of one-digit SIC industry dummies. T is a vector of time dummies.
References

Agrawal, A., & Chen, M. (2012). Analyst conflicts and research quality. Quarterly Journal of Finance, 2, 179–218.
Baily, W., Li, H., Mao, C., & Zhong, R. (2003). Regulation fair disclosure and earnings information: Market, analysts, and corporate response. Journal of Finance, 58, 2487–2514.
Bannister, J., & Newman, H. (1996). Accrual usage to manage earnings toward financial analysts' forecasts. Review of Quantitative Finance and Accounting, 7, 259–278.
Barber, B., Lehavy, R., & Trueman, B. (2000). Are all brokerage houses created equal? Testing for systematic differences in the performance of brokerage house stock recommendations. University of California at Davis and University of California at Berkeley (unpublished), March.
Bernhardt, D., Campello, M., & Kutsoati, E. (2006). Who herds? Journal of Financial Economics, 80, 657–675.
Beyer, A. (2008). Financial analysts' forecast revisions and managers' reporting behavior. Journal of Accounting and Economics, 46, 334–348.
Bhattacharya, N. (2001). Investors' trade size and trading responses around earnings announcements: An empirical investigation. Accounting Review, 76, 221–244.
Boni, L., & Womack, K. (2006). Analysts, industries, and price momentum. Journal of Financial and Quantitative Analysis, 41, 85–109.
Brown, L., & Sivakumar, K. (2003). Comparing the quality of two operating income measures. Review of Accounting Studies, 4, 561–572.
Brown, S., Hillegeist, S., & Lo, K. (2004). Conference calls and information asymmetry. Journal of Accounting and Economics, 37, 343–366.
Carey, M., Post, M., & Sharp, S. (1998). Does corporate lending by banks and finance companies differ? Evidence on specialization in private debt contracting. Journal of Finance, 53, 845–878.
Carter, R., & Manaster, S. (1990). Initial public offerings and underwriter reputation. Journal of Finance, 45, 1045–1068.
Carter, R., Dark, F., & Singh, A. (1998). Underwriter reputation, initial returns, and the long-run performance of IPO stocks. Journal of Finance, 53, 285–311.
Chen, Q., & Jiang, W. (2006). Analysts' weighting of private and public information. Review of Financial Studies, 19, 319–355.
Clement, M. (1999). Analyst forecast accuracy: Do ability, resources, and portfolio complexity matter? Journal of Accounting and Economics, 27, 285–303.
Clement, M., & Tse, S. (2005). Financial analyst characteristics and herding behavior in forecasting. Journal of Finance, 60, 307–341.
Conrad, J., Cornell, B., Landsman, W., & Rountree, B. (2006). How do analyst recommendations respond to major news? Journal of Financial and Quantitative Analysis, 41, 25–49.
De Jong, P., & Apilado, V. (2008). The changing relationship between earnings expectations and earnings for value and growth stocks during Reg FD. Journal of Banking and Finance, 33, 435–442.
Dechow, P., Kothari, S., & Watts, R. (1998). The relation between earnings and cash flows. Journal of Accounting and Economics, 25, 133–168.
Doyle, J., Lundholm, R., & Soliman, M. (2003). The predictive value of expenses excluded from pro forma earnings. Review of Accounting Studies, 8, 145–174.
Fama, E., & French, K. (1997). Industry costs of equity. Journal of Financial Economics, 43, 153–193.
Givoly, D., & Lakonishok, J. (1979). The information content of financial analysts' forecasts of earnings. Journal of Accounting and Economics, 2, 165–185.
Gu, Z., & Xue, J. (2007). Do analysts overreact to extreme good news in earnings? Review of Quantitative Finance and Accounting, 29, 415–431.
Han, B., Manry, D., & Shaw, W. (2001). Improving the precision of analysts' earnings forecasts by adjusting for predictable bias. Review of Quantitative Finance and Accounting, 17, 81–98.
Hirschey, M., & Richardson, V. (2004). Are scientific indicators of patent quality useful to investors? Journal of Empirical Finance, 11, 91–107.
Hong, H., Kubik, J., & Salomon, A. (2000). Security analysts' career concerns and herding of earnings forecasts. RAND Journal of Economics, 31, 121–144.
Hsu, D., & Chiao, C. (2010). Relative accuracy of analysts' earnings forecasts over time: A Markov chain analysis. Review of Quantitative Finance and Accounting, 37, 477–507.
Irani, A. (2004). The effect of regulation fair disclosure on the relevance of conference calls to financial analysts. Review of Quantitative Finance and Accounting, 22, 15–28.
Krishnaswami, S., & Subramaniam, V. (1998). Information asymmetry, valuation, and the corporate spin-off decision. Journal of Financial Economics, 53, 73–112.
Kwon, S. (2002). Financial analysts' forecast accuracy and dispersion: High-tech versus low-tech stocks. Review of Quantitative Finance and Accounting, 19, 65–91.
Leone, A., & Wu, J. (2007). What does it take to become a superstar? Evidence from institutional investor rankings of financial analysts. Simon School of Business Working Paper No. FR 02-12. Available at SSRN: http://ssrn.com/abstract=313594 or http://dx.doi.org/10.2139/ssrn.313594.
Lim, T. (2001). Rationality and analysts' forecast bias. Journal of Finance, 56, 369–385.
Lin, B., & Yang, R. (2010). Does regulation fair disclosure affect analysts' forecast performance? The case of restructuring firms. Review of Quantitative Finance and Accounting, 38, 495–517.
Loughran, T., & Ritter, J. (2004). Why has IPO underpricing changed over time? Financial Management, 33(3), 5–37.
Mest, D., & Plummer, E. (2003). Analysts' rationality and forecast bias: Evidence from analysts' sales forecasts. Review of Quantitative Finance and Accounting, 21, 103–122.
Mikhail, M., Walther, D., & Willis, R. (1997). Do security analysts improve their performance with experience? Journal of Accounting Research, 35, 131–157.
Nutt, S., Easterwood, J., & Easterwood, C. (1999). New evidence on serial correlation in analyst forecast errors. Financial Management, 28, 106–117.
Petersen, M., & Rajan, R. (1994). The benefits of lending relationships: Evidence from small business data. Journal of Finance, 49, 3–37.
Ramnath, S. (2002). Investor and analyst reactions to earnings announcements of related firms: An empirical analysis. Journal of Accounting Research, 40, 1351–1376.
Stangeland, D., & Zheng, S. (2007). IPO underpricing, firm quality, and analyst forecasts. Financial Management, 36, 1–20.
Sufi, A. (2007). Information asymmetry and financing arrangements: Evidence from syndicated loans. Journal of Finance, 62, 629–668.
Thomas, S. (2002). Firm diversification and asymmetric information: Evidence from analysts' forecasts and earnings announcements. Journal of Financial Economics, 64, 373–396.
Trueman, B. (1990). Theories of earnings-announcement timing. Journal of Accounting and Economics, 13, 285–301.
Waymire, G. (1986). Additional evidence on the accuracy of analyst forecasts before and after voluntary managed earnings forecasts. Accounting Review, 61, 129–141.
Winkler, R. L. (1972). An introduction to Bayesian inference and decision. New York: Holt, Rinehart and Winston.
Zhang, F. (2006). Information uncertainty and stock returns. Journal of Finance, 61, 105–137.
Zhou, T., & Lai, R. (2009). Herding and information based trading. Journal of Empirical Finance, 16, 388–393.
Zitzewitz, E. (2002). Regulation fair disclosure and the private information of analysts. Available at SSRN: http://ssrn.com/abstract=305219 or http://dx.doi.org/10.2139/ssrn.305219.
3 An Appraisal of Modeling Dimensions for Performance Appraisal of Global Mutual Funds

G.V. Satya Sekhar
Department of Finance, GITAM Institute of Management, GITAM University, Visakhapatnam, Andhra Pradesh, India
e-mail: [email protected]
Contents
3.1 Introduction
3.2 Performance Evaluation Methods
3.3 A Review on Various Models for Performance Evaluation
3.3.1 Jensen Model
3.3.2 Fama Model
3.3.3 Treynor and Mazuy Model
3.3.4 Statman Model
3.3.5 Choi Model
3.3.6 Elango Model
3.3.7 Chang, Hung, and Lee Model
3.3.8 MM Approach
3.3.9 Meijun Qian's Stage Pricing Model
3.4 Conclusion
References
Abstract
A number of studies have been conducted to examine the investment performance of mutual funds in the developed capital markets. Grinblatt and Titman (1989, 1994) found that small mutual funds perform better than large ones and that performance is negatively correlated to management fees but not to fund size or expenses. Hendricks, Patel, and Zeckhauser (1993), Goetzmann and Ibbotson (1994), and Brown and Goetzmann (1995) present evidence of persistence in mutual fund performance. Grinblatt and Titman (1992) and Elton, Gruber, and Blake (Journal of Financial Economics 42:397–421, 1996) show that past performance is a good predictor of future performance. Blake, Elton, and Gruber (1993), Detzler (1999), and Philpot, Hearth, Rimbey, and Schulman (1998) find that performance is negatively correlated to fund expense, and that past performance does not predict future performance. However, Philpot, Hearth, and Rimbey (2000) provide evidence of short-term performance persistence in high-yield bond mutual funds. In their studies of money market mutual funds, Domian and Reichenstein (1998) find that the expense ratio is the most important factor in explaining net return differences. Christoffersen (2001) shows that fee waivers matter to performance. Smith and Tito (1969) conducted a study of 38 funds for 1958–1967 and obtained similar results. Treynor (1965) advocated the use of the beta coefficient instead of the total risk.

Keywords

Financial modeling • Mutual funds • Performance appraisal • Global investments • Evaluation of funds • Portfolio management • Systematic risk • Unsystematic risk • Risk-adjusted performance • Prediction of price movements
3.1 Introduction
The performance of financial instruments is basically dependent on three important models derived independently by Sharpe, Jensen, and Treynor. All three models are based on the assumptions that (1) all investors are averse to risk and are single-period expected-utility-of-terminal-wealth maximizers, (2) all investors have identical decision horizons and homogeneous expectations regarding investment opportunities, (3) all investors are able to choose among portfolios solely on the basis of expected returns and variance of returns, (4) all transaction costs and taxes are zero, and (5) all assets are infinitely divisible.
3.2 Performance Evaluation Methods
The following paragraphs provide a brief description of studies on the performance evaluation of mutual funds. Friend et al. (1962) offered the first empirical analysis of mutual fund performance. Sharpe (1964), Treynor and Mazuy (1966), Jensen (1968), Fama (1972), and Grinblatt and Titman (1989, 1994) are considered the classical studies in performance evaluation methods.

Sharpe (1964) made a significant contribution to the methods of evaluating mutual funds. His measure is based on capital asset prices and market conditions, taking account of both risk and return probabilities. Sharpe (1966) developed a theoretical measure, better known as the reward-to-variability ratio, that considers both average return and risk simultaneously. Its efficacy was tested on a sample of 34 open-ended funds, using annual returns and the standard deviation of annual returns as the risk surrogate, for the period 1954–1963.
The average reward-to-variability ratio of the 34 funds was considerably smaller than that of the Dow Jones portfolio, enough to conclude that average mutual fund performance was distinctly inferior to an investment in the Dow Jones portfolio.

Treynor (1965) advocated the use of the beta coefficient instead of the total risk. He argues that, using only naïve diversification, the unsystematic variability of returns of the individual assets in a portfolio typically averages out to zero, so he considers measuring a portfolio's return relative to its systematic risk more appropriate. Treynor and Mazuy (1966) devised a test of the ability of investment managers to anticipate market movements. The study used the investment performance outcomes of 57 investment managers to look for evidence of market timing abilities and found no statistical evidence that the investment managers of any of the sample funds had successfully outguessed the market. The study showed that the investment managers had no ability to outguess the market as a whole, but they could identify underpriced securities.

Michael C. Jensen (1967) conducted an empirical study of 115 mutual funds during the period 1954–1964. His results indicate that these funds are not able to predict security prices well enough to outperform a buy-the-market-and-hold policy, even when gross management expenses are assumed to be free. There was very little evidence that any individual fund was able to do significantly better than what could be expected from mere random chance. Jensen (1968) measured performance as the return in excess of the equilibrium return mandated by the capital asset pricing model. Jensen's measure is based on the theory of the pricing of capital assets by Sharpe (1964), Lintner (1965), and Treynor.

Smith and Tito (1969) conducted a study of 38 funds for 1958–1967 and published results relating to the performance of mutual funds. McDonald (1974) examined 123 mutual funds for 1960–1969 and found the performance measures to be closely correlated; more importantly, he found that, on average, mutual funds perform about as well as a naïve "buy and hold" strategy. Fama (1972) suggested alternative methods for evaluating investment performance with somewhat finer breakdowns of performance into stock selection, market timing, diversification, and risk bearing. He devised a mechanism for segregating the part of an observed investment return that is due to the manager's ability to pick the best securities at a given level of risk from the part that is due to the prediction of general market price movements.

Dunn and Theisen's (1983) study ranks the annual performance of 201 institutional portfolios for the period 1973 through 1982 without controlling for fund risk. They found no evidence that funds performed within the same quartile over the 10-year period. They also found that ranks of individual managers based on 5-year compound returns revealed no consistency. Eun et al. (1991) reported similar findings. The benchmarks used in their study were the Standard and Poor's 500 Index, the Morgan Stanley Capital International World Index, and a self-constructed index of US multinational firms. For the period 1977–1986, the majority of international funds outperformed the US market. However, they mostly failed to outperform the world index. The sample consisted of 19 US-based international funds, and the Sharpe measure was used to assess excess returns.
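For readers who want to see the classic measures referenced throughout this section side by side, the following sketch computes the Sharpe (reward-to-variability), Treynor, and Jensen measures on simulated monthly returns. The return series and parameter values are invented purely for illustration.

```python
# Illustrative computation of the Sharpe, Treynor, and Jensen measures on fake data.
import numpy as np

rng = np.random.default_rng(0)
market = rng.normal(0.008, 0.04, 120)                    # hypothetical monthly market returns
fund = 0.001 + 0.9 * market + rng.normal(0, 0.02, 120)   # hypothetical fund returns
rf = 0.003                                               # hypothetical monthly risk-free rate

excess_fund, excess_mkt = fund - rf, market - rf
beta = np.cov(excess_fund, excess_mkt)[0, 1] / np.var(excess_mkt, ddof=1)

sharpe = excess_fund.mean() / fund.std(ddof=1)           # reward per unit of total risk
treynor = excess_fund.mean() / beta                      # reward per unit of systematic risk
jensen_alpha = excess_fund.mean() - beta * excess_mkt.mean()  # return above the CAPM benchmark
print(f"Sharpe {sharpe:.3f}  Treynor {treynor:.4f}  Jensen alpha {jensen_alpha:.4f}")
```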
Barua and Varma (1993b) have examined the relationship between the NAV and the market price of Mastershares. They conclude that market prices are far more volatile than can be justified by the volatility of NAVs. The prices also show a mean-reverting behavior, thus perhaps providing an opportunity for discovering a trading rule to make abnormal profits in the market. Such a rule would basically imply buying Mastershares whenever the discount from NAV was quite high and selling Mastershares whenever the discount was low.

Droms and Walker (1994) used a cross-sectional/time-series regression methodology. Four funds were examined over 20 years (1971–1990), and 30 funds were analyzed for a 6-year period (1985–1990). The funds were compared to the Standard and Poor's 500 Index, the Morgan Stanley Europe, Australia, and Far East Index (EAFE), which proxies non-US stock markets, and the World Index. Applying the Jensen, Sharpe, and Treynor indices of performance, they found that international funds have generally underperformed the US market and the international market. Additionally, their results indicated that portfolio turnover, expense ratios, asset size, load status, and fund size are unrelated to fund performance.

Bauman and Miller (1995) studied the persistence of pension and investment fund performance by type of investment organization and investment style. They employed a quartile ranking technique, because they noted that "investors pay particular attention to consultants' and financial periodicals' investment performance rankings of mutual funds and pension funds." They found that portfolios managed by investment advisors showed more consistent performance (measured by quartile rankings) over market cycles and that funds managed by banks and insurance companies showed the least consistency. They suggest that this result may be caused by a higher turnover in the decision-making structure in these less consistent funds. This study controls for the effects of turnover of key decision makers by restricting the sample to those funds with the same manager for the entire period of study. Volkman and Wohar (1995) extend this analysis to examine factors that impact performance persistence. Their data consist of 322 funds over the period 1980–1989 and show that performance persistence is negatively related to fund size and to levels of management fees.

Elton et al. (1996) examined the predictability of stock mutual fund performance based on risk-adjusted returns. The study also demonstrated the application of modern portfolio techniques to past data to improve selection, which permitted construction of portfolios of funds that significantly outperformed a rule based on past rank alone. The portfolio so selected was reported to have small, but statistically significant, positive risk-adjusted returns during a period when mutual funds in general had negative risk-adjusted returns.

Jayadev's (1996) paper presents a performance evaluation based on monthly returns. It focuses on the performance of two growth-oriented mutual funds (Mastergain and Magnum Express) on the basis of monthly returns compared to benchmark returns. For this purpose, the risk-adjusted performance measures suggested by Jensen, Treynor, and Sharpe are employed.
Carhart (1997) shows that expenses and common factors in stock returns such as beta, market capitalization, 1-year return momentum, and whether the portfolio is value or growth oriented "almost completely" explain short-term persistence in risk-adjusted returns. He concludes that his evidence does not "support the existence of skilled or informed mutual fund portfolio managers."

Yuxing Yan (1999) examined the performance of 67 US mutual funds and the S&P 500 Index with 10 years of daily return data from 1982 to 1992. The S&P index was used as the benchmark index. Daily data were transformed into weekly data for computational reasons. In the calculations, it was assumed that the S&P 500 market index is a good one, i.e., it is efficient and its variance is constant.

Redman et al.'s (2000) study examines the risk-adjusted returns using Sharpe's Index, Treynor's Index, and Jensen's alpha for five portfolios of international mutual funds during 1985–1994. The benchmarks for comparison were the US market, proxied by the Vanguard Index 500 mutual fund, and a portfolio of funds that invest solely in US stocks. The results show that for 1985 through 1994 the portfolio of international mutual funds outperformed the US market and the portfolio of US mutual funds.

Rahul Bhargava et al. (2001) evaluated the performance of 114 international equity managers over the January 1988 to December 1997 period. Performance tests are conducted using the Sharpe and Jensen performance methodologies. Three major findings are reported. First, international equity managers, on average, were unable to outperform the MSCI world market proxy during the sample period. Second, geographic asset allocation and equity style allocation decisions enhanced the performance of international managers during the sample period. Third, separately managed funds outperformed mutual funds.

Sadhak's (2003) study is an attempt to evaluate the performance of Indian mutual funds with the help of data pertaining to (a) trends in income and expenses, (b) investment yield and risk-associated returns, and (c) returns of Indian mutual funds vis-à-vis returns of other emerging markets.

Bala Ramasamy and Yeung's (2003) survey focused on Malaysia, where the mutual fund industry started in the 1950s but only gained importance in the 1980s with the establishment of a government-initiated program. The sample, consisting of 56 financial advisors representing various life insurance and mutual fund companies, resulted in 864 different profiles of mutual funds. Conjoint analysis was employed to generate the questionnaire and analyze its results. The results of this survey point to three important factors which dominate the choice of mutual funds: consistent past performance, size of funds, and costs of transaction.

Chang et al. (2003) identified a hedging factor in the equilibrium asset pricing model and used this benchmark to construct a new performance measure. Based on this measure, they are able to evaluate mutual fund managers' hedge timing ability in addition to more traditional security selectivity and market timing. While security selectivity performance involves forecasts of price movements of selected individual stocks, market timing measures the forecasts of next-period realizations of the market portfolio. The empirical evidence indicates that the selectivity measure is positive on average and the market timing measure is negative on average.
Obeid (2004) has suggested a new dimension called the "modified approach for risk-adjusted performance of mutual funds." This method can be considered more powerful because it allows not only for an identification of active resources but also for an identification of risk. He observed two interesting results: first, it can be shown that in some cases a superior security selection effect is largely dependent on taking higher risks; second, even in the small sample analyzed in the study, significant differences appear between each portfolio manager's style of selection.

O.P. Gupta and Amitabh Gupta (2004) published their research on select Indian mutual funds during a 4-year period from 1999 to 2003 using weekly returns based on NAVs for 57 funds. They found that fund managers have not outperformed the relevant benchmark during the study period. The funds earned an average return of 0.041 % per week against the average market return of 0.035 %. The average risk-free rate was 0.15 % per week, indicating that the sample funds have not earned even the equivalent of the risk-free return during the study period.

Subash Chander and Japal Singh (2004) considered selected funds during the period from November 1993 to March 2003 for the purpose of their study. It was found that the Alliance Mutual Fund and Prudential ICICI Mutual Fund posted better performance for the period of study, in that order, as compared to other funds. Pioneer ITI, however, showed average performance, and Templeton India mutual fund staged a poor show.

Amit Singh Sisodiya (2004) makes a comparative analysis of the performance of different mutual funds. He explains that a fund's performance, when viewed on the basis of returns alone, would not give a true picture of the risk the fund has taken. Hence, a comparison of risk-adjusted returns is the criterion for analysis.

Bertoni et al. (2005) analyzed the passive role in which, implicitly, institutional investors would be placed in such a context. The study was conducted in Italy using empirical evidence from the Italian stock exchange (Comit Index). This study finds that three factors reduce the freedom of institutional investors to manage their portfolio – the market target size, the fund structure, and the benchmarking.

Sudhakar and Sasi Kumar (2005) made a case study of the Franklin Templeton mutual fund. The sample consists of a total of ten growth-oriented mutual funds during the period from April 2004 to March 2005. The NIFTY, based on the NSE Index, was used as the proxy for the market index, and each scheme is evaluated with respect to the NSE index to find out whether the schemes were able to beat the market or not. It was found that most of the growth-oriented mutual funds were able to deliver better returns than the benchmark indicators. In the sample study, all the funds have positive differential returns, indicating better performance and diversification of the portfolio, except two funds with negative differential returns, viz., Franklin India Bluechip Fund and Templeton India Income Fund.
Martin Eling (2006) made a remarkable contribution to the theory of "performance evaluation measures." In this study, data envelopment analysis (DEA) is presented as an alternative method for hedge fund performance measurement. As an optimization result, DEA determines an efficiency score, which can be interpreted as a performance measure. An important result of the empirical study is that completely new rankings of hedge funds emerge compared to classic performance measures. George Comer (2006) examined the stock market timing ability of two samples of hybrid mutual funds. The results indicate that the inclusion of bond indices and a bond timing variable in a multifactor Treynor-Mazuy model framework leads to substantially different conclusions concerning the stock market timing performance of these funds relative to the traditional Treynor-Mazuy model: the multifactor framework finds less stock timing ability over the 1981–1991 time period and provides evidence of significant stock timing ability across the second fund sample during the 1999–2000 period. Yoon K. Choi (2006) proposed an incentive-compatible portfolio performance evaluation measure. In this model, a risk-averse portfolio manager is delegated to manage a fund, and his portfolio construction (and information-gathering) effort is not directly observable to investors; managers are assumed to maximize investors' gross returns net of managerial compensation. He considers the effect of organizational elements such as economies of scale on incentives and thus on performance. Ramesh Chander's (2006) study examined the investment performance of managed portfolios with regard to sustainability of such performance in relation to fund characteristics, parameter stationarity, and benchmark consistency. The study under consideration is based on the performance outcome of 80 investment schemes from public as well as private sectors for the 5-year period encompassing January 1998 through December 2002. The sample comprised 33.75 % of small, 26.75 % of medium, 21.25 % of large, and 18.75 % of the giant funds. Ramesh Chander's (2006a) study on market timing abilities enables us to understand how well the manager has been able to achieve investment targets and how well risk has been controlled in the process. The results reported were unable to generate adequate statistical evidence in support of managers' successful market timing. This persisted across measurement criteria, fund characteristics, and the benchmark indices. However, absence of performance is noted for alternative sub-periods, signifying the negation of survivorship bias. Beckmann et al. (2007) found that Italian female professionals not only assess themselves as more risk averse than their male colleagues, they also prefer a more passive portfolio management style compared to the level they are allowed. Besides, in a competitive tournament scenario near the end of the investment period, female asset managers do not try to become the ultimate top performer when they have outperformed the peer group. However, in case of underperformance, the risk of deviating from the benchmark makes female professionals more willing than their male colleagues to seize a chance of catching up. Gajendra Sidana (2007) made an attempt to classify hundreds of mutual funds employing cluster analysis and using a host of criteria like the 1-year return, 2-year annualized return, 3-year annualized return, 5-year annualized return, alpha,
and beta. The data are obtained from Value Research. The author finds inconsistencies between investment style/objective classification and the return obtained by the fund. Coates and Hubbard (2007) reviewed the structure, performance, and dynamics of the mutual fund industry and showed that they are consistent with competition. It was also found that concentration and barriers to entry are low, actual entry is common and continuous, pricing exhibits no dominant long-term trend, and market shares fluctuate significantly. Their study also focused on the "effects of competition on fees" and "pricing anomalies." They suggested that legal interventions are necessary in setting fees for mutual funds in the United States. Subha and Bharati's (2007) study is carried out for open-ended mutual fund schemes, and 51 schemes are selected by a convenience sampling method. NAVs are taken for a period of 1 year from 1 October 2004 to 30 September 2005. Out of the 51 funds, as many as 18 schemes earned higher returns than the market return. The remaining 33 funds, however, generated lower returns than the market. Sondhi's (2007) study analyzes the financial performance of 36 diversified equity mutual funds in India, in terms of rates of return, comparison with the risk-free return, benchmark comparison, and risk-adjusted returns of diversified equity funds. Fund size, ownership pattern of the AMC, and type of fund are the main factors considered in this study. The study reveals that the private sector is dominating the public sector. Cheng-Ru Wu et al.'s (2008) study adopts the modified Delphi method and the analytic hierarchy process to design an assessment method for evaluating mutual fund performance. The most important criterion for mutual fund performance should be "mutual fund style," followed by "market investment environment." This result indicates investors' focus when they evaluate mutual fund performance. Eleni Thanou's (2008) study examines the risk-adjusted overall performance of 17 Greek equity mutual funds between the years 1997 and 2005. The study evaluated the performance of each fund based on the CAPM performance methodology, calculating the Treynor and Sharpe indexes for the 9-year period as well as for three sub-periods displaying different market characteristics. The results indicated that the majority of the funds under examination followed the market closely, achieved overall satisfactory diversification, and some consistently outperformed the market, while the results in market timing are mixed, with most funds displaying negative market timing capabilities. Kajshmi et al. (2008) studied a sample of schemes over an 8-year period. This study considers performance evaluation and is restricted to the schemes launched in the year 1993, when the industry was thrown open to the private sector under the regulated environment established by the SEBI (Mutual Funds) Regulations 1993. The performance of the sample schemes was in line with that of the market, as evident from the positive beta values. All the sample schemes were not well diversified, as depicted by the differences between the Jensen alpha and Sharpe's differential return. Massimo Massa and Lei Zhang (2008) documented the importance of the organizational structure of a mutual fund's asset management company. Their study found that more hierarchical structures invest less in firms located close to them and deliver
lower performance. An additional layer in the hierarchical structure reduces the average performance by 24 basis points per month. At the same time, more hierarchical structures lead funds to herd more and to hold less concentrated portfolios. Manuel Ammann and Michael Verhofen (2008) examined the impact of prior performance on the risk-taking behavior of mutual fund managers. Their sample of US funds starts in January 2001 and ends in December 2005. The study found that prior performance in the first half of the year has, in general, a positive impact on the choice of the risk level in the second half of the year. Successful fund managers increase the volatility and the beta and assign a higher proportion of their portfolio to value stocks, small firms, and momentum stocks in comparison to unsuccessful fund managers. Onur et al.'s (2008) study evaluates the performance of 50 large US-based international equity funds using risk-adjusted returns during 1994–2003. This study provides documentation on the risk-adjusted performance of international mutual funds. The evaluation is based on objective performance measures grounded in modern portfolio theory. Using the methodology developed by Modigliani and Modigliani in 1997, the study reports the returns that would have accrued to these mutual funds for a 5-year holding period as well as a 10-year holding period. It is evident from the empirical results of this study that the funds with the highest average returns may lose their attractiveness to investors once the degree of risk embedded in the fund has been factored into the analysis. Qiang Bu and Nelson Lacey (2008) examined the determinants of US mutual fund terminations and provided estimates of mutual fund hazard functions. Their study found that mutual fund termination correlates with a variety of fund-specific variables as well as with market variables such as the S&P 500 Index and the short-term interest rate. They tested the underlying assumptions of the semi-parametric Cox model and rejected proportionality. They also found that different fund categories exhibit distinct hazard functions depending on the fund's investment objectives. David M. Smith (2009) discussed the size and market concentration of the mutual fund industry, the market entry and exit of mutual funds, the benefits and costs of mutual fund size changes, the principal benefits and costs of ownership from fund shareholders' perspective, etc. This study is based on data from Morningstar (2009) about the US mutual fund industry, which was composed of 607 fund families. Baker et al. (2010) investigated the relation between the performance and characteristics of 118 domestic actively managed institutional equity mutual funds. The results showed that large funds tend to perform better, which suggests the presence of significant economies of scale. The evidence indicates a positive relation between cash holding and performance. They also found evidence in a univariate analysis that expense ratio class is an important determinant of performance, and the results are significant in a multivariate setting using Miller's active alpha as a performance metric. Khurshid et al. (2009) studied the structure of the mutual fund industry in India and analyzed the state of competition among all the mutual funds in the private sector
and public sector. The levels of competition and their trends were obtained for the period March 2003–March 2009. This study found that the overall mutual fund industry is facing a highly competitive environment. An increasing trend of competition was observed within bank-institution, private sector foreign, and private sector joint venture mutual funds. Mohit Gupta and Aggarwal's (2009) study focused on the portfolio creation and industry concentration of 18 ELSS schemes during April 2006 to April 2007. Mutual fund industry concentration was the variable used in classification or cluster creation. This exercise was repeated each month for the period under study. Finally, portfolio performance was compared with an index fund, a portfolio of three randomly picked funds of the previous month, and the return and risk parameters of the ELSS category as a whole. Talat Afza and Ali Rauf's (2009) study aims to provide guidelines to the managers of open-ended Pakistani mutual funds and benefit small investors by pointing out the significant variables influencing fund performance. An effort has been made to measure fund performance by using the Sharpe ratio with the help of pooled time-series and cross-sectional data, focusing on different fund attributes such as fund size, expenses, age, turnover, loads, and liquidity. The quarterly sample data are collected for all the open-ended mutual funds listed on the Mutual Fund Association of Pakistan (MUFAP) for the years 1999–2006. The results indicate that, among the various fund attributes, lagged return and liquidity had a significant impact on fund performance. Amar Ranu and Depali Ranu (2010) critically examined the performance of equity funds and found the top 10 best performing funds among 256 equity mutual fund schemes in this category. They considered three factors for selection: (a) mutual funds having 5 years of historical performance, (b) fund schemes having a minimum of Rs.400 crore of assets under management, and (c) funds which have an average return of more than 22.47. They found that the HDFC Top 200 (Growth) option was the best performer among the top 10 equity funds. Sunil Wahal and Albert Wang (2010) examined the impact of the entry of new mutual funds on incumbents, using the overlap in their portfolio holdings as a measure of competitive intensity. Their study revealed that funds with high overlap also experience quantity competition through lower investor flows, have lower alphas, and have higher attrition rates. These effects only appeared after the late 1990s, at which point there appears to be an endogenous structural shift in the competitive environment. Their concluding remark is that "the mutual fund market has evolved into one that displays the hallmark features of a competitive market." Sukhwinder Kaur Dhanda et al.'s (2012) study considered the BSE-30 as a benchmark to study the performance of mutual funds in India. The study period has been taken from 1 April 2009 to 31 March 2011. The findings of the study reveal that only three schemes performed better than the benchmark. In the year 2009, HDFC Capital Builder was the top performer, with a return of 69.18, a standard deviation of 26.37, and a beta of 0.78. The HDFC Capital Builder scheme provided reward for variability and volatility. HDFC Top 200 Fund and Birla Sun Life Advantage Fund are in second and third position in terms of return. HDFC Top 200 Fund
has shown better performance than Birla Sun Life Advantage Fund in terms of SD, beta, Sharpe ratio, and Treynor ratio. Birla Sun Life Advantage Fund has more risk than the benchmark. Kotak Select Focus Fund was the poorest performer in terms of risk and return. Except for two schemes, all other schemes performed better than the benchmark. Except for Kotak Select Focus Fund, all other schemes were able to provide reward for variability and volatility.
3.3
A Review on Various Models for Performance Evaluation
3.3.1
Jensen Model
Given the additional assumption that the capital market is in equilibrium, all three models yield the following expression for the expected one-period return on any security (or portfolio) j:
E(Rj) = RF + βj [E(Rm) − RF]
(3.1)
where RF = the one-period risk-free interest rate; βj = Cov(Rj, RM)/σ²(RM) = the measure of risk (hereafter called systematic risk) which the asset pricing model implies is crucial in determining the prices of risky assets; and E(RM) = the expected one-period return on the "market portfolio," which consists of an investment in each asset in the market in proportion to its fraction of the total value of all assets in the market. The model implies that the expected return on any asset is equal to the risk-free rate plus a risk premium given by the product of the systematic risk of the asset and the risk premium on the market portfolio.
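As an illustration only (not part of the original chapter), Jensen's alpha is commonly estimated as the intercept of a time-series regression of the fund's excess returns on the market's excess returns, consistent with Eq. 3.1. The sketch below assumes hypothetical return series and an arbitrary random seed.

```python
import numpy as np

def jensen_alpha(fund_returns, market_returns, risk_free):
    """Estimate Jensen's alpha and beta by OLS on excess returns (cf. Eq. 3.1)."""
    excess_fund = np.asarray(fund_returns) - np.asarray(risk_free)
    excess_mkt = np.asarray(market_returns) - np.asarray(risk_free)
    # OLS with intercept: excess_fund = alpha + beta * excess_mkt + error
    X = np.column_stack([np.ones_like(excess_mkt), excess_mkt])
    alpha, beta = np.linalg.lstsq(X, excess_fund, rcond=None)[0]
    return alpha, beta

# Hypothetical monthly data (60 observations)
rng = np.random.default_rng(0)
rm = rng.normal(0.008, 0.04, 60)                               # market returns
rf = np.full(60, 0.003)                                        # risk-free rate
rp = rf + 0.002 + 0.9 * (rm - rf) + rng.normal(0, 0.01, 60)    # fund returns
print(jensen_alpha(rp, rm, rf))   # alpha near 0.002, beta near 0.9
```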
3.3.2
Fama Model
In Fama’s decomposition performance evaluation measure of portfolio, overall performance can be attributed to selectivity and risk. The performance due to selectivity is decomposed into net selectivity and diversification. The difference between actual return and risk-free return indicates overall performance: Rp Rf
(3.2)
wherein Rp is the actual return on the portfolio, which is the monthly average return of the fund, and Rf is the monthly average return on 91-day treasury bills. The overall performance can further be bifurcated into performance due to selectivity and risk.
Thus,
Rp − Rf = [Rp − Rp(βp)] + [Rp(βp) − Rf]
(3.3)
In other words, overall performance = selectivity + risk, where Rp(βp) denotes the return on a benchmark portfolio with the same systematic risk βp as the fund.
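A minimal sketch of this decomposition follows, assuming the benchmark return for the fund's systematic risk is taken from the CAPM line; the input values are hypothetical monthly averages.

```python
def fama_decomposition(rp, rf, rm, beta_p):
    """Split overall performance (Rp - Rf) into selectivity and risk (cf. Eq. 3.3)."""
    benchmark = rf + beta_p * (rm - rf)    # return required for the fund's beta
    overall = rp - rf
    risk = benchmark - rf                  # reward for bearing systematic risk
    selectivity = rp - benchmark           # return attributable to selection
    return overall, selectivity, risk

# Hypothetical values: Rp = 1.2 %, Rf = 0.5 %, Rm = 1.0 %, beta = 0.9 (monthly)
print(fama_decomposition(0.012, 0.005, 0.010, 0.9))
```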
3.3.3
Treynor and Mazuy Model
Treynor and Mazuy developed a prudent and exclusive model to measure investment managers' market timing abilities. This formulation is obtained by adding a squared excess return term to the excess return version of the capital asset pricing model, as given below:
Rpt − Rft = α + βp (Rmt − Rft) + γp (Rmt − Rft)² + εpt
(3.4)
where Rpt is the monthly return on the fund, Rft is the monthly return on 91-day treasury bills, Rmt is the monthly return on the market index, and εpt is the error term. This model involves running a regression with the excess investment return as the dependent variable and the excess market return and squared excess market return as independent variables. The coefficient of the squared excess return acts as a measure of market timing ability and is tested for significance using a t-test. Significant and positive values provide evidence in support of the investment manager's successful market timing abilities.
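To make the regression concrete, the sketch below fits the Treynor-Mazuy specification of Eq. 3.4 by ordinary least squares on simulated data; the data-generating values are assumptions for illustration, and a significance test on the timing coefficient would follow as described above.

```python
import numpy as np

def treynor_mazuy(fund_excess, market_excess):
    """Fit Rp-Rf = a + b*(Rm-Rf) + y*(Rm-Rf)^2 + e and return (a, b, y)."""
    x = np.asarray(market_excess)
    X = np.column_stack([np.ones_like(x), x, x ** 2])
    coef, *_ = np.linalg.lstsq(X, np.asarray(fund_excess), rcond=None)
    return coef  # [alpha, beta, gamma]; gamma > 0 suggests timing ability

# Hypothetical monthly excess returns (120 observations)
rng = np.random.default_rng(1)
mkt = rng.normal(0.005, 0.04, 120)
fund = 0.001 + 1.0 * mkt + 0.5 * mkt ** 2 + rng.normal(0, 0.01, 120)
print(treynor_mazuy(fund, mkt))
```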
3.3.4
Statman Model
Statman measured mutual funds using the following equation (Statman 2000):
eSDAR (excess standard deviation adjusted return) = Rf + (Rp − Rf)(Sm/Sp) − Rm
(3.5)
In this formula, Rf = monthly return on 3-month treasury bills, Rp = monthly return on the fund portfolio, Rm = monthly return on the benchmark index, Sp = standard deviation of portfolio p's return, and Sm = standard deviation of the return on the benchmark index. This model is used for short-term investment analysis. The performance is compared with its benchmark on a monthly basis.
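A short sketch of the eSDAR calculation in Eq. 3.5 follows; the monthly figures are hypothetical.

```python
def esdar(rp, rf, rm, sp, sm):
    """Excess standard deviation adjusted return (cf. Eq. 3.5)."""
    return rf + (rp - rf) * (sm / sp) - rm

# Hypothetical monthly inputs: fund 1.1 %, T-bill 0.4 %, benchmark 0.9 %,
# fund volatility 5 %, benchmark volatility 4 %
print(esdar(0.011, 0.004, 0.009, 0.05, 0.04))
```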
3.3.5
Choi Model
Choi provides a theoretical foundation for an alternative portfolio performance measure that is incentive-compatible. In this model, a risk-averse portfolio manager
is delegated to manage a fund, and his portfolio construction (and information-gathering) effort is not directly observable to investors. The fund manager is paid on the basis of the portfolio return, which is a function of effort, managerial skill, and organizational factors. In this model, the effect of institutional factors is described by the incentive contractual form and the disutility (or cost) function of managerial efforts in fund operations. It focuses on the cost function as an organizational factor (simply, a scale factor). It was assumed that the disutility function of each fund is determined by the unique nature of its operation (e.g., fund size) and is an increasing function of managerial effort at an increasing rate.
3.3.6
Elango Model
Elango’s model also compares the performance of public sector funds vs private sector mutual funds in India. In order to examine the trend in performance of NAV during the study period, growth rate in NAV was computed. The growth rate was computed based on the following formula (Elango 2003): Growth rate: Rg ¼ ðY t Y 0 =Y 0 Þ 100
(3.6)
where Rg is the growth rate registered during the current year, Yt is the yield in the current year, and Y0 is the yield in the previous year. In order to examine whether the past is any indicator of future growth in the NAV, six regression analyses were carried out. The NAV of the base year was considered the dependent variable and that of the current year the independent variable. Equation:
Y = A + bX
(3.7)
Dependent variable: Y = NAV of 1999–2000; independent variable: X = NAV of 2000–2001. In the same way, the second regression equation was computed using the NAVs of 2000–2001 and 2001–2002 as dependent and independent variables.
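A brief sketch of the growth-rate computation (Eq. 3.6) and a single base-year-on-current-year regression (Eq. 3.7) follows; the NAV figures are invented for illustration.

```python
import numpy as np

def nav_growth_rate(nav_current, nav_previous):
    """Elango growth rate, Eq. 3.6: Rg = ((Yt - Y0) / Y0) * 100."""
    return (nav_current - nav_previous) / nav_previous * 100.0

# Hypothetical NAVs for a set of schemes in two adjacent years
nav_1999_2000 = np.array([10.2, 12.5, 9.8, 15.1])   # dependent variable Y
nav_2000_2001 = np.array([11.0, 12.9, 10.4, 16.0])  # independent variable X
print(nav_growth_rate(nav_2000_2001, nav_1999_2000))

# Regression Y = A + b*X as in Eq. 3.7 (base-year NAV on current-year NAV)
X = np.column_stack([np.ones_like(nav_2000_2001), nav_2000_2001])
A, b = np.linalg.lstsq(X, nav_1999_2000, rcond=None)[0]
print(A, b)
```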
3.3.7
Chang, Hung, and Lee Model
The pricing model adopted by Jow-Ran Chang, Mao-Wei Hung, and Cheng-Few Lee is based on the competitive equilibrium version of the intertemporal asset pricing model derived in Campbell. The dynamic asset pricing model incorporates hedging risk as well as market risk. This model uses a log-linear approximation to the budget constraint to substitute out consumption from a standard intertemporal asset pricing model. Therefore, asset risk premia are determined by the covariances of asset returns with the market return and with news about the discounted value of all future market returns. Formally, the pricing restrictions on asset i imposed by the conditional version of the model are
Et[ri,t+1 − rf,t+1] = −Vii/2 + γVim + (γ − 1)Vih
(3.8)
where ri,t+1 is the log return on asset i; rf,t+1 is the log return on the riskless asset; Vii denotes Vart(ri,t+1); γ is the agent's coefficient of relative risk aversion; Vim denotes Covt(ri,t+1, rm,t+1); Vih denotes Covt(ri,t+1, (Et+1 − Et) Σj≥1 ρ^j rm,t+1+j); the parameter ρ = 1 − exp(c − w); and c − w is the mean log consumption-to-wealth ratio. This states that the expected excess log return on an asset, adjusted for a Jensen's inequality effect, is a weighted average of two covariances: the covariance with the return from the market portfolio and the covariance with news about future returns on invested wealth. The intuition in this equation is that assets are priced using their covariances with the return on invested wealth and with news about future returns on invested wealth.
3.3.8
MM Approach
The measure developed by Leah Modigliani and Franco Modigliani is better known as M2 in the investment literature. This measure is developed by adjusting the portfolio return. The adjustment is carried out on the uncommitted (cash balances) part of the investment portfolio at the riskless return so as to enable all portfolio holdings to participate in the return generation process. This adjustment is needed to create a level playing field for portfolio risk-return vis-à-vis the market return. The effect of this adjustment is reported below (Modigliani and Modigliani 1997):
M2 = Rp* − Rm
(3.9)
Rp* = (Rf (1 − Sdm/Sdp)) + (Rp (Sdm/Sdp))
(3.10)
In this formula, Rp* is the risk-adjusted portfolio return, Rp = expected return, Rf = risk-free return, Sdm = standard deviation of the market portfolio, and Sdp = standard deviation of the managed portfolio. In case the managed portfolio has twice the standard deviation of the market, the adjusted portfolio would be half invested in the managed portfolio and the remaining half would be invested at the riskless rate. Likewise, in case the managed portfolio has a lower standard deviation than the market portfolio, it would be levered by borrowing money and investing the money in the managed portfolio. A positive M2 value indicates superior portfolio performance, while a negative value indicates the actively managed portfolio manager's inability to beat the benchmark portfolio performance.
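A compact sketch of the M2 computation in Eqs. 3.9 and 3.10 follows; the return and volatility inputs are hypothetical annual figures.

```python
def m_squared(rp, rf, rm, sd_p, sd_m):
    """M2 measure (cf. Eqs. 3.9-3.10): blend the fund with cash to match market volatility."""
    rp_star = rf * (1.0 - sd_m / sd_p) + rp * (sd_m / sd_p)  # risk-adjusted return
    return rp_star - rm

# Hypothetical annual figures: fund 12 %, risk-free 4 %, market 10 %,
# fund volatility 25 %, market volatility 20 %
print(m_squared(0.12, 0.04, 0.10, 0.25, 0.20))   # positive value -> outperformance
```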
3.3.9
Meijun Qian’s Stage Pricing Model
Meijun Qian’s (2009) study reveals about the staleness, which is measured prices imparts a positive statistical bias and a negative dilution effect on mutual fund performance. First, evaluating performance with non-synchronous data generates
Table 3.1 Overview of different measures

Sharpe ratio
Description: Sharpe ratio = fund return in excess of the risk-free return / standard deviation of the fund. Sharpe ratios are ideal for comparing funds that have mixed asset classes.
Interpretation: The higher the Sharpe ratio, the better the fund's returns relative to the amount of risk taken.

Treynor ratio
Description: Treynor ratio = fund return in excess of the risk-free return / beta of the fund. The Treynor ratio is a relative measure of market risk.
Interpretation: A higher Treynor ratio indicates higher returns and lesser market risk of the fund.

Jensen measure
Description: This shows the relative ratio between alpha and beta.
Interpretation: The Jensen measure is based on systematic risk. It is also suitable for evaluating a portfolio's performance in combination with other portfolios.

M2 measure
Description: It matches the risk of the market portfolio and then calculates the appropriate return for that portfolio.
Interpretation: A high value indicates that the portfolio has outperformed, and vice versa.

Jensen model
Description: E(Rj) = RF + βj[E(Rm) − RF]
Interpretation: The expected one-period return on the "market portfolio," which consists of an investment in each asset in the market in proportion to its fraction of the total value of all assets in the market.

Fama model
Description: Rp − Rf = [Rp − Rp(βp)] + [Rp(βp) − Rf]
Interpretation: Overall performance = selectivity + risk.

Treynor and Mazuy model
Description: Rpt − Rft = α + βp(Rmt − Rft) + γp(Rmt − Rft)² + εpt
Interpretation: This model involves running a regression with the excess investment return as the dependent variable and the excess market return and squared excess market return as independent variables.

Statman model
Description: eSDAR = Rf + (Rp − Rf)(Sm/Sp) − Rm
Interpretation: This model is used for short-term investment analysis. The performance is compared with its benchmark on a monthly basis.

Elango model
Description: Rg = ((Yt − Y0)/Y0) × 100
Interpretation: In order to examine whether the past is any indicator of future growth in the NAV, six regression analyses were carried out. The NAV of the base year was considered the dependent variable and that of the current year the independent variable.
a spurious component of alpha. Second, stale prices create arbitrage opportunities for high-frequency traders whose trades dilute the portfolio returns and hence fund performance. This paper introduces a model that evaluates fund performance while controlling directly for these biases. Empirical tests of the model show that alpha net of these biases is on average positive although not significant and about 40 basis points higher than alpha measured without controlling for the impacts of stale pricing. The difference between the net alpha and the measured alpha consists of three components: a statistical bias, the dilution effect of long-term fund flows, and
the dilution effect of arbitrage flows. Thus, assuming that information generated in time t is not fully incorporated into prices until one period later, the observed fund return becomes a weighted average of true returns in the current and last periods:
rt = α + βrmt + et,
(3.11)
rt* = θrt−1 + (1 − θ)rt,
(3.12)
where rt denotes the true excess return of the portfolio with mean μ and variance σ², and rmt denotes the excess market return with mean μm and variance σm². Both rt and rmt are i.i.d., and the error term et is independent of rmt. rt* is the observed excess return of the portfolio with zero flows, while θ is the weight on the lagged true return. That is, the higher the θ, the staler the prices. Assumedly, arbitrage traders can earn the return rt* by trading at the fund's reported net asset values (Table 3.1).
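The stale-pricing mechanism of Eqs. 3.11 and 3.12 can be illustrated with a small simulation: observed returns are formed as a weighted average of current and lagged true returns, and a contemporaneous regression then shows the attenuated beta and the small spurious alpha described above. The weight θ, the true beta, and the return distributions below are illustrative assumptions, not estimates from the paper.

```python
import numpy as np

# Simulate Eqs. 3.11-3.12 with assumed parameters
rng = np.random.default_rng(2)
T, theta, beta_true = 2_000, 0.3, 1.0
rm = rng.normal(0.0004, 0.01, T)                     # excess market returns
r_true = beta_true * rm + rng.normal(0, 0.005, T)    # true excess fund returns

# Observed return: weighted average of lagged and current true returns
r_obs = np.empty(T)
r_obs[0] = r_true[0]
r_obs[1:] = theta * r_true[:-1] + (1 - theta) * r_true[1:]

# Regress observed returns on the contemporaneous market return
X = np.column_stack([np.ones(T), rm])
alpha_hat, beta_hat = np.linalg.lstsq(X, r_obs, rcond=None)[0]
print(alpha_hat, beta_hat)   # beta_hat is roughly (1 - theta) * beta_true
```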
3.4
Conclusion
This paper is intended to examine various performance models derived by financial experts across the globe. A number of studies have been conducted to examine investment performance of mutual funds of the developed capital markets. The measure of performance of financial instruments is basically dependent on three important models derived independently by Sharpe, Jensen, and Treynor. All three models are based on the assumption that (1) all investors are averse to risk and are single-period expected utility of terminal wealth maximizers, (2) all investors have identical decision horizons and homogeneous expectations regarding investment opportunities, (3) all investors are able to choose among portfolios solely on the basis of expected returns and variance of returns, (4) all transactions costs and taxes are zero, and (5) all assets are infinitely divisible.
References
Afza, T., & Rauf, A. (2009). Performance evaluation of Pakistani mutual funds. Pakistan Economic and Social Review, 47(2), 199–214. Ammann, M., & Verhofen, M. (2008). The impact of prior performance on the risk-taking of mutual fund managers. Annals of Finance, 5, 69–90. Arugaslan, O., Edwards, E., & Samant, A. (2008). Risk-adjusted performance of international mutual funds. Managerial Finance, 34(1), 5–22. Baker, M., Litov, L., Wachter, J., & Wurgler, J. (2010). Can mutual fund managers pick stocks? Evidence from their trades prior to earnings announcements. Journal of Financial and Quantitative Analysis, 45, 1111–1131. Baker, H. K., Haslem, J. A., & Smith, D. M. Performance and characteristics of actively managed institutional equity mutual funds [Electronic copy]. http://ssrn.com/abstract=1124577
Barua, S. K., & Varma, J. R. (1993). Speculative dynamics: The case of master shares (Advances in financial planning and forecasting, Vol. 5). Greenwich: JAI Press. Bauman, W. S., & Miller, R. E. (1995). Portfolio performance rankings in stock market cycles. Financial Analysts Journal, 51, 79–87. Beckmann, D., Lutje, T., & Rebeggiani, L. (2007). Italian asset managers' behavior: Evidence on overconfidence, risk taking, and gender (Discussion Paper No. 358). Hannover: Leibniz Universitat Hannover, Department of Economics, ISSN: 0949-9962. Bertoni, A., Brtmetti, G., & Cesari, C. (2005). Mutual fund benchmarking and market bubbles – A behavioural approach. Transition Studies Review, 12(1), 36–43. Bhargava, R., Gallo, J., & Swanson, P. T. (2001). The performance, asset allocation and investment style of international equity managers. Review of Quantitative Finance and Planning, 17, 377–395. Born, E. K. (1983). International banking in the 19th and 20th centuries. New York: St. Martin's Press. Brown, S. J., & Goetzmann, W. N. (1995). Performance persistence. The Journal of Finance, 50(2), 679–698. Bu, Q., & Lacey, N. (2008). On understanding mutual fund terminations. Journal of Economics and Finance, 33, 80–99. Carhart, M. M. (1997). Persistence in mutual fund performance. Journal of Finance, 52, 57–82. Chander, R. (2006). Informational efficiency, parameter stationarity and benchmark consistency of investment performance. The ICFAI Journal of Applied Finance, March 2006, 29–45. Chander, R. (2006b). Investment manager's market timing abilities: Empirical evidences. The ICFAI Journal of Applied Finance, 12(8), 15–31. Chander, S., & Singh, J. (2004). Performance of mutual funds in India: An empirical evidence. The ICFAI Journal of Applied Finance, 10, 45–63. Cheng-Ru Wu, Hsin-Yuan Chang, & Li-Syuan Wu (2008). A framework of assessable mutual fund performance. Journal of Modeling in Management, 3(2), 125–139. Choi, Y. K. (2006). Relative portfolio performance evaluation and incentive structure. Journal of Business, 79(2), 903–921. Christoffersen, S. E. (2001). Why do money fund managers voluntarily waive their fees? Journal of Finance, 74, 117–140. Coates, J. C., & Hubbard, G. R. (2007). Competition in the mutual fund industry: Evidence and implications for policy (Discussion Paper No. 592). Source: http://ssrn.com/abstract=1005426 Comer, G. (2006). Hybrid mutual funds and market timing performance. Journal of Business, 79(2), 771–797. Detzler, L. M. (1999). The performance of global bond mutual funds. Journal of Banking and Finance, 23, 1195–1217. Dhanda, S. K., Batra, G. S., & Anjum, B. (2012). Performance evaluation of selected open ended mutual funds in India. International Journal of Marketing, Financial Services & Management Research, 1(1), January 2012, ISSN 2277-3622. Available at http://indianresearchjournals.com Domain, D. L., & Rechenstein, W. (1998). Performance and persistence in money market mutual fund returns. Financial Services Review, 6, 169–183. Droms, W. G., & Walker, D. A. (1994). Investment performance of international mutual funds. Journal of Financial Research, 17, 1–14. Dunn, P. C., & Theisen, R. D. (1983). How consistently do active managers win? Journal of Portfolio Management, 9, 47–51. Elango, R. (2003). Which fund yields more returns? A comparative analysis on the NAV performance of select public v/s private/foreign open-ended mutual fund schemes in India. The Management Accountant, 39(4), 283–290. Eling, M. (2006). Performance measurement of hedge funds using data envelopment analysis. Financial Markets and Portfolio Management, 20, 442–471.
Elton, E. J., Gruber, M. J., & Blake, C. R. (1996). Market timing ability and volatility implied in investment newsletters' asset allocation recommendations. Journal of Financial Economics, 42, 397–421. Eun, C. S., Kolodny, R., & Resnick, B. G. (1991). U.S. based international mutual funds: A performance evaluation. The Journal of Portfolio Management, 17, 88–94. Fama, E. F. (1972). Components of investment performance. Journal of Finance, 27, 551–567. Friend, I., Brown, F. E., Herman, E. S., & Vickers, D. (1962). A study of mutual funds. Washington, D.C.: U.S. Government Printing Office. Goetzman, W. N., & Ibbotson, R. G. (1994). Do winners repeat? Predicting mutual fund performance. The Journal of Portfolio Management, 20, 9–18. Grinblatt, M., & Titman, S. (1989). Mutual fund performance – An analysis of quarterly holdings. Journal of Business, 62, 393–416. Grinblatt, M., & Titman, S. (1992). The persistence of mutual fund performance. Journal of Finance, 47(5), 1977–1984. Grinblatt, M., & Titman, S. (1994). Momentum investment strategies – Portfolio performance and herding: A study of mutual fund behavior. American Economic Review, 85, 1088–1105. Gupta, M., & Aggarwal, N. (2009). Mutual fund portfolio creation using industry concentration. The ICFAI Journal of Management Research, VIII(3), 7–20. Gupta, O. P., & Gupta, A. (2004). Performance evaluation of select Indian mutual fund schemes: An empirical study. The ICFAI Journal of Applied Finance, 10, 81–97. Hendricks, D., Patel, J., & Zeckhauser, R. (1993). Hot hands in mutual funds: A short run persistence of relative performance. Journal of Finance, 48, 93–130. Jayadeve, M. (1996). Mutual fund performance: An analysis of monthly returns. Finance India, X(1), 73–84. Jensen, M. C. (1968). The performance of mutual funds in the period 1945–1964. Journal of Finance, 23, 389–416. Jow-Ran Chang, Mao-Wei Hung, & Cheng-Few Lee (2003). An intertemporal CAPM approach to evaluate mutual fund performance. Review of Quantitative Finance and Accounting, 20, 425–433. Kajshmi, N., Malabika, D., & Murugesan, B. (2008). Performance of the Indian mutual funds: A study with special reference to growth schemes. Asia-Pacific Business Review, IV(3), 75–81. Khurshid, S. M. Z., Rohit, & Sing, G. P. (2009). Level and trends of competition among the mutual funds in India. Research Journal of Business Management, 3(2), 47–67. Lintner, J. (1965). The valuation of risk assets and the selection of risky investments in stock portfolios and capital budgets. The Review of Economics and Statistics, 47(1), 13–37. Massa, M., & Lei Zhang (2008). The effects of organizational structure on asset management. http://ssrn.com/abstract=1328189 Meijun Qian (2009). Stale prices and the performance evaluation of mutual funds. Journal of Financial and Quantitative Analysis. McDonald, J. G. (1974). Objectives and performance of mutual funds 1960–1967. Journal of Financial and Quantitative Analysis, 9, 311–333. Modigliani, F., & Modigliani, L. (1997). Risk adjusted performance. Journal of Portfolio Management, 23, 45–54. Obeid, A. T. (2004). A modified approach for risk-adjusted performance attribution. Financial Markets and Portfolio Management, 18(3), 285–305. Philpot, J., Hearth, D., Rimbey, J., & Schulman, C. (1998). Active management, fund size and bond mutual fund returns. The Financial Review, 33, 115–126. Ramasamy, B., & Yeung, C. H. M. (2003).
Evaluating mutual funds in an emerging market: Factors that matter to financial advisors. International Journal of Bank Marketing, 21, 122–136. Ranu, A., & Ranu, D. (2010, January). The best performing equity funds. The Analyst. Redmand, A. L., Gullett, N. S., & Manakyan, H. (2000). The performance of global and international mutual funds. Journal of Financial and Strategic Decisions, 13(1), 75–85.
Sadhak, H. (2003). Mutual funds in India: Marketing strategies and investment practices. New Delhi: Response Books. Sharpe, W. F. (1964). Capital asset prices: A theory of market equilibrium under conditions of risk. Journal of Finance, 19, 225–242. Sharpe, W. F. (1966). Mutual fund performance. Journal of Business, 39, 119–138. Sidana, G. (2007). Classifying mutual funds in India: Some results from clustering. Indian Journal of Economics and Business, II(2). Sisodiya, A. S. (2004, July). Mutual fund industry in India – Coming of age. Chartered Financial Analyst, pp. 17–22. Smith, D. M. (2009). The economics of mutual funds. In J. A. Haslem (Ed.), A companion to mutual funds (Chap. 3). New York: Wiley. Smith, K., & Tito, D. (1969). Risk-return measures of ex post portfolio performance. Journal of Financial and Quantitative Analysis, 4, 449–471. Sondhi, H. J. (2007). Financial performance of equity mutual funds in India. New Delhi: Deep & Deep. Statman, M. (2000). Socially responsible mutual funds. Financial Analysts Journal, 56, 30–38. Subha, M. V., & Bharati, J. S. (2007). An empirical study on the performance of select mutual fund schemes in India. Journal of Contemporary Research in Management, I(1). Sudhakar, A., & Sasi, K. K. (2005, November). Performance evaluation of mutual funds: A case study. Southern Economist, pp. 19–23. Thanou, E. (2008). Mutual fund evaluation during up and down market conditions: The case of Greek equity mutual funds. International Research Journal of Finance and Economics, 13. Treynor, J. L. (1965). How to rate management of investment funds. Harvard Business Review, 43, 63–75. Treynor, J. L., & Mazuy, K. K. (1966). Can mutual funds outguess the market? Harvard Business Review, 44, 131–136. Volkman, D. A., & Wohar, M. E. (1995). Determinants of persistence in relative performance of mutual funds. Journal of Financial Research, 18, 415–430. Wahal, S., & Albert (Yan) Wang (2010). Competition among mutual funds. Journal of Financial Economics. Source: http://ssrn.com/abstract=1130822 Yuxing Yan (1999). Measuring the timing ability of mutual fund managers. Annals of Operations Research, 87, 233–243.
4
Simulation as a Research Tool for Market Architects Robert A. Schwartz and Bruce W. Weber
Contents
4.1 Studying Market Structure 122
4.2 The Interaction Between Theoretical Modeling, Empirical Analysis, and Simulation 123
4.2.1 An Application: Call Auction Trading 123
4.2.2 An Application: The Search for an Equilibrium Price and Quantity 125
4.2.3 The Realism of Computer-Generated Data Versus Canned Data 128
4.3 An Equity Market Trading Simulation Model 129
4.3.1 Informed Orders 130
4.3.2 Liquidity Orders 130
4.3.3 Momentum Orders 131
4.4 Simulation in Action 131
4.4.1 Trading Instructions: Simulation A 132
4.4.2 Trading Instructions: Simulations B-1 and B-2 136
4.4.3 Trading Instructions: Simulation C 137
4.5 Conclusion 141
Appendix 141
Modeling Securities Trading 141
Further Details on Simulation Model and Environment 142
Components of the Market Model 143
References 146
R.A. Schwartz (*) Zicklin School of Business, Baruch College, CUNY, New York, NY, USA e-mail: [email protected] B.W. Weber Lerner College of Business and Economics, University of Delaware, Newark, DE, USA e-mail: [email protected] C.-F. Lee, J. Lee (eds.), Handbook of Financial Econometrics and Statistics, DOI 10.1007/978-1-4614-7750-1_4, # Springer Science+Business Media New York 2015
Abstract
Financial economists have three primary research tools at their disposal: theoretical modeling, statistical analysis, and computer simulation. In this chapter, we focus on using simulation to gain insights into trading and market structure topics, which are growing in importance for practitioners, policy-makers, and academics. We show how simulation can be used to gather data on trading decision behavior and to analyze performance in securities markets under controlled yet competitive conditions. We find that controlled simulations with participants are a flexible and reliable research tool when it comes to studying issues involving traders and market architecture. The role of the discrete event simulation model we have developed is to create a backdrop, or a controlled stochastic environment, for running market experiments with live subjects. Simulations enable us to gather data on trading participants’ decision making and to ascertain the ability of incentives and market structures to influence outcomes. The statistical methods we use include experimental design and careful controls over experimental parameters such as the instructions given to participants. Furthermore, results are assessed both at the individual level to understand how participants respond to incentives in a trading setting and also at the market level to know whether the predicted outcomes are achieved and how well the market operated. There are two statistical methods described in the chapter. The first is discrete event simulation and the model of computer-generated trade order flow that we describe in Sect. 4.3. To create a realistic, but not ad hoc, market background, we use draws from a log-normal returns distribution to simulate changes in a stock’s fundamental value, or P*. The model uses price-dependent Poisson distributions to generate a realistic flow of computer-generated buy and sell orders whose intensity and supply-demand balance vary over time. The order flow fluctuations depend on the difference between the current market price and the P* value. In Sect. 4.4, we illustrate the second method, which is experimental control to create groupings of participants in our simulations that have the same trading “assignment.” The result is the ability to make valid comparisons of traders’ performances in the simulations. Keywords
Trading simulations • Market microstructure • Order flow models • Random walk models • Experimental economics • Experimental control
4.1
Studying Market Structure
Financial economists have three major research methods at their disposal: theoretical modeling, statistical analysis, and simulation. We will focus on using simulation to gain insights into trading and market structure. In so doing, we show how simulation can be used to analyze participant behavior in a security market. We find that controlled simulations with participants are a flexible and reliable research tool when it comes to studying trading behavior and market architecture.
Using good experimental design we can draw statistically valid conclusions from simulations at both the individual level to understand how participants respond to incentives in a trading setting and also at the market level to know whether the predicted outcomes are achieved and how well the market operated. We begin by considering the interaction between the three tools.
4.2
The Interaction Between Theoretical Modeling, Empirical Analysis, and Simulation
A theoretical formulation based on a limited number of abstract assumptions enables complex reality to be translated into a simplified representation that can be rigorously analyzed. As such, theoretical modeling typically provides the underpinnings for both empirical and simulation analysis. Theory alone, however, can take us only so far. One cannot expect that every detailed aspect of reality can be analyzed from a theoretical vantage point (Clemons and Weber 1997). Moreover, in light of the literature on behavioral economics, it is clear that not all human behavior follows the dictates of rational economic modeling. We turn to empirical analysis both to provide confirmation of a theoretical model and to describe aspects of reality that theoretical analysis has not been able to explain (Zhang et al. 2011). But necessary data may not be available and empirical analysis, like theoretical modeling, has its limitations. Variability in important variables, such as differences across time or traded instruments, is difficult for empiricists to control (Greene 2011; Kennedy 2008). Moreover, empirical variables may change in correlated ways, yet this should not be mistaken for a causal mechanism. When we seek insights that neither theory nor empirical analysis can provide, simulation has an important role to play (Parker and Weber 2012). An example will help explain.
4.2.1
An Application: Call Auction Trading
Consider an electronic call auction. At a call, submitted orders are batched together for simultaneous execution at a single clearing price. For the batching, buy orders are cumulated from the highest price to the lowest to produce a function that resembles a downward sloping demand curve, and sell orders are cumulated from the lowest price to the highest to produce a function that resembles an upward sloping supply curve. The algorithm generally used for determining the clearing price at the time that the market is called finds the price which maximizes the number of shares that execute. In an abstract, theoretical model, the number of shares that trade is maximized at the price where the downward sloping buy curve crosses the upward sloping sell curve. Consequently, with continuous order functions, the clearing price which maximizes the number of shares that trade in a call auction is uniquely determined by the point where the two curves cross, and this price, in economics parlance, is an equilibrium value.
All told, this call auction procedure has excellent theoretic properties, and one might expect that designing a well-functioning call would be a straightforward task. This is not the case, however. In reality, as the saying goes, “the devil is in the details.” For theoretical modeling, we might for analytic convenience assume a large enough number of participants so that no one individual has any market power. We might further assume that the cumulated buy and sell curves are continuous functions and that the call auction is the only trading facility available. In reality, however, the buy and sell curves are step functions and some players’ orders will be large enough to impact a call’s clearing price. To illustrate, assume an exact match of 40,000 shares to buy and 40,000 shares to sell at a price of $50 and that, at the next higher price of $50.10, sell orders totaling 50,000 shares and buy orders totaling 30,000 shares exist. A buyer could move the price up by entering more than 10,000 shares at $50.10 or greater. Notice also that the real-world buy and sell curves are step functions (neither price nor quantity is a continuous variable) and thus that the cumulated buy orders may not exactly match the cumulated sell orders at the price where the two curves cross. Moreover, many exchange-based call auctions are offered along with continuous trading in a hybrid environment. These realities of the marketplace affect participants’ order placement decisions and, consequently, impact market outcomes for both price and the number of shares that trade. In response, participants will enter their orders strategically when coming to the market to trade. These strategic interactions and the decisions that market participants make depend on the call auction’s rules of order disclosure (i.e., its transparency) and its rules of order execution which apply when an exact cross is not obtained. What guidance do market architects have in dealing with questions such as these other than their own, hopefully educated, speculation? The questions being raised may be too context specific for theory to address and, if a new market structure is being considered, the data required for empirical analysis will not yet exist. This is when simulation analysis can be used to good advantage. Regarding transparency, alternative call auction structures include full transparency (i.e., display the complete set of submitted orders), partial transparency (e.g., display an indicated clearing price and any order imbalance at that price), or no transparency at all (i.e., be a dark pool). Regarding the procedure for dealing with an inexact cross, the alternatives for rationing orders on the “heavy” side of the market include pro rata execution, the application of time priority to orders at the clearing price exactly, and the application of time priority to all orders at the clearing price and better (i.e., to higher priced buys or to lower priced sells). These and other decisions have been debated in terms of, for instance, the ability to game or to manipulate an auction, the incentive to enter large orders, and the incentive to submit orders early in the book building period before the auction is called. But definitive answers are difficult to come by. This is when valuable guidance can be (and has been) obtained via the use of simulation analysis.
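To make the basic clearing rule concrete, the sketch below (not part of the original chapter) cumulates buy orders from the highest price down and sell orders from the lowest price up, and then selects the price at which executable volume is maximized. The order books echo the numerical example above; the tie-breaking choice (the lowest qualifying price) is an illustrative assumption rather than a rule prescribed in the text.

```python
def call_auction_clearing(buys, sells):
    """Find the price that maximizes executable volume in a call auction.

    buys and sells are lists of (price, shares) limit orders.
    """
    prices = sorted({p for p, _ in buys} | {p for p, _ in sells})
    best_price, best_volume = None, 0
    for p in prices:
        cum_buy = sum(q for price, q in buys if price >= p)    # demand at p
        cum_sell = sum(q for price, q in sells if price <= p)  # supply at p
        volume = min(cum_buy, cum_sell)
        if volume > best_volume:
            best_price, best_volume = p, volume
    return best_price, best_volume

# Order books consistent with the example in the text:
# 40,000 shares cross exactly at $50.00; at $50.10 there are 50,000 to sell
# and only 30,000 to buy.
buys = [(50.10, 30_000), (50.00, 10_000)]
sells = [(50.00, 40_000), (50.10, 10_000)]
print(call_auction_clearing(buys, sells))    # -> (50.0, 40000)
```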
4.2.2
An Application: The Search for an Equilibrium Price and Quantity
In this section, we consider another application of simulation as a research tool: the search for an equilibrium price and quantity in a competitive marketplace. Determining equilibrium values for a resource's unit price and quantity traded is a keystone of economic analysis. In the standard formulation, equilibrium is determined by the intersection of market demand and supply curves, with scant consideration given to just how the buy and sell orders of participants actually meet in a marketplace. In fact, the "marketplace" is typically taken to be nothing more than a mystical, perfectly frictionless environment, and the actual discovery of equilibrium values for price and quantity is implicitly assumed to be trivial. Real-world markets are very different. In the non-frictionless environment, a panoply of transaction costs interact to make price and quantity discovery an imperfect process. In this far more complex setting, two further issues need to be analyzed: (1) the trading decisions that market participants make when confronted by the imperfections (frictions) that characterize real-world markets and (2) the determination of actual prices and quantities based on the decisions of the individual participants. Both issues are ideally suited for simulation analysis. We consider this further in this section of the chapter, with particular reference to one specific market: the secondary market for trading equity shares of already issued stock. Underlying the simulation analysis is an economic model that is based on the following assumptions:
1. The decision maker qua investor is seeking to maximize his/her expected utility of wealth as of the end of a single holding period.
2. The investor is risk averse.
3. There are just two assets, one risk-free (cash) and one risky (equity shares).
4. There are no explicit trading costs (i.e., there are no commissions or borrowing costs, and short selling is unrestricted).
5. Share price and share holdings are continuous variables.
6. There is a brief trading period (T0 to T1) that is followed by a single investment period (T1 to T2).
7. The participant's expectation of price at T2 is exogenous (i.e., independent of the stock's current price).
8. The investor is a perfect decision maker when it comes to knowing his/her demand curve to hold shares of the risky asset.
From this set of assumptions, and following the derivation described in Ho et al. (1985) and Francioni et al. (2010), we can obtain a participant's demand to hold shares of the risky asset. The participant's demand is described by two simple, linear functions:
P0 = a − 2bN (an ordinary demand curve)
and
PR = a − bN (a reservation price demand curve)
where P0 denotes price with respect to the ordinary curve, PR denotes a reservation price, and N is the number of shares held. The ordinary curve shows that if, for instance, the price of shares is P01, the participant maximizes expected utility by holding N1 shares; if alternatively price is P02, the participant maximizes expected utility by holding N2 shares; and so on. Values given by the reservation curve show that at a quantity N1, the maximum the participant would pay is PR1 when the alternative is to hold no shares at all; or, at a quantity N2, the maximum the participant would pay is PR2 when the alternative is to hold no shares at all; and so on. Identifying the reservation price demand curve enables us to obtain easily a monetary measure of the gains from trading. To facilitate the exposition, assume for the moment that the participant initially holds no shares. Then if, for instance, N1 shares are acquired, we have
Surplus = N1 (PR − P)
where "Surplus" denotes the gains from buying N1 shares and P is the price at which the N1 shares were bought (note that, for a purchase, we have Surplus > 0 for P < PR). The participant controls P via the price of his/her order but knows neither the price at which the order will execute (if price improvement is possible) nor whether or not the order will, in fact, execute. Because P is not known with certainty, Surplus is not known with certainty and thus the investor seeks to maximize the expected value of Surplus. It follows from the derivations cited above that the maximization of the expected value of Surplus is consistent with the maximization of the expected utility of the end of the holding period wealth (i.e., at time T2). With the demand curves to hold shares established, it is straightforward to obtain linear functions that describe the investor's propensity to buy and to sell shares in relation to both ordinary and reservation prices. If the investor's initial position is zero shares, the buy curve is the same as the downward sloping demand curve. By extending the demand curve up and through the price intercept (a) into the negative quadrant, we see that at prices higher than the intercept, a, the investor will want to hold a negative number of shares (i.e., establish a short position by selling shares that he/she does not currently own). By flipping the portion of the demand to hold curve that is in the negative quadrant into the positive quadrant (and viewing a negative shareholdings adjustment as a positive sell), we obtain a positively inclined (vertical) mirror image that is the sell function. The procedure just described can easily be replicated for any initial share holdings (either a long or a short position). The individual buy and sell curves can be aggregated across investors and, following standard economic theory, the intersection of the aggregate buy curve with the aggregate sell curve establishes the equilibrium values of share price and the number of shares that trade. This equilibrium would be achieved if all participants were, simultaneously,
to submit their complete demand functions to the market, as they presumably would in a perfect, frictionless environment. Continue to assume that investors know their continuous, negatively inclined demand curves to hold shares of the risky asset, but let us now consider how they might operate in a non-frictionless marketplace where they cannot all simultaneously submit their complete and continuous, downward sloping buy functions and upward sloping sell functions. What specific orders will they send to the market, and how will these orders interact so as to be turned into trades? Will equilibrium values for price and quantity be achieved? This highly complex issue might best be approached by observing participant behavior and, to this end, simulation can be used as the research tool. Here is how one might go about it. Let participants compete with each other in a networked, simulated environment. Each participant is given a demand curve to hold shares of a risky stock, and each is asked to implement that demand curve by submitting buy or sell orders to the market. The participants are motivated to place their orders strategically given their demand curves, the architectural structure of the marketplace, and the objective against which their performance is assessed – namely, the maximization of expected surplus. The simulation can be structured as follows. Give all of the participants in a simulation run the same demand curve,
P = 20 − 0.5N0
where N0 represents the number of shares initially held. Divide the participants into two equal groups, A and B, according to the number of shares they are initially holding, with N0A = 4 for group A players and N0B = 8 for group B players. Accordingly, the buy curve for each individual in group A is
P = 18 − 0.5Q
and the sell curve for each individual in group B is
P = 16 + 0.5Q
where Q is the number of shares bought or sold. The associated reservation curves are
PR = 18 − 0.25Q (for the buyers)
and
PR = 16 + 0.25Q (for the sellers)
From the ordinary (as opposed to the reservation) buy and sell curves, and recalling that the groups A and B are of equal size, we obtain an equilibrium price of 17 and an equilibrium quantity traded of 2 (per participant). From the
reservation buy and sell curves, we see that if each participant bought or sold two shares, the surplus for each would be

2(17.50 − 17.00) = $1.00 (for the buyers)

and

2(17.00 − 16.50) = $1.00 (for the sellers)

Participants, however, do not know the distribution of initial shareholdings across the other investors, and thus they know neither the buy and sell functions of all the other traders, nor the equilibrium price and quantity for the market. It is up to each of them individually to submit their orders wisely, and it is up to all of them collectively to find the equilibrium price along with the quantity to trade at that price. How do they operate? How well do they do? How quickly and successfully can they collectively find equilibrium values for P and Q? And how are participant decisions and market outcomes affected by different market structures? With regard to each of these issues, important insights can be obtained through the use of simulation as a research tool.
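To make the arithmetic of this example easy to verify, here is a minimal Python sketch (our own illustration, not part of the TraderEx software) that computes the equilibrium price and quantity from the ordinary buy and sell curves and then evaluates each side's surplus from the reservation curves.

# Illustrative check of the classroom example: buy curve P = 18 - 0.5Q,
# sell curve P = 16 + 0.5Q, and reservation curves with half the slope.

def buy_price(q):
    return 18.0 - 0.5 * q          # ordinary buy curve (group A)

def sell_price(q):
    return 16.0 + 0.5 * q          # ordinary sell curve (group B)

def buy_reservation(q):
    return 18.0 - 0.25 * q         # reservation buy curve

def sell_reservation(q):
    return 16.0 + 0.25 * q         # reservation sell curve

# Equilibrium where the ordinary buy and sell curves cross: 18 - 0.5Q = 16 + 0.5Q.
q_star = (18.0 - 16.0) / (0.5 + 0.5)            # = 2 units per participant
p_star = buy_price(q_star)                      # = 17

buyer_surplus = q_star * (buy_reservation(q_star) - p_star)    # 2 * (17.50 - 17.00) = 1.00
seller_surplus = q_star * (p_star - sell_reservation(q_star))  # 2 * (17.00 - 16.50) = 1.00
print(p_star, q_star, buyer_surplus, seller_surplus)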
4.2.3
The Realism of Computer-Generated Data Versus Canned Data
A live equity trading simulation can either depend exclusively on person-to-person interaction or add computer-driven order flow; we focus on the latter. Most experimental economics research is based on the former. Computer-driven simulations can be based on either canned data or on data that the computer itself generates; we focus on the latter. Canned data have two major limitations. First, participants in the simulation cannot affect the stream of prices that they are trading against and, consequently, the dynamic, two-way interaction between live participants and the marketplace is absent. Second, canned data cannot be used to analyze new market structures because, quite simply, they are exclusively the product of the actual market within which they were generated. On the other hand, a simulation model based on computer-generated data faces a formidable challenge: capturing the dynamics of a real-world market. Canned data do not face this problem – they are, after all, generated in a real marketplace. We discuss the challenge of realism in this section of the chapter with specific reference to an equity market.

It has been well established in the financial economics literature that, in an equity market which is fully efficient, security prices follow random walks. "Efficiency" in this financial markets context is generally understood as referring to "informational efficiency," by which we mean that market prices reflect all existing information. In a nutshell, the raison d'être of a stock's price following a random walk in a fully informationally efficient market can be understood as follows. If all (and we mean all)
information about a security is reflected in a stock's market price, then only totally new, totally unanticipated information can cause the stock's price to change. But totally new and thus totally unanticipated information can be either bullish or bearish with equal probability and thus, with equal probability, can lead to either positive or negative price changes (returns). Thus, the argument goes, in an informationally efficient market, returns are not predictable and stock prices follow random walks.

It would be relatively straightforward to structure an equity market simulation based on machine-driven prices that follow a random walk. One would start a simulation run with an arbitrarily selected seed price and have that price evolve as the simulation progresses according to random draws from a (log-normal) returns distribution with arbitrary variance and a zero mean. Real-world prices do not evolve in this fashion, however. In a world characterized by trading costs, imperfect information, and divergent (i.e., nonhomogeneous) expectations based on publicly available information, prices do not follow simple random walks. Rather, price changes (returns) in relatively brief intervals of time (e.g., intraday) evolve in dynamic ways that encompass complex patterns of first-order and higher-order correlations. Structuring a computer simulation to produce prices that capture this dynamic property of real-world markets is the objective. In the next section of this chapter, we set forth the major properties of a machine-driven trading simulation, TraderEx, which we have formulated so as to achieve this goal.
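For concreteness, a minimal sketch (ours, with arbitrary parameter values) of the naive machine-driven price process just described, i.e., a seed price evolving by zero-mean log-normal return draws, might look as follows. As the text argues, this is precisely the kind of simple random walk that real intraday prices do not follow.

import math, random

def random_walk_path(seed_price=20.0, n_steps=390, sigma=0.001, rng_seed=7):
    # Price evolves by i.i.d. zero-mean normal log returns, so the price level is log-normal.
    rng = random.Random(rng_seed)
    prices = [seed_price]
    for _ in range(n_steps):
        prices.append(prices[-1] * math.exp(rng.gauss(0.0, sigma)))
    return prices

path = random_walk_path()
print(round(path[0], 2), round(path[-1], 2))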
4.3
An Equity Market Trading Simulation Model
In this section, we focus on a key conceptual foundation of the TraderEx simulation model: how the machine-generated order flow is structured. Above all else, it is this modeling that captures the dynamic property of trades and prices and, in so doing, that enables our software to compete with canned data in terms of realism. First, we present a brief overview of the functions the computer performs. Looking under the hood, the TraderEx software can be thought of, first and foremost, as a package of statistical distributions. The prices and sizes of the machine-driven orders that power the TraderEx simulation are determined by draws from these distributions. The software also maintains and displays an order book or set of dealer quotes, drives a ticker tape that displays trade prices and sizes, computes summary statistics for each simulation run (e.g., a stock's volume-weighted average price and participant performance measures such as profit or loss), and provides post-trade analytics (both statistics and graphs). The simulation game can be played either individually (i.e., one person interacting with the machine-driven order flow in a solitaire environment) or as a group (i.e., in a networked environment that also includes machine-driven order flow). While our machine-driven order flow, as we have said, is powered by random draws from distributions, we are able to tell meaningful economic stories about these statistical draws. These stories involve exogenous information change and three different economic agents: informed traders, liquidity traders, and noise traders. Interestingly, while the simulation requires this tripartite division, it has
also been established in the academic microstructure literature that, for an equity market not to fail, informed traders must interact with liquidity traders, and noise traders must also be part of the mix.
4.3.1
Informed Orders
In TraderEx, orders submitted by informed traders are specified with reference to an exogenous variable we call P* that can be viewed as an equilibrium price. That is, at the value P*, the aggregate flow of buy and sell orders is in balance (much as, in economic analysis, buy and sell orders are in balance at the price where a demand curve crosses a supply curve). More specifically, the TraderEx market is in equilibrium when the lowest posted offer is greater than P* and the highest posted bid is less than P*. When this equilibrium is achieved, no informed orders are submitted to the market, and an incoming (liquidity) order can be from a buyer or a seller with equal probability. On the other hand, if P* is greater than the lowest posted offer, informed orders kick in and the probability of an incoming order being from a buyer is raised to 0.6 (a parameter that can, of course, be adjusted). Equivalently, if P* is lower than the highest posted bid, informed orders again kick in and the probability of an incoming order being from a seller is raised to 0.6. This asymmetry between the buy and sell orders that exists when P* is not within the quotes keeps the quotes loosely linked to P*. P* evolves as the simulation progresses according to a Poisson arrival process. Each jump in P* symbolizes informational change. The size of the change in P* at each new arrival is determined by a draw from a log-normal returns distribution with a zero mean and a variance that is a controllable parameter.
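The mechanics described in this subsection can be summarized in a few lines of Python; the sketch below is our own rendering with illustrative parameter values and names, not the actual TraderEx implementation.

import math, random

rng = random.Random(0)

def next_information_time(now, mean_wait=30.0):
    # P* jumps at Poisson arrivals, i.e., exponentially distributed inter-arrival times.
    return now + rng.expovariate(1.0 / mean_wait)

def jump_p_star(p_star, sigma=0.01):
    # Jump size drawn from a zero-mean log-normal returns distribution.
    return p_star * math.exp(rng.gauss(0.0, sigma))

def buy_probability(p_star, best_bid, best_ask, tilt=0.6):
    # Informed flow tilts order direction whenever P* lies outside the quotes.
    if p_star > best_ask:
        return tilt            # market is priced below P*: more buys arrive
    if p_star < best_bid:
        return 1.0 - tilt      # market is priced above P*: more sells arrive
    return 0.5                 # quotes straddle P*: balanced order flow

print(buy_probability(20.40, best_bid=20.00, best_ask=20.20))   # 0.6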
4.3.2
Liquidity Orders
The second component of the order flow, liquidity orders, is also modeled as a Poisson arrival process, but with one important difference: at any point in time, the probability of the newly arriving liquidity order being a buy equals the probability of its being a sell (each is 0.5). All liquidity orders are priced, with the price determined by a draw from a double triangular distribution that is located with reference to the best posted bid and offer quotes. A new liquidity order is entered on the book as a limit order if it is a buy with a price lower than the best posted offer or if it is a sell with a price higher than the best posted bid. A new liquidity order with a price equal to or more aggressive than the best posted offer (for a buy) or the best posted bid (for a sell) is executed immediately as a market order. Liquidity orders can (randomly) cause the market's bid and ask quotes to drift away from the equilibrium value, P*. When this occurs, informed orders that are entered as market orders pull market prices back towards P*.
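A stand-in for the liquidity-order logic is sketched below. The 50/50 buy/sell draw and the limit-versus-market classification follow the description above; the pricing draw uses Python's built-in triangular distribution as a stand-in for the double triangular distribution used in TraderEx, whose exact parameterization is not reproduced here.

import random

rng = random.Random(1)

def liquidity_order(best_bid, best_ask, price_range=1.0, tick=0.10):
    side = "buy" if rng.random() < 0.5 else "sell"          # equal buy/sell probability
    mid = 0.5 * (best_bid + best_ask)
    # Price drawn from a triangular-shaped distribution anchored on the quotes.
    price = round(rng.triangular(mid - price_range, mid + price_range, mid) / tick) * tick
    if side == "buy":
        order_type = "market" if price >= best_ask else "limit"   # crosses the offer
    else:
        order_type = "market" if price <= best_bid else "limit"   # crosses the bid
    return side, round(price, 2), order_type

print(liquidity_order(19.90, 20.10))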
4.3.3
Momentum Orders
The third component of the order flow is orders entered by noise traders. TraderEx activity includes just one kind of noise trader – a momentum player – and it operates as follows: whenever three or more buy orders (or sell orders) arrive sequentially, the conditional probability is increased that the next arriving order will also be a buy (or a sell). As in the microstructure literature, noise traders are needed in the simulation model to keep participants from too easily identifying price movements that have been caused by informed orders responding to a change in P*. This is what our momentum orders achieve. For instance, assume that P* jumps several ticks above the best posted offer. An accelerated arrival of informed buy orders would be triggered and prices on the TraderEx book would rise over a sequence of trades, causing a pattern of positively autocorrelated price movements that can, with relative ease, be detected by a live participant. But, to obscure this, the momentum orders create faux price trends that mimic, and therefore obfuscate, the information-induced trends.

Momentum orders play a further role in the TraderEx simulations. They systematically cause transaction prices to overshoot P*. Then, as informed orders kick in, prices in the simulation mean revert back to P*. This mean reversion and its associated accentuated short-run volatility encourage the placement of limit orders. This is because overshooting causes limit orders to execute, and limit order placers profit when price then mean reverts. To see this, assume that the stock is currently trading at the $23.00 level and that P* jumps from $23.00 to $24.00. As price starts to tick up to $24.00, momentum orders join the march and carry price beyond $24.00, to $24.20. Assume a limit order to sell is on the book at $24.20 and that it executes. The limit order placer then benefits from having sold at $24.20 when the momentum move ends and when a P* of $24.00 exerts its influence and price mean reverts back towards $24.00.
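The momentum rule can be expressed compactly; in the sketch below (ours), the probability bump applied after a run of three same-side arrivals is an illustrative value rather than the actual TraderEx parameter.

import random

rng = random.Random(2)

def next_order_side(recent_sides, base_prob=0.5, momentum_prob=0.65):
    # recent_sides: list of "buy"/"sell" for the most recent machine-generated orders.
    prob_buy = base_prob
    if len(recent_sides) >= 3 and len(set(recent_sides[-3:])) == 1:
        # Three or more same-side arrivals in a row: tilt the next order the same way.
        prob_buy = momentum_prob if recent_sides[-1] == "buy" else 1.0 - momentum_prob
    return "buy" if rng.random() < prob_buy else "sell"

print(next_order_side(["buy", "buy", "buy"]))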
4.4
Simulation in Action
Since the earliest version of TraderEx was developed in 1995, we have run hundreds of “live market” simulations with students in our trading and market microstructure electives and with executive education participants. In addition, we have developed training modules on trading for new hires at a number of global banks. We have also run controlled experimental economics studies of trading decision making and alternative market structures (Schwartz and Weber 1997). To illustrate the potential for research from using market simulation, we will examine the data generated by simulation participants’ behavior in a number of settings. In a January 2011 class session of the “Trading and Financial Market Structure” elective at London Business School, a simulation was run which covered
Fig. 4.1 Initial order book at start of simulated trading day. Limit orders to buy are on the left and limit orders to sell on the right. Participants enter buy and sell market orders in the white boxes at the top of the book. Limit orders are entered by clicking on the gray rectangles at the price level the user selects
1 day of trading in an order-driven market structure. In it, 42 graduate students were divided into 21 teams of two. The order book maintained price and time priority over limit orders. The price increment was 10 cents, and the order book at the open looked similar to Fig. 4.1 below.
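Price and time priority of the kind just described can be captured with a very small data structure. The sketch below is our own simplification (hypothetical class and method names), not the order book code used in TraderEx.

# Minimal price-time priority order book: best price first; within a price
# level, resting orders execute in the order they arrived (FIFO).
from collections import defaultdict, deque

class MiniBook:
    def __init__(self, tick=0.10):
        self.tick = tick
        self.bids = defaultdict(deque)   # price -> queue of resting order quantities
        self.asks = defaultdict(deque)

    def add_limit(self, side, price, qty):
        book = self.bids if side == "buy" else self.asks
        book[round(price, 2)].append(qty)

    def best_bid(self):
        return max(self.bids) if self.bids else None

    def best_ask(self):
        return min(self.asks) if self.asks else None

book = MiniBook()
book.add_limit("buy", 19.90, 500)
book.add_limit("sell", 20.10, 300)
print(book.best_bid(), book.best_ask())   # 19.9 20.1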
4.4.1
Trading Instructions: Simulation A
Eight teams were each given the instruction to sell 1,500 units, and seven teams were each asked to buy 1,300. Five other teams had the role of either day traders or proprietary trading desks. Three of these five teams were instructed to buy 900 then sell 900 and to have a closing position of 0. Two teams were asked to sell 800, then buy 800, and to close with a flat position. A trial simulation was run, and performance metrics were discussed before the simulation began. The teams with a sell instruction were told they would be assessed on the basis of the highest average selling price, while buying teams competed on the basis of the lowest average buying price. The “prop” teams were told that maximizing closing P&L was their objective but that they should finish flat and have no “overnight risk.” A screen similar to the one the participants saw is shown in Fig. 4.2.
Fig. 4.2 End of a trading day in a networked order book simulation from the screen used by the instructor or simulation administrator
134
R.A. Schwartz and B.W. Weber
1,500 Closing position goals: +1300, -1,500, or 0
1,000
500
LBS9
LBS21
LBS7
LBS20
LBS15
LBS8
LBS13
LBS1
LBS19
LBS3
LBS5
LBS11
LBS16
LBS10
LBS2
LBS12
LBS6
LBS4
LBS14
LBS18
0
500
−1,000 −1,500 −2,000
Fig. 4.3 Final positions of 21 trading teams. Teams were supposed to end with holdings of either 1,500 (sellers), +1,300 (buyers), or flat (0, prop-day traders)
One of the first lessons of behavioral studies of trading done via simulation is that following instructions is not simple for participants. As Fig. 4.3 shows, seven of the 21 teams did not end the simulation with the position they were instructed to have. Three of the selling teams sold more than instructed and three of the proprietary trading teams had short, nonzero positions at the end of the day. Of course, the noncompliant teams had excuses – “the end of the day came too fast,” or “there were not enough willing buyers in the afternoon,” or “we didn’t want to pay more than the VWAP price on the screen.” These complaints could, of course, also apply in a real equity market. In the simulation, the market opened at £20.00. The day’s high was £23.60, the low £18.80, and the last trade of the day was at £21.50. The £4.80 high-low range (24 %) reflects a volatile day. The day’s volume-weighted average price (VWAP) was £20.03. Trading volume was 42,224 units and 784 trades took place. Although the teams were in the same market, and had the same buying or selling instructions with the same opportunities, there were vast differences in performance. As Fig. 4.4 shows, the best buying team paid £19.68, or £0.35 less than both the worst team and VWAP, adding nearly 2 % to the investment return. The best selling team received £0.95 more per share than VWAP and £1.23 per share more than the worst selling team. This outcome would add almost 5 % to one selling investor’s return relative to the average. The conclusion is clear: trading performance has a substantial impact on investors’ returns.
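The benchmark used in these comparisons is the day's volume-weighted average price (VWAP). A minimal sketch of the calculation (ours), using made-up trades rather than the session data reported above:

def vwap(trades):
    # trades: list of (price, quantity) pairs for the whole market.
    return sum(p * q for p, q in trades) / sum(q for _, q in trades)

market_trades = [(20.00, 10_000), (19.90, 15_000), (20.20, 17_000)]   # hypothetical
team_avg_buy_price = 19.68
print(round(vwap(market_trades), 2), round(vwap(market_trades) - team_avg_buy_price, 2))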
Fig. 4.4 Performance of trading teams as measured by average buying or selling price (scale shown on the bottom of the figure, reading left to right from £19.60 to £20.20, with VWAP = £20.03 marked) and the percentage of teams' trading done via limit orders (scale shown on the top of the figure, reading right to left from 0 % to 100 %). Buying teams (buy 1,300) and selling teams (sell 1,500) are shown as separate groups.
Also shown in Fig. 4.4 is the teams' use of limit orders. Again, there is substantial variation, with the best buying team trading exclusively with limit orders and the best selling team completing 79 % of its trading with limit orders. Note that, as the chart shows, a higher use of limit orders did not assure good trading outcomes. Note in Fig. 4.4 that the buying teams all matched or improved on VWAP, while only three of the selling teams were able to sell for more than VWAP. Teams' trading was completed with a varied combination of limit orders and market orders, with the best team, for instance, selling its 1,500 with 77 % limit orders and 23 % market orders. No significant correlation existed between order choice and performance.

As Fig. 4.5 shows, among the five proprietary trading teams, the largest loss was generated by the team (LBS21) that also had the largest risk as measured by the average absolute value of its inventory position during the trading day. The greatest profit came from LBS15, a team that only took a moderate level of risk. Again, the simulation reveals substantial variation and behavioral differences across traders.
Fig. 4.5 Performance of the five day-trading (proprietary) teams, charted by average position and P&L. Collectively the proprietary traders lost money, and only two teams returned to a zero position.
4.4.2
Trading Instructions: Simulations B-1 and B-2
A quote-driven market structure was used in two other simulations with the same graduate students. The underlying order flow and P* model simulated by the computer is the same as in the order-driven market. The students were put into 22 teams, and two markets (i.e., two separate networks) with 11 teams each were run. In each of the networks, seven teams were market makers, and the other teams were given large orders to fill over the course of the day.

In B-1, the market opened at 20.00, and the day's high and low prices were 21.00 and 18.70, respectively. The last trade was 19.50, and the VWAP was 20.00, with 706 trades generating a volume of 39,875 units. Participants were told that a unit represented 1,000 shares and that those with buy or sell instructions were handling institutional-sized orders that could affect prices. In their trading, four of the market makers generated a positive profit; as a group, they earned 1.4 pence per share, or seven basis points. As Fig. 4.6 shows, only three of the dealers (LBS1, LBS3, LBS2) ended the day with a flat (either zero or no more than 35 units) closing position. The chart shows the seven dealers' average inventory position, closing profit or loss, and their closing inventory position. LBS4, for instance, had an average position over the day of +797, a loss of 674, and a closing position of +1,431.
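The dealer metrics reported here (closing position, closing P&L, and average position) can be computed from a trade blotter as in the sketch below. This is our own illustration: the blotter is hypothetical, the average position is a simple per-trade average rather than a time-weighted one, and open inventory is marked to the last trade price.

def dealer_metrics(blotter, last_price):
    # blotter: list of (side, price, quantity); side is "buy" or "sell".
    position, cash, positions_seen = 0, 0.0, []
    for side, price, qty in blotter:
        signed = qty if side == "buy" else -qty
        position += signed
        cash -= signed * price            # pay cash to buy, receive cash when selling
        positions_seen.append(position)
    avg_abs_position = sum(abs(p) for p in positions_seen) / len(positions_seen)
    pnl = cash + position * last_price    # mark open inventory to market
    return position, avg_abs_position, pnl

blotter = [("buy", 20.00, 300), ("sell", 20.10, 200), ("buy", 19.90, 100)]   # made-up
print(dealer_metrics(blotter, last_price=19.50))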
Fig. 4.6 Performance of seven market maker trading teams, showing each dealer's average position, P&L, and closing position. The teams that controlled risk by keeping the average absolute value of their positions below 350 were able to make profits. Teams LBS4 and LBS2 incurred large losses and held large risk positions.
In the B-2 market, the simulation started at 20.00, and the day’s high and low prices were 22.30 and 19.00, respectively. The last trade was 22.20, and the VWAP was 20.22, with 579 trades generating volume of 36,161 units. Reflecting higher volatility, only two market makers generated a positive profit and, as a group, they lost 2.7 pence per share, or 13 basis points, in their trading. Figure 4.7 shows that three of the six dealers (LBS31, LBS25, LBS3) had a flat position at the end of the day. The other dealers had short positions reflecting the P* (equilibrium price) increases and buying pressure that drove the prices up during the simulation.
4.4.3
Trading Instructions: Simulation C
An order-driven market structure was used to study trading when participants are given either an upward sloping supply curve or a downward sloping demand curve. Rather than being instructed to buy or sell a fixed quantity of shares, buyers and sellers are asked to maximize their surplus or profit, which is the number of shares bought or sold times the amount by which their reservation price differs from the average price at which they traded. The reservation prices for buyers are decreasing in the quantity they hold in their position (a demand curve). The reservation prices for sellers are increasing in the quantity they have sold (a supply curve). Our interest is in whether a group of trading participants will trade the optimal quantity in the market or whether, in the absence of explicit order quantities, they overtrade or trade too little.
Fig. 4.7 Performance of six market maker trading teams, showing each dealer's average position, P&L, and closing position. Only two teams, LBS #31 and #26, were able to make profits, and the teams that allowed their average position to exceed 300 had the largest losses.
In the example below, we provide the buyers’ reservation price function, which starts at $26.00 but decreases by one for each 1,000 shares bought. If a participant buys 6,000 shares over the course of the simulation, for instance, the reservation price is $20.00. If they paid on average $18, then the surplus generated is 12,000. Reservation BUY curve:
PR = $26 − (X units/1,000)

# shares bought = _________
PR = _________ = 26 − (# shares bought/1,000)
Average buy price = _________
Surplus = (# shares) × (PR − Avg buy price) = _________
The sellers' reservation price function is below. It starts at $12.00 but increases by one for each 1,000 shares sold. If a participant sells 2,000 shares, the reservation price is $14.00. If the sellers' average selling price is $18, for instance, the surplus is 8,000. Reservation SELL curve:
PR = $12 + (X units/1,000)

# shares sold = _________
PR = _________ = 12 + (# shares sold/1,000)
Average sell price = _________
Surplus = (# shares) × (Avg sell price − PR) = _________
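The two worked examples in the text can be verified directly from the reservation price functions; a brief check (ours):

# Surplus check for the worksheet examples above.
def buyer_surplus(shares_bought, avg_buy_price):
    p_r = 26.0 - shares_bought / 1000.0          # buyer reservation price
    return shares_bought * (p_r - avg_buy_price)

def seller_surplus(shares_sold, avg_sell_price):
    p_r = 12.0 + shares_sold / 1000.0            # seller reservation price
    return shares_sold * (avg_sell_price - p_r)

print(buyer_surplus(6000, 18.0))    # 12000.0
print(seller_surplus(2000, 18.0))   # 8000.0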
Fig. 4.8 Trading experiment with supply and demand functions and surplus calculations based on a participant trading at the market average price over the day of $19.11. The optimal trading strategy was to end with a position of 3,500 long (buyer) or short (seller)
We provided the surplus functions above to a group of 28 participants. The group was split into teams of two, with seven buying teams and seven selling teams. Although the teams were not aware of the other teams’ curves, the equilibrium price was $19, and the maximum surplus was achieved by participants that built a position of 3,500 and bought at the lowest possible prices or sold at the highest prices (see Fig. 4.8). In our experiment, average per share trade price was $19.11. In the experiment, participants in the 14 teams built positions ranging from 1,100 to 7,500 (see Fig. 4.9). There was a substantial dispersion of performance, yet the market’s trade prices converged to within $0.11 of the $19.00 equilibrium price. The average ending position was 4,001, so participants were within 15 % of the optimal position implicit in the reservation price functions. The teams with the greatest surpluses were able to come close to the optimal position of 3,500 and to buy below the average price and sell at greater than the average price. As these three simulation exercises show, a number of insights can be gained into the effect of market structure on participant behavior and market outcomes (the quality of price and quantity discovery) by running “live market” simulations. First, participants’ performance varied widely despite being in the same environment with the same trading instructions. Second, markets are complicated and not
Fig. 4.9 Even when provided with reservation price functions, the live participants in the markets traded near the equilibrium and showed little inclination to “overtrade.” The average position was only 501 units away from the optimal closing position of 3,500
perfectly liquid; filling large orders, while competing to outperform price benchmarks, is challenging. Participants often did not complete their instructions even though completion was part of their assessment. Third, trading can add (or subtract) value for investment managers. Even in fairly simple simulation games, performance varied widely across participant teams. Finally, risk was not well controlled by the (admittedly inexperienced) trading participants. Although given specific position limits as guidelines, many participants had large average positions and suffered substantial losses from adverse price movements.

Market architects today combine different trading system designs (Zhang et al. 2011). Beyond studies of trading decision making, the live simulations provide a method for comparing alternative market structures. For instance, implicit trading costs incurred in handling a large order over a trading day can be contrasted in markets with and without dealers, and with and without a dark liquidity pool. Live simulations, by extending what can be learned with analytical modeling and empirical data analytics, provide a laboratory for examining a broad set of questions about trading behavior and market structures.
4.5
Conclusion
Simulation is a powerful research tool that can be used in conjunction with theoretical modeling and empirical research. While simulation can enable a researcher to delve into issues that are too detailed and specific for theory to handle, a simulation structure must itself be based on a theoretical model. We have illustrated this with reference to TraderEx, the simulation that we have developed and used to analyze details of market structure and trading in an equity market.

One further research methodology incorporates simulation: experimental economics. This approach places the simulated market in a laboratory where multiple players are networked together. With a well-defined performance measure and carefully crafted alternative market structure and/or information environments, a simulation-based experimental application can yield valuable insights into the determinants of market outcomes when market structure affects individual behavior and when behavioral economics, along with theoretically rational decision making, characterizes participant actions.
Appendix

Modeling Securities Trading

The simulation methodology was chosen for its ability to accommodate critical institutional features of the market mechanism and off-exchange dealers' operations. While higher-level abstractions and simplifications could yield an analytically tractable model, such a model would not be consistent with the goals of this chapter. The complexities included here are the specialist's role, the use of both large and small order sizes, and the use of both limit and market orders.
Earlier market structure research provides useful insights, but missing institutional details prevent it from determining the effect of third markets. Garman's (1976) model of an auction market identifies a stochastic order arrival process and a market structure consistent with negative serial autocorrelations, or the tendency of price changes to reverse themselves. Garman's model has no specialist role, and an analytic solution is obtained only for the case with one possible price. He notes "the practical difficulties of finding analytic solutions in the general case are considerable, and numerical techniques such as Monte Carlo methods suggest themselves." Mendelson (1987) derives analytic results that compare the effects of market consolidation and fragmentation on market performance. The work provides numerous insights into market design trade-offs. As a simplification, however, all orders from traders are for one unit of the security.

Simulation has yielded useful results in other microstructure research. Garbade (1978) investigated the implications of interdealer brokerage (IDB) operations in a competing dealer market with a simulation model and concluded that there are benefits to investors from IDBs through reduced dispersion of quotes and transaction prices closer to the best available in the market. Cohen et al. (1985) used simulation to analyze market quality under different sets of trading priority rules. They showed that systems that consolidate orders and that maintain trading priorities by an order's time of arrival in the market increase the quality of the market. Hakansson et al. (1985) studied the market effects of alternative price-setting and own-inventory trading policies for an NYSE-style specialist dealer using simulation. They found that pricing rules "independent of the specialist's inventories break down."
Further Details on Simulation Model and Environment

Our simulation model has been used in experiments with subject-traders to test hypotheses concerning market structures. The simulation model is dynamic, with informational changes occurring in a way that creates the possibility of realizing trading profits from close attention to price changes and careful order handling. In our human-machine interactive environment, the computer generates orders from an unlimited number of "machine-resident" traders and investors. This enables us easily to satisfy the conditions for an active, competitive market.

To be useful and valid, simulation models must reflect real-world dynamics without being burdened by unnecessary real-world detail. A simulation model also requires a strong theoretical foundation. The advantage of simulation over theoretical modeling is that "theorizing" requires abstracting away from some of the very details of market structure that exchange officials and regulators wish to study. Consequently, theoretical modeling can give only limited insight into the effects of market design changes on the behavior of market participants. The advantage of simulation vis-à-vis empirical testing of new market structures is that the simulated experiments can be run at much lower cost and across a broader range of alternatives.

The objective of our computer simulation is to provide a backdrop for assessing the decisions of live participants. Trading in the model is in a single security and is
the result of "machine-generated" order flow interacting with the order placement decisions of the live participants. Assumptions are made about the arrival process of investors' orders and about changes to a price, p*, which is an "equilibrium value" at which the expected arrival rate of buy orders equals the expected arrival rate of sell orders. P* follows a random walk jump process. In other words, the equilibrium value jumps randomly from one level to another, at interarrival times based on sampling an exponential distribution. After a shift in p*, the orders of the informed traders pull the quotes and transaction prices up or down, causing them to trend towards the new p* level. Occasionally, market prices can also trend away from p* because of the orders of momentum traders or the chance arrival of a string of buy or sell liquidity orders. However, movements away from p* are unsustainable; eventually the imbalanced order flow causes a price reversal and market prices gravitate back towards p*.

Little work in experimental economics has used computers to create background order flow into which participants individually enter orders (Smith and Williams 1992; Friedman 1993). Our test environment does. We use discrete event computer simulation to do the following:
• Generate a background public order flow that can (1) be placed on a public limit order book for later execution, or (2) execute as market orders immediately against the public limit order book.
• Give the live participants the opportunity to trade a quantity of stock. Depending on the market structure, the live participants can (i) place their orders in a public limit order book and wait for a trade to occur or (ii) execute them against the public limit order book immediately. Variants of the simulation model include market makers, or call auctions, or dark liquidity pools to facilitate transactions.
• Maintain the screen which displays (i) orders on the public limit order book, (ii) a time-stamped record of all transaction sizes and prices for each trading session, and (iii) the users' position, risk, and profit performance data.
• Capture information concerning (i) the live participants' decisions and (ii) market quality measures such as bid-ask spreads.

In summary, with this realistic and theory-based model and the ability in the simulation to control the level of transparency provided, we have a rigorous environment to assess trading decisions and the effects of different market rules.
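A bare-bones discrete-event loop of the kind listed above might be organized as follows. This is a skeleton under our own naming conventions, not the actual TraderEx implementation; the order-generation and matching steps are left as placeholders.

import heapq, random

rng = random.Random(3)

def run_session(close_time=390.0, mean_interarrival=1.0):
    # Event queue keyed by simulated time; here only machine "order" events are scheduled.
    events = [(rng.expovariate(1.0 / mean_interarrival), "order")]
    trades, spreads = [], []
    while events:
        t, kind = heapq.heappop(events)
        if t > close_time:
            break
        if kind == "order":
            # 1. Draw an informed / liquidity / momentum order (see Sects. 4.3.1-4.3.3).
            # 2. Match it against the limit order book or post it as a limit order.
            # 3. Record any trade and the current bid-ask spread.
            heapq.heappush(events, (t + rng.expovariate(1.0 / mean_interarrival), "order"))
    return trades, spreads

trades, spreads = run_session()
print(len(trades), len(spreads))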
Components of the Market Model

In the simulation model, assumptions are made about the arrival process of investors' orders, the elasticity of supply and demand, order placement strategies, price volatility, and the proportions of market and limit orders.

Order Arrival. Orders arrive according to a price-dependent Poisson function. Using time-stamped transactions data on six stocks traded on the London Stock Exchange, a Kolmogorov-Smirnov goodness-of-fit test fails to reject the null hypothesis of exponential interarrival in 17 out of 22 sample periods at the 0.10 level of significance. We would expect to reject in just over two cases due to random realizations. The fit is not perfect in part because transactions tend to
Fig. 4.10 Buy and sell order arrival rates. At market prices greater than p*, sell orders will arrive with greater intensity than buy orders. At market prices less than p*, buy orders will arrive with greater intensity than sell orders. (Axes: price, $31.00–$35.00, versus arrival rate in orders per hour; curves shown: D(p(t)), S(p(t)), and the stochastic equilibrium p*(t).)
cluster somewhat more than predicted by the theoretical model (Weber 1991). Given the shortcoming of using empirical distributions in simulations (Law and Kelton 1989), the Poisson assumption appears sufficiently justified for capturing the typical behavior of the order arrival process. The Poisson interarrival time, T, is exponentially distributed with b_t equal to the mean interarrival time at time t. The mean interarrival time is set at the beginning of each experiment and assumed to hold constant; a realization at time t is thus a draw T_t from an exponential distribution with mean b_t. The supply and demand structure follows closely those previously developed in the market microstructure literature (Garbade and Silber 1979), in which buy and sell arrival rates are step functions of the difference between the quoted price and the equilibrium value of the security (Fig. 4.10). Garman (1976) termed the intersection of the supply and demand functions a "stochastic equilibrium."

Demand/buy orders, D(p):
λ_B(p_i; p*) = a_1 for p_i > p_t*
λ_B(p_i; p*) = a_1 + a_2 (p_t* − p_i) for p_i = p_t* − a, with a = tick size, 2(tick size), 3(tick size), . . ., d
λ_B(p_i; p*) = a_1 + a_2 d for p_t* − p_i > d

Supply/sell orders, S(p):
λ_S(p_j; p*) = a_1 for p_j < p_t*
λ_S(p_j; p*) = a_1 + a_2 (p_j − p_t*) for p_j = p_t* + a, with a = tick size, 2(tick size), 3(tick size), . . ., d
λ_S(p_j; p*) = a_1 + a_2 d for p_j − p_t* > d
The constant a_1 reflects the proportion of arrivals that are market orders. The coefficient a_2 determines the arrival rate of limit orders with reservation prices. Limit order traders are sensitive to discrepancies between available prices and the equilibrium value. The parameter, d, is the range around the equilibrium value from which limit prices for limit orders are generated. At a price p_i lower than the equilibrium value at the time, p_t*, the arrival rate of buy orders will exceed the rate of sell order arrivals. The resulting market buy orders and limit order bids will exceed the quantity of sell orders at below-equilibrium prices. The arrival rate discrepancy will cause prices to rise since, in expectation, orders will trade against the lowest offer quotes and add new, higher-priced bid quotes.

Order Size. Orders are between one and 250 units of the security. This reflects a convenient normalization that is consistent with the empirically observable range of order sizes. A unit may represent, for instance, three round lots, or 300 shares. Beyond 250 units, we assume the trade would be handled as a block trade and negotiated outside of the standard market design, or arrive in the market in smaller broken-up pieces. Large orders can have "market impact," moving prices up for large buyers and forcing them down for large sellers. The functioning of the market for large orders is consistent with observed trade discounts for large sell orders and premiums for large buy orders.

Order Placement Strategies. The machine-generated order flow consists of liquidity, informed, and momentum trading orders. The liquidity orders are either limited price orders or market (immediately executable) orders. Market orders execute on arrival but are "priced" to reflect a maximum acceptable premium or discount to the current bid or offer. If the market order is large enough, its price impact (the need to hit successive lower priced bids or lift higher priced offers) will exceed the acceptable discount or premium, and the remaining order quantity will become a limit order after partially executing against the limit order book.

Information Generation. Idiosyncratic information events occur that change the share value, p*, at which buying and selling order arrival rates are balanced. Information events occur according to a Poisson arrival process. When an information innovation occurs, the price will have a random walk jump.

Price Random Walk. Idiosyncratic information events occur that change the share value, p*, at which the arrival rate of buy orders equals the arrival rate of sell orders. The time between information changes is assumed to be exponentially distributed with a mean of 12 h. Empirical validation is difficult, because information affects share values in unobservable ways. When there is a change in information that shifts the "balance price," p* evolves according to a random walk without return drift. To assure nonnegative prices, the natural log of price is used, yielding a log-normal distribution for the equilibrium price. The white noise term, e_t, is normally distributed with variance linear in the time since the last observation. This is consistent with the price diffusion models used in the financial economics literature (Cox and Rubinstein 1985):
ln p_t* = ln p_{t−T}* + e_t, where e_t ~ N(0, Tσ^2), so that p_t* ~ LN(ln p_{t−T}*, Tσ^2)

The natural logarithm of the current price is an unbiased estimator of the natural logarithm of any subsequent price:

E(ln p_{t+T} | p_t) = ln p_t
Empirical validation for the random walk model comes from numerous tests, whose results "are remarkably consistent in their general finding of randomness . . . serial correlations are found to be small" (Malkiel 1987).

Information Effects. If the bid and offer quotes straddle p*, there is no informed order flow and buying and selling order arrival rates will be equal. When p* is outside of the bid-offer range, additional one-sided market orders will be generated according to a Poisson process.
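Pulling the appendix together, the arrival-rate step functions and the log-normal jump in p* can be written compactly as below. The notation mirrors the equations above; the parameter values (a_1, a_2, d, the variance, and the elapsed time) are illustrative only.

import math, random

rng = random.Random(4)

def buy_arrival_rate(p, p_star, a1=4.0, a2=2.0, d=1.0):
    # lambda_B: a1 at or above p*, rising linearly up to a1 + a2*d below it.
    if p >= p_star:
        return a1
    return a1 + a2 * min(p_star - p, d)

def sell_arrival_rate(p, p_star, a1=4.0, a2=2.0, d=1.0):
    # lambda_S: a1 at or below p*, rising linearly up to a1 + a2*d above it.
    if p <= p_star:
        return a1
    return a1 + a2 * min(p - p_star, d)

def jump_p_star(p_star, sigma2=0.0004, elapsed_hours=0.5):
    # ln p*_t = ln p*_{t-T} + e_t, with e_t ~ N(0, T * sigma^2).
    return p_star * math.exp(rng.gauss(0.0, math.sqrt(elapsed_hours * sigma2)))

print(buy_arrival_rate(32.0, 33.0), sell_arrival_rate(34.0, 33.0), round(jump_p_star(33.0), 3))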
References

Clemons, E., & Weber, B. (1997). Information technology and screen-based securities trading: Pricing the stock and pricing the trade. Management Science, 43(12), 1693–1708.
Cohen, K., Conroy, R., & Maier, S. (1985). Order flow and the quality of the market. In Y. Amihud, T. Ho, & R. Schwartz (Eds.), Market making and the changing structure of the securities industry (pp. 93–109). Lanham, MD: Lexington Books.
Cox, J., & Rubinstein, M. (1985). Options markets. Englewood Cliffs: Prentice-Hall.
Francioni, R., Hazarika, S., Schwartz, R., & Reck, M. (2010). Security market microstructure: The analysis of a non-frictionless market. In C. F. Lee & A. C. Lee (Eds.), Handbook of quantitative finance and risk management (pp. 333–353). New York: Springer.
Friedman, D. (1993). How trading institutions affect financial market performance: Some laboratory evidence. Economic Inquiry, 31, 410–435.
Garbade, K. (1978). The effect of interdealer brokerage on the transactional characteristics of dealer markets. Journal of Business, 51(3), 477–498.
Garbade, K., & Silber, W. (1979). Structural organization of secondary markets: Clearing frequency, dealer activity and liquidity risk. The Journal of Finance, 34, 577–593.
Garman, M. (1976). Market microstructure. Journal of Financial Economics, 3, 257–275.
Greene, W. H. (2011). Econometric analysis (7th ed.). Upper Saddle River, NJ: Prentice Hall.
Hakansson, N., Beja, A., & Kale, J. (1985). On the feasibility of automated market making by a programmed specialist. Journal of Finance, 40, 1–20.
Ho, T., Schwartz, R., & Whitcomb, D. (1985). The trading decision and market clearing under transaction price uncertainty. Journal of Finance, 40, 21–42.
Kennedy, P. (2008). A guide to econometrics (6th ed.). Hoboken, NJ: Wiley-Blackwell.
Law, A., & Kelton, W. (1989). Simulation modeling and analysis. New York, NY: McGraw-Hill.
Malkiel, B. (1987). Efficient market hypothesis. In B. Malkiel (Ed.), The New Palgrave: A dictionary of economics (pp. 127–134). London: Macmillan Press.
Mendelson, H. (1987). Consolidation, fragmentation, and market performance. Journal of Financial and Quantitative Analysis, 22, 189–207.
Parker, C., & Weber, B. (2012). How IT can disrupt markets: A simulation analysis (Working Paper). London: London Business School.
Schwartz, R., & Weber, B. (1997). Next-generation securities market systems: An experimental investigation of quote-driven and order-driven trading. Journal of Management Information Systems, 14, 57–79.
Smith, V., & Williams, A. (1992, December). Experimental market economics. Scientific American, pp. 116–121.
Weber, B. (1991). Information technology and securities markets: Feasibility and desirability of alternative electronic trading systems. Unpublished dissertation, University of Pennsylvania.
Zhang, S. S., Storkenmaier, A., Wagener, M., & Weinhardt, C. (2011). The quality of electronic markets. In Forty-Fourth Hawaii International Conference on System Sciences, Kauai, Hawaii, pp. 1–10.
Zhang, X. F. (2010). High frequency trading, stock volatility, and price discovery (Working Paper). New Haven, CT: Yale University.
5
Motivations for Issuing Putable Debt: An Empirical Analysis
Ivan E. Brick, Oded Palmon, and Dilip K. Patro
Contents
5.1 Introduction  150
5.2 Literature Review  152
5.3 Hypotheses  154
5.4 Data and Methodology  156
5.5 Empirical Results  162
5.6 Concluding Remarks  174
Appendix 1: Sample of Firms Issuing Putable Bonds and the Put Bond Characteristics  175
Appendix 2: Sample of Firms Issuing Poison Put Bonds and the Put Bond Characteristics  179
Appendix 3: Estimating the Standard Abnormal Returns and the White t-Statistic  181
Appendix 4: Generalized Method of Moments (GMM)  182
References  184
We would like to thank Sanjay Deshmukh and conference participants at the Seattle FMA meetings, October 2000, for helpful comments on an earlier version of the paper circulated under a different title. The views expressed in this paper are strictly those of the authors and not of the OCC or the Comptroller of the Currency.

I.E. Brick (*)
Department of Finance and Economics, Rutgers, The State University of New Jersey, Newark/New Brunswick, NJ, USA
e-mail: [email protected]; [email protected]

O. Palmon
Department of Finance and Economics, Rutgers Business School Newark and New Brunswick, Piscataway, NJ, USA
e-mail: [email protected]

D.K. Patro
RAD, Office of the Comptroller of the Currency, Washington, DC, USA
e-mail: [email protected]

Abstract

This paper examines the motivations for issuing putable bonds in which the embedded put option is not contingent upon a company-related event. We find
that the market favorably views the issue announcement of these bonds, which we refer to as bonds with European put options or European putable bonds. This response is in contrast to the response documented by the literature to other bond issues (straight, convertible, and most studies examining poison puts) and to the response documented in the current paper to the issue announcements of poison put bonds. Our results suggest that the market views issuing European putable bonds as helping mitigate security mispricing. Our study is an application of important statistical methods in corporate finance, namely, event studies and the use of the generalized method of moments for cross-sectional regressions.

Keywords
Agency costs • Asymmetric information • Corporate finance • Capital structure • Event study methodology • European put • General method of moments • Management myopia • Management entrenchment • Poison put
5.1
Introduction
This paper examines the motivations for issuing putable bonds in which the embedded option is not contingent upon company-related events. The option entitles bondholders to sell the bond back to the firm on the exercise date (usually 3–10 years after the bond is issued) at a predetermined price (usually at par). We refer to these bonds as bonds with European put options or European putable bonds. Unlike the poison put bonds (i.e., bonds with event risk covenants) that have been studied by the literature,1 the exercise of the option in a putable bond is not contingent upon a company-related event. This distinction is important because a poison put protects the bondholder from a specific event (e.g., a takeover) and may be designed to help prevent that event. In contrast, putable debt provides protection to the bondholder against any deterioration in the value of her claim. This distinction is important also because the two types of embedded put options may serve different purposes. Crabbe and Nikoulis (1997) provide a good overview of the putable bond market. Corporate managers determine which contingent claims the company issues to finance its activities. This choice includes the debt-equity mix and the specific design of the debt. The design of the debt includes its maturity, seniority, collateral, and the type of embedded options included in the bond contract. The theoretical corporate finance literature indicates that including convertible, callable, and/or putable bonds in the capital structure may help mitigate agency costs and reduce asymmetric information. Haugen and Senbet (1981) theoretically demonstrate that the optimal combination of embedded call and put options should eliminate the
1
See, for example, Crabbe (1991), Bae et al. (1994, 1997), Cook and Easterwood (1994), Roth and McDonald (1999), and Nash et al. (2003). Billett et al. (2007) find that 5 % of the corporate bonds in their sample have a non-poison putable option.
asset substitution problem. Bodie and Taggart (1978) and Barnea et al. (1980) show that bonds with call options can mitigate the underinvestment problem. Robbins and Schatzberg (1986) demonstrate that the increased interest cost of callable bonds can be used to convey the true value of the firm to the market.2

The literature has not empirically examined the motivation and equity valuation impact of issuing debt with European put options. We fill this gap in the literature in several ways. First, we examine the stock price reaction around the announcement dates of the two types of putable bond issues: bonds with European put options and bonds with poison puts. Further, we consider four alternative motivations for incorporating either a European or a poison put option in a bond contract and test the hypotheses that putable debt is issued to reduce security mispricing, agency costs of debt, management entrenchment, and myopia.

The first possible motivation for issuing putable debt is asymmetric information. Consider a company that is undervalued by the market. The market should also undervalue its risky straight-debt issue. However, if this firm were to issue putable bonds, the put option would be overvalued. (Recall that put options are negatively related to the value of the underlying asset.) Consequently, the market value of bonds with a put option is less sensitive to asymmetric information than straight bonds, minimizing market mispricing of these debt securities. Additionally, if the market overestimates the risk of the bond, it may overvalue the embedded put option at issuance, thereby increasing bond proceeds and benefiting the shareholders.3

The second possible motivation is mitigating agency costs of debt. For example, the existence of a put option mitigates the advantage to stockholders (and the loss to bondholders) from risk shifting, thereby reducing the incentive to shift risk. Risk shifting hurts putable bondholders less than holders of straight bonds because the value of the put option (which is held by bondholders) is an increasing function of the firm's risk. The third possible motivation is the relatively low coupon rate (a myopic view that ignores the potential liability to the firm due to the put option). The fourth possible motivation is that the put option may serve to entrench management. This fourth motivation is most relevant for firms issuing poison puts since these put options are exercisable contingent upon company-related events that are usually related to a change in ownership.

We find that the market reacts favorably to the issue announcement of European put bonds. We also examine the relationship between the abnormal returns around the put issue announcement date and firm characteristics that proxy for asymmetric information problems, potential agency costs (i.e., risk-shifting ability and the level of free cash flow), and management myopia. The empirical evidence is consistent with the view that the market considers issuing putable bonds as mitigating security mispricing caused by asymmetric information. The results do not support the idea
2
Other benefits posited by the literature include minimizing tax liabilities. See for example, Brick and Wallingford (1985), and Brick and Palmon (1993). 3 Brennan and Schwartz (1988) offer a similar argument to explain the benefits of issuing convertible bonds.
that putable bonds are issued to obtain lower coupon rates (i.e., management myopia). Our empirical findings are robust to a number of alternate specifications. In contrast to European putable bonds, and consistent with the management entrenchment hypothesis (see Cook and Easterwood (1994) and Roth and McDonald (1999)), we find that the market reacts unfavorably to the issue announcement of poison put bonds. However, consistent with Bae et al. (1994) who argue that poison put bonds are useful in mitigating agency cost problems, we find that the abnormal returns around the issue announcement of poison put bonds are positively related to the protection level of the event risk covenant. Thus, our results are consistent with the view that European put bonds are effective in mitigating security mispricing problems, but, in contrast, poison put bonds are related to management entrenchment or mitigating agency cost problems. The paper’s organization is as follows. In the next section we summarize the previous literature. Section 5.3 develops the empirical hypotheses. We describe the data and empirical methodology in Sect. 5.4. The empirical results are summarized in Sect. 5.5. We offer concluding remarks in Sect. 5.6.
5.2
Literature Review
The theoretical corporate finance literature concludes that the firm's financing decision may affect its equity value for several reasons. First, issuing debt increases firm value because it decreases its tax liabilities.4 Second, issuing debt may be, in part, a signaling mechanism that informs the market of private information. For example, Ross (1977) and Ravid and Sarig (1991) demonstrate that the manager of a firm with better prospects than the market perceives has an incentive to signal her firm's quality by issuing a greater amount of debt than would be issued in a symmetric information environment. Third, a prudent level of debt can reduce the agency costs arising from the conflict of interest between managers and shareholders as demonstrated by Jensen and Meckling (1976). For example, Jensen (1986) demonstrates that leverage can minimize the deleterious effect of free cash flow on the firm.5 However, leverage is also shown to generate other
4
See, for example, Modigliani and Miller (1963), Scott (1976) and Kim (1978). In contrast, Miller (1977) suggests that the tax benefit of interest is marginal. However, Mackie-Mason (1990) empirically demonstrates the significant impact of corporate taxes upon the observed finance choices of firms. 5 Hence, empirically, we would expect that as firms announce increased levels of debt, the stock price should increase. However, studies by Dann and Mikkelson (1984), Mikkelson and Partch (1986), Eckbo (1986), and Shyam-Sunder (1991) indicate that there is no systematic relationship between the announcement of firm’s debt financing and its stock price or that this relationship is weakly negative. One potential explanation for this result is that the market can predict future debt offerings as argued by Hansen and Chaplinsky (1993). Another potential explanation, as suggested by Miller and Rock (1985) and documented by Hansen and Crutchley (1990), is that raising external capital may indicate a cash shortfall.
agency problems because of the conflict of interest between stockholders and bondholders. These agency problems include underinvestment and risk shifting (or asset substitution).6 Other studies indicate that these agency problems can be mitigated or eliminated by optimal security design. For example, Haugen and Senbet (1981) demonstrate that the optimal combination of embedded call and put options completely eliminates the asset substitution problem. Bodie and Taggart (1978) and Barnea et al. (1980) show that the call options of bonds can mitigate the underinvestment problem. Similarly, Green (1984) demonstrates that including convertible bonds in the capital structure may also mitigate the underinvestment problem. The literature has also demonstrated that an appropriate debt security design may alleviate asymmetric information problems. For example, Robbins and Schatzberg (1986) demonstrate that the increased interest cost of callable bonds can be used to signal the value of the firm. Moreover, as similarly stated for the case of convertible bonds by Brennan and Schwartz (1988), the inclusion of a put option can minimize the mispricing of debt securities if the market overestimates the risk of default. Chatfield and Moyer (1986) find that putable bonds may be issued by financial institutions for asset-liability management in a period of volatile interest rates. Tewari and Ramanlal (2010) find that callable-putable bonds provide protection to bondholders and improved returns to stockholders.

The literature examines the equity valuation impact of a special type of putable bond known as the poison put bond. In poison put bonds, the put option is exercisable contingent upon a company-related event such as a leveraged restructuring, takeover, or downgrading the debt credit rating to speculative grade. David (2001) shows that puts may have higher strategic value than intrinsic value. Crabbe (1991) finds that such event-related covenant put bonds reduce the cost of borrowing to the firm. Bae et al. (1994) conclude that stockholders benefit from the inclusion of event-related (risk) covenants. Furthermore, Bae et al. (1997) empirically document that the likelihood of a firm to include event risk covenants is positively related to the firm's agency costs of debt. In contrast, Cook and Easterwood (1994) and Roth and McDonald (1999) find that the inclusion of poison put bonds benefits both management and bondholders at the expense of stockholders.

In contrast to these studies of poison put options, our study examines debt issues in which the embedded put option is equivalent to a European put with a fixed exercise date that is usually 3–10 years after the issue date. That is, bondholders may exercise the put option only at the exercise date, and their ability to do so is not contingent upon any particular company-related event. Thus, we believe that issuing bonds with an embedded put option that is not contingent upon a company-related event (such as a change in ownership) is not likely to be motivated by

6 Myers (1977) demonstrates that shareholders may avoid some profitable net present value projects because the benefits accrue to the bondholders. Jensen and Meckling (1976) demonstrate that leverage increases the incentives for managers, acting as agents of the stockholders, to increase the risk of the firm.
154
I.E. Brick et al.
management entrenchment. Consequently, we consider the security mispricing motivation and two other possible motivations: mitigating agency costs and the myopic behavior on the part of the management.
5.3 Hypotheses
In this section, we describe four alternative motivations for incorporating a put option in a bond contract and outline their empirical implications. The first motivation is a reduction in the level of security mispricing due to asymmetric information. Consider a company that is undervalued by the market.7 The market should also undervalue its risky straight-debt issue. However, if this firm were to issue putable bonds, the put option would be overvalued. Consequently, the market value of bonds with a put option is less sensitive to asymmetric information than the market value of straight bonds, minimizing market mispricing of these debt securities. Additionally, the overvaluation of the implicit put option increases the debt proceeds at issuance, thereby benefiting the shareholders. This possible motivation has the following empirical implications. First, issuing bonds with a put option should be associated with an increase in equity value. Second, this increase in firm value should be negatively related to the accuracy with which the firm's value has been estimated prior to the bond issue. Third, because the value of the put option is directly related to the magnitude of firm undervaluation, the increase in equity value should be directly related to the value of the put option.
The second possible motivation is mitigating the bondholder-stockholder agency costs.8 The value of a put option is an increasing function of the firm's risk. Thus, its existence mitigates the gains to stockholders from, and hence their incentives for, risk shifting. This possible motivation has the following empirical implications. First, the benefit to a firm from incorporating a put option in the bond contract should be directly related to the firm's ability to shift risk in a way that increases the value of stockholders' claims at the expense of bondholders. Second, the inclusion of a put option should help restrain management from taking on negative net present value projects in the presence of Jensen's (1986) free cash flow problem.9 In the absence of a put option, undertaking negative net present value projects should reduce security prices for both stockholders and bondholders.
7 We assume that managers of companies that are overvalued have no incentive to resolve security mispricing. Consequently, managers of undervalued firms could send a credible signal to the market of the firm's undervaluation with the inclusion of a put feature. Overvalued firms should not be able to mimic, since the put option represents a potential liability that is greater to these firms than to the undervalued firms.
8 The bondholder-stockholder agency conflict has been found to be important in security design by Bodie and Taggart (1978), Barnea et al. (1980), Haugen and Senbet (1981), Jung et al. (1996), and Lewis et al. (1998).
9 Equivalently, the inclusion of a put option should restrain management from consuming a suboptimal amount of perquisites or nonpecuniary benefits.
In contrast, giving bondholders the option to put the bond back to the firm at face value shifts more of the negative price impact of undertaking negative net present value projects to the stockholders. This negative stock price impact may reduce management compensation that is tied to equity performance and/or induce tender offers that will ultimately help replace the current management. These increased costs to stockholders and management should induce management to refrain from undertaking negative net present value projects. Thus, we hypothesize that the benefit to the stockholders from incorporating a put option in the bond contract should be directly related to the level of free cash flow. Third, the gain in firm value should be related to the magnitude of the agency cost problem that the firm faces. The valuation of the put option by the market (assuming market efficiency and symmetric information) should be positively related to the magnitude of the agency cost problem faced by the company. Thus, the benefit to the firm should also be directly related to the aggregate value of the implied put option of the issue (scaled by firm size). Fourth, for our sample of poison put bonds, the benefit to shareholders should increase with the strength of the event risk covenant, since the greater the strength of the event risk covenant, the less likely management will engage in value-decreasing activities. Bae et al. (1994) tested a similar set of hypotheses for a sample of poison put bonds.10
The third possible motivation is the low coupon rate (compared to the coupon rates of straight or callable debt issues). This motivation reflects management myopia, as it ignores the potential liability to the firm due to the put option.11 That is, myopic management may not fully comprehend the increased risk to the firm's viability that is posed by the put option. In particular, bondholders would have the right to force the firm to (prematurely) retire its debt at a time that is most inconvenient to the firm, which in turn can precipitate a financial crisis. Further, if the cost of financial distress is significant, given rational markets and myopic management, issuing putable debt may negatively impact the value of equity. This possible motivation has the following empirical implications. First, because management myopia implies that management pursues suboptimal policies, the issuance of putable debt should be associated with a decline in equity value.12 Second, the decline should be more severe the larger the aggregate value of the implied put option. Third, because expected costs of financial distress are negatively related to the financial stability of the company, the decline should be more severe the lower the credit rating of the bond.
The fourth possible motivation is that the use of put options enhances management entrenchment. This hypothesis is relevant for firms issuing poison, but not European, put bonds. In particular, many bonds with event risk covenants (i.e., poison puts) place restrictions on the merger and acquisition activity of the issuing firms, thereby strengthening the hands of management to resist hostile takeover bids. This possible motivation has the following empirical implications. First, issuing bonds with put options should be followed by a decrease in equity value. Second, in contrast to the mitigating agency costs hypothesis, the abnormal returns of equity around the issue announcement date should be inversely related to the level of event-related covenant protection offered to the bondholder.
10 Although bonds become due following formal default, hence all bonds in a sense become putable, bondholders usually recover substantially less than face value following formal default because equity value has already vanished. In contrast, a formal inclusion of a put option may allow bondholders to recover the face value at the expense of shareholders when financial distress is not imminent but credit deterioration has occurred.
11 Other studies that have examined managerial myopia include Stein (1988), Meulbroek et al. (1990), Spiegel and Wilkie (1996), and Wahal and McConnell (2000).
12 Haugen and Senbet (1978, 1988) theoretically demonstrate that the organizational costs of bankruptcy are economically insignificant. However, if putable bonds increase the potential for technical insolvency, then they will increase potential agency costs that are not necessarily insignificant.
5.4 Data and Methodology
Our sample of bonds with put options is taken from the Warga Fixed Income Database. The database includes pricing information on bonds included in the Lehman Brothers Bond Indices from January 1973. Using this database, we select all bonds with a fixed-exercise-date put option (i.e., not event contingent) issued between January 1973 and December 1996, excluding issues by public utilities, multilateral agencies (such as the World Bank), and sovereign issuers. In our sample we keep only those issues for which we find announcement dates in the Dow Jones News Retrieval Service. This resulted in a sample of 158 bonds, of which the earliest bond was issued in 1979. Our final sample of bonds is selected according to the following criteria: (a) The issuing company's stock returns are available on the CRSP tapes. For 20 companies CRSP data are not available, resulting in 138 issues. To reduce confounding effects, all repeat issues by the same company within a year of a sample issue are eliminated. We also eliminate observations for which we find other contemporaneous corporate events. This further reduces our sample to 104 issues. (b) Furthermore, we eliminate from the sample companies that do not have sufficient accounting data, credit rating, issue size, and industry code in either the Compustat tapes, Moody's Industrial Manuals, or the Warga Fixed Income Database. This reduces our sample size by 13 firms. (c) We eliminate one more company for which we found no I/B/E/S data, thus yielding a final sample of 90 firms. The list of these firms and the characteristics of these bonds are reported in Appendix 1. The maturity of these put bonds ranges from 5 to 100 years, with an average initial maturity of 24 years. The put exercise dates range from 2 to 30 years, with an average put expiration period of 7 years.13
13 Four bonds have multiple put exercise dates.
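The screening in criteria (a)–(c) above amounts to a sequence of simple filters on an issue-level table. The Python sketch below is purely illustrative and is not the authors' code; the flag columns (has_crsp, confounding_event, has_compustat, has_ibes, and so on) are hypothetical names for the availability checks described in the text.

import pandas as pd

# Hypothetical issue-level table: one row per fixed-exercise-date putable bond drawn
# from the Warga database with an announcement date found in the Dow Jones News
# Retrieval Service (158 issues in the paper).
issues = pd.read_csv("putable_issues.csv", parse_dates=["announcement_date"])

# (a) CRSP stock returns available; drop repeat issues by the same firm within a year
#     of a prior sample issue and issues with other contemporaneous corporate events.
sample = issues[issues["has_crsp"]]
sample = sample.sort_values(["issuer_id", "announcement_date"])
gap = sample.groupby("issuer_id")["announcement_date"].diff().dt.days
sample = sample[gap.isna() | (gap > 365)]
sample = sample[~sample["confounding_event"]]

# (b) Accounting data, credit rating, issue size, and industry code available.
sample = sample[sample[["has_compustat", "has_rating", "has_issue_size", "has_sic"]].all(axis=1)]

# (c) I/B/E/S analyst data available.
sample = sample[sample["has_ibes"]]

print(len(sample))   # 90 issuers in the paper's final European put sample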
For the sake of comparison, we also construct a sample of poison put bonds.14 The issue announcements of poison put bonds are taken from Dow Jones Interactive. We also searched LexisNexis but did not find any new announcements.15 This search resulted in 67 observations. Our final sample of bonds with a poison put feature is further refined using criteria (a), (b), and (c) described above. Criterion (a) reduces the sample size to 57 observations, and applying criterion (b) further reduces it to 47 observations. The list of these firms and the characteristics of these bonds are reported in Appendix 2.
For these issues, we collect CRSP daily returns for our event study.16 We calculate the abnormal returns (AR) on equity for each firm around the date of the issue announcement, assuming that the stochastic process of returns is generated by the market model. We define the announcement date to be the earlier of the date on which the bond is issued and the date on which the issue information first appears in the news wires or is published in the Wall Street Journal, as reported by Dow Jones Interactive. Our final sample includes only bonds whose issue announcement explicitly mentions the inclusion of a put option. We estimate the market model coefficients using the time period that begins 200 trading days before and ends 31 trading days before the event, employing the CRSP value-weighted market index as the benchmark portfolio. We use these coefficients to estimate abnormal returns for days −30 to +30. We calculate the t-statistics for the significance of the abnormal and cumulative abnormal returns using the methodology employed by Mikkelson and Partch (1986). See Appendix 3 for details.
We examine the abnormal returns and their determinants for the sample of issuers of bonds with European put options. First, we test whether issuing these bonds significantly affects equity values. The stock price should, on average, react positively to the issue announcement if either reducing debt security mispricing due to asymmetric information or mitigating agency costs is a major motivation for issuing putable bonds. On the other hand, the stock price should, on average, react negatively to the issue announcement if management myopia (i.e., the relatively low coupon rate compared to straight or callable debt issues) is a major motivation of management in issuing putable bonds and if costs of financial distress are significant. Second, we estimate the following cross-sectional regression equation that relates the cumulative abnormal return of the issuing firm's equity to firm
14 Unlike the European put bond sample, which we were able to obtain from the Warga Fixed Income Database, we obtained our sample of poison puts from Dow Jones Interactive by using keywords such as poison put, event risk, and covenant ranking. Warga's database does not include an identifiable sample of poison put bonds.
15 We were only able to find a sample of poison put bonds with issue announcements using Dow Jones Interactive and LexisNexis for the period between 1986 and 1991. We did, however, search these news services for the period between 1979 and 2000.
16 The event study methodology was pioneered by Fama et al. (1969).
characteristics that proxy for agency costs, asymmetric information problems, and managerial myopia:

CAR3i = b0 + b1FCFi + b2RISKi + b3SIZEi + b4INTSAVEDi + b5ANALYSTSi + b6FINSi + ei,   (5.1)
where CAR3i is the 3-day (i.e., t = −1 to +1) cumulative abnormal return for firm i. The variables FCF, RISK, SIZE, INTSAVED, and ANALYSTS proxy for agency costs, information asymmetry, or management myopia. FINS is a dummy variable that indicates whether the company is a financial institution.
FCF is the level of free cash flow of the firm for the fiscal year prior to the issue announcement of putable bonds. We construct two alternative measures of FCF which are similar to the definition employed by Lehn and Poulsen (1989) and Bae et al. (1994). FCF1 is defined as [Earnings before Interest and Taxes – Taxes – Interest Expense – Preferred Dividend Payments – Common Stock Dividend Payments]/Total Assets. FCF2 is defined as [Earnings before Interest and Taxes – Taxes – Interest Expense]/Total Assets.17 The inclusion of a put option in the bond contract should help restrain management from misusing its cash flow resources. We therefore expect b1 to be positive if reducing agency cost is a major motivation for issuing putable bonds.
RISK is a dummy variable that equals 1 if the putable bond is rated by Standard and Poor's as BBB+ or below and zero otherwise.18 The impact of issuing putable bonds on equity value may be associated with the variable RISK because of two alternative hypotheses. First, the level of agency costs due to the asset substitution problem is positively related to the probability of financial distress. We use this dummy variable as a proxy for the firm's ability and incentive to shift risk. According to the agency cost motivation, we expect b2 to be positive. Second, the greater the probability of financial distress, the more likely are bondholders to exercise their put option and force the firm to prematurely retire the debt at a time that is most inconvenient to the firm. Thus, according to the management myopia hypothesis, we expect b2 to be negative.
SIZE is the natural logarithm of the total asset level of the issuing firm at the end of the fiscal year preceding the issue announcement date. SIZE may have two contrasting impacts on CAR3. First, SIZE may be interpreted as a proxy for the level of asymmetric information. We expect that the degree of asymmetric information is inversely related to SIZE. Hence, according to the debt security mispricing motivation, we expect the putable bond issue announcement to have a larger positive impact on small firms than on large firms. Consequently, we expect b3 to be negative. On the other hand, we also expect that the risk of default is inversely related to firm size. The higher the probability of default, the lower the probability that the firm will be in existence and be able to pay its obligations (including the par value of the bond if the put option is exercised) at the exercise date of these European put options. Thus, size may be an indirect proxy for the aggregate value of the put option. In this case, we expect b3 to be positive.
INTSAVED is the scaled (per $100 of assets for the fiscal year prior to the issue announcement date) annual reduction in interest expense due to the incorporation of a European put option in the bond contract. The annual interest expense reduction is calculated as the product of the dollar amount of the putable bond issue and the yield difference between straight and putable bonds. These yield differences are calculated by subtracting the yield to maturity of the putable bond from the yield to maturity of an equivalent non-putable bond, also taken from the Warga Fixed Income Database. The equivalent bond has a similar maturity, credit rating, and call feature to the putable bond. The equivalent non-putable bond is selected from the issuing firm if, at the issue date of the putable debt, the issuing firm has an appropriate (in terms of maturity, credit rating, and call feature) outstanding non-putable bond. Otherwise, we choose an appropriate equivalent bond from a firm of the same industry code as given by the Warga Fixed Income Database. We posit that the value of the embedded put option is directly related to the (present) value of INTSAVED. If the benefits of mitigating either agency costs or security mispricing are a major reason for issuing European put bonds, and if these benefits are directly related to the value of the embedded option, then b4 should be positive. In contrast, if managerial myopia is a major reason for issuing European put bonds, and if the expected reduction in equity value due to the cost of financial distress is related to the value of the option, then b4 should be negative. The b4 coefficient may also be negative because it proxies for a loss of interest tax shield due to issuing putable rather than straight bonds.
The variable ANALYSTS, defined as the natural logarithm of one plus the number of analysts who follow the issuing firm for the quarter prior to the putable bond issue announcement date, is another proxy for the degree of asymmetric information.19 We obtain the number of analysts following each firm, for the year prior to the bond announcement, from the I/B/E/S tapes. We hypothesize that the degree of asymmetric information is negatively related to ANALYSTS. Thus, if asymmetric information is a major motivation for issuing putable debt, we expect b5 to be negative.
We note that the European put and the poison put samples vary in their proportion of financial service companies. The European put sample contains 25 (out of 90) financial service companies, while the poison put sample contains only one. To control for the potential sector impact, we introduce the dummy variable FINS, which equals one if the company is a financial institution and zero otherwise,
17 Note that FCF1 is the actual free cash flow of the firm while FCF2 is the (maximum) potential free cash flow if dividends are not paid. We do not subtract capital expenditures from the free cash flow variables because they may be discretionary and may be allocated suboptimally to satisfy management's interests.
18 Had we assigned the value of 1 to only junk bonds, the number of observations with the risk variable equal to one would be too small for statistical inference.
19 An alternative measure of the degree of asymmetric information is the number of analysts. The empirical results are robust to this alternative measure.
Table 5.1 A summary of expected signs of abnormal returns and their determinants for issuers of bonds with European put options

                    Agency cost      Security mispricing   Managerial myopia
Abnormal returns    Positive         Positive              Negative
FCF                 Positive         No prediction         No prediction
RISK                Positive         No prediction         Negative
SIZE                No prediction    Ambiguous             No prediction
INTSAVED            Positive         Positive              Negative
ANALYSTS            No prediction    Negative              No prediction
The determinants of abnormal returns are: RISK is a dummy variable equal to one if the bond issue has an S&P bond rating of BBB+ or below and zero otherwise. FCF is defined in two separate ways. FCF1 and FCF2 are the two free cash flow measures as described in the text. In particular, FCF1 is defined as [Earnings before Interest and Taxes – Taxes – Interest Expense – Preferred Dividend Payments – Common Stock Dividend Payments]/Total Assets. FCF2 is defined as [Earnings before Interest and Taxes – Taxes – Interest Expense]/Total Assets. INTSAVED measures the relative amount of aggregate interest expense saved per $100 of total assets of the issuing firm. ANALYSTS is the natural logarithm of one plus the number of analysts who follow the issuing firm for the year prior to the putable bond issue announcement date. Those predictions that are confirmed by our empirical study are in bold letters, and those that are significantly different from zero are also underlined
for our regression analysis of the European put sample.20 Table 5.1 provides a summary of the expected signs of the regression coefficients for the sample of European put bonds.
We estimate our regressions using the general method of moments (GMM) procedure (see Appendix 4). The procedure yields White t-statistic estimates that are robust to heteroscedasticity.21 Note that because our instruments are the regressors themselves, the parameter estimates from OLS and GMM are identical. This is also discussed in Appendix 3.
We test the robustness and appropriateness of our specification by estimating alternative specifications that include additional variables. First, we include a dummy variable for the existence of a call feature to take into account the effectiveness of the call feature in mitigating agency costs. Second, we include a dummy variable indicating that the bond's put option expires within 5 years, because the effectiveness of the put bond in mitigating agency costs may be related to
20 In the next section, we report the regression results when we exclude financial service companies from our European put sample. Essentially, the basic results of our paper are not affected by the inclusion or exclusion of financial service companies.
21 The violation of the homoscedasticity assumption for OLS does not lead to biased regression coefficient estimators but potentially biases the computed t-statistics. The GMM procedure provides an asymptotically unbiased estimation of the t-statistics without specifying the heteroscedastic structure of the regression equation. The t-statistics obtained using GMM are identical to those obtained using ordinary least squares (OLS) in the absence of heteroscedasticity.
time to expiration of the put option. Third, we also include interaction variables between these dummy variables and FCF, RISK, INTSAVED, and ANALYSTS.
In alternative specifications, we include interest rate volatility measures to take into account the potential sensitivity of the value of the put option to interest rate volatility. It should be noted that the value of the put option depends on the volatility of the corporate bond yield. This volatility may be due to factors specific to the company (such as corporate mismanagement and asymmetric information about the firm's value) and to macroeconomic factors relating to the stability and level of market-wide default-free interest rates. Our independent variables in Eqs. 5.1 and 5.2 control for firm-specific yield volatility. To incorporate the possible impact of market-wide interest rate volatility, we include several alternative variables. We measure interest rate volatility as the standard deviation of the monthly 5-year Fama-Bliss discount rate from CRSP. The standard deviation is alternatively measured over the 60 months and the 24 months immediately prior to and after the announcement date. Alternatively, we use the level of the Fama-Bliss discount rate as a proxy for interest rate volatility.
We repeat the analysis for the sample of issuers of poison put bonds. As with the sample of bonds with an embedded European put option, we first examine the abnormal returns and their determinants for the sample of issuers of poison putable bonds. In essence, we test whether issuing these bonds significantly affects equity value. The stock price should, on average, react positively to the issue announcement if either mitigating agency costs or reducing debt security mispricing due to asymmetric information is a major motivation for issuing poison put bonds. On the other hand, the stock price should, on average, react negatively to the issue announcement if management myopia (i.e., the relatively low coupon rate compared to straight or callable debt issues) or management entrenchment is a major motivation of management in issuing putable bonds.
We use two alternative specifications for the cross-sectional study that relates the abnormal returns of the poison putable bond sample to firm characteristics. The first is described by Eq. 5.1. The second includes a variable, COVRANK, that equals the S&P event risk ranking on a scale of 1–5. An S&P event risk ranking of one implies that the embedded put option provides the most protection to bondholders against credit downgrade events; an event risk ranking of five offers the least protection to bondholders against such events. Thus, the second specification is:

CAR3i = b0 + b1FCFi + b2RISKi + b3SIZEi + b4INTSAVEDi + b5ANALYSTSi + b6COVRANKi + ei.   (5.2)
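For concreteness, the regressors in Eqs. 5.1 and 5.2 can be built directly from the definitions given above. The Python sketch below is illustrative only and is not the authors' code; all column names (ebit, taxes, n_analysts, and so on) are hypothetical stand-ins for the Compustat, Warga, and I/B/E/S items described in the text.

import numpy as np
import pandas as pd

def build_regressors(df: pd.DataFrame) -> pd.DataFrame:
    """Construct the cross-sectional regressors of Eqs. 5.1 and 5.2 for each issue."""
    x = pd.DataFrame(index=df.index)
    # Free cash flow measures, scaled by total assets (fiscal year before the announcement)
    x["FCF1"] = (df["ebit"] - df["taxes"] - df["interest_expense"]
                 - df["preferred_dividends"] - df["common_dividends"]) / df["total_assets"]
    x["FCF2"] = (df["ebit"] - df["taxes"] - df["interest_expense"]) / df["total_assets"]
    # RISK: 1 if the issue is rated BBB+ or below (hypothetical precomputed flag)
    x["RISK"] = df["rated_bbb_plus_or_below"].astype(int)
    # SIZE: natural log of total assets
    x["SIZE"] = np.log(df["total_assets"])
    # INTSAVED: annual interest saved per $100 of assets, using the yield spread between
    # an equivalent non-putable bond and the putable bond (yields in decimal form)
    x["INTSAVED"] = 100 * df["issue_amount"] * (df["straight_ytm"] - df["putable_ytm"]) / df["total_assets"]
    # ANALYSTS: log of one plus the number of analysts following the firm
    x["ANALYSTS"] = np.log1p(df["n_analysts"])
    # FINS (Eq. 5.1) and COVRANK (Eq. 5.2)
    x["FINS"] = df["is_financial"].astype(int)
    x["COVRANK"] = df["sp_event_risk_rank"]
    return x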
If mitigating agency costs were a major motivation for issuing poison put bonds, as suggested by Bae et al. (1994), then we would expect the regression coefficients b1, b2, and b4 to be positive. We also expect b6 to be negative because COVRANK is negatively related to the extent of bondholder protection, and this protection should deter management from asset substitution activities.
In contrast, if, as Cook and Easterwood (1994) and Roth and McDonald (1999) argue, management entrenchment is a major motivation for issuing putable bonds, then we expect b1 to be negative, since we expect the potential loss of value due to management entrenchment to be positively related to free cash flow, and b6 to be positive.22
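Because the GMM instruments are the regressors themselves, the point estimates coincide with OLS and only the standard errors differ from the classical ones. The following is a minimal sketch (again, not the authors' code) of how such heteroscedasticity-robust (White) t-statistics and the joint Wald test could be reproduced, assuming a data frame data that holds CAR3 together with the regressors from the previous sketch.

import statsmodels.api as sm

# Eq. 5.1 for the European put sample: OLS coefficients with White (HC0) standard
# errors; with the regressors as their own instruments these equal the exactly
# identified GMM estimates described in the text.
X = sm.add_constant(data[["FCF1", "RISK", "SIZE", "INTSAVED", "ANALYSTS", "FINS"]])
fit = sm.OLS(data["CAR3"], X).fit(cov_type="HC0")
print(fit.summary())

# Wald test that all slope coefficients are jointly zero
wald = fit.wald_test("FCF1 = 0, RISK = 0, SIZE = 0, INTSAVED = 0, ANALYSTS = 0, FINS = 0")
print(wald)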
5.5 Empirical Results
This section presents the empirical results for tests of our hypotheses discussed above. Panel A of Table 5.2 provides summary statistics of issuers of bonds with a European put option. The average annual sales, long-term debt, and total assets for the fiscal year prior to the issue announcement date are $14.66 billion, $4.21 billion, and $30.79 billion, respectively. The mean leverage ratio, defined as long-term debt to total assets, is 18.6 %. The average issue size of the putable bond is $190 million and represents, on average, 2.7 % of the firm's assets. As measured by RISK, 24 % of putable bonds are rated below A-. The average free cash flow of the firm as a percentage of the firm's total assets is below 4 %. The average amount of interest saved (INTSAVED) due to the inclusion of a put option feature in the bond issue is approximately 0.03 % of the total assets of the issuing firm. The maximum interest expense saved is as high as $1.23 per $100 of total assets.23 However, INTSAVED is negative for ten observations, which may be due to the lack of a closely matched straight bond. To guard against this error-in-variables problem, we estimate regression Eq. 5.1 using two alternative samples: the entire sample and a sample comprising only the positive INTSAVED observations. The number of analysts following a company ranges from 1 to 41.
Panel B of Table 5.2 provides the corresponding summary statistics for the sample of issuers of poison put bonds. We note that these firms are smaller than the issuers of bonds with European put options. Additionally, issuers of poison puts have a smaller number of following analysts, and their bonds tend to be somewhat riskier than those of the issuers of bonds with European put options.
Panel A of Table 5.3 presents the average daily abnormal performance of the equity of issuers of bonds with European put options in our sample for t = −30 to t = +30. These abnormal returns are obtained from a market model. Please note that the t-statistics presented in Table 5.3 are based on standardized abnormal returns. In an efficient market, we expect the market to impound the economic informational impact of the new bond issue on the day of the announcement (t = 0). The average abnormal return at t = 0 is almost 0.33 % and is significantly positive at the 1 % level.
22 If a major motivation for issuing putable bonds is to enhance management entrenchment, then we expect that the value of entrenchment is directly related to the level of free cash flow which management can misappropriate.
23 Crabbe (1991) demonstrates that bonds with poison puts reduce the cost of borrowing for firms.
Table 5.2 Summary statistics of putable bond issuing firms

Panel A: Sample of 90 issuers of bonds with European put options
Variable             Mean         Std dev      Minimum    Maximum
Sales                14,656.250   23,238.860   89.792     124,993.900
Long-term debt       4,206.870    7,568.520    51.522     50,218.300
Total assets         30,789.070   41,623.450   182.281    174,429.410
Leverage ratio       0.186        0.122        0.012      0.457
Issue size           190.162      106.701      14.300     500.000
Ebit                 1,818.390    2,647.810    10.457     13,275.700
ISSUE                0.027        0.056        0.001      0.439
RISK                 0.244        0.432        0.000      1.000
FCF1                 0.020        0.093        0.082      0.823
FCF2                 0.034        0.095        0.080      0.824
Number of analysts   21.922       8.388        1.000      41.000
INTSAVED             0.029        0.130        0.028      1.226

Panel B: Sample of 47 issuers of bonds with poison put options
Variable             Mean         Std dev      Minimum    Maximum
Sales                5,033.849    5,921.869    406.360    34,922.000
Long-term debt       1,203.060    2,148.865    25.707     13,966.000
Total assets         5,080.781    7,785.288    356.391    51,038.000
Leverage ratio       0.208        0.112        0.031      0.476
Issue size           158.114      80.176       50.000     350.000
Ebit                 481.121      639.842      30.800     3,825.000
ISSUE                0.062        0.048        0.006      0.251
RISK                 0.314        0.469        0.000      1.000
FCF1                 0.034        0.032        0.075      0.111
FCF2                 0.051        0.036        0.050      0.132
Number of analysts   18.803       7.792        2.000      39.000
INTSAVED             0.011        0.044        0.070      0.216
Sales, long-term debt, total assets, and EBIT are in millions of dollars and are for the fiscal year prior to the putable bond issue announcement. Leverage ratio is the ratio of the long-term debt to total assets. Issue size is the dollar amount of putable bond issue in millions of dollars. ISSUE is the ratio of issue size to the total assets of the issuing firm as of the fiscal year prior to the issue announcement date. RISK is a dummy variable equal to one if the bond issue has an S&P bond rating of BBB+ or below and zero otherwise. FCF1 and FCF2 are the two free cash flow measures as described in the text. In particular, FCF1 is defined as [Earnings before Interest and Taxes – Taxes – Interest Expense – Preferred Dividend Payments – Common Stock Dividend Payments]/Total Assets. FCF2 is defined as [Earnings before Interest and Taxes – Taxes – Interest Expense]/Total Assets. Number of analysts is the number of analysts who follow the issuing firms for the year prior to the putable bond issue. INTSAVED measures the relative amount of aggregate interest expense saved per $100 of total assets of the issuing firm
This suggests that the market views the announcement of putable bonds favorably. This may be due to mitigating agency costs or resolving asymmetric information. However, for the 3-day window (t = −1 to +1), the abnormal return is 0.19 % but is not statistically significantly different from zero.
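For reference, the abnormal returns underlying Table 5.3 follow the standard market-model event study outlined in the previous section. The sketch below is a simplified illustration and not the authors' code; in particular, the full Mikkelson and Partch (1986) standardization detailed in Appendix 3 also adjusts for out-of-sample prediction variance. Here ret and mkt are assumed to be daily firm and CRSP value-weighted index returns indexed by event day.

import numpy as np
import pandas as pd

def market_model_event_study(ret: pd.Series, mkt: pd.Series):
    """Abnormal returns for one issuer around its announcement date (event day 0).

    ret and mkt are daily returns indexed by event day, covering at least
    days -200 through +30.
    """
    est = ret.loc[-200:-31]                      # estimation window returns
    est_mkt = mkt.loc[-200:-31]
    beta, alpha = np.polyfit(est_mkt, est, 1)    # market-model slope and intercept
    resid_sd = (est - (alpha + beta * est_mkt)).std(ddof=2)

    ar = ret.loc[-30:30] - (alpha + beta * mkt.loc[-30:30])   # abnormal returns
    car = ar.cumsum()                                         # cumulative abnormal returns
    sar = ar / resid_sd                                       # standardized ARs (simplified)
    return ar, car, sar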
Table 5.3 The average daily abnormal returns for firms issuing putable bonds from 30 days prior to the putable bond issuance announcement to 30 days after the announcement. The t-statistics are based on standardized abnormal returns. CAR is the cumulative abnormal return Event day Abnormal return t-statistic Panel A: Sample of 90 issuers of bonds with European put options 30 0.0006 0.1988 29 0.0004 0.2305 28 0.0011 0.8169 27 0.0003 0.1248 26 0.0002 0.2028 25 0.0003 0.0263 24 0.0003 0.2174 23 0.0002 0.1306 22 0.0007 0.2573 21 0.0003 0.0367 20 0.0024 1.6700 19 0.0000 0.1348 18 0.0025 1.6932 17 0.0029 1.7141 16 0.0005 0.0154 15 0.0022 1.4378 14 0.0022 1.4701 13 0.0002 0.1052 12 0.0007 0.4783 11 0.0014 1.2803 10 0.0003 0.1904 9 0.0014 0.5100 8 0.0006 0.6052 7 0.0001 0.4020 6 0.0005 0.1268 5 0.0027 1.3209 4 0.0001 0.0050 3 0.0020 1.4407 2 0.0028 1.8590 1 0.0021 1.4323 0 0.0033 2.6396 1 0.0007 0.3534 2 0.0007 0.3177 3 0.0002 0.2628 4 0.0010 0.7364 5 0.0000 0.2821 6 0.0036 2.4342 7 0.0002 0.4355 8 0.0004 0.4253
CAR 0.0006 0.0010 0.0021 0.0017 0.0019 0.0022 0.0025 0.0023 0.0030 0.0026 0.0003 0.0002 0.0022 0.0007 0.0002 0.0024 0.0046 0.0043 0.0050 0.0036 0.0039 0.0025 0.0019 0.0021 0.0026 0.0053 0.0052 0.0071 0.0043 0.0022 0.0055 0.0061 0.0068 0.0070 0.0060 0.0059 0.0023 0.0022 0.0018 (continued)
Table 5.3 (continued) Event day Abnormal return t-statistic 9 0.0007 0.5236 10 0.0010 1.0509 11 0.0039 2.8180 12 0.0010 0.7853 13 0.0002 0.0036 14 0.0013 0.7663 15 0.0006 0.0920 16 0.0014 1.2984 17 0.0002 0.0847 18 0.0001 0.2386 19 0.0002 0.3055 20 0.0005 0.3692 21 0.0006 0.0998 22 0.0016 1.1099 23 0.0014 0.8924 24 0.0015 1.1956 25 0.0001 0.3050 26 0.0001 0.0257 27 0.0004 0.4372 28 0.0011 0.4242 29 0.0004 0.8785 30 0.0010 0.5929 Panel B: Sample of 47 issuers of bonds with poison put options 30 0.0002 0.2385 29 0.0029 1.7698 28 0.0005 0.1247 27 0.0015 0.2635 26 0.0010 0.4019 25 0.0013 0.6437 24 0.0003 0.1287 23 0.0007 0.3447 22 0.0021 1.3803 21 0.0023 0.5731 20 0.0028 1.2905 19 0.0009 0.1682 18 0.0024 1.1787 17 0.0034 1.4394 16 0.0001 0.0490 15 0.0002 0.2326 14 0.0035 1.8138 13 0.0010 0.7301 12 0.0026 1.9908
CAR 0.0011 0.0021 0.0060 0.0070 0.0068 0.0055 0.0061 0.0075 0.0077 0.0076 0.0078 0.0083 0.0077 0.0061 0.0047 0.0061 0.0062 0.0061 0.0065 0.0054 0.0058 0.0048 0.0002 0.0032 0.0036 0.0021 0.0031 0.0018 0.0015 0.0022 0.0044 0.0020 0.0048 0.0058 0.0081 0.0048 0.0047 0.0049 0.0014 0.0005 0.0031 (continued)
Table 5.3 (continued) Event day 11 10 9 8 7 6 5 4 3 2 1 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30
Abnormal return 0.0002 0.0026 0.0034 0.0002 0.0003 0.0019 0.0009 0.0006 0.0001 0.0003 0.0028 0.0028 0.0004 0.0019 0.0027 0.0022 0.0034 0.0006 0.0002 0.0036 0.0026 0.0017 0.0016 0.0008 0.0020 0.0004 0.0002 0.0005 0.0003 0.0005 0.0018 0.0014 0.0022 0.0015 0.0004 0.0015 0.0006 0.0004 0.0015 0.0014 0.0022 0.0002
t-statistic 0.0182 1.3253 1.5412 0.2221 0.0028 1.0805 0.7397 0.5318 0.2118 0.0870 1.5751 1.3777 0.5639 0.8618 1.4826 1.4571 1.2458 0.5491 0.1866 2.2457 1.4036 1.0479 1.1626 0.7991 1.0808 0.4107 0.2023 0.2077 0.3234 0.5278 0.9564 0.7079 1.0614 0.4152 0.2990 0.7324 0.2189 0.4731 0.2512 0.2703 1.1175 0.2668
CAR 0.0029 0.0003 0.0038 0.0040 0.0037 0.0018 0.0026 0.0032 0.0033 0.0036 0.0064 0.0092 0.0088 0.0069 0.0041 0.0019 0.0014 0.0009 0.0006 0.0030 0.0003 0.0020 0.0036 0.0043 0.0064 0.0068 0.0067 0.0072 0.0075 0.0070 0.0087 0.0073 0.0095 0.0110 0.0114 0.0129 0.0134 0.0130 0.0115 0.0102 0.0124 0.0126
Fig. 5.1 Cumulative abnormal returns from t = −30 to t = +30 around the issue announcement of bonds with European or poison put features. The figure plots the CAR for European puts and the CAR for poison puts against the event date.
Table 5.4 Cross-sectional regression results for the European put sample: base case
Intercept FCF1 RISK SIZE INTSAVED ANALYSTS FINS H0: b1=b2=b3=b4=b5=b6=0 Adj. R2 Intercept FCF2 RISK SIZE INTSAVED ANALYSTS FINS H0: b1=b2=b3=b4=b5=b6=0 Adj. R2
Full sample of 90 firms Coef. t-ratio p-value 0.0388 1.69 0.095 0.0016 0.09 0.9289 0.0016 0.25 0.8026 0.0097 2.93 0.0044 0.0269 2.59 0.0115 0.0153 2.48 0.0152 0.0203 2.79 0.0065 p-value w2 26.38 0.0002 0.1185 Coef. t-ratio p-value 0.0390 1.71 0.0915 0.0004 0.02 0.9808 0.0016 0.24 0.8075 0.0097 2.93 0.0043 0.0270 2.59 0.0112 0.0152 2.47 0.0157 0.0202 2.73 0.0078 p-value w2 26.73 0.0002 0.1184
Sample of 80 bonds with positive INTSAVED Coef. t-ratio p-value 0.03657 1.58 0.1193 0.00392 0.22 0.8277 0.00245 0.37 0.7117 0.010637 3.03 0.0034 0.023819 2.27 0.0259 0.01848 2.82 0.0062 0.02062 2.75 0.0075 w2 p-value 29.19 0.000 0.1314 Coef. t-ratio p-value 0.03642 1.58 0.1191 0.00392 0.21 0.8309 0.00247 0.37 0.7094 0.010633 3.03 0.0034 0.023751 2.26 0.0266 0.01849 2.8 0.0065 0.02066 2.7 0.0085 w2 p-value 29.25 0.000 0.1314
This table reports the regression coefficients and their t-statistics of the following regression equation: CAR3i = b0 + b1FCFi + b2RISKi + b3SIZEi + b4INTSAVEDi + b5ANALYSTSi + b6FINSi + ei. CAR3 is the cumulative abnormal return measured from the day before the announced issue to the day after the announced issue; FCF1 and FCF2 are the two free cash flow measures as described in the text; RISK is a dummy variable equal to one if the bond issue has an S&P bond rating of BBB+ or below and zero otherwise; SIZE is the natural logarithm of the total assets of the issuing firm at the end of the fiscal year prior to the issue announcement; INTSAVED measures the relative amount of aggregate interest expense saved per $100 of total assets of the issuing firm; ANALYSTS is the natural logarithm of one plus the number of analysts following the firm; and FINS is equal to one if the parent company is a financial institution and is equal to zero otherwise. The p-values assume a two-tail test. All the t-ratios are heteroscedasticity consistent
Panel B of Table 5.3 presents the corresponding results for the issuers of poison put bonds. Note that the abnormal return at t = 0 is 0.28 % with a t-statistic of 1.38. Furthermore, during the 3-day (i.e., t = −1 to +1) window, the abnormal return is negative (−0.52 %) and is significantly different from zero (t-statistic of −2.03). This negative abnormal return is consistent with the view that poison put bonds help entrench the current management. The cumulative abnormal returns of the two samples of bonds with embedded put options are also depicted in Fig. 5.1. The patterns of CARs
around the issue announcement dates clearly show the positive trend of CARs for bonds with European puts versus the negative trend of CARs for bonds with poison puts.
Next, to test our hypotheses, we examine what determines the cross-sectional variation in the CARs for the samples of European and poison put bonds. Table 5.4 reports the parameter and t-statistic estimates for regression Eq. 5.1 for the sample of issuers of bonds with European put options. The four regressions reported in Table 5.4 differ by the sample that is used and by the specification used for the free cash flow variable. The regression in the top left panel uses the entire sample of 90 observations and the variable FCF1. The regression in the top right panel uses only the 80 observations for which the variable INTSAVED is positive. The regressions reported in the bottom panels use FCF2 instead of FCF1.
First, note that our estimates are robust to these alternative specifications. The Wald test of the hypothesis that all the slope coefficients are jointly zero is rejected at the 1 % significance level for all four regression specifications. The regression coefficients on the ANALYSTS variable, b5, are negative and significantly different from zero in all four regressions. These estimates are consistent with the reduction-in-security-mispricing motivation for issuing putable bonds. In contrast, the regression coefficients for FCF and RISK are not significantly different from zero, indicating a lack of empirical support for the mitigating agency cost hypothesis. The regression coefficients for the INTSAVED variable are positive and significantly different from zero in all four regressions. This result is again consistent with the security mispricing motivation and inconsistent with the management myopia motivation. Additionally, the regression coefficient b3 on the size variable is also positive and significantly different from zero, a result more consistent with size being related to the probability of survivorship of the firm. In summary, the announcement of bonds with European puts is associated with positive abnormal returns that are related to our proxies for potential benefits from mitigating security mispricing.
Panel A of Table 5.5 presents the corresponding estimates for the sample of issuers of poison put bonds, while Panel B of Table 5.5 presents the estimates of Eq. 5.2 for the same sample. The estimates reported in Table 5.5 are different from those in Table 5.4. The coefficients of the ANALYSTS variable, b5, are significantly positive for the full sample. The coefficients for SIZE are significantly negative for the full sample. All the other variables are not significantly different from zero. The estimates of Eq. 5.2 indicate that the coefficients of the variable COVRANK are negative and significantly different from zero.
Our results so far indicate that the equity abnormal returns around the issue announcement dates of poison put bonds are negative, consistent with the managerial entrenchment evidence in Cook and Easterwood (1994) and Roth and McDonald (1999). Additionally, the positive and significant coefficient of the ANALYSTS variable may also be consistent with the management entrenchment hypothesis because it appears that the market's negative response to the issuance of poison putable bonds arises from the less followed firms, where the management
Table 5.5 Cross-sectional regression results for the poison put sample Full sample of 47 firms Coef. t-ratio p-value Panel A: Sample of issuers of bonds with poison put options Intercept 0.015 0.530 0.602 FCF1 0.170 1.330 0.191 RISK 0.010 1.550 0.129 SIZE 0.007 2.250 0.030 INTSAVED 0.022 0.370 0.714 ANALYSTS 0.014 2.100 0.042 w2 p-value H0:b1¼b2¼b3¼b4¼b5¼0 11.927 0.036 Adj. R2 0.015 Intercept 0.018 0.730 0.472 FCF2 0.153 1.280 0.208 RISK 0.009 1.400 0.168 SIZE 0.007 2.480 0.017 INTSAVED 0.027 0.460 0.649 ANALYSTS 0.014 1.980 0.054 w2 p-value H0:b1¼b2¼b3¼b4¼b5¼0 11.538 0.042 Adj. R2 0.007
Sample of 28 bonds with positive INTSAVED Coef. t-ratio p-value 0.030 0.082 0.001 0.006 0.024 0.006
0.680 0.506 0.510 0.617 0.090 0.932 1.090 0.288 0.280 0.782 0.370 0.717 w2 p-value 1.906 0.861 0.181 0.026 0.640 0.532 0.045 0.270 0.788 0.000 0.030 0.978 0.005 1.020 0.318 0.016 0.180 0.858 0.006 0.330 0.744 w2 p-value 1.857 0.868 0.192 Sample of 23 bonds with Full sample of 40 firms positive INTSAVED Coef. t-ratio p-value Coef. t-ratio p-value Panel B: Sample of issuers of bonds with poison put options with COVRANK Intercept 0.052 1.200 0.239 0.136 2.550 0.021 FCF1 0.158 1.100 0.281 0.271 1.330 0.201 RISK 0.011 1.140 0.264 0.006 0.370 0.720 SIZE 0.008 1.800 0.081 0.014 2.370 0.031 INTSAVED 0.029 0.430 0.670 0.071 0.720 0.484 COVRANK 0.008 3.900 0.000 0.012 3.520 0.003 ANALYSTS 0.013 2.090 0.045 0.009 0.700 0.497 w2 p-value w2 p-value H0:b1¼b2¼b3¼b4¼b5¼b6¼0 48.723 0.000 20.513 0.002 Adj. R2 0.114 0.058 Intercept 0.053 1.440 0.160 0.117 3.030 0.008 FCF2 0.132 1.000 0.327 0.159 0.890 0.386 RISK 0.010 1.110 0.275 0.000 0.010 0.995 SIZE 0.009 2.030 0.051 0.013 2.590 0.020 INTSAVED 0.032 0.510 0.616 0.030 0.240 0.810 COVRANK 0.008 4.100 0.000 0.010 3.610 0.002 (continued)
Table 5.5 (continued) Sample of 23 bonds with Full sample of 40 firms positive INTSAVED Coef. t-ratio p-value Coef. t-ratio p-value Panel B: Sample of issuers of bonds with poison put options with COVRANK ANALYSTS 0.014 1.970 0.057 0.010 0.710 0.488 p-value w2 p-value w2 53.590 0.000 31.077 0.000 H0: b1=b2=b3=b4=b5=b6=0 Adj. R2 0.102 0.006
Panel A reports the regression coefficients and their t-statistics of the following regression equation: CAR3i = b0 + b1FCFi + b2RISKi + b3SIZEi + b4INTSAVEDi + b5ANALYSTSi + ei. Panel B reports the regression coefficients and their t-statistics of the following regression equation: CAR3i = b0 + b1FCFi + b2RISKi + b3SIZEi + b4INTSAVEDi + b5ANALYSTSi + b6COVRANKi + ei. CAR3 is the cumulative abnormal return measured from the day before the announced issue to the day after the announced issue; FCF1 and FCF2 are the two free cash flow measures as described in the text; RISK is a dummy variable equal to one if the bond issue has an S&P bond rating of BBB+ or below, and zero otherwise; SIZE is the natural logarithm of the total assets of the issuing firm at the end of the fiscal year prior to the issue announcement; INTSAVED measures the relative amount of aggregate interest expense saved per $100 of total assets of the issuing firm; ANALYSTS is the natural logarithm of one plus the number of analysts following the firm; and COVRANK equals the S&P Event Risk Ranking on a scale of 1–5. The p-values assume a two-tail test. All the t-ratios are heteroscedasticity consistent
strategy may not be as well known prior to the bond issuance. However, these returns are negatively related to the event risk covenant ranking, consistent with the agency cost evidence in Bae et al. (1994). The negative abnormal returns experienced by issuers of poison puts and the different coefficients on SIZE, INTSAVED, and ANALYSTS indicate that European put bonds are viewed differently from poison put bonds.
In summary, the results reported in Tables 5.2–5.5 are consistent with the view that European put bonds, where the put option does not depend on a specific company event, are more effective in mitigating problems that are associated with security mispricing. Furthermore, there is no empirical support for the hypothesis that European put bonds are used as a vehicle for management entrenchment. In contrast, the evidence for poison put bonds is consistent with both management entrenchment and mitigating agency costs.
We now discuss several robustness checks of our basic results. Note that, as reported in Table 5.4, the regression coefficient of the financial industry dummy variable, b6, is negative and significantly different from zero. To verify that the differing results between the European put bond sample (where 25 out of 90 firms
Table 5.6 Cross-sectional regression results for the European put sample excluding financial service companies Full sample of 65 firms Coef. t-ratio Intercept 0.0410 1.27 FCF1 0.0005 0.04 RISK 0.0028 0.41 SIZE 0.0108 2.79 INTSAVED 0.0248 1.85 ANALYSTS 0.0178 2.54 w2 32.66 H0:b1¼b2¼b3¼b4¼b5¼0 0.1450 Adj. R2 Intercept 0.0409 1.28 FCF2 0.0000 0.00 RISK 0.0028 0.41 SIZE 0.0108 2.78 INTSAVED 0.0248 1.85 ANALYSTS 0.0178 2.54 w2 32.66 H0:b1¼b2¼b3¼b4¼b5¼0 Adj. R2 0.1450
p-value 0.2087 0.9715 0.6821 0.0071 0.0694 0.0138 p-value 0.0000 0.2073 0.9992 0.6805 0.0072 0.0686 0.0137 p-value 0.0000
Sample of 61 bonds with positive INTSAVED Coef. t-ratio p-value 0.0416 1.25 0.2182 0.0016 0.10 0.9200 0.0033 0.49 0.6279 0.0114 2.80 0.0071 0.0242 1.79 0.0785 0.0191 2.70 0.0093 w2 p-value 32.10 0.0000 0.1524 0.0414 1.25 0.2181 0.0031 0.17 0.8676 0.0034 0.50 0.6206 0.0114 2.79 0.0071 0.0241 1.79 0.0784 0.0191 2.70 0.0091 w2 p-value 32.10 0.0000 0.1525
This table reports the regression coefficients and their t-statistics of the following regression equation: CAR3i = b0 + b1FCFi + b2RISKi + b3SIZEi + b4INTSAVEDi + b5ANALYSTSi + ei. CAR3 is the cumulative abnormal return measured from the day before the announced issue to the day after the announced issue; FCF1 and FCF2 are the two free cash flow measures as described in the text; RISK is a dummy variable equal to one if the bond issue has an S&P bond rating of BBB+ or below, and zero otherwise; SIZE is the natural logarithm of the total assets of the issuing firm at the end of the fiscal year prior to the issue announcement; INTSAVED measures the relative amount of aggregate interest expense saved per $100 of total assets of the issuing firm; ANALYSTS is the natural logarithm of one plus the number of analysts following the firm. The p-values assume a two-tail test. All the t-ratios are heteroscedasticity consistent
are from the financial service sector) and the poison put sample (where only one firm is from the financial service sector) are not driven by this difference in sector composition, we replicate the regressions of Tables 5.4 and 5.5 excluding all financial service companies. The estimates for the subsample of European put bonds are reported in Table 5.6. Note that the coefficients and significance levels are very similar to those reported in Table 5.4. Because only one poison put company is from the financial sector and because the estimates for the corresponding subsample of poison put bonds are very similar to those reported in Table 5.5, we do not report these estimates. Thus, we conclude that the differences between poison and European put bonds are not due to the different sector composition of our samples.
Table 5.7 The cross-sectional regression results for the European put sample: impact of time to expiration Full sample of 90 firms Coef. t-ratio p-value Intercept 0.0288 1.27 0.2063 FCF1 0.0037 0.24 0.8147 RISK 0.0010 0.16 0.8703 SIZE 0.0088 2.74 0.0075 INTSAVED 0.0143 1.35 0.1814 ANALYSTS 0.0165 2.68 0.0090 FINS 0.0185 2.60 0.0112 EXPLT5 0.0097 1.91 0.0601 p-value w2 34.82 0.0000 H0:b1¼b2¼b3¼b4¼b5¼b6¼ b7¼0 0.1413 Adj. R2 Intercept 0.0291 1.3 0.1981 FCF2 0.0051 0.33 0.7397 RISK 0.0010 0.15 0.8794 SIZE 0.0088 2.74 0.0075 INTSAVED 0.0144 1.36 0.1784 ANALYSTS 0.0164 2.67 0.0092 FINS 0.0184 2.53 0.0133 EXPLT5 0.0097 1.91 0.0593 p-value w2 34.85 0.0000 H0:b1¼b2¼b3¼b4¼b5¼b6¼b7¼0 0.1415 Adj. R2
Sample of 80 bonds with positive INTSAVED Coef. t-ratio p-value 0.0282 1.22 0.2279 0.0007 0.05 0.9627 0.0021 0.32 0.7527 0.0098 2.88 0.0052 0.0131 1.22 0.2249 0.0193 2.95 0.0043 0.0194 2.61 0.0109 0.0083 1.37 0.1739 w2 p-value 37.88 0.000 0.1420 0.0283 1.22 0.2248 0.0010 0.06 0.9506 0.0021 0.31 0.7542 0.0098 2.88 0.0052 0.0132 1.22 0.2257 0.0193 2.93 0.0045 0.0194 2.57 0.0122 0.0083 1.38 0.1733 w2 p-value 37.82 0.000 0.1420
This table reports the regression coefficients and their t-statistics of the following regression equation: CAR3i = b0 + b1FCFi + b2RISKi + b3SIZEi + b4INTSAVEDi + b5ANALYSTSi + b6FINSi + b7EXPLT5i + ei. CAR3 is the cumulative abnormal return measured from the day before the announced issue to the day after the announced issue; FCF1 and FCF2 are the two free cash flow measures as described in the text; RISK is a dummy variable equal to one if the bond issue has an S&P bond rating of BBB+ or below and zero otherwise; SIZE is the natural logarithm of the total assets of the issuing firm at the end of the fiscal year prior to the issue announcement; INTSAVED measures the relative amount of aggregate interest expense saved per $100 of total assets of the issuing firm; ANALYSTS is the natural logarithm of one plus the number of analysts following the firm; FINS is equal to one if the parent company is a financial institution and is equal to zero otherwise; and EXPLT5 is equal to one if the expiration of the embedded option is less than 5 years from the issue date and is equal to zero otherwise. The p-values assume a two-tail test. All the t-ratios are heteroscedasticity consistent
Next, we examine whether the term to expiration of European put bonds affects our estimates.24 Table 5.7 repeats the regressions of Table 5.4 when we introduce an additional explanatory dummy variable, EXPLT5, that is equal to one if the
24 Recall that by definition the option of poison put bonds does not have an expiration date.
expiration date of the embedded European put option is less than 5 years from the issue date and zero otherwise. We find that INTSAVED is still positive but is no longer significant, and the regression coefficient for EXPLT5 is positive and significantly different from zero at the 10 % level for the full sample. The estimates and the significance of the other coefficients are largely unaffected. We conclude that these results still confirm the security mispricing hypothesis because the regression coefficients for ANALYSTS are significantly negative.
Our results are robust to the alternative specifications we described at the end of the previous section. In particular, including a dummy variable for the existence of a call feature, interaction variables, and alternative measures of interest rate volatility does not affect our results. These results are not reported here but are available upon request.
Finally, we test for multicollinearity by examining the eigenvalues of the correlation matrix of the (non-dummy) independent variables. For orthogonal data the eigenvalue, λ, associated with each variable should equal 1, and Σ(1/λ) should equal the number of regressors (i.e., five for our study). For our sample of bonds with an embedded European put option, this sum is 7.14 when FCF1 is used and 7.19 when FCF2 is used, indicating a lack of significant multicollinearity. Similar results are obtained for the poison put sample. Therefore, our findings are robust to alternate specifications and are not driven by multicollinearity.
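This eigenvalue diagnostic can be reproduced directly from the correlation matrix of the regressors. A minimal sketch, again assuming the hypothetical data frame data from the earlier sketches:

import numpy as np

# Eigenvalues of the correlation matrix of the (non-dummy) regressors; for perfectly
# orthogonal regressors every eigenvalue is 1 and the sum of reciprocal eigenvalues
# equals the number of regressors, so large departures signal multicollinearity.
cols = ["FCF1", "SIZE", "INTSAVED", "ANALYSTS"]   # hypothetical column names
eigvals = np.linalg.eigvalsh(np.corrcoef(data[cols].to_numpy(), rowvar=False))
print(eigvals, (1.0 / eigvals).sum())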
5.6 Concluding Remarks
This paper examines the motivations for, and the equity valuation impact of, issuing European putable bonds, that is, bonds containing an embedded European put option held by bondholders. The option entitles them to sell the bond back to the firm on the exercise date at a predetermined price. Unlike a poison put bond, which has been studied in the literature, the exercise of the put option in a European putable bond is not contingent upon a company-related event. We find that the market reacts favorably to the issue announcement of such putable bonds.
We consider three alternative motivations for incorporating a European put option in a bond contract: reducing the security mispricing impact of asymmetric information, mitigating agency costs, and the relatively low coupon rate (a myopic view that ignores the potential liability to the firm due to the put option). We test these hypotheses by conducting a cross-sectional empirical study of the impact of putable debt issue announcements on the equity value of the issuing companies. Our results indicate that the market favorably views putable bonds as a means to reduce security mispricing. We also find that the market reaction to poison put announcements differs from the market reaction to issue announcements for bonds with European put options.
Appendix 1: Sample of Firms Issuing Putable Bonds and the Put Bond Characteristics Amount Issue Name Coupon ($000’s) date Air Products 7.34 100,000 06/15/ and Chemicals 1996 Inc American 8.50 150,000 06/09/ Express 1989 Anadarko 7.25 100,000 03/17/ Petroleum 1995 Corp Anadarko 7.73 100,000 09/19/ Petroleum 1996 Corp Bankamerica 7.65 150,000 04/26/ Corp 1984 Bausch & 6.56 100,000 08/12/ Lomb Inc 1996 Baxter 8.88 100,000 06/14/ International 1988 Inc Burlington 7.29 200,000 05/31/ Northern 1996 Santa Fe Champion 15.63 100,000 07/22/ International 1982 Corp Champion 6.40 200,000 02/15/ International 1996 Corp Chase 7.55 250,000 06/12/ Manhattan 1985 Corp – old Chrysler Corp 12.75 200,000 11/01/ 1984 Chrysler Corp 9.65 300,000 07/26/ 1988 Chrysler Corp 9.63 200,000 09/06/ 1988 Circus Circus 6.70 150,000 11/15/ Enterprise Inc 1996 Citicorp 9.40 250,000 12/06/ 1983 Citicorp 10.25 300,000 12/19/ 1984
Put S&P Maturity expiration YTM credit (years) (years) Callability (%) rating 30 12 N 7.20 7
10
5
N
8.56
5
30
5
N
6.90
9
100
30
N
7.21
9
10
2
C
30
5
N
6.71
8
30
5
C
8.96
9
40
12
N
7.39 10
12
3
C
30
10
N
6.69 10
12
5
C
8.87
15
5
N
12.75 10
20
5
C
9.65 10
20
2
C
9.65 11
100
7
N
6.66
9
12
2
C
11.05
4
10
2
C
10.31
4
12.23 10
15.68
9
5
(continued)
Amount Issue Name Coupon ($000’s) date Citicorp 8.75 250,000 12/12/ 1985 Coca-Cola 7.00 300,000 09/27/ Enterprises 1996 Commercial 8.50 100,000 02/05/ Credit 1988 Commercial 8.70 150,000 06/09/ Credit 1989 Commercial 8.70 100,000 06/11/ Credit 1990 Commercial 7.88 200,000 02/01/ Credit 1995 Conagra Inc 7.13 400,000 10/02/ 1996 Corning Inc 7.63 100,000 07/31/ 1994 Deere & Co 8.95 199,000 06/08/ 1989 Diamond 7.65 100,000 06/25/ Shamrock Inc 1996 Dow 8.48 150,000 08/22/ Chemical 1985 Eastman 7.25 125,000 04/08/ Kodak Co 1987 Eaton Corp 8.00 100,000 08/18/ 1986 Eaton Corp 8.88 38,000 06/14/ 1989 Eaton Corp 6.50 150,000 06/09/ 1995 Enron Corp 9.65 100,000 05/17/ 1989 First Chicago 8.50 98,932 05/13/ Corp 1986 First Interstate 7.35 150,000 08/23/ Bancorp 1984 First Interstate 9.70 100,000 07/08/ Bancorp 1985 First Union 7.50 250,000 04/25/ Corp (NC) 1995 First Union 6.82 300,000 08/01/ Corp (NC) 1996 Ford Motor Co 7.50 250,000 10/30/ 1985
Put S&P Maturity expiration YTM credit (years) (years) Callability (%) rating 15 3 C 8.84 4 30
10
N
7.00
5
10
5
N
8.84
8
20
10
N
9.06
8
20
3
N
7.54
7
30
10
N
7.30
6
30
10
N
6.95
9
30
10
N
7.81
6
30
10
C
8.99
7
30
10
N
7.37 10
30
14
C
9.35
7
10
5
N
8.48
8
20
10
N
7.90
7
30
15
N
9.00
7
30
10
N
6.64
7
12
7
N
9.50 10
12
7
N
8.70
7
15
6
C
13.70
4
15
5
C
9.91
4
40
10
N
7.96
8
30
10
N
7.43
8
15
3
N
9.63
6
Amount Issue Name Coupon ($000’s) date Ford Motor Co 9.95 300,000 02/10/ 1992 General 6.75 250,000 11/06/ Electric Co 1986 General 8.25 500,000 04/26/ Electric Co 1988 General 8.38 200,000 04/30/ Motors Corp 1987 General 8.63 400,000 06/09/ Motors Corp 1989 General 8.88 500,000 05/31/ Motors Corp 1990 Harris Corp 6.65 100,000 08/01/ 1996 Ingersoll6.48 150,000 06/01/ Rand Co 1995 Intl Business 13.75 125,000 03/09/ Machines 1982 Corp ITT Industries 8.50 100,000 01/20/ Inc 1988 ITT Industries 8.55 100,000 06/12/ Inc 1989 ITT Industries 3.98 100,000 02/15/ Inc 1990 Johnson 7.70 125,000 02/28/ Controls Inc 1995 K N Energy 7.35 125,000 07/25/ Inc 1996 Litton 6.98 100,000 03/15/ Industries Inc 1996 Lockheed 7.20 300,000 05/01/ Martin Corp 1996 Marriott Corp 9.38 250,000 06/11/ 1987 Merrill Lynch 11.13 250,000 03/26/ & Co 1984 Merrill Lynch 9.38 125,000 06/04/ & Co 1985 Merrill Lynch 8.40 200,000 10/25/ & Co 1989 Motorola Inc 8.40 200,000 08/15/ 1991 Motorola Inc 6.50 400,000 08/31/ 1995
Put S&P Maturity expiration YTM credit (years) (years) Callability (%) rating 40 3 N 8.56 6 25
5
C
6.80
2
30
3
C
8.41
2
10
5
N
8.38
5
10
5
N
8.59
5
20
5
N
8.80
5
10
5
N
6.90
8
30
10
N
6.64
7
12
3
C
14.05
2
10
5
N
8.50
7
20
9
N
9.20
7
15
3
N
8.60
7
20
10
N
7.83
7
30
10
N
7.47
9
40
10
N
7.01
9
40
12
N
7.29
9
20
10
N
9.74
8
15
5
C
13.36
4
12
6
C
9.48
4
30
5
N
8.67
6
40
10
N
8.36
4
30
10
N
6.55
4
Amount Issue Coupon ($000’s) date 9.25 300,000 08/03/ 1989
Name Occidental Petroleum Corp Penney 6.90 (JC) Co Philip Morris 7.00 Cos Inc Philip Morris 9.00 Cos Inc Philip Morris 6.95 Cos Inc Pitney Bowes 8.63 Inc Pitney Bowes 8.55 Inc Regions 7.75 Financial Corp Ryder System 9.50 Inc Seagram Co 4.42 Ltd Security 12.50 Pacific Corp Security 7.50 Pacific Corp Service Corp 7.00 International Southtrust 1.62 Corp State Street 7.35 Corp Suntrust 6.00 Banks Inc Triad Systems 14.00 Corp TRW Inc 9.35
Union Carbide 6.79 Corp United 8.50 Dominion Realty Trust Westinghouse 11.88 Electric
200,000 150,000 350,000 500,000 100,000 150,000 100,000 125,000 250,000 150,000 150,000 300,000 100,000 150,000 200,000 71,500 100,000 250,000 150,000
100,000
08/16/ 1996 07/15/ 1986 05/09/ 1988 06/01/ 1996 02/10/ 1988 09/15/ 1989 09/15/ 1994 07/01/ 1985 08/03/ 1988 09/27/ 1984 04/03/ 1986 05/26/ 1995 05/09/ 1995 06/15/ 1996 02/15/ 1996 08/09/ 1989 05/31/ 1990 06/01/ 1995 09/22/ 1994 03/19/ 1984
Put S&P Maturity expiration YTM credit (years) (years) Callability (%) rating 30 15 N 9.37 10
30
7
N
7.07
7
5
2
N
7.30
7
10
6
N
9.39
7
10
5
N
6.91
7
20
10
N
8.70
4
20
10
Y
8.64
4
30
10
N
8.07
7
15
2
C
9.69
7
30
15
N
9.90
7
12
6
C
12.62
3
15
3
C
7.54
3
20
7
N
6.80
9
30
10
N
7.28
8
30
10
N
7.23
5
30
10
N
6.33
7
8
3
C
30
10
N
9.28
30
10
N
6.83 10
30
10
N
8.60
9
12
3
C
13.38
6
13.99 16 7
Amount Issue Name Coupon ($000’s) date Westinghouse 8.88 150,000 05/31/ Electric 1990 Whitman Corp 7.29 100,000 09/19/ 1996 WMX 8.75 250,000 04/29/ Technology 1988 WMX 7.65 150,000 03/15/ Technology 1991 WMX 6.22 150,000 05/09/ Technology 1994 WMX 6.65 200,000 05/16/ Technology 1995 WMX 7.10 450,000 07/31/ Technology 1996 Xerox Corp 11.25 100,000 08/25/ 1983
Put S&P Maturity expiration YTM credit (years) (years) Callability (%) rating 24 4 N 8.84 13 30
8
N
7.26
9
30
5
C
8.79
4
20
3
N
7.71
6
10
3
N
7.41
6
10
5
N
6.43
6
30
7
N
7.16
7
15
3
C
11.44
5
Appendix 2: Sample of Firms Issuing Poison Put Bonds and the Put Bond Characteristics
Name Aar Corp AMR AnheuserBusch Armstrong World Ashland Oil Becton Dickinson Bowater Inc Chrysler Financial Coastal Corporation Consolidated Freightways Corning Inc CPC Intl
Amount Coupon ($million) 9.500 65 9.750 200 8.750 250
Issue date 10/27/89 03/15/90 12/01/89
Years to maturity 12 10 10
S&P credit Callability YTM (%) rating N 9.425 10 N 9.85 7 N 8.804 5
9.750
125
08/18/89
19
N
9.5
5
11.125 9.950
200 100
10/08/87 03/13/89
30 10
C N
10.896 10
7 6
9.000 10.300
300 300
08/02/89 06/15/90
20 2
N N
9.331 10.43
7 11
10.250
200
12/06/89
15
N
10.02
11
9.125
150
08/17/89
10
N
9.202
6
8.750 7.780
100 200
07/13/89 12/15/89
10 15
N N
8.655 7.780
7 8 (continued)
Name Cummins Engine Cyprus Minerals Dresser Industries Eaton Corp Federal Express General American Transportation Georgia Pacific Grumman Corp Harris Corp Harsco International Paper Kerr-Mcgee Knight-Rydder Lockhee Corp Maytag Monsanto Morton International ParkerHannifin Penn Central Penn Central Potlatch Questar Ralston Purina Rite-Aid Rohm And Haas Safety-Kleen Sequa Corp Stanley Works Strawbridge And Clothier Union Camp Unisys
Amount Coupon ($million) 9.750 100
Issue date 03/21/86
S&P Years to credit maturity Callability YTM (%) rating 30 C 10.276 9
10.125
04/11/90
12
N
10.172
10
03/10/89
11
C
9.650
8
9.373
150 68.8
9.000 9.200
100 150
03/18/86 11/15/89
30 5
C N
9.323 9.206
7 9
10.125
115
03/22/90
12
N
10.269
8
10.000 10.375 10.375 8.750 9.700
300 200 150 100 150
06/13/90 01/05/89 11/29/88 05/15/91 03/21/90
7 10 30 5 10
N C C N N
10.053 10.375 10.321 8.924 9.823
9 9 8 8 8
9.750 9.875 9.375 8.875 8.875 9.250
100 200 300 175 100 200
04/01/86 04/21/89 10/15/89 07/10/89 12/15/89 06/01/90
30 20 10 10 20 30
C N N N N N
9.459 10.05 9.329 9.1 8.956 9.358
8 5 7 8 7 5
9.750
100
02/11/91
30
C
9.837
7
9.750 10.875 9.125 9.875 9.250 9.625 9.373
200 150 100 50 200 65 100
08/03/89 05/01/91 12/01/89 06/11/90 10/15/89 09/25/89 11/15/89
10 20 20 30 20 27 30
N N N C N C C
9.358 11.016 9.206 9.930 9.45 9.99 9.618
11 11 8 6 8 6 7
9.250 9.625 8.250 8.750
100 150 75 50
09/11/89 10/15/89 04/02/86 10/24/89
10 10 10 7
N N C C
9.678 9.574 8.174 9.374
9 11 7 8
10.000 10.300
100 300
04/28/89 05/29/90
30 7
C N
10.185 10.794
6 10 (continued)
Amount Name Coupon ($million) United Airlines 12.500 150 United 8.875 300 Technologies VF Corp 9.500 100 Weyerhaeuser 9.250 200
Issue date 06/03/88 11/13/89
S&P credit Years to maturity Callability YTM (%) rating 7 C 12.500 15 30 N 9.052 5
10/15/89 11/15/90
10 5
C N
9.500 9.073
7 6
The S&P ratings are based on a scale from 1 (AAA+) to 24 (Not rated). In the Callability column, C denotes callable bond and N denotes non-callable bond
Appendix 3: Estimating the Standard Abnormal Returns and the White t-Statistic

Fama et al. (1969) introduced the event study methodology when they analyzed the impact of stock dividend announcements upon stock prices. Essentially, they used the Market Model to estimate the stochastic relationship between stock returns and the market portfolio. In particular, we estimate the following regression:

$$R_{i,t} = a_i + b_i R_{m,t} + e_{i,t} \qquad (5.3)$$

We estimate the market model coefficients using the time period that begins 200 trading days before and ends 31 trading days before the event, employing the CRSP value-weighted market index as the benchmark portfolio. We use these coefficients to estimate abnormal returns for days -30 to +30. The abnormal return is defined as

$$AR_{i,t} = R_{i,t} - a_i - b_i R_{m,t} \qquad (5.4)$$

The mean abnormal return for the sample of i firms, $AR_t$, at time t is found by

$$AR_t = \sum_{i=1}^{n} AR_{i,t} / n \qquad (5.5)$$

In order to conduct tests of significance, we must ensure that $AR_{i,t}$ has identical standard deviation. Assuming that the forecast values are normally distributed, we can scale $AR_{i,t}$ by the standard deviation of the prediction, $S_{it}$, given by Eq. 5.4. In particular,

$$S_{it} = \left\{ s_i^2 \left[ 1 + (1/ED) + \left( R_{mt} - \bar{R}_m \right)^2 \Big/ \sum \left( R_{mi} - \bar{R}_m \right)^2 \right] \right\}^{1/2} \qquad (5.6)$$
where $s_i^2$ is the variance of the error term of Eq. 5.3, ED is the number of days used to estimate the market model for firm i, $R_{mt}$ is the return of the market portfolio, and $\bar{R}_m$ is the mean market return in the estimation period. Hence the standardized abnormal return, $SAR_{i,t}$, is equal to $AR_{i,t}/S_{it}$. $SAR_{i,t}$ is distributed normally with a mean of zero and a standard deviation equal to one. The mean standardized abnormal return for time t, $SAR_t$, is the sum of the $SAR_{i,t}$ divided by n. $SAR_t$ is normally distributed with a mean of zero and a standard deviation of the square root of 1/n. The cumulative abnormal return for days 1 to k, $CAR_k$, is the sum of the mean abnormal returns for t = 1 to k. The standardized cumulative mean excess return for the k days after month t = 0, $SCAR_t$, is equal to the sum of $SAR_t$ for days 1 to k. $SCAR_t$ is normally distributed with a standard deviation of the square root of k/n. Please see Campbell et al. (1996) for a detailed discussion of these tests as well as event studies in general.

In our cross-sectional regressions where we regress the CARs on firm-specific variables, we estimate the model using GMM. However, since we are not interested in conditional estimates, our regressors are the instruments. Therefore our parameter estimates are the same as those that would be obtained by OLS. However, we use the White (1980) heteroscedastic-consistent estimator. In the standard regression model,

$$Y = Xb + e \qquad (5.7)$$

the OLS estimator is $b = (X'X)^{-1}(X'y)$ with the covariance matrix

$$\mathrm{Var}(b) = (X'X)^{-1} (X' \Omega X) (X'X)^{-1} \qquad (5.8)$$

When the errors are homoscedastic, $\Omega = s^2 I$ and the variance reduces to $s^2 (X'X)^{-1}$. However, when $\Omega$ is unknown, as shown by White (1980), Eq. 5.8 can be evaluated using a consistent estimator of $\Omega$. White showed that this can be done using the residuals from Eq. 5.7. So when the heteroscedasticity is of unknown form, using

$$(X'X)^{-1} X' \,\mathrm{diag}\!\left(e^2\right) X \,(X'X)^{-1} \qquad (5.9)$$

gives us heteroscedastic-consistent White standard errors and White t-statistics.
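To make these calculations concrete, the following is a minimal sketch, assuming daily firm and market returns are available as pandas Series indexed in event time (day 0 is the announcement day); the function name and window arguments are illustrative rather than taken from the paper.

```python
# Minimal event-study sketch for Eqs. 5.3-5.6, under the stated assumptions.
import numpy as np
import pandas as pd

def market_model_abnormal_returns(firm_ret: pd.Series, mkt_ret: pd.Series,
                                  est_days=range(-200, -30), evt_days=range(-30, 31)):
    # Estimate R_i = a_i + b_i R_m + e_i over the estimation window (Eq. 5.3).
    rm_est = mkt_ret.loc[list(est_days)]
    ri_est = firm_ret.loc[list(est_days)]
    b_i = np.cov(ri_est, rm_est, ddof=1)[0, 1] / np.var(rm_est, ddof=1)
    a_i = ri_est.mean() - b_i * rm_est.mean()
    resid = ri_est - a_i - b_i * rm_est
    s2_i = resid.var(ddof=2)              # residual variance of Eq. 5.3
    ed = len(ri_est)                      # number of estimation days (ED)
    rm_bar = rm_est.mean()
    ssx = ((rm_est - rm_bar) ** 2).sum()

    # Abnormal returns over the event window (Eq. 5.4) and their
    # prediction standard deviations (Eq. 5.6).
    rm_evt = mkt_ret.loc[list(evt_days)]
    ar = firm_ret.loc[list(evt_days)] - a_i - b_i * rm_evt
    s_it = np.sqrt(s2_i * (1.0 + 1.0 / ed + (rm_evt - rm_bar) ** 2 / ssx))
    return ar, ar / s_it                  # AR_{i,t} and SAR_{i,t}

# Aggregation across n firms follows Eq. 5.5 and the text:
# mean AR_t = sum_i AR_{i,t} / n;  SAR_t = sum_i SAR_{i,t} / n ~ N(0, 1/n);
# SCAR over days 1..k = sum_t SAR_t ~ N(0, k/n), so the test statistic is SCAR / sqrt(k/n).
```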
Appendix 4: Generalized Method of Moments (GMM)

GMM is a generalization of the method of moments developed by Hansen (1982). The moment conditions are derived from the model. Suppose $Y_t$ is a multivariate independently and identically distributed (i.i.d.) random variable. The econometric model specifies the relationship between $Y_t$ and the true parameters of the model ($\theta_0$). To use GMM there must exist a function $f(Y_t, \theta_0)$ so that

$$m(\theta_0) \equiv E[f(Y_t; \theta_0)] = 0 \qquad (5.10)$$
In GMM, the theoretical expectations are replaced by sample analogs:

$$g(\theta; Y_t) = \frac{1}{T} \sum_t f(Y_t; \theta) \qquad (5.11)$$

The law of large numbers ensures that the right-hand side of the above equation is the same as

$$E[f(Y_t; \theta_0)] \qquad (5.12)$$
The sample GMM estimator of the parameters may be written as (see Hansen 1982)

$$\hat{\theta} = \arg\min_{\theta} \left[ \frac{1}{T} \sum_t f(Y_t; \theta) \right]' W_T \left[ \frac{1}{T} \sum_t f(Y_t; \theta) \right] \qquad (5.13)$$

So essentially GMM finds the values of the parameters so that the sample moment conditions are satisfied as closely as possible. In our case, for the regression model

$$y_t = X_t' b + e_t \qquad (5.14)$$

the moment conditions include

$$E[(y_t - X_t' b) x_t] = E[e_t x_t] = 0 \quad \text{for all } t \qquad (5.15)$$

So the sample moment condition is

$$\frac{1}{T} \sum_t (y_t - X_t' b) x_t$$

and we want to select b so that this is as close to zero as possible. If we select b as $(X'X)^{-1}(X'y)$, which is the OLS estimator, the moment condition is exactly satisfied. Thus, the GMM estimator reduces to the OLS estimator and this is what we estimate. For our case the instruments used are the same as the independent variables. If, however, there are more moment conditions than parameters, the GMM estimator above weighs them. These are discussed in detail in Greene (2008, Chap. 15). The GMM estimator has the asymptotic variance

$$\left[ X'Z \left( Z' \Omega Z \right)^{-1} Z'X \right]^{-1} \qquad (5.16)$$

In our case Z = X since we use the independent variables as the instruments Z. The White robust covariance matrix may be used for $\Omega$, as discussed in Appendix 3, when heteroscedasticity is present. Using this approach, we estimate GMM with White heteroscedasticity-consistent t-statistics.
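The following is a minimal sketch of this estimation strategy, assuming the cumulative abnormal returns and firm-specific regressors sit in hypothetical arrays `car` and `firm_chars`; because the regressors serve as their own instruments, the exactly identified GMM problem reduces to OLS, and White's (1980) estimator supplies the robust covariance.

```python
# Cross-sectional regression of CARs on firm characteristics with White
# (heteroscedasticity-consistent) standard errors, under the stated assumptions.
import numpy as np
import statsmodels.api as sm

def white_robust_ols(car, firm_chars):
    X = sm.add_constant(np.asarray(firm_chars, dtype=float))
    y = np.asarray(car, dtype=float)

    # OLS coefficients b = (X'X)^{-1} X'y exactly satisfy the sample analog of the
    # moment condition in Eq. 5.15, so this coincides with the GMM estimator here.
    results = sm.OLS(y, X).fit(cov_type="HC0")   # HC0 is White's covariance estimator
    return results

# results.bse and results.tvalues then give the White standard errors and
# White t-statistics referred to in the text.
```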
References Bae, S. C., Klein, D. P., & Padmaraj, R. (1994). Event risk bond covenants, agency costs of debt and equity, and stockholder wealth. Financial Management, 23, 28–41. Bae, S. C., Klein, D. P., & Padmaraj, R. (1997). Firm characteristics and the presence of event risk covenants in bond indentures. Journal of Financial Research, 20, 373–388. Barnea, A., Haugen, R. A., & Senbet, L. W. (1980). A rationale for debt maturity structure and call provisions in the agency theoretic framework. Journal of Finance, 25, 1223–1234. Billett, M. T., King, T. D., & Mauer, D. C. (2007). Growth opportunities and the choice of leverage, debt maturity, and covenants. Journal of Finance, 62, 697–730. Bodie, Z., & Taggart, R. A. (1978). Future investment opportunities and the value of the call provision on a bond. Journal of Finance, 33, 1187–1200. Brennan, M. J., & Schwartz, E. S. (1988). The case for convertibles. Journal of Applied Corporate Finance, 1, 55–64. Brick, I. E., & Palmon, O. (1993). The tax advantages of refunding debt by calling, repurchasing, and putting. Financial Management, 22, 96–106. Brick, I. E., & Wallingford, B. A. (1985). The relative tax benefits of alternative call features in corporate debt. Journal of Financial and Quantitative Analysis, 20, 95–106. Campbell, J., Lo, A. W., & MacKinlay, C. (1996). The econometrics of financial markets. Princeton: Princeton University Press. Chatfield, R. E., & Moyer, R. C. (1986). Putting away bond risk: An empirical examination of the value of the put option on bonds. Financial Management, 15(2), 26–33. Cook, D. O., & Easterwood, J. C. (1994). Poison put bonds: An analysis of their economic role. Journal of Finance, 49, 1905–1920. Crabbe, L. (1991). Event risk: An analysis of losses to bondholders and ‘super poison put’ bond covenants. Journal of Finance, 46, 689–706. Crabbe, L., & Nikoulis, P. (1997). The putable bond market: Structure, historical experience, and strategies. The Journal of Fixed Income, 7, 47–60. Dann, L. Y., & Mikkelson, W. H. (1984). Convertible debt issuance, capital structure change and financing-related information: Some new evidence. Journal of Financial Economics, 13, 157–186. David, A. (2001). Pricing the strategic value of putable securities in liquidity crises. Journal of Financial Economics, 59, 63–99. Eckbo, B. E. (1986). Valuation effects of corporate debt offerings. Journal of Financial Economics, 5, 119–152. Fama, E., Fisher, L., Jensen, M., & Roll, R. (1969). The adjustment of stock prices to new information. International Economic Review, 10, 1–21. Green, R. (1984). Investment incentives, debt, and warrants. Journal of Financial Economics, 13, 115–136. Greene, W. (2008). Econometric analysis. Boston: Pearson. Hansen, L. P. (1982). Large sample properties of generalized method of moments estimators. Econometrica, 50, 1029–1054. Hansen, R. S., & Chaplinsky, S. (1993). Partial anticipation, the flow of information and the economic impact of corporate debt sales. Review of Financial Studies, 6, 709–732. Hansen, R. S., & Crutchley, C. (1990). Corporate earnings and financings: An empirical analysis. Journal of Business, 63, 347–371. Haugen, R. A., & Senbet, L. W. (1978). The insignificance of bankruptcy costs to the theory of optimal capital structure. Journal of Finance, 33, 383–393. Haugen, R. A., & Senbet, L. W. (1981). Resolving the agency problems of external capital through options. Journal of Finance, 26, 629–647. Haugen, R. A., & Senbet, L. W. (1988). 
Bankruptcy and agency costs: Their significance to the theory of optimal capital structure. Journal of Financial and Quantitative Analysis, 23, 27–38.
Jensen, M. C. (1986). Agency costs of free cash flow, corporate finance, and takeovers. American Economic Review, 76, 323–329. Jensen, M. C., & Meckling, W. H. (1976). Theory of the firm: Managerial behavior, agency costs and ownership structure. Journal of Financial Economics, 3, 305–360. Jung, K., Kim, Y., & Stulz, R. M. (1996). Timing, investment opportunities, managerial discretion, and the security issue decision. Journal of Financial Economics, 42, 159–185. Kim, E. H. (1978). A mean-variance theory of optimal capital structure and corporate debt capacity. Journal of Finance, 33, 45–63. Lehn, K., & Poulsen, J. (1989). Free cash flow and stockholders gain in going private transactions. Journal of Finance, 44, 771–788. Lewis, C., Rogalski, R., & Seward, J. K. (1998). Industry conditions, growth opportunities and the market reactions to convertible debt financing decisions. (Unpublished mimeo, Vanderbilt University). Mackie-Mason, J. K. (1990). Do taxes affect corporate financing decisions? Journal of Finance, 45, 1471–1493. Meulbroek, L. K., Mitchell, M. L., Mulherin, J. H., Netter, J. M., & Poulsen, A. B. (1990). Shark repellants and managerial myopia: An empirical test. Journal of Political Economy, 98, 1108–1117. Mikkelson, W. H., & Partch, M. M. (1986). Valuation effects of security offerings and the issuance process. Journal of Financial Economics, 15, 31–60. Miller, M. H. (1977). Debt and taxes. Journal of Finance, 32, 261–275. Miller, M. H., & Rock, K. (1985). Dividend policy under asymmetric information. Journal of Finance, 40, 1031–1051. Modigliani, F., & Miller, M. H. (1963). Corporate income taxes and the cost of capital: A correction. American Economic Review, 63, 433–443. Myers, S. (1977). Determinants of corporate borrowing. Journal of Financial Economics, 5, 147–175. Nash, R. C., Netter, J. M., & Poulsen, A. B. (2003). Determinants of contractual relations between shareholders and bondholders: Investment opportunities and restrictive covenants. Journal of Corporate Finance, 9, 201–232. Ravid, S. A., & Sarig, O. H. (1991). Financial signaling by committing to cash outflows. Journal of Financial and Quantitative Analysis, 26, 165–180. Robbins, E. H., & Schatzberg, J. D. (1986). Callable bonds: A risk-reducing signaling mechanism. Journal of Finance, 41, 935–950. Ross, S. (1977). The determination of financial structure: The incentive signaling approach. Bell Journal of Economics, 8, 23–40. Roth, G., & McDonald, C. G. (1999). Shareholder-management conflict and event risk covenants. Journal of Financial Research, 22, 207–225. Scott, J. (1976). A theory of optimal capital structure. Bell Journal of Economics, 7, 33–54. Shyam-Sunder, L. (1991). The stock price effect of risky versus safe debt. Journal of Financial and Quantitative Analysis, 26, 549–558. Spiegel, Y., & Wilkie, S. (1996). Investment in a new technology as a signal of firm value under regulatory opportunism. Journal of Economics and Management Strategy, 5, 251–276. Stein, J. C. (1988). Takeover threats and managerial myopia. Journal of Political Economy, 96, 61–80. Tewari, M., & Ramanlal, P. (2010). Is the put option in U.S. structured bonds good news for both bondholders and stockholders? International Research Journal of Finance and Economics, 52, 50–59. Wahal, S., & McConnell, J. J. (2000). Do institutional investors exacerbate managerial myopia. Journal of Corporate Finance: Contracting, Governance and Organization, 6, 307–329. White, H. (1980). 
A heteroskedasticity-consistent covariance matrix estimator and a direct test for heteroskedasticity. Econometrica, 48, 817–838.
6 Multi-Risk Premia Model of US Bank Returns: An Integration of CAPM and APT

Suresh Srivastava and Ken Hung
Contents
6.1 Introduction
6.2 Multiple Regression Model of Bank Return
6.2.1 Pricing of Interest Rate Risk
6.3 Multi-Risk Premia Asset-Pricing Model
6.4 Data Description
6.5 Empirical Results
6.5.1 Two-Variable Regression Model and Pricing of Interest Rate Risk
6.5.2 Multi-risk Premia Model
6.6 Conclusions
Appendix 1: An Integration of CAPM and APT
Principal Component Factor Specification
Appendix 2: Interest Rate Innovations
Orthogonalization Procedure
Univariate ARMA Model
Vector ARMA Model
References
Abstract
Interest rate sensitivity of bank stock returns has been studied using an augmented CAPM, a multiple regression model with market returns and interest rate as independent variables. In this paper, we test an asset-pricing model in which the CAPM is augmented by three orthogonal factors which are proxies for the innovations in inflation, maturity risk, and default risk. The model proposed
S. Srivastava (*)
University of Alaska Anchorage, Anchorage, AK, USA
e-mail: [email protected]
K. Hung
Division of International Banking & Finance Studies, Texas A&M International University, Laredo, TX, USA
e-mail: [email protected]
C.-F. Lee, J. Lee (eds.), Handbook of Financial Econometrics and Statistics, DOI 10.1007/978-1-4614-7750-1_6, © Springer Science+Business Media New York 2015
is an integration of CAPM and APT. The results of the two models are compared to shed light on sources of interest rate risk. Our results using the integrated model indicate the inflation beta to be statistically significant. Hence, innovations in short-term interest rates contain valuable information regarding inflation premium; as a result the interest rate risk is priced with respect to the short-term interest rates. Further, it also indicates that innovations in long-term interest rates contain valuable information regarding maturity premium. Consequently the interest rate risk is priced with respect to the long-term interest rates. Using the traditional augmented CAPM, our investigation of the pricing of the interest rate risk is inconclusive. It shows that interest rate risk was priced from 1979 to 1984 irrespective of the choice of interest rate variable. However, during the periods 1974–1978 and 1985–1990, bank stock returns were sensitive only to the innovations in the long-term interest rates. Keywords
CAPM • APT • Bank stock return • Interest rate risk • Orthogonal factors • Multiple regression
6.1 Introduction
Interest rate sensitivity of commercial bank stock returns has been the subject of considerable academic research. Stone (1974) proposed a multiple regression model incorporating both the market return and interest rate variables as return-generating independent variables. While some studies have found the interest rate variable to be an important determinant of common stock returns of banks (Fama and Schwert 1977; Lynge and Zumwalt 1980; Christie 1981; Flannery and James 1984; Booth and Officer 1985), others have found the returns to be insensitive (Chance and Lane 1980) or only marginally explained by the interest rate factor (Lloyd and Shick 1977). A review of the early literature can be found in Unal and Kane (1988). Sweeney and Warga (1986) used the APT framework and concluded that the interest rate risk premium exists but varies over time. Flannery et al. (1997) tested a two-factor model for a broad class of security returns and found the effect of interest rate risk on security returns to be rather weak. Bae (1990) examined the interest rate sensitivity of depository and nondepository firms using three different maturity interest rate indices. His results indicate that depository institutions' stocks are sensitive to actual and unexpected interest rate changes, and the sensitivity increases for longer-maturity interest rate variables. Song (1994) examined the two-factor model using time-varying betas. His results show that both market beta and interest rate beta varied over the period 1977–1987. Yourougou (1990) found the interest rate risk to be high during a period of great interest rate volatility (post-October 1979) but low during a period of stable interest rates (pre-October 1979). Choi et al. (1992) tested a three-factor model of bank stock returns using market, interest, and exchange rate variables. Their findings about interest rate risk are consistent with the observations of Yourougou (1990).
The issue of interest rate sensitivity remains empirically unresolved. Most of the studies use a variety of short-term and long-term bond returns as the interest rate factor without providing any rationale for their use. The choice of bond market index seems to affect the pricing of the interest rate risk. Yet, there is no consensus on the choice of the interest rate factor that should be used in testing the two-factor model. In this paper, we provide a plausible explanation of why pricing of interest rate risk differs with the choice of interest rate variable. We also suggest a hybrid return-generating model for bank stock returns in which the CAPM is augmented by three APT-type factors to account for unexpected changes in the inflation premium, the maturity-risk premium, and the default-risk premium. The use of three additional factors provides a better understanding of the interest rate sensitivity and offers a plausible explanation for the time-varying interest rate risk observed by other investigators. Our empirical investigation covers three distinct economic and bank regulatory environments: (1) 1974–1978, a period of increasing but only moderately volatile interest rates in a highly regulated banking environment; (2) 1979–1984, a period characterized by a high level of interest rates with high volatility, in which there was gradual deregulation of the banking industry; and (3) 1985–1990, a low interest rate and low-volatility period during which many regulatory changes were made in response to enormous bank loan losses and bankruptcies. The results of the multifactor asset-pricing model are compared with those from the two-factor model in order to explain the time-varying interest rate risk. The rest of this paper is divided into five sections. In Sect. 6.2, we describe the two-factor model of the bank stock return and the pricing of the interest rate risk. The multi-risk premia model and the specification of the factors are discussed in Sect. 6.3. The data for this analysis is described in Sect. 6.4. Section 6.5 presents empirical results, and Sect. 6.6 concludes the paper.
6.2 Multiple Regression Model of Bank Return
Stone (1974) proposed the following two-variable bank stock return-generating model:

$$R_{jt} = a_j + b_{1j} R_{mt} + b_{2j} R_{It} + e_{jt} \qquad (6.1)$$
where Rjt is the bank common stock return, Rmt is the market return, and RIt is the innovation in the interest rate variable. Coefficients aj and b1j are analogous to the alpha and beta coefficients of the market model, and b2j represents interest rate risk. Since then, numerous researchers have studied the pricing of interest rate risk with varying results. While Stone (1974) and others did not place an a priori restriction on the sign of b2j, the nominal contracting hypothesis implies that it should be positive. This is because the maturity of bank assets is typically longer than
that of liabilities.1 Support for this hypothesis was found by Flannery and James (1984) but not by French et al. (1983). An important issue in the empirical investigation of the two-factor model is the specification of an appropriate interest rate factor. Theoretical consideration of factor analysis requires that the two factors, Rmt and RIt, be orthogonal, whereby the choice of the second factor (RIt) would not influence the first factor loading (b1j). The resolution of this constraint requires a robust technique for determining the unexpected changes in the interest rate that are uncorrelated with the market return. There are three approaches to specify the interest rate factor (Appendix 2). In the first approach, the expected change in the interest rate is estimated using the high correlation between the observed interest rate and the market rate. The residual (the difference between the observed and estimated rates) is used as the interest rate factor. The second approach is to identify and estimate a univariate ARMA model for the interest rate variable and use the residuals from the ARMA model as the second factor. In the third approach, the interest rate variable (RIt) and the market return (Rmt) are treated as the components of a bivariate vector, which is modeled as a vector ARMA process. The estimated model provides the unanticipated change in the interest rate variable to be used as the second factor in the augmented CAPM, Eq. 6.1. Srivastava et al. (1999) discuss the alternate ways of specifying the innovations in the interest rate variable and their influence on the pricing of the interest rate risk. In this paper, the error term from the regression of interest rates on market returns is used as the orthogonal interest rate factor in Eq. 6.1.
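As an illustration of the first (regression-based) approach and of Eq. 6.1, the sketch below regresses a bond-index return on the market return and uses the residual as the orthogonal interest rate factor; the series names are assumptions made for the example, not the chapter's data set.

```python
# Two-step estimation of the two-factor model with an orthogonalized interest
# rate factor, assuming `bank_ret`, `mkt_ret`, and `bond_ret` are aligned numpy
# arrays of monthly bank-portfolio, market, and bond-index returns.
import numpy as np
import statsmodels.api as sm

def two_factor_model(bank_ret, mkt_ret, bond_ret):
    # Step 1: regress the interest rate variable on the market return; the
    # residual is the innovation R_I, orthogonal to R_m by construction.
    first_stage = sm.OLS(bond_ret, sm.add_constant(mkt_ret)).fit()
    r_i = first_stage.resid

    # Step 2: estimate Eq. 6.1, R_j = a_j + b_1j R_m + b_2j R_I + e_j.
    X = sm.add_constant(np.column_stack([mkt_ret, r_i]))
    return sm.OLS(bank_ret, X).fit()

# Because R_I is orthogonal to R_m, adding it leaves the market beta b_1j
# essentially unchanged, which is the property the text requires.
```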
6.2.1 Pricing of Interest Rate Risk
In addition to changes in the level of expected or unexpected inflation, changes in other economic conditions produce effects on interest rate risk. For example, according to the intertemporal model of the capital market (Merton 1973; Cox et al. 1985), a change in interest rates alters the future investment opportunity set; as a result, investors require additional compensation for bearing the risk of such changes. Similarly, changes in the investor's degree of risk aversion, default risk, or maturity risk of bank financial assets cause additional shifts in the future investment opportunities for the bank stockholders. The specific choice of the bond market index for the two-variable model determines what unexpected change is captured by the coefficient b2j. The nominal return on a debt security, R, is expressed as

$$R = R_{rf} + MRP + DRP + LP \qquad (6.2)$$
where Rrf is the real risk-free rate plus an inflation premium, DRP is the default-risk premium, MRP is the maturity-risk premium, and LP is the liquidity-risk premium.
1 The sign of b2j is negative when changes in bond yields and not the bond market return are used as the interest rate variable (see Sweeney and Warga 1986).
A change in nominal return consists of changes in the risk-free, liquidity-risk, default-risk, and maturity-risk rates. A change in the short-term risk-free rate can be attributed to changes in the real rate or short-term inflation. A sizeable change in the real rate takes place over a longer time horizon. Therefore, it should not significantly impact monthly stock returns and cause the interest rate risk.2 However, Fama and Gibbons (1982) suggested that changes in the real rate may cause changes in the magnitude of interest rate risk from period to period. Fama and Schwert (1977) argued that, in equilibrium, the risk premium in the short-term interest rate is compensation to the investor for changes in the level of expected inflation. However, French et al. (1983) pointed out that the interest rate risk is due to the unexpected changes in inflation. More specifically, the interest rate risk can be viewed as the compensation for expected or unexpected changes in the level of short-term inflation. Maturity risk is associated with changes in the slope of the yield curve that are caused by a number of economic factors, such as supply and demand of long-term credit, long-term inflation, and risk aversion. Estimates of b2j using short-term T-bill returns as the second factor account for the changes in short-term inflation, whereas estimates of b2j using long-term T-note returns account for the unexpected changes in long-term inflation as well as maturity risk. The inflation expectation horizon approximates that of the maturity of the debt security. Estimating coefficient b2j using BAA bond returns as the interest rate factor explains the fluctuations in bank stock return due to unexpected changes in long-term inflation, maturity-risk premium, default-risk premium, or a combination of risk premia.3 As the risk premia in Eq. 6.2 are additive, the magnitude of b2j depends on the choice of the interest rate index. A priori expectations about the relative size of b2j are:

$$b_{2j}\{\text{6-month T-bill}\} > b_{2j}\{\text{3-month T-bill}\}$$
$$b_{2j}\{\text{7-year T-note}\} > b_{2j}\{\text{6-month T-bill}\} \qquad (6.3)$$
$$b_{2j}\{\text{BAA-rated bond}\} > b_{2j}\{\text{7-year T-note}\}$$

where the debt security within the brackets identifies the choice of the bond market index. A number of researchers have indicated that bank stock returns are sensitive to unexpected changes in long-term but not short-term interest rates. This observation is consistent with the expectations expressed in Eq. 6.3. However, we are unable to isolate and identify which component of the interest rate risk is being
2 Federal Reserve's intervention changes the short-term interest rates. These rate changes take place after considerable deliberation and are often anticipated by financial markets. Hence, they neither generate interest rate innovations nor produce interest rate risk.
3 The liquidity premium is ignored in our discussion and subsequent model construction because our sample does not include financially distressed banks, so there are insufficient variations in the liquidity premium.
priced when a long-term bond return is used. It is feasible to observe a significant change in stock returns due to unexpected changes in the default-risk premium (or maturity-risk premium), whether or not the nominal contracting hypothesis is valid. The two-factor model can ascertain the pricing of interest rate risk without identifying the source. Commercial banks have a variety of nominal assets and liabilities with different sensitivities to unexpected changes in short-term inflation, maturity risk, and default risk. In the next section, we propose an asset-pricing model in which the interest rate factor of the two-factor model is replaced by three factors, each of which represents a different risk component of the bond return.4
6.3 Multi-Risk Premia Asset-Pricing Model
We propose a hybrid asset-pricing model to investigate the interest rate sensitivity of bank stock returns.5 The traditional CAPM is augmented by three additional APT-type factors to account for unexpected changes in the inflation premium, the maturity-risk premium, and the default-risk rating. Hence, the proposed return-generating model is written as

$$R_{jt} = a_j + b_{1j} R_{mt} + b_{2Pj} DP_t + b_{2Mj} DMR_t + b_{2Dj} DDR_t + e_{jt} \qquad (6.4)$$
where DPt, DMRt, and DDRt are the proxies for the innovations in inflation, maturity risk, and default risk, respectively (specification of these principal component factors consistent with APT is discussed in the Appendix). Coefficients b1, b2P, b2M, and b2D are the measures of market risk, inflation risk, maturity risk, and default risk, respectively. The expected return is given by the CAPM:

$$E(R_j) = a_j + b_{1j} E(R_m) \qquad (6.5)$$
However, systematic risk is determined by the integrated model:

$$\text{Total systematic risk} = \text{market risk} + \text{inflation risk} + \text{maturity risk} + \text{default risk} \qquad (6.6)$$

4 An appropriate interest rate variable that should be used to examine the pricing of the interest rate risk in the two-variable framework is not easily available. However, one could construct an index composed of long-term US bonds and corporate bonds with duration equal to the net duration of bank assets and appropriate default risk. This interest rate variable would identify the true pricing of the interest rate risk.
5 This model's conceptual framework is provided by Chen et al. (1986). However, their factors are not orthogonal.
The three APT-type factors have managerial implications regarding the bank's asset and liability management. The first factor, DPt, has implications pertaining to the management of the treasury securities portfolio held by the bank. The value of this portfolio is sensitive to the changes in the short-term interest rate. In addition, these changes impact the short-maturity funding gap. The null hypothesis b2P = 0 implies that management has correctly anticipated future changes in short-term inflation and has taken steps to correctly hedge the pricing risk through the use of derivative contracts and minimize the short-maturity funding gap. The second factor, DMRt, has implications regarding the bank's long-term assets and liabilities. The market value of a bank's net worth is very sensitive to changes in the slope of the term structure. The null hypothesis b2M = 0 conjectures that management has correctly anticipated future changes in the slope of the term structure and has immunized the institution's net worth by a sensible allocation of assets in the loan portfolio, sale of loans to the secondary markets, securitization of loans, and the use of derivative contracts. The third factor, DDRt, relates to the management of loan losses and the overall default-risk rating of bank assets. The null hypothesis b2D = 0 infers that management has correctly anticipated future loan losses due to changes in exogenous conditions, and subsequent loan losses, however large, will not adversely affect the stock returns.
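A simplified sketch of estimating Eq. 6.4 is given below. The residual-based construction of DPt, DMRt, and DDRt used here only approximates the chapter's factor specification (which is detailed in the appendix and estimated by GLS in Tables 6.3, 6.4, and 6.5); all series names are illustrative assumptions.

```python
# Estimating the multi-risk premia model (Eq. 6.4) from hypothetical monthly
# series: bank_ret (bank portfolio), mkt_ret (market), tbill_ret (short-term
# T-bill return), tnote_ret (long-term T-note return), and baa_ret (BAA return).
import numpy as np
import statsmodels.api as sm

def resid(y, X):
    """Residuals from an OLS regression of y on a constant and X (the 'innovation')."""
    return sm.OLS(y, sm.add_constant(X)).fit().resid

def multi_risk_premia_model(bank_ret, mkt_ret, tbill_ret, tnote_ret, baa_ret):
    dp  = resid(tbill_ret, mkt_ret)     # proxy for inflation innovations (DP_t)
    dmr = resid(tnote_ret, tbill_ret)   # proxy for maturity-risk innovations (DMR_t)
    ddr = resid(baa_ret, tnote_ret)     # proxy for default-risk innovations (DDR_t)

    # Eq. 6.4: R_j = a_j + b_1j R_m + b_2Pj DP + b_2Mj DMR + b_2Dj DDR + e_j
    X = sm.add_constant(np.column_stack([mkt_ret, dp, dmr, ddr]))
    return sm.OLS(bank_ret, X).fit()
```

The successive-residual construction keeps the three added factors (approximately) orthogonal to the market return and to each other, which is the property the null hypotheses above rely on.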
6.4 Data Description
Month-end yields for 3-month T-bill, 6-month T-bill, 1-year T-note, 7-year T-note, and BAA corporate bonds, for the period January 1974 to December 1990, were obtained, and monthly returns were calculated.6 Return on the CRSP equally weighted index of NYSE stocks was used as the market return. Month-end closing prices and dividends for a sample of 88 banks were obtained from Compustat's Price, Dividend, and Earning (PDE) data tape, and monthly returns were calculated. Availability of continuous data was the sole sample selection criterion. This selection criterion does introduce a survivorship bias. However, it was correctly pointed out by Elyasiani and Iqbal (1998) that the magnitude of this bias could be small and not affect the pricing of interest rate risk. Equally weighted portfolios of bank returns were calculated for this study. The total observation period is divided into three contrasting economic and bank regulatory periods: (1) an increasing but moderately volatile interest rate period from January 1974 to December 1978 in a highly regulated environment; (2) a high interest rate and high-volatility period from January 1979 to December 1984, during which there was gradual deregulation of the industry; and (3) a low interest rate and low-volatility period from January 1985 to December 1990, during which many regulatory changes were made in response to bank loan loss problems. The descriptive statistics of the sample are
6 The return on a bond index is calculated from the yield series as $R_{It} = (Y_{It} - Y_{I,t-1})/Y_{It}$, where $Y_{It}$ is the bond index yield at time t.
Table 6.1 Summary statistics of portfolio returns and interest rate yields

                          1974–1978         1979–1984        1985–1990
Bank portfolio return     9.66%a (5.95%)b   22.28% (4.93%)   6.48% (6.17%)
Portfolio betac           0.7091            0.7259           1.0102
CRSP market return        22.05% (7.63%)    22.05% (5.39%)   8.58% (5.36%)
3-month T-bill yield      6.21% (1.27%)     10.71% (2.41%)   6.92% (1.00%)
6-month T-bill yield      6.48% (1.27%)     10.80% (2.21%)   7.02% (0.95%)
1-year T-note yield       7.05% (1.25%)     11.70% (2.27%)   7.62% (1.00%)
7-year T-note yield       7.23% (0.54%)     11.91% (1.79%)   8.67% (1.09%)
BAA bond yield            9.66% (0.68%)     14.04% (1.97%)   10.84% (0.98%)
Monthly inflationd        0.78% (0.69%)     0.46% (0.50%)    0.25% (0.23%)
Maturity-risk premiume    0.68% (0.88%)     0.20% (1.32%)    1.05% (0.72%)
Default-risk premiumf     1.94% (0.77%)     2.13% (0.76%)    2.17% (0.46%)

a Average returns and yields are reported on annualized basis
b Standard deviation is in parenthesis
c Estimated using single-index market model
d Measured by the relative change in consumer price index
e Yield differential between 7-year T-note and 1-year T-note
f Yield differential between BAA-rated corporate bond and 7-year T-note
summarized in Table 6.1. The average monthly inflation, as measured by the relative change in the consumer price index, was highest during 1974–1978 and lowest during 1985–1990. The average default-risk premium was of the same order of magnitude for all three periods. The average maturity-risk premium was high during 1985–1990, indicating a steeper yield curve. For the period 1979–1984, the average maturity-risk premium was low, but its standard deviation was high. This indicates a relatively less steep but volatile yield curve. The bank portfolio's average return (9.66%) for the period 1974–1978 was much smaller than the average market return (22.05%). For the period 1979–1984, the portfolio's average return rose dramatically to 22.28%, about the same as the average market return (22.05%). For the period 1985–1990, the portfolio's and market's average returns both dropped dramatically, to 6.48% and 8.58%, respectively. The estimated portfolio beta increased from about 0.7 in the two earlier periods to about 1.0 in the latest period.
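As a small illustration of the return construction described above, the sketch below computes monthly total returns from month-end prices and dividends and forms an equally weighted bank portfolio; the DataFrame names are assumptions, not the layout of the Compustat PDE tape.

```python
# Monthly total returns and an equally weighted bank portfolio, assuming
# `prices` and `dividends` are pandas DataFrames of month-end closing prices
# and dividends (rows = months, columns = banks).
import pandas as pd

def monthly_total_returns(prices: pd.DataFrame, dividends: pd.DataFrame) -> pd.DataFrame:
    # R_t = (P_t + D_t - P_{t-1}) / P_{t-1}
    return (prices + dividends - prices.shift(1)) / prices.shift(1)

def equally_weighted_portfolio(returns: pd.DataFrame) -> pd.Series:
    # Cross-sectional mean each month, i.e., an equally weighted portfolio
    # rebalanced monthly across the sample banks.
    return returns.mean(axis=1)
```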
6.5 Empirical Results

6.5.1 Two-Variable Regression Model and Pricing of Interest Rate Risk
The estimated coefficients of the two-variable regression model are presented in Table 6.2.7 For the period 1974–1978, a period characterized by low and stable interest rates, the interest rate beta was statistically insignificant with all the interest rate factors except the 7-year T-note returns.

Table 6.2 Two-variable model of the bank stock returns: $R_{jt} = a_j + b_{1j}R_{mt} + b_{2j}R_{It} + e_{jt}$

Interest rate variablea        3-month T-bill   6-month T-bill   1-year T-note    7-year T-note    BAA bond
1974–1978
  Constant, a                  0.0046 (0.17)    0.0045 (0.18)b   0.0043 (0.16)    0.0033 (0.14)    0.0042 (0.16)
  Market beta, b1              0.6921 (15.05)   0.6886 (14.67)   0.6897 (15.17)   0.6845 (16.47)   0.6987 (15.76)
  Interest rate beta, b2       0.0566 (1.01)    0.0608 (1.07)    0.0700 (1.24)    0.2893 (2.77)    0.2248 (0.93)
  R2                           0.828            0.828            0.830            0.845            0.828
  F-statistic                  137.3            137.7            138.8            156.3            136.9
1979–1984
  Constant, a                  0.0052 (1.49)    0.0052 (1.53)    0.0052 (1.57)    0.0052 (1.70)    0.0052 (1.62)
  Market beta, b1              0.7261 (11.73)   0.7258 (12.01)   0.7262 (12.37)   0.7262 (13.33)   0.7264 (12.74)
  Interest rate beta, b2       0.1113 (3.47)    0.1425 (3.99)    0.1714 (4.41)    0.3582 (5.97)    0.6083 (5.15)
  R2                           0.684            0.699            0.716            0.755            0.732
  F-statistic                  74.8             80.1             87.2             106.6            94.4
1985–1990
  Constant, a                  0.0018 (0.51)    0.0018 (0.51)    0.0018 (0.52)    0.0018 (0.53)    0.0018 (0.52)
  Market beta, b1              1.010 (15.31)    1.010 (15.30)    1.010 (15.45)    1.010 (15.70)    1.010 (15.57)
  Interest rate beta, b2       0.0508 (0.51)    0.0374 (0.40)    0.1036 (1.22)    0.1752 (1.95)    0.2741 (1.62)
  R2                           0.773            0.772            0.777            0.794            0.780
  F-statistic                  117.3            117.1            120.0            125.1            122.5

a The error term from the regression of interest rate on market return is used as the appropriate orthogonal interest rate variable
b t-statistics are in the parenthesis

7 Test of the two-variable model for the period 1991–2007 indicated that interest rate risk is not priced. Tables can be provided to interested readers.

For the period 1979–1984,
the interest rate beta was statistically significant with all the interest rate factors, and its magnitude varied substantially. This period was characterized by a relatively flat but volatile yield curve. For the period 1985–1990, the interest rate beta was statistically significant only with the 7-year T-note returns as the interest rate factor. In general, the estimated value of b2j for the periods 1974–1978 and 1985–1990 is smaller and less statistically significant than the value for the period 1979–1984. The magnitude of the interest rate beta, whether significant or not, varies with the choice of the interest rate variable. The size of b2j increases with the choice of securities in the following order: 3-month T-bill, 6-month T-bill, 1-year T-note, 7-year T-note, and BAA-rated bond (except b2j estimated using BAA bond returns for the period 1974–1978 and the 6-month T-bill returns in 1985–1990). This observation validates inequality (6.3) in all the periods and suggests the expanding nature of the investment opportunity set with increased horizon (Merton 1973). As stated earlier, the difference between b2j estimated using 7-year T-note returns and that using 3-month T-bill returns measures the effect on the bank stock returns due to the unexpected changes in the maturity-risk premium. Further, the difference between b2j estimated using BAA bond returns and that using 7-year T-note returns measures the effect on the bank stock returns due to the unexpected changes in the default-risk premium. The fact that bank stock returns are more sensitive to the long-term interest rates than to short-term interest rates is consistent with our expectation about the size of b2 expressed in inequalities (6.3). Similar results were reported by other researchers (such as Unal and Kane (1988) and Chen et al. (1986)). A shift of focus from short-term to long-term inflation expectation could explain this result. An alternative explanation is that bank balance sheet returns are better approximated by long-term than by short-term bond returns. To the extent that balance sheet maturity mismatches occur, they should be related to long-term bond returns. The reason is simply that long-term bond returns include the present value of more future period returns than do short-term bond returns. That is, long-term bond returns include price changes not included in short-term bond returns. The price changes in the long-term bond returns represent price changes in long-term bank contracts. The most representative term equals the term of assets or liabilities, whichever is longer. If the maturity mismatch is a net asset (liability) position, then the long-term bond maturity would reflect the asset (liability) maturity. The estimate of a large, positive coefficient (b2j) for 7-year T-notes and BAA bonds implies that banks mismatch in favor of assets with relatively long maturities. The plausible causes for the change in the interest rate risk from period to period are (1) changes in real returns, as reported by Fama and Gibbons (1982), (2) unexpected changes in short-term and long-term inflation expectation, (3) a shift of focus from short-term to long-term inflation expectation, (4) unexpected changes in risk aversion, (5) unexpected changes in the default risk of banks' nominal contracts, and (6) structural instability of the systematic interest rate equation used to extract interest rate innovations. The estimated coefficients of the multi-risk premia model will shed some light on this issue.
6.5.2 Multi-risk Premia Model
The estimates of the first systematic interest rate Eq. 6.11, which specifies innovations in the short-term default-free bond returns, are reported in Table 6.3. The ordinary least square (OLS) estimates were rejected because of the presence of serial autocorrelation as indicated by the Durbin-Watson statistic (see Wooldridge 2009). A Durbin-Watson statistic equal to 2 indicates the absence of any serial autocorrelation. The generalized least square (GLS) estimates seemed more appropriate because the Durbin-Watson statistic was approximately equal to 2 and the value of R2 was higher. Results reported in Table 6.3 exhibit a significant correlation between the market return and the short-term interest rate variable in the period 1974–1978, but not in the periods 1979–1984 and 1985–1990. However, a low value of R2 indicates that the relationship expressed in Eq. 6.4 is not as robust as one would have preferred. This approach of extracting changes in short-term inflation was used by Fama and Schwert (1977) and French et al. (1983). Their estimation period overlapped our estimation period 1974–1978 but not the later ones. In Table 6.4 we present the results of the second systematic interest rate Eqs. 6.12a and 6.12b. The estimated coefficient, y1, determined the average slope of the yield curve during the estimation period. The OLS estimates were rejected because of the presence of serial correlation indicated by the Durbin-Watson statistics. The term structure coefficient y1 was significant in all the periods for Specifications a and b. However, coefficient y2 of Specification b was
Table 6.3 Estimating a proxy for ex-post unexpected inflation: $R_{ST,t} = d_0 + d_1 R_{mt} + e_t$

Estimated coefficients                 1974–1978        1979–1984        1985–1990
Ordinary least square
  Constant, d0                         0.0068 (0.87)a   0.0070 (0.45)    0.0037 (0.89)
  Market linkage coefficient, d1b      0.2987 (2.97)    0.5791 (2.19)    0.0891 (1.13)
  R2                                   0.132            0.076            0.018
  Durbin-Watson statistic              1.90             1.19             1.12
Generalized least square
  Constant, d0                         0.0075 (0.82)    0.0013 (0.08)    0.0044 (0.83)
  Market linkage coefficient, d1b      0.3173 (3.16)    0.3136 (1.44)    0.0806 (1.15)
  R2                                   0.145            0.306            0.242
  Durbin-Watson statistic              1.99             1.97             1.92

Ordinary least square estimates were rejected because of the presence of serial correlation indicated by the Durbin-Watson statistic. RST,t and Rmt are 3-month T-bill and CRSP equally weighted market returns.
a t-statistics are in the parenthesis
b Coefficient d1 measures the stock–bond market linkage
Table 6.4 Estimating alternate proxies for maturity-risk premia: Specification a: $R_{LT,t} = y_0 + y_1 R_{ST,t} + e_t$; Specification b: $R_{LT,t} = y_0 + y_1 R_{ST,t} + y_2 R_{mt} + e_t$

                                          1974–1978        1979–1984        1985–1990
Specification a: generalized least square
  Constant, y0                            0.0028 (1.02)a   0.0053 (2.11)    0.0017 (0.66)
  Term structure coefficient, y1b         0.3988 (10.30)   0.5201 (16.29)   0.8183 (14.96)
  R2                                      0.675            0.846            0.820
  Durbin-Watson statistic                 1.98             1.83             1.84
Specification b: generalized least square
  Constant, y0                            0.0024 (0.84)    0.0064 (2.47)    0.0011 (0.48)
  Term structure coefficient, y1b         0.4070 (10.02)   0.5122 (15.78)   0.8105 (15.34)
  Market linkage coefficient, y2c         0.0216 (0.68)    0.0576 (1.07)    0.0944 (2.56)
  R2                                      0.677            0.849            0.835
  Durbin-Watson statistic                 1.98             1.82             1.80

Ordinary least square estimates were rejected because of the presence of serial correlation indicated by the Durbin-Watson statistic. RLT,t and RST,t are 7-year T-note and 1-year T-note returns.
a t-statistics are in the parenthesis
b Coefficient y1 measures the average slope of the yield curve
c Coefficient y2 accounts for the stock–bond market linkage
significant only during 1985–1990. The errors from Specification a for periods 1974–1978 and 1979–1984 and from Specification b for the period 1985–1990 were used to specify the second risk factor DMRt. Estimates of the third systematic interest rate Eqs. 6.13a and 6.13b are presented in Table 6.5. As before, the GLS estimates were deemed appropriate. The errors from Specification a for the periods 1974–1978 and 1979–1984 and from Specification b for the period 1985–1990 were used to specify the unexpected change in default risk, DDRt. Results of the multi-risk premia model are presented in Table 6.6. The inflation beta is statistically significant in the periods 1979–1984 and 1985–1990 but not in the period 1974–1978. It was shown in Table 6.3 that quasi-differenced short-term interest rates were correlated with the market return for the period 1974–1978 but not for the periods 1979–1984 and 1985–1990. Hence, one could argue that when short-term interest rates are correlated with the market return (i.e., 1974–1978), the error term from Eq. 6.8 contains no systematic information. This results in the inflation beta being insignificant and the interest rate risk not priced with respect to the short-term rates within the context of the two-factor model. A corollary is that, when short-term interest rates are uncorrelated with the market return (i.e., 1979–1984 and 1985–1990), the error term from Eq. 6.11 contains valuable information leading to the inflation beta being significant. The maturity beta was found
Table 6.5 Estimating alternate proxies for the default-risk premia: Specification a: $R_{BAA,t} = j_0 + j_1 R_{LT,t} + e_t$; Specification b: $R_{BAA,t} = j_0 + j_1 R_{LT,t} + j_2 R_{mt} + e_t$

                                          1974–1978        1979–1984        1985–1990
Specification a: generalized least square
  Constant, j0                            0.0021 (0.60)a   0.0039 (1.12)    0.0010 (0.66)
  Default-risk coefficient, j1b           0.0842 (2.27)    0.4129 (12.27)   0.4734 (13.47)
  R2                                      0.553            0.800            0.772
  Durbin-Watson statistic                 1.88             1.97             1.99
Specification b: generalized least square
  Constant, j0                            0.0019 (0.54)    0.0051 (1.41)    0.0006 (0.48)
  Default-risk coefficient, j1b           0.0861 (2.30)    0.3976 (11.35)   0.4637 (13.83)
  Market linkage coefficient, j2c         0.0066 (0.49)    0.0491 (1.49)    0.0577 (2.39)
  R2                                      0.555            0.806            0.785
  Durbin-Watson statistic                 1.87             1.97             2.00

Ordinary least square estimates were rejected because of the presence of serial correlation indicated by the Durbin-Watson statistic. RBAA,t and RLT,t are BAA corporate bond and 7-year T-note returns.
a t-statistics are in the parenthesis
b Coefficient j1 measures the average yield differential between RBAA,t and RLT,t
c Coefficient j2 accounts for the stock–bond market linkage
to be statistically significant in the periods 1974–1978 and 1979–1984 but not for the period 1985–1990. Results in Table 6.4 (Specification b) showed that long-term interest rates were correlated with the market return for the period 1985–1990 but not for the periods 1974–1978 and 1979–1984. Hence, we posit that when long-term interest rates are correlated with the market return (i.e., 1985–1990), the error term from Eq. 6.12b contains no systematic information. This results in the maturity beta being insignificant. A corollary is that, when long-term interest rates are uncorrelated with the market return (i.e., 1974–1978 and 1979–1984), the error term from Eq. 6.12b contains valuable information producing a significant maturity beta, and the interest rate risk is priced with respect to the long-term rates within the context of the two-factor model. The default beta was found to be statistically significant in the period 1985–1990 but not for the periods 1974–1978 and 1979–1984. The economic factors that lead to a significant correlation between market returns and long-term interest rates (Eq. 6.12b) or between market returns and BAA-rated bond returns (Eq. 6.13b) caused the interest rate risk to be priced with respect to the long-term rates within the context of the two-factor model (1985–1990). Since the correlation between the market return and interest rates changes over time, the interest rate risk also changes over time.
Table 6.6 Multi-risk premia model of the bank stock returns: $R_{jt} = a_{0j} + b_{1j}R_{mt} + b_{2Pj}DP_t + b_{2Mj}DMR_t + b_{2Dj}DDR_t + e_{jt}$

Estimated coefficientsa          1974–1978        1979–1984        1985–1990
Constant, a0                     0.0056 (1.84)b   0.0041 (1.04)    0.0047 (1.31)
Market beta, b2P                 0.7397 (16.94)   0.6833 (10.55)   0.7185 (17.55)
Inflation beta, b2P              0.0698 (1.27)    0.1292 (3.45)    -0.2157 (2.29)
Maturity beta, b2M               0.5350 (2.87)    0.4881 (2.93)    -0.2476 (1.38)
Default beta, b2D                0.1973 (0.56)    0.1153 (0.43)    0.5696 (1.94)
R2                               0.849            0.754            0.851
Durbin-Watson statistic          2.00             2.04             1.98

Ordinary least square estimates were rejected because of the presence of serial correlation indicated by the Durbin-Watson statistic. Portfolio betas estimated using the single-index model are 0.7091, 0.7259, and 1.0102 for the periods 1974–1978, 1979–1984, and 1985–1990, respectively.
a Generalized least square estimates
b t-statistics are in the parenthesis
Negative inflation and maturity betas for the period 1985–1990 need some explanation because it is contrary to a priori expectation. For the period 1985–1990, the bank portfolio and market return both dropped dramatically to 6.48 % and 8.58 %, respectively. However, the estimated portfolio beta increased from about 0.7 in the two earlier periods to about 1.0 in this period (beta estimated independently by the single-index model). Consequently, the market factor alone will overestimate the bank portfolio’s expected return. The negative values of inflation beta and maturity beta (though insignificant) correct the overestimation. Economic factors and regulatory changes that fuelled M&A activities during this period must have been such that they increased the portfolio beta without increasing the ex-post portfolio return. Some of these unidentifiable factors are negatively correlated with the interest rates. One of the shortcomings of the factor analytic approach is that factors are at times unidentifiable. In spite of difficulties in explaining some of the results for the period 1985–1990, the multi-risk premia model does provide greater insight into the pricing of interest rate risk.
6.6 Conclusions
In this paper, we examine the interest rate sensitivity of commercial bank returns covering three distinct economic and regulatory environments. First, we investigate the pricing of the interest rate risk within the framework of the two-factor model.
Our results indicate that interest rate risk was priced during 1979–1984 irrespective of the choice of interest rate variable. However, during the periods 1974–1978 and 1985–1990, bank stock returns were sensitive only to the unexpected changes in the long-term interest rates. Next, we tested an asset-pricing model in which the traditional CAPM is augmented by three additional factors to account for unexpected changes in the inflation, the maturity premium, and the default premium. Our results show that the inflation beta was significant for the periods 1979–1984 and 1985–1990, but not for the period 1974–1978; the maturity beta was significant for the periods 1974–1978 and 1979–1984 but not for the period 1985–1990; and the default beta was significant for the period 1985–1990 but not for the periods 1974–1978 and 1979–1984. We can infer that when short-term interest rates are correlated with the market return, the innovations in short-term interest rates are indeed white noise. However, innovations in short-term interest rates contain valuable information when short-term interest rates are uncorrelated with the market return. This will lead to a significant inflation beta, and the interest rate risk will be priced with respect to the short-term rates within the context of the two-factor model. We can also infer that when long-term interest rates are correlated with the market return, the innovations in long-term interest rates are indeed white noise. However, innovations in long-term interest rates contain valuable information when long-term interest rates are uncorrelated with the market return. This results in a significant maturity beta and priced interest rate risk with respect to the long-term rates within the context of the two-variable model.
Appendix 1: An Integration of CAPM and APT

The traditional CAPM is augmented by three additional APT-type principal component factors (Johnson and Wichern 2007) to account for unexpected changes in the inflation premium, the maturity-risk premium, and the default-risk rating. Hence, the proposed return-generating model is written as

R_jt = a_j + b_1j R_mt + b_2Pj DP_t + b_2Mj DMR_t + b_2Dj DDR_t + e_jt
(6.7)
where DPt, DMRt, and DDRt are the proxies for the innovations in inflation, maturity risk, and default risk, respectively. Coefficients b1, b2P, b2M, and b2D are the measures of systematic market risk, inflation risk, maturity risk, and default risk, respectively, and are consistent with APT.
Principal Component Factor Specification

An important issue in the empirical investigation of this model is the specification of appropriate inflation-risk, maturity-risk, and default-risk factors. Being innovations
in economic variables, these factors cannot be predicted using past information. Hence, they must meet the following conditions:

E(DP_t | t−1) = 0,  E(DMR_t | t−1) = 0,  E(DDR_t | t−1) = 0    (6.8)

where E(· | t−1) is the expectation based on information available at t−1. Theoretical considerations dictate that the choice of a factor (say DMR_t) should not influence any other factor loadings (say b_1j). Hence, there should not be any contemporaneous cross-correlations between the factors:

E(DP_t · DMR_t) = 0,  E(DP_t · DDR_t) = 0,  E(DMR_t · DDR_t) = 0    (6.9)

In addition, there are no common market shocks that may influence any of the risk factors, so the following conditions

E(R_mt · DP_t) = 0,  E(R_mt · DMR_t) = 0,  E(R_mt · DDR_t) = 0    (6.10)

must be satisfied. We use a generated regressor approach to construct orthogonal inflation-risk, maturity-risk, and default-risk factors. Economic factors that lead to changes in the equity market return also induce term structure movements. This leads to a significant correlation between the stock market index and the bond market index. Hence, we use the stock market return to forecast systematic changes in the short-term interest rates. The innovations in the short-term default-free bond return specify the unexpected changes in inflation. Hence, the first systematic interest rate equation is written as

R_ST,t = δ0 + δ1 R_mt + e_It
(6.11)
where R_mt is the return on the stock market index and R_ST,t is the return on the short-term default-free bond. The error term in Eq. 6.11, e_It, specifies the unexpected change in inflation and is the generated regressor which serves as a proxy for factor DP_t. Fama and Schwert (1977) and French et al. (1983) employ similar approaches to specify a proxy for changes in inflation. The yield differential between short-term and long-term default-free bonds represents the slope of the yield curve. The relationship used to construct the maturity-risk factor is given by the second systematic interest rate equation:

R_LT,t = θ0 + θ1 R_ST,t + e_Mt
(6.12a)
where θ1 measures the average slope of the yield curve. Alternately, we will also test the following systematic interest rate equation:

R_LT,t = θ0 + θ1 R_ST,t + θ2 R_mt + e_Mt
(6.12b)
Inclusion of the independent variable, R_mt, will control for the stock–bond market linkage. The estimated coefficient θ1 measures the slope of the yield curve, and it determines the maturity-risk premium on long-term assets and liabilities. The error term in Eqs. 6.12a or 6.12b, e_Mt, specifies the unexpected change in maturity risk and will be used as the generated regressor which serves as a proxy for factor DMR_t. The portfolio of commercial banks used for this study consisted of money center and large national and regional banks. Most of these banks were well capitalized and had acquired a balanced portfolio of assets. The average default-risk rating of the portfolio of banks should be close to the default-risk rating of a high-grade corporate bond. Hence, the yield differential between the BAA corporate bond and the long-term treasury security is used to construct the default-risk factor. Our third systematic interest rate equation is written as

R_BAA,t = φ0 + φ1 R_LT,t + e_Dt
(6.13a)
Alternately, we will also test the following systematic interest rate equation:

R_BAA,t = φ0 + φ1 R_LT,t + φ2 R_mt + e_Dt
(6.13b)
where φ1 measures the average default risk on the BAA corporate bond. Inclusion of the independent variable, R_mt, will control for the stock–bond market linkage. The error term in Eqs. 6.13a or 6.13b, e_Dt, specifies the unexpected change in default risk and serves as the generated regressor which will be used as the proxy for factor DDR_t.
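A minimal sketch of this generated-regressor construction may help make the sequence concrete. The code below is illustrative only: the return series are simulated with hypothetical names (r_m, r_st, r_lt, r_baa, r_bank), and plain OLS is used instead of the GLS estimation applied in the chapter.

```python
import numpy as np

def ols(y, X):
    """Regress y on X (intercept added) and return coefficients and residuals."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta, y - X1 @ beta

# Hypothetical monthly return series (simulated here only so the code runs)
rng = np.random.default_rng(0)
n = 120
r_m = rng.normal(0.01, 0.04, n)                                     # market return
r_st = 0.002 + 0.10 * r_m + rng.normal(0, 0.005, n)                 # short-term T-bill
r_lt = 0.001 + 0.80 * r_st + 0.05 * r_m + rng.normal(0, 0.010, n)   # long-term T-note
r_baa = 0.001 + 0.90 * r_lt + 0.05 * r_m + rng.normal(0, 0.010, n)  # BAA bond
r_bank = 0.003 + 0.70 * r_m + rng.normal(0, 0.030, n)               # bank portfolio

# Eq. 6.11: DP_t  = residual of R_ST on R_m (inflation factor)
_, dp = ols(r_st, np.column_stack([r_m]))
# Eq. 6.12b: DMR_t = residual of R_LT on R_ST and R_m (maturity factor)
_, dmr = ols(r_lt, np.column_stack([r_st, r_m]))
# Eq. 6.13b: DDR_t = residual of R_BAA on R_LT and R_m (default factor)
_, ddr = ols(r_baa, np.column_stack([r_lt, r_m]))

# Eq. 6.7: multi-risk premia regression for the bank portfolio
coef, _ = ols(r_bank, np.column_stack([r_m, dp, dmr, ddr]))
print("a0, b1 (market), b2P, b2M, b2D:", np.round(coef, 4))
```

In the chapter's application the three residual series are built from actual market data and the final regression is re-estimated by GLS to handle the serial correlation flagged in Tables 6.5 and 6.6.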
Appendix 2: Interest Rate Innovations

An important issue in the empirical investigation of the two-factor model is the specification of an appropriate interest rate factor. Theoretical consideration of factor analysis requires that the two factors, R_mt and R_It, be orthogonal, whereby the choice of the second factor (R_It) would not influence the first factor loading (b_1j). The resolution of this constraint requires a robust technique for determining the unexpected changes in the interest rate that are uncorrelated with the market return. The three approaches to specify the interest rate factor are presented here.
Orthogonalization Procedure

Economic factors producing changes in the market return also induce term structure movements. This leads to a high correlation between the market factor and the interest rate factor. Hence, the market return can be used as an instrumental variable to forecast the expected changes in the interest rates. To find the unexpected component of interest rates, the expected interest rate is purged by regressing R_It on R_mt and using the residuals. The systematic interest rate risk equation is

R_It = δ0 + δ1 R_mt + e_it
(6.14)
The residuals, eit, are the unsystematic interest rates and are used to replace RIt in Eq. 6.14. The validity of this approach has been questioned on methodological grounds. It is pointed out that this orthogonalization procedure produces biased estimates of coefficients (intercept and b1j in Eq. 6.14) and that the deficiency of b2j is not improved. On the other hand, use of an unorthogonal interest rate factor leads to the errors-in-variable problem, i.e., the estimated coefficient b2j also captures some of the effects responsible for changing the market factor, and hence, it is not a true measure of the interest rate risk. Another problem using an unorthogonal interest rate factor stems from the fact that interest rate (RIt) is usually autoregressive. Therefore, residuals, eit, from Eq. 6.14 are autocorrelated unless GLS parameter estimation procedure is employed. To use the GLS procedure, a variance-covariance matrix has to be specified, which is not an easy task.
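A short sketch of this orthogonalization step is given below, assuming the interest rate and market return series are available as NumPy arrays. The function and variable names are illustrative, and the toy data only show that the residual factor is, by construction, uncorrelated with the market return; the residual autocorrelation problem noted above remains.

```python
import numpy as np

def orthogonal_interest_factor(r_i, r_m):
    """Eq. 6.14: regress R_It on R_mt and keep the residuals as the
    unsystematic (unexpected) interest rate factor."""
    X = np.column_stack([np.ones_like(r_m), r_m])
    d, *_ = np.linalg.lstsq(X, r_i, rcond=None)
    return r_i - X @ d

# toy data: an autocorrelated interest-rate series that co-moves with the market
rng = np.random.default_rng(1)
r_m = rng.normal(0.01, 0.04, 200)
e = np.zeros(200)
for t in range(1, 200):
    e[t] = 0.5 * e[t - 1] + rng.normal(0, 0.005)
r_i = 0.3 * r_m + e

e_orth = orthogonal_interest_factor(r_i, r_m)
print("corr(market, raw factor)        =", round(np.corrcoef(r_m, r_i)[0, 1], 3))
print("corr(market, orthogonal factor) =", round(np.corrcoef(r_m, e_orth)[0, 1], 3))
```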
Univariate ARMA Model

The second approach is to identify and estimate an ARMA model for the interest rate variable, R_It (Flannery and James (1984)). The unanticipated change in the interest rate from the estimated model (i.e., the residuals) is used to replace R_It in Eq. 6.14. In general, the ARMA model of order (p, q) for the univariate time series R_It is written as

R_It = ф1 R_I,t−1 + ф2 R_I,t−2 + … + фp R_I,t−p + μ − θ1 e_I,t−1 − … − θq e_I,t−q + e_It    (6.15)

where e_It, e_I,t−1, … are identically and independently distributed random errors with mean zero. The ARMA procedure for the modeling of time series data is outlined in Box and Jenkins (1976). The modeling is usually done in three steps. First, a tentative parsimonious model is identified. Second, the parameters are estimated, and diagnostic (Box-Pierce Q) statistics and residual autocorrelation plots are examined. The model is acceptable if the time series of residuals is white noise and the Box-Pierce Q statistics are insignificant.
This approach leads to unbiased estimates of all the coefficients in Eq. 6.14. The shortcoming of the univariate ARMA approach is that valuable information contained in the stock and bond market linkage is ignored. Consequently, coefficient b2j captures some of the effects of economic factors producing stock market changes.
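The ARMA route can be sketched as follows, assuming the statsmodels package is available. The simulated series and the order (1, 0, 1) are purely illustrative; in practice the order would come from the Box-Jenkins identification step described above, with the Q statistics checked on the residuals.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.stats.diagnostic import acorr_ljungbox

# toy AR(1) interest-rate return series
rng = np.random.default_rng(2)
r_i = np.zeros(300)
for t in range(1, 300):
    r_i[t] = 0.002 + 0.6 * r_i[t - 1] + rng.normal(0, 0.004)

# Fit a parsimonious ARMA(p, q); the residuals proxy the unanticipated change
# in the interest rate that replaces R_It in the two-factor regression.
fit = ARIMA(r_i, order=(1, 0, 1)).fit()
innovations = fit.resid
print(acorr_ljungbox(innovations, lags=[10], boxpierce=True))  # Q statistics
print("innovation mean:", round(float(innovations.mean()), 6))
```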
Vector ARMA Model

In recent years vector autoregressive moving average (VARMA) models have proved to be useful tools to describe the dynamic relationship between economic variables. A vector autoregressive moving average model of order (p, q) for the k-dimensional time series R_t = (R_1t, R_2t, …, R_kt)^T is generated by the following equation:

Φ(B) R_t = Θ(B) e_t    (6.16)

Φ(B) = I − ф1 B − ф2 B^2 − … − фp B^p    (6.17)

Θ(B) = I − θ1 B − θ2 B^2 − … − θq B^q    (6.18)
where B is the back shift operator, I is the k×k identity matrix, and e_t is a sequence of independent k-dimensional random vectors with zero mean and positive definite covariance matrix. The interest rate variable R_It and the market return R_mt are treated as the components of a bivariate vector. The vector (R_It, R_mt)^T is then modeled as a vector AR process. The estimated model provides the unanticipated change in the interest rate variable to be used as the second factor in the augmented CAPM model.
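A bivariate VAR version of the same idea might look like the following sketch, again assuming statsmodels and pandas are available; the simulated system and the information-criterion lag choice are only for illustration.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

# toy bivariate system of interest-rate and market returns
rng = np.random.default_rng(3)
n = 300
r_m = np.zeros(n)
r_i = np.zeros(n)
for t in range(1, n):
    r_m[t] = 0.3 * r_m[t - 1] + 0.1 * r_i[t - 1] + rng.normal(0, 0.04)
    r_i[t] = 0.2 * r_m[t - 1] + 0.6 * r_i[t - 1] + rng.normal(0, 0.005)

data = pd.DataFrame({"R_I": r_i, "R_m": r_m})
res = VAR(data).fit(maxlags=4, ic="aic")   # lag order chosen by information criterion
innov = res.resid["R_I"]                   # unanticipated interest-rate changes
print("lags selected:", res.k_ar, " innovation std:", round(float(innov.std()), 5))
```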
References

Bae, S. C. (1990). Interest rate changes and common stock returns of financial institutions: Revised. Journal of Financial Research, 13, 71–79.
Booth, J. R., & Officer, D. T. (1985). Expectations, interest rates, and commercial bank stocks. Journal of Financial Research, 8, 51–58.
Box, G. E. P., & Jenkins, G. M. (1976). Time series analysis, forecasting and control. San Francisco: Holden-Day.
Chance, D. M., & Lane, W. R. (1980). A re-examination of interest rate sensitivity in the common stock of financial institutions. Journal of Financial Research, 3, 49–55.
Chen, N., Roll, R., & Ross, S. (1986). Economic forces and the stock market. Journal of Business, 59, 383–404.
Choi, J. J., Elyasiani, E., & Kopecky, K. (1992). The sensitivity of bank stock returns to market, interest, and exchange rate risks. Journal of Banking and Finance, 16, 983–1004.
Christie, A. A. (1981). The stochastic behavior of common stocks variances: Value, leverage and interest rate effects. Journal of Financial Economics, 10, 407–432.
Cox, J. C., Ingersoll, J. E., & Ross, S. A. (1985). An intertemporal general equilibrium model of asset prices. Econometrica, 53, 363–384.
Elyasiani, E., & Iqbal, M. (1998). Sensitivity of the bank stock returns distribution to changes in the level and volatility of interest rate: A GARCH-M model. Journal of Banking & Finance, 22, 535–563.
Fama, E. F., & Gibbons, M. R. (1982). Inflation, real returns and capital investment. Journal of Monetary Economics, 9, 297–323.
Fama, E. F., & Schwert, W. G. (1977). Asset returns and inflation. Journal of Financial Economics, 4, 115–146.
Flannery, M. J., & James, C. M. (1984). The effect of interest rate changes on the common stock returns of financial institutions. Journal of Finance, 39, 1141–1153.
Flannery, M. J., Hameed, A. S., & Harjes, R. H. (1997). Asset pricing, time-varying risk premia and interest rate risk. Journal of Banking and Finance, 21, 315–335.
French, K. R., Ruback, R. C., & Schwert, G. W. (1983). Effects of nominal contracting hypothesis on stock returns. Journal of Political Economy, 91, 70–96.
Johnson, R., & Wichern, D. (2007). Applied multivariate statistical analysis (6th ed.). Upper Saddle River: Pearson Education.
Lloyd, W. P., & Shick, R. (1977). A test of Stone's two index models of returns. Journal of Financial and Quantitative Analysis, 12, 363–376.
Lynge, M., & Zumwalt, J. (1980). An empirical study of interest rate sensitivity of commercial bank returns: A multi index approach. Journal of Financial and Quantitative Analysis, 15, 731–742.
Merton, R. C. (1973). An intertemporal capital asset pricing model. Econometrica, 41, 867–887.
Song, F. (1994). A two factor ARCH model for deposit-institution stock returns. Journal of Money, Credit, and Banking, 26, 323–340.
Srivastava, S. C., Hamid, S., & Choudhury, A. H. (1999). Stock and bond market linkage in the empirical study of interest rate sensitivity of bank returns. Journal of Applied Business Research, 15, 47–58.
Stone, B. (1974). Systematic interest rate risk in a two-index model of returns. Journal of Financial and Quantitative Analysis, 9, 709–721.
Sweeney, R. J., & Warga, A. D. (1986). The pricing of interest-rate risk: Evidence from the stock market. Journal of Finance, 41, 393–410.
Unal, H., & Kane, E. J. (1988). Two approaches to assessing the interest-rate sensitivity of deposit-institutions' equity returns. In C. Andrew (Ed.), Research in finance (Vol. 7, pp. 113–132). Greenwich, Conn.: JAI Press.
Woolridge, J. (2009). Introductory econometrics: A modern approach (4th ed.). Mason: Cengage Learning.
Yourougou, P. (1990). Interest rate and the pricing of depository financial intermediary common stock: Empirical evidence. Journal of Banking and Finance, 14, 803–820.
7 Nonparametric Bounds for European Option Prices

Hsuan-Chu Lin, Ren-Raw Chen, and Oded Palmon
Contents
7.1 Introduction ................................ 208
7.2 The Bounds ................................ 210
7.3 Comparisons ................................ 214
7.4 Extensions ................................ 218
7.5 Empirical Study ................................ 222
7.6 Conclusion ................................ 228
Appendix 1 ................................ 229
References ................................ 230
Abstract
Much research has been devoted to discovering the distributional defects in the Black-Scholes model, which are known to cause severe biases. However, with a free specification for the distribution, one can only find upper and lower bounds for option prices. In this paper, we derive a new nonparametric lower bound and provide an alternative interpretation of Ritchken's (1985) upper bound to the price of the European option. In a series of numerical examples, our new lower bound is substantially tighter than previous lower bounds.
The financial support of National Science Council, Taiwan, Republic of China (NSC 96-2416-H006-039-), is gratefully acknowledged.
H.-C. Lin (*) Graduate Institute of Finance and Banking, National Cheng-Kung University, Tainan, Taiwan. e-mail: [email protected]
R.-R. Chen Graduate School of Business Administration, Fordham University, New York, NY, USA. e-mail: [email protected]
O. Palmon Department of Finance and Economics, Rutgers Business School – Newark and New Brunswick, Piscataway, NJ, USA. e-mail: [email protected]
This is especially true for out-of-the-money (OTM) options, where the previous lower bounds perform badly. Moreover, in an empirical study we show that our bounds can be derived from histograms, which are completely nonparametric. We first construct histograms from realizations of S&P 500 index returns following Chen, Lin, and Palmon (2006); calculate the dollar beta of the option and expected payoffs of the index and the option; and eventually obtain our bounds. We discover violations of our lower bound and show that those violations represent arbitrage profits. In particular, our empirical results show that out-of-the-money calls are substantially overpriced (violate the lower bound).

Keywords
Option bounds • Nonparametric • Black-Scholes model • European option • S&P 500 index • Arbitrage • Distribution of underlying asset • Lower bound • Out-of-the-money • Kernel pricing
7.1 Introduction
In a seminal paper, Merton (1973) presents for the first time the no-arbitrage bounds of European call and put options. These bounds are nonparametric and do not rely on any assumption.1 Exact pricing formulas such as the Black and Scholes (1973) model and its variants, on the other hand, rely on strong assumptions on the asset price process and continuous trading. Due to the discreteness of actual trading opportunities, Perrakis and Ryan (1984) point out that option analyses in continuous time limit the accuracy and applicability of the Black-Scholes and related formulas. Relying on Rubinstein’s (1976) approach, the single-price law, and arbitrage arguments, they derive upper and lower bounds for option prices with both a general price distribution and discrete trading opportunities. Their lower bound is tighter than that of Merton. Levy (1985) applies stochastic dominance rules with borrowing and lending at the risk-free interest rate to derive upper and lower option bounds for all unconstrained utility functions and alternatively for concave utility functions. The derivation of these bounds can be applied to any kinds of stock price distribution as long as the stock is “nonnegative beta,” which is identical to the assumption of Perrakis and Ryan (1984). Moreover, Levy claims that Perrakis and Ryan’s bounds can be obtained by applying the second-degree stochastic dominance rule. However, Perrakis and Ryan do not cover all possible combinations of the risky asset with the riskless asset, and their bounds are therefore wider than those of Levy. Levy also applies the first-degree stochastic dominance rule (FSDR) with riskless assets to prove that Merton’s bounds are in fact FSDR bounds and applies the second-degree stochastic dominance rule to strengthen Merton’s bounds on the option value. At the same time, Ritchken (1985) uses a linear programming methodology to derive option bounds based on primitive prices in incomplete markets and claims that his bounds are tighter than those of Perrakis and Ryan (1984). 1
The only assumption is that both option and its underlying stock are traded securities.
With an additional restriction that the range of the distribution of the one-period returns per dollar invested in the optioned stock is finite and has a strictly positive lower limit, Perrakis (1986) extends Perrakis and Ryan (1984) to provide bounds for American options. Instead of assuming that no opportunities exist to revise positions prior to expiration in Levy (1985) and Ritchken (Ritchken 1985), Ritchken and Kuo (1988) obtain tighter bounds on option prices under an incomplete market by allowing for a finite number of opportunities to revise position before expiration and making more restrictive assumptions on probabilities and preferences. The single-period linear programming option model is extended to handle multiple periods, and the stock price is assumed to follow a multiplicative multinomial process. Their results show that the upper bounds are identical to those of Perrakis, while the lower bounds are tighter. Later, Ritchken and Kuo (1989) also add suitable constraints to a linear programming problem to derive option bounds under higher orders of stochastic dominance preferences. Their results show that while the upper bounds remain unchanged beyond the second-degree stochastic dominance, the lower bounds become sharper as the order of stochastic dominance increases.2 Claiming that Perrakis and Ryan (1984), Levy (1985), Ritchken (1985), and Perrakis (1986) are all parametric models, Lo (1987) derives semi-parametric upper bounds for the expected payoff of call and put options. These upper bounds are semi-parametric because they depend on the mean and variance of the stock price at maturity but not on its entire distribution. In addition, the derivation of corresponding semi-parametric upper bounds for option prices is shown by adopting the risk-neutral pricing approach of Cox and Ross (1976).3 To continue the work of Lo (1987), Zhang (1994) and De La Pena et al. (2004), both of which assume that the underlying asset price must be continuously distributed, sharpen the upper option bounds of Lo (1987). Boyle and Lin (1997) extend the results of Lo (1987) to contingent claims on multiple underlying assets. Under an intertemporal setting, Constantinides and Zariphopoulou (2001) derive bounds for derivative prices with proportional transaction costs and multiple securities. Frey and Sin (1999) examine the sufficient conditions of Merton’s bounds on European option prices under random volatility. More recently, Gotoh and Konno (2002) use the semi-definite programming and a cutting plane algorithm to study upper and lower bounds of European call option prices. Rodriguez (2003) uses a nonparametric method to derive lower and upper 2
To further explore the research work of Ritchken and Kuo (1989) under the decreasing absolute risk aversion dominance rule, Basso and Pianca (1997) obtain efficient lower and upper option pricing bounds by solving nonlinear optimization problem. Unfortunately, neither model provides enough information of their numerical examples for us to compare our model with. The RitchkenKuo model provides no Black-Scholes comparison, and the Basso-Pianca model provides only some partial information on the Black-Scholes model (we find the Black-Scholes model under 0.2 volatility to be 13.2670 and under the 0.4 volatility to be 20.3185, which are different from what are reported in their paper (12.993 and 20.098, respectively)) which is insufficient for us to provide any comparison. 3 Inspired by Lo (1987), Grundy (1991) derives semi-parametric upper bounds on the moments of the true, other than risk-neutral, distribution of underlying assets and obtains lower bounds by using observed option prices.
bounds and contributes a new tighter lower bound than previous work. Huang (Huang 2004) puts restrictions on the representative investor’s relative risk aversion and produces a tighter call option bound than that of Perrakis and Ryan (1984). Hobson et al. (2005) derive arbitrage-free upper bounds for the prices of basket options. Pen˜a et al. (2010) conduct static-arbitrage lower bounds on the prices of basket options via linear programming. Broadie and Cao (2008) introduce new and improved methods based on simulation to obtain tighter lower and upper bounds for pricing American options. Lately, Chung et al. (2010) also use an exponential function to approximate the early exercise boundary to obtain tighter bounds on American option prices. Chuang et al. (2011) provide a more complete review and comparison of theoretical and empirical development on option bounds.4 In this paper we derive a new and tighter lower bound for European option prices under a nonparametric framework. We show that Ritchken’s (1985) upper bound is consistent with our nonparametric framework. Both bounds are nonparametric because the price distribution of underlying asset is totally flexible, can be arbitrarily chosen, and is consistent with any utility preference.5 We compare our lower bound with those in previous studies and show that ours dominate those models by a wide margin. We also present the lower bound result on the model with random volatility and random interest rates (Bakshi et al. 1997; Scott 1997) to demonstrate how easily our model can be made consistent with any parametric structure.6 Finally, we present how our bounds can be derived from histograms which are nonparametric in an empirical study. We discover violations of our lower bound and show that those violations present arbitrage profits.7 In particular, our empirical results show that out-of-the-money calls are substantially overpriced (violate the lower bound).
7.2 The Bounds
A generic and classical asset pricing model with a stochastic kernel is

S_t = E_t[M_{t,T} S_T],
(7.1)
where Mt,T is the marginal rate of substitution, also known as the pricing kernel that discounts the future cash flow at time T; Et[.] is the conditional expectation 4
Since our paper only provides a nonparametric method on examining European option bounds, our literature review is much limited. For a more complete review and comparison on prior studies of option bounds, please see Chuang et al. (2011). 5 Christoffersen et al. (2010) provide results for the valuation of European-style contingent claims for a large class of specifications of the underlying asset returns. 6 Given that our upper bound turns out to be identical to Ritchken’s (1985), we do not compare with those upper bound models that dominate Ritchken (e.g., Huang (2004), Zhang (1994) and De La Pena et al. (2004)). Also, we do not compare our model with those models that require further assumptions to carry out exact results (e.g., Huang (2004) and Frey and Sin (1999)), since it is technically difficult to do. 7 For the related empirical studies of S&P 500 index options, see Constantinides et al. (2009, 2011).
under the physical measure ℙ taken at time t; and S_t is the value of an arbitrary asset at time t. The standard kernel pricing theory (e.g., Ingersoll (1989)) demonstrates that

D_{t,T} = E_t[M_{t,T}],
(7.2)
where D_{t,T} is the risk-free discount factor that gives the present value of $1 over the period (t, T). The usual separation theorem gives rise to the well-known risk-neutral pricing result:

S_t = E_t[M_{t,T} S_T] = E_t[M_{t,T}] Ê_t^{(T)}[S_T] = D_{t,T} Ê_t^{(T)}[S_T]    (7.3)

If the risk-free interest rate is stochastic, then Ê_t^{(T)}[·] is the conditional expectation under the T-forward measure P̂^{(T)}. When the risk-free rate is non-stochastic, then the forward measure reduces to the risk-neutral measure P̂ and will not depend upon maturity time, i.e., Ê_t^{(T)}[·] → Ê_t[·].8 Note that Eqs. 7.1 and 7.3 can be applied to both the stock and the option prices. This leads to the following theorem, which is the main result of this paper.

Theorem 7.1 The following formula provides a lower bound for the European call option C_t:

C̲_t = D_{t,T} E_t[C_T] + b_C (S_t − D_{t,T} E_t[S_T])    (7.4)

where b_C = cov[C_T, S_T] / var[S_T].
Proof By Eq. 7.1, the option price must follow C_t = E_t[M_{t,T} C_T], and hence

C_t = E_t[M_{t,T} C_T]
    = E_t[M_{t,T}] E_t[C_T] + cov[M_{t,T}, C_T]    (7.5)
    = D_{t,T} E_t[C_T] + cov[M_{t,T}, C_T]

or

C_t − D_{t,T} E_t[C_T] = cov[M_{t,T}, C_T].
(7.6)
St Dt, T Et ½ST ¼ cov Mt, T ; ST :
(7.7)
Similarly,
8 Without loss of generality and for ease of exposition, we take non-stochastic interest rates and proceed with the risk-neutral measure P̂ for the rest of the paper.
Hence, to prove Eq. 7.4, we only need to prove

cov[M_{t,T}, C_T] ≥ b_C cov[M_{t,T}, S_T].
(7.8)
Note that cov[M_{t,T}, C_T] = cov[M_{t,T}, max{S_T − K, 0}] is monotonic in the strike price K; it has a minimum value when K = 0, in which case cov[M_{t,T}, C_T] = cov[M_{t,T}, S_T], and a maximum value when K → ∞, in which case cov[M_{t,T}, C_T] = 0. Hence, cov[M_{t,T}, S_T] ≤ cov[M_{t,T}, C_T] ≤ 0. Note that 0 < b_C < 1 (see the proof in the following Corollary). In Appendix 1, it is proved that cov[M_{t,T}, C_T] ≥ b_C cov[M_{t,T}, S_T]. The put lower bound takes the same form and is provided in the following corollary:

Corollary 7.1 The lower bound of the European put option P_t can be obtained by the put-call parity and satisfies the same functional form as in Theorem 7.1:

P̲_t = D_{t,T} E_t[P_T] + b_P (S_t − D_{t,T} E_t[S_T])    (7.9)

where b_P = cov[P_T, S_T] / var[S_T].
[Proof] By the put-call parity:

P_t = C_t + D_{t,T} K − S_t ≥ C̲_t + D_{t,T} K − S_t = P̲_t.
(7.10)
We then substitute in the result of Theorem 7.1 to get

P̲_t = D_{t,T} E_t[C_T] + b_C (S_t − D_{t,T} E_t[S_T]) + D_{t,T} K − S_t
    = D_{t,T} E_t[S_T − K + P_T] + b_C (S_t − D_{t,T} E_t[S_T]) + D_{t,T} K − S_t
    = D_{t,T} E_t[P_T] + b_P (S_t − D_{t,T} E_t[S_T])
(7.11)
where b_P = b_C − 1. Note that b_P < 0 < b_C.
9 This also implies that b_C ≥ 0.
10 Perrakis and Ryan (1984) and Ritchken (1985) obtain the identical upper bound.
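To make Theorem 7.1 and Corollary 7.1 operational, the bound can be computed directly from any sample (or histogram) of terminal prices under the physical measure, since only E_t[C_T], E_t[S_T], and the dollar beta b_C are needed. The sketch below is illustrative (function names are ours, sample points are equal-weighted); the lognormal example roughly matches the base case used later in Sect. 7.3.

```python
import numpy as np

def call_lower_bound(s0, k, d_tT, s_T):
    """Lower bound of Eq. 7.4 from an equal-weighted sample/histogram of S_T
    drawn under the physical measure."""
    c_T = np.maximum(s_T - k, 0.0)
    cov_mat = np.cov(c_T, s_T)
    b_c = cov_mat[0, 1] / cov_mat[1, 1]          # dollar beta of the call
    return d_tT * c_T.mean() + b_c * (s0 - d_tT * s_T.mean())

def put_lower_bound(s0, k, d_tT, s_T):
    """Corollary 7.1: the put bound follows from the call bound by put-call parity."""
    return call_lower_bound(s0, k, d_tT, s_T) + d_tT * k - s0

# illustration with a lognormal sample (mu = 20%, sigma = 20%, T = 1, r = 10%)
rng = np.random.default_rng(4)
s_T = 50.0 * np.exp(0.20 - 0.5 * 0.20**2 + 0.20 * rng.standard_normal(200_000))
print("call lower bound:", round(call_lower_bound(50.0, 50.0, np.exp(-0.10), s_T), 4))
print("put lower bound :", round(put_lower_bound(50.0, 50.0, np.exp(-0.10), s_T), 4))
```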
Theorem 7.2 The following formulas provide upper bounds for the European call and put options (Ritchken (1985))11:

C̄_t = (S_t / E_t[S_T]) E_t[C_T]
P̄_t = (S_t / E_t[S_T]) E_t[P_T] + (D_{t,T} − S_t / E_t[S_T]) K.
(7.12)
Proof Similar to the proof of the lower bound, the upper bound of the call option is obtained as follows:

(S_t / E_t[S_T]) E_t[C_T] = (E_t[M_{t,T} S_T] / E_t[S_T]) E_t[C_T]
    = (E_t[M_{t,T} S_T C_T] − cov[M_{t,T} S_T, C_T]) / E_t[S_T]
    > E_t[M_{t,T} S_T C_T] / E_t[S_T]    (7.13)
    = C_t E_t^{(C)}[S_T] / E_t[S_T]
    > C_t.
The third line of the above equation results from the fact that cov[M_{t,T} S_T, C_T] < 0. The fourth line of the above equation is a change of measure with the call option being the numeraire. The last line of the above equation is a result based upon E_t^{(C)}[S_T]/E_t[S_T] > 1 (footnote 12). By the put-call parity, we can show that the upper bound of the put option requires an additional term:

P̄_t = C̄_t + D_{t,T} K − S_t = (S_t / E_t[S_T]) E_t[P_T + S_T − K] + D_{t,T} K − S_t
    = (S_t / E_t[S_T]) E_t[P_T] + (D_{t,T} − S_t / E_t[S_T]) K.
(7.14)
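The upper bounds of Eq. 7.12 need even less information: only the expected payoffs under the physical measure. A minimal, illustrative sketch (our function names, equal-weighted sample) follows.

```python
import numpy as np

def call_upper_bound(s0, k, s_T):
    """Upper bound of Eq. 7.12 from a physical-measure sample of S_T."""
    return s0 * np.maximum(s_T - k, 0.0).mean() / s_T.mean()

def put_upper_bound(s0, k, d_tT, s_T):
    ratio = s0 / s_T.mean()
    return ratio * np.maximum(k - s_T, 0.0).mean() + (d_tT - ratio) * k

rng = np.random.default_rng(5)
s_T = 50.0 * np.exp(0.20 - 0.5 * 0.20**2 + 0.20 * rng.standard_normal(200_000))
print("call upper bound:", round(call_upper_bound(50.0, 50.0, s_T), 4))
print("put upper bound :", round(put_upper_bound(50.0, 50.0, np.exp(-0.10), s_T), 4))
```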
The lower and upper bounds we show in this paper have two important advantages over the existing bounds. The bounds will converge to the true value of the option if:
• The expected stock return, E_t[S_T]/S_t, approaches the risk-free rate.
• The correlation between the stock and the call or put option (ρ_SC or ρ_SP) approaches 1 or −1.
11 This is the same as Proposition 3-i (Eq. 7.26) in Ritchken (1985).
12 By the definition of measure change, we have E_t[C_T S_T] = E_t[C_T] E_t^{(C)}[S_T], which implies E_t^{(C)}[S_T]/E_t[S_T] = E_t[C_T S_T]/{E_t[C_T] E_t[S_T]} > 1.
These advantages help us identify when the bounds are tight and when they are not. The first advantage indicates that the bounds are tight for low-risk stocks and not tight for high-risk stocks. The second advantage indicates that the bounds are tighter for in-the-money options than out-of-the-money options.
7.3 Comparisons
The main purpose of this section is to compare our lower bound to several lower bounds in previous studies, namely, Merton (1973), Perrakis and Ryan (1984), Ritchken (1985), Ritchken and Kuo (1988), Gotoh and Konno (2002), and Rodriguez (2003), using the Black-Scholes model as the benchmark for its true option value. We also compare Ritchken's upper bound (which is also our upper bound) with more recent works by Gotoh and Konno (2002) and Rodriguez (2003). The Black-Scholes model has five variables: stock price, strike price, volatility (standard deviation), risk-free rate (constant), and time to maturity. In addition to the five variables, the lower bound models need the physical expected stock return. The following is the base case for the comparison:
• Current stock S0 = 50
• Strike K = 50
• Volatility σ = 0.2
• Risk-free rate r = 0.1
• Time to maturity T = 1
• Stock expected return μ = 0.2
In the Black-Scholes model, the stock price (S) evolution follows a log normal process:

dS = μS dt + σS dW
(7.15)
where the instantaneous expected rate of stock return μ and the volatility of the stock price σ are assumed to be constants, and where dW is a Wiener process. The call option price is computed as

C = S_0 N(d_+) − e^{−rT} K N(d_−),  where d_± = [ln S_0 − ln K + (r ± 0.5σ²)T] / (σ√T).
(7.16)
The lower bound is computed by simulating the stock price using Eq. 7.15 via a binomial distribution approximation:

S_j = S_0 u^j d^{n−j}.
(7.17)
As n approaches infinity, S_j approaches the log normal distribution, and the binomial model converges to the Black-Scholes model. Under the risk-neutral measure, the probability associated with the jth state is set as

P̂r[j] = (n choose j) p̂^j (1 − p̂)^{n−j}
(7.18)
where

p̂ = (e^{rΔt} − d) / (u − d),  u = e^{σ√Δt},  d = e^{−σ√Δt},
and Δt = T/n represents the length of the partition. Under the actual measure, the formula changes to

Pr[j] = (n choose j) p^j (1 − p)^{n−j}
(7.19)
where

p = (e^{μΔt} − d) / (u − d).
Finally, the pricing kernel in our model is set as

M_{0T}[j] = (P̂r[j] / Pr[j]) e^{−rT}.
(7.20)
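A compact sketch of this binomial construction is given below, assuming NumPy and SciPy are available. Parameter names follow the base case above, and the lower bound is evaluated from Eq. 7.4 using the actual-measure probabilities of Eq. 7.19; the code is an illustration rather than the authors' original implementation.

```python
import numpy as np
from math import exp, sqrt
from scipy.stats import binom

def binomial_bounds(s0=50.0, k=50.0, sigma=0.2, r=0.10, T=1.0, mu=0.20, n=1000):
    """Terminal states (Eq. 7.17), risk-neutral and actual probabilities
    (Eqs. 7.18-7.19), the binomial option price, and the Eq. 7.4 lower bound."""
    dt = T / n
    u, d = exp(sigma * sqrt(dt)), exp(-sigma * sqrt(dt))
    p_hat = (exp(r * dt) - d) / (u - d)       # risk-neutral up probability
    p = (exp(mu * dt) - d) / (u - d)          # actual-measure up probability
    j = np.arange(n + 1)
    s_T = s0 * u**j * d**(n - j)
    q_hat = binom.pmf(j, n, p_hat)
    q = binom.pmf(j, n, p)
    c_T = np.maximum(s_T - k, 0.0)
    price = exp(-r * T) * q_hat @ c_T          # converges to Black-Scholes as n grows
    e_s, e_c = q @ s_T, q @ c_T                # physical-measure expectations
    b_c = (q @ ((c_T - e_c) * (s_T - e_s))) / (q @ ((s_T - e_s) ** 2))
    lower = exp(-r * T) * e_c + b_c * (s0 - exp(-r * T) * e_s)
    return price, lower

price, lower = binomial_bounds()
print(f"binomial price: {price:.4f}  lower bound: {lower:.4f}")
```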
In our results, we let n be great enough so that the binomial model price is 4-digit accurate to the Black-Scholes model. We hence set n to be 1,000. The results are reported in Table 7.1. The first panel presents the results for various moneyness levels, the second panel presents the results for various volatility levels, the third panel presents the results for various interest rates, the fourth panel presents the results for various maturity times, and the last presents the results for various stock
expected returns. In general, the lower bounds are tighter (for all models) when the moneyness is high (in-the-money), volatility is high, the risk-free rate is high, time to maturity is short, and the expected return of the stock is low.

Table 7.1 presents comparisons between our lower bound and the existing lower bounds of Merton (1973), Perrakis and Ryan (1984), and Ritchken (1985). The base case parameter values are:
• Stock price = 50
• Strike price = 50
• Volatility = 0.2
• Risk-free rate = 10 %
• Time to maturity = 1 year
• Stock expected return (μ) = 20 %
The highlighted rows represent the base case. As we can easily see, our model for the lower bound is universally tighter than any of the comparative models. One result particularly worth mentioning is that our lower bound performs better than the other lower bound models for out-of-the-money options. For example, our lower bound is much better than Ritchken's (1985) lower bound when the option is 20 % out-of-the-money and continues to show value when Ritchken's lower bound returns to 0 (see the first panel of Table 7.1). While Ritchken and Kuo (1988) claim to obtain tighter lower bounds than Perrakis (1986) and Perrakis and Ryan (1984), they do not show direct comparisons in their paper. Rather, they present, through a convergence plot (Figure 3 on page 308 in Ritchken and Kuo (1988)), a Black-Scholes example with the true value being $5.4532 and the lower bound approaching roughly $5.2. The same parameter values with our lower bound give a lower bound of $5.4427, which demonstrates a substantial improvement over the Ritchken and Kuo model. The comparisons with the more recent studies of Gotoh and Konno (2002) and Rodriguez (2003) are given in Tables 7.2 and 7.3.13 Gotoh and Konno use semi-definite programming and a cutting plane algorithm to study upper and lower bounds of European call option prices. Rodriguez uses a nonparametric method to derive lower and upper bounds. As we can see in Tables 7.2 and 7.3, except for very few upper bound cases, none of the bounds under Gotoh and Konno's model and Rodriguez's model are very tight compared to our model. Furthermore, note that our model requires no moments of the underlying distribution.14 The base case parameter values for these comparisons are:
• Stock price = 40
• Risk-free rate = 6 %
• Stock expected return (μ) = 20 %
We also compare with the upper bound by Zhang (1994), which is an improved upper bound by Lo (1987), and show overwhelming dominance of our upper bound. The results (comparison to Tables 7.1, 7.2, and 7.3 in Zhang) are available upon request. 14 The upper bounds by the Gotoh and Konno model perform well in only in-the-money, short maturity, and low volatility scenarios, and these scenarios are where the option prices are close to their intrinsic values, and hence the percentage errors are small.
Table 7.1 Lower bound comparison S 20 25 30 35 40 45 50 55 60 65 70 75 80
Blk-Sch 0.0000 0.0028 0.0533 0.3725 1.3932 3.4746 6.6322 10.6248 15.1288 19.9075 24.8156 29.7794 34.7658
Merton 0 0 0 0 0 0 4.758 9.758 14.758 19.758 24.758 29.758 34.758
PR 0 0 0 0 0 1.441 5.449 10.017 14.849 19.788 24.768 29.761 34.759
Ritch. 0 0 0 0 0.272 2.278 5.791 10.143 14.888 19.797 24.767 29.76 34.754
Our 0 0 0 0.0008 0.8224 2.8957 6.1924 10.3535 14.9851 19.8395 24.7860 29.7673 34.7611
$error 0.0000 0.0028 0.0533 0.3717 0.5708 0.5789 0.4398 0.2714 0.1437 0.0680 0.0296 0.0121 0.0047
%error
Vol 0.1 0.15 0.2 0.25 0.3 0.35 0.4 0.45 0.5 0.55
Blk-Sch 5.1526 5.8325 6.6322 7.4847 8.3633 9.2555 10.1544 11.0559 11.9574 12.8569
Merton 4.7580 4.7580 4.7580 4.7580 4.7580 4.7580 4.7580 4.7580 4.7580 4.7580
PR 4.7950 5.0290 5.4490 5.9700 6.5370 7.1180 7.6990 8.2680 8.8220 9.3570
Ritch. 4.8630 5.2400 5.7890 6.4360 7.1010 7.7760 8.4570 9.1250 9.7780 10.4240
Our 4.8930 5.4367 6.1924 7.0247 7.8864 8.7601 9.6382 10.5170 11.3945 12.2692
$error 0.2596 0.3958 0.4398 0.4600 0.4769 0.4954 0.5162 0.5389 0.5629 0.5877
%error 5.04 6.79 6.63 6.15 5.70 5.35 5.08 4.87 4.71 4.57
99.78 40.97 16.66 6.63 2.55 0.95 0.34 0.12 0.04 0.01
Rate 0.02 0.04 0.06 0.08 0.1 0.12 0.14 0.16 0.18 0.2 0.22
Blk-Sch 4.4555 4.9600 5.4923 6.0504 6.6322 7.2355 7.8578 8.4965 9.1488 9.8122 10.4841
Merton 0.9901 1.9605 2.9118 3.8442 4.7581 5.6540 6.5321 7.3928 8.2365 9.0635 9.8741
PR 1.7380 2.6940 3.6310 4.5490 5.4490 6.3310 7.1960 8.0430 8.8740 9.6880 10.4870
Ritch. 2.7680 3.5010 4.2490 5.0200 5.7970 6.5810 7.3670 8.1470 8.9350 9.7170 10.4830
Our 3.0243 3.8402 4.6400 5.4240 6.1924 6.9456 7.6839 8.4076 9.1169 9.8122 10.4841
$error 1.4313 1.1198 0.8523 0.6264 0.4398 0.2900 0.1739 0.0889 0.0319 0.0000 0.0000
%error 32.12 22.58 15.52 10.35 6.63 4.01 2.21 1.05 0.35 0.00 0.00
T 0.1 0.2 0.3 0.4 0.5 0.6 0.7
Blk-Sch 1.5187 2.3037 2.9693 3.5731 4.1371 4.6726 5.1862
Merton 0.4975 0.9901 1.4777 1.9605 2.4385 2.9118 3.3803
PR 1.2850 1.8870 2.4000 2.8730 3.3250 3.7640 4.1940
Ritch. 1.3600 2.0240 2.5940 3.0990 3.5830 4.0510 4.5020
Our 1.4976 2.2469 2.8696 3.4266 3.9416 4.4272 4.8909
$error 0.0211 0.0568 0.0997 0.1465 0.1955 0.2454 0.2953
%error 1.39 2.47 3.36 4.10 4.72 5.25 5.69
(continued)
Table 7.1 (continued) 0.8 0.9 1
5.6822 6.1635 6.6322
3.8442 4.3034 4.7581
4.6170 5.0350 5.4490
4.9460 5.3710 5.7890
5.3375 5.7705 6.1924
0.3447 0.3930 0.4398
6.07 6.38 6.63
Mu (µ) 0.1 0.15 0.2 0.25 0.3 0.35 0.4 0.45
Blk-Sch 6.6322 6.6322 6.6322 6.6322 6.6322 6.6322 6.6322 6.6322
Merton 4.7580 4.7580 4.7580 4.7580 4.7580 4.7580 4.7580 4.7580
PR 6.3740 5.8380 5.4490 5.1800 5.0040 4.8940 4.8300 4.7940
Ritch. 6.4310 6.0610 5.7890 5.5820 5.4360 5.3090 5.2130 5.1330
Our 6.6322 6.4703 6.1924 5.8689 5.5572 5.2936 5.0929 4.9536
$error 0.0000 0.1620 0.4398 0.7633 1.0750 1.3386 1.5393 1.6786
%error 0.00 2.44 6.63 11.51 16.21 20.18 23.21 25.31
Note: S is the stock price; vol is the volatility; rate is the risk-free rate; Mu (m) is the expected rate of return of stock; Blk-Sch is the Black-Scholes (1973) solution; Merton is the Merton (1973) model; PR is Perrakis and Ryan (1984) model; Ritch. is the Ritchken (1985) model; Our is our model; $error is error in dollar; %error is error in percentage
The base case parameter values are:
• Strike price = 50
• Volatility = 0.2
• Risk-free rate = 10 %
• Time to maturity = 1 year
• Stock expected return (μ) = 20 %
7.4 Extensions
In addition to a tight lower bound, another major contribution of our model is that it makes no assumption on the distribution of the underlying stock (unlike Lo (1984) and Gotoh and Konno (2002) who require moments of the underlying distribution) or any assumption on interest rates and volatility (unlike Rodriguez (2003) who requires constant interest rates). As a result, our lower bound can be used with models that assume random volatility and random interest rates or any arbitrary specification of the underlying stock. Note that our model needs only the dollar beta of the option and expected payoffs of the stock and the option.15 In this section, we extend our numerical experiment to a model with random volatility and random interest rates. Option models with random volatility and random interest rates can be derived with closed form solutions under the Scott (1997) and Bakshi et al. (1997) The term “dollar beta” is originally from Page 173 of Black (1976). Here we mean bc and br.
15
Table 7.2 Comparison of upper and lower bounds with the Gotoh and Konno (2002) model

S = 40; rate = 6 %; vol = 0.2; t = 1 week
Stk   Lower bound Our   Lower bound GK   Blk-Sch   Upper bound Our   Upper bound GK
30    10.0346           10.0346          10.0346   10.1152           10.0349
35    5.0404            5.0404           5.0404    5.1344            5.0428
40    0.4628            0.3425           0.4658    0.5225            0.5771
45    0.0000            0.0000           0.0000    0.0000            0.0027
50    0.0000            0.0000           0.0000    0.0000            0.0003

S = 40; rate = 6 %; vol = 0.8; t = 1 week
Stk   Lower bound Our   Lower bound GK   Blk-Sch   Upper bound Our   Upper bound GK
30    10.0400           10.0346          10.0401   10.1202           10.1028
35    5.2644            5.0404           5.2663    5.3483            5.4127
40    1.7876            1.2810           1.7916    1.8428            2.2268
45    0.3533            0.0015           0.3548    0.3717            0.5566
50    0.0412            0.0000           0.0419    0.0444            0.1021

S = 40; rate = 6 %; vol = 0.8; t = 12 weeks
Stk   Lower bound Our   Lower bound GK   Blk-Sch   Upper bound Our   Upper bound GK
30    11.9661           10.4125          12.0278   12.7229           12.8578
35    8.7345            6.2980           8.8246    9.4774            9.7658
40    6.2141            3.8290           6.3321    6.8984            7.5165
45    4.3432            2.5271           4.4689    4.9421            6.8726
50    2.9948            1.5722           3.1168    3.4990            4.5786

Note: S is the stock price; Stk is the strike price; vol is the volatility; rate is the risk-free rate; Blk-Sch is the Black-Scholes (1973) solution; GK is the Gotoh and Konno (2002) model; Our is our model; $error is error in dollar; %error is error in percentage
specifications. However, here, given that there is no closed form solution for the covariance our model requires, we shall use Monte Carlo simulation to compute the lower bound. In order to be consistent with the lower bound, we must use the same Monte Carlo paths for the valuation of the option. For the ease of exposition and simplicity, we assume the following joint stochastic processes of the stock price S, the interest rate r, and the volatility V under the actual measure, respectively:

dS = μS dt + √V S dW_1
dr = a(θ − r) dt + v dW_2
dV = ξ V dW_3
(7.21)
where the dW_i are Wiener processes with dW_i dW_j = 0 for i ≠ j, and μ, a, θ, v, and ξ (the volatility coefficient on V) are constants. The processes under the actual measure are used for simulating the lower and upper bounds. The Monte Carlo simulations are performed under the risk-neutral measure in order to compute the option price:

dS = rS dt + √V S dŴ_1
dr = a(θ − r) dt + v dŴ_2
dV = ξ V dŴ_3.
(7.22)
Table 7.3 Comparison of upper and lower bounds with the Rodriguez (2003) model

S     Lower bound Our   Lower bound Rodriguez   Blk-Sch   Upper bound Our   Upper bound Rodriguez
30    0.0221            0                       0.0538    0.1001            0.1806
32    0.0725            0.0000                  0.1284    0.2244            0.3793
34    0.1828            0.0171                  0.2692    0.4451            0.7090
36    0.3878            0.1158                  0.5072    0.7973            1.2044
38    0.7224            0.3598                  0.8735    1.3100            1.8900
40    1.2177            0.7965                  1.3950    2.0044            2.7767
42    1.8982            1.4521                  2.0902    2.8927            3.8619
44    2.7711            2.3329                  2.9676    3.9697            5.1315
46    3.8319            3.4286                  4.0255    5.2211            6.5640
48    5.0709            4.7177                  5.2535    6.6302            8.1337
50    6.4703            6.1724                  6.6348    8.1753            9.8149
52    8.0072            7.7635                  8.1494    9.8327            11.5835
54    9.6574            9.4631                  9.7758    11.5794           13.4187
56    11.3974           11.2462                 11.4933   13.3950           15.3032
58    13.2067           13.0922                 13.2832   15.2621           17.2235
60    15.0683           14.9845                 15.1292   17.1674           19.1693
62    16.9716           16.9101                 17.0179   19.1024           21.1328
64    18.9035           18.8593                 18.9384   21.0573           23.1086
66    20.8559           20.8250                 20.8822   23.0262           25.0926
68    22.8239           22.8020                 22.8429   25.0056           27.0822
70    24.8018           24.7867                 24.8157   26.9915           29.0754

Note: S is the stock price; Blk-Sch is the Black-Scholes (1973) solution; Rodriguez is the Rodriguez (2003) model; Our is our model; $error is error in dollar; %error is error in percentage
To simplify the problem without loss of generality, we assume that investors charge no risk premiums on interest rate risk and volatility risk, i.e., dW_2 = dŴ_2 and dW_3 = dŴ_3. The simulations are done using the following integrals under the actual (top equation) and risk-neutral (bottom equation) measures:

S_t = S_0 exp( ∫_0^t μ_u du − ∫_0^t ½ V_u du + ∫_0^t √V_u dW_u )
Ŝ_t = S_0 exp( ∫_0^t r_u du − ∫_0^t ½ V_u du + ∫_0^t √V_u dŴ_u )    (7.23)
The no-arbitrage price of the call option is computed under the risk-neutral measure as

C = Ê[ exp( −∫_0^t r_u du ) max( Ŝ_t − K, 0 ) ].    (7.24)
Table 7.4 Lower bound under the random volatility and random interest rate model

S     BCC/Scott   Our       $error   %error
25    1.9073      1.1360    0.7713   40.44
30    3.2384      2.4131    0.8253   25.49
35    5.2058      4.1891    1.0167   19.53
40    7.5902      6.5864    1.0039   13.23
45    10.2962     9.5332    0.7630   7.41
50    13.6180     12.8670   0.7510   5.52
55    17.1579     16.5908   0.5671   3.30
60    21.1082     20.5539   0.5543   2.63
65    25.0292     24.7251   0.3040   1.21
70    29.3226     29.0742   0.2484   0.85

Note: S is the stock price; BCC/Scott are the Bakshi et al. (1997) and Scott (1997) models; Our is our model; $error is error in dollar; %error is error in percentage
The bounds are computed under the actual measure. For example,

E[C_t] = E[max{S_t − K, 0}].
(7.25)
Given that dW_i dW_j = 0, we can simulate the interest rates and volatility separately and then simulate the stock price. That is, conditional on known interest rates and volatility, under independence, the stock price is log normally distributed. We perform our simulations using 10,000 paths over 52 weekly periods. The parameters are given as follows:
• Strike K = 50
• Time to maturity T = 1
• Stock expected return μ = 0.2
• Reverting speed a = 0.5
• Reverting level θ = 0.1
• Interest rate volatility v = 0.03
• Initial interest rate r0 = 0.1
• Initial variance V0 = 0.0416
• Volatility on variance = 0.2
Note that implicitly we assume the price of risk for both interest rate process and volatility process to be 0, for simplicity and without loss of generality. The results are shown in Table 7.4. Compared to the model of the Black-Scholes (i.e., the first panel of Table 7.4), the lower bound performs similarly in the random volatility and random interest rate model. Take the base case as an example where the Black-Scholes price is 6.6322, the Bakshi-Cao-Chen/Scott price is 13.6180 as a result of extra uncertainty in the stock price due to random volatility and interest
16 This is so because the initial volatility is 0.2.
rates. The error of the lower bound of our model is 0.7510 in the Bakshi-Cao-Chen/Scott case as opposed to 0.4398 in the Black-Scholes case.17 The percentage error is 5.52 % in the Bakshi-Cao-Chen/Scott case versus 6.63 % in the Black-Scholes case. The in-the-money options have larger percentage errors than those of the out-of-the-money options. The parameters are given as follows:
• Strike price = 50
• Time to maturity = 1
• Stock expected return = 0.2
• Reverting speed = 0.5
• Reverting level = 0.1
• Interest rate volatility = 0.03
• Price of risk = 0
• Initial interest rate = 0.1
• Initial volatility = 0.2
• Volatility on volatility = 0.2
The Monte Carlo paths are 10,000. The stock price, volatility, and interest rate processes are assumed to be independent
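A Monte Carlo sketch of this experiment is given below. It is illustrative only: the function name, the exact parameter mapping, and the use of the average simulated discount factor as a proxy for D_{t,T} are our assumptions, so the output is not meant to reproduce Table 7.4 exactly.

```python
import numpy as np

def simulate_bound(s0=50.0, k=50.0, r0=0.10, v0=0.0416, mu=0.20,
                   a=0.5, theta=0.10, v_vol=0.03, xi=0.2,
                   n_paths=10_000, n_steps=52, T=1.0, seed=6):
    """Simulate stock, short rate and variance with independent shocks; price the
    call under the risk-neutral drift and evaluate the Eq. 7.4 bound under mu."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    r = np.full(n_paths, r0)
    v = np.full(n_paths, v0)
    log_sq = np.full(n_paths, np.log(s0))   # risk-neutral stock path (drift r)
    log_sp = np.full(n_paths, np.log(s0))   # actual-measure stock path (drift mu)
    int_r = np.zeros(n_paths)
    for _ in range(n_steps):
        z1, z2, z3 = rng.standard_normal((3, n_paths))
        int_r += r * dt
        vol = np.sqrt(np.maximum(v, 0.0))
        log_sq += (r - 0.5 * v) * dt + vol * np.sqrt(dt) * z1
        log_sp += (mu - 0.5 * v) * dt + vol * np.sqrt(dt) * z1   # same paths/shocks
        r += a * (theta - r) * dt + v_vol * np.sqrt(dt) * z2
        v += xi * v * np.sqrt(dt) * z3
    sq, sp = np.exp(log_sq), np.exp(log_sp)
    price = np.mean(np.exp(-int_r) * np.maximum(sq - k, 0.0))    # Eq. 7.24
    c_T = np.maximum(sp - k, 0.0)                                 # Eq. 7.25 payoffs
    cov_mat = np.cov(c_T, sp)
    b_c = cov_mat[0, 1] / cov_mat[1, 1]
    disc = np.mean(np.exp(-int_r))      # assumed proxy for the discount factor
    lower = disc * c_T.mean() + b_c * (s0 - disc * sp.mean())
    return price, lower

price, lower = simulate_bound()
print(f"simulated call price: {price:.4f}, lower bound: {lower:.4f}")
```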
7.5 Empirical Study
In this section, we test the lower and upper bounds against data. Charles Cao has generously provided us with the approximated prices of S&P 500 index call option contracts, matched levels of S&P 500 index, and approximated risk-free 90-day T-Bill rates for the period of June 2, 1988 through December 31, 1991.18 For each day, the approximated option prices are calculated as the average of the last bid and ask quotes. Index returns are computed using daily closing levels for the S&P 500 index that are collected and confirmed using data obtained from Standard and Poor’s, CBOE, Yahoo, and Bloomberg.19 The dataset contains 46,540 observations over 901 days (from June 2, 1988, to December 31, 1991). Hence, on average there are over 50 options for various maturities and strikes. The shortest maturity of the dataset is 7 days and the longest is 367 days. 15 % of the data are less than 30 days to maturity, 32 % are between
17 This Black-Scholes case is from the highlighted row in the first panel of Table 7.1.
18 The data are used in Bakshi et al. (1997).
19 The (ex-dividend) S&P 500 index we use is the index that serves as an underlying asset for the option. For option evaluation, realized returns of this index need not be adjusted for dividends unless the timing of the evaluated option contract is correlated with lumpy dividends. Because we use monthly observations, we think that such correlation is not a problem. Furthermore, in any case, this should not affect the comparison of the volatility smile between our model and the Black-Scholes model.
30 and 60 days, 30 % are between 60 and 180 days, and 24 % are more than 180 days to maturity. Hence, these data do not have a maturity bias. The deepest out-of-the-money option is 18.33 % out-of-the-money, and the deepest in-the-money option is 47.30 % in-the-money. Roughly half of the data are at-the-money options (46 % of the data are within 5 % in-the-money and out-of-the-money). 10 % are deep-in-the-money (more than 15 % in-the-money), but less than 1 % of the data are deep-out-of-the-money (more than 15 % out-of-the-money). Hence, the data have a disproportional fraction of in-the-money options. This is clearly a reflection of the bull market in the sample period.

The best way to test the lower and upper bounds derived in this paper is to use a nonparametric, distribution-free model. Note that the lower bound in Theorem 7.1 requires only the expected return of the underlying stock and the covariance between the stock and the option. There is no further requirement for the lower and the upper bounds. In other words, our lower bound model can permit any arbitrary distribution of the underlying stock and any parametric specification of the underlying stock such as random volatility, random interest rates, and jumps. Hence, to best test the bounds with a parsimonious empirical design, we adopt the histogram method introduced by Chen et al. (2006), where the underlying asset is modelled by past realizations, i.e., histograms.

We construct histograms from realizations of S&P 500 index (SPX) returns. We calculate the price on day t of an option that settles on day T using a histogram of S&P 500 index returns for a holding period of T–t, taken from a 5-year window immediately preceding time t.20 For example, an option price on any date is evaluated using a histogram of round[252x/365]-trading-day holding period returns, where x is the number of calendar days to maturity and round[·] rounds to the nearest integer.21 The index levels used to calculate these returns are taken from a window that starts on the 1,260th (= 5 × 252) trading day before the option trading date and ends 1 day before the trading date. Thus, this histogram contains 1,260 round[252x/365]-trading-day return realizations. Formally, we compute the histogram of the (unannualized) returns by the following equation:

R_{t,t+x,i} = ln S_{t−i} − ln S_{t−i−x},
(7.26)
where each i is an observation in time and t is the last i. For example, if t is 1988/06/ 02 and x is 15 calendar days (or ten business days). We further choose our histogram horizon to be 5 years or 1,260 business days. Fifteen business days after 1988/06/02 is 1988/06/17. To estimate a distribution of the stock return for 1988/06/17, we look back a series of 10-business-day returns. Since we choose a 5-year historical window, or 1,260-business-day window, the histogram will contain 20
We use three alternative time windows, 2-year, 10-year, and 30-year, to check the robustness of our procedure and results. 21 The conversion is needed because we use trading-day intervals to identify the appropriate return histograms and calendar-day intervals to calculate the appropriate discount factor.
224
H.-C. Lin et al.
1,260 observations. The first return in the histogram, R1933/06/02,1933/06/07,1, is the difference between the log of the stock price on 1988/06/01, ln St–1, and the log of the stock price 15 calendar days (ten business days) earlier on 1988/05/17, ln St–1–a. The second observation in the histogram, R1933/06/02/1933/06/07,2 is computed as ln St–2–ln St–2–a. After, we complete the histogram of returns, we then convert it to the histogram of prices by multiplying every observation in the histogram by the current stock price: Stþx, i ¼ St Rt, tþx, i :
(7.27)
The expected option payoff is calculated as the average payoff where all the realizations in the histogram are given equal weights. Thus, Et[CT,T,K] and Et[ST] are calculated as 8 1 XN > < Et CT , T , K ¼ max S K, 0 T , i i¼1 N , (7.28) XN > : Et ½ S T ¼ 1 S T , i i¼1 N where N is the total number of realized returns and Ct,T,K is the price observed at time t, of an option that expires at time T with strike price K. Substituting the results in Eq. 7.28 in the approximation pricing formula of Eq. 7.4, we obtain our empirical model: C t ¼ P t , T E t C T , T , K þ bC S t P t , T E t ½ S T 1 XN 1 XN max ST , i K, 0 þ bC St Pt, T S ¼ Pt , T i¼1 i¼1 T , i N N
(7.29)
where the dollar beta is defined as b_C = cov[C_T, S_T]/var[S_T], as in Eq. 7.4. Note that option prices should be based upon projected future volatility levels rather than historical estimates. We assume that investors believe that the distribution of index returns over the time to maturity follows the histogram of a particular horizon with a projected volatility. In practice, traders obtain this projected volatility by calibrating the model to the market price. We incorporate the projected volatility, ν*_{t,T,K}, into the histogram by adjusting its returns:

R*_{t,T,K,i} = (ν*_{t,T,K} / ν_{t,T}) (R_{t,T,i} − R̄_{t,T}) + R̄_{t,T},   i = 1, …, N,
(7.30)
where the historical volatility ν_{t,T} is calculated as the standard deviation of the historical returns as follows:

ν²_{t,T} = (1/(N−1)) Σ_{i=1}^N (R_{t,T,i} − R̄_{t,T})²
where Rt,T,i ¼ ST,i/St and Rt, T ¼ N1 SNi¼1 Rt, T , i is the mean return.
(7.31)
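As a rough illustration of this empirical design, the following Python sketch builds the return histogram from a rolling window, rescales it to a projected volatility, and evaluates the bound price of Eq. 7.29. It is a minimal sketch under simplifying assumptions (a NumPy array of daily index levels, a flat risk-free rate, and a user-supplied implied volatility); the variable names are illustrative rather than taken from the original study, and the sketch exponentiates the log returns of Eq. 7.26 when forming terminal prices.

```python
import numpy as np

def histogram_bound_price(levels, t, x_cal, K, r, implied_vol, window=1260):
    """Histogram-based bound price for a European call (sketch of Eqs. 7.26-7.31)."""
    h = int(round(252 * x_cal / 365))        # holding period in trading days
    tau = x_cal / 365.0                      # time to maturity in years
    S_t = levels[t]

    # Eq. 7.26: h-trading-day log returns from the window ending the day before t
    past = levels[t - window - h:t]
    R = np.log(past[h:]) - np.log(past[:-h])

    # Eq. 7.30: rescale the histogram so its standard deviation matches the
    # projected volatility over the holding period, keeping the mean fixed
    target_sd = implied_vol * np.sqrt(tau)
    R_star = (target_sd / R.std(ddof=1)) * (R - R.mean()) + R.mean()

    # Eq. 7.27 (in spirit): convert the adjusted returns into terminal prices
    S_T = S_t * np.exp(R_star)

    # Eq. 7.28: expected payoff and expected terminal price with equal weights
    C_T = np.maximum(S_T - K, 0.0)
    E_C, E_S = C_T.mean(), S_T.mean()

    # Dollar beta b_C = cov(C_T, S_T) / var(S_T), as used in Eq. 7.4
    b_C = np.cov(C_T, S_T)[0, 1] / S_T.var(ddof=1)

    # Eq. 7.29: approximate option price
    P = np.exp(-r * tau)                     # discount factor P_{t,T}
    return P * E_C + b_C * (S_t - P * E_S)
```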
Note that the transformation from R to R* changes the standard deviation from v_{t,T} to v*_{t,T,K} but does not change the mean, skewness, or kurtosis. In our empirical study, we approximate the true volatility by the Black-Scholes implied volatility. For the upper bound calculations, we also need an expected mean return of the stock; in our empirical study, we simply use the histogram mean.
The selection of the time horizon is somewhat arbitrary. Empiricists know well that too long a horizon dilutes the impact of recent events, while too short a horizon understates long-run effects. Given that there is no consensus on the most proper horizon, we perform our test over a variety of choices, namely, 5-year, 10-year, and 30-year, and the results are similar. To conserve space, we provide the results for the 10-year horizon and leave the others available on request.
The results are shown in Table 7.5. Columns (1) and (2) define the maturity and moneyness buckets. Short maturity is less than 30 days to maturity, medium is between 31 and 90 days, long is between 91 and 180 days, and real long is over 180 days to maturity. At-the-money is between 5 % in-the-money and 5 % out-of-the-money (or ±5 %), near-in-the-money/near-out-of-the-money is between 5 % and 15 %, and deep-in-the-money/deep-out-of-the-money is over 15 %. Moneyness is defined as S/K − 1. Column (3) is a frequency count of the number of observations in each bucket. Column (4) reports the average value of the ratio of the lower bound to the market price of the option. Column (5) shows the number of violations, i.e., cases in which the lower bound is higher than the market price.
Out of the entire sample (46,540 observations), on average, the lower bound is 9.57 % below the market value and the upper bound is 9.28 % above the market value. When we look into subsamples, the performances vary. In general, the lower bound performs better for in-the-money options than for out-of-the-money options and for medium maturities than for other maturities. The best lower bound performance occurs when the option is near-in-the-money with short-term maturity (2.83 % below market value).
To visualize the upper and lower bounds, we plot selected contracts in Fig. 7.1: at-the-money (ATM) options from four maturities, 1 month (Fig. 7.1a), 2 months (Fig. 7.1b), 3 months (Fig. 7.1c), and 6 months (Fig. 7.1d). As we move from short maturity to long maturity, the number of observations drops (51, 38, 21, and 15 observations, respectively), and the bounds become wider, consistent with the analysis in the previous sections. As we can see, in general, the lower bound using histograms is best for in-the-money and short-dated options and worst for out-of-the-money options.
On average, the lower bound is 9.57 % below the market value (the ratio is 90.43 %). However, there are violations. For example, medium-term, near-out-of-the-money options have five violations of the lower bound, and there are a total of 4,233 violations. Theoretically, there should be arbitrage opportunities when the bounds are violated.
Table 7.5 Empirical results on the lower bound

(1) Maturity   (2) Money   (3) Total   (4) % lbdd   (5) Violation
Short          Deep out          0         –             0
Medium         Deep out          0         –             0
Long           Deep out         84       57.26           0
Real long      Deep out        295       61.18           0
Short          Near out        145       65.33           0
Medium         Near out      1,925       69.27           5
Long           Near out      3,554       82.29          30
Real long      Near out      3,317       86.88         271
Short          At            4,106       89.73         492
Medium         At            7,732       92.45         962
Long           At            5,650       92.82         380
Real long      At            3,857       92.61         584
Short          Near in       2,220       97.17         218
Medium         Near in       3,924       97.08         785
Long           Near in       2,951       94.50         234
Real long      Near in       2,018       90.31         130
Short          Deep in         660       92.48           0
Medium         Deep in       1,296       94.52          29
Long           Deep in       1,352       93.14          46
Real long      Deep in       1,454       89.42          67
Total                       46,540       90.43       4,233
Note: 1. This is based upon a 2,520-business-day (10-year) horizon. Results for other horizons are similar and are available on request
2. Columns (1) and (2) define the maturity and moneyness buckets. Short maturity is less than 30 days to maturity, medium is between 31 and 90 days, long is between 91 and 180 days, and real long is over 180 days to maturity. At-the-money is between 5 % in-the-money and 5 % out-of-the-money (or ±5 %), near-in-the-money/near-out-of-the-money is between 5 % and 15 %, and deep-in-the-money/deep-out-of-the-money is over 15 %. Moneyness is defined as S/K − 1. Column (3) is a frequency count of the number of observations in each bucket. Column (4) represents the average value of the ratio of the lower bound to the market price of the option. Column (5) shows the number of violations, in which the lower bound is higher than the market price
3. The best lower bound performance is when the option is near-in-the-money with short-term maturity (2.83 % below market value). The underlying stock return distribution is a 10-year historical return histogram with the volatility replaced by the Black-Scholes implied volatility
To test whether violations of the lower bound indeed imply arbitrage opportunities, we perform a simple buy-and-hold trading strategy: if the lower bound is violated, we buy the option and hold it until maturity. For the 4,233 (out of 46,540) violations, the buy-and-hold strategy generated $22,809, or an average of $5.39 per contract. Given that a buy-and-hold strategy can be profitable simply because of the bull market, we compute the same figure for the options that had no violation of the lower bound. For the 42,307 cases that violated no lower bound, the average profit is $1.83. Hence, the options that violated the lower bound yield a trading profit roughly 200 % above the average.
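The violation-trading exercise can be summarized in a few lines of code. The sketch below is a hypothetical illustration, not the authors' implementation: given per-contract market prices, computed lower bounds, and realized payoffs at expiry, it buys every contract whose market price sits below the lower bound, holds to expiry, and compares the average profit with that of the non-violating contracts. Discounting is ignored for simplicity.

```python
import numpy as np

def violation_trading_profits(market_price, lower_bound, terminal_payoff):
    """Average buy-and-hold profit for violating vs. non-violating contracts (sketch)."""
    market_price = np.asarray(market_price)
    lower_bound = np.asarray(lower_bound)
    terminal_payoff = np.asarray(terminal_payoff)

    violated = lower_bound > market_price        # lower bound above the market price
    profit = terminal_payoff - market_price      # payoff at expiry minus purchase price
    return profit[violated].mean(), profit[~violated].mean()
```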
Fig. 7.1 (a) Plot of 30-day maturity actual ATM option prices and its upper and lower bound values. (b) Plot of 60-day maturity actual ATM option prices and its upper and lower bound values. (c) Plot of 91-day maturity actual ATM option prices and its upper and lower bound values. (d) Plot of 182-day maturity actual ATM option prices and its upper and lower bound values
7.6 Conclusion
In this paper, we derive a new and tighter lower bound for European option prices compared with those of previous studies. We further reinterpret Ritchken's (1985) upper bound under a nonparametric framework. Our model contributes to the literature in two ways. First, our bounds require no parametric assumption on the underlying stock or on the moments of its distribution; furthermore, they require no assumptions on interest rates or volatility. The only requirements of our model are the dollar beta of the option and the expected payoffs of the stock and the option. Hence, our bounds can be applied to any model, such as the random volatility and random interest rate models of Bakshi et al. (1997) and Scott (1997). Second, despite much looser and more flexible assumptions, our bounds are significantly tighter than the existing upper and lower bound models. Most importantly, our bounds are tighter for the out-of-the-money options that cannot be bounded efficiently by previous models. Finally, we apply our model to real data using histograms of realized stock returns. The results show that nearly 10 % of the observations violate the lower bound. These violations are shown to generate significant arbitrage profits, after correcting for the bull market in the sample period.
Appendix 1
In this appendix, we prove Theorem 7.1. Without loss of generality, we prove Theorem 7.1 with a three-point convex function; the extension of the proof to multiple points is straightforward but tedious. Let the distribution be trinomial and let the relationship between the pricing kernel M and the stock price S be convex. In the following table, x ≥ 0 and y > e ≥ 0. Probability p², 2p(1 − p), (1 − p)²
M M+x M Mx
S S+y>K Se 0:
As a result, it is straightforward to show that cov½S; C 2p2 ð1 pÞðS þ y K Þðy þ peÞ cov½M; S ¼ 2pð1 p xðy þ eð2p 1 var½S 2pð1 pÞz ðy þ peÞðy þ eð2p 1ÞÞ ¼ 2p2 ð1 pÞxðS þ y K z
e ð 1 p Þðy eÞ 2 ¼ 2p ð1 pÞxðS þ y K 1 þ z 2 2p ð1 pÞxðS þ y K ¼: cov½M, C:
The fourth line is obtained because cov[M, C] < 0 and 1 + e(1 − p)(y − e)/z > 1. Note that the result is independent of p, since all that is needed is 0 < p < 1 for 1 + e(1 − p)(y − e)/z to be greater than 1. Also note that when e = 0 the equality holds.
References Bakshi, G., Cao, C., & Chen, Z. (1997). Empirical performance of alternative option pricing models. Journal of Finance, 52, 2003–2049. Basso, A., & Pianca, P. (1997). Decreasing absolute risk aversion and option pricing bounds. Management Science, 43, 206–216. Black, F. (1976). The pricing of commodity contacts. Journal of Financial Economics, 3, 167–179. Black, F., & Scholes, M. (1973). The pricing of options and corporate liabilities. Journal of Political Economy, 81, 637–654. Boyle, P., & Lin, X. (1997). Bounds on contingent claims based on several assets. Journal of Financial Economics, 46, 383–400. Broadie, M., & Cao, M. (2008). Improved lower and upper bound algorithms for pricing American options by simulation. Quantitative Finance, 8, 845–861. Chen, R. R., Lin, H. C., & Palmon, O. (2006). Explaining the volatility smile: Reduced form versus structural option models. 2006 FMA Annual Meeting. Salt Lake City, UT. Christoffersen, P., Elkamhi, R., Feunou, B., & Jacobs, K. (2010). Option valuation with conditional heteroskedasticity and nonnormality. Review of Financial Studies, 23, 2139–2189. Chuang, H., Lee, C. F. & Zhong, Z. K. (2011). Option bounds: A review and comparison (Working paper). Chung, S. L., Hung, M. W., & Wang, J. Y. (2010). Tight bounds on American option prices. Journal of Banking and Finance, 34, 77–89. Constantinides, G., & Zariphopoulou, T. (2001). Bounds on derivative prices in an intertemporal setting with proportional transaction costs and multiple securities. Mathematical Finance, 11, 331–346. Constantinides, G. M., Jackwerth, J. C., & Perrakis, S. (2009). Mispricing of S&P 500 index options. Review of Financial Studies, 22, 1247–1277. Constantinides, G. M., Czerwonko, M., Jackwerth, J. C., & Perrakis, S. (2011). Are options on index futures profitable for risk-averse investors? Empirical evidence. Journal of Finance, 66, 1407–1437. Cox, J., & Ross, S. (1976). The valuation of options for alternative stochastic process. Journal of Finance Economics, 3, 145–166. De La Pena, V., Ibragimov, R., & Jordan, S. (2004). Option bounds. Journal of Applied Probability, 41, 145–156. Frey, R., & Sin, C. (1999). Bounds on European option prices under stochastic volatility. Mathematical Finance, 9, 97–116. Gotoh, J. Y., & Konno, H. (2002). Bounding option prices by semidefinite programming: A cutting plane algorithm. Management Science, 48, 665–678. Grundy, B. (1991). Option prices and the underlying assets return distribution. Journal of Finance, 46, 1045–1069. Hobson, D., Laurence, P., & Wang, T. H. (2005). Static-arbitrage upper bounds for the prices of basket options. Quantitative Finance, 5, 329–342. Huang, J. (2004). Option pricing bounds and the elasticity of the pricing kernel. Review of Derivatives Research, 7, 25–51. Ingersoll, J. (1989). Theory of financial decision making. Totowa, New Jersey: Rowman & Littlefield.
Levy, H. (1985). Upper and lower bounds of put and call option value: Stochastic dominance approach. Journal of Finance, 40, 1197–1217. Lo, A. (1987). Semi-parametric upper bounds for option prices and expected payoff. Journal of Financial Economics, 19, 373–388. Merton, R. (1973). The theory of rational option pricing. The Bell Journal of Economics and Management Science, 4, 141–183. Pen˜a, J., Vera, J. C., & Zuluaga, L. F. (2010). Static-arbitrage lower bounds on the prices of basket options via linear programming. Quantitative Finance, 10, 819–827. Perrakis, S. (1986). Option pricing bounds in discrete time: Extensions and the pricing of the American put. Journal of Business, 59, 119–141. Perrakis, S., & Ryan, J. (1984). Option pricing in discrete time. Journal of Finance, 39, 519–525. Ritchken, P. (1985). On option pricing bounds. Journal of Finance, 40, 1219–1233. Ritchken, P., & Kuo, S. (1988). Option bounds with finite revision opportunities. Journal of Finance, 43, 301–308. Ritchken, P., & Kuo, S. (1989). On stochastic dominance and decreasing absolute risk averse option pricing bounds. Management Science, 35, 51–59. Rodriguez, R. (2003). Option pricing bounds: Synthesis and extension. Journal of Financial Research, 26, 149–164. Rubinstein, M. (1976). The valuation of uncertain income stream and the pricing of options. The Bell Journal of Economics, 7, 407–425. Scott, L. (1997). Pricing stock options in a jump diffusion model with stochastic volatility and interest rates: Applications of Fourier inversion methods. Mathematical Finance, 7, 413–424. Zhang, P. (1994). Bounds for option prices and expected payoffs. Review of Quantitative Finance and Accounting, 4, 179–197.
8 Can Time-Varying Copulas Improve the Mean-Variance Portfolio?
Chin-Wen Huang, Chun-Pin Hsu, and Wan-Jiun Paul Chiou
Contents
8.1 Introduction 234
8.2 Literature Review 235
8.3 Empirical Methods 236
8.3.1 Copulas 236
8.3.2 Copula Specifications 237
8.3.3 Gaussian Copula 237
8.3.4 Student's t-Copula 237
8.3.5 Archimedean Copulas 238
8.3.6 Portfolio Construction 239
8.4 Data 240
8.5 Empirical Results 240
8.5.1 Dependence 240
8.5.2 Average Portfolio Returns 241
8.5.3 Testing Significance of Return Difference 243
8.6 Conclusions 246
Appendix 1 247
References 250
C.-W. Huang
Department of Finance, Western Connecticut State University, Danbury, CT, USA
e-mail: [email protected]
C.-P. Hsu (*)
Department of Accounting and Finance, York College, The City University of New York, Jamaica, NY, USA
e-mail: [email protected]
W.-J.P. Chiou
Department of Finance and Law, College of Business Administration, Central Michigan University, Mount Pleasant, MI, USA
e-mail: [email protected]
Abstract
Research in structuring asset return dependence has become an indispensable element of wealth management, particularly after the experience of the recent financial crises. In this paper, we evaluate whether constructing a portfolio using time-varying copulas yields superior returns under various weight updating strategies. Specifically, minimum-risk portfolios are constructed based on various copulas and the Pearson correlation, and a 250-day rolling window technique is adopted to derive a sequence of time-varying dependencies for each dependence model. Using daily data on the G7 countries, our empirical findings suggest that portfolios using time-varying copulas, particularly the Clayton dependence, outperform those constructed using Pearson correlations. The above results still hold under different weight updating strategies and portfolio rebalancing frequencies.

Keywords

Copulas • Time-varying dependence • Portfolio optimization • Bootstrap • Out-of-sample return • Performance evaluation • GARCH • Gaussian copula • Student's t-copula • Gumbel copula • Clayton copula
8.1 Introduction
The return of a portfolio depends heavily on its asset dependence structure. Over the past decade, copula modeling has become a popular alternative to Pearson correlation modeling when describing data with an asymptotic dependence structure and a non-normal distribution.1 However, several critical issues attached to the applications of copulas emerge: Do portfolios using time-varying copulas outperform those constructed with Pearson correlations? How does the risk return of copula-based portfolios change over the business cycle? The estimation of parameters has become particularly critical for finance academics and professionals on the heels of the recent financial crises. In this chapter, we model the time-varying dependence of an international equity portfolio using several copula functions and the Pearson correlation. We investigate whether a portfolio constructed with copula dependence yields superior returns as compared to a portfolio constructed using a Pearson correlation under various weight updating strategies. This paper extends the existing literature in two ways. First, we estimate our time-varying copulas using a rolling window of the latest 250 trading days. It is well accepted that the dependencies between asset returns are time varying (Kroner and Ng 1998; Ang and Bekaert 2002). Differing from the regime-switching type used in Rodriguez (2007) and Okimoto (2008) or the time-evolving GARCH-type model used in Patton (2006a), we estimate time-varying copulas via a rolling window
1 See Chan et al. (1999), Dowd (2005), Patton (2006a), Engle and Sheppardy (2008), Chollete et al. (2009), Bauer and Vorkink (2011).
based on daily data from the previous year. The rolling window method has several benefits. First, it is a method frequently adopted by practitioners. Second, the rolling window method considers only the past year’s information when forming dependencies, thus avoiding disturbances that may have existed in the distant past. Several studies have applied this technique, such as Aussenegg and Cech (2011). However, Aussenegg and Cech (2011) considered only the daily Gaussian and Student’s t-copulas in constructing models, and it is reasonable to consider monthly and quarterly frequencies, given that portfolio managers do not adjust their portfolios on a daily basis. Our research also extends Aussenegg and Cech’s (2011) study by including the Archimedean copulas to govern the strength of dependence. Second, our study investigates how the choice of copula functions affects portfolio performance during periods of economic expansion and recession. The expansion and recession periods we define are based on announcements from the National Bureau of Economic Research (NBER). While the use of copula functions in financial studies has grown enormously, little work has been done in comparing copula dependencies under different economic states. Using daily US dollar-denominated Morgan Stanley Capital International (MSCI) indices of the G7 countries, our empirical results suggest that the copula-dependence portfolios outperform the Pearson-correlation portfolios. The Clayton-dependence portfolios, for most scenarios studied, deliver the highest portfolio returns, indicating the importance of lower-tail dependence in building an international equity portfolio. Moreover, the choice of weight updating frequency matters. As we increase the weight updating frequency from quarterly to monthly, the portfolio returns for the full sample and recession periods also increase, regardless of the choice of dependence measure. Our finding supports the value of active portfolio reconstruction during recession periods. This paper is organized as follows. Section 8.2 reviews the literature on copula applications in portfolio modeling. Section 8.3 describes the empirical models. Section 8.4 presents the data used. The main empirical results are reported in Sect. 8.5. Section 8.6 concludes.
8.2 Literature Review
Copulas, implemented in either static or time-varying fashion, are frequently seen in options pricing, risk management, and portfolio selection. In this section, we review some copula applications in portfolio selection. Patton (2006a) pioneered time-varying copulas by modifying the copula functional form to allow its parameters to vary. Patton (2006a) used conditional copulas to examine asymmetric dependence in daily Deutsche mark (DM)/US dollar (USD) and Japanese yen (Yen)/US dollar (USD) exchange rates. His empirical results suggest that the correlation between DM/USD and Yen/USD exchange rates is stronger when the DM and Yen are depreciating against the dollar. Hu (2006) adopted a mixture of a Gaussian copula, a Gumbel copula, and a Gumbel survival copula to examine the various dependence structures of four stock indices. His results demonstrate the underestimation problem
due to multivariate normality correlations and the importance of incorporating both the structure and the degree of dependence into the portfolio evaluation. Kole et al. (2007) compared the Gaussian, Student’s t, and Gumbel copulas to illustrate the importance of selecting an appropriate copula to manage the risk of a portfolio composed of stocks, bonds, and real estate. Kole et al. (2007) empirically demonstrated that the Student’s t-copula, which considers the dependence both in the center and the tail of the distribution, provides the best fit for the extreme negative returns of the empirical probabilities under consideration. Rodriguez (2007) studied financial contagions in emerging markets using switching Frank, Gumbel, Clayton, and Student’s t-copulas. Rodriguez (2007) found evidence that the dependence structures between assets change during a financial crisis and that a good asset allocation strategy should allow dependence to vary with time. Chollete et al. (2009) modeled asymmetric dependence in international equity portfolios using a regime-switching, canonical vine copula approach, which is a branch of the copula family first described by Aas et al. (2007). Chollete et al. (2009) documented that the canonical vine copula provides better portfolio returns and that the choice of copula dependencies affects the VaR of the portfolio return. Chollete et al. (2011) investigated international diversification benefits using the Pearson correlation and six copula functions. Their results show that dependence increases over time and that the intensity of the asymmetric dependence varies in different regions of the world. While some existing studies have applied copulas to the optimization of portfolio selection, most have tended to focus on portfolio risks, i.e., value at risk, rather than portfolio returns. Empirically, however, investors pay at least equal attention to portfolio returns. Our study is among the few that have focused on equity portfolio returns using time-varying copulas.
8.3 Empirical Methods

8.3.1 Copulas
A copula C is a function that links univariate distribution functions into a multivariate distribution function. Let F be an n-dimensional joint distribution function and let U = (u_1, u_2, . . ., u_n)^T be a vector of n random variables with marginal distributions F_1, F_2, . . ., F_n. According to Sklar's (1959) theorem, if the marginal distributions F_1, F_2, . . ., F_n are continuous, then a copula C exists, where C is a multivariate distribution function with all uniform (0,1) marginal distributions.2 That is,

$$F(u_1, u_2, \ldots, u_n) = C\left(F_1(u_1), F_2(u_2), \ldots, F_n(u_n)\right), \quad \text{for all } u_1, u_2, \ldots, u_n \in \mathbb{R}^n. \qquad (8.1)$$
2 For detailed derivations, please refer to Cherubini et al. (2004), Demarta and McNeil (2005), Embrechts et al. (2003), Embrechts et al. (2005), Franke et al. (2008), Nelson (2006), and Patton (2009).
Table 8.1 The characteristics of different copulas

Dependence model      Tail dependence      Parameter range
Pearson correlation   No                   ρ ∈ (−1, 1)
Gaussian copula       No                   ρ ∈ (−1, 1)
Student's t-copula    Yes (symmetry)       ρ ∈ (−1, 1), v > 2
Gumbel copula         Yes (upper tail)     δ ∈ (0, 1]
Clayton copula        Yes (lower tail)     α ∈ [−1, ∞)\{0}
For a bivariate case, the model can be defined as

$$F(x, y) = C\left(F_X(x), F_Y(y)\right)$$
(8.2)

8.3.2 Copula Specifications
In this paper, we consider four copula functions: the Gaussian, Student's t, Gumbel, and Clayton. The Gaussian copula focuses on the center of the distribution and assumes no tail dependence. The Student's t-copula stresses both the center of the distribution and symmetric tail behaviors. The Clayton copula emphasizes lower-tail dependence, while the Gumbel copula focuses on upper-tail dependence. Table 8.1 summarizes the characteristics of each copula in detail.
8.3.3 Gaussian Copula
The Gaussian copula is frequently seen in the finance literature due to its close relationship to the Pearson correlation. It represents the dependence structure of two normal marginal distributions. According to Nelson (2006), the bivariate Gaussian copula can be expressed as

$$C(x, y) = \int_{-\infty}^{\Phi^{-1}(x)} \int_{-\infty}^{\Phi^{-1}(y)} \frac{1}{2\pi\sqrt{1-\rho^{2}}} \exp\!\left\{-\frac{s^{2} - 2\rho s t + t^{2}}{2\left(1-\rho^{2}\right)}\right\} ds\, dt = \Phi_{\rho}\left(\Phi^{-1}(x), \Phi^{-1}(y)\right),$$
(8.3)
where Φ denotes the univariate standard normal distribution function and Φ_ρ is the joint distribution function of the bivariate standard normal distribution with correlation coefficient −1 ≤ ρ ≤ 1. The Gaussian copula has no tail dependence unless ρ = 1.
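For concreteness, the bivariate Gaussian copula of Eq. 8.3 can be evaluated directly from the bivariate normal distribution function. The following sketch uses SciPy; it is an illustration only, not code from the chapter, and the function name is ours.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

def gaussian_copula(u, v, rho):
    """C(u, v) = Phi_rho(Phi^{-1}(u), Phi^{-1}(v)) for u, v in (0, 1)."""
    z = np.array([norm.ppf(u), norm.ppf(v)])
    cov = np.array([[1.0, rho], [rho, 1.0]])
    return multivariate_normal(mean=[0.0, 0.0], cov=cov).cdf(z)

# Example: joint probability of both margins under a correlation parameter of 0.5
print(gaussian_copula(0.3, 0.7, rho=0.5))
```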
8.3.4 Student's t-Copula
Unlike the Gaussian copula, which fails to capture tail behaviors, the Student’s t-copula depicts the dependence in the center of the distribution as well as in the tails. The Student’s t-copula is defined using the multivariate t distribution and can be written as
$$C^{t}_{v,\rho}(x, y) = \int_{-\infty}^{t_{v}^{-1}(x)} \int_{-\infty}^{t_{v}^{-1}(y)} \frac{1}{2\pi\sqrt{1-\rho^{2}}} \left(1 + \frac{s^{2} - 2\rho s t + t^{2}}{v\left(1-\rho^{2}\right)}\right)^{-\frac{v+2}{2}} ds\, dt = t^{2}_{v,\rho}\left(t_{v}^{-1}(x), t_{v}^{-1}(y)\right),$$
(8.4)
where t²_{v,ρ} denotes the bivariate joint t distribution, t_v^{-1} is the inverse of the univariate t distribution function, v is the degrees of freedom, and ρ is the correlation coefficient of the bivariate t distribution when v > 2.
8.3.5 Archimedean Copulas
According to Embrechts et al. (2005), the coefficient of upper-tail dependence (λ_u) of two series, X and Y, can be defined as

$$\lambda_{u}(x, y) = \lim_{q \to 1^{-}} P\left[Y > F_{Y}^{-1}(q) \mid X > F_{X}^{-1}(q)\right]. \qquad (8.5)$$

The upper-tail dependence gives the probability that Y exceeds its qth quantile given that X exceeds its qth quantile, taking the limit as q approaches 1. If the limit λ_u ∈ [0, 1] exists, then X and Y are said to show upper-tail dependence. In the same manner, the coefficient of lower-tail dependence (λ_l) of X and Y is described as

$$\lambda_{l}(x, y) = \lim_{q \to 0^{+}} P\left[Y \le F_{Y}^{-1}(q) \mid X \le F_{X}^{-1}(q)\right]. \qquad (8.6)$$
Since both F_X and F_Y are continuous distribution functions, the upper-tail dependence can be presented as

$$\lambda_{u} = \lim_{q \to 1^{-}} \frac{P\left[Y > F_{Y}^{-1}(q),\; X > F_{X}^{-1}(q)\right]}{P\left[X > F_{X}^{-1}(q)\right]}. \qquad (8.7)$$

For lower-tail dependence, it can be described as

$$\lambda_{l} = \lim_{q \to 0^{+}} \frac{P\left[Y \le F_{Y}^{-1}(q),\; X \le F_{X}^{-1}(q)\right]}{P\left[X \le F_{X}^{-1}(q)\right]}. \qquad (8.8)$$
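Equations 8.5-8.8 also suggest a simple empirical counterpart: count joint exceedances of a high (or low) quantile and divide by the marginal exceedance count. The sketch below is one such nonparametric estimator, assuming the inputs are equally long return series; the cutoff level q is a user choice and the function name is illustrative.

```python
import numpy as np

def empirical_tail_dependence(x, y, q=0.95):
    """Empirical upper- and lower-tail dependence at quantile level q (sketch)."""
    x, y = np.asarray(x), np.asarray(y)
    x_hi, y_hi = np.quantile(x, q), np.quantile(y, q)
    x_lo, y_lo = np.quantile(x, 1 - q), np.quantile(y, 1 - q)
    # lambda_u: P(Y > F_Y^{-1}(q), X > F_X^{-1}(q)) / P(X > F_X^{-1}(q))
    lam_u = np.mean((x > x_hi) & (y > y_hi)) / np.mean(x > x_hi)
    # lambda_l: P(Y <= F_Y^{-1}(1-q), X <= F_X^{-1}(1-q)) / P(X <= F_X^{-1}(1-q))
    lam_l = np.mean((x <= x_lo) & (y <= y_lo)) / np.mean(x <= x_lo)
    return lam_u, lam_l
```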
8.3.5.1 Gumbel Copula
The Gumbel copula is a popular upper-tail dependence measure, as suggested in Embrechts et al. (2005). The Gumbel copula can be written as

$$C(x, y) = \exp\!\left\{-\left[(-\ln x)^{1/\delta} + (-\ln y)^{1/\delta}\right]^{\delta}\right\}, \qquad (8.9)$$
where 0 < δ ≤ 1 measures the degree of dependence between X and Y. When δ = 1, X and Y do not have upper-tail dependence (i.e., X and Y are independent in the upper tails), and when δ → 0, X and Y have perfect dependence.
8.3.5.2 Clayton Copula
The Clayton copula is used to measure lower-tail dependence. The Clayton copula is defined as

$$C(x, y) = \max\!\left[\left(x^{-\alpha} + y^{-\alpha} - 1\right)^{-1/\alpha},\; 0\right], \qquad (8.10)$$

where α describes the strength of dependence. If α → 0, X and Y do not have lower-tail dependence. If α → ∞, X and Y have perfect dependence.
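Under the chapter's parameterizations, both Archimedean copulas are one-line functions. The sketch below assumes 0 < δ ≤ 1 for the Gumbel copula and α > 0 for the Clayton copula (for positive α the max(·, 0) in Eq. 8.10 never binds); it is an illustrative implementation, not the authors' code.

```python
import numpy as np

def gumbel_copula(x, y, delta):
    """Eq. 8.9 with 0 < delta <= 1; delta = 1 gives independence."""
    return np.exp(-((-np.log(x)) ** (1.0 / delta)
                    + (-np.log(y)) ** (1.0 / delta)) ** delta)

def clayton_copula(x, y, alpha):
    """Eq. 8.10 for alpha > 0; alpha -> 0 removes lower-tail dependence."""
    return np.maximum((x ** (-alpha) + y ** (-alpha) - 1.0) ** (-1.0 / alpha), 0.0)
```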
8.3.6 Portfolio Construction
The selection of optimal portfolios draws on the seminal work of Markowitz (1952). Specifically, we adopt the variance-minimization strategy under the assumptions of no short selling and no transaction costs.3 An optimal portfolio allocation can be found by solving the following optimization problem:

$$\min_{w}\; w' V w \quad \text{subject to} \quad \sum_{i} w_{i} = 1,\; w_{i} \ge 0,$$
(8.11)
where w_i is the weight of asset i and V is the covariance matrix of asset returns. Because dependence is a time-varying parameter, the data from a subset of 250 trading days prior to the given sample date t are used to derive the dependence for date t. With 1,780 daily data points in our sample, we calculate a total of 1,531 dependencies for each copula method and the Pearson correlation. With these dependencies, optimal portfolio weightings can be obtained by solving a quadratic function subject to the specified constraints. The optimal weightings for time t are used to calculate the realized portfolio returns for (t + 1).4 In practice, portfolio managers periodically reexamine and update the optimal weights of their portfolios. If the asset allocation of an existing portfolio has deviated from the target allocation to a certain degree and if the benefit of updating exceeds its costs, a portfolio reconstruction action is executed. In this paper, we construct a comprehensive study of portfolio returns by varying the state of
3 Short selling usually involves other service fees, which vary depending on the creditability of the investors. Because the focus of this study is on the effect of the dependence structure on portfolio performance, we assume that short selling is not allowed to simplify the comparison.
4 For example, we use return data from t_1 to t_250 to calculate the optimal portfolio weights with dependencies estimated from the copulas and the Pearson correlation. The optimal portfolio weights are applied to the return data at t_251 to calculate the realized portfolio returns.
Table 8.2 The summary statistics of the G7 indices

              Canada    France    Germany   Italy     Japan     UK        USA
Mean (%)      0.0301    0.0064    0.0082    0.0003    0.0017    0.0039    0.0082
Std. dev.     0.0164    0.0171    0.0178    0.0161    0.0155    0.0159    0.0144
Skewness      0.8781    0.0740    0.0666    0.0477    0.1475    0.0535    0.1365
Kurtosis      14.1774   10.7576   8.6920    12.9310   7.4592    12.9143   12.1182
Jarque-Bera   9494      4465      2404      7315      1481      7290      6171
JB P-value    0.0000    0.0000    0.0000    0.0000    0.0000    0.0000    0.0000
Observations  1780      1780      1780      1780      1780      1780      1780
The results indicate that the daily returns of the G7 indices are not normally distributed
the economy (i.e., expansion or recession), the dependence structure, and the frequency of weight updating, i.e., quarterly, monthly, and daily. Quarterly updating allows investors to update the optimal weights on the first trading days of March, June, September, and December; monthly updating allows investors to update the optimal weights on the first trading days of each month. Under daily updating, investors update the optimal weights every trading day.
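A minimal version of the weight-calculation step in Eq. 8.11, together with the rolling scheme described in footnote 4, can be sketched as follows. The covariance matrix V is whatever the chosen dependence model (a copula or the Pearson correlation) implies for the 250-day window; this sketch simply takes a covariance estimator as an input, and the function names are ours.

```python
import numpy as np
from scipy.optimize import minimize

def min_variance_weights(V):
    """Solve min_w w'Vw subject to sum(w) = 1 and w >= 0 (Eq. 8.11), as a sketch."""
    n = V.shape[0]
    w0 = np.full(n, 1.0 / n)
    cons = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)
    bounds = [(0.0, 1.0)] * n
    res = minimize(lambda w: w @ V @ w, w0, bounds=bounds, constraints=cons)
    return res.x

def realized_portfolio_returns(returns, cov_estimator, window=250):
    """Weights from the window ending at t are applied to the return at t + 1."""
    T = returns.shape[0]
    out = []
    for t in range(window, T):
        V = cov_estimator(returns[t - window:t])   # copula- or Pearson-based estimate
        w = min_variance_weights(V)
        out.append(returns[t] @ w)
    return np.array(out)
```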
8.4 Data
The data are the US dollar-denominated daily returns of the Morgan Stanley Capital International (MSCI) indices for the G7 countries, including Canada, France, Germany, Italy, Japan, the UK, and the USA. The sample period covers from the first business day in June 2002 to the last business day in June 2009, for a total of 1,780 daily observations. Based on the definitions provided by the National Bureau of Economic Research, we separate the data into an expansion period from June 2002 to November 2007 and a recession period from December 2007 to June 2009. Table 8.2 presents the descriptive statistics. Among the G7 countries, Canada had the highest daily returns, while the USA had the lowest. Germany, however, experienced the most volatile returns. All return series exhibit high kurtosis, suggesting fat tails on return distributions. The results of the Jarque-Bera test reject the assumption that the G7 indices have normal distributions.
8.5 Empirical Results

8.5.1 Dependence
Using 1,780 daily data points from the G7 countries, for each dependence model, we estimate 21 dependence pairs, each containing a sequence of 1,531 dependencies. The parameters for the Gaussian, Student’s t, Gumbel, and Clayton copula functions are estimated using the two-stage inference for the margins (IFM) method proposed by Joe and Xu (1996) and Joe (1997).
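The two-stage IFM idea can be sketched as follows: first transform each return series to (approximate) uniforms using an estimated marginal, and then estimate the copula parameter by maximum likelihood on those uniforms. The sketch below uses empirical-CDF margins and a bivariate Clayton log-likelihood as a stand-in; the chapter's actual marginal models and optimizer settings are not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def to_uniform(x):
    """Stage 1 (sketch): probability-integral transform via the empirical CDF."""
    ranks = np.argsort(np.argsort(x)) + 1
    return ranks / (len(x) + 1.0)

def clayton_loglik(alpha, u, v):
    """Log-likelihood of the bivariate Clayton copula density for alpha > 0."""
    s = u ** (-alpha) + v ** (-alpha) - 1.0
    logc = (np.log(1.0 + alpha)
            - (alpha + 1.0) * (np.log(u) + np.log(v))
            - (2.0 + 1.0 / alpha) * np.log(s))
    return logc.sum()

def fit_clayton_ifm(x, y):
    """Stage 2 (sketch): maximize the copula likelihood over alpha."""
    u, v = to_uniform(x), to_uniform(y)
    res = minimize_scalar(lambda a: -clayton_loglik(a, u, v),
                          bounds=(1e-4, 20.0), method="bounded")
    return res.x
```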
The dependencies from the Pearson correlation are calculated using the standard method. Appendix 1 shows the maximum and the minimum of the 21 dependence pairs of each dependence model. The graphs in Fig. 8.1 show the dependencies between the USA and other countries as estimated via the various copulas and the Pearson correlation. In general, the Gaussian copula estimation is similar to that of the corresponding Pearson correlation, but the Student’s t-copulas show significant jumps over time. For our sample period, Japan shows a low dependence with the US market as compared to other economies.
8.5.2 Average Portfolio Returns
Table 8.3 presents the average portfolio returns for the full sample period, the expansion period, and the recession period under the quarterly, monthly, and daily weight updating strategies. Under quarterly weight updating, the Clayton-dependence portfolios have the highest average returns, at 6.07 % in the expansions and 12.52 % in the recessions, and the Pearson-correlation portfolios have the lowest average returns, 5.48 % in the expansions and 14.25 % in the recessions. The ranking of portfolio performance by dependence model, regardless of the state of the economy, is as follows: the Clayton copula, the Gumbel copula, the Student's t-copula, the Gaussian copula, and the Pearson correlation. Because both the Clayton and Gumbel copulas highlight the tail dependence between assets, the empirical evidence suggests that with a quarterly weighting strategy, tail dependence, particularly lower-tail dependence, is important for obtaining superior average portfolio returns across different states of the economy.
As we increase the updating frequency from quarterly to monthly, similar empirical results are observed. That is, the Clayton-copula portfolios yield the highest average returns, while the Pearson-correlation portfolios provide the lowest average returns. In the expansion periods, the ranking of portfolio performance by dependence model is: the Clayton copula, the Student's t-copula, the Gaussian copula, the Gumbel copula, and the Pearson correlation. In the recession periods, the ranking is: the Clayton copula, the Gumbel copula, the Student's t-copula, the Gaussian copula, and the Pearson correlation. According to Kole et al. (2007), the Gaussian copula, which does not consider lower-tail dependence, tends to be overly optimistic about a portfolio's diversification benefits, and the Gumbel copula, which focuses on the upper tail and pays no attention to the center of the distribution, tends to be overly pessimistic about them. We verify this argument by observing that the Gumbel-copula portfolio outperforms only the Pearson-correlation portfolio in the expansion periods, while the Gaussian-copula-dependence portfolio outperforms only the Pearson-correlation portfolio in the recession periods.
Fig. 8.1 The dependence using different copulas and Pearson correlation. Panel (a) USA versus Canada. Panel (b) USA versus France. Panel (c) USA versus Germany. Panel (d) USA versus Italy. Panel (e) USA versus Japan. Panel (f) USA versus UK
Interestingly, as we increase the weight updating frequency from quarterly to monthly, the average portfolio returns for the full sample and recession periods also increase, regardless of the choice of dependence measure. Thus, the empirical results seem to support the need for active portfolio reconstruction during recessions. As the weight updating frequency increases to daily, the Clayton copula delivers the highest average portfolio returns only during the expansion period. The Student's t-copula, by contrast, generates the highest average portfolio returns for the full sample and recession periods. The influence of the lower-tail dependence seems to diminish under daily weight reconstruction. The Gaussian-copula portfolio delivers the worst portfolio performance in both expansion and recession periods.
8.5.3 Testing Significance of Return Difference
The results reported in the previous section show the average portfolio returns for different dependencies and weight updating frequencies. One issue with average returns is that if extreme values exist over the examined period, the empirical results may be biased and relatively high standard deviations will be reported. Previous methods of examining the robustness of portfolio performance usually build on the assumption that the data are normally distributed (Jobson and Korkie 1981; Memmel 2003), which goes against the empirical facts.
Table 8.3 Average portfolio returns

Panel A: quarterly adjustments
                      Clayton           Gaussian          Gumbel            Student's t       Pearson
Full sample returns   1.44 % (1.1903)   0.91 % (1.1749)   1.16 % (1.1922)   1.11 % (1.1763)   0.57 % (1.0644)
Expansion returns     6.07 % (0.6946)   5.63 % (0.0798)   5.70 % (0.6962)   5.66 % (0.7014)   5.48 % (0.6613)
Recession returns     12.52 % (2.0549)  13.35 % (2.0025)  12.56 % (2.0578)  12.64 % (1.7922)  14.25 % (2.0153)

Panel B: monthly adjustments
                      Clayton           Gaussian          Gumbel            Student's t       Pearson
Full sample returns   1.63 % (1.1764)   1.08 % (1.1812)   1.22 % (1.1835)   1.22 % (1.1736)   0.75 % (1.0199)
Expansion returns     6.15 % (0.6800)   5.67 % (0.6928)   5.66 % (0.6870)   5.69 % (0.6864)   5.23 % (0.6401)
Recession returns     12.01 % (2.0375)  12.78 % (2.0355)  12.19 % (2.0474)  12.30 % (2.0247)  12.80 % (1.1708)

Panel C: daily adjustments
                      Clayton           Gaussian          Gumbel            Student's t       Pearson
Full sample returns   1.34 % (1.1737)   0.85 % (1.1651)   1.16 % (1.1749)   1.47 % (1.1733)   1.03 % (1.0253)
Expansion returns     5.84 % (0.6733)   5.35 % (0.6859)   5.59 % (0.6839)   5.70 % (0.6766)   5.39 % (0.6367)
Recession returns     12.27 % (2.0382)  12.74 % (2.0053)  12.22 % (2.0277)  11.34 % (2.0346)  12.14 % (1.7280)
The average portfolio returns are presented in an annualized, percentage format. Three weight updating frequencies are considered: quarterly, monthly, and daily. Within each frequency, we report the returns for the full sample period, the expansion period, and the recession period. The numbers in the parentheses are standard errors
To cope with this problem, Ledoit and Wolf (2008) proposed an alternative testing method using the inferential studentized time-series bootstrap. Ledoit and Wolf's (2008) method is as follows.5 Let a and b be two investment strategies, and let r_{at} and r_{bt} be the portfolio returns for strategies a and b, respectively, at time t, where t ranges from 1 to i. The mean vector μ and the covariance matrix Σ for the return pairs (r_{a1}, r_{b1})', …, (r_{at}, r_{bt})' are denoted by
$$\mu = \begin{pmatrix} \mu_a \\ \mu_b \end{pmatrix} \quad \text{and} \quad \Sigma = \begin{pmatrix} \sigma_a^{2} & \sigma_{ab} \\ \sigma_{ab} & \sigma_b^{2} \end{pmatrix}. \qquad (8.12)$$

The performance of strategies a and b can be examined by checking whether the difference between the Sharpe ratios for strategies a and b is statistically different from 0. That is,

$$\Delta = S_a - S_b = \frac{\mu_a}{\sigma_a} - \frac{\mu_b}{\sigma_b} \qquad (8.13)$$
5 For detailed derivations and computer codes, please refer to Ledoit and Wolf (2008).
and

$$\hat{\Delta} = \hat{S}_a - \hat{S}_b = \frac{\hat{\mu}_a}{\hat{\sigma}_a} - \frac{\hat{\mu}_b}{\hat{\sigma}_b},$$
(8.14)
where Δ is the difference between the two Sharpe ratios, and S_a and S_b are the Sharpe ratios for strategies a and b, respectively. Let the second moments of the returns from strategies a and b be denoted by γ_a and γ_b, so that γ_a = E(r_{at}²) and γ_b = E(r_{bt}²). Let u and û be (μ_a, μ_b, γ_a, γ_b)' and (μ̂_a, μ̂_b, γ̂_a, γ̂_b)', respectively. Then Δ and Δ̂ can be expressed as

$$\Delta = f(u) \quad \text{and} \quad \hat{\Delta} = f(\hat{u}), \qquad (8.15)$$

where $f(u) = \dfrac{\mu_a}{\sqrt{\gamma_a - \mu_a^{2}}} - \dfrac{\mu_b}{\sqrt{\gamma_b - \mu_b^{2}}}$. For time-series data, $\sqrt{i}\,(\hat{u} - u) \xrightarrow{d} N(0, C)$. Ledoit and Wolf (2008) have argued that C can be evaluated by the studentized bootstrap as $\hat{C} = \frac{1}{\#}\sum_{j=1}^{\#} x_j x_j'$, where $x_j = \frac{1}{\sqrt{b}}\sum_{t=1}^{b} y_{(j-1)b+t}$, $j = 1, \ldots, \#$, and # is the integer part of the total number of observations divided by the block size b. Also,

$$y_t = \left(r_{ta} - \hat{\mu}_a,\; r_{tb} - \hat{\mu}_b,\; r_{ta}^{2} - \hat{\gamma}_a,\; r_{tb}^{2} - \hat{\gamma}_b\right), \qquad t = 1, \ldots, i.$$
(8.16)
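A bare-bones version of the test statistic and a block-bootstrap p-value can be sketched as follows. This is only a schematic re-implementation under simplifying choices (a fixed block length and a simple centered two-sided p-value), not the authors' code, and the studentization step of Ledoit and Wolf (2008) is omitted for brevity.

```python
import numpy as np

def sharpe_diff(ra, rb):
    """Delta-hat of Eq. 8.14: difference of the two sample Sharpe ratios."""
    return ra.mean() / ra.std(ddof=1) - rb.mean() / rb.std(ddof=1)

def block_bootstrap_pvalue(ra, rb, b=10, n_boot=10_000, seed=0):
    """Circular block bootstrap of the Sharpe-ratio difference (sketch).

    ra, rb are equal-length NumPy arrays of portfolio returns; b is the block length.
    """
    rng = np.random.default_rng(seed)
    n = len(ra)
    d_hat = sharpe_diff(ra, rb)
    d_boot = np.empty(n_boot)
    for k in range(n_boot):
        starts = rng.integers(0, n, size=int(np.ceil(n / b)))
        idx = (starts[:, None] + np.arange(b)[None, :]).ravel()[:n] % n
        d_boot[k] = sharpe_diff(ra[idx], rb[idx])
    # two-sided p-value from the centered bootstrap distribution
    return np.mean(np.abs(d_boot - d_boot.mean()) >= abs(d_hat))
```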
Following Ledoit and Wolf’s (2008) method, we examine the significance of 60 pairs of portfolio performance. The size of the bootstrap iteration is 10,000 to ensure a sufficient sample.6 Table 8.4 presents the results from Ledoit and Wolf’s (2008) portfolio performance test. The results indicate that during the recession periods and using quarterly weight updating, the Pearson correlation underperforms all the copula dependencies at a confidence level of 90 % or greater. During recession periods and adopting monthly weight updating, the superiority of the copula dependencies jumps to a 99 % confidence level. Moreover, during the recession periods and assuming daily updating, the Student’s t-copula outperforms the Pearson correlation at the 99 % confidence level. Overall, Ledoit and Wolf’s (2008) empirical tests illustrate the superiority of copulas during recession periods, regardless of the frequency of weight updating. During a bullish market, this advantage seems not as statistically significant as during a bearish market.
6 Ledoit and Wolf (2008) suggested that 5,000 iterations guarantee a sufficient sample. We adopt a higher standard of 10,000 iterations to strengthen our testing results.
Table 8.4 Ledoit and Wolf portfolio performance test

Panel A: quarterly adjustments
            CL-GA     CL-GU    CL-PE     CL-t     GA-GU
Expansion   0.821     0.812    0.788     0.677    0.916
Recession   0.060a    0.987    0.054a    0.839    0.0760a
            GA-PE     GA-t     GU-PE     GU-t     PE-t
Expansion   0.892     0.930    0.766     0.912    0.778
Recession   0.0030c   0.0727a  0.0267b   0.943    0.026b

Panel B: monthly adjustments
            CL-GA     CL-GU    CL-PE     CL-t     GA-GU
Expansion   0.193     0.415    0.568     0.744    0.881
Recession   0.249     0.803    0.021c    0.295    0.092a
            GA-PE     GA-t     GU-PE     GU-t     PE-t
Expansion   0.850     0.892    0.795     0.896    0.809
Recession   0.001c    0.318    0.019c    0.475    0.008c

Panel C: daily adjustments
            CL-GA     CL-GU    CL-PE     CL-t     GA-GU
Expansion   0.389     0.814    0.732     0.929    0.913
Recession   0.000c    0.119    0.173     0.371    0.000c
            GA-PE     GA-t     GU-PE     GU-t     PE-t
Expansion   0.928     0.915    0.460     0.831    0.301
Recession   0.718     0.001c   0.045c    0.778    0.019c
The performance tests are conducted using the approach suggested by Ledoit and Wolf (2008). The tests examine whether the returns from two portfolios are significantly different at the 95 % level. CL stands for the Clayton copula, GA for the Gaussian copula, GU for the Gumbel copula, PE for the Pearson correlation, and t for the Student's t-copula
a Represents 90 % statistical significance
b Represents 95 % statistical significance
c Represents 99 % statistical significance
8.6 Conclusions
In this paper, we study whether adopting time-varying copulas as a measure of dependence of asset returns can improve portfolio performance. This study was motivated by the fact that the traditional Pearson correlation is inadequate in describing most financial returns. Moreover, the robustness of copula functions has not been fully examined under different states of the economy and weight updating scenarios. We evaluate the effectiveness of various copulas in managing portfolios while considering portfolio rebalance frequencies and the business cycle. The significance of return difference is tested using the studentized time-series bootstrap method suggested by Ledoit and Wolf (2008). The main findings are as follows: first, an international equity portfolio modeled using the Pearson correlations underperforms those modeled using copula-based
dependencies, especially during recession periods. Our findings remain robust regardless of the rebalancing frequency. Second, the importance of lower-tail behaviors in portfolio modeling is highlighted by the higher average portfolio returns of the Clayton-dependence portfolios. Third, the choice of weight updating frequency affects portfolio returns. The portfolios using monthly weight updating frequency provide better portfolio returns than those using quarterly and daily weight adjustments. We add to the current literature by thoroughly evaluating the effectiveness of asymmetric conditional correlations in managing portfolio risk. This paper synthesizes the major concepts and modi operandi of previous research and maximizes the practicality of applying copulas under a variety of scenarios. Future research into copulas can be extended to the contagion of various asset classes and interest rates and evaluations of the impact of certain economic events.
Appendix 1
Appendix 1 illustrates the dependence of the G7 countries from the different dependence models. Note that, to ease the comparison between dependencies, we transform the Gumbel dependencies by (1 − δ). Therefore, the range for the Clayton and the Gumbel copulas is between 0 and 1, with 0 meaning no dependence and 1 standing for perfect dependence. The range for the Gaussian copula, the Student's t-copula, and the Pearson correlation is −1 to 1, with 0 meaning no dependence and −1 or 1 standing for complete dependence.
CA FR Panel A: Gaussian dependence CA Max Min FR Max 0.7064 Min 0.3914 DE Max 0.6487 0.9686 Min 0.2016 0.7856 IT Max 0.6320 0.9407 Min 0.2143 0.2303 JP Max 0.1814 0.2761 Min 0.2115 0.2238 UK Max 0.6393 0.9100 Min 0.1945 0.2384
DE
IT
JP
UK
USA
0.9274 0.7001 0.2478 0.2229
0.4023 0.0119
0.8665 0.2008
0.8575 0.2050
0.4664 0.0329 (continued)
CA FR USA Max 0.7221 0.5031 Min 0.1864 0.2127 Panel B: Student’s t dependence CA Max Min FR Max 0.7509 Min 0.3921 DE Max 0.4578 0.9810 Min 0.1070 0.7476 IT Max 0.4367 0.9810 Min 0.1089 0.7476 JP Max 0.1093 0.1679 Min 0.1284 0.1511 UK Max 0.4478 0.8055 Min 0.1094 0.1546 USA Max 0.5733 0.8055 Min 0.1107 0.1546 Panel C: Gumbel dependence CA Max Min FR Max 0.5947 Min 0.3220 DE Max 0.3744 0.9063 Min 0.0000 0.5928 IT Max 0.3666 0.7286 Min 0.0000 0.0000 JP Max 0.0961 0.1384 Min 0.0000 0.0000
DE
IT
JP
UK
USA
0.5231 0.2087
0.4661 0.2414
0.5831 0.2241
0.5671 0.1997
0.9586 0.6937 0.1522 0.1574
0.6846 0.0093
0.7380 0.1359
0.7316 0.1230
0.7335 0.0284
0.3457 0.1343
0.3176 0.1360
0.4375 0.1519
0.6614 0.2687
0.8544 0.5516 0.1222 0.0000
0.5356 0.0200 (continued)
CA FR UK Max 0.3650 0.6946 Min 0.0000 0.0000 USA Max 0.4340 0.2816 Min 0.0000 0.0000 Panel D: Clayton dependence CA Max Min FR Max 0.6763 Min 0.2899 DE Max 0.3585 0.9327 Min 0.0000 0.6696 IT Max 0.3463 0.7635 Min 0.0000 0.0000 JP Max 0.0038 0.0408 Min 0.0000 0.0000 UK Max 0.3657 0.7193 Min 0.0000 0.0000 USA Max 0.5248 0.2450 Min 0.0000 0.0000 Panel E: Pearson correlation CA Max Min FR Max 0.7002 Min 0.3966 DE Max 0.6944 0.9726 Min 0.3645 0.7890 IT Max 0.6833 0.9596 Min 0.3877 0.8204
DE
IT
JP
0.6156 0.0000
0.6086 0.0000
0.5855 0.0234
0.2952 0.0000
0.2649 0.0000
0.3261 0.0000
UK
USA
0.5967 0.2493
0.9004 0.6006 0.0287 0.0000
0.6476 0.0000
0.6567 0.0000
0.6506 0.0000
0.6794 0.0000
0.2476 0.0000
0.1987 0.0000
0.3472 0.0000
0.6622 0.1982
0.9477 0.6965 (continued)
CA JP Max Min UK Max Min USA Max Min
FR
DE
IT
JP
0.3549 0.0411
0.4594 0.0251
0.4702 0.0170
0.4073 0.0072
0.7080 0.3676
0.9573 0.7791
0.9298 0.6572
0.9181 0.6990
0.4612 0.0154
0.7586 0.3764
0.6096 0.2647
0.7443 0.2921
0.5871 0.2481
0.2078 0.1562
UK
USA
0.5480 0.1913
References Aas, K., Czado, C., Frigessi, A., & Bakken, H. (2007). Pair-copula constructions of multiple dependence. Insurance: Mathematics and Economics, 2, 1–25. Ang, A., & Bekaert, G. (2002). International asset allocation with regime shifts. Review of Financial Studies, 15, 1137–1187. Ang, A., & Chen, J. (2002). Asymmetric correlations of equity portfolios. Journal of Financial Economics, 63, 443–494. Aussenegg, W., & Cech, C. (2011, forthcoming). Simple time-varying copula estimation. In A. S. Barczak, E. Dziwok (Ed.), Mathematical econometrical and computational methods in finance and insurance. Katowice: University of Economics in Katowice. Bauer, G. H., & Vorkink, K. (2011). Forecasting multivariate realized stock market volatility. Journal of Econometrics, 160, 93–101. Chan, L. K., Karceski, J., & Lakonishok, J. (1999). On portfolio optimization: Forecasting covariances and choosing the risk model. Review of Financial Studies, 12, 937–974. Cherubini, U., Luciano, E., & Vecchiato, W. (2004). Copula methods in finance. Hoboken: Wiley. Chollete, L., Heinen, A., & Valdesogo, A. (2009). Modeling international financial returns with a multivariate regime-switching copula. Journal of Financial Econometrics, 7, 437–480. Chollete, L., Pen˜a, V., & Lu, C. (2011). International diversification: A copula approach. Journal of Banking and Finance, 35, 403–417. Demarta, S., & McNeil, A. (2005). The t copula and related copulas. International Statistical Review, 73, 111–129. Dowd, K. (2005). Copulas and coherence. Journal of Portfolio Management, Fall, 32(1), 123–127. Embrechts, P., Frey, R., & McNeil, A. (2005). Quantitative risk management. Princeton: Princeton University Press. Embrechts, P., Lindskog, F., & McNeil, A. (2003). Modelling dependence with copulas and applications to risk management. In S. T. Rachev (Ed.), Handbook of heavy tailed distributions in finance (pp. 329–384). Amsterdam: Elsevier/North-Holland. Engle, R. F., & Sheppardy K. (2008). Evaluating the specification of covariance models for large portfolios (Working Paper). New York University. Franke, J., Hardle, W., & Hafner, C. (2010). Statistics of financial markets: An introduction. New York: Springer. Hu, L. (2006). Dependence patterns across financial markets: A mixed copula approach. Applied Financial Economics, 16, 717–729.
Jobson, J., & Korkie, B. (1981). Performance hypothesis testing with the Sharpe and Treynor measures. Journal of Finance, 36, 889–908. Joe, H. (1997). Multivariate models and dependence concepts. New York: Chapman & Hall/CRC. Joe, H., & Xu, J. J. (1996). The estimation method of inference functions for margins for multivariate models (Technical Report 166). Department of Statistics, University of British Columbia. Kole, E., Koedijk, K., & Verbeek, M. (2007). Selecting copulas for risk management. Journal of Banking and Finance, 31, 2405–2423. Kroner, K. E., & Ng, V. (1998). Modeling asymmetric comovements of asset returns. Review of Financial Studies, 11, 817–844. Ledoit, O., & Wolf, M. (2008). Robust performance hypothesis testing with the Sharpe ratio. Journal of Empirical Finance, 15, 850–859. Longin, F., & Solnik, B. (2001). Extreme correlation of international equity markets. Journal of Finance, 56, 649–676. Markowitz, H. (1952). Portfolio selection. Journal of Finance, 7, 77–91. Memmel, C. (2003). Performance hypothesis testing with the Sharpe ratio. Finance Research Letters, 1, 21–23. Nelson, R. (2006). An introduction to copulas. New York: Springer. Okimoto, T. (2008). New evidence of asymmetric dependence structures in international equity markets. Journal of Financial and Quantitative Analysis, 43, 787–816. Patton, A. (2006a). Modelling asymmetric exchange rate dependence. International Economic Review, 47, 527–556. Patton, A. (2006b). Estimation of multivariate models for time series of possibly different lengths. Journal of Applied Econometrics, 21, 147–173. Patton, A. (2009). Copula-based models for financial time series. In T. Mikosch, J. P. Kreiss, T. G. Andersen, R. A. Davis, J. P. Kreiss, & T. Mikosch (Eds.), Handbook of financial time series. New York: Springer. Rodriguez, J. C. (2007). Measuring financial contagion: A copula approach. Journal of Empirical Finance, 14, 401–423. Sklar, A. (1959) “Fonctions de Re´partition a` n Dimensions et Leurs Marges,” Publications de l’Institut de Statistique de l’Universite´ de Paris 8, 229–231.
9 Determinations of Corporate Earnings Forecast Accuracy: Taiwan Market Experience
Ken Hung and Kuo-Hao Lee
Contents
9.1 Introduction 254
9.2 Literature Review 255
9.3 Testable Hypotheses 258
9.3.1 Firm Size 258
9.3.2 Volatility of Market 258
9.3.3 Volume Turnover 259
9.3.4 Corporate Earnings Variance 259
9.3.5 Type of Industry 260
9.3.6 Forecasting Experience 260
9.4 Empirical Results 260
9.4.1 Comparison of Management and Analyst's Earnings Forecast 260
9.4.2 Factors Influencing the Absolute Errors of Earnings Forecast 263
9.5 Conclusions 266
Methodology Appendix 269
Sample Selection 269
Variable Definition 270
Market Volatility 272
Regression Model 274
Wilcoxon Two-Sample Test 276
References 276
K. Hung (*) Division of International Banking & Finance Studies, Texas A&M International University, Laredo, TX, USA e-mail: [email protected] K.-H. Lee Department of Finance, College of Business, Bloomsburg University of Pennsylvania, Bloomsburg, PA, USA e-mail: [email protected] C.-F. Lee, J. Lee (eds.), Handbook of Financial Econometrics and Statistics, DOI 10.1007/978-1-4614-7750-1_9, # Springer Science+Business Media New York 2015
Abstract
Individual investors are actively involved in the stock market and make investment decisions based on publicly available, nonproprietary information, such as corporate earnings forecasts issued by the management and by financial analysts. To examine the accuracy of these earnings forecasts, the following methodology is used. Multiple regression models examine the effect of six factors: firm size, market volatility, trading volume turnover, corporate earnings variance, type of industry, and forecasting experience. Because the two sample groups are related, the Wilcoxon two-sample test is used to determine relative earnings forecast accuracy. The results indicate that firm size has no effect on management forecasts, voluntary management forecasts, mandatory management forecasts, or analysts' forecasts. There are some indications that forecasting accuracy is affected by market ups and downs. The results also reveal that the relative accuracy of earnings forecasts is not a function of trading volume turnover. However, management's and analysts' earnings forecasts are sensitive to earnings variances. Readers are referred to the chapter appendix for methodological issues such as sample selection, variable definition, the regression model, and the Wilcoxon two-sample test.
Keywords
Multiple regression • Wilcoxon two-sample test • Corporate earnings • Forecast accuracy • Management earnings • Firm size • Corporation regulation • Volatility • Trade turnover • Industry
9.1
Introduction
In recent times, individual investors have become actively involved in the stock market and make investment decisions based on publicly available, nonproprietary information. Corporate earnings forecasts are an important investment tool for such investors. These forecasts come from two sources: the company management and financial analysts. As insiders, the management has the advantage of possessing more information and hence can provide a more accurate earnings forecast. However, because of the company's existing relationship with its key investor group, the management may tend to take an optimistic view and overestimate future earnings. In contrast, financial analysts are less informed about the company and often rely on management briefings, but they have more experience with the overall market and economy and are expected to analyze companies objectively. Hence, analysts should provide reliable and more accurate earnings forecasts.
Whether investors should rely on the earnings forecasts made by the management or by analysts is a debatable issue. Many researchers have examined the accuracy of such earnings forecasts. Because of differences in methodologies, sample selections, and time horizons, the findings and conclusions of previous studies are conflicting and inconclusive. This motivated us to conduct a new analysis using a different methodology. According to the Regulations for Publishing Corporate Earnings Forecast imposed by the Department of Treasury,1 publicly traded Taiwanese companies have to publish their earnings forecasts in the following situations:
1. To issue new stocks or acquire liabilities;
2. When more than one third of the board has been changed;
3. When one of the situations listed in section 185 of the corporation regulations occurs;
4. Mergers and acquisitions;
5. Profit gain/loss of up to one third of annual revenue due to an unexpected event;
6. Revenue loss of over 30 % compared to the previous year;
7. Voluntary publication of an earnings forecast.
Since management earnings forecasts can be either mandatory or voluntary, the focus of this research is to examine the accuracy of management's overall earnings forecasts, voluntary management earnings forecasts, mandatory management earnings forecasts, and financial analysts' earnings forecasts.
9.2
Literature Review
Jaggi (1980) examined the impact of company size on forecast accuracy using management’s earnings forecasts from the Wall Street Journal and analysts’ earnings forecasts from Value Line Investment Service from 1971 to 1974. He argued that because a larger company has strong financial and human capital resources, its management’s earnings forecast would be more accurate than the analyst’s. The sample data were classified into six categories based on the size of the firms’ total revenue to examine the factors that attribute to the accuracy of management’s earnings forecast with the analyst’s. The result of his research did not support his hypothesis that management’s forecast is more accurate than the analyst’s. Bhushan (1989) assumed that it is more profitable trading large companies’ stocks because large companies have better liquidity than the small ones. Therefore, the availability of information is related to company size. His research results support his hypothesis that the larger the company size, the more information is available to financial analysts and the more accurate their earnings forecasts are. Kross and Schreoder (1990) proposed that brokerage firm’s characteristics influence analysts’ earnings forecasting accuracy. In their analysis, sample analysts’
1 Securities Regulation Committee, Department of Treasury, series 00588, volume #6, 1997
earnings forecasts from 1980 to 1981 were obtained from Value Line Investment, and the market value of a firm is used as the size of the firm. The results of this study on analysts’ earnings forecasts did not find a positive relation between the company size and the analyst’s forecast accuracy. Xu (1990) used the data range from 1986 to 1990 and the logarithm of average total revenue as a proxy of company earnings and examined factors associated with the accuracy of analysts’ earnings forecast. The hypothesis that the larger the firm size is, the more accurate the analysts’ earnings forecast would be was supported. Su (1996) focuses on comparison of relative accuracy of management and analysts’ earnings forecasts by using cross-sectional design method. Samples selection includes forecast data during the time period from 1991 to 1994. The company’s “book value” of its total assets is used as a proxy for the size of the company in the regression analysis. The author believes that analysts are more attracted to larger companies, and there are more incentives for them to follow large companies than small companies in their forecasting. Therefore, the size of a company will affect the relative accuracy of analyst earnings forecast. On the other hand, large companies possess excessive human and financial resources and information which analysts have no access to, to allow managers to anticipate the corporate future earnings with high accuracy. The study results show that analyst and voluntary earnings forecast accuracy for larger companies are higher than forecast accuracy for small companies. Yie (1996) examines the factors influencing management and financial analyst earnings forecasting accuracy. Data used in this study are the earnings forecasts during the years 1991–1995. She uses the company’s total assets and market value of the company’s equity as proxies for company size. The finding of this research reveals that the relative earnings forecast accuracy (management, voluntary management, mandatory management, and analyst) is not affected by the size of the company when the company’s total assets are used as the proxy of company size. The result also indicates that mandatory management’s earnings forecast and analysts’ earnings forecasts are influenced by company size if market value of company’s equity is used. Xu (1990) examines the relative accuracy of analysts’ earnings forecasts, a hypothesis that market volatility is one of factors that influence the relative accuracy of analyst earnings forecast. In upmarket situation, a vast amount of information regarding corporate earnings and overwhelming trading activities may hinder the analyst from getting realistic and objective information; thus, overoptimistic forecast might be a result. In contrast, when market is experiencing a downturn, individual investors are less speculative and more rational; thus, information about corporate earnings tends to be more accurate. Under these circumstances, the analyst tends to provide earnings forecasts with a higher level of accuracy. The results of this study support the author’s hypothesis. Jiang (1993) examines the relative accuracy between management’s earnings forecast and analyst earnings forecast. He hypothesizes that analysts’ earnings forecast has a higher degree of accuracy in down market compared to upmarket situation. He uses
samples forecast data from years 1991 to 1993 in his analysis and finds that the results support his argument. Das et al. (1998) used a cross-sectional approach to study the optimistic behavior of financial analysts. In particular, they focused on how the predictive accuracy of past information is associated with the magnitude of the bias in analysts' earnings forecasts. The sample selection covers the time period from 1989 to 1993 with 274 companies' earnings forecast information. A regression method was used in this research. The term "optimistic behavior" refers to the optimistic earnings forecasts made by financial analysts. The authors hypothesize the following scenario: there is higher demand for nonpublic information for firms whose earnings are more difficult to predict than for firms whose earnings can be accurately forecasted using public information. Their finding supports the hypothesis that analysts make more optimistic forecasts for low-predictability firms, under the assumption that an optimistic forecast facilitates access to management's nonpublic information. Clement (1999) studies the relation between the quality and forecast accuracy of analysts' reports. It also identifies systematic and time persistence in analysts' earnings forecast accuracy and examines the factors associated with the degree of accuracy. Using the I/B/E/S database, the author finds that earnings forecast accuracy is positively related to analysts' experience (a surrogate for analyst ability and skill) and employer size (a surrogate for available resources) and inversely related to the number of firms and industries followed by the analyst. The sample selection covers the time horizon from 1983 to 1994 with earnings forecasts of 9,500 companies and 7,500 analysts. The author believes that as the analyst's experience increases, his earnings forecast accuracy will increase, which implies that the analyst has a better understanding of the idiosyncrasies of a particular firm's reporting practices or establishes a better relationship with insiders and therefore gains better access to the managers' private information. An analyst's portfolio complexity is also believed to be associated with his earnings forecast accuracy. He hypothesizes that forecast accuracy decreases with the number of industries/firms followed. The effect of available resources on analysts' earnings forecasts is such that analysts employed by larger brokerage firms supply more accurate forecasts than those at smaller ones. The rationale behind this hypothesis is that an analyst hired by a large brokerage firm has better access to the private information of managers at the companies he follows. Large firms have more advanced networks that allow them to better disseminate their analysts' recommendations into the capital markets. The results of this research support the hypotheses made by the author. Xiu (1992) studies the relative accuracy of management and analysts' earnings forecasts using a Taiwan database covering the period 1986–1990. The management and analysts' earnings forecasts used in the study are from Business News, Finance News, Central News Paper, and The United Newspaper. The research methodology is to compare management's earnings forecast accuracy with prior and posterior analysts' earnings forecasts. The result reveals that
a management’s forecast is superior to prior analyst’s forecast, but less accurate than posterior analyst’s earnings forecasts. Jiang (1993) examined the determinants associated with analysts’ earnings forecast and management and analyst’s earnings accuracy under different assumptions. A sample of Taiwan corporations is collected from the Four Seasons newspaper. Jiang uses cross-sectional regression analysis to investigate the relations between the forecast accuracy and firm’s size, rate of earnings deviation, forecasting time horizon, market situation, and rate of annual trading volume turnover. His results show that earnings forecasts provided by analysts are more accurate than management earnings forecasts. Chia (1994) focuses a study on mandatory management earnings forecasts and the rate of trading volume turnover in an unpublished thesis.
9.3
Testable Hypotheses
9.3.1
Firm Size
The size of a firm is believed to have influence on the accuracy of analyst’s and management’s earnings forecast. Jaggi (1980), Bhushan (1989), and Clement (1999) found that the larger the company is, the more accurate the earnings forecast will be. They believe that holding other factors constant, larger companies have more financial and human resources available that allow the management to draw more precise earnings forecast than smaller companies. Thus, forecasts and recommendations supplied by larger firms are more valuable and accurate than the smaller firms: H1: The accuracy of management’s earnings forecast increases with the size of the firm. H2: The accuracy of management’s voluntary earnings forecast increases with the size of the firm. H3: The accuracy of management’s mandatory earnings forecast increases with the size of the firm. H4: The accuracy of analysts’ earnings forecast increases with the size of the firm.
9.3.2
Volatility of Market
The accuracy of earnings forecasts will be affected by the market situation. When the market is very volatile and unstable, investors looking for opportunities to profit become more speculative about the market's next move. In this situation, it is more difficult for analysts to extract genuinely useful information for their forecasts; they might tend to be overoptimistic in forecasting earnings and providing recommendations. When a market is in a relatively stable
period, investors tend to be rational about the next movement of the market; there is less biased information regarding corporate earnings among general investors; thus, the information accessed by analysts allows them to be more objective in their earnings forecasts. In contrast, the management has insight into what is really happening in the operations, finances, top management changes, and profitability of the business. Although the management is less vulnerable to the market situation, voluntary management earnings forecasts might still be affected by market volatility to some extent: H5: Management's earnings forecast will not be affected by market volatility. H6: Voluntary management's earnings forecast is a function of market volatility. H7: Mandatory management's earnings forecast is not affected by market volatility. H8: Accuracy of analysts' earnings forecast is affected by market volatility.
9.3.3
Volume Turnover
The relationship between trading volume turnover and the accuracy of earnings forecasts can be examined based on the hypothesis that daily stock trading volume represents public investors' perception of a company. Larger daily trading volume for a particular stock reflects a higher degree of divergence in confidence about the company's stock, and vice versa. This public perception of a stock might distract management's and analysts' judgment; they need more time and extra effort to provide accurate earnings forecasts: H9: Trading volume turnover affects the accuracy of management's earnings forecast. H10: Trading volume turnover affects the accuracy of voluntary management's earnings forecast. H11: Trading volume turnover affects the accuracy of mandatory management's earnings forecast. H12: Trading volume turnover affects the accuracy of analysts' earnings forecast.
9.3.4
Corporate Earnings Variance
Corporate earnings surprises are an important aspect of analysts’ earnings forecast. The larger the earnings surprise is, the less useful the past information will be in earnings forecasting and the harder it is to make accurate forecasts. Corporate earnings variances represent the earnings surprises a company has in the past; it would affect the accuracy of management and analysts’ earnings forecasts: H13: Corporate earnings variances affect the accuracy of management’s earnings forecast. H14: Corporate earnings variances affect the accuracy of voluntary management’s earnings forecast.
H15: Corporate earnings variances affect the accuracy of mandatory management’s earnings forecast. H16: Corporate earnings variances affect the accuracy of analysts’ earnings forecast.
9.3.5
Type of Industry
There may exist a relationship between the type of industry and earnings forecast accuracy. We hypothesize that differences between industries may result in different levels of earnings forecast accuracy. Some analysts may not possess the knowledge necessary for forecasting in a particular industry; therefore, their forecasts may not be as accurate as management's earnings forecasts. Hence, the following hypotheses can be tested: H17: Type of industry influences the accuracy of management's earnings forecast. H18: Type of industry affects the accuracy of voluntary management's earnings forecast. H19: Type of industry affects the accuracy of mandatory management's earnings forecast. H20: Type of industry affects the accuracy of analysts' earnings forecast.
9.3.6
Forecasting Experience
Analysts’ accuracy of earnings forecast will improve as their experience and knowledge about companies increase. They learn from their previous forecasts and make the next forecast more accurate. A similar argument can be made about the management’s earnings forecast. Hence, the following hypotheses can be tested: H21: Forecasting experience influences the accuracy of management’s earnings forecast. H22: Forecasting experience affects the accuracy of voluntary management’s earnings forecast. H23: Forecasting experience affects the accuracy of mandatory management’s earnings forecast. H24: Forecasting experience affects the accuracy of analysts’ earnings forecast.
9.4
Empirical Results
9.4.1
Comparison of Management and Analyst’s Earnings Forecast
To compare the relative accuracy of management and analysts' earnings forecasts, we focus on four major aspects. First, management's earnings forecasts are compared with analysts' forecasts to determine the
relative accuracy of the forecasts. Secondly, a comparison is made between voluntary management's forecasts and analysts' forecasts. Thirdly, mandatory management's forecasts are compared with analysts' forecasts to determine the relative accuracy of the forecasts. Finally, tests of hypotheses are conducted to further establish the relative accuracy of management's and analysts' earnings forecasts. Table 9.1 provides descriptive statistics of management's and financial analysts' earnings forecasts from 1987 to 1999. It can be observed that the absolute errors of management's earnings forecasts are less than the analysts' in the years 1992, 1993, 1995, 1996, 1997, and 1998, indicating that management's earnings forecasts are superior to analysts' during that time period. In the other years, the absolute errors of management's earnings forecasts are higher than the analysts', indicating that analysts provided more accurate forecasts. Overall, it is less obvious who has the higher predictive ability to provide more precise earnings forecasts. The last three rows of Table 9.1 list the mean absolute errors of earnings forecasts by management and analysts during three different time periods. From 1988 to 1992, management's absolute mean forecast error is 1.796, whereas the analysts' absolute mean forecast error is 1.503. From 1993 to 1999, management's absolute mean forecast error is 2.031, while the analysts' absolute mean forecast error shows a higher value of 2.236. Over the entire time period from 1987 to 1999, the absolute mean error of management's forecasts is less than that of analysts' forecasts, 1.969 versus 2.043, respectively. The conclusion drawn from these results is that management's earnings forecasts are more accurate than analysts' forecasts from the early 1990s onward, but less accurate during the late 1980s. Table 9.2 shows the results of the Wilcoxon signed-rank test used to assess the relative accuracy of management's and analysts' forecasts. Comparing the negative and positive ranks in Table 9.2, management's forecasts are significantly less accurate than analysts' forecasts in the years 1987, 1988, 1989, 1990, and 1999, but more accurate in the years 1995 and 1998. In the other years there is no significant difference in the absolute errors between management's and analysts' forecasts. If we examine the z-value of the test for the entire time period (1987–1999), the z-value of the Wilcoxon signed-rank test is 0.346, which is not significant enough to distinguish the two samples. This supports the view that there are no significant differences between management's forecast accuracy and analysts' forecast accuracy, and it agrees with the findings of Imhoff and Pare (1982) and Baretley and Cameron (1991). They believe the reason is the similar abilities of the forecasters and the comparable networks through which management and analysts access company information (public and private); hence, both can provide relatively accurate earnings forecasts. If the entire time period is divided into two subsamples, one from 1987 to 1992 and the other from 1993 to 1999, the earlier subsample shows a z-value of 2.138, significant at the 0.05 level, which indicates that management's forecasts are less reliable than analysts' forecasts in that period. The later subsample shows no contradiction with the results of Imhoff and Pare (1982) and Baretley and Cameron (1991).
Year 1999 1998 1997 1996 1995 1994 1993 1992 1991 1990 1989 1988 1987 1993–1999 1987–1993 1987–1999
Sample size 402 360 317 267 226 178 157 126 144 120 116 96 78 1,907 680 2,587
Management’s forecast error Mean Standard error Maximum value 1.15 3.64 38.55 1.61 5.80 63.48 0.74 2.20 29.87 1.12 5.41 55.45 0.86 2.93 33.27 0.62 2.17 18.8 12.71 149.1 1869.1 0.65 1.48 11.58 0.84 2.99 32.93 5.98 32.60 334.94 1.11 2.28 18.66 1.25 3.29 19.57 0.66 1.88 15.73 2.031 0.983 1869.1 1.796 0.535 334.93 1.969 37.57 1869.1 Minimum value 0.0006 0.0017 0.0040 0.0007 0.0001 0.0015 0.0011 0.0013 0.0014 0.0017 0.0009 0.0001 0.0171 0.0001 0.0001 0.0001
Table 9.1 Descriptive statistics of management and analyst’s earnings forecast errors Analyst’s forecast error Mean Standard error 1.07 3.18 2.10 7.10 0.78 3.17 1.33 6.49 0.87 2.31 0.62 2.17 13.80 164.1 0.73 1.48 0.65 1.10 4.86 20.92 0.91 1.59 1.08 2.74 0.57 1.78 2.236 1.083 1.503 0.34 2.043 40.88 Maximum value 35.76 75.81 53.23 71.34 28.42 23.86 2057.4 8.14 5.67 187.3 10.56 15.43 14.62 2057. 187.368 2057.44
Minimum value 0.0006 0.0037 0.0002 0.0009 0.0018 0.0025 0.0003 0.0005 0.0079 0.0075 0.0056 0.0006 0.0025 0.0002 0.0005 0.0002
Table 9.2 Wilcoxon signed-rank test for earnings forecast accuracy of management and analyst’s forecasts errors Year 1999 1998 1997 1996 1995 1994 1993 1992 1991 1990 1989 1988 1987 1993–1999 1987–1993 1987–1999
Sample size 402 360 317 267 226 178 157 126 144 120 116 96 78 1,299 1,288 2,587
Negative ranks 150 209 147 143 117 82 74 60 69 43 46 28 27 626 569 1,195
Positive ranks 230 140 140 119 89 90 79 62 69 76 67 60 51 607 665 1,272
Ties 22 11 30 5 20 6 4 4 6 1 3 8 0 66 54 120
Z-value 3.119 5.009 0.5 1.567 2.857 0.339 0.502 1.136 0.407 2.941 2.7 3.503 3.315 1.592 2.138 0.346
Sig. 0.002*** 0*** 0.96 0.17 0.004*** 0.734 0.616 0.256 0.684 0.003*** 0.007*** 0*** 0.001*** 0.111 0.033** 0.73
Negative ranks: absolute error of management's earnings forecast < absolute error of analyst earnings forecast
Positive ranks: absolute error of management's earnings forecast > absolute error of analyst earnings forecast
Tie: absolute error of management's earnings forecast = absolute error of analyst earnings forecast
* Significant level = 0.1, ** significant level = 0.05, *** significant level = 0.01
9.4.2
Factors Influencing the Absolute Errors of Earnings Forecast
9.4.2.1 Firm Size We argue that the management possesses the relative advantage of having private insights that the analyst cannot access, and that a larger company has stronger human and financial resources. Therefore, the management forecasts of corporate earnings are much more precise. On the other hand, a larger company tends to draw attentions and is more likely to attract and be followed by financial analysts; analysts’ forecasts can be objective and accurate as well; however, the results from our research do not support this argument. Table 9.4 shows that the p-value of t-parameter for the size of a company (0.478) does not reach a significant level, indicating the size of a company is not associated with the accuracy of management’s earnings forecast. This result does not support the hypothesis H1: management’s earnings forecast accuracy increases with the size of company. Tables 9.5 and 9.6 show results of the regression analysis for two subsamples representing the time period of 1987–1992 and 1993–1999 to investigate the relationship between company size and accuracy of management’s earnings forecast.
Table 9.3 Taiwan stock market volatility from 1987 to 1999

Year   Rm (%)   Rf (%)   Rm − Rf (%)   Market volatility
1999     27       4          23        Upmarket
1998    −13       6         −19        Down market
1997     14       5           9        Upmarket
1996     39       5          34        Upmarket
1995    −24       5         −29        Down market
1994    −13       5         −18        Down market
1993     55       6          49        Upmarket
1992    −28       6         −34        Down market
1991     13       6           7        Upmarket
1990    −59       8         −67        Down market
1989     51       7          44        Upmarket
1988    124       4         120        Upmarket
1987    138       4         134        Upmarket

The market volatility measure, Rm, suggested by Pettengill et al. (1995), is the last month's market return minus the first month's market return divided by the first month's market return in a given year. Rf is the risk-free rate
9.4.2.2 Volatility of Market In Table 9.4, the p-value of the t-statistic in column 4 for market volatility (0.075) does not reach a significant level, which implies that the accuracy of management's earnings forecast is not positively associated with market volatility. This result supports hypothesis H5: management's earnings forecast accuracy will not change with market volatility. Examining the subperiod results in Tables 9.5 and 9.6 further, the p-value for market volatility (0.310) indicates that management's earnings forecast accuracy does not change with market volatility during 1993–1999. But the t-statistic for market volatility of 2.569 indicates that the management can provide accurate forecasts during an upmarket, but less accurate forecasts during a down market. 9.4.2.3 Trading Volume Turnover The p-values of the t-statistics in column 4 for trading volume turnover in all three tables do not reach a significant level. The regression analysis does not support hypothesis H9 that trading volume turnover affects management earnings forecast accuracy. 9.4.2.4 Corporate Earnings Variances In Table 9.4, the coefficient for the rate of earnings deviation of a company (2.74) is significant at the 0.01 level, which shows that corporate earnings variances affect management's earnings forecast accuracy. The positive coefficient means that management earnings forecast accuracy decreases as corporate earnings variance
Table 9.4 Regression model for the absolute errors of management earnings forecasts dating from 1987 to 1999 Independent variable Intercept Size I1: Market volatility TR: Rate of trading volume turnover CV: Corporate earnings variances E: Forecasting experience I2: Cement I3: Food I4: Plastic I5: Textile I6: Electrical machinery I7: Electronic equipment and cable I8: Chemical I9: Glass and ceramic I10: Paper manufacturing I11: Steel I12: Rubber I13: Auto I14: Electronics I15: Construction I16: Transportation I17: Travel I18: Insurance I19: Grocery R-square
Correlation coefficient 11.59 0.69 2.86 0.09 2.74 1.00 0.71 0.38 1.13 0.87 0.85 0.78 14.83 1.36 0.84 0.79 0.46 2.74 0.82 1.01 0.53 0.76 0.92 0.65 0.016
t-statistic 0.52 0.71 1.55 0.10 3.50 1.25 0.11 0.07 0.20 0.19 0.14 0.13 2.70 0.17 0.11 0.14 0.68 0.28 0.17 0.18 0.08 0.10 0.14 0.09
p-value of t-statistic 0.604 0.478 0.122 0.923 0.00*** 0.212 0.196 0.943 0.844 0.853 0.888 0.895 0.007*** 0.867 0.912 0.891 0.946 0.782 0.865 0.856 0.933 0.924 0.886 0.925
I2–I19: dummy variables for industry. * Significant level = 0.10, ** significant level = 0.05, *** significant level = 0.01
increases. This supports hypothesis H13 that corporate earnings variance has an effect on management earnings forecast accuracy. From the examination of Tables 9.5 (1993–1999) and 9.6 (1987–1992), management earnings forecast accuracy is affected by corporate earnings variances during the recent years (1993–1999).
9.4.2.5 Type of Industry To determine whether and which industry influences forecast accuracy, 18 industries are selected and represented by dummy variables Ij. From Table 9.4, I8 is the only industry dummy whose p-value reaches a significant level (coefficient 14.83, p-value 0.007). According to our coding, I8 represents the chemical industry. Thus, we conclude that management's forecasts are reliable for most of the industries studied in this research, except for the chemical industry.
Table 9.5 Regression model for the absolute errors of management earnings forecasts dating from 1993 to 1999 Independent variable Intercept Size I1: Market volatility TR: Rate of trading volume turnover CV: Corporate earnings variances E: Forecasting experience I2: Cement I3: Food I4: Plastic I5: Textile I6: Electrical machinery I7: Electronic equipment and cable I8: Chemical I9: Glass and ceramic I10: Paper manufacturing I11: Steel I12: Rubber I13: Auto I14: Electronics I15: Construction I16: Transportation I17: Travel I18: Insurance I19: Grocery R-square
Correlation coefficient 18.16 1.00 2.54 0.04 3.48 1.26 1.94 0.96 2.08 1.39 1.20 1.42 20.36 1.91 2.22 1.13 0.74 3.29 3.01 1.55 0.37 0.36 1.99 0.70 0.016
t-statistic 0.59 0.76 1.02 0.03 3.42 1.23 0.21 0.15 0.27 0.23 0.14 0.18 2.84 0.17 0.19 0.16 0.08 0.26 0.50 0.23 0.05 0.03 0.23 0.08
p-value of t-statistic 0.552 0.450 0.310 0.976 0.001*** 0.218 0.836 0.885 0.789 0.815 0.875 0.859 0.005*** 0.854 0.851 0.875 0.935 0.798 0.617 0.822 0.964 0.974 0.815 0.939
I2–I19: dummy variables for industry. * Significant level = 0.10, ** significant level = 0.05, *** significant level = 0.01
9.4.2.6 Forecasting Experience The results of the regression analyses investigating the relationship between forecasting experience and management's earnings forecast accuracy are shown in Tables 9.4, 9.5, and 9.6. All three p-values of the t-statistics in column 4 for forecasting experience indicate that management earnings forecast accuracy is not affected by previous forecasting experience. This conclusion does not support hypothesis H21 that forecasting experience affects management's forecast accuracy. Tests of the other hypotheses indicate similar results.
9.5
Conclusions
The results of our research indicate that company size has no effect on any of the following: management forecast, voluntary management forecast, mandatory
Table 9.6 Regression model for the absolute errors of management earnings forecasts dating from 1987 to 1992 Independent variable Intercept Size I1: Market volatility TR: Rate of trading volume turnover CV: Corporate earnings variances E: Forecasting experience I2: Cement I3: Food I4: Plastic I5: Textile I6: Electrical machinery I7: Electronic equipment and cable I8: Chemical I9: Glass and ceramic I10: Paper manufacturing I11: Steel I12: Rubber I13: Auto I14: Electronics I15: Construction I16: Transportation I17: Travel I18: Insurance I19: Grocery R-square
Correlation coefficient 15.55 0.68 2.57 0.25 0.39 0.57 0.59 0.54 0.16 1.17 0.68 0.40 2.05 0.40 2.05 0.56 0.14 1.94 7.98 0.04 0.36 0.96 1.63 3.78 0.035
t-statistic 0.87 0.87 1.78 0.47 0.59 0.76 0.11 0.11 0.03 0.26 0.12 0.08 0.41 0.06 0.37 0.10 0.02 0.24 1.64 0.01 0.06 0.16 0.29 0.66
p-value of t-statistic 0.386 0.383 0.075* 0.636 0.555 0.450 0.914 0.913 0.974 0.799 0.905 0.936 0.680 0.955 0.709 0.923 0.981 0.814 0.102 0.994 0.949 0.876 0.770 0.509
I2–I19: dummy variables for industry. * Significant level = 0.10, ** significant level = 0.05, *** significant level = 0.01
management forecast, and analysts' forecast. This result agrees with Jaggi (1980) and Kross and Schreoder (1990) that analysts' earnings forecast accuracy is not related to the size of the company, but differs from the results suggested by Bhushan (1989), Das et al. (1998), Clement (1999), Xu (1990), and Jiang (1993) that company size does influence the relative precision of management or analysts' earnings forecasts. It can be seen that the relative accuracy of management's and analysts' earnings forecasts is not affected by the market situation across the entire range of sampled forecasts. There are, however, some indications that forecasting accuracy is affected by market ups and downs. For instance, voluntary management's earnings forecasts over the entire time period, and management's and analysts' earnings forecasts during the years 1987 through 1992, are more accurate when the market is up and less accurate during the
down market. This result agrees with what Su has suggested – earnings forecast accuracy is affected by market volatility, but in different ways. We believe this is because the Taiwan stock market is saturated with individual investors, who are most likely to chase the market when it is up. In a down market they examine corporate earnings with more caution, so their expectations for companies in general are more realistic and rational; therefore, overall earnings forecast accuracy is increased, and vice versa. The results of this study reveal that the relative accuracy of all four kinds of earnings forecasts is not a function of trading volume turnover. This agrees with the results obtained by Chia (1994), but disagrees with the results of Jiang (1993). Results of the regression analysis indicate that management's earnings forecasts and analysts' forecasts are sensitive to corporate earnings variances. This conclusion supports hypotheses H13 through H16 and the findings of Kross and Schreoder (1990) and Jiang (1993). We postulate that corporate earnings variance, or earnings surprise, is an important indicator of a company's profitability and its future earnings. The management and analysts use past years' earnings surprises in forecasting future earnings, on the assumption that a high degree of earnings deviation results in higher forecast inaccuracy; therefore, they need to exercise their highest ability to make the earnings forecast more accurate. But we found that the higher the corporate earnings variance is, the lower the forecast accuracy is for both the management and the analysts. Corporate earnings variance should not be used as an indicator of how well a company is operating; rather, it represents the complexity of the business environment in which the company operates. The higher the complexity of the business environment, the less accurate the prediction/forecast will be. Analysts' and management's earnings forecasts are biased for the chemical industry over the entire time period of sampled forecasts; voluntary management's earnings forecasts for the textile, electrical machinery, and paper manufacturing industries are inaccurate during 1993–1999. Mandatory management's earnings forecasts are very inaccurate for the food, textile, and travel industries in the period 1993–1999. This supports Kross and Schreoder (1990), who concluded that an analyst's earnings forecast is affected by the type of industry he/she follows. The results reveal that the relative accuracy of management's and analysts' earnings forecasts does not respond to differences in forecasters' previous experience. However, the relative accuracy of mandatory management's earnings forecasts over the entire time period, and of voluntary management's forecasts in the subsamples, is affected by previous earnings forecast experience. This conclusion agrees with Clement's (1999) finding. We rationalized that forecast accuracy is positively related to forecasting experience, as hypotheses H21 through H24 state. The results from this study indicate otherwise: the forecast accuracy of mandatory and voluntary management's earnings forecasts has a negative relationship with previous forecasting experience. We argue that this is because of (1) mis-quantifying the variable used as
a proxy of forecasting experience and (2) the use of past mandatory management earnings forecasts as the basis of future forecasts without paying attention to ways of reducing the forecasting errors in those forecasts. Therefore, the more forecasting experience the forecaster has, the less accurate the forecast will be.
Methodology Appendix
Sample Selection
This research uses a cross-sectional design to examine the relative accuracy of management's and analysts' earnings forecasts. Due to the disclosure regulation in Taiwan, management's earnings forecasts are classified into two categories: mandatory earnings forecasts and voluntary earnings forecasts. The samples used are management's and analysts' earnings forecasts for all publicly traded companies during the period 1987–1999. The forecasts are compared with actual corporate earnings on an annual basis. Voluntary and mandatory management forecasts and analysts' forecasts are then compared with actual corporate earnings to evaluate the effects of management motivation and behavior on their earnings forecasts. Management's and analysts' pretax earnings forecast data are collected from "Taiwan Business & Finance News" for the period 1987–1999. Actual corporate earnings are collected from the Department of Education's "AREMOS" database. Only firms whose stocks were traded on the Taiwan Stock Exchange before December 31, 1999, were included in the samples. Also, forecasts made after the accounting year and before the announcement of earnings were excluded from the sample. The available database, over the 13-year period, consisted of 5,594 management earnings forecasts, of which 2,894 are voluntary and 2,700 are mandatory. A total of 17,783 analysts' forecasts are in the database. The selected samples, presented in Table 9.7, consist of 2,941 management earnings forecasts, of which 2,046 are voluntary and 1,679 mandatory, and 3,210 analysts' earnings forecasts. Table 9.7 shows that the average number of analysts' earnings forecasts exceeds the number of management's earnings forecasts. A higher frequency of analysts' earnings forecasts is expected, as an analyst may cover more than one firm. Most of management's earnings forecasts are made after 1991; this may be attributed to the amendment of the "Regulation of Financial Report of Stock Issuer" imposed by the Taiwanese government in 1991. In the new regulation, a new section dealing with earnings forecasts was added, requiring a company's management to disclose its earnings forecasts to the general public. Comparing the number of voluntary and mandatory management forecasts, the latter is about 1.5 times the former except during the years 1991–1993.
Table 9.7 Sample of earnings forecasts by management and analysts from the Taiwan database selected for the study

Year    Management   Management voluntary   Management mandatory   Analysts
1999        407              236                    319               479
1998        408              238                    335               430
1997        384              227                    288               376
1996        328              201                    223               322
1995        281              193                    218               279
1994        219              135                    112               247
1993        172              137                     84               225
1992        139               78                     84               200
1991        156              154                     16               175
1990        125              125                     NAa              154
1989        123              123                     NA               130
1988        112              112                     NA               106
1987         87               87                     NA                87
Total     2,941            2,046                  1,679             3,210

a Mandatory forecast requirement was introduced in 1991
Variable Definition
Absolute Earnings Errors
The mean value of total corporate earnings before tax is used as the proxy of the earnings forecast. Using earnings before tax in the analysis eliminates other factors, such as raising cash for capital investment, earnings retention for capital investment, and stock distribution from paid-in capital, that might affect the accuracy of the analysis. The absolute (pretax) earnings forecast error is used to compare the relative accuracy of management's and analysts' earnings forecasts. Management's forecast errors are calculated as follows:

MF_{m,i,t} = \frac{1}{N}\sum_{j=1}^{n} FE_{m,i,j,t}

MF1_{m,i,t} = \frac{1}{N}\sum_{j=1}^{n} FE1_{m,i,j,t}

MF2_{m,i,t} = \frac{1}{N}\sum_{j=1}^{n} FE2_{m,i,j,t}
AFE_{m,i,t} = \left| MF_{m,i,t} - AE_{i,t} \right| / AE_{i,t}

AFE1_{m,i,t} = \left| MF1_{m,i,t} - AE_{i,t} \right| / AE_{i,t}

AFE2_{m,i,t} = \left| MF2_{m,i,t} - AE_{i,t} \right| / AE_{i,t}
where
MF_{m,i,t}: mean management's pretax earnings forecast for company i at time t
MF1_{m,i,t}: mean voluntary management's pretax earnings forecast for company i at time t
MF2_{m,i,t}: mean mandatory management's pretax earnings forecast for company i at time t
FE_{m,i,j,t}: management's jth pretax earnings forecast for company i at time t
FE1_{m,i,j,t}: voluntary management's jth pretax earnings forecast for company i at time t
FE2_{m,i,j,t}: mandatory management's jth pretax earnings forecast for company i in year t
AFE_{m,i,t}: absolute error of management's pretax earnings forecast for company i at time t
AFE1_{m,i,t}: absolute error of voluntary management's pretax earnings forecast for company i at time t
AFE2_{m,i,t}: absolute error of mandatory management's pretax earnings forecast for company i at time t
AE_{i,t}: actual pretax EPS for company i at time t
Analysts' forecast errors are calculated as follows:

MF_{f,i,t} = \frac{1}{n}\sum_{j=1}^{n} FE_{f,i,j,t}
AFE_{f,i,t} = \left| MF_{f,i,t} - AE_{i,t} \right| / AE_{i,t}

where
MF_{f,i,t}: mean analysts' pretax earnings forecast for company i at time t
FE_{f,i,j,t}: analysts' jth pretax earnings forecast for company i at time t
AFE_{f,i,t}: absolute error of analysts' pretax earnings forecast for company i at time t
AE_{i,t}: actual pretax EPS for company i at time t
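To make the error definitions concrete, the following is a minimal sketch of how the mean forecasts, the absolute forecast errors, and the Wilcoxon signed-rank comparison behind Table 9.2 could be computed. It is not taken from the chapter; the sample data, function names, and variable names are illustrative assumptions.

```python
import numpy as np
from scipy.stats import wilcoxon

def mean_forecast(forecasts):
    """MF: average of the j = 1..n pretax EPS forecasts issued for a firm-year."""
    return float(np.mean(forecasts))

def absolute_forecast_error(mean_fc, actual_eps):
    """AFE = |MF - AE| / AE, the absolute pretax earnings forecast error."""
    return abs(mean_fc - actual_eps) / actual_eps

# Illustrative firm-years: (management forecasts, analyst forecasts, actual EPS).
sample = [
    ([2.10, 2.30], [2.40, 2.25, 2.35], 2.00),
    ([1.05, 0.95], [0.90, 1.00],       1.10),
    ([3.60],       [3.20, 3.35],       3.00),
    ([0.55, 0.60], [0.70],             0.50),
    ([4.80, 5.00], [4.60, 4.70, 4.90], 4.40),
]

afe_mgmt = [absolute_forecast_error(mean_forecast(m), ae) for m, a, ae in sample]
afe_anal = [absolute_forecast_error(mean_forecast(a), ae) for m, a, ae in sample]

# Paired Wilcoxon signed-rank test on the absolute errors, as in Table 9.2.
stat, p_value = wilcoxon(afe_mgmt, afe_anal)
print(f"Wilcoxon statistic = {stat:.3f}, p-value = {p_value:.3f}")
```

With the actual firm-year panel in place of the toy sample, the same paired test could be run year by year to mirror the comparisons reported in Table 9.2.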
Company Size
Unlike previous researchers who used the market value of a company's equity as an indication of the size of a company, this study uses a company's total revenue in the previous year as the measure of company size. The reason is that the Taiwanese market is not efficient and investors are not fully informed with the information they need during their investment decision-making process; speculation among individual investors is the main cause of stock market volatility; thus, the market value of a company's equity cannot fully represent the company's real size. In order to better control for company size in our regression analysis, the logarithm of the company's previous year's total revenue is used:

SIZE_{i,t} = \ln(TA_{i,t-1})

where
SIZE_{i,t}: the size of company i at time t
TA_{i,t-1}: total revenue of company i at time t − 1
Market Volatility
This study adapts the measure used by Pettengill, Sundaram, and Mathur to gauge market volatility. Market volatility is classified as upmarket or down market using the market-adjusted return, Rm − Rf, in which Rm is the last month's market return minus the first month's market return divided by the first month's market return in a given year and Rf is the risk-free interest rate in the same year:

Return(Market-adjusted) = Rm − Rf

where the year is classified as an upmarket if Return(Market-adjusted) > 0 and as a down market if Return(Market-adjusted) < 0.
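The regression model of the appendix corresponds to the cross-sectional specification reported in Tables 9.4–9.6: the absolute forecast error is regressed on size, the market-volatility dummy, turnover, earnings variance, forecasting experience, and industry dummies. The following is a rough sketch of such a regression under simplifying assumptions (synthetic data, ordinary least squares via numpy, illustrative variable names); it is not the chapter's own estimation code.

```python
import numpy as np

# Illustrative firm-year design matrix for the absolute-error regressions of
# Tables 9.4-9.6 (hypothetical data; in the chapter the dependent variable is
# the absolute forecast error for each forecast type, estimated separately).
rng = np.random.default_rng(0)
n_obs = 200
size       = rng.normal(15.0, 1.5, n_obs)      # ln(previous year's total revenue)
up_market  = rng.integers(0, 2, n_obs)         # 1 = upmarket year, 0 = down market
turnover   = rng.uniform(0.5, 5.0, n_obs)      # trading volume turnover
cv         = rng.uniform(0.0, 2.0, n_obs)      # corporate earnings variance
experience = rng.integers(0, 10, n_obs)        # number of prior forecasts issued
industry   = rng.integers(0, 5, n_obs)         # industry code (dummy-coded below)
afe        = rng.uniform(0.0, 3.0, n_obs)      # absolute forecast error

# Regressor matrix with an intercept and industry dummies (one industry dropped
# as the base case).
dummies = np.eye(5)[industry][:, 1:]
X = np.column_stack([np.ones(n_obs), size, up_market, turnover, cv, experience, dummies])

# Ordinary least squares fit.
beta, *_ = np.linalg.lstsq(X, afe, rcond=None)
print("estimated coefficients:", np.round(beta, 3))
```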
BCLN = E^{Q}\left[ \sum_{i=1}^{T} c\, e^{-r t_i} I(t_i < t_k) + \delta_k e^{-r t_k} I(t_k \le t_T) + e^{-r t_T} I(t_k > t_T) \right]
(23.14)
Here we employ the Monte Carlo simulation, which is described in Appendix 3, to obtain the initial fair BCLN coupon rate c as follows:

c = \frac{\sum_{s=1}^{W}\left[ 1 - \delta_k^{s}\, e^{-r t_k^{s}} I(t_k^{s} \le t_T) - e^{-r t_T} I(t_k^{s} > t_T) \right]}{\sum_{s=1}^{W}\sum_{i=1}^{T} e^{-r t_i} I(t_i < t_k^{s})}
(23.15)
where W represents the number of simulation runs. tks represents the kth default time, and dks denotes the recovery rate of the kth default reference entity at the sth simulation, respectively. When the issuer default risk is involved, whether the issuer default occurs before or after the kth default must be taken into account. This article defines ^t as the issuer default time and ^d as the issuer recovery rate. The BCLN holder gets back the recovered value of the reference obligation if the kth default occurs before both the issuer default time ^t and maturity date tT. If the issuer default occurs before the kth default and maturity date, the issuer will not provide the BCLN holder with the redemption proceeds and stop the coupon payments. In this situation, the notional principal multiplied by the issuer recovery rate is returned to the BCLN holder. To obtain all of the notional principal back, both the kth default time and the issuer default time must be later than the contract maturity date. Thus, the value of a kthto-default BCLN with issuer default risk is modified as follows:
BCLN = E^{Q}\left[ \sum_{i=1}^{T} c\, e^{-r t_i} I\big(t_i < \min(\hat{t}, t_k)\big) + \delta_k e^{-r t_k} I\big(t_k < \min(\hat{t}, t_T)\big) + \hat{\delta}\, e^{-r \hat{t}} I\big(\hat{t} < \min(t_k, t_T)\big) + e^{-r t_T} I\big(t_T < \min(t_k, \hat{t})\big) \right]
(23.16)
Therefore, the fair value of the coupon rate c with issuer default risk is

c = \frac{\sum_{s=1}^{W}\left[ 1 - \delta_k^{s}\, e^{-r t_k^{s}} I\big(t_k^{s} < \min(\hat{t}^{s}, t_T)\big) - \hat{\delta}\, e^{-r \hat{t}^{s}} I\big(\hat{t}^{s} < \min(t_k^{s}, t_T)\big) - e^{-r t_T} I\big(t_T < \min(t_k^{s}, \hat{t}^{s})\big) \right]}{\sum_{s=1}^{W}\sum_{i=1}^{T} e^{-r t_i} I\big(t_i < \min(t_k^{s}, \hat{t}^{s})\big)}
(23.17)
where \hat{t}^{s} represents the issuer default time at the sth simulation.
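As an illustration of how Eqs. 23.15 and 23.17 are evaluated, the following is a minimal Monte Carlo sketch. It assumes the simulated kth default times of the basket and the issuer default times are already available (for example from the factor copula of Appendix 1); the function name, argument names, and toy scenarios are ours, not the chapter's.

```python
import numpy as np

def fair_bcln_coupon(kth_default_times, issuer_default_times, r, T,
                     ref_recovery, issuer_recovery, coupon_dates):
    """Monte Carlo estimate of the fair kth-to-default BCLN coupon (Eq. 23.17 style).

    kth_default_times[s]    : kth default time of the basket in scenario s
    issuer_default_times[s] : issuer default time in scenario s
    T                       : maturity t_T; coupon_dates are the coupon times t_i <= T
    """
    numerator, denominator = 0.0, 0.0
    for t_k, t_hat in zip(kth_default_times, issuer_default_times):
        stop = min(t_k, t_hat)
        # Redemption leg: start from par and subtract the discounted amount received
        # at the kth default, at issuer default, or at maturity, whichever comes first.
        leg = 1.0
        if t_k < min(t_hat, T):
            leg -= ref_recovery * np.exp(-r * t_k)
        elif t_hat < min(t_k, T):
            leg -= issuer_recovery * np.exp(-r * t_hat)
        else:
            leg -= np.exp(-r * T)
        numerator += leg
        # Coupon leg: coupons are received while neither the kth default
        # nor the issuer default has occurred.
        denominator += sum(np.exp(-r * t_i) for t_i in coupon_dates if t_i < stop)
    return numerator / denominator

# Toy usage: four hand-made scenarios, annual coupons on a 5-year note.
coupon = fair_bcln_coupon(
    kth_default_times=[7.2, 2.7, 9.5, 6.4], issuer_default_times=[8.0, 9.0, 3.1, 7.7],
    r=0.03, T=5.0, ref_recovery=0.3, issuer_recovery=0.3,
    coupon_dates=[1, 2, 3, 4, 5])
print(f"fair coupon = {coupon:.4%}")
```

Setting the issuer default times beyond the horizon (or dropping the issuer branch) reduces the same routine to the default-free estimator of Eq. 23.15.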
23.4
Numerical Analysis
The numerical example presented here is a 5-year BCLN with three reference entities; all of which are with notional principal one dollar, hazard rate 5 %, and recovery rate 30 %. Furthermore, the coupon is paid annually; the hazard rate and recovery rate of the issuer are 1 % and 30 %, respectively. Sixty thousand runs of Monte Carlo simulation are executed to calculate the coupon rates, and the results are shown in Tables 23.1, 23.2 and 23.3. As we can see in Tables 23.1, 23.2 and 23.3, when issuer default risk is considered by viewing it as one reference entity of the credit portfolio (column II), the BCLN coupon rate increases compared to those without issuer default risk (column I) for k ¼ 1–3. This is reasonable because the existence of issuer default risk increases the risk of holding a BCLN; thus, the holder will demand a higher coupon rate. When the default correlations between the issuer and the reference entities are considered as in the proposed model, the BCLN coupon rates with issuer default risk (column III) are greater than those without issuer default risk (column I) for k ¼ 2 and 3 in Tables 23.2 and 23.3. However, when k ¼ 1 in Table 23.1, most of the BCLN coupon rates with issuer default risk are lower than those without issuer default risk, especially when the issuer and the reference entities are highly negatively or positively correlated. This result shows that the BCLN coupon rates with issuer default risk are not necessarily greater than those without issuer default risk. Moreover, from Figs. 23.2,
r 0.9 0.8 0.7 0.6 0.5 0.4 0.3 0.2 0.1 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9
Default free 8.0195 % 9.2625 % 10.2417 % 11.0469 % 11.7604 % 12.3336 % 12.7890 % 13.1304 % 13.3336 % 13.3875 % 13.3547 % 13.1236 % 12.7744 % 12.2933 % 11.7091 % 10.9854 % 10.1941 % 9.2061 % 7.9788 %
(I)
In credit portfolio 8.0346 % 9.3471 % 10.4279 % 11.3501 % 12.1928 % 12.8846 % 13.4509 % 13.8645 % 14.1181 % 14.1785 % 14.1215 % 13.8336 % 13.4073 % 12.8209 % 12.1252 % 11.2806 % 10.3750 % 9.2802 % 7.9892 %
(II) 0.9 – – – – – – – 7.5062 % 8.7846 % 9.0738 % 8.7274 % 7.4400 % – – – – – – –
0.6 – – – – – 8.8110 % 10.6576 % 11.7035 % 12.2398 % 12.3865 % 12.1743 % 11.5964 % 10.5344 % 8.6831 % – – – – –
(III) The proposed model 0.3 – – – – 10.2184 % 11.8856 % 12.9594 % 13.5910 % 13.9263 % 14.0210 % 13.8962 % 13.4923 % 12.8035 % 11.7668 % 10.1270 % – – – –
rXZ 0 – – 7.2895 % 10.5546 % 12.0213 % 12.9609 % 13.5479 % 13.9236 % 14.1327 % 14.1785 % 14.1373 % 13.8863 % 13.4871 % 12.8436 % 11.9387 % 10.4846 % 7.1962 % – – 0.3 – 7.0292 % 10.2124 % 11.4061 % 12.0974 % 12.5139 % 12.7852 % 12.9839 % 13.1152 % 13.1205 % 13.0764 % 12.9678 % 12.7940 % 12.5034 % 12.0415 % 11.3343 % 10.1661 % 6.9644 % –
0.6 – 9.3593 % 10.1562 % 10.5488 % 10.7840 % 10.9275 % 11.0273 % 11.0791 % 11.1038 % 11.1105 % 11.1036 % 11.0703 % 10.9995 % 10.9036 % 10.7603 % 10.5102 % 10.1106 % 9.2933 % –
0.9 7.6700 % 7.8852 % 7.9327 % 7.9452 % 7.9650 % 7.9665 % 7.9659 % 7.9894 % 7.9758 % 7.9933 % 8.0044 % 7.9941 % 7.9702 % 7.9704 % 7.9715 % 7.9448 % 7.9210 % 7.8398 % 7.6319 %
Table 23.1 First-to-default BCLN coupon rates without and with issuer default risk. (I) Default free: Issuer default risk is not included in the pricing model. (II) In credit portfolio: The issuer is viewed as one reference entity of the credit portfolio. The default correlations between the issuer and the reference entities are fixed to r2, which is always positive. (III) The proposed model: The default correlation between the issuer and the reference entities is rXZ, which may be positive or negative
23 Factor Copula for Defaultable Basket Credit Derivatives 647
r 0.9 0.8 0.7 0.6 0.5 0.4 0.3 0.2 0.1 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9
Default free 5.2169 % 4.9714 % 4.7203 % 4.4753 % 4.2476 % 4.0568 % 3.8888 % 3.7766 % 3.6917 % 3.6646 % 3.6833 % 3.7480 % 3.8817 % 4.0242 % 4.2155 % 4.4092 % 4.6447 % 4.9138 % 5.1952 %
(I)
In credit portfolio 5.2585 % 5.1490 % 5.0406 % 4.9077 % 4.7844 % 4.6587 % 4.5440 % 4.4490 % 4.3717 % 4.3412 % 4.3559 % 4.4025 % 4.5070 % 4.5995 % 4.7249 % 4.8338 % 4.9559 % 5.0873 % 5.2336 %
(II) 0.9 – – – – – – – 6.3028 % 6.1079 % 6.0386 % 6.0959 % 6.2475 % – – – – – – –
0.6 – – – – – 6.1656 % 5.7880 % 5.5060 % 5.3332 % 5.2612 % 5.3033 % 5.4584 % 5.7235 % 6.0943 % – – – – –
(III) The proposed model 0.3 – – – – 5.8264 % 5.3827 % 5.0664 % 4.8417 % 4.7028 % 4.6477 % 4.6836 % 4.7916 % 4.9942 % 5.3165 % 5.7533 % – – – –
rXZ 0 – – 6.1463 % 5.5463 % 5.1061 % 4.8129 % 4.6078 % 4.4584 % 4.3759 % 4.3412 % 4.3606 % 4.4244 % 4.5580 % 4.7513 % 5.0596 % 5.4952 % 6.0639 % – – 0.3 – 5.9481 % 5.3390 % 4.9738 % 4.7372 % 4.6044 % 4.4986 % 4.4238 % 4.3711 % 4.3441 % 4.3478 % 4.3710 % 4.4430 % 4.5276 % 4.6854 % 4.9002 % 5.2704 % 5.8734 % –
0.6 – 5.1998 % 4.9617 % 4.8270 % 4.7592 % 4.7055 % 4.6566 % 4.6153 % 4.6020 % 4.5867 % 4.5939 % 4.5969 % 4.6288 % 4.6679 % 4.7051 % 4.7565 % 4.8851 % 5.1404 % –
0.9 5.2870 % 5.2213 % 5.2041 % 5.2135 % 5.1736 % 5.1541 % 5.1422 % 5.1375 % 5.1434 % 5.1507 % 5.1636 % 5.1732 % 5.1900 % 5.1908 % 5.2016 % 5.2005 % 5.1929 % 5.2065 % 5.2602 %
Table 23.2 Second-to-default BCLN coupon rates without and with issuer default risk. (I) Default free: Issuer default risk is not included in the pricing model. (II) In credit portfolio: The issuer is viewed as one reference entity of the credit portfolio. The default correlations between the issuer and the reference entities are fixed to r2, which is always positive. (III) The proposed model: The default correlation between the issuer and the reference entities is rXZ, which may be positive or negative
648 P.-C. Wu et al.
r 0.9 0.8 0.7 0.6 0.5 0.4 0.3 0.2 0.1 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9
Default free 3.5790 % 3.0115 % 2.6429 % 2.3993 % 2.2319 % 2.1107 % 2.0280 % 1.9809 % 1.9499 % 1.9359 % 1.9470 % 1.9719 % 2.0171 % 2.0883 % 2.2105 % 2.3793 % 2.6279 % 2.9783 % 3.5583 %
(I)
In credit portfolio 3.7167 % 3.3632 % 3.1441 % 2.9853 % 2.8735 % 2.7835 % 2.7177 % 2.6597 % 2.6251 % 2.6020 % 2.6101 % 2.6320 % 2.6668 % 2.7214 % 2.8231 % 2.9471 % 3.1099 % 3.3144 % 3.6858 %
(II) 0.9 – – – – – – – 5.3797 % 4.5085 % 4.3246 % 4.5012 % 5.3262 % – – – – – – –
0.6 – – – – – 4.5560 % 3.6782 % 3.3040 % 3.1384 % 3.0819 % 3.1331 % 3.2787 % 3.6289 % 4.4910 % – – – – –
(III) The proposed model 0.3 – – – – 3.7706 % 3.2013 % 2.9511 % 2.8003 % 2.7242 % 2.6916 % 2.7072 % 2.7677 % 2.8962 % 3.1651 % 3.7405 % – – – –
rXZ 0 – – 5.2536 % 3.4885 % 3.0416 % 2.8419 % 2.7344 % 2.6635 % 2.6247 % 2.6020 % 2.6095 % 2.6357 % 2.6780 % 2.7836 % 2.9960 % 3.4615 % 5.2006 % – – 0.3 – 5.1041 % 3.3862 % 3.0143 % 2.8644 % 2.7752 % 2.7243 % 2.6743 % 2.6480 % 2.6360 % 2.6356 % 2.6482 % 2.6788 % 2.7186 % 2.8098 % 2.9825 % 3.3573 % 5.0265 % –
0.6 – 3.4122 % 3.1083 % 2.9859 % 2.9228 % 2.8858 % 2.8614 % 2.8273 % 2.8155 % 2.8013 % 2.7996 % 2.8099 % 2.8205 % 2.8426 % 2.8894 % 2.9468 % 3.0784 % 3.3628 % –
0.9 3.8150 % 3.6937 % 3.6545 % 3.6223 % 3.6225 % 3.6088 % 3.5933 % 3.5886 % 3.5889 % 3.5745 % 3.5754 % 3.5761 % 3.5859 % 3.5879 % 3.6007 % 3.6257 % 3.6345 % 3.6818 % 3.7992 %
Table 23.3 Third-to-default BCLN coupon rates without and with issuer default risk. (I) Default free: Issuer default risk is not included in the pricing model. (II) In credit portfolio: The issuer is viewed as one reference entity of the credit portfolio. The default correlations between the issuer and the reference entities are fixed to r2, which is always positive. (III) The proposed model: The default correlation between the issuer and the reference entities is rXZ, which may be positive or negative
23 Factor Copula for Defaultable Basket Credit Derivatives 649
[Figure: first-to-default BCLN coupon rate (vertical axis) plotted against r (horizontal axis) for rXZ = 0.9, 0.5, 0, −0.5, −0.9 and the default-free case (ND)]
Fig. 23.2 First-to-default BCLN coupon rates under various default correlations between the common factor and reference entities/issuer (r)
[Figure: second-to-default BCLN coupon rate (vertical axis) plotted against r (horizontal axis) for rXZ = 0.9, 0.5, 0, −0.5, −0.9 and the default-free case (ND)]
23.3 and 23.4, we find that when the correlation between the issuer and the reference entities approaches a strongly positive correlation (rXZ ¼ 0.9), the BCLN coupon rate curve becomes flatter and less sensitive to the common factor.
23.5
Conclusion
This article applies a factor copula approach for evaluating basket credit derivatives with issuer default risk. The proposed model considers the different effects of
23
Factor Copula for Defaultable Basket Credit Derivatives
651
Third to Default BCLN 6% rxz
Coupon Rate
5%
0.9 4%
0.5 0
3%
−0.5 −0.9
2%
ND 1% −1
−0.5
0 r
0.5
1
Fig. 23.4 Third-to-default BCLN coupon rates under various default correlations between the common factor and reference entities/issuer (r)
the default correlation between the issuer and the reference entities. A numerical example of the proposed model on BCLN is demonstrated and discussed in the article. The example shows that viewing the issuer default as a new reference entity cannot reflect the effect of issuer default risk thoroughly. The different default correlation between the issuer and the reference entities affects the coupon rate greatly and must be taken into account in credit derivative pricing.
Appendix 1: Factor Copula

In a factor copula model, we assume that different variables depend on some common factors. The most widely used model in finance is the one-factor Gaussian copula model.
One-Factor Gaussian Copula Model

Let S_i(t) = P(τ_i > t) and F_i(t) = P(τ_i ≤ t) be the marginal survival and marginal default distributions, respectively. Let Y be the common factor and f its density function. Assume the default times are conditionally independent given the common factor Y. Then q_t^{i|Y} = P(τ_i > t | Y) and p_t^{i|Y} = P(τ_i ≤ t | Y) are the conditional survival and conditional default distributions, respectively. According to the law of iterated expectations, the joint survival and default distribution functions are as follows:
$$S(t_1, t_2, \ldots, t_n) = P(\tau_1 > t_1, \tau_2 > t_2, \ldots, \tau_n > t_n) = \int \prod_{i=1}^{n} q_{t_i}^{i\mid y}\, f(y)\, dy \qquad (23.18)$$

$$F(t_1, t_2, \ldots, t_n) = P(\tau_1 \le t_1, \tau_2 \le t_2, \ldots, \tau_n \le t_n) = \int \prod_{i=1}^{n} p_{t_i}^{i\mid y}\, f(y)\, dy \qquad (23.19)$$
In the one-factor Gaussian copula, the credit variable X_i is Gaussian distributed. X_i depends on the common factor Y and an individual factor ε_{X_i} as follows:

$$X_i = \rho_{X_i Y}\, Y + \sqrt{1 - \rho_{X_i Y}^{2}}\; \varepsilon_{X_i}, \qquad i = 1, 2, \ldots, N \qquad (23.20)$$

where Y and ε_{X_i} are two independent standard Gaussian random variables. The default time is obtained as τ_i = F_i^{-1}(Φ(X_i)), where Φ(·) is the cumulative distribution function of a standard Gaussian variable. Then the conditional default distribution of τ_i, given the common factor Y, is

$$p_t^{i\mid Y} = \Phi\!\left(\frac{\Phi^{-1}(F_i(t)) - \rho_{X_i Y}\, Y}{\sqrt{1 - \rho_{X_i Y}^{2}}}\right), \qquad (23.21)$$

and the joint distribution function of τ_1, τ_2, ..., τ_n is as follows:

$$F(t_1, t_2, \ldots, t_n) = \int \prod_{i=1}^{n} p_{t_i}^{i\mid y}\, f(y)\, dy = \int \prod_{i=1}^{n} \Phi\!\left(\frac{\Phi^{-1}(F_i(t_i)) - \rho_{X_i Y}\, y}{\sqrt{1 - \rho_{X_i Y}^{2}}}\right) f(y)\, dy \qquad (23.22)$$

where f(y) is the standard Gaussian density. The copula of the default times is a Gaussian copula.
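To make the mechanics of Eqs. 23.20 and 23.21 concrete, the following minimal Python sketch draws correlated default times from a one-factor Gaussian copula. The flat hazard rate, the correlation value, and the function name are illustrative assumptions, not part of the chapter.

```python
import numpy as np
from scipy.stats import norm

def sample_default_times(n_names, n_sims, rho, hazard=0.02, seed=0):
    """Draw default times tau_i from a one-factor Gaussian copula.

    Marginals are assumed exponential, F_i(t) = 1 - exp(-hazard * t), and the
    credit variables follow X_i = rho*Y + sqrt(1 - rho^2)*eps_i (Eq. 23.20).
    """
    rng = np.random.default_rng(seed)
    Y = rng.standard_normal((n_sims, 1))             # common factor
    eps = rng.standard_normal((n_sims, n_names))      # individual factors
    X = rho * Y + np.sqrt(1.0 - rho ** 2) * eps       # credit variables
    U = norm.cdf(X)                                   # Phi(X_i) is uniform on (0, 1)
    tau = -np.log(1.0 - U) / hazard                   # tau_i = F_i^{-1}(Phi(X_i))
    return tau

# Example: estimate P(at least 3 of 5 names default within 5 years)
tau = sample_default_times(n_names=5, n_sims=100_000, rho=0.3)
print(np.mean(np.sort(tau, axis=1)[:, 2] <= 5.0))
```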
Law of Iterated Expectations

The law of iterated expectations states that E(Y) = E(E(Y|X)). The proof is as follows:
$$\begin{aligned}
E(E(Y \mid X)) &= \int E(Y \mid X = x)\, f_X(x)\, dx \\
&= \int \left[\int y\, f_{Y\mid X}(y \mid x)\, dy\right] f_X(x)\, dx \\
&= \iint y\, f_{X,Y}(x, y)\, dx\, dy \\
&= \int y \left[\int f_{X,Y}(x, y)\, dx\right] dy \\
&= \int y\, f_Y(y)\, dy = E(Y)
\end{aligned} \qquad (23.23)$$
Appendix 2: Cholesky Decomposition and Correlated Gaussian Random Numbers

Cholesky Decomposition

A symmetric positive definite real matrix A can be decomposed as follows:

$$A = LL^{T} \qquad (23.24)$$

where L is a lower triangular matrix with strictly positive diagonal entries and L^T is the transpose of L. This factorization is the Cholesky decomposition, and it is unique. The entries of L are as follows:

$$L_{j,j} = \sqrt{A_{j,j} - \sum_{k=1}^{j-1} L_{j,k}^{2}} \qquad (23.25)$$

$$L_{i,j} = \frac{1}{L_{j,j}}\left(A_{i,j} - \sum_{k=1}^{j-1} L_{i,k}\, L_{j,k}\right), \quad \text{for } i > j \qquad (23.26)$$
Correlated Gaussian Random Numbers

In financial applications, the Cholesky decomposition is usually used to create correlated Gaussian random variables. Suppose we need to generate n correlated Gaussian random numbers, X_1, X_2, ..., X_n, given a positive definite correlation coefficient matrix R. We first generate iid standard Gaussian random variables Z_1, Z_2, ..., Z_n and let
$$X = \begin{bmatrix} X_1 \\ X_2 \\ \vdots \\ X_n \end{bmatrix}, \qquad Z = \begin{bmatrix} Z_1 \\ Z_2 \\ \vdots \\ Z_n \end{bmatrix}.$$

Let X = CZ, where C is an n × n matrix; then R = Var(X) = E(XXᵀ) = CCᵀ, and C is the lower triangular matrix of the Cholesky decomposition of R. Therefore, we can obtain the correlated Gaussian random variables X_1, X_2, ..., X_n by letting X = CZ.
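As a concrete illustration of Eqs. 23.24, 23.25, 23.26 and the transformation X = CZ, the short numpy sketch below builds correlated Gaussian draws from a given correlation matrix; the example 3 × 3 matrix is purely an assumption for illustration.

```python
import numpy as np

def correlated_normals(R, n_sims, seed=0):
    """Generate draws of X = C Z, where C is the Cholesky factor of R (R = C C^T)."""
    C = np.linalg.cholesky(R)                        # lower triangular factor, Eq. 23.24
    rng = np.random.default_rng(seed)
    Z = rng.standard_normal((n_sims, R.shape[0]))    # iid standard Gaussians
    return Z @ C.T                                   # each row is one draw of X

# Example with an assumed correlation matrix
R = np.array([[1.0, 0.5, 0.2],
              [0.5, 1.0, 0.3],
              [0.2, 0.3, 1.0]])
X = correlated_normals(R, n_sims=200_000)
print(np.corrcoef(X, rowvar=False).round(2))         # sample correlations close to R
```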
Appendix 3: Monte Carlo Simulation

Monte Carlo simulation is a computational algorithm based on random number sampling. It generates n iid random samples X_1, X_2, ..., X_n of the random variable X and estimates E[X] by the sample mean

$$\bar{X} = \frac{X_1 + X_2 + \cdots + X_n}{n}.$$

X̄ converges to E[X] as n → ∞, according to the law of large numbers.
Weak Law of Large Numbers (WLLN)

Let X_1, X_2, ..., X_n be an iid sequence of random variables for which E[X] < ∞. Then (X_1 + X_2 + ⋯ + X_n)/n → E[X] in probability as n → ∞. The WLLN implies that the Monte Carlo method converges to E[X].
Strong Law of Large Numbers (SLLN)

Let X_1, X_2, ..., X_n be an iid sequence of random variables for which E[X] < ∞. Then (X_1 + X_2 + ⋯ + X_n)/n → E[X] almost surely as n → ∞. The SLLN implies that the Monte Carlo method almost surely converges to E[X].
Uniform and Nonuniform Random Numbers

Usually, the computer has a uniform random number generator, which can generate a sequence of iid uniform random numbers U_1, U_2, ... on [0,1]. How, then, do we obtain nonuniform random numbers? Usually, this is achieved by inversion. Given a distribution function F(·), there is a one-to-one mapping from U(·) to F(·). Since F(·) is nondecreasing, F⁻¹(x) = inf{y : F(y) ≥ x}. If F is continuous and strictly increasing, then F(F⁻¹(x)) = F⁻¹(F(x)) = x. Thus, nonuniform random numbers with distribution F can be obtained by x = F⁻¹(u), where u is a uniform random number on [0,1].
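The following small sketch ties the inversion method to the Monte Carlo estimator: uniform draws are mapped through F⁻¹ to nonuniform (here exponential) random numbers, and the sample mean converges to E[X] by the law of large numbers. The distribution, rate, and sample size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 0.5                           # assumed exponential rate; F(x) = 1 - exp(-lam * x)

u = rng.uniform(size=1_000_000)     # iid U(0,1) random numbers
x = -np.log(1.0 - u) / lam          # x = F^{-1}(u), the inversion method

print(x.mean())                     # Monte Carlo estimate of E[X]; true value is 1/lam = 2
```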
24 Panel Data Analysis and Bootstrapping: Application to China Mutual Funds

Win Lin Chou, Shou Zhong Ng, and Yating Yang
Contents
24.1 Introduction ... 658
24.2 Wild-Cluster Bootstrap ... 660
24.3 Empirical Results ... 661
  24.3.1 Data and Definitions of the Variables ... 661
  24.3.2 Results ... 661
24.4 Conclusion ... 665
Appendix 1: Wild-Cluster Bootstrap Procedure ... 666
References ... 667
Abstract
Thompson (Journal of Financial Economics 99, 1–10, 2011) argues that double clustering the standard errors of parameter estimators matters the most when the number of firms and time periods are not too different. Using panel data of similar number in firms and time periods on China’s mutual funds, we estimate double- and single-clustered standard errors by wild-cluster bootstrap procedure. To obtain the wild bootstrap samples in each cluster, we reuse the regressors (X) but modify the residuals by transforming the OLS residuals with weights which follow the popular two-point distribution suggested by Mammen (Annals of Statistics 21, 255–285, 1993) and others. We then compare them with other
estimates in a set of asset pricing regressions. The comparison indicates that bootstrapped standard errors from double clustering outperform those from single clustering. Our findings support Thompson's argument. They also suggest that bootstrapped critical values are preferred to standard asymptotic t-test critical values to avoid misleading test results.

Keywords
Asset-pricing regression • Bootstrapped critical values • Cluster standard errors • Double clustering • Firm and time effects • Finance panel data • Single clustering • Wild-cluster bootstrap
24.1 Introduction
Researchers using finance panel data have increasingly realized the need to account for the residual correlation across both firms and/or time in estimating standard errors of regression parameter estimates. Ignoring such clustering can result in biased OLS standard errors. Two forms of residual dependence that are common in finance applications are time-series dependence and cross-sectional dependence. The former is called a firm effect, whereas the latter a time effect. The usual solution to account for the residual dependence is to compute clustered standard errors. The notable examples are Petersen (2009) and Thompson (2011). Using the Monte Carlo-simulated panel data, Petersen (2009) compares the performance of many different standard error estimation methods surveyed in the literature. These methods include White’s heteroskedasticity-robust standard errors, single clustering (by firm or by time), and double clustering (by both firm and time). His findings suggest that the performance of different methods depends on the forms of residual dependence. For example, in the presence of a firm effect, the clustered standard errors are unbiased and can produce correctly sized confidence intervals while those estimated by OLS, White, or Fama-MacBeth method are biased. Much of the analysis in Petersen (2009) is based on the simulated panel data set whose data structure is certain. With the simulated panel data set, it is easier to choose among the estimation methods. This paper chooses an alternative method, namely, bootstrapping, to investigate the performance of standard errors estimated by White’s OLS and single- and double-clustering methods with actually observed data. The use of the bootstrap method is motivated by Kayhan and Titman (2007) who show that bootstrapped standard errors are robust to heteroskedasticity, and serial correlation problems in panel finance data applications. Moreover, despite the wide use of the bootstrap in statistical and econometric applications, the survey finding of Petersen (2009) found that the bootstrap applications are relatively scarce in the finance literature. Hence, it may be of some interest to investigate the bootstrapping application to a set of panel finance data on Chinese mutual funds.
Table 24.1 Summary statistics for variables used in China's mutual fund regressions (Sept 2002–Aug 2006)

Variable                 Mean     Std. deviation   Skewness   Kurtosis   W-sq        A-sq
Mutual fund returns      0.0006   0.0580           0.1313     0.1489     0.1809***   0.2062***
Market excess returns    0.0025   0.0465           0.1554     0.7801     2.4993***   18.3650***

Notes: Sample size N = 2,592. The individual mutual fund return is the dependent variable, while the market excess return is the independent variable. W-sq (W²) is the Cramer–von Mises test statistic and A-sq (A²) is the Anderson–Darling test statistic; both are empirical distribution function (EDF) statistics for normality. The computing formulas for the W² and A² statistics are available in Stephens (1974), p. 731.
*** denotes statistical significance at the 1 % level
The bootstrap method is applied to a panel data set on the monthly returns for 54 Chinese mutual funds over the period of September 2002–August 2006. The data set is applied to a set of asset pricing regressions. Table 24.1 contains summary statistics such as sample skewness, sample excess kurtosis, and two test statistics for normality for the variables used in the asset pricing regressions. They suggest that normality does not characterize the variables. Additionally, since the time-series and/or cross-sectional independence assumption is most likely to be violated in panel data sets, ignoring this dependence could result in biased estimates of the standard errors. As evidenced in Kayhan and Titman (2007), bootstrapping is a possible alternative to handle this dependence issue. In this paper, we are particularly interested in the performance of the bootstrapped double-clustered standard error estimates, because Thompson (2011) has argued that double clustering matters most when the number of firms and time periods are not too different. Given that our panel data set consists of 54 China mutual fund returns over 48 months and exhibits firm and time effects, double clustering is likely to show a significant difference. Our findings show that the bootstrapped standard errors from double clustering lead to more significant test results. We also demonstrate the importance of using bootstrapped critical values in hypothesis testing. A number of bootstrap procedures are available in the literature. The bootstrap procedure we consider in this paper is the wild-cluster bootstrap, which is the extension of the wild bootstrap to the cluster setting proposed by Cameron et al. (2008). This procedure has been shown by Cameron et al. (2008) to perform very well in practice, despite the fact that the pairs cluster bootstrap works well in principle. In this paper, our comparison of the finite-sample size of the bootstrapped t-statistics resulting from the pairs cluster bootstrap and the wild-cluster bootstrap also indicates that the wild-cluster bootstrap performs better. The rest of the paper is organized as follows. Section 24.2 presents the wild-cluster bootstrap procedure. Section 24.3 discusses the empirical results, and the last section gives conclusions.
24.2 Wild-Cluster Bootstrap
The bootstrap we use in this paper is known as the "wild-cluster bootstrap," which is based on the nonclustered wild bootstrap proposed by Wu (1986). Proofs of the ability of the wild bootstrap to provide refinements in linear regression models with heteroskedastic errors can be found in Liu (1988) and Mammen (1993). Cameron et al. (2008) extended Wu's (1986) wild bootstrap to the clustered setting.

The wild-cluster bootstrap procedure involves two stages. In the first stage, we consider the asset pricing model with G clusters (subscripted by g) and with N_g observations within each cluster, namely, y_g = X_g β + ε_g, g = 1, ..., G, where β is k × 1, X_g is N_g × k, and y_g and ε_g are N_g × 1 vectors. We fit the model to the actually observed panel data set by OLS and estimate White's heteroskedasticity-robust standard errors, as well as standard errors clustered by firm, by time, and by both. We then save the residuals and denote them as ε̂_g.

The second stage is the bootstrap resampling procedure, which creates samples (y*_1, X_1), ..., (y*_G, X_G), where y*_g = X_g β̂ + ε*_g for each cluster. For each bootstrap sample in a cluster, the explanatory variables are reused and unchanged. The residuals ε*_g are constructed according to ε*_g = a_g ε̂_g, where the weight a_g serves as a transformation of the OLS residuals ε̂_g. A variety of constructions of the weights a_g have been proposed in the literature.¹ We use the two-point distribution of the weight variable a_g suggested in Mammen (1993), Brownstone and Valletta (2001), and Davidson and Flachaire (2008), namely, a_g takes on one of the following values: (i) (1 − √5)/2 ≈ −0.6180 with probability (1 + √5)/(2√5) ≈ 0.7236 or (ii) (1 + √5)/2 ≈ 1.6180 with probability 1 − (1 + √5)/(2√5) ≈ 0.2764. Note that this random variable a_g has mean zero, variance equal to one, and satisfies the constraint E(a_g³) = 1.

We perform 1,000 replications. On each replication, a new set of ε*_g is generated, a new set of bootstrap data is created based on y*_g = X_g β̂ + ε*_g, and therefore a new set of parameter estimates, denoted as β̂*, is obtained. For the 1,000 starred estimates β̂*, we calculate their bootstrapped standard errors using different estimation methods. The bootstrapped test statistics are calculated by dividing the 1,000 parameter estimates by the corresponding bootstrapped standard errors. The bootstrapped critical values can be obtained from the bootstrapped distribution of these test statistics. A detailed explanation of the procedure we follow is documented in Appendix 1.
¹ For example, in Cameron et al. (2008), a_g takes the value +1 with probability 0.5 or the value −1 with probability 0.5.
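As a small illustration (not part of the paper), the two-point weight distribution described above can be drawn as follows; the sample moments confirm E(a_g) = 0, E(a_g²) = 1, and E(a_g³) = 1.

```python
import numpy as np

def mammen_weights(n, seed=0):
    """Draw n wild-bootstrap weights from Mammen's (1993) two-point distribution."""
    rng = np.random.default_rng(seed)
    lo = (1 - np.sqrt(5)) / 2                    # about -0.618, probability about 0.7236
    hi = (1 + np.sqrt(5)) / 2                    # about  1.618, probability about 0.2764
    p_lo = (1 + np.sqrt(5)) / (2 * np.sqrt(5))
    return np.where(rng.uniform(size=n) < p_lo, lo, hi)

a = mammen_weights(1_000_000)
print(a.mean(), (a ** 2).mean(), (a ** 3).mean())   # approximately 0, 1, 1
```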
24.3 Empirical Results
24.3.1 Data and Definitions of the Variables

The Data. The sample consists of the returns on 54 publicly traded closed-end mutual funds gathered for 48 months from September 2002 to August 2006, a total of 2,592 observations. The mutual fund data set is purchased from the GTA Information Technology Company, Shenzhen, China. For simplicity, we divide the mutual funds' investment objectives into equity growth and nongrowth funds. Our sample consists of 37 (68.5 %) growth funds and 17 (31.5 %) nongrowth funds. Although the first closed-end fund in China was sold to the public in April 1998,² complete data for all 54 mutual funds are collected from September 2002. The summary statistics for the variables used in China's mutual fund return regressions are displayed in Table 24.1, and the test statistics for normality suggest that the variables are non-normal.

Definitions of the variables. The following are the definitions of the variables used in the estimation procedures:
Ri,t = the return of mutual fund i in excess of the risk-free rate in month t. The savings deposit rate of the People's Bank of China³ is used as the proxy for the risk-free rate.
Rm,t = the market return in excess of the risk-free rate in month t, calculated⁴ as follows:

$$R_{m,t} = 0.4\,R_{1,t} + 0.4\,R_{2,t} + 0.2 \times 0.0006, \qquad (24.1)$$

where R1 is the monthly return on the Shanghai Stock Exchange index, R2 is the monthly return on the Shenzhen Stock Exchange index, and 0.06 % is the monthly return on savings deposits.
24.3.2 Results

Table 24.2 presents the results from a set of asset pricing regressions of China mutual fund returns on market returns. The firm and time effects in the OLS residuals and data can be seen graphically in Fig. 24.1. Panel A of Fig. 24.1 shows the within-firm autocorrelations in the OLS residuals and in the independent variable for lags 1–12. Panel B of Fig. 24.1 displays the within-month autocorrelations of the residuals for lags 1–12. As the independent variable is a monthly series without cross-sectional units, we cannot calculate its within-month autocorrelations; thus no within-month plot is available for the independent variable. Figure 24.1 suggests that the residuals exhibit both firm and time effects, whereas the
² Chen and Lin (2006), p. 384.
³ Data are taken from the website of the People's Bank of China: http://www.pbc.gov.cn/publish/zhengcehuobisi/627/index.html.
⁴ The calculation follows that of Shen and Huang (2001), p. 24.
Table 24.2 Application to asset pricing modeling

                                      t-statistics
                        Estimate   White       Clustered by firm   Clustered by time   Clustered by firm and time
                                   (I)         (II)                (III)               (IV)
Rm,t                    0.8399     46.935***   65.492***           9.017***            9.099***
1 % critical value (CV)            2.576       2.576               2.576               2.576
Intercept               0.0016     1.858       2.222**             0.326               0.327
1 % critical value (CV)            2.576       2.576               2.576               2.576
Coefficient estimates   OLS
R-squared               0.4539
                                   (V)         (VI)                (VII)               (VIII)
Rm,t                    0.8401     47.215***   66.808***           50.886***           73.672***
Bootstrapped 1 % CV                1.951       2.661               2.520               11.898
Intercept               0.0016     1.878**     2.262**             2.033**             2.659
Bootstrapped 1 % CV                2.300       2.717               2.371               12.288
Coefficient estimates   Wild-cluster bootstrap
R-squared               0.4579

Notes: The dependent variable is the monthly mutual fund return in excess of the risk-free rate, denoted Ri,t, and the independent variable Rm,t is the market return in excess of the risk-free rate. Both variables are monthly observations from September 2002 to August 2006.
*** and ** denote statistical significance at the 1 % and 5 % levels, respectively
independent variable shows firm effects. Given Thompson’s (2011) argument that double clustering matters most when the number of firms and time periods are not too different, our data set which has the number of mutual funds (54) similar to the number of months (48) is expected to imply that double clustering is important in our analysis. In Table 24.2, the second column presents the OLS parameter estimates, whereas the remaining columns report their corresponding t-statistics by dividing each parameter estimate by its corresponding standard error. These t-statistics indicate all beta coefficients are statistically significant at the 1 % level, whereas the intercept is only significant in one case (under column II) when the standard error computed by single clustering by firm is used. More importantly, the t-statistics in Table 24.2 enable us to compare the clustered standard errors constructed from double and single clustering with the OLS White estimate. Notice that the t-statistic for beta coefficient obtained from double clustering (SEˆboth) is 9.10 (column IV) which is much smaller than 46.9 calculated using the White method (column I), indicating the presence of firm and time effects. It also means that the double-clustering standard errors are much larger. A comparison of t-statistics in columns III and IV implies the SEˆboth of the beta coefficient (9.10) is similar to the standard error clustered by time which is 9.01. This means that the firm effects do not matter much. The comparison reveals that OLS White standard errors are underestimated when residuals exhibit both firm and time effects.
[Fig. 24.1, Panel A: Within Firm; Panel B: Within Month. Autocorrelation is plotted against lags 1–12 in both panels; Panel A shows the residuals and Rm,t, Panel B the residuals only.]
Fig. 24.1 The autocorrelations of residuals and the independent variable are plotted for 1–12 lags. The solid lines in Panel A and B show, respectively, within-firm and within-month autocorrelations in residuals, whereas the dashed line in Panel A shows the within-firm autocorrelations in the independent variable
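A minimal sketch of how the quantities plotted in Fig. 24.1 could be computed is given below; the matrix layout, function name, and use of numpy are assumptions (the paper reports using GAUSS), and the random matrix merely stands in for the actual OLS residuals.

```python
import numpy as np

def pooled_autocorr(mat, max_lag=12):
    """Average lag-k autocorrelation across the columns of a (time x unit) matrix.

    With residuals arranged as months x funds, this gives the within-firm
    autocorrelations; passing the transposed matrix gives the within-month ones
    (autocorrelation across the fund ordering within each month).
    """
    out = []
    for lag in range(1, max_lag + 1):
        acs = [np.corrcoef(col[:-lag], col[lag:])[0, 1] for col in mat.T]
        out.append(np.mean(acs))
    return np.array(out)

# Illustrative use with random numbers standing in for the OLS residuals (48 months x 54 funds)
resid = np.random.default_rng(0).standard_normal((48, 54))
print(pooled_autocorr(resid))       # within-firm autocorrelations, lags 1-12
print(pooled_autocorr(resid.T))     # within-month autocorrelations, lags 1-12
```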
Turning to the results obtained by the wild-cluster bootstrapping, the story changes. The bootstrapped t-statistics of the beta coefficient estimates displayed in columns V–VIII of Table 24.2 differ quite significantly from those in columns I–IV. The t-statistic is 47.2 when the bootstrapped White standard error is used and 66.8 if the bootstrapped standard error clustered by the firm is used. This means the firm effect is significant in the data. By a similar comparison, the bootstrapped t-statistic is 73.7 when the double-clustered standard error is used, meaning both the time and firm effects are strong in the residuals. The bootstrapped t-statistic is 50.9 when the bootstrapped standard error clustered by time is used implying the time effect exists in the data. These comparisons suggest both the firm and time effects matter in the computation of the bootstrapped standard errors using double as well as single clustering. Their implication is that we might follow what Kayhan and Titman (2007) have done in their study to simply compute the bootstrapped standard errors with our panel data set.
Table 24.3 Bootstrapped critical values

                                         Percentiles
                              0.005     0.025    0.05     0.95    0.975   0.995
OLS White          Rm,t       −1.842    −1.396   −1.174   1.213   1.490   1.951
                   Intercept  −2.300    −1.656   −1.396   1.385   1.587   2.091
Clustered by firm  Rm,t       −2.621    −1.984   −1.698   1.766   2.007   2.661
                   Intercept  −2.717    −2.035   −1.700   1.778   2.014   2.672
Clustered by time  Rm,t       −2.513    −1.795   −1.510   1.517   1.821   2.520
                   Intercept  −2.371    −1.956   −1.608   1.629   1.917   2.364
Clustered by firm  Rm,t       −6.517    −4.376   −3.491   3.081   4.416   11.898
and time           Intercept  −12.288   −4.053   −2.792   3.136   4.384   8.168

Note: Critical values are obtained by implementing the bootstrap procedure presented in Appendix 1
Table 24.4 Rejection rates using wild-cluster bootstrapped critical values

            OLS White   Clustered by firm   Clustered by time   Clustered by firm and time
Rm,t        0.050       0.067               0.053               0.052
Intercept   0.042       0.034               0.037               0.036
Note: The number of replications is 1,000
The statistical significance of the bootstrapped t-statistics of the beta coefficient estimates is determined by using the bootstrapped critical values reported in Table 24.3. Compared with the bootstrapped critical values presented in Table 24.3, we notice that all bootstrapped t-statistics constructed from bootstrapped standard errors are statistically significant at the 1 % level. The intercept is now significant in three cases (columns V–VII) when the standard errors were computed by White and by single clustering. It is noteworthy that in Table 24.3 the bootstrapped critical values on beta coefficient estimates when double clustering is used are numerically larger than the corresponding asymptotic t-test critical values of 2.58 (1 %), 1.96 (5 %), and 1.65 (10 %), indicating that the use of the large-sample (normal approximation) critical values can lead to misleading test results when both firm and time effects exist in the residuals. On the other hand, intercept coefficient in column VIII was not significant when bootstrapped double clustering is used. Interestingly, we observe from Table 24.3 that bootstrapped critical values differ considerably depending on different standard error estimation methods. We now examine the finite-sample size of the bootstrapped t-statistics assuming the beta coefficient takes the OLS estimate under the null hypothesis. Table 24.4 shows that for the bootstrapped t-statistics, no serious size distortion is found as
Table 24.5 Rejection rates using conventional critical values

            OLS White   Clustered by firm   Clustered by time   Clustered by firm and time
Rm,t        0.004       0.069               0.037               0.206
Intercept   0.012       0.046               0.036               0.203
Note: The number of replications is 1,000
reflected by the close to 5 % size values. However, for the tests based on the standard asymptotic t-test critical values, all tests suffer from serious size distortion. For example, those based on the OLS White and clustered by time are undersized, while others oversized (see Table 24.5).
24.4 Conclusion
In this paper, we examine the performance of single- and double-clustered standard errors using the wild-cluster bootstrap method. The panel data set on the Chinese mutual funds used in the analysis has similar number of firms (54) and time periods (48); we are particularly interested in the performance of the bootstrapped double-clustered standard errors. This is mainly due to the conclusion made in Thompson (2011) that double clustering the standard errors matters the most when the number of firms and time periods are not too different. In the presence of firm and time effects, the standard OLS White standard errors are found to be underestimated when compared to standard errors computed from double clustering (columns I–IV, Table 24.2). Further, the wild-cluster bootstrapped standard errors are found to account for the firm and time effects in residuals, as evidenced in column VIII of Table 24.2. The bootstrapped t-statistic computed by OLS White method is found to be much smaller than that calculated from the double clustering, suggesting that the firm and time effects in residuals are strong. The size values for the test statistics of the beta coefficient estimates in Table 24.4 suggest that the bootstrapped double clustering outperforms the single clustering either by firm or by time. They support Thompson’s (2011) argument that double clustering the standard errors of parameter estimators matters the most when the number of firms and time periods are not too different. Size distortions reported in Table 24.5 imply that it may not be appropriate to compare the bootstrapped t-statistics with standard t-test critical values. These findings also suggest that to avoid obtaining misleading test results with the presence of either firm or time effects or both, the bootstrapped critical values are preferred to conventional critical values. Additionally, a comparison of the sizes displayed in Table 24.4 with those calculated using the pairs cluster bootstrap method shown in Table 24.6 also suggests the wild-cluster bootstrap approach performs better.
Table 24.6 Rejection rates using pairs cluster bootstrapped critical values

            OLS White   Clustered by firm   Clustered by time   Clustered by firm and time
Rm,t        0.043       0.040               0.040               0.040
Intercept   0.062       0.048               0.059               0.060
Note: The number of replications is 1,000
Appendix 1: Wild-Cluster Bootstrap Procedure

The following steps are used to obtain the wild-cluster bootstrapped standard errors and critical values:⁵
(i) Define firm effects and time effects. The asset pricing model using the individual variables Ri,t and Rm,t is specified as

$$R_{it} = \beta_0 + R_{m,t}\,\beta_1 + \varepsilon_{it}, \qquad i = 1, \ldots, N, \quad t = 1, \ldots, T$$
(24.2)
where the variables Ri,t and Rm,t are, respectively, the return of mutual fund i and the market return in excess of the risk-free rate in month t, and β₀ and β₁ are unknown parameters. The construction of the variables is detailed in Sect. 24.3.1. ε_it is the error term. It may be heteroskedastic but is assumed to be independent of the explanatory variable, E(ε_it | R_m,t) = 0. Following Thompson (2011), we make the following assumptions on the correlations between the errors ε_it:
(a) Firm effects: the errors may be correlated across time for a particular firm, that is, E(ε_it ε_ik | R_m,t, R_m,k) ≠ 0 for all t ≠ k.
(b) Time effects: the errors may be correlated across firms within the same time period, that is, E(ε_it ε_jt | R_m,t) ≠ 0 for all i ≠ j.
Let G be the number of clusters, and let N_g be the number of observations within each cluster. The errors are assumed to be independent across clusters but correlated within clusters. The asset pricing model can be written as

$$R_{ig} = \beta_0 + R_m\,\beta_1 + \varepsilon_{ig}, \qquad i = 1, \ldots, N_g, \quad g = 1, \ldots, G,$$
$$R_g = d\,\beta_0 + R_{mg}\,\beta_1 + \varepsilon_g, \qquad g = 1, \ldots, G, \qquad (24.3)$$

where R_ig, R_m, and ε_ig are scalars; R_g, R_mg, and ε_g are N_g × 1 vectors; and d is an N_g × 1 vector with all elements equal to 1.
(ii) Fit data to model. We fit model (Eq. 24.3) to the observed data using OLS and obtain the parameter estimates β̂₀ and β̂₁ together with the OLS residuals ε̂_g, g = 1, ..., G.

⁵ See also Cameron et al. (2008) for details.
(iii) Construct 1,000 bootstrap samples. The bootstrap residuals are obtained according to the following transformation: ε*_g = a_g ε̂_g, where a_g takes on one of the following values: (i) (1 − √5)/2 ≈ −0.6180 with probability (1 + √5)/(2√5) ≈ 0.7236 or (ii) (1 + √5)/2 ≈ 1.6180 with probability 1 − (1 + √5)/(2√5) ≈ 0.2764. Hence,
(a) For each cluster g = 1, ..., G, set ε*_g = 1.618 ε̂_g with probability 0.2764 or ε*_g = −0.618 ε̂_g with probability 0.7236.
(b) Repeat (a) 1,000 times to obtain ε*_g and then construct the bootstrap samples R*_g as follows:

$$R_g^{*} = d\,\hat{\beta}_0 + R_{mg}\,\hat{\beta}_1 + \varepsilon_g^{*}.$$
(24.4)
(iv) With each pseudo sample generated in step (iii), we estimate the parameters β̂₀* and β̂₁* by OLS and compute White standard errors (SEˆwhite), which are OLS standard errors robust to heteroskedasticity, as well as standard errors clustered by firm (SEˆfirm), by time (SEˆtime), and by both firm and time (SEˆboth). The simulations are performed using GAUSS 9.0. The standard error formulas can be found in Thompson (2011).
(v) Construct bootstrapped test statistics by taking the ratio of β̂ᵢ* (i = 0, 1) obtained by OLS to its corresponding SEˆwhite, SEˆfirm, SEˆtime, or SEˆboth obtained in step (iv). More specifically, the bootstrapped test statistics are expressed as follows:

$$w_i^{*} = \frac{\hat{\beta}_i^{*} - \hat{\beta}_i}{\widehat{SE}\big(\hat{\beta}_i^{*}\big)}, \qquad i = 0, 1, \qquad (24.5)$$

where SEˆ(β̂ᵢ*) can be SEˆwhite, SEˆfirm, SEˆtime, or SEˆboth.
(vi) Obtain the empirical distribution of the individual test statistics by sorting the 1,000 test statistics computed in step (v) in ascending order. Bootstrapped critical values are then obtained from this empirical distribution at the following quantiles: 0.5 %, 2.5 %, 5 %, 95 %, 97.5 %, and 99.5 %.
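To make steps (i)–(vi) concrete, here is a compact Python sketch assuming the fund returns are arranged in a (months × funds) matrix. It is an illustrative reimplementation (the paper's simulations were run in GAUSS), it clusters by fund only to keep the example short, and the function names are mine.

```python
import numpy as np

def cluster_se(X, resid_by_g, XtX_inv):
    """Cluster-robust standard errors; resid_by_g has one column of residuals per cluster."""
    scores = X.T @ resid_by_g            # (2, G): X'e_g for each cluster g
    meat = scores @ scores.T             # sum over clusters of (X'e_g)(X'e_g)'
    return np.sqrt(np.diag(XtX_inv @ meat @ XtX_inv))

def wild_cluster_bootstrap(R, rm, n_boot=1000, seed=0):
    """Bootstrapped t-statistics for R_it = b0 + b1*rm_t + e_it, clustered by fund."""
    rng = np.random.default_rng(seed)
    T, G = R.shape
    X = np.column_stack([np.ones(T), rm])
    XtX_inv = np.linalg.inv(G * (X.T @ X))            # pooled (X'X)^{-1}: X is stacked G times
    beta = XtX_inv @ (X.T @ R).sum(axis=1)            # pooled OLS estimates (b0, b1)
    resid = R - (X @ beta)[:, None]                   # (T, G) residuals, one column per fund

    lo, hi = (1 - 5 ** 0.5) / 2, (1 + 5 ** 0.5) / 2   # Mammen two-point weights
    p_lo = (1 + 5 ** 0.5) / (2 * 5 ** 0.5)

    t_stats = np.empty((n_boot, 2))
    for b in range(n_boot):
        a = np.where(rng.uniform(size=G) < p_lo, lo, hi)   # one weight per cluster
        R_star = (X @ beta)[:, None] + resid * a           # bootstrap samples (Eq. 24.4)
        beta_star = XtX_inv @ (X.T @ R_star).sum(axis=1)
        resid_b = R_star - (X @ beta_star)[:, None]
        se_star = cluster_se(X, resid_b, XtX_inv)
        t_stats[b] = (beta_star - beta) / se_star          # Eq. 24.5
    return t_stats

# Bootstrapped critical values at the quantiles used in the paper:
# t = wild_cluster_bootstrap(R, rm)
# print(np.quantile(t, [0.005, 0.025, 0.05, 0.95, 0.975, 0.995], axis=0))
```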
References Brownstone, D., & Valletta, R. (2001). The bootstrap and multiple imputations: Harnessing increased computing power for improved statistical tests. Journal of Economic Perspectives, 15, 129–141. Cameron, A. C., Gelbach, J. B., & Miller, D. L. (2008). Bootstrap-based improvements for inference with clustered errors. Review of Economics and Statistics, 90, 414–427. Chen, Z., & Lin, R. (2006). Mutual fund performance evaluation using data envelopment analysis with new risk measures. OR Spectrum, 28, 375–398. Davidson, R., & Flachaire, E. (2008). The wild bootstrap, tamed at last. Journal of Econometrics, 146, 162–169. Kayhan, A., & Titman, S. (2007). Firms’ histories and their capital structures. Journal of Financial Economics, 83, 1–32.
Liu, R. Y. (1988). Bootstrap procedures under some non-IID models. Annals of Statistics, 16, 1696–1708. Mammen, E. (1993). Bootstrap and wild bootstrap for high dimensional linear models. Annals of Statistics, 21, 255–285. Petersen, M. A. (2009). Estimating standard errors in finance panel data sets: Comparing approaches. The Review of Financial Studies, 22, 435–480. Shen, W. T., & Huang, S. L. (2001). An empirical analysis of China’s mutual funds: Performance and evaluation. Economic Research, 9, 22–30 (in Chinese). Stephens, M. A. (1974). EDF statistics for goodness of fit and some comparisons. Journal of the American Statistical Association, 69, 730–737. Thompson, S. B. (2011). Simple formulas for standard errors that cluster by both firm and time. Journal of Financial Economics, 99, 1–10. Wu, C. F. G. (1986). Jackknife, bootstrap and other resampling methods in regression analysis. Annals of Statistics, 14, 1261–1295.
25 Market Segmentation and Pricing of Closed-End Country Funds: An Empirical Analysis

Dilip K. Patro
Contents
25.1 Introduction ... 670
25.2 Theoretical Motivation ... 673
  25.2.1 Pricing of Country Funds in the Context of International CAPM ... 673
  25.2.2 Cross-Sectional Variations in Country Fund Premiums ... 674
  25.2.3 Conditional Expected Returns and Pricing of Country Funds ... 676
  25.2.4 Conditional Expected Returns and Time-Varying Country Fund Premiums ... 677
25.3 Data and Descriptive Statistics ... 677
25.4 Empirical Results: Pricing of Country Funds ... 684
  25.4.1 Unconditional Risk Exposures of Country Funds ... 684
  25.4.2 Pricing of Country Funds in the Context of International CAPM ... 689
  25.4.3 Cross-Sectional Variations in Country Fund Premiums ... 690
  25.4.4 Predictability of Closed-End Country Fund Returns ... 693
  25.4.5 Conditional Expected Returns and Pricing of Country Funds ... 694
  25.4.6 Conditional Expected Returns and Time-Varying Country Fund Premiums ... 700
25.5 Conclusions ... 701
Appendix 1: Generalized Method of Moments (GMM) ... 701
References ... 703
Abstract
This paper finds that for closed-end country funds, the international CAPM can be rejected for the underlying securities (NAVs) but not for the share prices. This finding indicates that country fund share prices are determined globally whereas the NAVs reflect both global and local prices of risk. Cross-sectional variations in the discounts or premiums for country funds are explained by the differences
in the risk exposures of the share prices and the NAVs. Finally, this paper shows that the share price and NAV returns exhibit predictable variation and that country fund premiums vary over time due to time-varying risk premiums. The paper employs the generalized method of moments (GMM) to estimate stochastic discount factors and examines whether the price of risk of closed-end country fund shares and NAVs is identical. GMM is an econometric method that generalizes the method of moments and was developed by Hansen (Econometrica 50, 1029–1054, 1982). Essentially, GMM finds the values of the parameters so that the sample moment conditions are satisfied as closely as possible.

Keywords
Capital markets • Country funds • CAPM • Closed-end funds • Market segmentation • GMM • Net asset value • Stochastic discount factors • Time-varying risk • International asset pricing
25.1 Introduction
The purpose of this paper is to provide an empirical analysis of pricing of closedend country equity funds in the context of rational asset-pricing models which account for the role of market segmentation and time-varying risk premiums. Specifically, the paper addresses the following issues. How are country fund share prices and net asset values (NAVs) determined? What are the implications of differential pricing of closed-end fund shares and NAVs for cross-sectional and time-series variations in the premiums of the funds? The answers to these questions contribute to the burgeoning literature on country funds. Closed-end country equity funds are a relatively recent innovation in international capital markets. Whereas only four closed-end country equity funds traded in New York at the end of 1984, currently there are 94 closed-end country equity funds targeting over 31 countries. Past researchers have examined issues related to the benefits of diversification from holding these funds (Bailey and Lim 1992; Chang et al. 1995; Bekaert and Urias 1996) and how they should be designed and priced (Hardouvelis et al. 1993; Diwan et al. 1995; Bodurtha et al. 1995). Relatively unexplored is how the expected returns of the country fund share prices and NAVs are determined, in a framework of market segmentation and time-varying risk premiums. Country fund share prices are determined in the USA, but their NAVs are determined in the country of origin of the fund. Models of international asset pricing (e.g., Errunza and Losq 1985) and models of country fund pricing (e.g., Errunza et al. 1998) suggest that the expected returns on country fund share prices should be determined by their covariances with the world market portfolio. However, if there are barriers to investment, such as limits on ownership, the capital markets may be segmented. In such a segmented market, the expected returns of the NAVs will be determined by their covariances with the world market portfolio as well as the home market portfolio, to the extent that the home market is not spanned by the world market.
The foundation of this paper is in such a framework of market segmentation and its implications for international capital market equilibrium. In such a market structure where the local investors have complete access to the world market but the foreign investors (e.g., the US investors) have only partial access to the local equity market in terms of the percentage of equity of a firm they can own, the prices paid by foreigners relative to local investors can be higher due to the limited supply of the local securities. To preclude arbitrage, it is assumed that the local investors cannot buy the securities at a lower price and sell it to the foreign investor at a higher price. Thus, foreign investors are willing to pay a premium due to the diversification benefits from adding that security to their purely domestic portfolio. The premium arises only if the cash flows of the local security are not spanned by the purely domestic portfolio for the foreign investor. Since closed-end country fund’s share prices and NAVs are determined in two separate markets, the expected returns on the prices and NAVs can be different, leading to premiums, both positive and negative. Thus, barriers to investments may be sufficient to generate premiums.1 Even in the absence of institutional restrictions on foreign equity ownership or even in a purely domestic context (see, e.g., Pontiff 1997), it is still possible for observed price returns to be much more volatile than NAV returns. As shown in the academic literature, closed-end country fund share prices and NAVs are not perfect substitutes. The sources of these differences may be attributable to differences in numeraires, information, and possibly noise trading causing excess volatility in the share price returns (see Lee et al. 1991). Apart from investment restrictions, these imperfections alone may be sufficient to generate premiums. However, BonserNeal et al. (1990) document that relaxation of investment restrictions leads to a decrease in the share price-NAV ratio for a sample of country equity funds. Further, Bodurtha et al. (1995) document that the correlation of changes in country fund premiums and domestic fund premiums is low and insignificant, indicating that the structure of international capital markets is an important contributor in determining premiums. Thus, the objective of this paper is to provide an analysis of what explains the expected returns on closed-end country fund share prices and NAVs in a segmented market framework. The paper utilizes both unconditional and conditional tests on mean-variance efficiency of the world market index (as proxied by the Morgan Stanley Capital International world index) and provides results on the cross-sectional and time-series variations of premiums across closed-end country equity funds. In addition, the paper employs generalized method of moments (GMM) as discussed in Appendix 1 to estimate stochastic discount factors and examines if the price of risk of closed-end country fund shares and NAVs is identical.
1
Henceforth both premiums and discounts are referred to as premiums. Therefore, a discount is treated as a negative premium.
672
D.K. Patro
The sample consists of 40 closed-end country equity funds. Twenty of the funds are from developed markets and 20 funds are from emerging markets.2 The main empirical findings of the paper are as follows. For country fund share prices, the hypothesis that the unconditional international CAPM is a valid model cannot be rejected. However, for the NAVs the international CAPM can be rejected. This finding suggests that country fund share prices and NAVs are not priced identically. The share prices reflect the global price of risk only, but the NAVs reflect both the global and the local prices of risk. It is shown that the differences in risk exposure to the world market index of share prices and NAVs can explain up to 18.7 % of the cross-sectional variations in premiums for developed market funds, but only 1.9 % of the variation for emerging market funds. When conditioning on information is allowed, the international CAPM explains country fund share returns and NAV returns for both developed and emerging markets. However, the hypothesis that the price of risk is identical between closed-end fund shares and NAVs can be rejected for alternate stochastic discount factors for majority of the markets. This finding is consistent with market segmentation. Therefore, differential pricing of the fund shares and the underlying portfolio causes expected returns to be different and explains the existence of premiums and their variation over time. Finally, it is shown that the country fund premiums vary over time due to differential conditional risk exposures of the share prices and NAVs. The principal contributions of this paper are as follows. Existence of premiums on closed-end funds has been a long-standing anomaly. In the domestic setting, negative premiums have been the norm, which has been attributed to taxes, management fees, illiquid stocks, or irrational factors (e.g., noise trading). Although these factors may also be important in explaining country fund premiums, unlike domestic closed-end funds, country fund share prices and NAVs are determined in two different market segments. This paper provides a rational explanation for the premiums on country funds based on differential risk exposures of the share prices and NAVs and market segmentation. Unlike the noise trading hypothesis which assumes the presence of “sentiment” or an additional factor for the share prices but not the NAVs, this paper shows that the same factor may be priced for the share prices and the NAVs, but priced differently. The differential risk exposures are shown to be among the significant determinants of cross-sectional variations in the premiums. Further, this paper examines the role of time variations in expected returns for country fund returns and attributes the time-varying country fund premiums to time-varying country fund returns. The paper is organized as follows. Section 25.2 presents the theoretical motivation and the hypotheses. Section 25.3 presents the data and the descriptive statistics. Section 25.4 provides the empirical results for pricing of country funds. Concluding remarks are presented in the last section.
2
In 1993, the World Bank defined an emerging market as a stock market in a developing country with a GNP per capita of $8,625 or less. This is the definition of an emerging market in this paper.
25
Market Segmentation and Pricing of Closed-End Country Funds
25.2 Theoretical Motivation
This section first focuses on the theoretical motivation for the empirical testing of the unconditional mean-variance efficiency of the world market index in the context of country funds. The following subsections present tests for cross-sectional variations in country fund premiums and the methodology for pricing country funds using stochastic discount factors. Finally, this section outlines the methodology for explaining the time variations in country fund premiums attributable to time-varying risk premium.
25.2.1 Pricing of Country Funds in the Context of International CAPM In a global pricing environment, the world market portfolio surrogates the market portfolio. If purchasing power parity is not violated and there are no barriers to investment, the world market portfolio is mean-variance efficient (see, e.g., Solnik 1996; Stulz 1995) and expected returns are determined by the international CAPM. The international CAPM implies that the expected return on an asset is proportional to the expected return on the world market portfolio: E½ri ¼ bi E½rw
(25.1)
where, for any asset i, E[ri] is the expected excess return on the asset and E[rw] is the expected excess return on the world market portfolio. The excess returns are computed in excess of the return on a risk-free asset. Following a standard practice (see Roll 1977), mean-variance efficiency of a benchmark portfolio can be ascertained by estimating a regression of the form ri, t ¼ ai þ bi, w rw, t þ ei, t
(25.2)
where ri,t is the excess return on a test asset and rw,t is the return on a benchmark portfolio. The GMM-based methodology as outlined in Mackinlay and Richardson (1991) is employed to test mean-variance efficiency of the world market index in the context of country funds, using just a constant and the excess return on the world market index as the instruments.3 The hypothesis that ai ¼ 0, for all i ¼ 1, . . ., N, where N is the number of assets, is tested. This is the restriction implied by the international CAPM. The joint hypothesis is tested by a Wald test for country fund share prices and NAVs returns. Since country fund’s share prices are determined with complete access to their markets of origin, their expected returns are expected to be determined via the 3
Although the evidence in favor of the domestic CAPM is ambiguous, many studies such as Cumby and Glen (1990) and Chang et al. (1995) do not reject the mean-variance efficiency of the world market index. Interestingly, although Cumby and Glen (1990) do not reject mean-variance efficiency of the world market index, they reject mean-variance efficiency of the US market index.
674
D.K. Patro
international CAPM (see, e.g., Errunza and Losq 1985). However, if the country from which the fund originates has restrictions on foreign equity investments, the expected returns of the NAVs will be determined via their covariances with both the world market portfolio and the part of the corresponding local market portfolio which is not spanned by the world market portfolio (see Errunza et al. 1998). Letting ei, t ¼ ri, t ai bi, w rw, t
(25.3)
for the N assets, the residuals from the above equation can be stacked into a vector et+1. The model implies that E[ei,t+1jZt] ¼ 0, for 8 i and 8 t. Therefore, E [et+1 Zt] ¼ 0 for 8 t, where Zt is a set of predetermined instruments. GMM estimation is based on minimizing the quadratic form f0 Of, where the sample counterpart of E[et+1 Zt] is given by f ¼ {1/T[∑E[et+1 Zt]} and O is an optimal weighting matrix in the sense of Hansen (1982).
25.2.2 Cross-Sectional Variations in Country Fund Premiums If the mean-variance efficiency of the world market portfolio is not rejected, then expected returns of the test assets are proportional to their “betas” with respect to the world market portfolio. Therefore, Eðra Þ ¼ lw ba, w where a ¼ p or n
(25.4)
where E(rp) and E(rn) are the expected excess returns on the share prices and NAVs, respectively, and lw is the risk premium on the world market index. Stulz and Wasserfallen (1995) demonstrate that the logarithm of two share prices can be written as a linear function of the differences in their expected returns.4 Therefore, Prem ¼ logðP=NAVÞ ¼ Eðrn Þ E rp
(25.5)
where Prem is the premium on a fund calculated as the logarithm of the price-NAV ratio. Combining Eqs. 25.4 and 25.5 leads to the testable implication (25.6) Prem ¼ lw bn, w bp, w The above equation assumes that the international CAPM is valid and world capital markets are integrated. In reality, the world markets may be segmented. Therefore, the effect of market segmentation is captured by introducing additional 4
Stulz and Wasserfallen (1995) use the log-linear approximation of Campbell and Ammer to write the logarithm of the price of a stock as ln(P) ¼ ESZi[(1 Z)dt+j+1 rt+j+1] + y, where Z is a log-linear approximation parameter and rt+j+1 is the return from t+j to t+j+1. Assuming that Z is the same for prices and NAVs, and the relation holds period by period, the premium can be written as above.
25
Market Segmentation and Pricing of Closed-End Country Funds
675
variables based on prior research of Errunza et al. (1998) who document that measures of access, spanning, and substitutability of the prices and NAVs have explanatory power for the cross-sectional variation of premiums. The spanning measure (SPN) is the conditional variance of the NAV return of a fund, unspanned by the US market return and the fund’s share price returns, with specification as follows: rn, t ¼ ai þ bi rUS, t þ bi rp, t þ ei, t
(25.7)
where ei,t has a GARCH (1, 1) specification. The conditional volatility of ei,t is the measure of spanning. The measure of substitution (SUB) is the ratio of conditional volatilities of the share price and NAV returns of a fund not spanned by the US market. The specifications for the expected return equations are rn, t ¼ ai þ bi rUS, t þ en, t
(25.8)
rp, t ¼ ai þ bi rUS, t þ ep, t
(25.9)
where the error terms en,t and ep,t have GARCH (1, 1) specifications.5 The ratio of the conditional volatilities of the residuals in Eqs. 25.8 and 25.9 is used as the measure of substitutability of the share prices and NAVs. Since it is difficult to systematically classify countries in terms of degree of access, a dummy variable is used to differentiate developed and emerging markets. The dummy variable takes value of one for developed markets. Another measure of access, which is the total purchase of securities from a country by US residents as a proportion of total global purchase, is also used as a measure of access to a particular market. It is expected that the premiums are lower for countries with easy access to their capital markets. Therefore, the coefficient on the access variable is expected to be negative. The measure of spanning is interpreted as the degree of ease with which investors could obtain substitute assets for the NAVs. As noted earlier, even if there are barriers to foreign equity ownership, availability of substitute assets can overcome the barriers. Since the spanning variable is the volatility of the residual as specified in Eq. 25.7, it is expected to be positive. The measure of imperfect substitutability of the share prices and NAVs is the ratio of the variances of the share price and NAV returns. It is expected that the premium is inversely related to this measure. Therefore, the extended cross-sectional equation is Premi ¼ d0 þ d1 DIFi þ d2 SPNi þ d3 SUBi þ d4 ACCi þ ei
(25.10)
where:
DIFi = the difference of the betas on the world market index for the NAVs and the share prices.
SPNi = the conditional residual volatility from Eq. 25.7, used as the measure of spanning.
SUBi = the imperfect substitutability of the share prices and the NAVs, proxied by the ratio of conditional volatilities in Eqs. 25.8 and 25.9.
ACCi = a measure of access to a market, proxied either by a dummy variable which is 1 for developed markets [ACC(d)] or by the total purchase of securities by US residents from that country as a fraction of global security purchases [ACC(cf)].

5 Note that unlike here, Errunza et al. (1998) use returns on industry portfolios to proxy for the US market.

The empirical specification in Eq. 25.10 or its variations is used in the empirical analysis for explaining cross-sectional variations in premiums. The higher the difference of risk exposures, the higher is the premium. Therefore, the sign of δ1 is expected to be positive. Similarly, the higher the unspanned component, the higher should be the level of premiums; therefore, the sign of δ2 is expected to be positive. Also, the higher the substitutability of the share prices and the NAVs, the lower should be the premium. Therefore, δ3 is expected to be negative. Since higher access is associated with lower premiums, δ4 is also expected to be negative.
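For illustration, the spanning and substitution measures of Eqs. 25.7–25.9 and the cross-sectional regression of Eq. 25.10 could be computed along the following lines. This is a sketch in Python (statsmodels and the arch package) using a two-step OLS-then-GARCH(1, 1) estimation; the variable names and the two-step shortcut are assumptions for illustration, not the exact procedure of the chapter.

    import numpy as np
    import statsmodels.api as sm
    from arch import arch_model  # GARCH(1, 1) for the residual volatilities

    def garch_resid_vol(y, X):
        """Regress y on X (plus a constant), fit a GARCH(1, 1) to the
        residuals, and return the average conditional volatility."""
        ols = sm.OLS(y, sm.add_constant(X)).fit()
        garch = arch_model(ols.resid, mean="Zero", vol="GARCH", p=1, q=1)
        res = garch.fit(disp="off")
        return res.conditional_volatility.mean()

    def segmentation_measures(nav_ret, price_ret, us_ret):
        # SPN (Eq. 25.7): NAV return volatility unspanned by the US market
        # return and the fund's own share price return
        spn = garch_resid_vol(nav_ret, np.column_stack([us_ret, price_ret]))
        # SUB (Eqs. 25.8-25.9): ratio of unspanned price to NAV volatility
        sub = garch_resid_vol(price_ret, us_ret) / garch_resid_vol(nav_ret, us_ret)
        return spn, sub

    def cross_section(premium, dif, spn, sub, acc):
        # Eq. 25.10: weekly cross-sectional OLS of premiums on DIF, SPN, SUB, ACC
        X = sm.add_constant(np.column_stack([dif, spn, sub, acc]))
        return sm.OLS(premium, X).fit()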
25.2.3 Conditional Expected Returns and Pricing of Country Funds

If investors use information to determine expected returns, expected returns could vary over time because of rational variations in risk premiums. In such a scenario, the premium on a country fund can be time varying as a result of the differential pricing of the share prices and the NAVs.
Let Rt+1 = (Pt+1 + Dt+1)/Pt, where Pt+1 is the price at time t+1 and Dt+1 is the dividend at time t+1. Asset-pricing models imply that E[Mt+1 Rt+1 | Zt] = 1, where Zt is a subset of the investor's information set at time t (see, e.g., Ferson 1995). M is interpreted as a stochastic discount factor (SDF). The existence of the above equation derives from the law of one price. Given a particular form of an SDF, it is estimated by using GMM as outlined earlier.6 The SDFs examined in this paper are the SDFs implied by the international CAPM and its extension under market segmentation. The international CAPM is obtained by assuming that the SDF is a linear function of the return on the world market index (Rw,t+1). Specifically, the SDF is:

International CAPM: M = λ0 + λ1 Rw,t+1
(25.11)
A two-factor SDF, where the second factor is the return on a regional index (specifically for Asia or Latin America), is also estimated. Such a model is implied by the models of Errunza and Losq (1985).7 Specifically, the SDF is:

Two-factor model: M = λ0 + λ1 Rw,t+1 + λ2 Rh,t+1
(25.12)

6 Ferson and Foerster (1994) show that an iterated GMM approach has superior finite sample properties. Therefore the iterated GMM approach is used in the estimations.
7 For such conditional representation, see, for example, Ferson and Schadt (1996).
Once an SDF is estimated for a group of share prices and NAVs, the coefficients of the estimated SDFs are compared to test the hypothesis that for a given SDF, the share price and NAV are priced identically.8
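Since the conditional moment E[Mt+1 Rt+1 | Zt] = 1 is implemented by scaling returns with the lagged instruments, the estimation can be sketched as a standard GMM problem. The fragment below (Python with numpy/scipy; a two-step rather than fully iterated GMM, and all array names are assumptions) illustrates the idea for the linear SDF of Eq. 25.11; it is not the exact estimation code used in the chapter.

    import numpy as np
    from scipy.optimize import minimize

    def sdf_moments(lam, R, Rw, Z):
        """Moment conditions g_t = (M_t * R_t - 1) interacted with each
        instrument, where M_t = lam[0] + lam[1] * Rw_t (Eq. 25.11).
        R: (T, N) gross asset returns, Rw: (T,), Z: (T, K) incl. constant."""
        M = lam[0] + lam[1] * Rw
        pricing_err = M[:, None] * R - 1.0                       # (T, N)
        return np.einsum("tn,tk->tnk", pricing_err, Z).reshape(len(Rw), -1)

    def gmm_objective(lam, R, Rw, Z, W):
        gbar = sdf_moments(lam, R, Rw, Z).mean(axis=0)
        return gbar @ W @ gbar

    def estimate_sdf(R, Rw, Z):
        """Two-step GMM: identity weighting first, then the inverse of the
        estimated moment covariance matrix."""
        nmom = R.shape[1] * Z.shape[1]
        step1 = minimize(gmm_objective, x0=np.array([1.0, 0.0]),
                         args=(R, Rw, Z, np.eye(nmom)), method="Nelder-Mead")
        g = sdf_moments(step1.x, R, Rw, Z)
        W2 = np.linalg.pinv(np.cov(g, rowvar=False))
        step2 = minimize(gmm_objective, x0=step1.x,
                         args=(R, Rw, Z, W2), method="Nelder-Mead")
        # T * minimized objective gives the J-statistic for overidentification
        return step2.x, W2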
25.2.4 Conditional Expected Returns and Time-Varying Country Fund Premiums

To examine the role of time-varying risk premiums in explaining the time variation in country fund premiums, the following procedure is used. Similar to Ferson and Schadt (1996), a conditional international CAPM in which the betas vary over time as a linear function of the lagged instrumental variables is used. The following equations are estimated via GMM:

rp,t+1 = ap + bp,w(Zt) rw,t+1 + ep,t+1
(25.13)
rn,t+1 = an + bn,w(Zt) rw,t+1 + en,t+1
(25.14)
where the excess return on the world market index is scaled by a set of instrumental variables. Using Eq. 25.6 and the above time-varying betas, the premium can be written as

Premt+1 = γ0 + γ1 [bn,w(Zt) − bp,w(Zt)] + eprem,t+1   (25.15)

The empirical specification in Eq. 25.15 is used for explaining the time-varying premiums. If the time-varying betas are significantly different and their difference can explain the time variation of country fund premiums, the coefficient γ1 is expected to be positive and significant.
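A compact way to implement the scaled-factor regressions in Eqs. 25.13–25.15 is to interact the lagged instruments with the world market excess return, so that bp,w(Zt) = b0 + b1'Zt. The sketch below uses OLS with HAC standard errors purely for illustration (the chapter estimates the system by GMM); all variable names are assumptions.

    import numpy as np
    import statsmodels.api as sm

    def conditional_beta_fit(r_asset, r_world, Z_lagged):
        """r_{t+1} = a + (b0 + b1'Z_t) * rw_{t+1} + e_{t+1}, implemented as a
        regression on rw and the Z*rw interaction terms (Eqs. 25.13-25.14)."""
        X = np.column_stack([r_world, Z_lagged * r_world[:, None]])
        fit = sm.OLS(r_asset, sm.add_constant(X)).fit(
            cov_type="HAC", cov_kwds={"maxlags": 6})
        b0, b1 = fit.params[1], fit.params[2:]
        beta_t = b0 + Z_lagged @ b1        # time series of conditional betas
        return fit, beta_t

    def premium_on_beta_diff(premium, beta_nav, beta_price):
        """Eq. 25.15: regress the premium on the difference in conditional betas."""
        diff = beta_nav - beta_price
        return sm.OLS(premium, sm.add_constant(diff)).fit()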
25.3 Data and Descriptive Statistics
The sample includes all single-country closed-end country equity funds publicly trading in New York as of 31 August 1995. The test sample is limited to country funds with at least 100 weeks of weekly NAV observations.9 An overview of the sample is presented in Table 25.1. The funds are classified as developed market or
8 See Cochrane (1996) for such specifications. Cochrane (1996) calls such models "scaled factor models." See Bansal et al. (1993) for a nonlinear specification of stochastic discount factors.
9 Unlike industrial initial public offerings (IPOs), closed-end fund IPOs are "overpriced" (Weiss 1989). Peavy (1990) finds that new funds show significant negative returns in the aftermarket. Hanley et al. (1996) argue that closed-end funds are marketed to a poorly informed public and document the presence of flippers – who sell them in the immediate aftermarket. They also document evidence in support of price stabilization in the first few days of trading. Therefore, the first 6 months (24 weeks) of data for each fund is excluded in the empirical analysis.
Table 25.1 Closed-end country funds: sample overview

Panel A: Developed market funds
Fund name                     IPO date             Symbol   No. of shares (millions)   Net assets ($ millions)   Listing
First Australia fund          16 December 1985     AUS      15.9                       163.4                     AMEX
Italy fund                    27 February 1986     ITA      9.5                        89.4                      NYSE
Germany fund                  18 July 1986         GER      13.5                       173.9                     NYSE
UK fund                       6 August 1987        GBR      4.0                        51.2                      NYSE
Swiss Helvetia fund           19 August 1987       SHEL     9.2                        181.8                     NYSE
Spain fund                    21 June 1988         SPN      10.0                       94.2                      NYSE
Austria fund                  22 September 1989    AUT      11.7                       107.9                     NYSE
New Germany fund              24 January 1990      GERN     32.5                       373.8                     NYSE
Growth fund of Spain          14 February 1990     GSPN     17.3                       164.0                     NYSE
Future Germany fund           27 February 1990     GERF     11.9                       171.5                     NYSE
Japan OTC equity fund         14 March 1990        JPNO     11.4                       115.4                     NYSE
Emerging Germany fund         29 March 1990        GERE     14.0                       130.6                     NYSE
Irish investment fund         30 March 1990        IRL      5.0                        51.4                      NYSE
France growth fund            11 May 1990          FRA      15.3                       168.4                     NYSE
Singapore fund                24 July 1990         SGP      6.9                        97.4                      NYSE
China fund                    7 July 1992          CHN      10.8                       136.3                     NYSE
Jardine Fleming China fund    17 July 1992         JCHN     9.1                        114.5                     NYSE
Greater China fund            17 July 1992         GCHN     9.6                        116.2                     NYSE
Japan equity fund             14 August 1992       JPNE     8.1                        102.8                     NYSE
First Israel fund             23 October 1992      FISR     5.0                        53.7                      NYSE
Total                                                        230.7                      2,657.8

Panel B: Emerging market funds
Fund name                         IPO date             Symbol   No. of shares (millions)   Net assets ($ millions)   Listing
Mexico fund                       3 June 1981          MEX      37.3                       765.6                     NYSE
Korea fund                        22 August 1984       KOR      29.5                       610.0                     NYSE
Taiwan fund                       16 December 1986     TWN      11.3                       289.5                     NYSE
Malaysia fund                     8 May 1987           MYS      9.7                        180.6                     NYSE
Thai fund                         17 February 1988     THA      12.2                       343.8                     NYSE
Brazil fund                       31 March 1988        BRA      12.1                       376.4                     NYSE
India growth fund                 12 August 1988       INDG     5.0                        146.7                     NYSE
ROC Taiwan fund                   12 May 1989          RTWN     27.9                       365.7                     NYSE
Chile fund                        26 September 1989    CHL      7.0                        367.0                     NYSE
Portugal fund                     1 November 1989      PRT      5.3                        75.9                      NYSE
First Philippine fund             8 November 1989      FPHI     9.0                        212.0                     NYSE
Turkish investment fund           5 December 1989      TUR      7.0                        33.5                      NYSE
Indonesia fund                    1 March 1990         INDO     4.6                        42.3                      NYSE
Jakarta growth fund               10 April 1990        JAKG     5.0                        43.2                      NYSE
Thai capital fund                 22 May 1990          THAC     6.2                        124.8                     NYSE
Mexico equity and income fund     14 Aug 1990          MEXE     8.6                        103.0                     NYSE
Emerging Mexico fund              2 October 1990       EMEX     9.0                        117.6                     NYSE
Argentina fund                    10 October 1991      ARG      9.2                        108.1                     NYSE
Korea investment fund             14 February 1992     KORI     6.0                        87.4                      NYSE
Brazilian equity fund             9 April 1992         BRAE     4.6                        91.4                      NYSE
Total                                                            226.5                      4,484.5
This table presents the sample of closed-end country funds. The funds are classified as from developed markets or emerging markets using the World Bank's definition of emerging markets. The China funds are classified as developed market funds because the majority of their investments are in Hong Kong. The net assets reported are as of 31 December 1994. The listing of the funds is denoted as NYSE (New York Stock Exchange) or AMEX (American Stock Exchange). The Germany fund, Korea fund, and the Thai capital fund also trade on the Osaka Stock Exchange. The Future Germany fund is now called the Central European equity fund
emerging market funds using the World Bank’s definition of an emerging market. The three China funds are classified as developed market funds because the majority of their assets are in Hong Kong. The weekly (Friday closing) prices, NAVs, and corresponding dividends for each fund are obtained from Barrons and Bloomberg. 10 The returns are adjusted for stock splits and dividends. The 7-day Eurodollar deposit rate, provided by the Federal Reserve, is used as the risk-free benchmark.
10 For some of the funds, such as the India growth fund, the prices and net asset values are as of Wednesday closing. This may lead to nonsynchronous prices and NAVs. However, as Bodurtha et al. (1995) and Hardouvelis et al. (1993) show, the effects of nonsynchronous trading are not pervasive and do not affect the analysis.
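As a concrete illustration of the data construction described above, weekly excess returns and log premiums could be computed roughly as follows. This is a sketch only: the column names, the use of a single dividend series for both price and NAV returns, and the crude weekly de-annualization of the Eurodollar rate are simplifying assumptions.

    import numpy as np
    import pandas as pd

    def fund_series(df):
        """df: weekly DataFrame with columns 'price', 'nav', 'dividend', and
        'eurodollar_rate' (annualized, in percent). Returns weekly excess
        total returns on the price and NAV, and the log premium (Eq. 25.5)."""
        rf_weekly = df["eurodollar_rate"] / 100.0 / 52.0
        price_ret = (df["price"] + df["dividend"]) / df["price"].shift(1) - 1.0
        nav_ret = (df["nav"] + df["dividend"]) / df["nav"].shift(1) - 1.0
        out = pd.DataFrame({
            "excess_price_ret": price_ret - rf_weekly,
            "excess_nav_ret": nav_ret - rf_weekly,
            "premium": np.log(df["price"] / df["nav"]),
        })
        return out.dropna()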
Table 25.2 Closed-end country funds: descriptive statistics

Panel A: Developed market funds
Fund      Price ret. mean (%)   Price ret. std (%)   NAV ret. mean (%)   NAV ret. std (%)   Premium mean (%)   Premium std (%)   Volatility ratio
AUS       0.092                 3.051                0.106               2.438              10.182             5.172             1.565*
ITA       0.009                 4.126                0.029               2.943              5.985              9.311             1.965*
GER       0.106                 3.943                0.130               2.372              1.120              11.316            2.762*
GBR       0.140                 3.363                0.148               2.285              11.873             5.728             2.165*
SHEL      0.330                 4.482                0.259               2.017              3.497              4.989             4.938*
SPN       0.007                 4.128                0.033               2.505              0.076              11.923            2.716*
AUT       0.005                 3.874                0.048               2.334              9.860              7.709             2.754*
GERN      0.113                 3.787                0.113               2.146              15.003             6.020             3.113*
GSPN      0.239                 5.159                0.143               2.675              14.522             8.655             3.717*
GERF      0.194                 3.288                0.179               2.392              14.246             5.942             1.889*
JPNO      0.171                 5.004                0.014               3.474              6.522              10.494            2.074*
GERE      0.051                 3.548                0.014               2.200              14.750             6.200             2.601*
IRL       0.261                 3.220                0.189               2.226              15.473             5.816             2.091*
FRA       0.133                 3.456                0.090               2.133              13.266             8.020             2.625*
SGP       0.261                 4.238                0.134               2.599              1.136              10.606            2.658*
Average   0.138                 2.284                0.092               1.541              8.310              4.775             1.481*

Panel B: Emerging market funds
Symbol    Price ret. mean (%)   Price ret. std (%)   NAV ret. mean (%)   NAV ret. std (%)   Premium mean (%)   Premium std (%)   Volatility ratio
MEX       0.841                 12.670               0.897               14.695             5.783              8.646             0.743
KOR       0.277                 4.778                0.295               3.428              18.625             11.146            1.943*
TWN       0.177                 5.636                0.123               3.562              8.477              12.806            2.503*
MYS       0.308                 4.581                0.251               3.233              3.636              7.599             2.007*
THA       0.232                 4.359                0.358               3.458              3.264              10.251            1.588*
BRA       0.786                 7.048                0.831               6.607              1.464              8.481             1.137*
INDG      0.297                 5.518                0.130               4.144              0.833              15.397            1.773*
RTWN      0.235                 5.026                0.030               2.998              0.443              8.892             2.809*
CHL       0.354                 5.629                0.315               4.454              8.589              8.363             1.597*
PRT       0.209                 4.639                0.129               2.284              6.313              7.311             4.125*
FPHI      0.489                 4.490                0.359               2.589              21.127             5.345             3.005*
TUR       0.112                 6.078                0.130               7.164              13.052             16.077            0.720
INDO      0.199                 5.387                0.000               2.638              15.190             10.105            4.168*
JAKG      0.249                 4.987                0.045               2.308              5.221              7.947             4.666*
THAC      0.449                 4.921                0.377               3.510              8.516              6.409             1.964*
Average   0.347                 2.478                0.284               1.939              0.447              4.093             1.277*
This table presents the descriptive statistics of the sample of closed-end country funds. The sample covers the period January 1991–August 1995 (244 weekly observations). All the returns are weekly returns in US dollars. The premium on a fund is calculated as the logarithm of the price-net asset value ratio. The "volatility ratio" is the ratio of the variance of price returns over the variance of NAV returns for that fund. An asterisk (*) denotes that the variance of price returns is significantly higher than the variance of NAV returns, from an F-test at the 5 % level of significance
Table 25.2 reports the descriptive statistics for the sample of funds. The IFC global indices are available weekly from December 1988. Therefore, for emerging market funds which were launched before December 1988, the analysis begins in December 1988. To enable comparison across funds, a common time frame of January 1991–August 1995 is chosen. Since the data for the first 6 months are not used in the analysis, only 30 funds listed on or before July 1990 are considered. The mean premium on the index of developed market funds is 8.31 %, whereas the mean premium on the emerging market funds is 0.44 %. A t-test of the difference of means indicates that the mean premium of the index of the emerging market funds is significantly higher than the mean premium on the index of developed market funds at the 1 % level of significance. Figure 25.1 plots the time series of the premiums for the index of developed and emerging market funds. The figure clearly indicates that the premiums on emerging market funds are usually positive and higher than the premiums on developed market funds, highlighting the effects of market segmentation. Table 25.2 also presents the ratio of the variance of the excess price returns over the variance of the excess NAV returns. For 27 funds out of the 30 funds, the ratios are significantly higher than one.11

A set of predetermined instrumental variables, similar to the instruments used in studies of predictability of equity returns for developed markets (Fama and French 1989; Ferson and Harvey 1993) and emerging markets (Bekaert 1995), is used in the empirical analysis when testing conditional asset-pricing models. The instruments are the dividend yield on the Standard and Poor's 500 Index, calculated as the last quarter's dividend annualized and divided by the current market price (DIVY); the spread between the 90-day Eurodollar deposit rate and the yield on the 90-day US treasury bill (TED); and the premium on an index of equally weighted country funds (FFD). Only three instruments are used in order to maintain a parsimonious representation of the conditional asset-pricing models. TED is calculated using data from the Federal Reserve of Chicago and DIVY is obtained from Barrons. FFD is constructed using the sample of 30 funds listed in Table 25.2. Table 25.3 reports the sample characteristics of these instruments. As Fama and French (1989) show, the dividend yields and interest rates track the business cycle, and expected returns vary as a function of these instruments.

Table 25.4 reports the descriptive statistics for the returns on the market indices and the correlations of returns on the market indices of various countries with the returns on the US index. The developed market indices are from Morgan Stanley Capital International and the emerging market indices are from the International Finance Corporation. All the developed market indices have significantly higher correlations with the US market index, compared to the emerging markets, which implies that there are potential diversification benefits from investing in emerging markets.
11 Assuming that the price returns and NAV returns are from two normal populations, the test statistic, which is the ratio of the variances, has an F distribution (if the two variances are estimated using the same sample size). The null hypothesis that the variance of price returns equals the variance of NAV returns is tested against the alternative that the price-return variance is greater, using this statistic at the 5 % level of significance.
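The variance-ratio test described in the footnote can be illustrated in a few lines (a sketch using scipy; array names are assumptions):

    import numpy as np
    from scipy import stats

    def variance_ratio_test(price_ret, nav_ret):
        """One-sided F-test that the variance of price returns exceeds the
        variance of NAV returns, for two equal-length return series."""
        ratio = np.var(price_ret, ddof=1) / np.var(nav_ret, ddof=1)
        df1, df2 = len(price_ret) - 1, len(nav_ret) - 1
        p_value = stats.f.sf(ratio, df1, df2)
        return ratio, p_value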
[Fig. 25.1 Country fund premiums. The figure plots the weekly time series of the premium (%) on the index of developed market funds and on the index of emerging market funds, against the date from January 1991 (910104) through mid-1995 (950630).]
Table 25.3 Summary statistics for the instrumental variables

Panel A: Means and standard deviations (%)
        Mean    Std. dev
TED     0.34    0.18
DIVY    2.91    0.23
FFD     3.87    3.93

Panel B: Correlations of the instrumental variables
        TED     DIVY    FFD
TED     1.00
DIVY    0.50    1.00
FFD     0.55    0.25    1.00

The statistics are based on weekly data from January 1991–August 1995 (244 weekly observations). The instruments are the spread between 90-day Eurodollar deposits and 90-day US treasury yields (TED), the dividend yield on the Standard and Poor's 500 index (DIVY), and an equally weighted index of the premiums for the sample of funds in Table 25.2 (FFD)
Table 25.4 Market indices: descriptive statistics

Panel A: Developed markets
Country         Mean (%)   Std. (%)   Correlation with USA
Austria         0.002      2.675      0.178
Hong Kong       0.485      3.366      0.174
Australia       0.227      2.212      0.292
Israel          0.000      3.351      0.130
France          0.167      2.272      0.334
Germany         0.180      2.416      0.238
Spain           0.090      2.627      0.329
Ireland         0.209      2.675      0.299
Italy           0.085      3.558      0.146
Japan           0.117      3.006      0.180
Singapore       0.336      2.018      0.231
Switzerland     0.375      2.186      0.285
USA             0.235      1.371      1.000
UK              0.143      2.186      0.335
Europe          0.169      1.808      0.377

Panel B: Emerging markets
Country         Mean (%)   Std. (%)   Correlation with USA
Argentina       0.841      6.562      0.136
Brazil          1.145      7.577      0.221
Chile           0.687      3.180      0.125
India           0.259      4.534      0.021
Indonesia       0.114      3.302      0.031
Korea           0.130      3.432      0.040
Malaysia        0.410      2.802      0.163
Mexico          0.351      4.303      0.195
Philippines     0.636      3.618      0.141
Portugal        0.147      2.560      0.155
Taiwan          0.132      4.483      0.153
Thailand        0.520      3.609      0.189
Turkey          0.121      7.660      0.061
Asia            0.211      2.137      0.188
Latin America   0.554      3.314      0.640
World           0.172      1.410      0.277
This table presents the descriptive statistics for the returns on market indices of developed and emerging markets for the time period January 1991–August 1995. The table also presents the correlation of the returns, with the returns on the US market. All the developed market indices are weekly indices from Morgan Stanley Capital International. The emerging market indices are from the International Finance Corporation
25.4 Empirical Results: Pricing of Country Funds
This section presents the empirical results for pricing of country funds. The results from the unconditional tests indicate that, in general, developed market closed-end fund returns have significant risk exposures to the world market index, while emerging market closed-end fund returns have significant risk exposures to both the world market index and the corresponding local market index. Second, the hypothesis of unconditional mean-variance efficiency of the world market index cannot be rejected using the share price returns of either developed or emerging market funds and NAV returns of some of the developed market funds. However, the hypothesis of unconditional mean-variance efficiency of the world market index can be rejected for emerging market NAVs and some developed market NAVs. This finding indicates that while the share prices reflect the global price of risk, the NAVs may reflect both the global and the respective local prices of risk. Tests of predictability using global instruments indicate that country fund share prices and NAVs exhibit significant predictable variation. When conditional asset-pricing restrictions are examined using alternate stochastic discount factors, the results indicate that the share prices and NAVs of the closed-end country funds are not priced identically for some developed and all the Asian market funds. This finding is consistent with market segmentation. Finally, it is shown that the time-varying premiums for country funds are attributable to time-varying risk premiums. The detailed results are discussed below.
25.4.1 Unconditional Risk Exposures of Country Funds

To ascertain the risk exposures of the share price and NAV returns of the sample of country funds, the following econometric specification is employed:

rp,t = ap + bp,w rw,t + bp,h rh,t + ep,t
(25.16)
rn,t = an + bn,w rw,t + bn,h rh,t + en,t
(25.17)
where, for any country fund "i" (the subscript "i" has been dropped for convenience):
rp,t = the excess total return on the share price (including dividends) of a country fund between t−1 and t.
rn,t = the excess return on the NAV of a country fund between t−1 and t.
rw,t = the excess return on the Morgan Stanley Capital International (MSCI) world market index.
rh,t = the excess return on the MSCI or International Finance Corporation (IFC) global index, corresponding to the country of origin of the fund.
The coefficients bp,w, bp,h are the risk exposures on the world and home market portfolios for the price returns, and bn,w, bn,h are the risk exposures on the world and
home market portfolios for the NAV returns. Two hypotheses are tested: that the share price returns have significant risk exposures to the world market index, and that the NAV returns have significant risk exposures to both the world market index and the corresponding local market index. These hypotheses are implied by the international CAPM and its extension under market segmentation (see Errunza and Losq 1985 and Diwan et al. 1995). If assets are priced in a global environment, the country fund share price as well as NAV returns should have significant risk exposure to the world market index only. However, when there are barriers to investment, the local market factor may be a source of systematic risk for the NAVs. The risk exposures on the prices and NAVs could be different when the prices and NAVs are imperfect substitutes. By jointly estimating Eqs. 25.16 and 25.17, the hypothesis that the risk exposures on the world market and the home market indices are identical for the price and NAV returns is tested via a Wald test which is χ2 distributed, with degrees of freedom equal to the number of restrictions.12 Since the local indices are significantly correlated with the world index, for ease of interpretation, they are made orthogonal to each other by regressing the local index returns on a constant and the world index return and using the residuals as the local index return. Therefore, the risk exposure on the local or regional index is the marginal exposure in the presence of the world market index.

The results from the regressions to estimate risk exposures are reported in Table 25.5. Panel A of Table 25.5 presents the results for developed market funds and panel B presents the results for the emerging market funds. The results presented in Table 25.5 indicate that 15 of the 15 developed market funds and 12 of the 15 emerging market funds price returns have significant exposure to the world index at the 5 % level of significance. Also, 12 of the developed market funds and 14 of the emerging market funds price returns have significant exposure to their corresponding local market index, in the presence of the orthogonal world market index. For the NAVs, all 15 of the developed funds and ten of the emerging market funds returns have significant exposure to the world market index. Moreover, 14 of the developed and 13 of the emerging market NAV returns have significant exposure to their local market index, in the presence of the world market index. The adjusted R-squares across all the regressions vary from a low of zero to a high of 92 %.13
12 For a set of linear restrictions Rb = r, the Wald test statistic is given by W = (Rb − r)′ [R Var(b) R′]⁻¹ (Rb − r), where b is the vector of estimated parameters and Rb = r is the set of linear restrictions.
13 Risk exposures of the price and NAV returns were also estimated using regional indices in place of the local market indices for the funds from Europe, Latin America, and Asia. These results indicate that, out of the 12 European funds, only two funds' price returns have significant risk exposure to the MSCI Europe index in the presence of the MSCI world index. Also, out of the seven Latin American funds, all seven price returns and six NAV returns have significant exposure to the Latin American index. Also, out of the 11 Asian funds, nine price returns and eight NAV returns have significant risk exposures to the Asian index.
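To make the joint test concrete, a small sketch (Python with numpy/scipy/statsmodels; all array names are illustrative) of the orthogonalization of the local index and of the Wald statistic in footnote 12 follows. The parameter vector b and its covariance matrix would come from the joint GMM estimation of Eqs. 25.16 and 25.17, which is not reproduced here.

    import numpy as np
    import statsmodels.api as sm
    from scipy import stats

    def orthogonalize(local_ret, world_ret):
        """Regress the local index return on a constant and the world index
        return; the residuals serve as the orthogonal local index return."""
        fit = sm.OLS(local_ret, sm.add_constant(world_ret)).fit()
        return fit.resid

    def wald_test(b, cov_b, R, r):
        """W = (Rb - r)' [R Var(b) R']^{-1} (Rb - r), chi-square distributed
        with degrees of freedom equal to the number of restrictions."""
        diff = R @ b - r
        W = float(diff @ np.linalg.solve(R @ cov_b @ R.T, diff))
        return W, stats.chi2.sf(W, df=len(r))

    # Illustrative restriction: equal world-market betas for price and NAV
    # returns, with b ordered as [ap, bp_w, bp_h, an, bn_w, bn_h]:
    # R = np.array([[0.0, 1.0, 0.0, 0.0, -1.0, 0.0]]); r = np.array([0.0])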
Panel A: Developed market funds Coefficients on price returns (t-stats) Fund ap bp,h bp,w AUS 0.00 0.47 0.71 (0.66) (5.03)* (5.99)* ITA 0.00 0.44 0.60 (0.73) (5.18)* (3.15)* GER 0.00 0.49 1.05 (0.15) (3.89)* (5.38)* GBR 0.00 0.32 0.87 (0.11) (2.78)* (6.33)* SHEL 0.00 0.08 0.63 (1.07) (0.62) (2.24)* SPN 0.00 0.34 0.89 (0.80) (3.85)* (3.27)* AUT 0.00 0.30 0.97 (0.79) (3.46)* (6.57)* GERN 0.00 0.50 1.10 (0.37) (4.12)* (6.27)* GSPN 0.00 0.35 0.77 (0.44) (3.85)* (3.22)* GERF 0.00 0.45 0.91 (0.29) (4.24)* (6.00)* JPNO 0.00 0.23 1.95 (0.31) (1.82) (6.35)* GERE 0.00 0.51 0.80 (0.90) (5.87)* (4.34)*
Table 25.5 Risk exposures of country funds
0.18
0.31
0.23
0.06
0.23
0.14
0.11
0.03
0.17
0.19
0.17
Adj. R2 0.22
Coefficients on NAV returns (t-stats) an bn,h bn,w 0.00 0.92 0.39 (1.26) (26.79)* (8.05)* 0.00 0.76 0.70 (3.06)* (33.69)* (14.30)* 0.00 0.91 0.95 (0.95) (35.56)* (30.87)* 0.00 0.92 0.74 (1.42) (21.17)* (11.44)* 0.00 0.11 0.32 (1.41) (1.11) (3.07)* 0.00 0.80 0.82 (1.79) (20.66)* (21.50)* 0.00 0.72 0.77 (1.37) (17.57)* (9.59)* 0.00 0.79 0.85 (1.02) (27.79)* (25.90)* 0.00 0.75 0.89 (0.11) (16.15)* (10.63)* 0.00 0.93 0.89 (0.90) (39.49)* (30.39)* 0.00 0.74 1.16 (0.71) (7.71)* (8.18)* 0.00 0.84 0.76 (3.46)* (34.82)* (23.53)* 0.85
0.41
0.92
0.59
0.87
0.71
0.68
0.05
0.73
0.89
0.87
Adj. R2 0.67
Chi-squares for Wald tests [p-values] w2local w2world 6.61 24.00 [0.01] [0.00] 0.21 11.36 [0.64] [0.00] 0.28 11.68 [0.59] [0.00] 0.64 18.77 [0.42] [0.00] 1.09 0.03 [0.29] [0.84] 0.06 19.63 [0.80] [0.00] 1.21 19.06 [0.26] [0.00] 1.73 4.70 [0.18] [0.03] 0.15 22.75 [0.69] [0.00] 0.01 23.94 [0.88] [0.00] 6.86 10.25 [0.00] [0.00] 0.04 12.03 [0.83] [0.00]
RTWN
INDG
BRA
THA
MYS
TWN
KOR
Symbol MEX
0.81 (5.73)* 0.79 (6.02)* 0.93 (5.65)*
Coefficients on price returns (t-stats) bp,h bp,w ap 0.00 0.42 0.84 (1.36) (2.86)* (2.60)* 0.00 0.20 0.09 (0.71) (1.77) (0.58) 0.00 0.49 0.95 (0.26) (4.85)* (4.34)* 0.00 0.62 1.01 (0.60) (5.98)* (3.74)* 0.00 0.56 1.02 (0.26) (7.80)* (5.71)* 0.00 0.50 0.90 (1.89) (10.84)* (2.98)* 0.00 0.34 0.06 (0.77) (2.15)* (0.27) 0.00 0.56 1.01 (0.10) (6.47)* (5.87)*
0.00 0.08 (0.75) (0.95) FRA 0.00 0.56 (0.20) (5.46)* SGP 0.00 0.63 (0.26) (4.18)* Panel B: Emerging market funds
IRL
0.30
0.07
0.31
0.27
0.20
0.19
0.01
Adj. R2 0.02
0.15
0.21
0.13
0.75 (21.43)* 0.84 (26.34)* 0.65 (6.35)*
0.65 (12.18)* 0.68 (17.95)* 0.43 (4.61)*
Coefficients on NAV returns (t-stats) an bn,h bn,w 0.00 0.39 0.96 (1.12) (2.01)* (2.35)* 0.00 0.00 0.01 (0.73) (0.05) (0.09) 0.00 0.38 0.30 (0.33) (6.91)* (2.26)* 0.00 0.92 0.74 (0.89) (23.92)* (7.15)* 0.00 0.87 0.72 (2.27)* (23.26)* (13.28)* 0.00 0.64 1.07 (2.41)* (17.65)* (5.64)* 0.00 0.35 0.06 (0.26) (5.04)* (0.45)* 0.00 0.53 0.42 (1.58) (12.03)* (6.17)*
0.00 (0.33) 0.00 (1.47) 0.00 (0.14)
0.62
0.14
0.58
0.82
0.63
0.22
0.00
Adj. R2 0.01
0.25
0.79
0.78
48.12 [0.00] 6.13 [0.01] 0.01 [0.90]
Chi-squares for Wald tests [p-values] w2local w2world 0.07 0.16 [0.78] [0.68] 3.09 0.14 [0.07] [0.70] 1.49 8.57 [0.22] [0.00] 8.35 0.83 [0.00] [0.36] 18.04 3.07 [0.00] [0.07] 7.38 0.29 [0.00] [0.59] 0.00 0.20 [0.98] [0.64] 0.12 10.87 [0.71] [0.00] (continued)
1.04 [0.30] 0.63 [0.42] 11.27 [0.00]
0.29
0.23
0.14
0.00
0.30
0.07
Adj. R2 0.21
Coefficients on NAV returns (t-stats) an bn,h bn,w 0.00 0.89 0.04 (1.19) (25.34)* (0.71) 0.00 0.73 0.58 (1.12) (13.25)* (9.52)* 0.00 0.56 0.24 (0.55) (9.56)* (2.49)* 0.00 0.02 0.08 (0.05) (0.39) (0.24) 0.00 0.64 0.06 (1.11) (12.85)* (0.92) 0.00 0.57 0.04 (0.83) (13.64)* (0.73) 0.00 0.89 0.73 (1.83) (19.05)* (15.04)* 0.84
0.68
0.66
0.00
0.60
0.70
Adj. R2 0.40
Chi-squares for Wald tests [p-values] w2local w2world 1.24 5.13 [0.26] [0.02] 7.67 0.76 [0.00] [0.38] 1.77 6.07 [0.18] [0.01] 0.00 0.92 [0.96] [0.33] 2.74 15.83 [0.09] [0.00] 0.06 28.64 [0.79] [0.00] 7.80 4.78 [0.00] [0.02]
This table presents the results from GMM estimation of the risk exposures of the price (rp,t) and NAV (rn,t) excess returns of the country funds on the excess return on the MSCI world index (rw,t) and the excess return on the corresponding local market index (rh,t). For developed market funds, the time series covers the period January 1991–August 1995. The price and the NAV risk exposures are estimated simultaneously using an exactly identified system of equations. The t-stats robust to heteroskedasticity and serial correlation (six Newey-West lags) are presented below the coefficients. An asterisk (*) denotes significance at the 5 % level of significance. The last two columns present the Chi-squares from a Wald test for the equality of the coefficients on the world market and local market indices for the price and NAV returns. The models estimated are of the form ra,t = aa + ba,h rh,t + ba,w rw,t + ea,t, where a = p and n
THAC
JAKG
INDO
TUR
FPHI
PRT
Symbol CHL
Coefficients on price returns (t-stats) ap bp,h bp,w 0.00 0.77 0.46 (0.90) (7.57)* (2.54)* 0.00 0.31 0.75 (0.18) (2.40)* (4.29)* 0.00 0.64 0.64 (0.68) (8.72)* (3.94)* 0.00 0.03 0.34 (0.05) (0.60) (1.28) 0.00 0.48 0.91 (0.10) (4.94)* (4.62)* 0.00 0.54 1.07 (0.26) (4.35)* (6.26)* 0.00 0.62 1.26 (1.23) (7.12)* (5.10)*
Panel B: Emerging market funds
Table 25.5 (continued)
These results confirm the hypothesis that returns on prices and NAVs are generated as a linear function of the returns on the world market and possibly the corresponding local market index. The fact that 27 out of the 30 funds have significant risk exposure to the world market index indicates that the share prices may be determined via the global price of risk. The higher adjusted R-squares for the NAVs indicate that the world market index and the corresponding local market index have higher explanatory power for the time-series variations of the returns. The intercepts in the univariate regressions are not significantly different from zero, which suggests that the return on the world market index and an orthogonal return on the local market index are mean-variance efficient for the price and NAV returns.

When Wald tests are performed to test the hypothesis that the risk exposures on the prices and NAVs are identical, the results presented in the last two columns of Table 25.5 indicate that for 21 (14 from developed) out of the 30 funds, the null hypothesis of the same betas on the world index is rejected at the 10 % level of significance. Similarly, for eight funds the null hypothesis of the same betas on the local index is rejected at the 5 % level. This is a very important finding since it indicates that the systematic risks of the shares and NAVs are different. The fact that closed-end funds may have different risk exposures for the shares and NAVs has been documented for domestic funds by Gruber (1996). Also, different risk exposures for restricted and unrestricted securities have been documented by Bailey and Jagtiani (1994) for Thai securities and by Hietala (1989) for Finnish securities. These results indicate that country fund share prices and NAVs may have differential risk exposures.

This section clearly shows that country fund share price and NAV returns have significant risk exposure to the world market index. The different risk exposures will result in different expected returns for the share prices and NAVs, which is one of the sources of the premiums. The issue of whether the country funds are priced in equilibrium via the international CAPM is the focus of the next section.
25.4.2 Pricing of Country Funds in the Context of International CAPM

The results of the tests of mean-variance efficiency of the MSCI world index, using Eq. 25.2, are presented in Table 25.6. The table reports the χ2 statistics and the p-values from a Wald test for the hypothesis that a set of intercepts are jointly zero.14 This hypothesis is the restriction implied by the unconditional international CAPM. Failure to reject this hypothesis would imply that the world market index is mean-variance efficient and expected returns are proportional to the expected returns on the world
14 Other studies such as Cumby and Glen (1990) fail to reject the mean-variance efficiency of the MSCI world market portfolio for national indices of developed markets. Also, Chang et al. (1995) fail to reject the mean-variance efficiency of the MSCI world market index for a sample of developed as well as emerging market indices.
market index. The hypothesis is tested using GMM for an exactly identified system as shown in section IA. To ensure a longer time series, the tests are conducted using funds listed before December 1990. The null hypothesis that the intercepts are jointly zero cannot be rejected for the share prices of the 15 developed market funds as well as the 15 emerging market funds. But, the null hypothesis is rejected at the 5 % level for the NAVs of both developed and emerging market funds. The tests are also conducted on subsets of funds to check the robustness of the results. Since the classification of developed and emerging markets is based on national output, it may not always capture whether a country’s capital market is well developed. The subset consists of funds with zero intercepts for the NAV returns, on an individual basis. For this subset of the developed market funds consisting of 11 funds, though not reported here, the mean-variance efficiency of the world index cannot be rejected for both the share prices and NAVs. For emerging market funds NAVs, even for subsets of funds, the MSCI world market index is not mean-variance efficient. These results indicate that the world market index is an appropriate benchmark for the share prices but not necessarily for the NAVs. These results have important implications for capital market integration. Since for the NAVs, the mean-variance efficiency of the world market index is rejected, it implies that the share prices and NAVs are priced differently. This differential pricing is sufficient to generate premiums on the share prices. Also, the fact that mean-variance efficiency of the world market index cannot be rejected for share prices of both developed and emerging markets is consistent with the theoretical prediction of Diwan et al. (1995). Since the country fund share prices are priced with complete access to that market, in equilibrium, their expected returns should be proportional to their covariances with the world market portfolio. If the NAVs are priced with incomplete access, the world market portfolio is not mean-variance efficient with respect to those returns. The results in this section are consistent with this notion. As the earlier section shows, the risk exposures of the share prices and NAVs may differ. If the international CAPM is a valid model for the share prices and NAVs, as outlined in section IB, this differential risk exposure can explain cross-sectional variations in premiums. The next section analyzes the effect of the differential risk exposures of the share prices and NAVs on the country fund premiums.
25.4.3 Cross-Sectional Variations in Country Fund Premiums

This section presents the results for tests of the hypothesis that cross-sectional variation in country fund premiums is positively related to the differences in the risk exposures of the share prices and NAVs. The theoretical motivation for this was presented in section IC. Equation 25.10 is estimated for each week during the last 75 weeks of the sample and the parameter estimates are averaged. The results indicate that cross-sectional variation in country fund premiums is explained by the
Table 25.6 Pricing of country funds in the context of international CAPM

Test assets                                  Chi-squares for price returns [p-value]   Chi-squares for NAV returns [p-value]
Developed market country funds (15 funds)    15.71 [0.40]                               44.48 [0.00]
Emerging market country funds (15 funds)     10.64 [0.77]                               25.47 [0.04]
This table presents the Chi-squares for the null hypothesis that the intercepts are jointly zero for a set of assets when their excess returns are regressed on the excess return on the MSCI world index. The regression estimated is ri,t = ai + bi rw,t + ei,t, i = 1, ..., N. The null hypothesis H0: ai = 0 for all i = 1, ..., N is tested by a Wald test using GMM estimates from an exactly identified system. The p-values are based on standard errors robust to heteroskedasticity and serial correlation (13 Newey-West lags). The sample covers January 1991–August 1995. The test assets are excess price/NAV returns of country funds. The composition of the test assets is as follows: the developed market funds are AUT, AUS, FRA, GER, GERE, GERF, GERN, GSPN, IRL, ITA, JPNO, SGP, SPN, SHEL, and GBR. The emerging market funds are BRA, CHL, FPHI, INDG, INDO, JAKG, KOR, MYS, MEX, PRT, RTWN, TWN, THAC, THA, and TUR
differential risk exposures of a closed-end country fund share price and NAV returns. The estimation proceeds as follows: after the betas are estimated using 75 weeks of data prior to the last 75 weeks of the sample, the difference in the betas on the world index is used as an explanatory variable in regressions with the premiums as the dependent variable. The results are reported in Table 25.7. The results indicate that, for the full sample, the difference in risk exposure is significant and the adjusted R-square is 9.5 %. For the developed market funds, the results in panel B indicate that the differences in the betas on the world index have significant explanatory power for the cross-section of premiums for the developed market funds, with an adjusted R-square of 18.7 %. For the emerging market funds, however, as reported in panel C, the differences in risk exposures are not significant, and the adjusted R-square is only 1.9 %. This result is not surprising, given the result of the previous section that the world market index is not mean-variance efficient for the emerging market NAVs. When a dummy variable which takes the value of one for the developed markets is added, it is highly negatively significant, indicating that the premiums for developed market funds are significantly lower than the emerging market fund premiums. The differential risk exposure is, however, significant only at the 10 % level, when the dummy variable is added. As an extended specification, measures of spanning, integration, and substitution (based on prior research of Errunza et al. 1998) are used as additional explanatory variables. When measures of spanning, substitution, and access are used as additional explanatory variables, the adjusted R-squares go up to 39.5 % for the full sample. The measure of spanning has a positive sign as expected. The measure of spanning is also significant across the developed market funds. However, contrary
Table 25.7 Cross-sectional variations in country fund premiums CONST
DIF
SPN
Panel A: Full sample 0.05 0.08 (1.60) (2.85)* 0.15 0.06 63.77 (2.04)* (0.07) (2.00)* 0.17 0.07 69.48 (2.00)* (2.28)* (2.79)* 0.16 72.59 (2.26)* (2.73)* 0.00 0.05 (0.18) (2.03)* Panel B: Developed market funds 0.09 0.11 (4.33)* (2.51)* 0.29 0.12 207.76 (4.23)* (4.03)* (2.05)* 0.29 0.12 210.39 (4.28)* (3.81)* (2.06)* 0.28 234.11 (2.12)* (4.07)* Panel C: Emerging market funds 0.00 0.01 (0.25) (0.40) 0.10 0.02 53.72 (1.21) (0.60) (1.75)** 0.01 0.02 49.56 (0.95) (0.51) (1.59) 0.09 50.91 (1.11) (1.64)
SUB
ACC(d)
ACC (CF)
Adj. R2 0.095
0.03 (1.55) 0.03 (1.60) 0.03 (1.73)
0.02 (1.40)
0.395 0.00 (0.02)
0.383 0.300
0.08 (2.85)*
0.184
0.187 0.05 (2.27)* 0.08 (2.12)* 0.04 (2.14)*
0.551 0.00 (0.00)
0.565 0.370
0.019 0.02 (0.86) 0.01 (0.95) 0.01 (0.86)
0.395 0.03 (0.15)
0.383 0.301
Results from an OLS regression of premiums on country funds on DIF (the difference between the betas on the world market index for the price returns and NAV returns). The coefficient ACC(CF) is a measure of access proxied by the total purchase of securities by US residents (divided by the global purchase) from the country of origin of the fund. The coefficient ACC(d) is a dummy variable that takes the value one for developed markets and zero for emerging markets. The coefficients below are the averages from the regression of the premiums on the independent variables for each week for the last 75 weeks of the sample (Mar 94–Aug 95). The t-statistics are presented in parentheses; * and ** denote significance at the 5 % and 10 % levels of significance, respectively. The spanning measure (SPN) is the conditional variance of the NAV return of a fund, unspanned by the US market return and the fund's share price returns, with the specification rn,t = ai + b1,i rUS,t + b2,i rp,t + ei,t, where ei,t has a GARCH(1, 1) specification. The measure of substitution (SUB) is the ratio of conditional volatilities of the share price and NAV returns of a fund not spanned by the US market. The specifications are rn,t = ai + bi rUS,t + en,t and rp,t = ai + bi rUS,t + ep,t. The error terms en,t and ep,t have GARCH(1, 1) specifications as follows (ht is the variance of the error term): ht = θ0 + θ1 e²i,t−1 + θ2 ht−1. The equation for the cross-sectional regressions is Premi = δ0 + δ1 DIFi + δ2 SPNi + δ3 SUBi + δ4 ACCi + ei
to expectation, the measure of spanning is not significant across emerging market funds. This could be due to very low variability of the variables within emerging markets. The measure of substitutability, which is the ratio of conditional variances of the price returns and NAV returns, is not significant. The measure of substitutability remains insignificant for the subsets of emerging market funds. The measure of access proxied by purchase of securities by US residents from that market is not significant. These results indicate that differential risk exposures and measures of access and spanning are significant determinants of cross-sectional variation in closed-end country fund premiums. Also, the measures of segmentation explain premiums better across developed and emerging markets compared to only within emerging markets. The phenomenon that differential risk exposures can explain cross-sectional variations in premiums on unrestricted securities relative to securities restricted to only local investors has been previously documented for Thai equities by Bailey and Jagtiani (1994) and for Swiss equities by Stulz and Wasserfallen (1995). Therefore, both market segmentation and other factors which make the closed-end country fund shares and the underlying securities imperfect substitutes – which is the source of the differences in the risk exposures and the excess volatility of the share prices – account for the cross-sectional variations in the premiums. The significance of the difference in risk exposures indicates that the greater the difference in risk exposures across a sample of country funds, the higher the premium. Second, the premium is higher for country funds originating from countries whose capital markets are less integrated with the US capital market.

Although an important finding, the differential risk exposures cannot explain the time variation in premiums. The academic literature has documented that the country fund premiums vary over time. Figure 25.1 illustrates the time variation of country fund premiums. Section IIIB reported the results for pricing of country funds in the context of the unconditional international CAPM which indicated that country fund share prices are priced consistent with the international CAPM, whereas the NAVs may or may not be priced via the international CAPM. Unconditional mean-variance efficiency of a benchmark portfolio implies conditional mean-variance efficiency, but not vice versa (Ferson 1995). If expected returns conditional on an information set are also different for the share prices and the NAVs, it could explain not only the premiums but also their variation over time, which is further explored in section IIIF. To analyze the time variability of expected returns, the predictability of the funds' share price and NAV returns is examined in the next section.
25.4.4 Predictability of Closed-End Country Fund Returns

To assess the predictability of price and NAV returns, 4-week cumulative returns (the sum of returns over a 4-week period beginning the next week) are used, since most studies of predictability have used a monthly horizon. Predictability of the returns would imply that expected returns vary over time due to rational variation in
risk premiums. To test the hypothesis of predictability, the returns for both share prices and NAVs are regressed on a set of global and fund-specific instruments, to ascertain the predictive power of these instruments. The global instruments are TED and DIVY. The fund-specific instrument is the lagged premium on an equally weighted index of all the funds in the sample (FFD). The lagged premium was found to have predictive power for price returns by Bodurtha et al. (1995). Also, Errunza et al. (1998) find that when time series of premiums is regressed on a global fund premium, it is highly significant. However, unlike Errunza et al. (1998) here the lagged premium is used. Significance of this variable would indicate that investors use information about past premiums to form expectations about future prices. The regression is of the form

Σ_{τ=t+1}^{t+4} ra,τ = η0 + η1 FFDt + η2 TEDt + η3 DIVYt + ea,t
(25.18)
where ra, t is the excess return on the price or NAV of an equally weighted portfolio of developed or emerging market funds. The results from the regressions are presented in Table 25.8. Wald tests of no predictability, which test the hypothesis that a set of coefficients are jointly zero, indicate that the fund factor predicts price returns for 4 funds only and it also predicts 15 funds’ NAV returns. The predictive power of the lagged premiums for price returns has been attributed to noise trading (see Bodurtha et al. 1995). The fact that it predicts NAV returns may indicate that noise trading is not an important determinant of expected returns of share prices of closed-end funds. When global instruments are used, the Wald test of no predictability can be rejected at the 5 % level for only five funds’ price returns and 15 funds’ NAV returns. Overall, using all the instruments, the null hypothesis of no predictability can be rejected for six share prices and 20 NAVs. The poor predictability of the share prices is puzzling, since the share prices are determined in the USA and the instruments used have been shown to predict returns in the USA. In summary, closed-end country fund price and NAV returns exhibit considerable predictability. The existence of predictable variation in returns is interpreted as evidence of time-varying expected returns. The differences between the predictability of prices and NAVs also imply that they are not priced identically, which is further examined in the next section using time-varying expected returns.
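As an illustration of Eq. 25.18, the 4-week-ahead cumulative returns and the predictive regression could be constructed as follows (a sketch using pandas/statsmodels with heteroskedasticity- and autocorrelation-robust standard errors, whereas the chapter uses GMM on an exactly identified system of moment conditions; all series names are assumptions):

    import pandas as pd
    import statsmodels.api as sm

    def predictability_regression(ret, ffd, ted, divy):
        """ret, ffd, ted, divy: aligned weekly pandas Series.
        Dependent variable: sum of returns over weeks t+1 .. t+4."""
        cum4 = ret.shift(-1).rolling(4).sum().shift(-3)   # r_{t+1}+...+r_{t+4}
        X = sm.add_constant(pd.concat({"FFD": ffd, "TED": ted, "DIVY": divy},
                                      axis=1))
        data = pd.concat([cum4.rename("cum4"), X], axis=1).dropna()
        fit = sm.OLS(data["cum4"], data.drop(columns="cum4")).fit(
            cov_type="HAC", cov_kwds={"maxlags": 4})
        return fit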
25.4.5 Conditional Expected Returns and Pricing of Country Funds

Table 25.9 reports the results of estimating the SDFs specified in Eqs. 25.11 and 25.12. The models in panels A and B correspond to the international CAPM and a two-factor model, with the world market return and the regional returns for Latin America or Asia as the factors. For both the models,
Table 25.8 Predictability of country fund returns Panel A: Developed market funds Chi-squares [p-values] Global Fund Fund factor instruments AUS 0.11 1.93 [0.73] [0.38] ITA 0.71 1.30 [0.39] [0.51] GER 4.91 2.51 [0.02] [0.28] GBR 0.07 1.23 [0.78] [0.53] SHEL 1.81 1.43 [0.17] [0.48] SPN 4.96 6.19 [0.02] [0.04] AUT 0.02 0.01 [0.86] [0.99] GERN 1.67 0.10 [0.19] [0.95] GSPN 1.36 2.53 [0.24] [0.28] GERF 0.20 0.29 [0.64] [0.86] JPNO 7.12 3.18 [0.00] [0.20] GERE 0.14 0.03 [0.70] [0.98] IRL 0.00 1.93 [0.95] [0.38] FRA 3.56 1.52 [0.05] [0.46] SGP 0.00 2.55 [0.96] [0.27] Panel B: Emerging market funds Chi-squares [p-values] Global Symbol Fund factor instruments MEX 0.85 5.65 [0.35] [0.05] KOR 0.74 1.04 [0.38] [0.59] TWN 0.06 3.15 [0.79] [0.20]
All instruments 1.95 [0.58] 1.36 [0.71] 8.67 [0.03] 2.58 [0.45] 2.51 [0.47] 7.06 [0.06] 0.03 [0.99] 2.62 [0.45] 2.61 [0.45] 1.42 [0.69] 8.73 [0.03] 0.44 [0.93] 2.15 [0.54] 5.02 [0.17] 2.55 [0.46]
Chi-squares [p-values] Global Fund factor instruments 8.23 3.44 [0.00] [0.17] 5.86 7.27 [0.01] [0.02] 5.18 1.98 [0.02] [0.37] 3.33 5.64 [0.06] [0.05] 0.05 1.86 [0.81] [0.39] 10.01 16.95 [0.00] [0.00] 1.86 3.25 [0.17] [0.19] 3.33 2.13 [0.06] [0.34] 4.58 11.22 [0.03] [0.00] 3.58 2.91 [0.05] [0.23] 13.96 5.20 [0.00] [0.07] 4.69 1.32 [0.03] [0.51] 1.80 14.56 [0.17] [0.00] 0.00 0.23 [0.94] [0.89] 11.05 1.37 [0.00] [0.50]
All instruments 5.82 [0.12] 2.00 [0.57] 3.67 [0.29]
Chi-squares [p-values] Global Fund factor instruments 0.53 6.62 [0.46] [0.03] 4.20 8.87 [0.04] [0.01] 19.77 16.71 [0.00] [0.00]
All instruments 9.24 [0.02] 9.19 [0.02] 8.04 [0.04] 5.98 [0.11] 1.86 [0.60] 18.88 [0.00] 6.03 [0.10] 6.26 [0.09] 11.29 [0.01] 5.87 [0.11] 19.60 [0.00] 8.12 [0.04] 14.58 [0.00] 0.25 [0.96] 11.28 [0.01]
All instruments 7.24 [0.06] 13.46 [0.00] 25.93 [0.00] (continued)
Table 25.8 (continued) Panel B: Emerging market funds Chi-squares [p-values] Global Symbol Fund factor instruments MYS 0.20 1.89 [0.64] [0.38] THA 0.03 1.67 [0.85] [0.43] BRA 0.03 6.34 [0.84] [0.04] INDG 0.47 2.03 [0.49] [0.36] RTWN 1.72 15.45 [0.18] [0.00] CHL 0.05 4.94 [0.80] [0.08] PRT 1.17 0.45 [0.27] [0.79] FPHI 0.00 3.30 [0.97] [0.19] TUR 1.37 1.11 [0.24] [0.57] INDO 2.06 1.09 [0.15] [0.57] JAKG 0.28 0.87 [0.59] [0.64] THAC 0.03 4.22 [0.84] [0.12]
All instruments 2.83 [0.41] 1.85 [0.60] 6.99 [0.07] 2.03 [0.56] 15.58 [0.00] 6.97 [0.07] 2.41 [0.49] 3.43 [0.32] 3.45 [0.32] 2.26 [0.51] 1.35 [0.71] 4.79 [0.18]
Chi-squares [p-values] Global Fund factor instruments 1.91 2.21 [0.16] [0.33] 4.07 4.92 [0.04] [0.08] 0.32 4.90 [0.56] [0.08] 7.50 3.79 [0.00] [0.15] 19.35 15.42 [0.00] [0.00] 1.78 7.32 [0.18] [0.02] 6.56 0.01 [0.01] [0.99] 1.03 2.91 [0.31] [0.23] 0.35 0.34 [0.55] [0.83] 2.70 7.81 [0.09] [0.02] 1.75 19.36 [0.18] [0.00] 1.98 2.33 [0.15] [0.31]
All instruments 3.47 [0.32] 7.24 [0.06] 5.39 [0.14] 24.21 [0.00] 24.72 [0.00] 7.34 [0.06] 9.51 [0.02] 5.61 [0.13] 1.42 [0.70] 7.90 [0.04] 19.93 [0.00] 3.53 [0.31]
This table presents the Chi-squares from a Wald test to ascertain the importance of the fund factor and global instruments in predicting the 4-week-ahead cumulative returns of the prices and NAVs. The fund factor FFD is the lagged premium on an equally weighted index of all country funds in the sample. The global instruments include the lagged spread on 90-day Eurodollar deposits and 90-day US treasury yields (TED) and the lagged dividend yield on the S&P 500 index (DIVY). The estimates are obtained via GMM using an exactly identified set of moment conditions. The table reports Chi-squares for a Wald test that a set of one or more instruments is zero. The p-values are robust to heteroskedasticity. The model estimated is

Σ_{τ=t+1}^{t+4} ra,τ = η0 + η1 FFDt + η2 TEDt + η3 DIVYt + ea,t
the conditional restrictions are tested. As noted earlier, if expected returns vary over time due to rational variation in risk premiums, conditional expected returns should be used. The conditional restrictions are tested by using a set of lagged instrumental variables TED and DIVY as predictors of excess returns. The tests use sets of assets – two sets of four developed market funds, a set of three Latin American funds, and two sets of three Asian funds. To ensure an adequate time series,
the funds selected are the ones listed prior to December 1989. Also, using too many assets would result in too many moment conditions.15 The funds selected are listed in Table 25.9. After estimating a model using GMM, its goodness-of-fit is ascertained by the J-statistic. The J-statistic is a test of the overidentifying restrictions of the model. The p-values given below the J-statistic indicate that none of the models are rejected for any test assets, at conventional significance levels. This finding is consistent with the idea that for a group of assets, there may be a number of SDFs that satisfy the pricing relation. Also, failure to reject the conditional international CAPM even for the NAVs indicates that while the unconditional international CAPM is not a valid model for emerging market funds and some developed market funds, the conditional international CAPM may be a valid model. This finding is consistent with the finding of Buckberg (1995) who fails to reject the conditional international CAPM for a set of emerging market indices. It must be noted that, unlike traditional conditional models, using stochastic discount factors implies a nonlinear relation between asset returns and the returns on the market index. Failure to reject such specifications may indicate that nonlinear specifications perform better in explaining equity returns. Table 25.9 also reports the results for tests of hypothesis that the coefficients of the SDFs of the price returns and the NAV returns are identical. The p-values for the Wald tests indicate that the null hypothesis can be rejected at all conventional levels for a subset of the developed market funds and all the Asian funds. This is a striking result, since it indicates that, although the same factors may be priced for the share prices and NAVs, the factors are priced differently. In traditional asset-pricing framework, it is equivalent to saying that the risk premiums are different. This result is consistent with the previous finding of Harvey (1991) who reports that for a sample of industrial markets, although the conditional international CAPM cannot be rejected, the price of risk varies across countries. Also, De Santis (1995) finds that stochastic discount factors that can price developed market indices cannot price emerging market indices. The difference in the price of risk is one of the sources of the premiums for the country funds. When the two-factor model is used, again the tests reject the hypothesis that the coefficients are identical for the Asian funds. However, the hypothesis of identical coefficients for the price and NAV returns cannot be rejected for a set of developed market funds and the Latin American funds. The above results imply that a subset of the developed markets and the Asian market are segmented from the US market. However, a subset of the developed markets and the Latin American markets are integrated with the US market. The results for Latin America are not affected, when tests are conducted excluding the time period surrounding the Mexican currency crisis. Therefore, differential risk exposures are more important in explaining the premiums for the Latin American markets and the developed markets that are integrated with the world market. The results in this section clearly show that while some markets may be integrated
15 Cochrane (1996) shows that iterated GMM estimates behave badly when there are too many moment conditions (37 in his case).
Price returns Test assets l0 Developed market funds AUT, GBR, 2.14 (4.02)* GER, ITA SPN, SHEL, 2.81 (4.46)* AUS, AUT Asian funds KOR, MYS, 4.27 (4.08)* TWN THA, INDG, 0.99 FPHI (0.74) Latin American funds BRA, CHL, 4.46 (2.45)* MEX
3.46 (1.90)*
3.27 (3.13)* 1.99 (1.50)
1.14 (2.15)* 1.81 (2.88)*
l1
3.82 (2.37)*
3.42 (2.20)*
2.95 (3.56)*
0.81 (1.64)
2.19 (5.51)*
NAV returns l2 l0 International CAPM
Table 25.9 GMM estimation of stochastic discount factors
2.82 (1.75)
1.95 (2.36)* 4.42 (2.84)*
1.29 (3.11)* 0.18 (0.37)
l1
l2
14
14
14
5.25 [0.07]
8.38 [0.01]
0.53 [0.76]
6.38 [0.01]
7.72 [0.00]
0.07 [0.79]
J1 [p-value] J2 [p-value]
9.80 [0.77]
0.42 [0.80]
0.36 [0.54]
9.28 [0.81] 11.21 [0.00] 10.57 [0.00]
8.11 [0.88]
20 11.23 [0.94]
20 11.98 [0.91]
DF J [p-value]
0.83 (0.98)
1.09 (3.27)*
0.29 (0.24)
2.00 (3.95)*
3.10 (3.63)* 1.88 (3.86)*
1.16 (1.48) 1.35 (1.63)
3.54 (4.06)*
Twofactor model
0.16 (0.19)
0.88 (1.81)
4.10 (4.80)*
0.76 (3.97)*
1.19 (1.74) 1.50 (1.84) 18 12.64 [0.81]
18 10.66 [0.90]
18 11.51 [0.87]
2.27 [0.51]
8.16 [0.04]
3.12 [0.26]
0.02 [0.88]
7.65 [0.00]
2.94 [0.08]
Estimation of stochastic discount factors, based on the equation E[Mt+1 Rt+1 | Zt] = 1, via GMM. The sample covers January 1990–August 1995. The instrument set Zt includes a constant, the lagged spread on 90-day Eurodollar deposits and 90-day US Treasury yields (LTED), and the dividend yield on the Standard and Poor's 500 index (LDIVY). The estimations with the returns on regional indices (Rh) also use the lagged regional return (for Asia or Latin America) as an instrument. The stochastic discount factors estimated are
ICAPM: M = λ0 + λ1 Rw
Two-factor: M = λ0 + λ1 Rw + λ2 Rh
DF is the degrees of freedom for Hansen's test of the overidentifying restrictions (the J-statistic), which has a χ2 distribution. The t-stats for the coefficients are given in parentheses (an asterisk indicates significance at the 5 % level) and the p-values for the J-stat are given in brackets. The last two columns report Chi-squares and p-values for tests of the hypotheses that the coefficients of the price returns and NAV returns are identical. J1 is for a joint test for all the coefficients and J2 is a test for the coefficients on the world market index
Asian funds KOR, MYS, 2.35 TWN (2.92)* THA, INDG, 3.00 (5.93)* FPHI Latin American funds BRA, CHL, 0.70 (0.57) MEX
25 Market Segmentation and Pricing of Closed-End Country Funds 699
700
D.K. Patro
with the world capital market, others are not. If capital markets are segmented, that is sufficient to generate the premiums on the share prices. However, if markets are integrated, differential risk exposures may explain the premiums. This finding is different from the existing literature on country fund pricing, which has attributed the existence of premiums on irrational factors such as noise trading. The preceding analysis clearly shows that differential pricing of the same factors or differential risk exposures on the same factor may lead to premiums on the share prices. The next section provides an analysis of the effect of time-varying expected returns on country fund premiums.
25.4.6 Conditional Expected Returns and Time-Varying Country Fund Premiums

Table 25.10 reports the results of regressing the premiums on the country funds on the differences in conditional risk exposures of share price returns and NAV returns estimated using Eqs. 25.13, 25.14, and 25.15. The conditional risk exposures are estimated as a linear function of lagged dividend yields and the term premium. The test examines whether the coefficient on the difference in the conditional betas is significant. The results indicate that the difference in conditional betas is highly significant in explaining the time-varying premiums on country funds. For 24 out of the 30 funds, the coefficient is significant. Also, the adjusted R-squares are high, especially for the developed market funds. For many of the funds, however, the coefficient is negative, implying that using the world market index alone is not sufficient to explain the return-generating process and that the returns on the NAVs may reflect the local price of risk. The adjusted R2 values are higher for the developed market funds. This result is consistent with earlier results which show that the world market index is not an appropriate benchmark for emerging market NAVs. Also, the majority of the emerging market funds are from Asia. The results in Table 25.9 indicate that these markets are segmented from the world market. Therefore, different risk exposures to the world market index do not explain much of the variation in the premiums for the emerging markets. This is an important finding since the existing literature on closed-end funds has attributed the existence of premiums to irrational factors, such as noise trading. The preceding analysis shows clearly that segmented capital markets in which risk premiums vary over time are sufficient to generate two different prices for the same set of cash flows. Also, the differences in prices may vary over time as expected returns vary over time due to rational variations in expected returns. If the price of risk is different across markets, the same security will have different expected returns. The difference in these expected returns results in the premium on the share price. If the expected returns vary over time because of rational variation in risk premiums, the premiums will also vary over time as a function of the differential expected returns (Table 25.10).
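To make the construction of Table 25.10 concrete, the following is a minimal sketch (not the chapter's code) of the premium regression Prem_{t+1} = γ0 + γ1[β_{n,w}(Z_t) − β_{p,w}(Z_t)] + e with heteroskedasticity-robust (White) t-statistics; the data arrays are hypothetical placeholders rather than the actual fund sample.

```python
# Hedged sketch of the premium regression behind Table 25.10, with White robust t-stats.
# All inputs are synthetic placeholders, not the chapter's data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
T = 56  # monthly observations, roughly January 1991 - August 1995
beta_diff = rng.normal(size=T)           # b_{n,w}(Z_t) - b_{p,w}(Z_t), assumed precomputed
prem = 0.05 + 0.1 * beta_diff + rng.normal(scale=0.02, size=T)  # premium on the fund shares

X = sm.add_constant(beta_diff)
ols = sm.OLS(prem, X).fit(cov_type="HC0")   # heteroskedasticity-robust (White) covariance
print(ols.params)        # [gamma0, gamma1]
print(ols.tvalues)       # robust t-statistics
print(ols.rsquared_adj)  # adjusted R-squared, as reported in the table
```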
25.5 Conclusions
Closed-end country funds are becoming an increasingly attractive source of capital in international equity markets. This paper provides an empirical analysis of the pricing of these funds in the context of rational asset-pricing models. The paper finds that differential risk exposures, market segmentation, and time-varying risk premiums play important roles in the differential pricing of the share prices and NAVs. Based on the unconditional international CAPM, the existence of premiums can be attributed to different risk exposures, as is the case with developed market funds, and to differential pricing of the shares and NAVs, as is the case with emerging market funds and some developed market funds. The paper also analyzes the pricing of country funds when conditioning information is allowed. The results indicate that, for alternative stochastic discount factors and for the majority of the funds, the pricing of country fund shares and NAVs is consistent with the conditional international CAPM. However, tests of the estimated stochastic discount factors indicate that, for a subset of the developed market funds and all the Asian funds, closed-end country fund share prices and NAVs are priced differently. This result indicates that the international capital markets are not fully integrated. Finally, this paper shows that the premiums on country funds vary over time because of time variation in expected returns. The findings in this paper have several possible extensions. The focus of the paper has been on rational explanations for country fund premiums based on differential risk exposures and market segmentation effects. It will be interesting to extend this analysis and examine the effect of other factors such as taxes, numeraires, liquidity, and bid-ask spreads.
Appendix 1: Generalized Method of Moments (GMM)

GMM, developed by Hansen (1982), is a generalization of the method of moments. The moment conditions are derived from the model. Suppose Z_t is a multivariate independently and identically distributed (i.i.d.) random variable. The econometric model specifies the relationship between Z_t and the true parameters of the model (θ0). To use GMM there must exist a function g(Z_t, θ0) so that

m(θ0) ≡ E[g(Z_t, θ0)] = 0   (25.19)

In GMM, the theoretical expectations are replaced by sample analogs:

f(θ; Z_t) = (1/T) Σ_t g(Z_t, θ)   (25.20)

The law of large numbers ensures that the RHS of the above equation is the same as

E[f(Z_t, θ0)]   (25.21)
Table 25.10 Time-varying closed-end country fund premiums

Panel A: Developed market funds
Fund   γ0              γ1              Adj. R²
AUS    0.07 (10.13)*   0.11 (5.04)*    0.08
ITA    0.07 (13.05)*   0.10 (5.86)*    0.19
GER    0.01 (1.55)*    0.00 (0.29)     0.00
GBR    0.11 (34.40)*   0.08 (8.53)*    0.22
SHEL   0.06 (13.38)*   0.10 (7.12)*    0.13
SPN    0.00 (0.14)     0.18 (4.89)*    0.19
AUT    0.03 (3.47)*    0.21 (7.47)*    0.13
GERN   0.14 (37.73)*   0.04 (3.45)*    0.13
GSPN   0.15 (35.72)*   0.13 (7.13)*    0.22
GERF   0.14 (40.05)*   0.03 (2.47)*    0.01
JPNO   0.09 (9.44)*    0.04 (3.79)*    0.05
GERE   0.15 (39.20)*   0.04 (4.54)*    0.06
IRL    0.16 (44.37)*   0.13 (5.72)*    0.12
FRA    0.15 (24.47)*   0.24 (5.22)*    0.21
SGP    0.37 (2.03)*    0.72 (1.96)     0.02

Panel B: Emerging market funds
Fund   γ0              γ1              Adj. R²
MEX    0.07 (16.93)*   0.08 (6.66)*    0.24
KOR    0.21 (26.45)*   0.15 (8.42)*    0.20
TWN    0.07 (1.01)     0.26 (2.28)*    0.01
MYS    0.03 (6.67)*    0.02 (2.25)*    0.02
THA    0.00 (0.04)     0.08 (0.43)     0.00
BRA    0.01 (2.44)*    0.05 (4.81)*    0.09
INDG   0.00 (0.61)     0.01 (0.83)     0.00
RTWN   0.60 (0.76)     0.00 (1.46)     0.01
CHL    0.34 (6.76)*    0.62 (5.14)*    0.20
PRT    0.06 (13.60)*   0.03 (2.47)*    0.02
FPHI   0.20 (25.86)*   0.01 (0.83)     0.00
TUR    0.15 (11.34)*   0.10 (2.16)*    0.01
INDO   0.23 (7.85)*    0.10 (2.97)*    0.07
JAKG   0.38 (8.91)*    0.34 (7.76)*    0.27
THAC   0.22 (4.97)*    0.24 (3.11)*    0.03

Notes: Results from an OLS regression of the premiums on country funds on the difference between the time-varying betas on the world market index. The t-statistics robust to heteroskedasticity are presented in parentheses. An asterisk (*) denotes significance at the 5% level. The regression estimated is of the form

Prem_{t+1} = γ0 + γ1 [β_{n,w}(Z_t) − β_{p,w}(Z_t)] + e_{prem,t+1}

The time-varying betas are estimated using the conditional international CAPM for developed and emerging market fund price and NAV returns, using the equations

r_{a,t+1} = α_a + β_{a,w} r_{w,t+1} + Σ_i γ_{a,i} r_{w,t+1} z_{it} + e_{a,t+1},   a = p and n,

where r_{p,t+1} is the excess return on the share price, r_{n,t+1} is the excess NAV return, and Prem_{t+1} is the premium for the time period January 1991–August 1995. r_{w,t+1} is the excess return on the world market index, and Z_t is the set of instrumental variables DIVY and TED.
The sample GMM estimator of the parameters may be written as (see Hansen 1982)

θ̂ = arg min [ (1/T) Σ_t g(Z_t, θ) ]′ W_T [ (1/T) Σ_t g(Z_t, θ) ]   (25.22)

So essentially GMM finds the values of the parameters so that the sample moment conditions are satisfied as closely as possible. In our case, for the regression model

y_t = X_t′ β + ε_t   (25.23)

the moment conditions include

E[(y_t − X_t′ β) x_t] = E[ε_t x_t] = 0 for all t   (25.24)

So the sample moment condition is (1/T) Σ_t (y_t − X_t′ β) x_t, and we want to select β so that this is as close to zero as possible. If we select β as (X′X)⁻¹(X′y), which is the OLS estimator, the moment condition is exactly satisfied. Thus, the GMM estimator reduces to the OLS estimator, and this is what we estimate. For our case the instruments used are the same as the independent variables. If, however, there are more moment conditions than parameters, the GMM estimator above weighs them. These are discussed in detail in Greene (2008, Chap. 15). The GMM estimator has the asymptotic variance

[X′Z (Z′ΩZ)⁻¹ Z′X]⁻¹   (25.25)

The White robust covariance matrix may be used for Ω, as discussed in Appendix C, when heteroskedasticity is present. Using this approach, we estimate GMM with White heteroskedasticity-consistent t-statistics.
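The following is a small numerical sketch of the point made in this appendix, under the assumption of synthetic data: with instruments equal to the regressors, the sample moment conditions are solved exactly by OLS, and Eq. 25.25 with a White-type estimate of Z′ΩZ gives heteroskedasticity-consistent standard errors.

```python
# Sketch only: GMM with instruments Z equal to the regressors X reduces to OLS, and the
# asymptotic variance [X'Z (Z'OmegaZ)^{-1} Z'X]^{-1} with a White estimate of Z'OmegaZ
# is the usual heteroskedasticity-consistent sandwich. Synthetic data, not the chapter's.
import numpy as np

rng = np.random.default_rng(1)
T, k = 200, 3
X = np.column_stack([np.ones(T), rng.normal(size=(T, k - 1))])
y = X @ np.array([1.0, 0.5, -0.3]) + rng.normal(size=T) * (1 + 0.5 * np.abs(X[:, 1]))

Z = X                                        # instruments = regressors
b = np.linalg.solve(X.T @ X, X.T @ y)        # moment conditions exactly satisfied -> OLS
e = y - X @ b
S = (Z * e[:, None]).T @ (Z * e[:, None])    # Z' diag(e^2) Z, White estimate of Z'OmegaZ
V = np.linalg.inv(X.T @ Z @ np.linalg.inv(S) @ Z.T @ X)   # asymptotic covariance (25.25)
se = np.sqrt(np.diag(V))
print(b, b / se)                             # coefficients and White-robust t-statistics
```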
References Bailey, W., & Jagtiani, J. (1994). Foreign ownership restrictions and stock prices in the Thai Capital Market. Journal of Financial Economics, 36, 57–87. Bailey, W., & Lim, J. (1992). Evaluating the diversification benefits of the new country funds. Journal of Portfolio Management, 18, 74–80. Bansal, R., Hsieh, D. A., & Viswanathan, S. (1993). A new approach to international arbitrage pricing. Journal of Finance, 48, 1719–1747. Bekaert, G. (1995). Market integration and investment barriers in equity markets. World Bank Economic Review, 9, 75–107.
Bekaert, G., & Urias, M. (1996). Diversification, integration and emerging market closed-end funds. Journal of Finance, 51, 835–869. Bodurtha, J. N., Jr., Kim, D.-S., & Lee, C. M. C. (1995). Closed-end country funds and U.S. market sentiment. Review of Financial Studies, 8, 881–919. Bonser-Neal, C., Brauer, G., Neal, R., & Wheatly, S. (1990). International investment restrictions and closed-end country fund prices. Journal of Finance, 45, 523–547. Chang, E., Eun, C., & Kolodny, R. (1995). International diversification through closed-end country funds. Journal of Banking and Finance, 19, 1237–1263. Cochrane, J. H. (1996). A cross-sectional test of an investment-based asset pricing model. Journal of Political Economy, 104(3), 572–621. Cumby, R. E., & Glen, J. D. (1990). Evaluating the performance of international mutual funds. Journal of Finance, 45(2), 497–521. De Santis, G. (1995). Volatility bounds for stochastic discount factors: Tests and implications from international financial markets. Working paper, University of Southern California, Jan. Diwan, I., Vihang, E., & Senbet, L.W. (1995). Diversification benefits of country funds. In M. Howell (Ed.), Investing in emerging markets. Washington, DC: Euromoney/World Bank. Errunza, V., & Losq, E. (1985). International asset pricing under mild segmentation: Theory and test. Journal of Finance, 40, 105–124. Errunza, V., Senbet, L. W., & Hogan, K. (1998). The pricing of country funds from emerging markets: Theory and evidence. International Journal of Theoretical and Applied Finance, 1, 111–143. Fama, E. F., & French, K. R. (1989). Business conditions and expected returns on stocks and bonds. Journal of Financial Economics, 25, 23–49. Ferson, W. (1995). Theory and empirical testing of asset pricing models. In R. Jarrow, V. Maksimovic, & W. T. Ziemba (Eds.), Handbooks in operation research and management science (Vol. 9). Amsterdam: North-Holland. Ferson, W., & Foerster, S. (1994). Finite sample properties of generalized methods of moments in tests of conditional asset pricing models. The Journal of Financial Economics, 36, 29–55 Ferson, W. E., & Harvey, C. R. (1993). The risk and predictability of international equity returns. Review of Financial Studies, 6(3), 527–566. Ferson, W. E., & Schadt, R. (1996). Measuring fund strategy and performance in changing economic conditions. Journal of Finance, 51(2), 425–461. Greene, W. (2008). Econometric analysis. Washington, DC: Pearson. Gruber, M. J. (1996). Presidential address: Another puzzle: The growth in actively managed mutual funds. Journal of Finance, 51, 783–810. Hanley, K. W., Lee, C., & Seguin, P. (1996). The marketing of closed-end fund IPOs. Journal of Financial Intermediation, 5, 127–159. Hansen, L. P. (1982). Large sample properties of generalized method of moments estimators. Econometrica, 50, 1029–1054. Hardouvellis, G., Wizman, T., & La Porta, R. (1993). What moves the discount on country equity funds? Working paper 4571, NBER. Harvey, C. (1991). The world price of covariance risk. Journal of Finance, 46, 111–159. Hietala, P. T. (1989). Asset pricing in partially segmented markets: Evidence from the finnish market. Journal of Finance, 44(3), 697–718. Lee, C. M. C., Shleifer, A., & Thaler, R. (1991). Investor sentiment and the closed-end fund puzzle. Journal of Finance, 46(1), 75–109. Mackinlay, C., & Richardson, M. (1991). Using generalized method of moments to test meanvariance efficiency. Journal of Finance, 46(2), 511–527. Peavy, J. W., III. (1990). 
Returns on initial public offerings of closed-end funds. Review of Financial Studies, 3, 695–708. Pontiff, J. (1997). Excess volatility and closed-end funds. American Economic Review, 87(1), 155–169.
Roll, R. (1977). A critique of the asset pricing theory’s tests. Journal of Financial Economics, 4, 129–176. Solnik, B. (1996). International investments (3rd ed.). New York: Addison-Wesley. Stulz, R. (1995). International portfolio choice and asset pricing: An integrative survey. In R. Jarrow, V. Maksimovic, & W. T. Ziemba (Eds.), Handbooks in operation research and management science (Vol. 9). Amsterdam: North-Holland. Stulz, R., & Wasserfallen, W. (1995). Foreign equity investment restrictions, capital flight, and shareholder wealth maximization: Theory and evidence. Review of Financial Studies, 8, 1019–1057. Weiss, K. (1989). The post-offering price performance of closed-end funds. Financial Management, 18, 57–67.
26 A Comparison of Portfolios Using Different Risk Measurements
Jing Rung Yu, Yu Chuan Hsu, and Si Rou Lim
Contents
26.1 Introduction .......................................................... 708
26.2 Portfolio Selection Models ........................................... 709
  26.2.1 The Mean-Variance Model (MV) ................................... 710
  26.2.2 The Mean Absolute Deviation Model (MAD) ........................ 710
  26.2.3 The Downside Risk Model ........................................ 711
26.3 The Proposed Model ................................................... 711
  26.3.1 The MV Model ................................................... 712
  26.3.2 The MAD Model .................................................. 712
  26.3.3 The DSR Model .................................................. 713
  26.3.4 The MV_S Model ................................................. 713
  26.3.5 The MAD_S Model ................................................ 714
  26.3.6 The DSR_S Model ................................................ 715
26.4 Experimental Results ................................................. 716
26.5 Conclusion ........................................................... 722
Appendix 1 ................................................................ 722
  The Linearization of the MAD Model ..................................... 722
Appendix 2 ................................................................ 724
  The Linearization of the DSR Model ..................................... 724
Appendix 3 ................................................................ 726
  Multiple Objective Programming ......................................... 726
References ................................................................ 727
Abstract
In order to find out which risk measurement is the best indicator of efficiency in a portfolio, this study considers three different risk measurements: the mean-variance model, the mean absolute deviation model, and the downside risk model. Meanwhile, short selling is also taken into account since it is an important strategy that can bring a portfolio much closer to the efficient frontier by improving a portfolio's risk-return trade-off. Therefore, six portfolio rebalancing models, including the MV model, MAD model, and the downside risk model, with/without short selling, are compared to determine which is the most efficient. All models simultaneously consider the criteria of return and risk measurement. Meanwhile, when short selling is allowed, models also consider minimizing the proportion of short selling. Therefore, multiple objective programming is employed to transform multiple objectives into a single objective in order to obtain a compromising solution. An example is used to perform simulation, and the results indicate that the MAD model, incorporated with a short selling model, has the highest market value and lowest risk.

J.R. Yu (*) • Y.C. Hsu • S.R. Lim, National Chi Nan University, Nantou, Taiwan e-mail: [email protected]; [email protected]; [email protected]

Keywords
Portfolio selection • Risk measurement • Short selling • MV model • MAD model • Downside risk model • Multiple objective programming • Rebalancing model • Value-at-risk • Conditional value-at-risk
26.1 Introduction
Determining how to maximize the profit and minimize the risk of a portfolio is an important issue in portfolio selection. The mean-variance (MV) model of portfolio selection is based on the assumptions that investors are risk averse and the return of assets is normally distributed (Markowitz 1952). This model is regarded as the basis of modern portfolio theory (Deng et al. 2005). However, the MV model is limited in that it only leads to optimal decisions if the investor’s utility functions are quadratic or if investment returns are jointly elliptically distributed (Grootveld and Hallerbach 1999; Papahristoulou and Dotzauer 2004). Thus, numerous researches have focused on risk, return, and diversification in the development of investment strategies. Also, a large number of researches have been proposed to improve the performance of investment portfolios (Deng et al. 2000; Yu and Lee 2011). Konno and Yamazaki (1991) proposed a linear mean absolute deviation (MAD) portfolio optimization model. The MAD model replaces the variance of objective function in the MV model with the mean absolute deviation. The major advantage of the MAD model is that the estimation of the covariance matrix of asset returns is not needed. Also, it is much easier to solve large-scale problems with linear programming than with quadratic approaches (Simaan 1997). Portfolio selection under shortfall constraints originated from Roy’s (1952) safety-first theory. Economists have found that investors care about downside losses more than they care about upside gains. Therefore, Markowitz (1959) suggested using semi-variance as a measure of risk, instead of variance, because semi-variance measures downside losses rather than upside gains. The use of downside risk (DSR) measures is proposed due to the problems encountered in using a conventional mean-variance analysis approach in the presence of non-normality in the emerging market data. Unlike the mean-variance framework,
the downside risk measure does not assume that the return distributions of assets are normal. In addition, the increasing emphasis of investors on limiting losses might make the downside risk measure more intuitively appealing (Stevenson 2001; Ang et al. 2006). The downside risk measure can help investors make proper decisions when returns are non-normally distributed, especially for emerging market data or for an international portfolio selection (Vercher et al. 2007). Another type of shortfall constraint is value-at-risk (VaR), which is a percentilebased metric system for risk measurement purposes (Jorion 1996). It is defined as the maximum loss that a portfolio can suffer at a given level of confidence and at a given horizon (Fusai and Luciano 2001). However, VaR is too weak to handle the situation when losses are not “normally” distributed, as loss distribution tends to exhibit “fat tail” or empirical discreteness. Conditional value-at-risk (CVaR) is an alternative measure that quantifies losses that might be encountered in the tail of loss distribution (Rockafellar and Uryasev 2002; Topaloglou et al. 2002). A confidence level is required when employing the measurements of VaR and CVaR. The allocation of a portfolio varies with the varying confidence level. Therefore, neither measure is included for comparison in this chapter. Generally, short selling has good potential to improve a portfolio’s risk-return trade-off (White 1990; Kwan 1997) and is considered by most investors to obtain interest arbitrage; however, it comes with high risk (Angel et al. 2003). Since high risks should be avoided, the role of short selling is minimized in this chapter. Instead, three kinds of portfolio models with and without short selling are compared. Section 26.2 introduces the mean-variance, mean absolute, and downside risk models. In Sect. 26.3, the rebalancing models with/without short selling are proposed. In Sect. 26.4, the performances of three different risk measurements with/ without short selling are compared by using the historical data of 45 stocks listed in the TSE50 index. Finally, Sect. 26.5 presents the conclusions along with suggestions for future research.
26.2 Portfolio Selection Models
In this section, the mean-variance, the mean absolute deviation, and the downside risk models are introduced separately. First, the notations are defined as follows:
n is the number of available securities.
w_i is the investment portion in security i, for i = 1, ..., n.
r_i is the return on security i.
μ is the expected portfolio return.
σ_i² is the variance of the return on security i.
σ_ij is the covariance between the returns of securities i and j.
r_it is the return on security i in period t, for t = 1, ..., T, which is assumed to be available through historical data.
R_i is equal to (1/T) Σ_{t=1}^{T} r_it.
dt is the deviation between the return and the average return.
26.2.1 The Mean-Variance Model (MV)

The MV model uses the variance of the return as the measure of risk and formulates the portfolio optimization problem as the following quadratic programming problem (Markowitz 1952):

Min σ_p = Σ_{i=1}^{n} w_i² σ_i² + Σ_{i=1}^{n} Σ_{j=1, j≠i}^{n} σ_ij w_i w_j

s.t. Σ_{i=1}^{n} r_i w_i ≥ μ,   (26.1)

Σ_{i=1}^{n} w_i = 1,   (26.2)

w_i ≥ 0,   (26.3)

for i = 1, ..., n. Constraint (26.1) expresses the requirement μ on the portfolio return, and constraint (26.2) is the budget constraint. The model is known to be valid if an investor is risk averse in the sense that he prefers less standard deviation of the portfolio rather than more. Since w_i ≥ 0, a short sale is not allowed here.
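As an illustration only, the sketch below solves the quadratic program (26.1)-(26.3) for a tiny hypothetical return history; the target return μ and the use of average returns in constraint (26.1) are assumptions of the example, not values from the chapter.

```python
# Hedged sketch of the Markowitz problem (26.1)-(26.3): minimize portfolio variance
# subject to a required return mu and full investment, no short sales. Toy data only.
import numpy as np
from scipy.optimize import minimize

R = np.array([[0.01, 0.012, -0.004],     # r_it: T x n matrix of security returns
              [0.006, -0.002, 0.009],
              [0.013, 0.004, 0.002],
              [-0.005, 0.008, 0.011]])
mu = 0.005                                # assumed required portfolio return
Rbar = R.mean(axis=0)                     # average returns R_i
S = np.cov(R, rowvar=False)               # covariance matrix of returns
n = R.shape[1]

res = minimize(
    lambda w: w @ S @ w,                  # portfolio variance
    x0=np.full(n, 1.0 / n),
    bounds=[(0.0, 1.0)] * n,              # w_i >= 0 (no short selling)       -> (26.3)
    constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0},           #  -> (26.2)
                 {"type": "ineq", "fun": lambda w: Rbar @ w - mu}],        #  -> (26.1)
    method="SLSQP",
)
print(res.x)   # optimal weights
```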
26.2.2 The Mean Absolute Deviation Model (MAD)

The mean-variance model is weak in constructing a large-scale portfolio due to the computational difficulty associated with solving a large-scale quadratic programming problem with a dense covariance matrix. The MAD model (Konno and Yamazaki 1991) replaces the variance in the objective function of the MV model with the mean absolute deviation as follows:

Min (1/T) Σ_{t=1}^{T} | Σ_{i=1}^{n} (r_it − R_i) w_i |

s.t. Constraints (26.1)–(26.3).

Because of the absolute deviation, the MAD model can be linearized as follows (Chang 2005):

Min (1/T) Σ_{t=1}^{T} d_t

s.t. Constraints (26.1)–(26.3),

d_t + Σ_{i=1}^{n} (r_it − R_i) w_i ≥ 0, t = 1, ..., T,   (26.4)

d_t − Σ_{i=1}^{n} (r_it − R_i) w_i ≥ 0, t = 1, ..., T.   (26.5)

If the return is lower than the average return, constraint (26.4) is a binding constraint, which means d_t = −Σ_{i=1}^{n} (r_it − R_i) w_i for t = 1, ..., T. Otherwise, constraint (26.5) is a binding constraint, which means d_t = Σ_{i=1}^{n} (r_it − R_i) w_i for t = 1, ..., T. For more details on the reformulation, please refer to Appendix 1. Apparently, the MAD model does not require the covariance matrix of asset returns, and consequently its estimation is not needed. Large-scale problems can be solved faster and more efficiently because the MAD model has a linear rather than quadratic nature.
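The linearized MAD model is an ordinary linear program. The following sketch, again with a toy return matrix, stacks the decision vector as [w_1, ..., w_n, d_1, ..., d_T] and encodes constraints (26.1)-(26.5); it is illustrative, not the chapter's implementation.

```python
# Hedged sketch of the linearized MAD model (26.1)-(26.5) as a linear program.
import numpy as np
from scipy.optimize import linprog

R = np.array([[0.01, 0.012, -0.004],
              [0.006, -0.002, 0.009],
              [0.013, 0.004, 0.002],
              [-0.005, 0.008, 0.011]])
T, n = R.shape
Rbar = R.mean(axis=0)
mu = 0.005                         # assumed required return
D = R - Rbar                       # (r_it - R_i)

c = np.concatenate([np.zeros(n), np.full(T, 1.0 / T)])    # minimize (1/T) sum d_t
A_ub = np.vstack([
    np.hstack([-D, -np.eye(T)]),                  # -(sum_i (r_it-R_i)w_i) - d_t <= 0  -> (26.4)
    np.hstack([D, -np.eye(T)]),                   #  (sum_i (r_it-R_i)w_i) - d_t <= 0  -> (26.5)
    np.hstack([-Rbar, np.zeros(T)])[None, :],     # -sum R_i w_i <= -mu                -> (26.1)
])
b_ub = np.concatenate([np.zeros(2 * T), [-mu]])
A_eq = np.hstack([np.ones(n), np.zeros(T)])[None, :]       # sum w_i = 1               -> (26.2)
b_eq = [1.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * (n + T))                 # w_i >= 0                 -> (26.3)
print(res.x[:n])   # optimal weights under the MAD criterion
```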
26.2.3 The Downside Risk Model

Vercher et al. (2007) consider the equivalent formulation of the portfolio selection problem (Speranza 1993) and reformulate the following linear optimization model by considering downside risk measurement:

Min (1/T) Σ_{t=1}^{T} d_t⁻

s.t. Constraints (26.1)–(26.3),

d_t⁻ + Σ_{i=1}^{n} (r_it − R_i) w_i ≥ 0, t = 1, ..., T,   (26.6)

where d_t = d_t⁺ + d_t⁻, and d_t⁺, d_t⁻ ≥ 0. Downside risk measurement focuses on returns falling below some critical level (Grootveld and Hallerbach 1999). Differing from the MAD model, the downside risk model ignores constraint (26.5). If the return is lower than the average return, constraint (26.6) is a binding constraint. Please refer to Appendix 2 for more details.
26.3 The Proposed Model
Multiple period portfolio selection models with rebalancing mechanisms have become attractive in the financial field in order to get desired returns in situations that are subject to future changes (Yu et al. 2010). To reflect a changing situation in the models, the rebalancing mechanism is adopted for multiple periods (Yu and Lee 2011).
Six rebalancing models are introduced. These are MV, MAD, DSR, MV_S, MAD_S, and DSR_S. The first three models lack short selling, and the other three have short selling. The model notations are denoted as follows: w_{i,0}⁺ is the weight of security i held in the previous period, w_{i,0}⁻ is the weight of security i sold short in the previous period, w_i⁺ is the total weight of security i bought after rebalancing, and w_i⁻ is the total weight of security i sold short after rebalancing. With each rebalancing, ℓ_i⁺ is the weight of security i bought in this period, ℓ_i⁻ is the weight of security i sold in this period, s_i⁺ is the weight of security i sold short in this period, s_i⁻ is the weight of security i repurchased in this period, u_i is the binary variable that indicates whether security i is selected for buying, v_i is the binary variable that indicates whether security i is selected for selling short, and k is the initial margin requirement for short selling.
26.3.1 The MV Model

The conventional MV model can be regarded as a bi-objective model without short selling, whose objective functions are the maximization of portfolio return and minimization of portfolio risk, as measured by the portfolio variance:

Max Σ_{i=1}^{n} R_i w_i

Min σ_p

s.t. Constraint (26.2),

w_i = w_{i,0}⁺ + ℓ_i⁺ − ℓ_i⁻,   (26.7)

0.05 u_i ≤ w_i ≤ 0.2 u_i,   (26.8)

for i = 1, 2, ..., n. Constraint (26.7) is the rebalancing constraint; it shows the current weight for the ith security according to the previous period. Constraint (26.8) is the required range of weights for each security in buying. For simplicity, the upper and lower bounds of each weight are set to 0.2 and 0.05, respectively.
26.3.2 The MAD Model

The objectives of the MAD model include the maximization of return and minimization of the mean absolute deviation, which is transformed into a linear deviation as follows:

Max Σ_{i=1}^{n} R_i w_i

Min (1/T) Σ_{t=1}^{T} d_t

s.t. Constraints (26.2)–(26.5), (26.7), (26.8).
26.3.3 The DSR Model

The DSR model considers the objectives of maximizing the return and minimizing the downside risk, which is transformed into a linear risk as follows:

Max Σ_{i=1}^{n} R_i w_i

Min (1/T) Σ_{t=1}^{T} d_t⁻

s.t. Constraints (26.2), (26.3), (26.6), (26.7), (26.8).

When short selling is allowed, the above three models are reformulated as follows (Yu and Lee 2011):
26.3.4 The MV_S Model

Max Σ_{i=1}^{n} R_i (w_i⁺ − w_i⁻)

Min σ_p

Min Σ_{i=1}^{n} w_i⁻

s.t. Σ_{i=1}^{n} (w_i⁺ + k w_i⁻) = 1,   (26.9)

w_i⁺ = w_{i,0}⁺ + ℓ_i⁺ − ℓ_i⁻,   (26.10)

w_i⁻ = w_{i,0}⁻ + s_i⁺ − s_i⁻,   (26.11)

0.05 u_i ≤ w_i⁺ ≤ 0.2 u_i,   (26.12)

0.05 v_i ≤ w_i⁻ ≤ 0.2 v_i,   (26.13)

u_i + v_i = y_i,   (26.14)

for i = 1, 2, ..., n. Unlike the MV model, the MV_S model has an extra objective, namely, minimizing the short selling proportion. The total budget includes the cost of buying and of short selling, as shown in constraint (26.9); k is the initial margin requirement for short selling. Constraint (26.10) indicates the current weight for the ith security based on the previous period. Constraint (26.11) indicates the current short selling proportion of the ith security adjusted by the previous period. Constraints (26.12) and (26.13) limit the upper and lower bounds, respectively, of the long and short selling proportions for the ith security.
26.3.5 The MAD_S Model

Based on the MAD model (Konno and Yamazaki 1991), the MAD_S model replaces the objective function of minimizing the variance in the MV_S model (Yu and Lee 2011) with the objective of minimizing the absolute deviation of average return, as follows:

Min (1/T) Σ_{t=1}^{T} | Σ_{i=1}^{n} [(w_i⁺ − w_i⁻) r_it − (w_i⁺ − w_i⁻) R_i] |

The objective of the mean absolute deviation can be transformed into a linear problem:

Min (1/T) Σ_{t=1}^{T} d_t

s.t. d_t + Σ_{i=1}^{n} [(w_i⁺ − w_i⁻) r_it − (w_i⁺ − w_i⁻) R_i] ≥ 0,   (26.15)

d_t − Σ_{i=1}^{n} [(w_i⁺ − w_i⁻) r_it − (w_i⁺ − w_i⁻) R_i] ≥ 0,   (26.16)

for t = 1, ..., T. The following is the MAD_S model:

Max Σ_{i=1}^{n} R_i (w_i⁺ − w_i⁻)

Min (1/T) Σ_{t=1}^{T} d_t

Min Σ_{i=1}^{n} w_i⁻

s.t. Constraints (26.9)–(26.16).
26.3.6 The DSR_S Model

The DSR_S model focuses on the deviation when the return falls below the average return, as follows:

Max Σ_{i=1}^{n} R_i (w_i⁺ − w_i⁻)

Min (1/T) Σ_{t=1}^{T} d_t⁻

Min Σ_{i=1}^{n} w_i⁻

s.t. Constraints (26.9)–(26.14),

d_t⁻ + Σ_{i=1}^{n} [(w_i⁺ − w_i⁻) r_it − (w_i⁺ − w_i⁻) R_i] ≥ 0,   (26.17)
for t = 1, ..., T.

Apparently, all six models have multiple objectives. Therefore, multiple objective programming (Zimmermann 1978; Lee and Li 1993) is adopted to transform the multiple objectives into a single objective. For more details, please refer to Appendix 3. Taking the MAD_S model as an example, we can reformulate the multiple objectives as follows:

Max λ

s.t. λ ≤ (r − r_l) / (r_g − r_l),   (26.18)

λ ≤ (s − s_l) / (s_g − s_l),   (26.19)

λ ≤ (w − w_l) / (w_g − w_l),   (26.20)

r = Σ_{i=1}^{n} R_i (w_i⁺ − w_i⁻),   (26.21)

s = (1/T) Σ_{t=1}^{T} d_t,   (26.22)

w = Σ_{i=1}^{n} w_i⁻,   (26.23)

Constraints (26.9)–(26.16).

For multiple objective programming (Lee and Li 1993), r is the return of the portfolio, r_l is the anti-ideal return of the portfolio, r_g is the ideal return of the portfolio that maximizes the objective, s is the inherent risk of the portfolio, s_l is the anti-ideal risk of the portfolio, s_g is the ideal risk of the portfolio, w is the short selling proportion of the portfolio, w_l is the anti-ideal short selling proportion of the portfolio, and w_g is the ideal short selling proportion of the portfolio. Constraints (26.18)–(26.20) give the achievement levels of the three objectives, maximizing the return, minimizing the absolute deviation, and minimizing the short selling proportion of the corresponding portfolio, each of which must be at least the overall achievement level (λ). The overall achievement level (λ) is then maximized. In the same way, the other five multiple objective models can be reformulated in turn as single-objective models. The details of the transformation are introduced in Appendix 3.
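A minimal sketch of this fuzzy multiple-objective step is given below for the simpler long-only MAD case with two objectives (maximize return, minimize MAD). The ideal and anti-ideal values are taken here from a payoff table obtained by solving each objective separately; this is one common choice, whereas the chapter leaves their selection to the decision maker.

```python
# Hedged sketch: fuzzy "max lambda" compromise for a two-objective, long-only MAD portfolio.
# Toy data; ideal/anti-ideal levels from a payoff table (an assumption of this example).
import numpy as np
from scipy.optimize import linprog

R = np.array([[0.01, 0.012, -0.004],
              [0.006, -0.002, 0.009],
              [0.013, 0.004, 0.002],
              [-0.005, 0.008, 0.011]])
T, n = R.shape
Rbar, D = R.mean(axis=0), R - R.mean(axis=0)

# Shared pieces: decision vector [w_1..w_n, d_1..d_T], MAD constraints, full investment.
A_mad = np.vstack([np.hstack([-D, -np.eye(T)]), np.hstack([D, -np.eye(T)])])
b_mad = np.zeros(2 * T)
A_eq, b_eq = np.hstack([np.ones(n), np.zeros(T)])[None, :], [1.0]
bounds = [(0, None)] * (n + T)

c_ret = -np.concatenate([Rbar, np.zeros(T)])               # maximize return
c_mad = np.concatenate([np.zeros(n), np.full(T, 1 / T)])   # minimize mean absolute deviation
sol_r = linprog(c_ret, A_ub=A_mad, b_ub=b_mad, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
sol_s = linprog(c_mad, A_ub=A_mad, b_ub=b_mad, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
r_g, r_l = -sol_r.fun, float(Rbar @ sol_s.x[:n])            # ideal / anti-ideal return
s_g, s_l = sol_s.fun, float(np.abs(D @ sol_r.x[:n]).mean()) # ideal / anti-ideal risk

# Compromise LP: variables [w, d, lambda]; maximize lambda subject to
# lambda <= (r(w)-r_l)/(r_g-r_l) and lambda <= (s_l-s(w))/(s_l-s_g).
A_ub = np.vstack([
    np.hstack([A_mad, np.zeros((2 * T, 1))]),
    np.hstack([-Rbar, np.zeros(T), [r_g - r_l]]),               # (r_g-r_l)*lam - r(w) <= -r_l
    np.hstack([np.zeros(n), np.full(T, 1 / T), [s_l - s_g]]),   # (s_l-s_g)*lam + s(w) <= s_l
])
b_ub = np.concatenate([b_mad, [-r_l, s_l]])
res = linprog(np.concatenate([np.zeros(n + T), [-1.0]]),        # maximize lambda
              A_ub=A_ub, b_ub=b_ub,
              A_eq=np.hstack([A_eq, [[0.0]]]), b_eq=b_eq,
              bounds=bounds + [(0, 1)])
print(res.x[:n], res.x[-1])   # compromise weights and overall achievement level lambda
```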
26.4 Experimental Results
Forty-five stocks listed on the Taiwan Stock Exchange were adopted and used to compare the six models discussed above in order to determine which one is the best. The benchmark is the Taiwan 50 Index (TSE50). The exchange codes of the 45 stocks are listed in Table 26.1. The duration of the analyzed data is from November 1, 2006, to November 24, 2009. The historical data of the first 60 transaction days are used to build the initial models. For the monthly updates, 20 transaction days are set as a sliding window. This study assumes a budget of $1 million NTD is invested in the Taiwan Stock Market. For investments based on the weights generated by the initial models, the first transaction day is January 25, 2007, and there are 34 rebalancing times in total. The models are executed on an Intel Pentium Dual CPU E2200 2.20GHz and 2G RAM computer, with Lingo11.0, an optimizing software. From Table 26.2, it is apparent that the MAD and DSR models are more efficient than the MV model because they both use linear transformation for problem solving. This is much faster and more efficient when handling a large-scale
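The rolling scheme described above (a 60-day estimation window, rebalanced every 20 trading days) can be organized as in the following schematic; solve_portfolio is a placeholder for any of the six models, and the returns are synthetic rather than the TSE data used in the chapter.

```python
# Schematic of the rolling estimation and rebalancing loop; placeholders throughout.
import numpy as np

def solve_portfolio(window_returns: np.ndarray) -> np.ndarray:
    """Placeholder: return portfolio weights from a T x n window of returns."""
    n = window_returns.shape[1]
    return np.full(n, 1.0 / n)

def backtest(returns: np.ndarray, window: int = 60, step: int = 20) -> list:
    values, value = [], 1_000_000.0                 # NTD 1 million initial budget
    for start in range(0, returns.shape[0] - window - step + 1, step):
        est = returns[start:start + window]                     # 60-day estimation window
        hold = returns[start + window:start + window + step]    # next 20-day holding period
        w = solve_portfolio(est)
        value *= float(np.prod(1.0 + hold @ w))                 # compound the portfolio return
        values.append(value)
    return values

# Example with synthetic daily returns for 45 assets over roughly 3 years of trading days.
rng = np.random.default_rng(2)
print(backtest(rng.normal(0.0005, 0.02, size=(750, 45)))[-1])
```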
Table 26.1 The exchange codes of 45 stocks

No  Code   No  Code   No  Code
1   1101   16  2330   31  2885
2   1102   17  2347   32  2886
3   1216   18  2353   33  2888
4   1301   19  2354   34  2890
5   1303   20  2357   35  2891
6   1326   21  2382   36  2892
7   1402   22  2409   37  2912
8   1722   23  2454   38  3009
9   2002   24  2498   39  3231
10  2105   25  2603   40  3474
11  2308   26  2801   41  3481
12  2311   27  2880   42  5854
13  2317   28  2881   43  6505
14  2324   29  2882   44  8046
15  2325   30  2883   45  9904
Table 26.2 The running time of the six portfolio models (hh:mm:ss)

Models without short selling:  MV 00:09:06    MAD 00:01:01    DSR 00:01:01
Models with short selling:     MV_S 00:10:01  MAD_S 00:01:15  DSR_S 00:01:14
[Fig. 26.1 The market value of three risk measurements without short selling, the TSE50, and TAIEX]
problem. As one can see in Table 26.2, the MV model takes 9 minutes and 6 seconds to compute the result. However, the MAD and DSR models take only a minute or slightly more to solve the same data. Moreover, it is not necessary to calculate the covariance matrix to set up the MAD and DSR models; this makes it very easy to update the models when new data are added (Konno and Yamazaki 1991). Figures 26.1, 26.2, and 26.3 compare the MV, MAD, and DSR models, that is, the models without short selling, in terms of market value, expected return, and risk, respectively. Figures 26.4, 26.5, and 26.6 show the corresponding comparisons of the MV_S, MAD_S, and DSR_S models, which allow short selling. The TSE50 is used as the benchmark for these models. As shown in Fig. 26.1, the market value of the MAD
[Fig. 26.2 The expected return of three risk measurements without short selling]
[Fig. 26.3 The risk of three risk measurements without short selling]
[Fig. 26.4 The market value of three risk measurements with short selling, the TSE50, and TAIEX]
model is always greater than that of the other models and the benchmark. Figures 26.2 and 26.3 display the expected return and risk of the portfolios constructed with the three risk measurements without short selling. Figure 26.2 shows that the expected returns of these three portfolios are almost the same. However, Fig. 26.3 shows that the MAD model has the lowest risk under an expected return similar to that of the others. In particular, on January 24, 2009, the risk in the MAD model was much lower than in the other two models. Downside risk measurement is applied when investors only take the negative return into consideration and focus on the loss of investment. In other words, their concern is with the real loss, not with the positive deviation from the average return in portfolios. Therefore, in Fig. 26.3, the risk generated by the DSR model is always higher than that of the other measurements. In Fig. 26.4, the market values of the MV_S, MAD_S, and DSR_S models are compared. Since these models take short selling into consideration, the portfolio selection is more flexible, and the risk is much lower, as Fig. 26.6 shows. Under the same expected return, the MAD_S model, which uses the mean absolute deviation risk measure, has the lowest risk among the three models. Figure 26.4 shows that it also has the highest market value. Even though the market value of each model increases after short selling is allowed, the market value of the MAD_S model is always higher than that of the other models and the benchmark. Evidently, the MAD model is suggested as the best risk measurement tool with or without short selling.
[Fig. 26.5 The expected return of three risk measurements with short selling]
[Fig. 26.6 The risk of three risk measurements with short selling]
26.5 Conclusion
This chapter compares six different rebalancing models, with/without short selling, in order to determine which is more flexible for portfolio selection. One of the advantages of the MAD and DSR models is that they can be linearized; thus, they are faster and more efficient than the MV model, especially with large-scale problems. The experimental results indicate that the MAD and MAD_S models are efficient in handling data and show higher market value than the other models; moreover, they have lower risks in situations with/without short selling. However, there remain important risk measurements, such as VaR and CVaR, for future research to investigate. Thus, future studies may focus on developing rebalancing models with the measures of VaR and CVaR. Since the rebalancing period is fixed here, a dynamic rebalancing mechanism is required for a fast-changing environment. In addition, a portfolio selection model able to predict future returns is required.
Appendix 1

The Linearization of the MAD Model

Konno and Yamazaki (1991) assume that r_it is the realization of the random variable r_i during period t (t = 1, ..., T), and R_i = (1/T) Σ_{t=1}^{T} r_it.

The MAD model is as follows:

Min (1/T) Σ_{t=1}^{T} | Σ_{i=1}^{n} (r_it − R_i) w_i |

s.t. Σ_{i=1}^{n} R_i w_i ≥ μ,

Σ_{i=1}^{n} w_i = 1,

w_i ≥ 0,

for i = 1, ..., n.

Let | Σ_{i=1}^{n} (r_it − R_i) w_i | = d_t = d_t⁺ + d_t⁻, with d_t⁺ and d_t⁻ ≥ 0. Then

Σ_{i=1}^{n} (r_it − R_i) w_i = d_t⁺ − d_t⁻,

d_t − d_t⁻ = Σ_{i=1}^{n} (r_it − R_i) w_i + d_t⁻,

2 d_t⁻ = d_t − Σ_{i=1}^{n} (r_it − R_i) w_i,

d_t⁻ = (1/2) [ d_t − Σ_{i=1}^{n} (r_it − R_i) w_i ] ≥ 0.

Similarly, performing the same process,

d_t⁺ = Σ_{i=1}^{n} (r_it − R_i) w_i + d_t⁻,

d_t − d_t⁺ = −Σ_{i=1}^{n} (r_it − R_i) w_i + d_t⁺,

2 d_t⁺ = Σ_{i=1}^{n} (r_it − R_i) w_i + d_t,

d_t⁺ = (1/2) [ Σ_{i=1}^{n} (r_it − R_i) w_i + d_t ] ≥ 0,

d_t⁺, d_t⁻ ≥ 0.

Two constraints are added because d_t⁺, d_t⁻ ≥ 0. Then the model can be transformed into a linear model as follows:

Min (1/T) Σ_{t=1}^{T} | Σ_{i=1}^{n} (r_it − R_i) w_i | = (1/T) Σ_{t=1}^{T} (d_t⁺ + d_t⁻),

d_t⁺ = (1/2) [ Σ_{i=1}^{n} (r_it − R_i) w_i + d_t ] ≥ 0  ⟹  Σ_{i=1}^{n} (r_it − R_i) w_i + d_t ≥ 0,

d_t⁻ = (1/2) [ d_t − Σ_{i=1}^{n} (r_it − R_i) w_i ] ≥ 0  ⟹  −Σ_{i=1}^{n} (r_it − R_i) w_i + d_t ≥ 0,

where d_t = d_t⁺ + d_t⁻. Therefore, the MAD model can be linearized as the following linear model:

Min (1/T) Σ_{t=1}^{T} d_t

s.t. d_t + Σ_{i=1}^{n} (r_it − R_i) w_i ≥ 0, t = 1, ..., T,

d_t − Σ_{i=1}^{n} (r_it − R_i) w_i ≥ 0, t = 1, ..., T,

Σ_{i=1}^{n} R_i w_i ≥ μ,

Σ_{i=1}^{n} w_i = 1,

w_i ≥ 0, for i = 1, ..., n.
Appendix 2

The Linearization of the DSR Model

The use of variance as a measure of risk makes no distinction between gains and losses. The following mean semi-absolute deviation risk measurement proposed by Speranza (1993) is used to find the portfolios with minimum semi-variance:

Min (1/T) Σ_{t=1}^{T} [ | Σ_{i=1}^{n} (r_it − R_i) w_i | − Σ_{i=1}^{n} (r_it − R_i) w_i ] / 2

s.t. Σ_{i=1}^{n} R_i w_i ≥ μ,

Σ_{i=1}^{n} w_i = 1,

w_i ≥ 0, for i = 1, ..., n.

Because of the absolute deviation, | Σ_{i=1}^{n} (r_it − R_i) w_i | of the DSR model can be linearized in the same manner as the MAD model:

Σ_{i=1}^{n} (r_it − R_i) w_i = d_t⁺ − d_t⁻,

d_t⁻ = [ | Σ_{i=1}^{n} (r_it − R_i) w_i | − Σ_{i=1}^{n} (r_it − R_i) w_i ] / 2 = [ (d_t⁺ + d_t⁻) − (d_t⁺ − d_t⁻) ] / 2

= (1/2) [ −Σ_{i=1}^{n} (r_it − R_i) w_i + d_t ] ≥ 0,

d_t⁻ ≥ 0.

Then the DSR model is reformulated as follows:

Min (1/T) Σ_{t=1}^{T} d_t⁻

s.t. Σ_{i=1}^{n} R_i w_i ≥ μ,

Σ_{i=1}^{n} w_i = 1,

w_i ≥ 0,

d_t⁻ + Σ_{i=1}^{n} (r_it − R_i) w_i ≥ 0, t = 1, ..., T,

d_t⁻ ≥ 0, t = 1, ..., T,

for i = 1, ..., n.
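As a small numerical check of this appendix (not from the chapter), the semi-absolute deviation (|x| − x)/2 is simply the negative part of x, which is exactly the shortfall below the mean that the DSR model penalizes:

```python
# Tiny numeric check: (|x| - x)/2 equals max(-x, 0), the downside shortfall captured by d_t^-.
import numpy as np

x = np.array([0.8, -0.3, 0.0, -1.2, 0.5])   # stand-in for sum_i (r_it - R_i) w_i
print(np.allclose((np.abs(x) - x) / 2, np.maximum(-x, 0)))   # True
```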
Appendix 3

Multiple Objective Programming

The aforementioned multiple objective models in Sect. 26.3 are solved by fuzzy multiple objective programming (Zimmermann 1978; Lee and Li 1993) in order to transform the multiple objective model into a single-objective model. Fuzzy multiple objective programming, based on the concept of fuzzy sets, uses a min operator to calculate the membership function value of the aspiration level, λ, for all of the objectives. The following is a multiple objective programming problem (Lee and Li 1993):

Max Z = [Z_1, Z_2, ..., Z_l]ᵀ = [c_1 x, c_2 x, ..., c_l x]ᵀ

Min W = [W_1, W_2, ..., W_r]ᵀ = [q_1 x, q_2 x, ..., q_r x]ᵀ

s.t. Ax * b, x ≥ 0,

where c_k, k = 1, 2, ..., l, q_s, s = 1, 2, ..., r, and x are n-dimensional vectors; b is an m-dimensional vector; A is an m × n matrix; and * denotes the operators ≤, =, or ≥. The program aims to maximize the achievement level for each objective while also considering a trade-off among the conflicting objectives or criteria. The ideal and anti-ideal solutions must be obtained in advance and are given by the decision maker, respectively, as follows:

I⁺ = (Z_1*, Z_2*, ..., Z_l*; W_1*, W_2*, ..., W_r*),

I⁻ = (Z_1⁻, Z_2⁻, ..., Z_l⁻; W_1⁻, W_2⁻, ..., W_r⁻).

The membership (achievement) functions for the objectives are defined as follows:

μ_k(Z_k) = (Z_k(x) − Z_k⁻) / (Z_k* − Z_k⁻), k = 1, 2, ..., l,

μ_s(W_s) = (W_s⁻ − W_s(x)) / (W_s⁻ − W_s*), s = 1, 2, ..., r.

Then the "min" operator is used; the multiple objective programming is formulated as follows:

Max λ

s.t. λ ≤ (Z_k(x) − Z_k⁻) / (Z_k* − Z_k⁻), k = 1, 2, ..., l,

λ ≤ (W_s⁻ − W_s(x)) / (W_s⁻ − W_s*), s = 1, 2, ..., r,

x ∈ X,

where λ is defined as λ = min_x μ(x) = min_{k,s} (μ_k(Z_k), μ_s(W_s)).
References Ang, A., Chen, J., & Xing, Y. (2006). Downside risk. The Review of Financial Studies, 19, 1191–1239. Angel, J. J., Christophe, S. E., & Ferri, M. G. (2003). A close look at short selling on Nasdaq. Financial Analysts Journal, 59, 66–74. Chang, C. T. (2005). A modified goal programming approach for the mean-absolute deviation portfolio optimization model. Applied Mathematics and Computation, 171, 567–572. Deng, X. T., Wang, S. Y., & Xia, Y. S. (2000). Criteria models and strategies in portfolio selection. Advanced Modeling and Optimization, 2, 79–104. Deng, X. T., Li, Z. F., & Wang, S. Y. (2005). A minimax portfolio selection strategy with equilibrium. European Journal of Operational Research, 166, 278–292. Fusai, G., & Luciano, E. (2001). Dynamic value at risk under optimal and suboptimal portfolio policies. European Journal of Operational Research, 135, 249–269. Grootveld, H., & Hallerbach, W. (1999). Variance vs downside risk: Is there really that much difference? European Journal of Operational Research, 114, 304–319. Jorion, P. H. (1996). Value at risk: A new benchmark for measuring derivative risk. New York: Irwin Professional Publishers. Konno, H., & Yamazaki, H. (1991). Mean-absolute deviation portfolio optimization model and its applications to Tokyo stock market. Management Science, 37, 519–531. Kwan, C. C. Y. (1997). Portfolio selection under institutional procedures for short selling: Normative and market equilibrium considerations. Journal of Banking & Finance, 21, 369–391. Lee, E. S., & Li, R. J. (1993). Fuzzy multiple objective programming and compromise programming with Pareto optimum. Fuzzy Sets and Systems, 53, 275–288. Markowitz, H. M. (1952). Portfolio selection. The Journal of Finance, 7, 77–91. Markowitz, H. M. (1959). Portfolio selection, efficient diversification of investments. New York: Wiley. Papahristoulou, C., & Dotzauer, E. (2004). Optimal portfolios using linear programming models. Journal of the Operational Research Society, 55, 1169–1177. Rockafellar, R. T., & Uryasev, S. (2002). Conditional value-at-risk for general loss distribution. Journal of Banking and Finance, 26, 1443–1471. Roy, A. D. (1952). Safety first and the holding of assets. Econometrica, 20, 431–449. Simaan, Y. (1997). Estimation risk in portfolio selection: The mean variance model versus the mean absolute deviation model. Management Science, 43, 1437–1446. Speranza, M. G. (1993). Linear programming models for portfolio optimization. Finance, 14, 107–123. Stevenson, S. (2001). Emerging markets, downside risk and the asset allocation decision. Emerging Markets Review, 2, 50–66.
Topaloglou, N., Vladimirou, H., & Zenios, S. A. (2002). CVaR models with selective hedging for international asset allocation. Journal of Banking and Finance, 26, 1535–1561. Vercher, E., Bermudez, J. D., & Segura, J. V. (2007). Fuzzy portfolio optimization under downside risk measures. Fuzzy Sets and Systems, 158, 769–782. White, J. A. (1990). More institutional investors selling short: But tactic is part of a wider strategy. Wall Street Journal, 7, 493–501. Yu, J. R., & Lee, W. Y. (2011). Portfolio rebalancing model using multiple criteria. European Journal of Operational Research, 209, 166–175. Yu, M., Takahashi, S., Inoue, H., & Wang, S. (2010). Dynamic portfolio optimization with risk control for absolute deviation model. Journal of Operational Research, 201, 349–364. Zimmermann, H. J. (1978). Fuzzy programming and linear programming with several objective functions. Fuzzy Sets and Systems, 1, 44–45.
27 Using Alternative Models and a Combining Technique in Credit Rating Forecasting: An Empirical Study
Cheng-Few Lee, Kehluh Wang, Yating Yang, and Chan-Chien Lien
Contents
27.1 Introduction .......................................................... 730
27.2 Methodology ........................................................... 731
  27.2.1 Ordered Probit Model ............................................ 731
  27.2.2 Ordered Logit Model ............................................. 732
  27.2.3 Combining Method ................................................ 732
27.3 Empirical Results ..................................................... 733
  27.3.1 Model Estimates ................................................. 734
  27.3.2 Credit Rating Forecasting ....................................... 738
  27.3.3 Estimation Results Using the Combining Method ................... 738
  27.3.4 Performance Evaluation .......................................... 741
27.4 Conclusion ............................................................ 741
Appendix 1: Ordered Probit Procedure for Credit Rating Forecasting ........ 744
Appendix 2: Ordered Logit Procedure for Credit Rating Forecasting ......... 747
Appendix 3: Procedure for Combining Probability Forecasts ................. 748
References ................................................................ 749
Abstract
Credit rating forecasting has long been very important for bond classification and loan analysis. In particular, under the Basel II environment, regulators in Taiwan have requested the banks to estimate the default probability of the loan based on its credit classification. A proper forecasting procedure for the credit rating of the loan is crucially important in abiding by the rule. Credit rating is an ordinal scale from which the credit category of a firm can be ranked from high to low, but the scale of the difference between them is unknown. To model the ordinal outcomes, this study first utilizes the ordered logit and the ordered probit models, respectively. Then, we use an ordered logit combining method to weight the probability measures from the different techniques, as described in Kamstra and Kennedy (International Journal of Forecasting 14, 83–93, 1998), to form the combining model. The samples consist of firms in the TSE and the OTC market and are divided into three industries for analysis. We consider financial variables, market variables, as well as macroeconomic variables and estimate their parameters for out-of-sample tests. By means of the cumulative accuracy profile, the receiver operating characteristic, and McFadden's R2, we measure the goodness-of-fit and the accuracy of each prediction model. The performance evaluations are conducted to compare the forecasting results, and we find that the combining technique does improve the predictive power.

C.-F. Lee, Department of Finance and Economics, Rutgers Business School, Rutgers, The State University of New Jersey, Piscataway, NJ, USA; Graduate Institute of Finance, National Chiao Tung University, Hsinchu, Taiwan e-mail: [email protected] K. Wang • Y. Yang (*), Graduate Institute of Finance, National Chiao Tung University, Hsinchu, Taiwan e-mail: [email protected]; [email protected] C.-C. Lien, Treasury Division, E.SUN Commercial Bank, Taipei, Taiwan

Keywords
Bankruptcy prediction • Combining forecast • Credit rating • Credit risk • Credit risk index • Forecasting models • Logit regression • Ordered logit • Ordered probit • Probability density function
27.1 Introduction
This study explores the credit rating forecasting techniques for firms in Taiwan. We employ the ordered logit and the ordered probit models for rating classification and then a combining procedure to integrate both. We then examine empirically the performance of these alternative methods, in particular, whether the combining forecast performs better than any individual method. Credit rating forecasting has long been very important for bond classification and loan analysis. In particular, under the Basel II environment, regulators in Taiwan have requested the banks to estimate the default probability of the loan based on its credit classification. A proper forecasting procedure for the credit rating of the loan is crucially important in abiding by the rule. Different forecasting models and estimation procedures have various underlying assumptions and computational complexities. They have been used extensively by researchers in the literature. Review papers like Hand and Henley (1997), Altman and Saunders (1997), and Crouhy et al. (2000) have traced the developments of the credit classification and bankruptcy prediction models over the last two decades. Since Beaver's (1966) pioneering work, there has been considerable research on the subject of credit risk. Many studies (Altman 1968; Pinches and Mingo 1973; Altman and Katz 1976; Altman et al. 1977; Pompe and Bilderbeek 2005)
use the multivariate discriminant analysis (MDA) which assumes normality for the explanatory variables of the default class. Zmijewski (1984) utilizes the probit model, and Ohlson (1980) applies the logit model in which discrete or continuous data can be fitted. Kaplan and Urwitz (1979), Ederington (1985), Lawrence and Arshadi (1995), and Blume et al. (1998) show that it is a consistent structure considering credit rating as ordinal scale instead of interval scale. That is, the different values of the dependent variables as different classes represent an ordinal, but not necessarily a linear scale. For instance, higher ratings are less risky than lower ratings, but we don’t have a quantitative measure indicating how much less risky they are. Kaplan and Urwitz (1979) conduct an extensive examination of alternative prediction models including N-chotomous probit analysis which can explain the ordinal nature of bond ratings. To test the prediction accuracy of various statistical models, Ederington (1985) compares the linear regression, discriminant analysis, ordered probit, and unordered logit under the same condition. He concludes that the ordered probit can have the best prediction ability and the linear regression is the worst. In a survey paper on forecasting methods, Mahmoud (1984) concludes that combining forecasts can improve accuracy. Granger (1989) summarizes the usefulness of combining forecasts. Clemen (1989) observes that combining forecasts increase accuracy, whether the forecasts are subjective, statistical, econometric, or by extrapolation. Kamstra and Kennedy (1998) integrate two approaches with logitbased forecast-combining method which is applicable to dichotomous, polychotomous, or ordered-polychotomous contexts. In this paper, we apply the ordered logit and the ordered probit models in credit rating classification for listed firms in Taiwan and then combine two rating models with a logit regression technique. The performance of each model is then evaluated and we find that combining technique does improve the predictive power.
27.2 Methodology
27.2.1 Ordered Probit Model

Credit rating is an ordinal scale from which the credit category of a firm can be ranked from high to low, but the scale of the difference between them is unknown. To model the ordinal outcomes, let the underlying response function be

Y* = Xβ + ε    (27.1)

where Y* is the latent variable, X is a set of explanatory variables, and ε is the residual. Y* is not observed, but from it we can classify the category j:

Y_i = j  if  τ_{j−1} < Y*_i ≤ τ_j   (i = 1, 2, ..., n; j = 1, 2, ..., J).    (27.2)
Maximum likelihood estimation can be used to estimate the parameters given a specific form of the residual distribution. For the ordered probit model, ε is normally distributed with mean 0 and variance 1. The probability density function is

φ(ε) = (1/√(2π)) exp(−ε²/2)    (27.3)

and the cumulative density function is

Φ(ε) = ∫_{−∞}^{ε} (1/√(2π)) exp(−t²/2) dt.    (27.4)
27.2.2 Ordered Logit Model

For the ordered logit model, ε has a logistic distribution with mean 0 and variance π²/3. The probability density function is

l(ε) = exp(ε) / [1 + exp(ε)]²    (27.5)

and the cumulative density function is

L(ε) = exp(ε) / [1 + exp(ε)].    (27.6)
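To make Eqs. 27.1–27.6 concrete, the following minimal sketch (illustrative only; the threshold values and the fitted linear index below are hypothetical, not estimates from the chapter) evaluates the class probabilities Pr(Y = j) implied by the ordered probit and ordered logit links.

```python
# Sketch: class probabilities implied by an ordered model, Pr(Y = j) = F(tau_j - Xb) - F(tau_{j-1} - Xb).
import numpy as np
from scipy.stats import norm, logistic

def rating_probabilities(x_beta, tau, link="probit"):
    """x_beta: fitted linear index for one firm; tau: the J-1 interior thresholds (increasing)."""
    cdf = norm.cdf if link == "probit" else logistic.cdf
    # Augment with tau_0 = -inf and tau_J = +inf so that probabilities sum to one.
    cuts = np.concatenate(([-np.inf], np.asarray(tau, dtype=float), [np.inf]))
    cum = cdf(cuts - x_beta)          # F(tau_j - x_beta) for j = 0..J
    return np.diff(cum)               # the J ordered-class probabilities

# Example with J = 4 rating classes (illustrative numbers only).
tau = [-1.0, 0.0, 1.5]
print(rating_probabilities(x_beta=0.3, tau=tau, link="probit"))
print(rating_probabilities(x_beta=0.3, tau=tau, link="logit"))
```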
27.2.3 Combining Method

To combine the ordered logit and the ordered probit models for credit forecasting, the logit regression method as described in Kamstra and Kennedy (1998) is applied. We first assume that a firm's credit classification is determined by an index y. Suppose there are J rating classes, ordered from 1 to J. If y exceeds the threshold value τ_j, j = 1, ..., J − 1, the credit classification changes from rating j to rating j + 1. The probability of company i being in rating j is given by the integral of a standard logit from τ_{j−1} − y_i to τ_j − y_i. Each forecasting method is considered as producing J − 1 measures ω_{ji} = τ_j − y_i, j = 1, ..., J − 1, for each firm. These measures can be estimated as

ω_{ji} = ln[(P_{1i} + ... + P_{ji}) / (1 − P_{1i} − ... − P_{ji})]    (27.7)
Table 27.1 Sample numbers across the industry

Panel A: In-sample (2000.Q1–2004.Q4)
  Industry        Non-bankruptcy observations   Bankruptcy observations   Total
  Traditional               3,993                        509              4,502
  Manufacturing             2,450                        411              2,861
  Electronics               8,854                        191              9,045

Panel B: Out-of-sample (2005.Q1–2005.Q3)
  Industry        Non-bankruptcy observations   Bankruptcy observations   Total
  Traditional                 629                         63                692
  Manufacturing               432                         38                470
  Electronics               1,990                         72              2,062
where P_{ji} is a probability estimate for firm i in rating j. The combining method proposed by Kamstra and Kennedy (1998) consists of finding, via MLE, an appropriate weighted average of the ω's from the ordered logit and ordered probit techniques.1 To validate the model, we use the cumulative accuracy profile (CAP) and its summary statistic, the accuracy ratio (AR). A concept similar to the CAP is the receiver operating characteristic (ROC) and its summary statistic, the area under the ROC curve (AUC). In addition, we employ McFadden's R² (pseudo R²) to evaluate the performance of the credit rating model. McFadden's R² is defined as 1 − (unrestricted log-likelihood/restricted log-likelihood).
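As an illustration of these validation statistics, the sketch below (my own; the default flags and predicted default probabilities are hypothetical inputs, and the accuracy ratio is obtained from the standard relation AR = 2·AUC − 1 between the CAP and ROC summary statistics) computes the AUC, the AR, and McFadden's R².

```python
# Sketch: AUC of the ROC curve, accuracy ratio of the CAP, and McFadden's pseudo R-squared.
import numpy as np
from sklearn.metrics import roc_auc_score

def validation_summary(default_flag, predicted_pd, loglik_model, loglik_null):
    """default_flag: 0/1 outcomes; predicted_pd: model scores; loglik_model/loglik_null:
    unrestricted and restricted (intercept-only) log-likelihoods of the ordinal model."""
    auc = roc_auc_score(default_flag, predicted_pd)    # area under the ROC curve
    ar = 2.0 * auc - 1.0                               # accuracy ratio implied by the CAP
    mcfadden_r2 = 1.0 - loglik_model / loglik_null     # 1 - (unrestricted LL / restricted LL)
    return {"AUC": auc, "AR": ar, "McFadden R2": mcfadden_r2}

# Illustrative call with made-up numbers.
rng = np.random.default_rng(0)
pd_hat = rng.uniform(size=500)
flag = (rng.uniform(size=500) < 0.3 * pd_hat).astype(int)
print(validation_summary(flag, pd_hat, loglik_model=-410.2, loglik_null=-655.9))
```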
27.3 Empirical Results
Data are collected from the Taiwan Economic Journal (TEJ) database for the period between the first quarter of 2000 and the third quarter of 2005, with the last three quarters used for out-of-sample tests. The sample consists of firms traded on the Taiwan Stock Exchange (TSE) and the OTC market. The credit rating of the sample firms is determined by the Taiwan Corporate Credit Risk Index (TCRI). Among the ten credit ratings, 1–4 represent the investment-grade levels, 5–6 represent the low-risk levels, and 7–9 represent the high-risk or speculative-grade levels. The final rating, 10, represents the bankruptcy level. Table 27.1 exhibits the descriptive statistics for the samples, which are divided into three industry categories. Panel A contains the in-sample observations, while Panel B shows the out-of-sample observations. There are 509 bankruptcy cases in the traditional industry, 411 in the manufacturing sector, and 191 in the electronics industry for the in-sample data. For the out-of-sample data, there are 63, 38, and 72 bankruptcy cases in the traditional, manufacturing, and electronics industries, respectively. Table 27.2 displays the frequency distributions of the credit ratings for the in-sample observations in these three industries.
1 See Kamstra and Kennedy (1998) for a detailed description.
Table 27.2 Frequency distributions of the credit ratings

  Rating      Traditional   Manufacturing   Electronics
  1                  10             3            156
  2                  47            71            259
  3                 151            62            252
  4                 380           294          1,066
  5                 921           321          2,343
  6               1,044           559          2,736
  7                 645           465          1,272
  8                 459           338            490
  9                 336           337            280
  10                509           411            191
  Subtotal        4,502         2,861          9,045

Note: Level 10 represents the bankruptcy class
Bonfim (2009) finds that not only does the firms' financial situation play a central role in explaining default probabilities, but macroeconomic conditions are also important when assessing default probabilities over time. Based on previous studies in the literature, 62 explanatory variables including financial ratios, market conditions, and macroeconomic factors are considered. We use the hybrid stepwise method to find the best predictors in the ordered probit and ordered logit models. The combining technique using logit regression is then applied.
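The chapter does not reproduce its variable-selection code; as a hedged illustration of one simple variant of stepwise selection, the sketch below performs forward selection of predictors for an ordered logit model scored by AIC, assuming statsmodels' OrderedModel (the backward-checking refinements of the authors' "hybrid" procedure are not reproduced here).

```python
# Sketch: forward stepwise selection of rating predictors scored by AIC.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

def aic_of(y, X_subset):
    """y: ordinal rating (integer codes or an ordered Categorical); X_subset: DataFrame of ratios."""
    res = OrderedModel(y, X_subset, distr="logit").fit(method="bfgs", disp=False)
    return 2 * len(res.params) - 2 * res.llf    # AIC computed from the log-likelihood

def forward_stepwise(y, X, max_vars=10):
    selected, remaining, best_aic = [], list(X.columns), np.inf
    while remaining and len(selected) < max_vars:
        scores = {v: aic_of(y, X[selected + [v]]) for v in remaining}
        candidate, cand_aic = min(scores.items(), key=lambda kv: kv[1])
        if cand_aic >= best_aic:        # stop when no candidate improves the fit
            break
        selected.append(candidate)
        remaining.remove(candidate)
        best_aic = cand_aic
    return selected
```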
27.3.1 Model Estimates

27.3.1.1 Ordered Logit Model

Table 27.3 illustrates the in-sample estimation results under the ordered logit model for each industry. From the likelihood ratio, score, and Wald statistics, all significant at the 1 % level, we can establish the goodness of fit of each model. For the traditional industry, the coefficients of the Fixed Asset to Long Term Funds Ratio (Fixed Asset to Equity and Long Term Liability Ratio), the Interest Expense to Sales Ratio, and the Debt Ratio are positive. This shows that firms with higher values of these ratios receive worse credit ratings and have higher default probabilities. On the other hand, the coefficients of the Accounts Receivable Turnover Ratio, Net Operating Profit Margin, Return on Total Assets (Ex-Tax, Interest Expense), Depreciation to Sales Ratio, Free Cash Flow to Total Debt Ratio, Capital Spending to Gross Fixed Assets Ratio, Retained Earning to Total Assets Ratio, and Ln (Total Assets/GNP price-level index) are negative, so firms tend to have better credit quality and lower default probabilities when these ratios become higher. All these signs meet our expectations. For the manufacturing industry, the coefficient of the dummy variable for Negative Net Income in the last 2 years is positive, so such losses worsen the credit rating. On the other hand, the coefficients of Equity to Total Asset Ratio, Total Assets Turnover Ratio, Return on Total Assets (Ex-Tax, Interest Expense),
Table 27.3 Regression results estimated by the ordered logit. This table reports the regression results estimated by the ordered logit model. Panel A shows the 11 explanatory variables fitted in the traditional industry, Panel B the nine explanatory variables fitted in the manufacturing industry, and Panel C the ten explanatory variables fitted in the electronics industry (standard errors in parentheses)

Panel A: Traditional
  X7   Fixed Asset to Long Term Funds Ratio                 0.968***   (0.098)
  X12  Accounts Receivable Turnover Ratio                  -0.054***   (0.0087)
  X19  Net Operating Profit Margin                         -5.135***   (0.546)
  X27  Return on Total Assets (Ex-Tax, Interest Expense)   -5.473***   (0.934)
  X29  Depreciation to Sales Ratio                         -6.662***   (0.537)
  X30  Interest Expense to Sales Ratio                     22.057***   (1.862)
  X35  Free Cash Flow to Total Debt Ratio                  -0.971***   (0.094)
  X41  Capital Spending to Gross Fixed Assets Ratio        -0.746***   (0.167)
  X46  Debt Ratio                                           4.781***   (0.305)
  X47  Retained Earning to Total Assets Ratio              -5.872***   (0.336)
  X50  Ln (Total Assets/GNP price-level index)             -6.991***   (1.024)

Panel B: Manufacturing
  X2   Equity to Total Asset Ratio                         -7.851***   (0.317)
  X15  Total Assets Turnover Ratio                         -0.714***   (0.172)
  X27  Return on Total Assets (Ex-Tax, Interest Expense)   -7.520***   (0.932)
  X40  Accumulative Depreciation to Gross Fixed Assets     -1.804***   (0.199)
  X47  Retained Earning to Total Assets Ratio              -3.138***   (0.340)
  X50  Ln (Total Assets/GNP price-level index)             -5.663***   (1.213)
  X52  1 if Net Income was Negative for the Last Two Years, 0 otherwise   1.274***   (0.108)
  X54  Ln (Age of the firm)                                -0.486***   (0.102)
  X60  Ln (Net Sales)                                      -1.063***   (0.065)

Panel C: Electronics
  X2   Equity to Total Asset Ratio                         -3.239***   (0.207)
  X27  Return on Total Assets (Ex-Tax, Interest Expense)   -6.929***   (0.349)
  X30  Interest Expense to Sales Ratio                     25.101***   (1.901)
  X35  Free Cash Flow to Total Debt Ratio                  -0.464***   (0.031)
  X40  Accumulative Depreciation to Gross Fixed Assets     -0.781***   (0.127)
  X44  Cash Reinvestment Ratio                             -0.880***   (0.167)
  X45  Working Capital to Total Assets Ratio               -3.906***   (0.179)
  X47  Retained Earning to Total Assets Ratio              -3.210***   (0.174)
  X49  Market Value of Equity/Total Liability              -0.048***   (0.004)
  X50  Ln (Total Assets/GNP price-level index)             -8.797***   (0.704)

*** Significantly different from zero at the 1 % level; ** at the 5 % level; * at the 10 % level
Accumulative Depreciation to Gross Fixed Assets, Retained Earning to Total Assets Ratio, Ln (Total Assets/GNP price-level index), Ln (Age of the firm), and Ln (Net Sales) are all negative. Higher values of these ratios improve the credit standing. For the electronics industry, the coefficient of the Interest Expense to Sales Ratio is positive, and the coefficients of the Equity to Total Asset Ratio, Return on Total Assets (Ex-Tax, Interest Expense), Free Cash Flow to Total Debt Ratio, Accumulative Depreciation to Gross Fixed Assets, Cash Reinvestment Ratio, Working Capital to Total Assets Ratio, Retained Earning to Total Assets Ratio, Market Value of Equity/Total Liability, and Ln (Total Assets/GNP price-level index) are negative. In general, mature companies such as those in the traditional and manufacturing industries should focus mainly on their operating and liquidity capabilities. Market factors are also important for the manufacturing firms. For a high-growth industry such as electronics, more attention should be paid to market factors and liquidity ratios. The common explanatory variables among the three industries relate to operating returns, retained earnings, and asset size.
27.3.1.2 Ordered Probit Model

Table 27.4 shows the in-sample estimation results using the ordered probit model for each industry. For the traditional industry, the coefficients of the Fixed Asset to Long Term Funds Ratio, Interest Expense to Sales Ratio, and Debt Ratio are positive in the ordered probit model, similar to the results from the ordered logit model. On the other hand, the coefficients of the Accounts Receivable Turnover Ratio, Net Operating Profit Margin, Return on Total Assets (Ex-Tax, Interest Expense), Depreciation to Sales Ratio, Operating Cash Flow to Total Liability Ratio, Capital Spending to Gross Fixed Assets Ratio, and Retained Earning to Total Assets Ratio are negative, again showing no large difference from the ordered logit results. For the manufacturing industry, the coefficients of the Accounts Payable Turnover Ratio and of the dummy variable for negative Net Income in the last 2 years are positive. On the other hand, the coefficients of the Equity to Total Asset Ratio, Quick Ratio, Total Assets Turnover Ratio, Return on Total Assets (Ex-Tax, Interest Expense), Accumulative Depreciation to Gross Fixed Assets, Cash Flow Ratio, Retained Earning to Total Assets Ratio, and Ln (Age of the firm) are negative. The results are also similar to those of the ordered logit model, except that the ordered logit model appears to place more weight on size factors (sales, assets), while the ordered probit model is more concerned with the liquidity of the firm (quick ratio, cash flow ratio, and payables). For the electronics industry, the coefficient of the Interest Expense to Sales Ratio is positive, and the coefficients of the Free Cash Flow to Total Debt Ratio, Working Capital to Total Assets Ratio, Retained Earning to Total Assets Ratio, Market Value of Equity/Total Liability, Ln (Total Assets/GNP price-level index), and Ln (Age of the firm) are negative. Table 27.5 shows the threshold values estimated by the two models. Threshold values represent the cut points between neighboring ratings.
Table 27.4 Regression results estimated by the ordered probit. This table reports the regression results estimated by the ordered probit model. Panel A shows the ten explanatory variables fitted in the traditional industry, Panel B the ten explanatory variables fitted in the manufacturing industry, and Panel C the nine explanatory variables fitted in the electronics industry (standard errors in parentheses)

Panel A: Traditional
  X7   Fixed Asset to Long Term Funds Ratio                 0.442***   (0.058)
  X12  Accounts Receivable Turnover Ratio                  -0.027***   (0.005)
  X19  Net Operating Profit Margin                         -2.539***   (0.311)
  X27  Return on Total Assets (Ex-Tax, Interest Expense)   -3.335***   (0.534)
  X29  Depreciation to Sales Ratio                         -3.204***   (0.287)
  X30  Interest Expense to Sales Ratio                      9.881***   (1.040)
  X34  Operating Cash Flow to Total Liability Ratio        -0.514***   (0.121)
  X41  Capital Spending to Gross Fixed Assets Ratio        -0.459***   (0.096)
  X46  Debt Ratio                                           3.032***   (0.188)
  X47  Retained Earning to Total Assets Ratio              -3.317***   (0.192)

Panel B: Manufacturing
  X2   Equity to Total Asset Ratio                         -4.648***   (0.183)
  X10  Quick Ratio                                         -0.036***   (0.009)
  X11  Accounts Payable Turnover Ratio                      0.021***   (0.004)
  X15  Total Assets Turnover Ratio                         -0.742***   (0.072)
  X27  Return on Total Assets (Ex-Tax, Interest Expense)   -4.175***   (0.522)
  X40  Accumulative Depreciation to Gross Fixed Assets     -0.989***   (0.112)
  X43  Cash Flow Ratio                                     -0.190***   (0.045)
  X47  Retained Earning to Total Assets Ratio              -1.804***   (0.188)
  X52  1 if Net Income was Negative for the Last Two Years, 0 otherwise   0.753***   (0.061)
  X54  Ln (Age of the firm)                                -0.289***   (0.056)

Panel C: Electronics
  X2   Equity to Total Asset Ratio                         -1.851***   (0.117)
  X27  Return on Total Assets (Ex-Tax, Interest Expense)   -3.826***   (0.197)
  X30  Interest Expense to Sales Ratio                     11.187***   (1.052)
  X35  Free Cash Flow to Total Debt Ratio                  -0.233***   (0.017)
  X45  Working Capital to Total Assets Ratio               -2.101***   (0.101)
  X47  Retained Earning to Total Assets Ratio              -1.715***   (0.098)
  X49  Market Value of Equity/Total Liability              -0.027***   (0.002)
  X50  Ln (Total Assets/GNP price-level index)             -4.954***   (0.399)
  X54  Ln (Age of the firm)                                -0.247***   (0.028)

*** Significantly different from zero at the 1 % level; ** at the 5 % level; * at the 10 % level
Table 27.5 Threshold values estimated by the ordinal analysis. This table shows the threshold values estimated by the ordered logit and the ordered probit models. There are nine threshold parameters given ten credit ratings (standard errors in parentheses)

Ordered logit model
  Threshold   Traditional          Manufacturing        Electronics
  τ1          -23.361*** (0.633)   -28.610*** (0.987)   -31.451*** (0.478)
  τ2          -21.236*** (0.545)   -24.879*** (0.766)   -29.759*** (0.459)
  τ3          -19.441*** (0.519)   -23.848*** (0.749)   -28.855**  (0.451)
  τ4          -17.639*** (0.502)   -21.657*** (0.717)   -26.846*** (0.436)
  τ5          -15.389*** (0.484)   -20.333*** (0.703)   -24.451*** (0.421)
  τ6          -13.317*** (0.473)   -18.309*** (0.683)   -21.777*** (0.407)
  τ7          -11.584*** (0.465)   -16.461*** (0.670)   -19.715*** (0.400)
  τ8           -9.825*** (0.459)   -14.738*** (0.662)   -18.019*** (0.397)
  τ9           -8.231*** (0.457)   -12.166*** (0.655)   -16.010*** (0.400)

Ordered probit model
  Threshold   Traditional          Manufacturing        Electronics
  τ1          -12.272*** (0.332)   -16.891*** (0.447)   -17.306*** (0.252)
  τ2          -11.231*** (0.294)   -15.042*** (0.360)   -16.410*** (0.243)
  τ3          -10.316*** (0.282)   -14.446*** (0.352)   -15.917*** (0.240)
  τ4           -9.339*** (0.275)   -13.175*** (0.337)   -14.797**  (0.233)
  τ5           -8.080*** (0.268)   -12.411*** (0.331)   -13.441*** (0.227)
  τ6           -6.918*** (0.263)   -11.251*** (0.322)   -11.937*** (0.221)
  τ7           -5.972*** (0.261)   -10.207*** (0.314)   -10.809*** (0.218)
  τ8           -5.049*** (0.260)    -9.246*** (0.308)    -9.929*** (0.217)
  τ9           -4.256*** (0.260)    -7.860*** (0.301)    -9.005*** (0.219)

*** Significantly different from zero at the 1 % level; ** at the 5 % level; * at the 10 % level
27.3.2 Credit Rating Forecasting

Tables 27.6 and 27.7 illustrate the prediction results of the ordered logit and the ordered probit models. Following Blume et al. (1998), we count a prediction as correct when the most probable rating equals the actual rating or one of its immediately adjacent ratings. The ratio of the number of such predictions to the total number of ratings being predicted assesses the goodness of fit of the model. For out-of-sample firms, the predictive power of the ordered logit model for each industry is 86.85 %, 81.06 %, and 86.37 %, respectively, and the predictive power of the ordered probit model for each industry is 86.42 %, 80.21 %, and 84.87 %, respectively. The results from the two models are quite similar.
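A minimal sketch of the prediction ratio used here (counting a forecast as correct when the predicted rating is within one notch of the actual rating); the example ratings below are made up, not from the sample.

```python
# Sketch: "within one adjacent rating" prediction ratio reported in Tables 27.6, 27.7, and 27.9.
import numpy as np

def prediction_ratio(actual, predicted, tolerance=1):
    actual, predicted = np.asarray(actual), np.asarray(predicted)
    return np.mean(np.abs(actual - predicted) <= tolerance)

# Example on the 1-10 rating scale (illustrative values).
actual    = [5, 6, 6, 7, 9, 10, 3]
predicted = [5, 5, 7, 9, 9, 10, 5]
print(f"{prediction_ratio(actual, predicted):.2%}")   # 5 of 7 within one notch -> 71.43%
```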
27.3.3 Estimation Results Using the Combining Method

Table 27.8 depicts the regression results using the Kamstra-Kennedy combining forecasting technique. The coefficients are the logit estimates on the ω's. These ω coefficients are all positive and strongly significant for each industry.
Table 27.6 Out-of-sample predictions by the ordered logit model. Rows are predicted ratings; the ten entries in each row are the numbers of firms whose actual rating is 1, 2, ..., 10, respectively

Panel A: Traditional (prediction ratio 86.85 %)
  Predicted 1:    0   0   0   0   0   0   0   0   0   0
  Predicted 2:    0   0   0   0   0   0   0   0   0   0
  Predicted 3:    3   0   4   4   0   0   0   0   0   0
  Predicted 4:    0   6   8  26  14   2   0   0   0   0
  Predicted 5:    0   5  12  34 101  53   5   0   0   0
  Predicted 6:    0   1   0   7  41 104  24  21   1   3
  Predicted 7:    0   0   0   0   2  29  26  26   4   2
  Predicted 8:    0   0   0   0   0   5  11  13  15   9
  Predicted 9:    0   0   0   0   0   0   0   2   7  15
  Predicted 10:   0   0   0   0   0   0   0   3  10  34

Panel B: Manufacturing (prediction ratio 81.06 %)
  Predicted 1:    0   0   0   0   0   0   0   0   0   0
  Predicted 2:    0   3   2   0   0   0   0   0   0   0
  Predicted 3:    0   0   2   0   0   0   1   0   0   0
  Predicted 4:    0   4   4   7   5   8   1   0   0   0
  Predicted 5:    0   3   2  21  29  20   0   0   0   0
  Predicted 6:    0   2   5  24  51  68  18   4   0   0
  Predicted 7:    0   0   0   0   5  18  38  16   9   4
  Predicted 8:    0   0   0   1   3   3   9   5  12   5
  Predicted 9:    0   0   0   0   0   0   3   9  10   8
  Predicted 10:   0   0   0   0   0   0   1   1   5  21

Panel C: Electronics (prediction ratio 86.37 %)
  Predicted 1:    0   0   0   0   0   0   0   0   0   1
  Predicted 2:    1   0   0   2   1   0   0   0   0   0
  Predicted 3:    2   0   0   4   1   0   0   0   0   0
  Predicted 4:    0   5   1  13   9  11   0   0   0   0
  Predicted 5:   15  22  36 144 279 293  16   2   0   3
  Predicted 6:    0   2  11  42 180 303 206  50   6   2
  Predicted 7:    0   0   2   0  15  39  79  80  11   5
  Predicted 8:    0   0   0   0   1   2   9  26  21  12
  Predicted 9:    0   0   0   0   0   0   0  10  22  15
  Predicted 10:   0   0   0   0   0   0   1   2  13  34
Table 27.7 Out-of-sample predictions by the ordered probit model. Rows are predicted ratings; the ten entries in each row are the numbers of firms whose actual rating is 1, 2, ..., 10, respectively

Panel A: Traditional (prediction ratio 86.42 %)
  Predicted 1:    0   0   0   0   0   0   0   0   0   0
  Predicted 2:    2   0   0   0   0   0   0   0   0   0
  Predicted 3:    1   0   3   6   0   0   0   0   0   0
  Predicted 4:    0   4   5  22  10   2   0   0   0   0
  Predicted 5:    0   6  16  36  97  45   4   1   1   0
  Predicted 6:    0   2   0   7  48 106  28  19   0   2
  Predicted 7:    0   0   0   0   3  36  21  26   5   3
  Predicted 8:    0   0   0   0   0   4  13  14  10  11
  Predicted 9:    0   0   0   0   0   0   0   2   9  15
  Predicted 10:   0   0   0   0   0   0   0   3  12  32

Panel B: Manufacturing (prediction ratio 80.21 %)
  Predicted 1:    0   1   1   0   0   0   0   0   0   0
  Predicted 2:    0   2   1   0   0   0   0   0   0   0
  Predicted 3:    0   0   2   0   0   0   0   0   0   0
  Predicted 4:    0   4   4   9   6   7   1   0   0   0
  Predicted 5:    0   5   2  20  28  22   1   0   0   0
  Predicted 6:    0   0   5  23  51  71  23   6   0   1
  Predicted 7:    0   0   0   1   5  15  34  15  12   4
  Predicted 8:    0   0   0   0   3   2   8   4   8   5
  Predicted 9:    0   0   0   0   0   0   3   9  11   8
  Predicted 10:   0   0   0   0   0   0   1   1   5  20

Panel C: Electronics (prediction ratio 84.87 %)
  Predicted 1:    0   0   0   0   0   0   0   0   0   0
  Predicted 2:    0   0   0   0   0   0   0   0   0   0
  Predicted 3:    0   0   0   0   0   0   0   0   0   0
  Predicted 4:    3   1   0   5   7   8   0   0   0   0
  Predicted 5:   15  26  40 155 295 313  11   2   0   3
  Predicted 6:    0   2   8  45 164 286 207  54   5   4
  Predicted 7:    0   0   2   0  20  40  87  96  26  12
  Predicted 8:    0   0   0   0   0   1   5  14  30  21
  Predicted 9:    0   0   0   0   0   0   0   2   6   8
  Predicted 10:   0   0   0   0   0   0   1   2   6  24
Table 27.8 Regression results estimated by the combining forecast

  Parameter   Traditional            Manufacturing          Electronics
  τ1          -13.1875 (0.3033)***   -13.6217 (0.5475)***   -11.6241 (0.1591)***
  τ2          -11.3484 (0.1854)***    -9.9864 (0.1918)***   -10.5484 (0.1483)***
  τ3           -9.8997 (0.1524)***    -9.1712 (0.1704)***    -9.973  (0.1456)***
  τ4           -8.3781 (0.1370)***    -7.5337 (0.1441)***    -8.7037 (0.143)***
  τ5           -6.4390 (0.1257)***    -6.4459 (0.1315)***    -7.0973 (0.1414)***
  τ6           -4.5627 (0.1137)***    -4.8259 (0.1142)***    -5.0093 (0.1364)***
  τ7           -3.1037 (0.1016)***    -3.3724 (0.1003)***    -3.1906 (0.1261)***
  τ8           -1.5619 (0.0885)***    -2.0849 (0.0913)***    -1.5682 (0.1145)***
  τ9            0.0816 (0.0861)        0.1127 (0.0929)        0.3735 (0.1169)**
  ωLogit        1.1363 (0.0571)***     1.2321 (0.029)***      0.6616 (0.0528)***
  ωProbit       0.0825 (0.0330)*       0.1437 (0.0085)***     0.1867 (0.0272)***

Numbers in parentheses represent the standard errors
*** Significantly different from zero at the 1 % level; ** at the 5 % level; * at the 10 % level
Table 27.9 shows the prediction results of the combining forecasting model. For the out-of-sample tests, the predictive power of the combining model for each industry is 89.88 %, 82.77 %, and 88.02 %, respectively, which is higher than that of the ordered logit or ordered probit models by roughly 2–4 percentage points.
27.3.4 Performance Evaluation

To evaluate the performance of each model, Fig. 27.1 illustrates the ROC curves estimated by the three models. From these ROC curves we can compare the performance of each rating model. Furthermore, we can compare the AUC and the AR calculated from the ROC and CAP curves (see Table 27.10). For the traditional industry, the AUCs from the ordered logit, the ordered probit, and the combining model are 95.32 %, 95.15 %, and 95.32 %, respectively. For the manufacturing industry, they are 94.73 %, 93.66 %, and 95.51 %, respectively. For the electronics industry, they are 92.43 %, 92.30 %, and 94.07 %, respectively. These results show that the combining forecasting model performs better than either individual model.
27.4 Conclusion
This study explores credit rating forecasting techniques. The sample consists of firms listed on the TSE and the OTC market, and
Table 27.9 Out-of-sample credit rating prediction by the combining forecast. Rows are predicted ratings; the ten entries in each row are the numbers of firms whose actual rating is 1, 2, ..., 10, respectively

Panel A: Traditional (prediction ratio 89.88 %)
  Predicted 1:    0   0   0   0   0   0   0   0   0   0
  Predicted 2:    0   0   0   0   0   0   0   0   0   0
  Predicted 3:    2   0   4   2   0   0   0   0   0   0
  Predicted 4:    1   6   7  28  12   2   0   0   0   0
  Predicted 5:    0   6  13  34 100  48   5   0   0   0
  Predicted 6:    0   0   0   7  44 113  27  23   1   3
  Predicted 7:    0   0   0   0   2  26  23  25   4   2
  Predicted 8:    0   0   0   0   0   4  11  12  14   9
  Predicted 9:    0   0   0   0   0   0   0   2   8  16
  Predicted 10:   0   0   0   0   0   0   0   3  10  33

Panel B: Manufacturing (prediction ratio 82.77 %)
  Predicted 1:    0   0   0   0   0   0   0   0   0   0
  Predicted 2:    0   2   1   0   0   0   0   0   0   0
  Predicted 3:    0   1   2   0   0   0   1   0   0   0
  Predicted 4:    0   3   5   6   5   6   1   0   0   0
  Predicted 5:    0   6   2  24  30  23   0   1   0   0
  Predicted 6:    0   0   5  22  50  70  27   3   1   0
  Predicted 7:    0   0   0   0   5  17  32  16   8   4
  Predicted 8:    0   0   0   1   3   1   7   6  12   4
  Predicted 9:    0   0   0   0   0   0   2   8  11   7
  Predicted 10:   0   0   0   0   0   0   1   1   4  23

Panel C: Electronics (prediction ratio 88.02 %)
  Predicted 1:    0   0   0   0   0   0   0   0   0   0
  Predicted 2:    3   0   0   4   1   0   0   0   0   0
  Predicted 3:    0   0   0   4   1   0   0   0   0   0
  Predicted 4:    0   4   1  13  11  12   0   0   0   0
  Predicted 5:   15  18  34 128 256 248  14   1   0   3
  Predicted 6:    0   7  13  56 193 338 190  44   5   1
  Predicted 7:    0   0   2   0  23  47  91  82  10   6
  Predicted 8:    0   0   0   0   1   3  15  32  23  11
  Predicted 9:    0   0   0   0   0   0   0  10  24  17
  Predicted 10:   0   0   0   0   0   0   1   1  11  34
[Fig. 27.1 ROC curves: (a) Traditional, (b) Manufacturing, (c) Electronics. Each panel plots the hit rate against the false alarm rate for the ordered logit, ordered probit, combining forecast, and random models.]
Table 27.10 Performance evaluation for each model

                             Traditional   Manufacturing   Electronics
Panel A: Ordered logit
  AUC                          95.32 %        94.73 %        92.43 %
  AR                           90.63 %        89.46 %        84.86 %
  McFadden's R-square          35.63 %        38.25 %        39.63 %
Panel B: Ordered probit
  AUC                          95.15 %        93.66 %        92.30 %
  AR                           90.30 %        87.32 %        84.60 %
  McFadden's R-square          34.45 %        40.05 %        41.25 %
Panel C: Combining forecasting
  AUC                          95.32 %        95.51 %        94.07 %
  AR                           90.63 %        91.03 %        88.15 %
  McFadden's R-square          42.34 %        43.16 %        46.28 %

CAP represents the cumulative accuracy profile; AR represents the accuracy ratio. McFadden's R² is defined as 1 − (unrestricted log-likelihood function/restricted log-likelihood function)
are divided into three industries, i.e., traditional, manufacturing, and electronics, for the analysis. Sixty-two explanatory variables consisting of financial, market, and macroeconomic factors are considered. We utilize the ordered logit, the ordered probit, and the combining forecasting model to estimate the parameters and conduct out-of-sample tests. The main result is that the combining forecasting method leads to more accurate rating predictions than the use of either the ordered logit or the ordered probit analysis alone. By means of the cumulative accuracy profile, the receiver operating characteristic, and McFadden's R², we measure the goodness of fit and the accuracy of each prediction model. These performance evaluations consistently show that the combining forecast performs better.
Appendix 1: Ordered Probit Procedure for Credit Rating Forecasting2

Credit rating is an ordinal scale from which the credit category of a firm can be ranked from high to low, but the scale of the difference between them is unknown. To model the ordinal outcomes, we follow Zavoina and McKelvey (1975) and begin with a latent regression

Y* = Xβ + ε    (27.8)

where Y* = (Y*_1, ..., Y*_N)′, X is the N × (K + 1) matrix whose ith row is X_i = (1, X_{1i}, ..., X_{Ki}), β = (β_0, ..., β_K)′, and ε = (ε_1, ..., ε_N)′. Here, β is a vector of unknown parameters, X is a set of explanatory variables, and ε is a random disturbance term assumed to follow the multivariate normal distribution with mean 0 and variance–covariance matrix σ²I, that is,

ε ~ N(0, σ²I).    (27.9)

Y*, the dependent variable of theoretical interest, is unobserved, but from it we can classify the category j:3

2 There are detailed discussions about ordered data in Ananth and Kleinbaum (1997), McCullagh (1980), Wooldridge (2010), and Greene (2011).
3 In our case, J = 10 and K = 62.
Y_i = j  if  τ_{j−1} < Y*_i ≤ τ_j   (i = 1, 2, ..., N; j = 1, 2, ..., J)    (27.10)

where Y is the one we do observe, an ordinal version of Y*, and the τ's are unknown parameters to be estimated with β. We assume that −∞ = τ_0 ≤ τ_1 ≤ ... ≤ τ_J = +∞. From Eqs. 27.8 and 27.10, we have

τ_{j−1} < Y*_i ≤ τ_j  ⇔  τ_{j−1} < X_iβ + ε_i ≤ τ_j  ⇔  (τ_{j−1} − X_iβ)/σ < ε_i/σ ≤ (τ_j − X_iβ)/σ    (27.11)

where X_i is the ith row of X. From Eqs. 27.9 and 27.11, the probability of Y_i = j can be written as

Pr(Y_i = j) = Φ((τ_j − X_iβ)/σ) − Φ((τ_{j−1} − X_iβ)/σ)    (27.12)

where Φ(·) represents the cumulative density function of the standard normal distribution. The model (27.12) is under-identified, since any linear transformation of the underlying scale variable Y*, if applied to the parameters and to τ_0, ..., τ_J as well, would lead to the same model. We assume, without loss of generality, that τ_1 = 0 and σ = 1 in order to identify the model. The model we estimate turns out to be

Pr(Y_i = j) = Φ(τ_j − X_iβ) − Φ(τ_{j−1} − X_iβ).    (27.13)

Maximum likelihood estimation can be used to estimate the J + K − 1 parameters, τ_2, ..., τ_{J−1} and β_0, β_1, ..., β_K, in Eq. 27.13. To form the likelihood function, first define a dummy variable Y_{i,j}:

Y_{i,j} = 1 if Y_i = j, and 0 otherwise.

Then, for simpler notation, we set Z_{i,j} = τ_j − X_iβ and Φ_{i,j} = Φ(Z_{i,j}). The likelihood function, L, is

L = L(β_0, ..., β_K, τ_2, ..., τ_{J−1} | Y) = ∏_{i=1}^{n} ∏_{j=1}^{J} (Φ_{i,j} − Φ_{i,j−1})^{Y_{i,j}}.    (27.14)

So, the log-likelihood function, ln L, is

ln L = Σ_{i=1}^{n} Σ_{j=1}^{J} Y_{i,j} ln(Φ_{i,j} − Φ_{i,j−1}).    (27.15)
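Before turning to the analytical derivatives used below, note that Eq. 27.15 can also be maximized directly with a generic numerical optimizer. The following minimal sketch (my own parameterization, not the chapter's NPROBIT program; the exponential reparameterization of the thresholds is an assumption used only to keep them ordered) evaluates and maximizes the ordered-probit log-likelihood with SciPy, under the identification τ_1 = 0 and σ = 1 of Eq. 27.13.

```python
# Sketch: direct maximization of the ordered-probit log-likelihood of Eq. 27.15.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_loglik(params, y, X, J):
    """y: integer ratings coded 1..J; X: (n, K+1) design matrix incl. constant; J: number of classes."""
    y = np.asarray(y, dtype=int)
    K1 = X.shape[1]
    beta = params[:K1]
    # tau_1 = 0; tau_j = tau_{j-1} + exp(a_j) keeps the thresholds increasing.
    tau = np.concatenate(([0.0], np.cumsum(np.exp(params[K1:]))))
    cuts = np.concatenate(([-np.inf], tau, [np.inf]))        # tau_0 .. tau_J
    xb = X @ beta
    upper = norm.cdf(cuts[y] - xb)                           # Phi(tau_j - X_i beta)
    lower = norm.cdf(cuts[y - 1] - xb)                       # Phi(tau_{j-1} - X_i beta)
    return -np.sum(np.log(np.clip(upper - lower, 1e-12, None)))

def fit_ordered_probit(y, X, J):
    K1 = X.shape[1]
    x0 = np.zeros(K1 + J - 2)                                # crude starting values
    res = minimize(neg_loglik, x0, args=(y, X, J), method="BFGS")
    beta = res.x[:K1]
    tau = np.concatenate(([0.0], np.cumsum(np.exp(res.x[K1:]))))
    return beta, tau, -res.fun                               # estimates and maximized log-likelihood
```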
Now we want to maximize ln L subject to τ_1 ≤ τ_2 ≤ ... ≤ τ_{J−1}. Let N_{i,j} = (1/√(2π)) exp(−Z_{i,j}²/2) for 1 ≤ j ≤ J and 1 ≤ i ≤ N, and let δ_{l,j} = 1 if l = j and δ_{l,j} = 0 if l ≠ j. It follows that

∂Φ_{i,j}/∂β_u = N_{i,j} ∂Z_{i,j}/∂β_u = −N_{i,j} X_{u,i}   for 0 ≤ u ≤ K
∂Φ_{i,j}/∂τ_l = N_{i,j} ∂Z_{i,j}/∂τ_l = N_{i,j} δ_{l,j}    for 2 ≤ l ≤ J − 1    (27.16)

and

∂N_{i,j}/∂β_u = Z_{i,j} N_{i,j} X_{u,i}    for 0 ≤ u ≤ K
∂N_{i,j}/∂τ_l = −Z_{i,j} N_{i,j} δ_{l,j}   for 2 ≤ l ≤ J − 1.    (27.17)

By using Eqs. 27.16 and 27.17, we can calculate the J + K − 1 partial derivatives of Eq. 27.15 with respect to the unknown parameters, β and τ, respectively:

∂ln L/∂β_u = Σ_{i=1}^{n} Σ_{j=1}^{J} Y_{i,j} (N_{i,j−1} − N_{i,j}) X_{u,i} / (Φ_{i,j} − Φ_{i,j−1})                 for 0 ≤ u ≤ K
∂ln L/∂τ_l = Σ_{i=1}^{n} Σ_{j=1}^{J} Y_{i,j} (N_{i,j} δ_{l,j} − N_{i,j−1} δ_{l,j−1}) / (Φ_{i,j} − Φ_{i,j−1})       for 2 ≤ l ≤ J − 1.    (27.18)

And the elements in the (J + K − 1) × (J + K − 1) matrix of second partials are

∂²ln L/∂β_u∂β_v = Σ_{i=1}^{n} Σ_{j=1}^{J} Y_{i,j} [(Φ_{i,j} − Φ_{i,j−1})(Z_{i,j−1} N_{i,j−1} − Z_{i,j} N_{i,j}) − (N_{i,j−1} − N_{i,j})²] X_{u,i} X_{v,i} / (Φ_{i,j} − Φ_{i,j−1})²,

∂²ln L/∂β_u∂τ_l = ∂²ln L/∂τ_l∂β_u = Σ_{i=1}^{n} Σ_{j=1}^{J} Y_{i,j} [(Φ_{i,j} − Φ_{i,j−1})(Z_{i,j} N_{i,j} δ_{l,j} − Z_{i,j−1} N_{i,j−1} δ_{l,j−1}) − (N_{i,j−1} − N_{i,j})(N_{i,j} δ_{l,j} − N_{i,j−1} δ_{l,j−1})] X_{u,i} / (Φ_{i,j} − Φ_{i,j−1})²,

∂²ln L/∂τ_l∂τ_m = Σ_{i=1}^{n} Σ_{j=1}^{J} Y_{i,j} [(Φ_{i,j} − Φ_{i,j−1})(Z_{i,j−1} N_{i,j−1} δ_{m,j−1} δ_{l,j−1} − Z_{i,j} N_{i,j} δ_{m,j} δ_{l,j}) − (N_{i,j} δ_{l,j} − N_{i,j−1} δ_{l,j−1})(N_{i,j} δ_{m,j} − N_{i,j−1} δ_{m,j−1})] / (Φ_{i,j} − Φ_{i,j−1})².    (27.19)

We then set the J + K − 1 equations in Eq. 27.18 to zero to get the MLE of the unknown parameters. The matrix of second partials should be negative definite to ensure that the solution is a maximum. The computer program NPROBIT, which uses the Newton-Raphson method, can solve the nonlinear equations in Eq. 27.18.
Appendix 2: Ordered Logit Procedure for Credit Rating Forecasting

Consider a latent variable model where Y* is the unobserved dependent variable, X a set of explanatory variables, β an unknown parameter vector, and ε a random disturbance term:

Y* = Xβ + ε    (27.20)

ε is assumed to follow a standard logistic distribution, so the probability density function of ε is

l(ε) = exp(ε) / [1 + exp(ε)]²    (27.21)

and the cumulative density function is

L(ε) = exp(ε) / [1 + exp(ε)].    (27.22)

Y*, the dependent variable of theoretical interest, is unobserved, but from it we can classify the category j:

Y_i = j  if  τ_{j−1} < Y*_i ≤ τ_j   (i = 1, 2, ..., N; j = 1, 2, ..., J)    (27.23)

where Y is the one we do observe, an ordinal version of Y*, and the τ's are unknown parameters satisfying τ_1 ≤ ... ≤ τ_J and to be estimated with β.
From Eqs. 27.20 and 27.22, we form the proportional odds model:

Pr(Y_i ≤ j | X_i) = exp(τ_j − X_iβ) / [1 + exp(τ_j − X_iβ)]    (27.24)

or equivalently

logit(P_{ij}) = log[P_{ij} / (1 − P_{ij})] = log[Pr(Y_i ≤ j | X_i) / Pr(Y_i > j | X_i)] = τ_j − X_iβ    (27.25)

where P_{ij} = Pr(Y_i ≤ j). Notice that in the proportional odds model, β is assumed to be constant and not to depend on j. The validity of this assumption can be checked based on a χ² score test. The model that relaxes the proportional odds assumption can be represented as

logit(P_{ij}) = τ_j − X_iβ_j,    (27.26)

where the regression parameter vector β_j is allowed to vary with j. Both models can be fit through the procedure of maximum likelihood estimation.
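As a rough companion to Eqs. 27.24–27.26, the sketch below compares the slopes of separate binary logits for each cumulative split Pr(Y ≤ j) with the common slope assumed by the proportional odds model. This is an informal check, not the χ² score test mentioned above, and the function and variable names are mine.

```python
# Sketch: informal check of the proportional-odds assumption via cumulative binary logits.
import numpy as np
import statsmodels.api as sm

def cumulative_logit_slopes(y, X, J):
    """y: integer ratings coded 1..J; X: (n, k) array of regressors without a constant.
    Returns a (J-1, k) matrix of slopes; roughly constant rows support proportional odds."""
    Xc = sm.add_constant(X)
    slopes = []
    for j in range(1, J):
        indicator = (np.asarray(y) <= j).astype(int)      # the event {Y_i <= j}
        # Note: a split where the indicator is all 0 or all 1 cannot be fitted and should be skipped.
        fit = sm.Logit(indicator, Xc).fit(disp=0)
        slopes.append(-np.asarray(fit.params)[1:])        # sign flipped to match tau_j - X*beta
    return np.vstack(slopes)
```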
Appendix 3: Procedure for Combining Probability Forecasts

To combine the ordered logit and the ordered probit models for credit forecasting, the logit regression method as described in Kamstra and Kennedy (1998) is applied. We first assume that a firm's credit classification is determined by an index y. Suppose there are J rating classes, ordered from 1 to J. If y exceeds the threshold value τ_j, j = 1, ..., J − 1, the credit classification changes from rating j to rating j + 1. The probability of company i being in rating j is given by the integral of a standard logit from τ_{j−1} − y_i to τ_j − y_i. Each forecasting method is considered as producing J − 1 measures, ω_{ji} = τ_j − y_i, j = 1, ..., J − 1, for each firm. These measures can be estimated as

ω_{ji} = ln[(P_{1i} + ... + P_{ji}) / (1 − P_{1i} − ... − P_{ji})]    (27.27)

where P_{ji} is a probability estimate for firm i in rating j. For firm i we have

P_{ji} = exp(ω_{ji}) / [1 + exp(ω_{ji})]                                                           for j = 1
P_{ji} = exp(ω_{ji}) / [1 + exp(ω_{ji})] − exp(ω_{j−1,i}) / [1 + exp(ω_{j−1,i})]                   for j = 2, ..., J − 1
P_{ji} = 1 / [1 + exp(ω_{J−1,i})]                                                                  for j = J.    (27.28)
In our case, there are two forecasting techniques, A and B. For firm i, the combining probability estimate is

P_{ji} = exp(π_j + π_A ω_{jiA} + π_B ω_{jiB}) / [1 + exp(π_j + π_A ω_{jiA} + π_B ω_{jiB})]   for j = 1
P_{ji} = exp(π_j + π_A ω_{jiA} + π_B ω_{jiB}) / [1 + exp(π_j + π_A ω_{jiA} + π_B ω_{jiB})] − exp(π_{j−1} + π_A ω_{j−1,iA} + π_B ω_{j−1,iB}) / [1 + exp(π_{j−1} + π_A ω_{j−1,iA} + π_B ω_{j−1,iB})]   for j = 2, ..., J − 1
P_{ji} = 1 / [1 + exp(π_{J−1} + π_A ω_{J−1,iA} + π_B ω_{J−1,iB})]   for j = J.    (27.29)

This can be estimated using an ordered logit software package, with the ω_A and ω_B values as explanatory variables and the π_j parameters playing the role of the unknown threshold values.
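A minimal sketch of Eqs. 27.27–27.29 follows (function and parameter names are mine; the numerical values in the example are made up). In practice the π parameters would be estimated by maximum likelihood, as described above.

```python
# Sketch: mapping omega measures back to rating probabilities and forming the combined forecast.
import numpy as np

def probs_from_omega(omega):
    """omega: length J-1 vector for one firm (Eq. 27.27). Returns the J class probabilities of Eq. 27.28."""
    cum = np.exp(omega) / (1.0 + np.exp(omega))        # Pr(Y <= j) for j = 1..J-1
    cum = np.concatenate((cum, [1.0]))
    return np.diff(np.concatenate(([0.0], cum)))

def combined_probs(omega_a, omega_b, pi_thresholds, pi_a, pi_b):
    """Eq. 27.29: combined index pi_j + pi_a*omega_jiA + pi_b*omega_jiB for one firm."""
    index = np.asarray(pi_thresholds) + pi_a * np.asarray(omega_a) + pi_b * np.asarray(omega_b)
    return probs_from_omega(index)

# Illustrative call with J = 4 classes and made-up omegas and weights.
w_a, w_b = [-2.1, -0.3, 1.4], [-1.8, -0.1, 1.1]
print(combined_probs(w_a, w_b, pi_thresholds=[0.2, 0.1, 0.0], pi_a=0.7, pi_b=0.3))
```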
References

Altman, E. I. (1968). Financial ratios, discriminant analysis and the prediction of corporate bankruptcy. Journal of Finance, 23, 589–609.
Altman, E. I., & Katz, S. (1976). Statistical bond rating classification using financial and accounting data. In Proceedings of the conference on topical research in accounting, New York.
Altman, E. I., & Saunders, A. (1997). Credit risk measurement: Developments over the last 20 years. Journal of Banking and Finance, 21, 1721–1742.
Altman, E. I., Haldeman, R. G., & Narayanan, P. (1977). Zeta analysis, a new model to identify bankruptcy risk of corporations. Journal of Banking and Finance, 10, 29–51.
Ananth, C. V., & Kleinbaum, D. G. (1997). Regression models for ordinal responses: A review of methods and applications. International Journal of Epidemiology, 26, 1323–1333.
Beaver, W. (1966). Financial ratios as predictors of failure. Journal of Accounting Research, 4, 71–111.
Blume, M. E., Lim, F., & Mackinlay, A. C. (1998). The declining credit quality of U.S. corporate debt: Myth or reality? Journal of Finance, 53, 1389–1413.
Bonfim, D. (2009). Credit risk drivers: Evaluating the contribution of firm level information and of macroeconomic dynamics. Journal of Banking and Finance, 33, 281–299.
Clemen, R. T. (1989). Combining forecasts: A review and annotated bibliography. International Journal of Forecasting, 5, 559–583.
Crouhy, M., Galai, D., & Mark, R. (2000). A comparative analysis of current credit risk models. Journal of Banking and Finance, 24, 59–117.
Ederington, L. H. (1985). Classification models and bond ratings. The Financial Review, 20, 237–261.
Granger, C. W. (1989). Invited review: Combining forecasts – twenty years later. Journal of Forecasting, 8, 167–173.
Greene, W. H. (2011). Econometric analysis. NJ: Prentice Hall.
Hand, D., & Henley, W. (1997). Statistical classification methods in consumer credit scoring: A review. Journal of the Royal Statistical Society, Series A, 160, 523–541.
Kamstra, M., & Kennedy, P. (1998). Combining qualitative forecasts using logit. International Journal of Forecasting, 14, 83–93.
Kaplan, R. S., & Urwitz, G. (1979). Statistical models of bond ratings: A methodological inquiry. Journal of Business, 52, 231–261.
Lawrence, E. C., & Arshadi, N. (1995). A multinomial logit analysis of problem loan resolution choices in banking. Journal of Money, Credit and Banking, 27, 202–216.
Mahmoud, E. (1984). Accuracy in forecasting: A survey. Journal of Forecasting, 3, 139–159.
McCullagh, P. (1980). Regression models for ordinal data. Journal of the Royal Statistical Society, 42, 109–142.
Ohlson, J. A. (1980). Financial ratios and the probabilistic prediction of bankruptcy. Journal of Accounting Research, 18, 109–131.
Pinches, G. E., & Mingo, K. A. (1973). A multivariate analysis of industrial bond ratings. Journal of Finance, 28, 1–18.
Pompe, P. M., & Bilderbeek, J. (2005). The prediction of bankruptcy of small- and medium-sized industrial firms. Journal of Business Venturing, 20, 847–868.
Wooldridge, J. M. (2010). Econometric analysis of cross section and panel data (2nd ed.). Cambridge, MA: MIT Press.
Zavoina, R., & McKelvey, W. (1975). A statistical model for the analysis of ordinal level dependent variables. Journal of Mathematical Sociology, 4, 103–120.
Zmijewski, M. E. (1984). Methodological issues related to the estimation of financial distress prediction models. Journal of Accounting Research, 22, 59–68.
28 Can We Use the CAPM as an Investment Strategy?: An Intuitive CAPM and Efficiency Test

Fernando Gómez-Bezares, Luis Ferruz, and Maria Vargas
Contents
28.1 Introduction  752
28.2 Data  756
28.3 Methods  759
28.4 Results  762
  28.4.1 Strategy 1: The Investor Buys All the Undervalued Stocks  762
  28.4.2 Strategy 2: The Investor Buys the Three Stock-Quartiles with the Highest Degree of Undervaluation  763
28.5 Conclusion  766
Appendix  767
  Strategy 1: Investor Buys All Undervalued Stocks  768
  Strategy 2: The Investor Buys the Most Undervalued Stocks  774
References  788
The authors wish to thank Professor José Vicente Ugarte for his assistance in the statistical analysis of the data presented in this chapter.

F. Gómez-Bezares
Universidad de Deusto, Bilbao, Spain
e-mail: [email protected]

L. Ferruz (*)
Facultad de Economía y Empresa, Departamento de Contabilidad y Finanzas, Universidad de Zaragoza, Zaragoza, Spain
e-mail: [email protected]

M. Vargas
Universidad de Zaragoza, Zaragoza, Spain
e-mail: [email protected]

Abstract

The aim of this chapter is to check whether certain playing rules, based on the undervaluation concept arising from the CAPM, could be useful as investment strategies, and can therefore be used to beat the Market. If such strategies work, we will be provided with a useful tool for investors, and, otherwise, we will
obtain a test whose results will be connected with the Efficient Market Hypothesis (EMH) and with the CAPM. The basic strategies were set out in Gómez-Bezares, Madariaga, and Santibáñez (Análisis Financiero 68:72–96, 1996). Our purpose now is to reconsider them, to improve the statistical analysis, and to examine a more recent period for our study. The methodology used is both intuitive and rigorous: analyzing how many times we beat the Market with different strategies, in order to check whether beating the Market happens by chance. Furthermore, we set out to study, statistically, when and by how much we beat it, and to analyze whether this is significant.

Keywords
ANOVA • Approximately normal distribution • Binomial distribution • CAPM • Contingency tables • Market efficiency • Nonparametric tests • Performance measures
28.1 Introduction
Finance, as it is currently taught in business schools and brought together in the most prestigious textbooks, places great importance on the Capital Asset Pricing Model (CAPM)1 and on the Efficient Market Hypothesis (EMH)2. For example, Brealey, Myers, and Allen (2008) conclude their famous book by saying that within what we know about Finance, there are seven key ideas: the first of which is Net Present Value (NPV), the second the CAPM, and the third the EMH3; leaving aside for the moment NPV,4 we have as the two clearly outstanding concepts the CAPM and the EMH. In our opinion, asset pricing models (above all the CAPM) and the efficiency of the Markets are among the foundations of the current paradigm which has been in vogue since the 1970s.5 And this is perfectly logical; the financial objective of a company is to maximize its Market value6; to this end it must make investments with a positive NPV, and to calculate the NPV financiers require an asset pricing model such as the CAPM; ultimately the Markets need to be efficient so that they can notice increases in value provided by investments. We can thus see by this simple reasoning how the three concepts highlighted by Brealey, Myers, and Allen are interrelated.

1 Sharpe (1964), Lintner (1965).
2 One of its main defenders has been Fama (1970, 1991, 1998).
3 There are also four other key ideas.
4 A fundamental concept, predating the other two and clearly related to the CAPM and the EMH.
5 See Gómez-Bezares and Gómez-Bezares (2006). Also of interest could be Dimson and Mussavian (1998, 1999).
6 The interesting work by Danielson, Heck, and Shaffer (2008) is also worth looking at.
An alternative way of viewing this is to say that Finance, as it is currently understood, is the science of pricing; we have to valuate so that we can know which decisions will result in the greatest added value, and to valuate we need asset pricing models (like the CAPM). Finally, the result of our labor will be recognized by the Market, if the Market is efficient and values more highly those stocks with a higher intrinsic value. The Golden Age of these two principles was the decade of the 1970s, appearing in works such as Fama (1970), Black, Jensen, and Scholes (1972) and Fama and MacBeth (1973): the Market correctly values the assets (the EMH) and we have a good model of asset pricing (the CAPM). However, since then there have been many critiques of both of these principles: the efficient Market principle has been criticized by psychologists and behavioral economists, who question the rationality of human beings, as well as by econometricians, who claim that prices are susceptible to prediction (Malkiel 2003). There is ample criticism in the literature, but the EMH also has important apologists; among the classic defenders are the works of Fama (1991, 1998) as well as the very noteworthy and recent study by Fama and French (2010), in which they show that portfolio managers, on average, do not beat the Market, thereby proving that the Market is indeed efficient. The detractors of the efficient Market hypothesis (currently, above all, psychologists and behavioral economists) underline again and again the inefficiencies they observe. These problems with the model, when they are seen to occur repeatedly, are termed “anomalies” (for a summary, see Malkiel 2003). Fama and French (2008) studied different anomalies, concluding that some are more significant than others. However, they concluded that we cannot be sure whether to attribute abnormal returns to inefficiencies of the Market or to rewards for risk-taking. This brings us back to the debate regarding asset pricing models. According to the CAPM, the expected return on an asset should be a positive linear function of its systematic risk measured by beta, which is the sole measurement of risk. This statement has been questioned in many ways; one very important contribution in this respect is that of Fama and French (1992) in which they comment that the beta can contribute little to an explanation of expected returns and, in fact, that there are other variables that can explain a lot more. Their work gave rise to the famous Fama and French three-factor model. The existence of variables other than the beta which can help to explain mean returns is not in itself an argument against the efficiency of the Market if we consider them to be risk factors: as risk is our enemy we demand higher returns for higher risk, and we can measure this risk in terms of a range of factors. This approach may be compatible with Arbitrage Pricing Theory (APT), but it conflicts with the CAPM. But we could also consider that the problem is that the Markets are inefficient, and hence expected returns respond to variables that are not risk factors, variables to which they should not react. Let us take as example momentum, which tells us that recent past high (or low) returns can help to predict high (or low) returns in the near future. Some may think that this could be used as a risk factor to explain returns whereas others would say that it is an inefficiency of the Market. The truth is that
the asset pricing model and efficiency are tested jointly (Fama 1970, 1991, 1998), in such a way that, when they work, they both work,7 but when they fail we cannot know which of them has failed.8
The CAPM tests fall far short of giving definitive results. Although some have given it up for dead and buried because it cannot fit the data or because of theoretical problems, one recent magnificent study by Levy (2010) rejects the theoretical problems posed by psychologists and behavioral economists and, based on ex ante data, says that there is experimental support for the CAPM. Brav, Lehavy, and Michaely (2005), in an approach related to the above, set out to test the model, not by looking at past returns (as a proxy of expected returns) but instead at the expectations of Market analysts, on the basis that these may be assumed to be unbiased estimates of the Market expectations. From their analysis, they found a positive relationship between expected returns and betas.
Asset pricing models (such as the CAPM) look at the relationship between return and risk. Recently, there has been more focus on liquidity. Liu (2006) builds an enhanced CAPM based on two factors, the Market and liquidity, and obtains their corresponding factorial weightings, which he uses to explain expected returns. The model works correctly and allows him to account for several anomalies; according to Liu, his enhanced CAPM lends new support to the risk-return paradigm.
The methods most commonly used to test the CAPM are time-series and cross-sectional studies, which each present different statistical problems. What we set out to achieve in this chapter is to replicate a simple and intuitive but also rigorous methodology that we believe is much less problematic from a statistical point of view. The procedure was first proposed by Gómez-Bezares, Madariaga, and Santibáñez (1996), and in this chapter we attempt to replicate it with a more updated sample and greater statistical rigor.
The basic idea is simple: an individual who knows the CAPM calculates at the beginning of each month the return that each stock has rendered in the past and compares it with the return it ought to have given according to the CAPM. The stocks which gave higher returns than they ought to have are cheap (undervalued), while those which gave lower returns are expensive (overvalued). If we assume that return levels are going to persist, the investor should buy the cheap stocks and sell off the expensive ones in order to beat the Market. We illustrate this line of reasoning in Fig. 28.1.
In Fig. 28.1, the betas appear on the horizontal axis and the expected return on the vertical axis (Rf is the riskless rate; Rm is the Market return). We also show the Security Market Line (SML), with its formula above it, which is the formula for the CAPM. Our investor decides to buy the stocks which are cheap and to refrain from selling short the expensive stocks (due to the limitations on short selling). Based on the data at his disposal, the stocks which have yielded more than the SML indicates are undervalued, that is to say, they are above
7 We refer to the situation we have described previously, which occurred at the beginning of the 1970s.
8 See also Copeland, Weston, and Shastri (2005, p. 244).
[Fig. 28.1 The Security Market Line (SML), E(Ri) = Rf + [E(Rm) − Rf] βi. The betas appear on the horizontal axis, and the expected return on the vertical axis.]
the SML. Believing that this situation will continue, he will buy all of these stocks, trusting that he will obtain an adjusted return higher than the Market return. If this happens (which he can find out by using Jensen's alpha), and this strategy consistently proves to result in abnormal returns (positive Jensen alphas or, to put it another way, the stocks stay above the SML in the next period), he will have found a way to beat the Market; therefore, it is not efficient. Moreover, he will be able to predict which stocks will outperform the SML in the future and therefore fail to comply with the CAPM, which is evidence against the CAPM. There would then be one way left to save the EMH: risk should not be measured using the beta (i.e., the CAPM is mistaken), and therefore a positive value for alpha is not synonymous with beating the Market (which could not be done consistently in an efficient Market). We could also save the CAPM as follows: risk must be measured with the beta; however, the Market fails to value stocks accurately and hence it can be beaten. What we cannot do is save both the EMH and the CAPM simultaneously.
On the other hand, if our investor cannot consistently beat the Market with his strategy but only achieves a success rate of around 50 %, we must conclude that the portfolio assembled with the previously undervalued stocks sometimes performs better than the SML and other times not as well, purely by chance, and then settles down to the value it ought to have according to the CAPM. We will not have a Market-beating strategy, and the results are compatible with the EMH; likewise, we will see that on a random basis the portfolios perform better than the SML at times and worse at others, which is compatible with the CAPM (every month there may be random oscillations around the SML); therefore, two key elements of the aforementioned paradigm would be rescued.
In our research, we use Jensen's (1968, 1969) and Treynor's (1965) indices, since they consider systematic risk. These indices have been very widely used in the literature.9
9 Recent studies that use these indices are, for example, Samarakoon and Hasan (2005), Abdel-Kader and Qing (2007), Pasaribu (2009), Mazumder and Varela (2010), and Ferruz, Gómez-Bezares, and Vargas (2010).
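As a hedged illustration of the quantities this strategy relies on — the beta of a stock, its SML-implied return, Jensen's alpha, and the Treynor index — the following sketch computes them from hypothetical monthly return series; it is illustrative only and is not the authors' code.

```python
# Sketch: beta, SML-implied return, Jensen's alpha, and the Treynor index from past returns.
import numpy as np

def capm_diagnostics(stock, market, rf):
    """stock, market, rf: arrays of monthly returns over the estimation window."""
    ex_stock = np.asarray(stock) - np.asarray(rf)
    ex_market = np.asarray(market) - np.asarray(rf)
    beta = np.cov(ex_stock, ex_market)[0, 1] / np.var(ex_market, ddof=1)
    sml_return = rf.mean() + beta * ex_market.mean()     # return the CAPM says the stock "ought" to give
    alpha = ex_stock.mean() - beta * ex_market.mean()    # Jensen's alpha
    treynor = ex_stock.mean() / beta                     # Treynor index
    return {"beta": beta, "SML return": sml_return, "Jensen alpha": alpha,
            "Treynor": treynor, "undervalued": alpha > 0}   # above the SML -> candidate to buy

# Example with 36 months of made-up data.
rng = np.random.default_rng(1)
rm = rng.normal(0.008, 0.05, 36)
ri = 0.002 + 1.2 * rm + rng.normal(0, 0.03, 36)
rf = np.full(36, 0.003)
print(capm_diagnostics(ri, rm, rf))
```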
One of the virtues of this method is that it is perfectly replicable, as we have hypothesized an investor who makes decisions based on information that is in the public domain whenever he makes them. Meanwhile, we feel, reasonably enough, that the beta values and the risk premium may vary over time. We also use the most liquid stocks (to avoid problems with lack of liquidity) and portfolios (which reduces the measurement problems). The results of our analysis clearly indicate that our strategy is not capable of beating the Market consistently, and therefore it is compatible with both the CAPM and the EMH. This result, reached via different routes (which supports its robustness), is consistent with the results reported by Fama and French (2010) and Levy (2010), although they used very different methods. Doubtless, both the CAPM and the EMH are simplifications of reality, but they help us to explain that reality.
The rest of our chapter is organized as follows: in Sect. 28.2, we describe the sample analyzed, with information about the assets that it comprises, and we comment on the method used to calculate the returns and betas; in Sect. 28.3, we comment on the method employed, with reference to the proposed strategies and how these were formulated; we then go on to analyze the scenarios in which the strategies succeed in beating the Market and we describe the statistical tests used for the analyses; in Sect. 28.4, we show the results obtained by the different strategies, we analyze the number of occasions on which we were able to beat the Market and whether these results occur by chance, and, furthermore, we analyze when and by what magnitude we beat the Market; in Sect. 28.5, we summarize the main conclusions drawn, highlighting the implications of our results in relation to the possibilities of using strategies to beat the Market according to the CAPM, and, therefore, we set out our conclusions as to the efficiency of the Market and the validity of the CAPM. In addition, there is a list of the references on which our research is based and an Appendix which contains, in greater detail, the statistical evidence gathered during our study.
28.2 Data
Our analysis is based on the 35 securities that comprise the IBEX 35 (the official index of the Spanish Stock Exchange Market) in each month, from practically the beginning of trading in the Continuous Market in Spain (1989) up to March 2007.10 We chose this index because it includes the 35 most liquid companies in the Spanish Market and, taking into account that the CAPM in the Sharpe-Lintner version is valid for liquid companies, it would have been difficult to find such companies in Spain outside the IBEX 35.
10 We have not extended our analysis beyond March 2007 in order to exclude the period of the global financial crisis, which had a very damaging effect on major companies listed on the IBEX 35, such as banks, as this could have distorted our results.
The IBEX 35 is a capitalization-weighted index comprising the 35 most liquid companies which quote in the Continuous Market in Spain. It fulfills all of the criteria required of an indicator which aspires to be a benchmark for trading: it is representative (almost 90 % of the trading volume in the Continuous Market and approximately 80 % of the Market capitalization is held by the 35 firms listed on the IBEX 35), it can be replicated (ease of replication), its computation is guaranteed, and it is widely publicized and impartial (supervised by independent experts). The selection criteria for securities listed on the IBEX 35 are as follows:
• Securities must be included in the SIBE trading system.
• Stocks must be representative, with a large Market capitalization and trading volume.
• There must be a number of "floating" stocks which is sufficient to ensure that the index's Market capitalization is sufficiently widespread and allows for hedge and arbitrage strategies in the Market for derivatives on the IBEX 35.
• The average Market capitalization of a stock measurable on the index11 must be greater than 0.30 % of the average capitalization of the index during the monitoring period.12
• The stock must have been traded in at least 1/3 of the sessions during the monitoring period or be among the top 15 stocks measured in terms of Market capitalization.
The IBEX 35 is revised on a half-yearly basis in terms of its composition and the number of stocks considered of each firm; nevertheless, if financial operations are carried out which significantly affect any of the listed stocks, it can be adjusted accordingly. In general, the index is adjusted when there are increases in capital with preemptive rights, extraordinary dividend distributions, stock integrations as a result of increases in capital excluding preemptive rights, reductions in capital due to stock redemptions, capital reductions against own funds with distribution of the value to the shareholders (this is not the payment of an ordinary dividend), as well as mergers, takeovers, and demergers. Special adjustments of the index are often carried out. In the 6-monthly selection of the 35 most liquid stocks, there is no minimum or maximum number of adjustments made with regard to the previous period. No changes at all may be required, or as many adjustments as necessary may be made, based on the results obtained when measuring liquidity.
Having described the context in which we intend to carry out our study, we will propose a series of strategies that will enable us to test the validity of the CAPM and the degree to which the Market is efficient. For this we will take a hypothetical investor who, each month, examines the 35 stocks of the IBEX 35 listed at that moment and for which there are at least 36 monthly data available prior to that moment.

11 The average Market capitalization of a stock in the Index is the arithmetic average of the result we obtain when multiplying the stocks allowed to be traded in each session during the monitoring period by the closing price of the stock in each of those sessions.
12 The monitoring period is the 6-month period ending prior to each ordinary meeting of the Commission.
The data on returns for all the stocks were obtained from Bloomberg. In Table 28.1, we show the annual descriptive statistics for our database. There are two panels in Table 28.1: in Panel A we show the descriptive statistics for the monthly returns of the previous period (December 1989 to November 1992). In this period the index did not yet exist as such, so we look at the returns of those stocks which were included in the IBEX 35 at the beginning of the contrasting period and which provide data in the corresponding month because they were already quoted in the Continuous Market. In Panel B, we bring together the descriptive
Table 28.1 Annual descriptive statistics for the monthly returns on the stocks

PANEL A: PRECEDING PERIOD
YEAR                 MEAN        MEDIAN      MAXIMUM    MINIMUM     St. Dev.
DEC 1989/NOV 1990    0.187118    −0.002995   11.17      −0.359280   1.195
DEC 1990/NOV 1991    0.010761    0.0000      0.470758   −0.288640   0.095169
DEC 1991/NOV 1992    −0.013669   −0.011575   0.448602   −0.375127   0.113660
DEC 1989/NOV 1992    0.054038    −0.004545   11.17      −0.375127   0.661538

PANEL B: CONTRASTING PERIOD
YEAR                 MEAN        MEDIAN      MAXIMUM    MINIMUM     St. Dev.
DEC 1992/NOV 1993    0.035059    0.028576    0.343220   −0.237244   0.083789
DEC 1993/NOV 1994    0.009628    −0.000857   0.510076   −0.604553   0.101984
DEC 1994/NOV 1995    0.006480    0.002051    0.554368   −0.220966   0.077322
DEC 1995/NOV 1996    0.022723    0.022042    0.204344   −0.164185   0.062786
DEC 1996/NOV 1997    0.038109    0.034819    0.429381   −0.308638   0.097809
DEC 1997/NOV 1998    0.027631    0.028225    0.432660   −0.331141   0.120737
DEC 1998/NOV 1999    −0.001611   −0.003229   0.351075   −0.223282   0.080742
DEC 1999/NOV 2000    0.002103    −0.005529   0.556553   −0.357094   0.110788
DEC 2000/NOV 2001    0.003081    0.000000    0.468603   −0.282024   0.098028
DEC 2001/NOV 2002    −0.005393   −0.002908   0.369085   −0.403408   0.104471
DEC 2002/NOV 2003    0.013977    0.016353    0.551835   −0.347280   0.083299
DEC 2003/NOV 2004    0.018440    0.015852    0.297322   −0.147819   0.050610
DEC 2004/NOV 2005    0.023066    0.014685    0.257841   −0.110474   0.057863
DEC 2005/NOV 2006    0.029634    0.022335    0.372409   −0.169830   0.062640
DEC 2006/MAR 2007    0.021989    0.013010    0.264933   −0.285306   0.068472
DEC 1992/MAR 2007    0.016678    0.012686    0.556553   −0.604553   0.087537
Table 28.1 reports the annual descriptive statistics for our database. In Panel A, we show the figures for the monthly returns of the previous period and in Panel B the figures for the monthly returns of the contrasting period. For Panel B, we consider only the returns on stocks which we have included in our study, that is to say, those stocks which were part of the IBEX 35 at a given time and which had a track record of monthly returns of at least 36 months. For Panel A, as the index as such did not yet exist, we include those stocks listed on the IBEX 35 at the beginning of the contrasting period for which there is data in the corresponding previous-period month; for example, the stocks we included for the month of December 1989 are those which were quoted on the IBEX 35 in December 1992 but which were also quoted in the Continuous Market back in December 1989. For the subsequent months the number of stocks considered in the study rises steadily, since the number of stocks quoted continued to grow.
data for the monthly returns during the contrasting period, that is, it includes the returns on the stocks which are included in our study (namely, the stocks which comprised the IBEX 35 at each moment and for which there were at least 36 prior quotations). We built the Market portfolio,13 and as a risk-free asset we took the monthly return on the 1-month Treasury Bills.
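As a rough sketch of how such an equally weighted Market portfolio can be assembled (the DataFrame total_returns, holding dividend- and split-adjusted monthly returns for the stocks belonging to the IBEX 35 in each month, is a hypothetical input and not part of the original dataset):

import pandas as pd

# total_returns: one row per month, one column per stock; NaN where the stock
# is not an IBEX 35 constituent that month (returns already corrected for
# dividends, splits, and preemptive rights).
def market_portfolio_return(total_returns: pd.DataFrame) -> pd.Series:
    # Simple (equally weighted) average across the constituents of each month
    return total_returns.mean(axis=1, skipna=True)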
28.3 Methods
The aim of our study is to check whether it is possible to obtain abnormal returns using the CAPM, that is to say, whether the returns derived from the use of the model are greater than those which could be expected according to the degree of systematic risk undertaken. To this end, we analyzed the period December 1989 to March 2007. The study focuses on the 35 securities comprising the IBEX 35 at any given moment during that period. From this starting point, we propose two possible strategies for testing the efficient Market hypothesis and the validity of the CAPM: the first strategy assumes an individual who adjusts his portfolio at the end of each month, selling those stocks he bought at the end of the previous month and buying those he considers to be undervalued according to the CAPM; the second strategy is similar to the first, but with the distinction that in this case the investor does not buy all of the undervalued stocks but just 75 % of them, namely, those which are furthest removed from the SML. We use two methods to determine which securities are the most undervalued: Jensen's index and Treynor's ratio. The reason for using a second method is the critique by Modigliani (1997) of Jensen's alpha, in which she argues that differences in return cannot be compared when the risks are significantly different. To conduct our analysis, we begin by dividing the overall period analyzed in our study (December 1989 to March 2007) into two subperiods: the first (December 1989 to November 1992) allows us to calculate the betas of the model at the beginning of the second (we carried out this calculation with the previous 36 months of data); the second subperiod (December 1992 to March 2007)14 allows us to carry out the contrast.
13. This has been constructed as a simple average of returns for the stocks comprising the IBEX 35 each month, corrected for dividend distributions, splits, etc. Of course, in this case, it is not necessary that there be at least 36 months of prior quotations for the stocks to be included in the portfolio.
14. Initially, we intended to let the beginning of the second subperiod be the launch date of the IBEX 35 (January 1992); however, given the requirement that we set ourselves of a prior 36-month period to compute the betas of the model, we had to move this forward to December. Likewise, it was originally our intention that the beginning of the first subperiod be the launch date of the Continuous Market (April 1989), but most of the securities included in our study were listed for the first time in December 1989, so data could only be obtained from that date onward.
So our hypothetical investor will, firstly, at the end of each month, observe the stocks that comprise the IBEX 35 at that moment15 and buy those which are undervalued (with our second analysis method he buys just the 75 % of the stocks which are the most undervalued). A stock is undervalued if its return is higher than what it ought to be according to the CAPM. To determine this, we compute the monthly beta of each stock ($\beta_i$) during the contrasting period by regressing the monthly returns on each stock16 on the returns on the Market portfolio17 over the 36 months immediately prior to that date. Then, we calculate the mean monthly returns on stock i ($\bar{R}_i$), on the Market portfolio ($\bar{R}_M$), and on the risk-free asset ($\bar{R}_F$) during the same period as simple averages of the corresponding 36 monthly returns. From all of the foregoing we can compute the mean monthly return which, according to the CAPM, the stock ought to have gained during this 36-month period, which we denote $\bar{R}_i^{*}$:

$$\bar{R}_i^{*} = \bar{R}_F + (\bar{R}_M - \bar{R}_F)\,\beta_i \qquad (28.1)$$

We then compare this with the actual return gained ($\bar{R}_i$) to determine whether the stock is undervalued. Once we have determined which stocks are undervalued, according to our first strategy the investor buys all of the undervalued stocks each month,18 while with the second strategy he disregards the first quartile and buys only the 75 % most undervalued stocks. The quartiles are assembled based on either Jensen's alphas or Treynor's ratio.19 The next step consists in evaluating how well the CAPM functions; if the undervalued stocks continue to be undervalued indefinitely, we would expect any portfolio assembled with them to beat the Market in a given month, but this will not occur if in that month the CAPM operates perfectly.
15. And for which there is a minimum of 36 monthly data prior to the month analyzed, so that the beta can be calculated. In the case of companies which have recently merged, the beta is calculated by looking at the weighted average monthly returns on the companies prior to the merger. The weighting is proportional to the relative importance of each company involved in the merger.
16. We look at the return obtained by our investor from capital gains, dividends, and the sale of preemptive rights. The returns are also adjusted to take splits into account.
17. The return on the Market portfolio is calculated as a simple average of the returns gained by the stocks which comprise the IBEX 35 in each month. The reason we decided to build our own portfolio rather than relying on a stock Market index such as the IBEX 35 itself is that we wanted to obtain an index corrected for capital gains, dividends, splits, and preemptive rights. Moreover, an equally weighted portfolio is theoretically superior.
18. Overvalued stocks are not included in our analysis as we decided to exclude short selling.
19. In this second variant, although the quartiles are built based on Treynor's ratio, we continue to use Jensen's index to work out whether or not a stock is undervalued. This is because of the problems we encountered when determining which stocks are undervalued using Treynor's ratio, related to the return premiums and the negative betas which appear in some cases in our database. Naturally, we take on board the problems of using Treynor's ratio to construct the quartiles, as there could be stocks with positive alphas and low Treynor ratios, which would exclude such stocks from the analysis. This is due to the form of Treynor's ratio as a quotient.
Jensen's index allows us to determine whether we have been able to beat the Market, in which case it must yield a positive result:

$$a_p = R_p - R_F - \beta_p \left( R_M - R_F \right) \qquad (28.2)$$

where $\beta_p$ is the beta of the portfolio, calculated as a simple average of the individual betas20 of the stocks included in the portfolio (all of the undervalued stocks or, alternatively, the top 75 % thereof, as outlined above); $R_p$ is the return on the portfolio, calculated as the simple average of the individual returns obtained each month for the stocks that comprise it; $R_M$ and $R_F$ are the monthly returns on the Market portfolio and the risk-free asset, respectively, for the month in question; and $a_p$ is Jensen's alpha for portfolio p. The Z-statistic, which follows an approximately standard normal distribution N(0,1), allows us to determine whether the number of months in which our investor beats the Market is due to chance:

$$Z = \frac{Y - np}{\sqrt{np(1-p)}} \qquad (28.3)$$
Y indicates the number of periods in which the portfolio comprising undervalued stocks beats the Market, n represents the number of months analyzed, and p is given a value of 0.5, as this is the probability of beating the Market if the CAPM and the efficient Market hypothesis are fulfilled,21 and we want to find out whether the difference (Y – np) is due to chance. A value of |Z| > 1.96 would lead us to reject, at a significance level of 5 %, the null hypothesis that the difference is due to chance. Moreover, in order to make our results more robust, we developed a series of statistical tests which are presented in the Appendix. We carried out a mean test for each of the proposed strategies to ascertain whether we beat the Market or not, both parametric (which assumes normality) and nonparametric (Wilcoxon's test); in addition we tested the hypothesis that we could beat the Market 50 % of the time, using a binomial test. We also analyzed, on a monthly and on a yearly basis, whether or not there were differences between the performance means achieved in each month/year, using ANOVA, a nonparametric test (Kruskal–Wallis), and the technique of contingency tables with the chi-square
20. Which have been calculated with the 36 previous months' data.
21. If the CAPM and the efficient Market hypothesis are fulfilled exactly, the results of any portfolio (adjusted for risk) should match, as an average, with those of the Market. Here we suppose that, by pure chance, for any given month, they will end up 50 % above and 50 % below the Market average.
Pearson test together with the likelihood ratio. Finally, we ran a regression with the performance measure as the dependent variable and time (in months) as the independent variable, to see whether the performance improved or worsened over time. These tests allow us to confirm our results, which we present in the next section.
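To make the monthly procedure concrete, the following minimal sketch (in Python) applies Eq. 28.1 for one formation date. The inputs stock_returns, market, and riskfree are hypothetical monthly return series, not part of the original study, and the covariance-to-variance ratio used for beta is simply the OLS slope of the regression described above.

import numpy as np
import pandas as pd

def screen_undervalued(stock_returns: pd.DataFrame, market: pd.Series,
                       riskfree: pd.Series, month, window: int = 36) -> pd.DataFrame:
    """Flag the stocks whose mean monthly return over the prior `window`
    months exceeds the return implied by the CAPM (Eq. 28.1)."""
    past = stock_returns.loc[:month].iloc[-window:]          # most recent 36 months
    rm, rf = market.loc[past.index], riskfree.loc[past.index]
    picks = {}
    for stock in past.columns:
        ri = past[stock].dropna()
        if len(ri) < window:                                  # needs a full 36-month history
            continue
        beta = np.cov(ri, rm)[0, 1] / rm.var()                # OLS slope = beta_i
        r_capm = rf.mean() + (rm.mean() - rf.mean()) * beta   # Eq. (28.1)
        alpha = ri.mean() - r_capm                            # distance from the SML
        if alpha > 0:                                         # undervalued stock
            picks[stock] = {"beta": beta, "alpha": alpha,
                            # Treynor ratio (problematic when beta < 0; see footnote 19)
                            "treynor": (ri.mean() - rf.mean()) / beta}
    return pd.DataFrame(picks).T

Under Strategy 1 the investor would simply buy every stock returned by this screen; the Z-statistic of Eq. 28.3 is then computed from the number of months in which the resulting portfolio's Jensen alpha (Eq. 28.2) turns out positive.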
28.4 Results
28.4.1 Strategy 1: The Investor Buys All the Undervalued Stocks
In this first strategy, the investor observes, in a given month, the track record (for the previous 36 months) of the 35 stocks which comprise the IBEX 35 at the moment in question,22 and buys all those that are undervalued according to Jensen's index. The next step is to check whether he has managed to beat the Market with the portfolio of undervalued stocks. Table 28.2 shows the results of this strategy. We show, for each of the 14 years in our study and for the remaining 4 months, the number of months (and the percentage) in which the investor's portfolio has beaten the Market. In addition, we show this result for the whole period. Furthermore, and also with respect to the whole period, we show the value of the Z-statistic. As we can see in Table 28.2, in only 92 of the 172 months analyzed in our study is the Market beaten, which is equal to 53.49 %;23 moreover, the value of the Z-statistic (which is below 1.96) confirms that the CAPM does not offer a significantly better strategy than investing in the Market portfolio. In other words, we could say that the success of the investor (in the months in which he succeeds in beating the Market) could be due to chance. This conclusion is further confirmed in the Appendix by the mean tests, which allow us to accept that the mean of Jensen's alphas is zero, and by the binomial test, which allows us to accept a success probability rate of 50 %. If we now focus on the analysis of each year in our time frame, we can see that in only two of the 14 years covered in our study, to be precise from December 2004 to November 2006, does the CAPM prove to be a useful tool allowing the investor to beat the Market; in fact, in 75 % and 83.33 % of the months of each of those 2 years our investor beats the Market, which in those years points to a poor performance of the model or to the inefficiency of the Market.
22. Which have a track record of at least 36 months in the Market. If, in a given month, there is a stock which is listed on the IBEX 35 but does not have at least 36 months of prior data, it will be included in our study in the month in which it reaches this figure, unless by that stage it has already been deleted from the IBEX 35.
23. We have not included the transaction costs, which undoubtedly would be higher for the managed portfolio and, therefore, would make that strategy less attractive.
Table 28.2 Results of Strategy 1 (buying all of the undervalued stocks)

YEAR                 No. of successful months   % success
DEC 1992/NOV 1993    4                          33.33%
DEC 1993/NOV 1994    4                          33.33%
DEC 1994/NOV 1995    5                          41.67%
DEC 1995/NOV 1996    7                          58.33%
DEC 1996/NOV 1997    7                          58.33%
DEC 1997/NOV 1998    6                          50%
DEC 1998/NOV 1999    7                          58.33%
DEC 1999/NOV 2000    6                          50%
DEC 2000/NOV 2001    6                          50%
DEC 2001/NOV 2002    5                          41.67%
DEC 2002/NOV 2003    7                          58.33%
DEC 2003/NOV 2004    7                          58.33%
DEC 2004/NOV 2005    9                          75%
DEC 2005/NOV 2006    10                         83.33%
DEC 2006/MAR 2007    2                          50%
DEC 1992/MAR 2007    92                         53.49%       Z-statistic: 0.91
In Table 28.2, we bring together the results of a strategy in which our hypothetical investor buys all the undervalued stocks on the Index. To be specific, we provide for each of the 14 years in our study as well as for the other 4 months, the number of months (and the percentage) in which the portfolio beats the Market. We also show this result for the period as a whole. Moreover, for the period as a whole, we show the result for the Z-statistic
In the first 2 years (from December 1992 to November 1994), the opposite happens: the investor's strategy beats the Market in just 33.33 % of the months. In those years it seems, therefore, that he could have used the opposite strategy to beat it. The statistical tests carried out and included in the Appendix do not provide us with clear conclusions as to the basis for the differences in the performance of the investor's strategy between the different months and years, because the different tests used gave us different results. However, the regression analysis does confirm that the performance of the portfolio improves over time, as we obtain a positive and significant beta.
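As a check on the Z-statistic reported in Table 28.2, substituting Y = 92 successful months, n = 172, and p = 0.5 into Eq. 28.3 gives

$$Z = \frac{92 - 172 \times 0.5}{\sqrt{172 \times 0.5 \times 0.5}} = \frac{6}{6.56} \approx 0.91,$$

comfortably below the 1.96 threshold, so the excess of successful months over the 50 % benchmark is compatible with pure chance.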
28.4.2 Strategy 2: The Investor Buys the Three Stock-Quartiles with the Highest Degree of Undervaluation
With this strategy, which has two variants, a stock is classified as undervalued just as in the previous strategy: by the difference between the mean monthly return actually achieved by the stock up to a given month (using information on the previous 36 months) and the return it ought to have achieved according to the CAPM. However, with this second strategy the investor disregards the first quartile of undervalued stocks and buys the remaining 75 %, those that are the most undervalued.
In the 1st variant of this second strategy, the ranking of the stocks (in each month) by quartiles is done using Jensen’s index and in the 2nd variant it is done using Treynor’s ratio.
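A minimal sketch of this selection rule, continuing the hypothetical screen_undervalued output introduced in Sect. 28.3 (the column names alpha and treynor are assumptions of that sketch, not of the original study):

import numpy as np
import pandas as pd

def top_three_quartiles(picks: pd.DataFrame, by: str = "alpha") -> pd.DataFrame:
    """Strategy 2: discard the quartile of undervalued stocks closest to the SML
    and keep the 75 % that are most undervalued.
    by="alpha"   ranks on Jensen's alpha   (1st variant);
    by="treynor" ranks on Treynor's ratio  (2nd variant)."""
    ranked = picks.sort_values(by, ascending=False)   # most undervalued first
    keep = int(np.ceil(0.75 * len(ranked)))           # top three quartiles
    return ranked.iloc[:keep]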
28.4.2.1 Strategy 2, 1st Variant: The Quartiles Are Built Based on Jensen's Index
In Table 28.3, we show the results of this strategy. The results, although slightly better, are not very different from those achieved by the previous strategy, as once again the CAPM proves not to be a useful tool for beating the Market. In only 94 of the 172 months in our study (54.65 %) does the strategy manage to beat the Market. The value of the Z-statistic again confirms this result. And again, the mean tests show that the mean of the alphas for the investor's portfolio can be zero, while the binomial test shows that the probability of success in beating the Market can be 50 % (see Appendix). There is also a slight improvement in the results by years in comparison with the results achieved by the previous strategy: the CAPM proves to be a useful Market-beating tool in 5 of the 14 years analyzed (December 1995 to November 1996, December 1998 to November 1999, December 2002 to November 2003, December 2004 to November 2005, and December 2005 to November 2006), which implies that the model either works badly or that the Market is inefficient, since it is possible to beat it in most months of those years.

Table 28.3 Results of Strategy 2, 1st Variant (buying the top three quartiles of the most undervalued stocks, constructed based on Jensen's Index)

YEAR                 No. of successful months   % success
DEC 1992/NOV 1993    3                          25%
DEC 1993/NOV 1994    4                          33.33%
DEC 1994/NOV 1995    5                          41.67%
DEC 1995/NOV 1996    8                          66.67%
DEC 1996/NOV 1997    6                          50%
DEC 1997/NOV 1998    7                          58.33%
DEC 1998/NOV 1999    8                          66.67%
DEC 1999/NOV 2000    6                          50%
DEC 2000/NOV 2001    5                          41.67%
DEC 2001/NOV 2002    5                          41.67%
DEC 2002/NOV 2003    9                          75%
DEC 2003/NOV 2004    6                          50%
DEC 2004/NOV 2005    10                         83.33%
DEC 2005/NOV 2006    10                         83.33%
DEC 2006/MAR 2007    2                          50%
DEC 1992/MAR 2007    94                         54.65%       Z-statistic: 1.22
Table 28.3 shows the results of the strategy in which the investor disregards the first quartile of undervalued stocks and buys the remaining 75 %, which are the stocks that are most undervalued. The quartiles are built using Jensen’s Index
The opposite results are obtained for the first 2 years (from December 1992 to November 1994), in which the percentages of success (25 % and 33.33 %, respectively) are the lowest. The ANOVA, the Kruskal–Wallis test, and the contingency table with Pearson’s chi-square-statistic and the likelihood ratio all lead us to the conclusion24 that there are no differences between the performance means achieved by the portfolio in the months in our analysis, so the small differences could be due to mere chance. However, these tests do not allow us to offer conclusions when we look at the years that comprise our sample, as different conclusions can be drawn from the different tests used. Moreover, the regression analysis once again allows us to confirm that, as with the previous strategy, the performance of the portfolio improves over time.
28.4.2.2 Strategy 2, 2nd Variant: The Quartiles Are Constructed Based on Treynor's Ratio
In Table 28.4, we show the results of this strategy. The conclusions are very similar to those which the two previous strategies lead us to; thus we can see, for the whole period, a scant success rate for the strategy.

Table 28.4 Results of Strategy 2, 2nd Variant (buying the top three quartiles of the most undervalued stocks, constructed based on Treynor's Ratio)

YEAR                 No. of successful months   % success
DEC 1992/NOV 1993    3                          25%
DEC 1993/NOV 1994    4                          33.33%
DEC 1994/NOV 1995    7                          58.33%
DEC 1995/NOV 1996    7                          58.33%
DEC 1996/NOV 1997    6                          50%
DEC 1997/NOV 1998    7                          58.33%
DEC 1998/NOV 1999    7                          58.33%
DEC 1999/NOV 2000    7                          58.33%
DEC 2000/NOV 2001    4                          33.33%
DEC 2001/NOV 2002    5                          41.67%
DEC 2002/NOV 2003    8                          66.67%
DEC 2003/NOV 2004    6                          50%
DEC 2004/NOV 2005    10                         83.33%
DEC 2005/NOV 2006    10                         83.33%
DEC 2006/MAR 2007    2                          50%
DEC 1992/MAR 2007    93                         54.07%       Z-statistic: 1.07
Table 28.4 shows the results of the strategy in which the investor disregards the first quartile of undervalued stocks and buys the remaining 75 %, which are the stocks that are most undervalued. The quartiles are built using Treynor’s Ratio
24. See Appendix.
Moreover, the value of the Z-statistic once again leads us to accept the hypothesis that any possible success or failure is due to chance. Meanwhile, the tests included in the Appendix confirm the previous results: the mean tests support the interpretation that the mean performance of the portfolios is zero, so we do not beat the Market, and the binomial test suggests that the strategy's success rate is 50 %. Focusing now on the analysis by years, there are 3 years (December 2002 to November 2003 and December 2004 to November 2006) in which we can state that the Market is not efficient or that the model does not work well, since in those years the CAPM seems to be a useful Market-beating tool. However, the first 2 years of our study, as well as the ninth year, deliver the lowest success rates, which does not allow us to confirm the efficient Market hypothesis either, since by using the opposite strategy we could have beaten the Market. With regard to the statistical tests, shown in the Appendix, that analyze the robustness of the above results across the various years and months in our database, we find that there are no significant differences between the performance means achieved by the portfolio in the different months, so any difference can be due to chance. Nonetheless, we cannot draw conclusions on a yearly basis, as the tests produce different results. Finally, the result of the previous strategies is once again confirmed: the performance of the investor's portfolio improves over time. Overall, looking at the plots of the Jensen alphas achieved by our investor over time under the three strategies, and disregarding the initial data, the fitted line is essentially flat.
28.5 Conclusion
The aim of our study was to analyze whether strategies designed around the CAPM can enable an investor to obtain abnormal returns, and to explore this with a methodology that is both intuitive and scientifically rigorous. We also set out to determine whether the efficient Market hypothesis is fulfilled and whether we can accept the validity of the CAPM: if strategies exist that can beat the Market with a degree of ease, we can confirm neither the efficiency of the Market nor the workability of the CAPM, since the model holds that the only way to obtain extra returns is to accept a higher degree of systematic risk. Conversely, we would then be in possession of a tool enabling the investor to achieve higher returns than the Market for a given level of systematic risk. Analyzing the behavior of an investor who can buy the stocks comprising the IBEX 35 at any given moment, that is, the benchmark for the Spanish stock
Market, and who reconfigures his portfolio on a monthly basis, we find that, regardless of the strategy used (buying all of the undervalued stocks, or buying the 75 % most undervalued stocks measured either with Jensen's alpha or with Treynor's ratio), the investor manages to beat the Market only about 50 % of the time. We opted to exclude transaction costs from our calculations, so we have in fact overstated the investor's results. Therefore, we can conclude that the CAPM is not an attractive tool for an investor who wishes to achieve abnormal returns. It seems that undervalued stocks quickly return to the SML, and so, from another perspective, we can confirm the efficient Market hypothesis and the workability of the CAPM. These conclusions are backed up by a series of statistical tests included in the Appendix. A positive aspect of our study, from the point of view of its applicability, is that the behavior we have envisaged for our virtual investor is perfectly replicable, as he acts only on the information available in any given month. Meanwhile, by focusing on the stocks that make up the IBEX 35, our study's conclusions are applicable to the most representative and consolidated stocks on the Spanish Market. Furthermore, our system possesses interesting statistical advantages: it allows the beta and the risk premium to vary over time, it avoids the risk of illiquidity, and it reduces measurement errors. We have also considered its robustness. Finally, it is important to point out that at all times we have implicitly accepted the logic of the CAPM by measuring performance with Jensen's index (which implies basing ourselves on the CAPM). For this reason our results enable us to confirm that it is the efficiency of the Market which prevents returns abnormally higher than those the Market itself is able to achieve, and that the CAPM is a model that works reasonably well; hence it cannot be used as a Market-beating tool.
Appendix
In this section, we report a series of statistical tests done using the "Stata" program, which support the results obtained in our study, thereby, we believe, making it more robust.25 These statistics are drawn up for each of the strategies we tried out: buying all of the undervalued stocks, buying the top 75 % most undervalued stocks using Jensen's Index to select them, and buying the 75 % most undervalued stocks according to Treynor's Ratio.
25. See Agresti (2007), Anderson et al. (2011), Conover (1999), and Newbold et al. (2009) for further detail on the statistical techniques used in this Appendix.
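The original computations were run in Stata. As a hedged illustration only, the same whole-period battery of tests could be reproduced along the following lines in Python; alphas is a hypothetical array holding the 172 monthly Jensen alphas, and binomtest requires a recent SciPy release.

import numpy as np
from scipy import stats
import statsmodels.api as sm

def whole_period_tests(alphas: np.ndarray):
    n = len(alphas)
    wins = int((alphas > 0).sum())                      # months beating the Market
    t_test = stats.ttest_1samp(alphas, 0.0)             # parametric mean test
    w_test = stats.wilcoxon(alphas)                      # Wilcoxon signed-rank test
    b_test = stats.binomtest(wins, n, p=0.5)             # probabilities (binomial) test
    z_stat = (wins - n * 0.5) / np.sqrt(n * 0.25)        # Eq. (28.3)
    trend = sm.OLS(alphas, sm.add_constant(np.arange(1, n + 1))).fit()  # alpha on time
    return t_test, w_test, b_test, z_stat, trend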
Strategy 1: Investor Buys All Undervalued Stocks

1. Summary statistics
We work here, as for all of the strategies, with the Jensen values recorded for each of the portfolios (172).

Percentiles: 1%: −0.1064671; 5%: −0.0279054; 10%: −0.0180511; 25%: −0.0077398; 50%: 0.0012046; 75%: 0.0123422; 90%: 0.0237416; 95%: 0.0309298; 99%: 0.0509601
Four smallest: −0.1087249, −0.1064671, −0.0708467, −0.0540319; four largest: 0.0389192, 0.0444089, 0.0509601, 0.0773151
Obs. 172; Sum of Wgt. 172; Mean 0.001246; Std. Dev. 0.0213603; Variance 0.0004563; Skewness −1.397642; Kurtosis 10.85719
We would point out that in this table the Jensen values are far from normality, as can be seen in the readings for asymmetry and kurtosis.

2. Mean test
Variable: jensen; Obs. 172; Mean 0.001246; Std. Err. 0.0016287; Std. Dev. 0.0213603; 95% Conf. Interval [−0.001969, 0.004461]
mean = mean(jensen); Ho: mean = 0; t = 0.7650; degrees of freedom = 171
Ha: mean < 0: Pr(T < t) = 0.7773    Ha: mean != 0: Pr(|T| > |t|) = 0.4453    Ha: mean > 0: Pr(T > t) = 0.2227
From this mean test, we can accept that the mean for the Jensen alphas is zero; in fact, if we focus on the two-sided test we obtain a probability of 0.4453, higher than 5 %. This allows us to conclude that we are not able to beat the Market with this strategy.

3. Nonparametric mean test
sign        Obs.   sum ranks   expected
positive     92      8451        7439
negative     80      6427        7439
zero          0         0           0
all         172     14878       14878
unadjusted variance 427742.50; adjustment for ties 0.00; adjustment for zeros 0.00; adjusted variance 427742.50
Ho: jensen = 0; z = 1.547; Prob > |z| = 0.1218
We can see again that Wilcoxon's nonparametric test leads to the same conclusion: we cannot beat the Market with this strategy.

4. Frequencies (number of months and % of total)
BEAT      NO            YES           Total
NO        80 (46.51)    0 (0.00)      80 (46.51)
YES       0 (0.00)      92 (53.49)    92 (53.49)
Total     80 (46.51)    92 (53.49)    172 (100)
From this contingency table, we see that the proportion of months in which we beat the Market is very similar to the proportion in which we fail to do so.

5. Probabilities test
Variable   N      Observed K   Expected K   Assumed p   Observed p
Jensen     172    92           86           0.50000     0.53488
Pr(k >= 92) = 0.200842 (one-sided test)

7. Regression of Jensen's index on time
Number of obs. = 172; F = 10.63; Prob > F = 0.0013; R-squared = 0.0589; Adj. R-squared = 0.0533; Root MSE = 0.02078; Total SS = 0.07802116 (df = 171; MS = 0.00045626)

jensen    Coef.        Std. Err.    t        P>|t|    [95% Conf. Interval]
time      0.0001041    0.0000319    3.26     0.001    0.0000411     0.0001671
_cons     −0.048762    0.0154174    −3.16    0.002    −0.0791963    −0.0183277
From the above table we can observe that the regression slope which connects Jensen's index (the dependent variable) and time measured in months (the independent variable) gives a positive and significant coefficient for beta, which allows us to conclude that this strategy gives results that improve over time.

8. One-way analysis of Jensen by month
[Figure: one-way plot of the monthly Jensen's alphas grouped by calendar month (January to December)]
9. One-way ANOVA (by month)
Source     SS           df    MS
Model      0.00607108   11    0.00055192
Residual   0.07195009   160   0.00044969
Total      0.07802116   171   0.00045626
Number of obs = 172; F(11, 160) = 1.23; Prob > F = 0.2729; R-squared = 0.0778; Adj R-squared = 0.0144; Root MSE = 0.02121

jensen   Coef.        Std. Err.   t       P>|t|   [95% Conf. Interval]
_cons    0.0084177    0.0056675   1.49    0.139   −0.0027751    0.0196104
1        −0.0094303   0.0080151   −1.18   0.241   −0.0252592    0.0063987
2        −0.0142236   0.0080151   −1.77   0.078   −0.0300526    0.0016053
3        0.0040468    0.0078803   0.51    0.608   −0.0115161    0.0196097
4        −0.0021065   0.0078803   −0.27   0.79    −0.0176694    0.0134564
5        −0.0167755   0.0078803   −2.13   0.035   −0.0323384    −0.0012126
6        −0.0049838   0.0080151   −0.62   0.535   −0.0208128    0.0108451
7        −0.0026922   0.0080151   −0.34   0.737   −0.0185211    0.0131368
8        −0.0082308   0.0078803   −1.04   0.298   −0.0237937    0.0073321
9        −0.0112643   0.0080151   −1.41   0.162   −0.0270932    0.0045647
10       −0.0092672   0.0080151   −1.16   0.249   −0.0250961    0.0065618
11       −0.0115344   0.0080151   −1.44   0.152   −0.0273633    0.0042946
12       (dropped)
We carry out an ANOVA to determine whether the means for Jensen's indices obtained in the different months are uniform. We can see that F has a value of 1.23 and a related probability of 0.2729, above 5 %, which leads us to accept that the means for Jensen's indices are uniform throughout the months included in our study.

10. Kruskal-Wallis test (rank sums), by month
Month       Obs.   Rank Sum
January     15     949
February    15     1551
March       15     1310
April       14     1217
May         14     1093
June        14     1386
July        14     1270
August      14     798
September   14     1382
October     14     1025
November    14     1032
December    15     1865
chi-squared = 22.716 with 11 d.f.; probability = 0.0194
We use the Kruskal–Wallis nonparametric test to measure the same phenomenon as with the ANOVA, and we see that for a significance level of 1 % we would come to the same conclusion as we did with the ANOVA, while for a significance level of 5 %, we can rule out uniformity from month to month.
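For completeness, a minimal sketch of how the month-by-month comparisons used here (the ANOVA, the Kruskal–Wallis test, and the contingency-table chi-square below) could be reproduced; alphas is a hypothetical pandas Series of monthly Jensen alphas with a DatetimeIndex, not part of the original Stata code.

import pandas as pd
from scipy import stats

def by_month_tests(alphas: pd.Series):
    # Group the Jensen alphas by calendar month (requires a DatetimeIndex)
    groups = [g.values for _, g in alphas.groupby(alphas.index.month)]
    anova = stats.f_oneway(*groups)                       # one-way ANOVA across months
    kruskal = stats.kruskal(*groups)                      # Kruskal-Wallis rank test
    table = pd.crosstab(alphas.index.month, alphas > 0)   # months x (beat / not beat)
    chi2 = stats.chi2_contingency(table)                  # Pearson chi-square test
    return anova, kruskal, chi2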
11. Contingency table (by month)
Month       BEAT: NO   BEAT: YES   Total
January     11         4           15
February    5          10          15
March       7          8           15
April       6          8           14
May         7          7           14
June        2          12          14
July        6          8           14
August      11         3           14
September   6          8           14
October     8          6           14
November    9          5           14
December    2          13          15
Total       80         92          172
Pearson chi2(11) = 26.3578, Pr = 0.006; likelihood-ratio chi2(11) = 28.4294, Pr = 0.003
We use this contingency table to test the same point as with the ANOVA and the Kruskal–Wallis test above, to determine whether there are any differences between the months, and we find that both with Pearson's chi-square test and with the likelihood ratio the probability is less than 1 %, which leads us to conclude that the chances of beating the Market are not the same month on month, so that there will be some months in which it is easier to do so than others.

12. One-way analysis of Jensen by year
[Figure: one-way plot of the monthly Jensen's alphas grouped by year (1993 to 2006)]
13. One-way ANOVA (by year)
Source     SS           df    MS
Model      0.01216506   13    0.00093577
Residual   0.06538048   154   0.00042455
Total      0.07754554   167   0.00046435
Number of obs = 168; F(13, 154) = 2.20; Prob > F = 0.0117; R-squared = 0.1569; Adj R-squared = 0.0857; Root MSE = 0.0206

jensen   Coef.        Std. Err.   t       P>|t|   [95% Conf. Interval]
_cons    0.0113553    0.005948    1.91    0.058   −0.000395     0.0231055
1        −0.0368126   0.0084118   −4.38   0.000   −0.0534299    −0.0201952
2        −0.0159892   0.0084118   −1.90   0.059   −0.0326065    0.0006282
3        −0.0088588   0.0084118   −1.05   0.294   −0.0254762    0.0077586
4        −0.0077928   0.0084118   −0.93   0.356   −0.0244102    0.0088246
5        −0.0082156   0.0084118   −0.98   0.33    −0.024833     0.0084018
6        −0.0062178   0.0084118   −0.74   0.461   −0.0228352    0.0103996
7        −0.0102373   0.0084118   −1.22   0.225   −0.0268546    0.0063801
8        −0.0061693   0.0084118   −0.73   0.464   −0.0227866    0.0104481
9        −0.0108019   0.0084118   −1.28   0.201   −0.0274192    0.0058155
10       −0.0127709   0.0084118   −1.52   0.131   −0.0293883    0.0038465
11       −0.009494    0.0084118   −1.13   0.261   −0.0261114    0.0071234
12       −0.0078177   0.0084118   −0.93   0.354   −0.0244351    0.0087997
13       0.0003579    0.0084118   0.04    0.966   −0.0162595    0.0169752
14       (dropped)
We performed an ANOVA based on years, to see whether the strategy gives similar Jensen values from year to year for those years included in our study (we excluded 1992 and 2007 because they could provide only 1 and 3 months of data, respectively; these years are also excluded from the other tests done based on years). We obtained a value of F of 2.20 with a related probability of 0.0117, thus for a significance level of 1 % we would accept that the means are uniform from one year to another; however, for a significance level of 5 % we must rule out this hypothesis and conclude that the Jensen values differ from one year to another.

14. Kruskal-Wallis test (rank sums), by year
Year   Obs.   Rank Sum
1993   12     722
1994   12     661
1995   12     1017
1996   12     1092
1997   12     1024
1998   12     1083
1999   12     993
2000   12     1002
2001   12     931
2002   12     906
2003   12     1019
2004   12     1058
2005   12     1275
2006   12     1413
chi-squared = 16.527 with 13 d.f.; probability = 0.2218
We now use the Kruskal–Wallis test to find out the same point as we did with the ANOVA. We see that we obtain a probability greater than 5 %, which means we can accept that the mean Jensen values are uniform from one year to another.

15. Contingency table (by year)
Year    BEAT: YES   BEAT: NO   Total
1993    4           8          12
1994    3           9          12
1995    6           6          12
1996    7           5          12
1997    7           5          12
1998    6           6          12
1999    7           5          12
2000    6           6          12
2001    6           6          12
2002    5           7          12
2003    7           5          12
2004    7           5          12
2005    8           4          12
2006    11          1          12
Total   90          78         168
Pearson chi2(13) = 15.2205, Pr = 0.294; likelihood-ratio chi2(13) = 16.7608, Pr = 0.210
The contingency table confirms the result of the previous test, namely, that there are no differences between the Jensen values for the different years so that any small difference can be due to chance.
Strategy 2: The Investor Buys the Most Undervalued Stocks

Strategy 2, 1st Variant: Quartiles Constructed Using Jensen's Index

1. Summary statistics
Percentiles: 1%: −0.1414719; 5%: −0.0391241; 10%: −0.0192669; 25%: −0.0103728; 50%: 0.0017188; 75%: 0.0154674; 90%: 0.0294085; 95%: 0.0390722; 99%: 0.0696097
Four smallest: −0.1486773, −0.1414719, −0.1117398, −0.0683232; four largest: 0.0545513, 0.0690987, 0.0696097, 0.0745625
Obs. 172; Sum of Wgt. 172; Mean 0.0011871; Std. Dev. 0.0280576; Variance 0.0007872; Skewness −1.842765; Kurtosis 11.929
We would point out that in this table the Jensen values are far from normality, as can be seen in the readings for asymmetry and kurtosis.

2. Mean test
Variable: jensen (Q); Obs. 172; Mean 0.001187; Std. Err. 0.0021394; Std. Dev. 0.0280576; 95% Conf. Interval [−0.0030359, 0.0054101]
mean = mean(jensen); Ho: mean = 0; t = 0.5549; degrees of freedom = 171
Ha: mean < 0: Pr(T < t) = 0.7101    Ha: mean != 0: Pr(|T| > |t|) = 0.5797    Ha: mean > 0: Pr(T > t) = 0.2899
From this mean test we assume that the mean for Jensen's alphas is zero; in fact, if we focus on the two-sided test we can detect a probability of 0.5797, higher than 5 %. This allows us to conclude that we are not able to beat the Market with this strategy.

3. Nonparametric mean test
sign        Obs.   sum ranks   expected
positive     94      8567        7439
negative     78      6311        7439
zero          0         0           0
all         172     14878       14878
unadjusted variance 427742.50; adjustment for ties 0.00; adjustment for zeros 0.00; adjusted variance 427742.50
Ho: jensen = 0; z = 1.725; Prob > |z| = 0.0846
We can see again that Wilcoxon's nonparametric test leads to the same conclusion: we cannot beat the Market with this strategy.

4. Frequencies (number of months and % of total)
BEAT      NO            YES           Total
NO        78 (45.35)    0 (0.00)      78 (45.35)
YES       0 (0.00)      94 (54.65)    94 (54.65)
Total     78 (45.35)    94 (54.65)    172 (100)
From this contingency table, we see that the proportion of months in which we beat the Market is very similar to the proportion of months in which we do not, with a slightly higher probability of beating the Market.

5. Probabilities test
Variable     N      Observed K   Expected K   Assumed p   Observed p
Jensen (Q)   172    94           86           0.50000     0.54651
Pr(k >= 94) = 0.126331 (one-sided test)

7. Regression of Jensen's index on time
Number of obs. = 172; F = 13.34; Prob > F = 0.0003; R-squared = 0.0728; Adj. R-squared = 0.0673; Root MSE = 0.0271

jensen (Q)   Coef.        Std. Err.    t        P>|t|    [95% Conf. Interval]
time         0.000152     0.0000416    3.65     0.000    0.0000699     0.0002341
_cons        −0.0718465   0.0201013    −3.57    0.000    −0.1115268    −0.0321663
From the above table we can observe that the regression slope, which connects Jensen's index (the dependent variable) and time measured in months (the independent variable), gives a positive and significant coefficient for beta, which allows us to conclude that this strategy gives results that improve over time.

8. One-way analysis of Jensen (quartiles) by month
[Figure: one-way plot of the monthly Jensen's alphas grouped by calendar month (January to December)]
9. One-way ANOVA (by month) Source
SS
df
Model Residual Total
0.00735542 0.12726097 0.13461639
11 160 171
jensen (Q) _cons 1 2 3 4 5 6 7 8 9 10 11 12
Coef. 0.0105061 −0.0142298 −0.015749 0.0028747 −0.0060897 −0.0172233 −0.0069927 −0.0014246 −0.014007 −0.0143055 −0.0125205 −0.0123631 (dropped)
Std. Err. 0.0075374 0.0106595 0.0106595 0.0104804 0.0104804 0.0104804 0.0106595 0.0106595 0.0104804 0.0106595 0.0106595 0.0106595
Number of Obs. MS 0.00066868 F(11, 160) 0.00079538 Prob > F 0.00078723 R-squared Adj. R-squared Root MSE
t 1.39 −1.33 −1.48 0.27 −0.58 −1.64 −0.66 −0.13 −1.34 −1.34 −1.17 −1.16
P>|t| 0.165 0.184 0.142 0.784 0.562 0.102 0.513 0.894 0.183 0.181 0.242 0.248
172 0.84 0.5997 0.0546 −0.0104 0.0282
[95% Conf. Interval] 0.0253918 0.0068218 0.0053025 0.0235724 0.0146081 0.0034744 0.0140588 0.019627 0.0066907 0.0067461 0.008531 0.0086884
−0.0043796 −0.0352813 −0.0368006 −0.017823 −0.0267874 −0.037921 −0.0280443 −0.0224761 −0.0347047 −0.035357 −0.0335721 −0.0334147
We perform an ANOVA to see whether the means for Jensen values obtained for the different months are uniform. We find that F-statistic has a value of 0.84 with a related probability of 0.5997, above 5 %, which leads us to accept that the means for Jensen values are uniform among the months included in our sample. 10. Kruskal-Wallis tests (Rank sums), by month Month
Obs.
Rank Sum
January February March April May June July August September October November December
15 15 15 14 14 14 14 14 14 14 14 15
1167 1428 1228 1105 1077 1451 1216 885 1385 1156 1020 1760
chi-squared = 14.369 with 11 d.f. probability = 0.2133
We now use the Kruskal–Wallis test to find out the same point as we did with the ANOVA. We come to the same conclusion as we did with the ANOVA.
11. Contingency table (by month) Month January February March April May June July August September October November December Total
NO
BEAT YES
7 5 7 8 9 3 7 9 6 7 8 2
8 10 8 6 5 11 7 5 8 7 6 13
78
94
Total 15 15 15 14 14 14 14 14 14 14 14 15 172
Pearson chi2(11) = 16.2331 Pr = 0.133 likelihood-ratio chi2(11) = 17.3939 Pr = 0.097
The contingency table allows us to test the same point as above with the ANOVA and the Kruskal–Wallis test, and leads to the same conclusions; hence, we can accept that all months show a similar tendency in terms of beating the Market.

12. One-way analysis of Jensen (quartiles) by year
[Figure: one-way plot of the monthly Jensen's alphas grouped by year (1993 to 2006)]
13. One-way ANOVA (by year) Source
SS
df
Model Residual Total
0.02482403 0.10923702 0.13406105
13 154 167
MS Number of obs. 0.00190954 F(13, 154) 0.00070933 Prob > F 0.00080276 R-squared Adj. R-squared Root MSE
168 2.69 0.0019 0.1852 0.1164 0.02663
jensen (Q)
Coef.
Std. Err.
t
P>|t|
[95% Conf. Interval]
_cons 1 2 3 4 5 6 7 8 9 10 11 12 13 14
0.0129934 −0.0494527 −0.0191305 −0.00969 −0.0092178 −0.0100839 −0.0066183 −0.0095188 −0.0095735 −0.0142031 −0.0158029 −0.0087823 −0.0075224 0.0071267 (dropped)
0.0076884 0.010873 0.010873 0.010873 0.010873 0.010873 0.010873 0.010873 0.010873 0.010873 0.010873 0.010873 0.010873 0.010873
1.69 −4.55 −1.76 −0.89 −0.85 −0.93 −0.61 −0.88 −0.88 −1.31 −1.45 −0.81 −0.69 0.66
0.093 0 0.08 0.374 0.398 0.355 0.544 0.383 0.38 0.193 0.148 0.421 0.49 0.513
−0.0021949 0.0281817 −0.0709322 −0.0279733 −0.04061 0.002349 −0.0311695 0.0117894 −0.0306973 0.0122617 −0.0315634 0.0113956 −0.0280978 0.0148612 −0.0309983 0.0119607 −0.031053 0.0119059 −0.0356825 0.0072764 −0.0372824 0.0056765 −0.0302617 0.0126972 −0.0290019 0.013957 −0.0143528 0.0286061
We now perform an ANOVA by years to see whether the strategy gives similar Jensen values among the years in our sample. We obtain a value for F-statistic of 2.69 with a related probability of 0.0019; hence we conclude that the means for Jensen values are not uniform from one year to another. 14. Kruskal-Wallis tests (Rank sums), by year Year 1993 1994 1995 1996 1997 1998 1999 2000 2001 2002 2003 2004 2005 2006
Obs. 12 12 12 12 12 12 12 12 12 12 12 12 12 12
Rank Sum 735 665 980 1042 1002 1064 1045 972 846 913 1136 1070 1353 1373
chi-squared = 17.864 with 13 d.f. probability = 0.1627
We now use the Kruskal–Wallis nonparametric test to find out the same point as we did with the ANOVA. We see that we obtain a probability greater than 5 % which means we can accept that the mean Jensen values are uniform from one year to another. 15. Contingency table (by year)
Year 1993 1994 1995 1996 1997 1998 1999 2000 2001 2002 2003 2004 2005 2006 Total
BEAT YES 4 3 6 8 6 7 8 6 5 5 9 6 10 10 93
NO 8 9 6 4 6 5 4 6 7 7 3 6 2 2 75
Total 12 12 12 12 12 12 12 12 12 12 12 12 12 12 168
Pearson chi2(13) = 19.9673 Pr = 0.096 likelihood-ratio chi2(13) = 21.0731 Pr = 0.071
The contingency table confirms the result of the previous test, namely, that there are no differences between the Jensen values for the different years so that any small difference can be due to chance.
Strategy 2, 2nd Variant: Quartiles Built Using Treynor’s Ratio 1. Summary statistics Percentiles 1% 5% 10% 25% 50%
−0.0968895 −0.0382262 −0.0231222 −0.0106754 0.0023977
75% 90% 95% 99%
0.0157561 0.0285186 0.04065 0.0648316
Smallest −0.104714 −0.0968895 −0.0940315 Obs. 172 172 −0.0702515 Sum of Wgt. 0.0014829 Mean 0.0256145 Std. Dev. Largest 0.0484799 0.0629996 Variance 0.0648316 Skewness 0.0745625 Kurtosis
0.0006561 −0.9274022 6.503481
We note that in this table the Jensen values are far from normality, as can be seen in the readings for asymmetry and kurtosis, although this is less obvious than for the previous strategies. 2. Mean test Variable
Jensen (Q)
Obs.
172
Mean
Std. Err.
Std. Dev.
[95% Conf. Interval]
0.001483
0.0019531
0.0256145
−0.0023724 0.0053381
mean = mean (jensen) Ho: mean = 0
t = 0.7592 degrees of freedom = 171
Ha: mean < 0 Pr(T < t) = 0.7756
Ha: mean != 0 Pr(|T| > |t|) = 0.4487
Ha: mean > 0 Pr(T > t) = 0.2244
From this mean test, we can accept that the mean for the Jensen alphas is zero; in fact, if we focus on the two-sided test, we obtain a probability of 0.4487, which is higher than 5 %. This allows us to conclude that we cannot beat the Market with this strategy. 3. Nonparametric mean test sign
Obs.
sum ranks
expected
positive negative zero all
93 79 0 172
8460 6418 0 14878
7439 7439 0 14878
unadjusted variance 427742.50 adjustment for ties 0.00 adjustment for zeros 0.00 -------------adjusted variance 427742.50 Ho: jensen = 0 z = 1.561 Prob > |z| = 0.1185
We can see again that Wilcoxon’s nonparametric test leads to the same conclusion: we cannot beat the Market with this strategy. 4. Frequencies BEAT NO YES Total
BEAT NO
YES
Total
79 45.93 0 0 79 45.93
0 0 93 54.07 93 54.07
79 45.93 93 54.07 172 100
From this contingency table, we see that the proportion of months in which we beat the Market is very similar to the proportion in which we do not, with a slightly higher figure for those months in which we do beat the Market.

5. Probabilities test
Variable     N      Observed K   Expected K   Assumed p   Observed p
Jensen (Q)   172    93           86           0.50000     0.5407
Pr(k >= 93) = 0.160787 (one-sided test)

7. Regression of Jensen's index on time
Number of obs. = 172; F = 11.68; Prob > F = 0.0008; R-squared = 0.0643; Adj. R-squared = 0.0588; Root MSE = 0.02485

jensen (Q)   Coef.        Std. Err.    t        P>|t|    [95% Conf. Interval]
time         0.0001304    0.0000382    3.42     0.001    0.0000551     0.0002058
_cons        −0.0611824   0.0184347    −3.32    0.001    −0.0975728    −0.024792
From the above table we can observe that the regression slope which connects Jensen's index (the dependent variable) and time measured in months (the independent variable) gives a positive and significant coefficient for beta, which allows us to conclude that this strategy gives results that improve over time.

8. One-way analysis of Jensen (quartiles) by month
[Figure: one-way plot of the monthly Jensen's alphas grouped by calendar month (January to December)]
9. One-way ANOVA (by month) Source Model Residual Total
jensen (Q) _cons 1 2 3 4 5 6 7 8 9 10 11 12
SS
df
0.00656026 0.10563286 0.11219312
11 160 171
Coef. 0.0092267 −0.0125889 −0.0150679 0.003986 −0.0040562 −0.0145235 −0.0055123 −0.0003726 −0.0092886 −0.013379 −0.0120712 −0.0105584 (dropped)
Std. Err. 0.0068671 0.0097116 0.0097116 0.0095484 0.0095484 0.0095484 0.0097116 0.0097116 0.0095484 0.0097116 0.0097116 0.0097116
Number of Obs. 0.00059639 F(11, 160) 0.00066021 Prob > F 0.0006561 R-squared Adj. R-squared Root MSE MS
t 1.34 −1.3 −1.55 0.42 −0.42 −1.52 −0.57 −0.04 −0.97 −1.38 −1.24 −1.09
P>|t| 0.181 0.197 0.123 0.677 0.672 0.13 0.571 0.969 0.332 0.17 0.216 0.279
172 0.9 0.5387 0.0585 −0.0063 0.02569
[95% Conf. Interval] 0.0227886 0.0065906 0.0041115 0.0228431 0.0148009 0.0043336 0.0136671 0.0188068 0.0095684 0.0058005 0.0071083 0.008621
−0.0043352 −0.0317683 −0.0342474 −0.0148711 −0.0229133 −0.0333805 −0.0246917 −0.0195521 −0.0281457 −0.0325584 −0.0312506 −0.0297378
We perform an ANOVA to see whether the means for the Jensen values obtained for the different months are uniform. We find that F-statistic has a value of 0.90 with a related probability of 0.5387, above 5 %, which leads us to accept that the means for the Jensen values are uniform among the months included in our sample. 10. Kruskal-Wallis tests (Rank sums), by month Month
Obs.
Rank Sum
January February March April May June July August September October November December
15 15 15 14 14 14 14 14 14 14 14 15
1153 1442 1297 1107 1046 1430 1220 899 1352 1160 1044 1728
chi-squared = 12.840 with 11 d.f. probability = 0.3039
We now use the Kruskal–Wallis nonparametric test to find out the same point as we did with the ANOVA. We come to the same conclusion as we did with the ANOVA. 11. Contingency table (by month) Month January February March April May June July August September October November December Total
NO
BEAT YES
Total
7 5 6 9 8 3 7 10 7 6 8 3 79
8 10 9 5 6 11 7 4 7 8 6 12 93
15 15 15 14 14 14 14 14 14 14 14 15 172
Pearson chi2 (11) = 15.8416 Pr=0.147 likelihood-ratio chi2(11) = 16.5468 Pr = 0.122
The contingency table allows us to test the same point as above with the ANOVA and the Kruskal–Wallis test, and leads to the same conclusions; hence we can accept that all months show a similar tendency in terms of beating the Market.

12. One-way analysis of Jensen (quartiles) by year
[Figure: one-way plot of the monthly Jensen's alphas grouped by year (1993 to 2006)]
13. One-way ANOVA (by year) Source
SS
df
Model Residual Total
0.0170417 0.09434205 0.11138374
13 154 167
MS Number of obs. 0.0013109 F(13, 154) 0.00061261 Prob > F 0.00066697 R-squared Adj. R-squared Root MSE
168 2.14 0.0147 0.153 0.0815 0.02475
jensen
Coef.
Std. Err.
t
P>|t|
[95% Conf. Interval]
_cons 1 2 3 4 5 6 7 8 9 10 11 12 13 14
0.0135106 −0.041029 −0.0190657 −0.0091611 −0.0100757 −0.0077821 −0.0098938 −0.0154658 −0.0103002 −0.0158053 −0.0165104 −0.0080536 −0.0072511 0.004184 (dropped)
0.007145 0.0101045 0.0101045 0.0101045 0.0101045 0.0101045 0.0101045 0.0101045 0.0101045 0.0101045 0.0101045 0.0101045 0.0101045 0.0101045
1.89 −4.06 −1.89 −0.91 −1 −0.77 −0.98 −1.53 −1.02 −1.56 −1.63 −0.8 −0.72 0.41
0.061 0 0.061 0.366 0.32 0.442 0.329 0.128 0.31 0.12 0.104 0.427 0.474 0.679
−0.0006043 0.0276254 −0.0609904 −0.0210676 −0.0390271 0.0008957 −0.0291225 0.0108003 −0.0300372 0.0098857 −0.0277435 0.0121793 −0.0298552 0.0100676 −0.0354272 0.0044956 −0.0302616 0.0096612 −0.0357667 0.0041561 −0.0364718 0.003451 0.0119078 −0.028015 −0.0272125 0.0127103 −0.0157774 0.0241454
We now perform an ANOVA by years to see whether the strategy gives similar Jensen values among the years in our sample. We obtain a value for F-statistic of 2.14 with a related probability of 0.0147; hence we conclude that for a significance level of 1 % we can accept that the means for the Jensen values are the same from one year to another, however for a significance level of 5 % we must reject this hypothesis and we would conclude that there are differences between the Jensen values of different years. 14. Kruskal-Wallis tests (Rank sums), by year Year 1993 1994 1995 1996 1997 1998 1999 2000 2001 2002 2003 2004 2005 2006
Obs. 12 12 12 12 12 12 12 12 12 12 12 12 12 12
Rank Sum 740 679 1025 1033 1065 1022 877 1026 824 920 1166 1110 1322 1387
chi-squared = 18.336 with 13 d.f. probability = 0.1452
We now use the Kruskal–Wallis test to find out the same point as we did with the ANOVA. We see that we obtain a probability greater than 5 % which means we can accept that the means for the Jensen values are uniform from one year to another. 15. Contingency table (by year) Year 1993 1994 1995 1996 1997 1998 1999 2000 2001 2002 2003 2004 2005 2006 Total
NO 8 9 4 5 6 5 6 4 8 7 4 6 2 2 76
BEAT YES 4 3 8 7 6 7 6 8 4 5 8 6 10 10 92
Total 12 12 12 12 12 12 12 12 12 12 12 12 12 12 168
Pearson chi2 (13) = 19.9908 Pr = 0.095 likelihood-ratio chi2(13) = 21.0581 Pr = 0.072
The contingency table confirms the result of the previous test, namely, that there are no differences between the Jensen values for the different years and that any small difference can be due to chance.
References
Abdel-Kader, M. G., & Qing, K. Y. (2007). Risk-adjusted performance, selectivity, timing ability and performance persistence of Hong Kong mutual funds. Journal of Asia-Pacific Business, 8(2), 25–58.
Agresti, A. (2007). An introduction to categorical data analysis (2nd ed.). Hoboken: Wiley Interscience.
Anderson, D. R., Sweeney, D. J., & Williams, T. A. (2011). Statistics for business and economics (11th ed.). Mason: South-Western, Cengage Learning.
Black, F., Jensen, M. C., & Scholes, M. (1972). The capital asset pricing model: Some empirical tests. In Jensen, M. C. (Ed.), Studies in the theory of capital markets (pp. 79–121). New York: Praeger.
Brav, A., Lehavy, R., & Michaely, R. (2005). Using expectations to test asset pricing models. Financial Management, 34(3), 31–64.
Brealey, R. A., Myers, S. C., & Allen, F. (2008). Principles of corporate finance (9th ed.). New York: McGraw-Hill.
Conover, W. J. (1999). Practical nonparametric statistics (3rd ed.). New York: Wiley.
Copeland, T. E., Weston, J. F., & Shastri, K. (2005). Financial theory and corporate policy (4th ed.). Boston: Pearson Addison Wesley.
Danielson, M. G., Heck, J. L., & Shaffer, D. R. (2008). Shareholder theory – How opponents and proponents both get it wrong. Journal of Applied Finance, 18(2), 62–66.
Dimson, E., & Mussavian, M. (1998). A brief history of market efficiency. European Financial Management, 4(1), 91–103.
Dimson, E., & Mussavian, M. (1999). Three centuries of asset pricing. Journal of Banking and Finance, 23(12), 1745–1769.
Fama, E. F. (1970). Efficient capital markets: A review of theory and empirical work. Journal of Finance, 25(2), 383–417.
Fama, E. F. (1991). Efficient capital markets: II. Journal of Finance, 46, 1575–1617.
Fama, E. F. (1998). Market efficiency, long-term returns, and behavioral finance. Journal of Financial Economics, 49, 283–306.
Fama, E. F., & French, K. R. (1992). The cross-section of expected stock returns. Journal of Finance, 47, 427–465.
Fama, E. F., & French, K. R. (2008). Dissecting anomalies. Journal of Finance, 63(4), 1653–1678.
Fama, E. F., & French, K. R. (2010). Luck versus skill in the cross-section of mutual fund returns. Journal of Finance, 65(5), 1915–1947.
Fama, E. F., & MacBeth, J. D. (1973). Risk, return and equilibrium: Empirical tests. Journal of Political Economy, May–June, 607–636.
Ferruz, L., Gómez-Bezares, F., & Vargas, M. (2010). Portfolio theory, CAPM, and performance measures. In Lee, C. F. et al. (Eds.), Handbook of quantitative finance and risk management (pp. 267–283). New York: Springer.
Gómez-Bezares, F., & Gómez-Bezares, F. R. (2006). On the use of hypothesis testing and other problems in financial research. The Journal of Investing, 15(4), 64–67.
Gómez-Bezares, F., Madariaga, J. A., & Santibáñez, J. (1996). Valuation and efficiency models: Does the CAPM beat the market? Análisis Financiero, 68, 72–96.
Jensen, M. C. (1968). The performance of mutual funds in the period 1945–1964. Journal of Finance, May, 389–416.
Jensen, M. C. (1969). Risk, the pricing of capital assets, and the evaluation of investment portfolios. Journal of Business, 42, 167–247.
Levy, H. (2010). The CAPM is alive and well: A review and synthesis. European Financial Management, 16(1), 43–71.
Lintner, J. (1965). The valuation of risk assets and the selection of risky investments in stock portfolios and capital budgets. Review of Economics and Statistics, 47, 13–37.
Liu, W. (2006). A liquidity-augmented capital asset pricing model. Journal of Financial Economics, 82, 631–671.
Malkiel, B. G. (2003). The efficient market hypothesis and its critics. Journal of Economic Perspectives, 17(1), 59–82.
Mazumder, M. I., & Varela, O. (2010). Market timing, the trading of international mutual funds: Weekend, weekday and serial correlation strategies. Journal of Business, Finance & Accounting, 37(7–8), 979–1007.
Modigliani, L. (1997). Don't pick your managers by their alphas. U.S. Investment Research, U.S. Strategy, Morgan Stanley Dean Witter, June.
Newbold, P., Carlson, W. L., & Thorne, B. M. (2009). Statistics for business and economics (7th ed.). New Jersey: Pearson Education Canada.
Pasaribu, R. B. (2009). Relevance of accrual anomaly information in stock portfolio formation – case study: Indonesia stock exchange. Journal of Accounting and Business, 10(2), 1–45.
Samarakoon, L. P., & Hasan, T. (2005). Portfolio performance evaluation. In Lee, C. F. (Ed.), The encyclopedia of finance (pp. 617–622). New York: Springer.
Sharpe, W. F. (1964). Capital asset prices: A theory of market equilibrium under conditions of risk. Journal of Finance, 19, 425–442.
Treynor, J. L. (1965). How to rate management of investment funds. Harvard Business Review, 43, 63–75.
29 Group Decision-Making Tools for Managerial Accounting and Finance Applications
Wikil Kwak, Yong Shi, Cheng-Few Lee, and Heeseok Lee
Contents
29.1 Introduction 793
29.2 Designing a Comprehensive Performance Evaluation System: Using the Analytic Hierarchy Process 793
29.2.1 Hierarchical Schema for Performance Measurement 795
29.2.2 Analytic Hierarchical Performance Model 796
29.2.3 An Example 798
29.2.4 Discussions 800
29.2.5 Conclusions 801
29.3 Developing a Comprehensive Performance Measurement System in the Banking Industry: An Analytic Hierarchy Approach 801
29.3.1 Background 802
29.3.2 Methodology 803
29.3.3 A Numerical Example 804
29.3.4 Summary and Conclusions 807
29.4 Optimal Trade-offs of Multiple Factors in International Transfer Pricing Problems 807
29.4.1 Existing Transfer Pricing Models 808
29.4.2 Methodology 810
29.4.3 Model Implications 813
29.4.4 Optimal Trade-offs and Their Accounting Implications 816
29.4.5 Conclusions 822
29.5 Capital Budgeting with Multiple Criteria and Multiple Decision Makers 822
29.5.1 AHP and MC2 Framework 825
29.5.2 Model Implications 829
29.5.3 Conclusions 833
29.6 Conclusions 833
Appendix 1 834
References 836

W. Kwak (*)
University of Nebraska at Omaha, Omaha, NE, USA
e-mail: [email protected]
Y. Shi
University of Nebraska at Omaha, Omaha, NE, USA
Chinese Academy of Sciences, Beijing, China
e-mail: [email protected]
C.-F. Lee
Department of Finance and Economics, Rutgers Business School, Rutgers, The State University of New Jersey, Piscataway, NJ, USA
Graduate Institute of Finance, National Chiao Tung University, Hsinchu, Taiwan
e-mail: [email protected]
H. Lee
Korea Advanced Institute of Science and Technology, Yuseong-gu, Daejeon, South Korea
e-mail: [email protected]
C.-F. Lee, J. Lee (eds.), Handbook of Financial Econometrics and Statistics, DOI 10.1007/978-1-4614-7750-1_29, © Springer Science+Business Media New York 2015
Abstract
To deal with today's uncertain and dynamic business environments, in which decision makers from different backgrounds must compute trade-offs among multiple organizational goals, our series of papers adopts an analytic hierarchy process (AHP) approach to solve various accounting and finance problems, such as developing a business performance evaluation system and a banking performance evaluation system. The AHP uses a hierarchical schema to incorporate nonfinancial and external performance measures. Our model therefore covers a broader set of measures that capture external and nonfinancial performance as well as internal and financial performance. While the AHP is one of the most popular multiple-goal decision-making tools, a multiple-criteria and multiple-constraint (MC2) linear programming approach can also be used to solve group decision-making problems such as transfer pricing and capital budgeting. The MC2 model is rooted in two facts. First, from the linear system structure's point of view, the criteria and constraints may be "interchangeable"; thus, like multiple criteria, multiple-constraint (resource availability) levels can be considered. Second, from the application's point of view, it is more realistic to consider multiple resource availability levels (discrete right-hand sides) than a single resource availability level in isolation. The philosophy behind this perspective is that the availability of resources can fluctuate with the forces of the decision situation, such as the desirability levels believed by different managers. A step-by-step solution procedure is provided for finding solutions that reach the best compromise among the multiple goals and multiple-constraint levels.
Keywords
Analytic hierarchy process • Multiple-criteria and multiple-constraint linear programming • Business performance evaluation • Activity-based costing system • Group decision making • Optimal trade-offs • Balanced scorecard • Transfer pricing • Capital budgeting
29.1 Introduction
In this chapter, we provide an up-to-date review of our past work on AHP and MC2 linear programming models for solving real-world problems faced by managers in accounting and finance. These applications include developing a business performance evaluation system using an analytic hierarchical model, a banking performance evaluation system, capital budgeting, and transfer pricing problems. A good performance measurement system should incorporate strategic success factors, especially if a firm is to succeed in today's competitive environment. The balanced scorecard is a popular approach, but it lacks linkages among the basic units of financial and nonfinancial measures and across different levels of managers. The model proposed in this study uses a three-level hierarchical schema to combine financial and nonfinancial performance measures systematically. Like the balanced scorecard method, it emphasizes external as well as internal business performance measures, and it is therefore more likely to cover a broad set of measures that support operational control as well as strategic control. The purpose of this chapter is to provide additional insight for managers who face group decision-making problems in accounting and finance and want practical solutions.
29.2 Designing a Comprehensive Performance Evaluation System: Using the Analytic Hierarchy Process
The AHP is one of the most popular multiple-goal decision-making tools (Ishizaka et al. 2011). Designing a comprehensive performance measurement system has frustrated many managers (Eccles 1991). The traditional performance measures enterprises have used may not fit the new business environment and competitive realities, and the figures they have traditionally relied on are of limited use in the information-based society we are becoming. We suspect that firms are much more productive than these out-of-date measures suggest. A broad range of firms is deeply engaged in redefining how to evaluate the performance of their businesses, and new measurements are needed to perform such evaluations. Drucker (1993) put the ever-increasing measurement dilemma this way:

Quantification has been the rage in business and economics these past 50 years. Accountants have proliferated as fast as lawyers. Yet we do not have the measurements we need. Neither our concepts nor our tools are adequate for the control of operations, or for managerial control. And, so far, there are neither the concepts nor the tools for business control, i.e., for economic decision making. In the past few years, however, we have become increasingly aware of the need for such measurements.

Drucker's message is clear: traditional measures are not adequate for business evaluation. A primary reason why traditional measures fail to meet new business needs is that most of them are lagging indicators (Eccles and Pyburn 1992).
The emphasis of accounting measures has been on historical statements of financial performance. They are the result of management performance, not the cause of it; i.e., they are better at measuring the consequences of yesterday's decisions than at providing useful indicators of future success. As a result, they easily conflict with new strategies and current competitive business realities. To ameliorate this accounting lag, researchers have frequently attempted to provide new measuring procedures (Kaplan 1986; Wu et al. 2011). Yet most measuring guidelines are not well represented analytically. Managers keep asking: What are the most important measures of performance? What are the associations among those measures? Unfortunately, we know little about how measures are integrated into a performance measurement system for a particular business. For example, is customer service perceived as a more important measure than cost of quality? Another question often raised is: What is the association between customer service and on-time delivery or manufacturing cycle time?

The current wave of dissatisfaction with traditional accounting systems has been intensified partly because most measures have an internal and financial focus. New measures should broaden the basis of nonfinancial performance measurement and must truly predict long-term strategic success. External performance relative to competitors, such as market share, is as important as internal measures. In addition, the recent rise of global competitiveness reemphasizes the primacy of operational, i.e., nonfinancial, performance over financially oriented performance. Nonfinancial measures reflect the actionable steps needed to survive in today's competitive environment (Fisher 1992). When a company uses an activity-based costing system (Campi 1992; Ayvaz and Pehlivanli 2011) or a just-in-time manufacturing system, using nonfinancial measures is inevitable. Nonfinancial measures also reduce the communication gap between workers and managers: workers can better understand what they are measured by, and managers can get timely feedback and link it to strategic decision making.

The answer proposed in this study is to use a hierarchical schema to incorporate nonfinancial and external performance measures. The model has a broader set of measures that can examine external and nonfinancial performance as well as internal and financial performance. On the basis of this schema, this chapter demonstrates how Saaty's analytic hierarchy process (e.g., see Saaty (1980) and Harker and Vargas (1987)) can be merged with performance measurement. The analytic hierarchy process is a theory of measurement that has been widely applied in modeling the human judgment process. In this sense, the performance measuring method proposed in this study is referred to as the Analytic Hierarchical Performance Model (AHPM). While the AHP has been applied in a number of areas, from capital budgeting, auditing, preference analysis, and the balanced scorecard to product planning, enterprise risk management, and internal control structure studies (see Arrington et al. 1984; Boucher and MacStravic 1991; Liberatore et al. 1992; Hardy and Reeve 2000; Huang et al. 2011; Li et al. 2011), little attention has been devoted to an analytical and comprehensive business performance model that covers a broader base of measures in the changing accounting information systems environment. Although Chan and Lynn (1991) originally investigated the
application of the AHP in business performance evaluation, the problem of structuring the decision hierarchy in an appropriate manner is yet to be explored. The methodology proposed in this chapter will resolve this issue.
29.2.1 Hierarchical Schema for Performance Measurement

The AHP is a theory of measurement that has been extensively applied in modeling the human judgment process (e.g., Lee 1993; Muralidhar et al. 1990). It decomposes a complex decision operation into a multilevel hierarchical structure. The primary advantages of the AHP are its simplicity and the availability of software. Several desirable features of the AHP can help resolve issues in performance evaluation. For example, nonfinancial and external effects can be identified and integrated with financial and internal aspects of business performance through the AHP. Furthermore, the AHP is a participation-oriented methodology that can aid the coordination and synthesis of multiple evaluators in the organizational hierarchy. Participation makes a positive contribution to the quality of the performance evaluation process. This point is further explored within the context of the hierarchical schema of performance evaluation as follows.

Performance measures are related to management levels. They need to be filtered at each superior/subordinate level in an organization; i.e., measures do not need to be the same across management levels. Performance measures at each level, however, should be linked to performance measures at the next level up, and performance measurement information is tailored to match the responsibility of each management level. For example, at the highest level, the CEO has responsibility for the performance of the total business. In contrast, the production manager's main interest may be cost control because he or she is responsible for it. Taking this idea into account leads to the notion that the performance measuring process consists of different levels corresponding to management levels. Therefore, a three-level management model is suggested: top, middle, and operational.

To recognize the ever-increasing importance of nonfinancial measures, at the top management level the hierarchy consists of two criteria: nonfinancial and financial performance. One level lower (middle-level management) includes performance criteria such as market share, customer satisfaction, productivity, ROI, and profitability. The lowest level (operational management level) includes measures that lead to simplifications in the manufacturing process as related to high-level performance measures; examples are quality, delivery, cycle time, inventory turnover, asset turnover, and cost. Typically, the criteria at the lowest level may have several sub-criteria. For example, quality measures may have four sub-criteria: voluntary (appraisal and prevention) costs and failure (internal and external) costs. With relation to these criteria, accounting information is typically controlled with respect to each product (or service) or division (or department). As a result, product and division levels can be added. One should keep in mind that the above three-level hierarchical schema is dynamic over time. As a company evolves, the hierarchy must be adjusted accordingly.
Another interesting aspect is that the hierarchy is not company invariant. It must be adjusted to the unique situation faced by each individual company or division. Basically, the hierarchical schema is designed to accommodate any number of levels and alternatives: new levels or alternatives can easily be added to or deleted from the hierarchy once it is introduced. For example, another level for products may be added at the bottom in order to evaluate the performance of different products.
29.2.2 Analytic Hierarchical Performance Model

Based on the hierarchical structure, the performance indices at each level of the AHPM are derived by the analytic hierarchy process. The AHPM collects input judgments in the form of a matrix of pairwise comparisons of criteria. An eigenvalue method is then used to scale the weights of the criteria at each level; i.e., the relative importance of each criterion at each level is obtained. The relative importance is defined as a performance index with respect to each alternative (i.e., criterion, product, or division). Each step of the AHPM in obtaining the weights is explored below.

First, a set of useful strategic criteria must be identified. Let $n_t$ be the total number of criteria under consideration at the top management level; typically $n_t = 2$ (nonfinancial and financial measures). The relative weight of each criterion may be evaluated by pairwise comparison; i.e., two criteria are compared at a time until all combinations have been considered (only one pairwise comparison is needed if $n_t = 2$). The experience of many users of this method and the experiments reported in Saaty (1980) support the view that a 1–9 scale for pairwise comparisons captures human judgment fairly well, although the scale can be altered to suit each application. The results of all pairwise comparisons are stored in an input matrix

$$A_t = (a_{ij}^t), \quad \text{an } n_t \times n_t \text{ matrix}.$$

The element $a_{ij}^t$ states the importance of alternative $i$ compared to alternative $j$. For instance, if $a_{12}^t = 2$, then criterion 1 is twice as important as criterion 2. Applying an eigenvalue method to $A_t$ yields a vector $W_t = (w_i^t)$ with $n_t$ elements. In addition to the vector, the inconsistency ratio ($\gamma$) is obtained to estimate the degree of inconsistency in the pairwise comparisons. The common guideline is that if the ratio surpasses 0.1, a new input matrix must be generated. Generally speaking, each element of the vector resulting from the eigenvalue method is the estimated relative weight of the corresponding criterion at one level with respect to the level above; i.e., the element $w_i^t$ is the relative weight of the $i$th criterion at this level.

At the second level of the hierarchy, consider the $i$th criterion of the top management level. We then have one input matrix of pairwise comparisons of criteria (at the middle management level) that corresponds to the $i$th criterion of the top management level. The result is stored in an $n_m \times n_m$ matrix $A_m^i$, where $n_m$ is the total number of criteria at this level. Applying the eigenvalue method to $A_m^i$ results in the relative
weight of each criterion with respect to the $i$th criterion one level higher. This "local relative weight" is stored in a local weighting vector $W_m^i$; we need $n_t$ local weighting vectors at this level. The "global relative weight" at this level is then computed and stored in a vector $W_m = (w_{mi})$ with $n_m$ elements:

$$W_m = W_t \begin{bmatrix} W_m^1 \\ \vdots \\ W_m^{n_t} \end{bmatrix}.$$

The same computing process continues at the operational management level until we have all global relative weights $W_o$. A prototype of the AHPM with a three-level hierarchy was built with commercial software for the analytic hierarchy process, called Expert Choice (Forman et al. 1985).

The above relative weights can be utilized in a number of ways for performance measurement. Clearly, they imply the relative importance among criteria at each level. For example, consider an automobile company where market share and ROI are two important criteria in performance evaluation. If the AHPM generates their weights as 0.75 and 0.25, respectively, it is reasonable to conclude that market share affects the company's performance three times as much as ROI. The weights can also be used as a measure for allocating future resources to products or divisions. Assume the automobile company produces two types of autos, sedans and minivans. If the AHPM generates global relative weights of 0.8 and 0.2, respectively, i.e., the performance of sedans is four times that of minivans, then this may provide a good reason for the CEO to invest four times as much in sedans as in minivans.

One important performance control measure is the rate of performance change, which can be computed at any level. This measure is useful in estimating the elasticity of the performance of any alternative, and this elasticity can aid resource allocation decisions; i.e., further resources may be assigned to more elastic products or divisions. As an example of computing this elasticity, at the middle management level the rate is

$$e_m = \sum_{i=1}^{n_m} w_{mi} \, \Delta C_{mi} / C_{mi},$$

where $\Delta C_{mi}$ is the amount of change in the $i$th criterion. For example, if ROI increases from 5 % to 7 % and market share changes from 20 % to 25 % in the automobile company discussed above,

$$e_m = 0.25 \times \frac{2}{5} + 0.75 \times \frac{5}{20} = 0.2875.$$

We may conclude that overall business performance has increased by 28.75 % with respect to the middle management level. Similarly, the performance change rates at any other level can be obtained. Typically, they vary depending on the level of the AHPM; i.e., they have different implications.
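The computations above are easy to reproduce with standard linear-algebra tools. The following minimal sketch, which is not part of the original chapter and is written in Python with NumPy purely for illustration, extracts the principal-eigenvector weights and Saaty's consistency ratio from a reciprocal pairwise comparison matrix and then repeats the performance-change-rate calculation. The function name ahp_weights and the small random-index table are our own choices; the demonstration matrix is the nonfinancial comparison matrix used in the example of Sect. 29.2.3 below.

import numpy as np

def ahp_weights(A):
    """Return (weights, consistency_ratio) for a reciprocal pairwise comparison matrix A."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = int(np.argmax(eigvals.real))          # index of the principal eigenvalue
    w = np.abs(eigvecs[:, k].real)
    w = w / w.sum()                           # normalized priority weights
    lam_max = eigvals[k].real
    ci = (lam_max - n) / (n - 1)              # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24}.get(n, 1.32)  # Saaty's random index
    cr = ci / ri if n > 2 else 0.0            # a 2 x 2 reciprocal matrix is always consistent
    return w, cr

# Nonfinancial comparison matrix used in the example of Sect. 29.2.3 below
A1m = [[1,   2,   4],
       [1/2, 1,   3],
       [1/4, 1/3, 1]]
w, cr = ahp_weights(A1m)
print(np.round(w, 3), round(cr, 3))   # roughly [0.558 0.32 0.122]; CR close to the 0.017 reported below

# Performance change rate (elasticity) at the middle management level
w_m    = np.array([0.75, 0.25])       # weights: market share, ROI
change = np.array([5/20, 2/5])        # relative changes: market share, ROI
print(w_m @ change)                   # 0.2875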
29.2.3 An Example

The suitability of the AHPM is illustrated with a study of a hypothetical automobile company. Nonfinancial criteria include market share, customer satisfaction, and productivity; financial criteria include ROI and profitability. At the operational level, six criteria are considered: quality, delivery, cycle time, inventory turnover, asset turnover, and cost.

First, the nonfinancial and financial criteria are compared and the result is stored in a vector:

$$W_t = (0.4, 0.6).$$

Next, the middle management level is considered. The relative weights of the middle-level performance criteria with respect to each top-level criterion are to be computed. First, the local relative weights are computed. For the nonfinancial criteria,

$$A_m^1 = \begin{bmatrix} 1 & 2 & 4 \\ 1/2 & 1 & 3 \\ 1/4 & 1/3 & 1 \end{bmatrix}.$$

For the sake of convenience, ROI and profitability are not included in this comparison matrix. The lower triangle need not be elicited separately because, in the eigenvalue method, it is simply the reciprocal of the upper triangle. As a result,

$$W_m^1 = (0.558, 0.320, 0.122).$$

For each weight computation, an inconsistency ratio is computed and checked for acceptance; in this case, the ratio ($\gamma = 0.017$) is acceptable because $\gamma \le 0.1$. For the financial criteria, ROI is estimated to be twice as important as profitability, i.e.,

$$W_m^2 = (0.667, 0.333).$$

Accordingly, the global relative weights of the managerial criteria (in the order of market share, customer satisfaction, productivity, ROI, and profitability) are

$$W_m = (0.4, 0.6) \begin{bmatrix} 0.558 & 0.320 & 0.122 & 0 & 0 \\ 0 & 0 & 0 & 0.667 & 0.333 \end{bmatrix} = (0.223, 0.128, 0.049, 0.400, 0.200).$$

Here, the global relative weights of market share, customer satisfaction, productivity, ROI, and profitability are 22.3 %, 12.8 %, 4.9 %, 40 %, and 20 %, respectively. Note that these percentages are the elements of the above $W_m$.
Let us move down to the operational level. The following are the local relative weights of the operational-level criteria (in the order of quality, delivery, cycle time, cost, inventory turnover, and asset turnover). For market share,

$$A_o^1 = \begin{bmatrix} 1 & 2 & 3 & 3 & 5 & 7 \\ 1/2 & 1 & 2 & 3 & 4 & 5 \\ 1/3 & 1/2 & 1 & 2 & 4 & 5 \\ 1/3 & 1/3 & 1/2 & 1 & 2 & 3 \\ 1/5 & 1/4 & 1/4 & 1/2 & 1 & 2 \\ 1/7 & 1/5 & 1/5 & 1/3 & 1/2 & 1 \end{bmatrix}.$$

For convenience, the local weights are arranged in the order of quality, cycle time, delivery, cost, inventory turnover, and asset turnover:

$$W_o^1 = (0.372, 0.104, 0.061, 0.250, 0.174, 0.039).$$

Similarly,

$$W_o^2 = (0.423, 0.038, 0.270, 0.185, 0.055, 0.029),$$
$$W_o^3 = (0.370, 0.164, 0.106, 0.260, 0.064, 0.037),$$
$$W_o^4 = (0.220, 0.060, 0.030, 0.414, 0.175, 0.101),$$
$$W_o^5 = (0.246, 0.072, 0.065, 0.334, 0.160, 0.122).$$

Consequently, we get the global relative weights of the operational criteria as

$$W_o = (0.223, 0.128, 0.049, 0.400, 0.200) \begin{bmatrix} 0.372 & 0.104 & 0.061 & 0.250 & 0.174 & 0.039 \\ 0.423 & 0.038 & 0.270 & 0.185 & 0.055 & 0.029 \\ 0.370 & 0.164 & 0.106 & 0.260 & 0.064 & 0.037 \\ 0.220 & 0.060 & 0.030 & 0.414 & 0.175 & 0.101 \\ 0.246 & 0.072 & 0.065 & 0.334 & 0.160 & 0.122 \end{bmatrix} = (0.292, 0.074, 0.078, 0.324, 0.151, 0.079).$$

Finally, the relative weights of the operational-level performance measures are 29.2 %, 7.4 %, 7.8 %, 32.4 %, 15.1 %, and 7.9 %, respectively. One should note that nonfinancial measures are integrated with financial measures in the scaling process. Our automobile company can adopt these performance measures for further analysis. The measures serve as the basis for the rate of performance change, and they can be used further to evaluate the performance of each product if a product level is connected to the operational management level in the AHPM.
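The two matrix products above are easy to check numerically. The short sketch below is an illustration we add here, not part of the original example; it reproduces the global weights W_m and W_o from the local weight vectors with NumPy.

import numpy as np

W_t = np.array([0.4, 0.6])                      # nonfinancial, financial

# Local middle-level weights, padded to the full list of five criteria
# (market share, customer satisfaction, productivity, ROI, profitability)
L_m = np.array([[0.558, 0.320, 0.122, 0.0,   0.0  ],
                [0.0,   0.0,   0.0,   0.667, 0.333]])
W_m = W_t @ L_m
print(np.round(W_m, 3))       # [0.223 0.128 0.049 0.4   0.2  ]

# Local operational-level weights for each middle-level criterion
L_o = np.array([[0.372, 0.104, 0.061, 0.250, 0.174, 0.039],
                [0.423, 0.038, 0.270, 0.185, 0.055, 0.029],
                [0.370, 0.164, 0.106, 0.260, 0.064, 0.037],
                [0.220, 0.060, 0.030, 0.414, 0.175, 0.101],
                [0.246, 0.072, 0.065, 0.334, 0.160, 0.122]])
W_o = W_m @ L_o
print(np.round(W_o, 3))       # approx. [0.292 0.074 0.078 0.324 0.151 0.079]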
29.2.4 Discussions

It is too optimistic to argue that there can be one right way of measuring performance. Two factors must be considered. First, each business organization requires its own unique set of measures depending on its environment; what is most effective for a company depends upon its history, culture, and management style (Eccles 1991). To be used for any business performance measurement, a well-designed model must be flexible enough to incorporate a variety of measures while retaining the major aspects. Second, managers should change measures over time. In the current competitive business environment, change is not incidental; it is essential, and it must be managed. The AHPM is highly flexible, so managers may adjust its structure to evolving business environments. This flexibility will allow a company to improve its performance measurement system continuously.

Generally, a performance measurement system exists to monitor the implementation of an organization's plans and to help motivate desirable individual performance through realistic communication of performance information related to business goals. This premise of performance measurement requires a significant amount of feedback and corrective action in the practice of accounting information systems (Nanni et al. 1990), i.e., feedback between levels and also within each level. In the implementation of the AHPM, these activities are common procedures. A clean separation of levels in the hierarchy of the AHPM is an idealization that simplifies the presentation; the flexibility of the AHPM, however, accommodates as much feedback and as many corrective actions as needed.

Performance is monitored by the group rather than by an individual because of the ever-increasing importance of teamwork in the success of businesses. The integrated structure of the AHPM facilitates group decisions and thus increases the chance that managers trust the resulting measures. This structure will encourage greater involvement of lower-level managers as well as workers when a firm implements the AHPM. Furthermore, the hierarchy of the AHPM corresponds to that of the business organization, so group decisions at each level are facilitated. For example, middle-level managers will be responsible for determining weights for middle-level criteria, while lower-level managers will be responsible for operational-level criteria. The iterative process of weighing goals among managers, as the implementation of the AHPM progresses, will help them understand which strategic factors are important and how these factors are linked to other goals so that the company can succeed as a group.

Information technology plays a critical role in designing a performance measurement system that provides timely information to management. A computer-based decision support system can be used to apply this conceptual foundation in a real-world situation. The AHPM can easily be stored in an enterprise database through the commercial software Expert Choice; as a result, it can be readily available to each management level via the enterprise's network. The AHPM fits any corporate information architecture in pursuit of the company's long-term strategy.
29.2.5 Conclusions

A well-designed performance model is a must for an enterprise seeking a competitive edge. The performance measurement model proposed in this study is the first analytical model of its kind to cover a wide variety of measures while providing operational control as well as strategic control. Compared with previous evaluation methods, the model offers advantages such as flexibility, feedback, group evaluation, and computational simplicity. A prototype was built on a personal computer so that the model can be applied to any business situation. The possibility of real-time control is important in the competitive business environment we face today. In sum, the contribution of this study is of both conceptual and practical importance.
29.3 Developing a Comprehensive Performance Measurement System in the Banking Industry: An Analytic Hierarchy Approach

The objective of this study is to design a practical model for a comprehensive performance measurement system that incorporates strategic success factors in the banking industry. The performance measurement system proposed here can be used to evaluate top managers of a main bank or managers of a branch office. Performance measurement should be closely tied to an organization's goal setting because it feeds back information to the system on how well strategies are being implemented (Chan and Lynn 1991; Eddie et al. 2001). A balanced scorecard approach (Kaplan and Norton 1992; Tapinos et al. 2011) is currently a popular topic in this area, but it does not systematically aggregate performance within each level or across different levels of managers for the overall company. In other words, there is no systematic linkage between financial and nonfinancial measures across the different levels of the management hierarchy. A traditional performance measurement system that focuses on financial measures such as return on assets (ROA), however, may not serve this purpose well for middle- or lower-level managers in the new competitive business environment either.

The model proposed in this study has a broader set of measures that incorporates traditional financial performance measures, such as return on assets and the debt to equity ratio, as well as nonfinancial performance measures, such as the quality of customer service and productivity. Using Saaty's analytic hierarchy process (AHP) (Saaty 1980; Harker and Vargas 1987), the model demonstrates how multiple performance criteria can be systematically incorporated into a comprehensive performance measurement system. The AHP enables decision makers to structure a problem as a hierarchy of its elements according to an organization's structure or ranks of management levels and to capture managerial decision preferences through a series of comparisons of relevant factors or criteria. The AHP has been applied recently to several business problems, e.g., divisional performance evaluation (Chan and Lynn 1991), capital budgeting
(Liberatore et al. 1992), real estate investment (Kamath and Khaksari 1991), marketing applications (Dyer and Forman 1991), information system project selection (Schniederjans and Wilson 1991), activity-based costing cost driver selection (Schniederjans and Garvin 1997), developing lean performance measures (DeWayne 2009), and customer choice analysis in retail banking (Natarajan et al. 2010). The AHP is relatively easy to use, and commercial software for it is available. This study provides an analytical and comprehensive performance evaluation model that covers a broader base of measures in the rapidly changing environment of today's banking industry. The following section presents the background. The third section discusses the methodology, and the fourth section presents a numerical example. The last section summarizes and concludes the chapter.
29.3.1 Background

A recent research topic guide from the Institute of Management Accountants lists "performance measurement" as one of its top-priority research issues. This project should therefore be of interest to bank administrators as well as managerial accountants. With unprecedented competitive pressure from nonbanking institutions, deregulation, and the rapid acquisition of smaller banks by large national or regional banks, the most successful banks in the new millennium will be the ones that adapt strategically to a changing environment (Calvert 1990). The management accounting and finance literature has emphasized using both financial and nonfinancial measures as performance guidelines in the new environment (e.g., Chan and Lynn 1991; Rotch 1990). However, most studies do not propose specifically how these financial and nonfinancial factors should be incorporated into a formal model.

The performance measurement system proposed in this study is a formal model that applies the AHP in the banking industry to cover a wide variety of measures while providing operational control as well as strategic control. The AHP can incorporate multiple subjective goals into a formal model (Dyer and Forman 1991). Unless we design a systematic performance measurement system that includes financial as well as nonfinancial control factors, employees may behave incorrectly because they misunderstand the organization's goals and how those goals relate to their individual performance. Compared with previous evaluation methods, the model proposed in this study has advantages such as flexibility, continuous feedback, teamwork in goal setting, and computational simplicity. To be used for any business performance measurement, a well-designed model must be flexible enough to incorporate a variety of measures while retaining the major success factors. The AHP model is flexible enough for managers to adjust its structure to a changing business environment through an iterative process of weighing goals. This flexibility will allow a company to improve its performance measurement system continuously. Through the iterative process of goal comparisons, management can obtain continuous feedback on the priority of goals and work as a team. The possibility of real-time control is important in the competitive business environment we face today.
29.3.2 Methodology

The analytic hierarchy process (AHP) collects input judgments in the form of a matrix of pairwise comparisons; i.e., two criteria are compared at a time. The experience of many users of this method supports the use of a 1–9 scale for pairwise comparisons to capture human judgment, although the scale can be altered to fit each application (Saaty 1980).

A simple example explains how the AHP operates. Consider a situation where a senior executive has to decide which of three managers to promote to a senior position in the firm. The candidates' profiles have been studied and rated on three criteria: leadership, human relations skills, and financial management ability. First, the decision maker compares the criteria in pairs to develop a ranking of the criteria. In this case, the comparisons would be:
1. Is leadership more important than human relations skills for this job?
2. Is leadership more important than financial management ability for this job?
3. Are human relations skills more important than financial management ability for this job?
The responses to these questions would provide an ordinal ranking of the three criteria. By adding a ratio scale of 1–9 for rating the relative importance of one criterion over another, the decision maker can make statements such as "leadership is four times as important as human relations skills for this job," "financial management ability is three times as important as leadership," and "financial management ability is seven times as important as human relations skills." These pairwise comparisons can be summarized in a square matrix, and the preference vectors are then computed to determine the relative rankings of the three criteria in selecting the best candidate. For example, the preference weights of the three criteria are 0.658 for financial management ability, 0.263 for leadership, and 0.079 for human relations skills.

Once the preference vector of the criteria is determined, the candidates can be compared on each criterion in the following manner:
1. Is candidate A superior to candidate B in leadership skills?
2. Is candidate A superior to candidate C in leadership skills?
3. Is candidate B superior to candidate C in leadership skills?
Again, rather than using an ordinal ranking, the degree of superiority of one candidate over another can be assessed. The same procedure is applied to human relations skills and financial management ability. The responses to these questions are summarized in matrices from which the preference vectors are again computed to determine the relative ranking of the three candidates on each criterion. Accordingly, the best candidate should be the one who ranks high on the more important criteria. Multiplying the matrix of candidate preference vectors on the evaluation criteria by the preference vector of the evaluation criteria gives the final ranking of the candidates; in this example, the candidates are ranked A, B, and C. This example demonstrates the usefulness of the AHP for setting priorities over both qualitative and quantitative measures (Chan and Lynn 1991).

We can apply the same procedures to a bank. Based on the hierarchical structure of a banking institution, the relative weights of criteria at each level of managers are derived by the AHP. Here the relative importance of performance
measures can be defined as a performance index with respect to each alternative (i.e., criterion, service, or branch office). For example, the relative weights of financial and nonfinancial criteria at the highest management level are derived through pairwise comparisons of the criteria. Next, the relative weights of the middle-level performance criteria with respect to each top-level criterion are computed, and these relative weights are entered in matrix form. Finally, this matrix is multiplied by the relative weights of the top management level criteria. The same procedure can be applied to the next lower management level. This AHP approach systematically links the hierarchical structure of business performance measurement across the different levels of the organizational structure.

Consider a local bank where market share and return on assets are two important criteria in performance evaluation. If the AHP generates their weights as 0.4 and 0.6, respectively, it is reasonable to conclude that return on assets affects the bank's performance one and a half times as much as market share. The weights can be used as a measure for allocating future resources to products or branch offices. Assume the bank has two types of services, commercial loans and residential mortgage loans. If the AHP generates relative weights of 0.8 and 0.2 (i.e., the performance of commercial loans is four times that of mortgage loans), then this may provide a good reason for top management to invest four times as many resources in the commercial loan market as in the residential loan market. The use of the AHP in a multiple-criteria situation is superior to ad hoc weighting because it forces the decision maker to focus on one pair of criteria at a time and on the way in which they are related to each other (Saaty 1980). A model can be built using a microcomputer program called Expert Choice so that it can be applied to any bank easily.
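As an informal check of the promotion example above, the stated judgments (leadership four times as important as human relations skills, financial management ability three times as important as leadership and seven times as important as human relations skills) can be assembled into a reciprocal matrix and scaled with the eigenvalue method. The sketch below is ours, written in Python with NumPy for illustration only; it yields weights close to, though not exactly equal to, the rounded values reported in the text.

import numpy as np

# Criteria order: leadership, human relations skills, financial management ability
A = np.array([[1.0,  4.0, 1/3],
              [1/4,  1.0, 1/7],
              [3.0,  7.0, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = int(np.argmax(eigvals.real))      # principal eigenvalue
w = np.abs(eigvecs[:, k].real)
w /= w.sum()
print(np.round(w, 3))   # approx. [0.263 0.079 0.659], close to the 0.263, 0.079, and 0.658 above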
29.3.3 A Numerical Example

This section presents a numerical example for a commercial bank. The Commercial Omaha Bank (COB) is a local bank that specializes in commercial loans. Its headquarters are located in Omaha, Nebraska, and it has several branch offices throughout rural areas of Nebraska. The top management of COB realized that the current measurement system was not adequate for strategic performance management and identified the following measures, based on the hierarchy of the organization, for a new performance measurement system using the AHP. These measures are shown in Table 29.1. The COB uses financial criteria such as return on assets and debt to equity and nonfinancial criteria such as market share, productivity, and quality of service. At the lowest management level, income to interest expense, service charges, interest revenue, growth of deposits, default ratio, and customer satisfaction can be used.

Each computing step of the AHP is discussed as follows. First, the nonfinancial and financial criteria are compared and the result is entered in a vector:

$$W_t = (0.5, 0.5).$$
Table 29.1 New performance measures

Level of organization
High: Financial measures; Nonfinancial measures
Middle: Return on assets and Debt to equity (financial); Market share, Productivity, and Quality (nonfinancial)
Low: Income to interest expenses, Service charges, Interest revenue, Growth of deposits, Default ratio, and Customer satisfaction
Next, the mid-level management is considered. The relative weight of each mid-level performance criterion with respect to each top-level criterion is to be computed. Here, the local relative weights are computed. For the nonfinancial criteria,

$$A_m^1 = \begin{bmatrix} 1 & 3 & 4 \\ 1/3 & 1 & 3 \\ 1/4 & 1/3 & 1 \end{bmatrix}.$$

Here, market share is estimated to be three times as important as productivity and four times as important as the quality of service, and productivity is estimated to be three times as important as the quality of service. From this result,

$$W_m^1 = (0.608, 0.272, 0.120).$$

For each weight computation, an inconsistency ratio ($\gamma$) is computed and checked against the acceptance level: if $\gamma \le 0.1$, it is acceptable. For this example, it is acceptable since $\gamma = 0.065$. If it is not acceptable, the input matrix should be adjusted and recomputed. For the financial criteria, ROA is estimated to be three times as important as the debt to equity ratio. Therefore,

$$W_m^2 = (0.75, 0.25).$$

The global relative weights of the criteria are

$$W_m = (0.5, 0.5) \begin{bmatrix} 0.608 & 0.272 & 0.120 & 0 & 0 \\ 0 & 0 & 0 & 0.75 & 0.25 \end{bmatrix} = (0.304, 0.136, 0.060, 0.375, 0.125).$$

Here the global relative weights of market share, productivity, quality of service, ROA, and the debt to equity ratio are 30.4 %, 13.6 %, 6 %, 37.5 %, and 12.5 %, respectively.
Let us move to the lower-level managers. Income to interest expense, service charges, interest revenue, growth of deposits, default ratio, and customer satisfaction are the criteria at this level. For market share,

$$A_o^1 = \begin{bmatrix} 1 & 2 & 3 & 4 & 5 & 6 \\ 1/2 & 1 & 2 & 3 & 4 & 5 \\ 1/3 & 1/2 & 1 & 3 & 4 & 5 \\ 1/4 & 1/3 & 1/3 & 1 & 3 & 4 \\ 1/5 & 1/4 & 1/4 & 1/3 & 1 & 3 \\ 1/6 & 1/5 & 1/5 & 1/4 & 1/3 & 1 \end{bmatrix}.$$

For simplicity of presentation, the local weights are arranged in the order of income to interest expense, service charges, interest revenue, growth of deposits, default ratio, and customer satisfaction:

$$W_o^1 = (0.367, 0.238, 0.183, 0.109, 0.065, 0.038).$$

Here the local weights with respect to market share are 36.7 %, 23.8 %, 18.3 %, 10.9 %, 6.5 %, and 3.8 %, respectively. For the other criteria at one level higher, the local weights can be calculated in the same way. These are

$$W_o^2 = (0.206, 0.163, 0.179, 0.162, 0.143, 0.146),$$
$$W_o^3 = (0.155, 0.231, 0.220, 0.103, 0.169, 0.122),$$
$$W_o^4 = (0.307, 0.197, 0.167, 0.117, 0.117, 0.094),$$
$$W_o^5 = (0.266, 0.133, 0.164, 0.159, 0.154, 0.124).$$

For the next step, the global relative weights of the lower-level management criteria are

$$W_o = (0.304, 0.136, 0.060, 0.375, 0.125) \begin{bmatrix} 0.367 & 0.238 & 0.183 & 0.109 & 0.065 & 0.038 \\ 0.206 & 0.163 & 0.179 & 0.162 & 0.143 & 0.146 \\ 0.155 & 0.231 & 0.220 & 0.103 & 0.169 & 0.122 \\ 0.307 & 0.197 & 0.167 & 0.117 & 0.117 & 0.094 \\ 0.266 & 0.133 & 0.164 & 0.159 & 0.154 & 0.124 \end{bmatrix} = (0.297, 0.199, 0.176, 0.125, 0.112, 0.089).$$
Finally, the relative weights of the lower-level performance measures are 29.7 %, 19.9 %, 17.6 %, 12.5 %, 11.2 %, and 8.9 %, respectively. Note that financial measures are integrated with nonfinancial measures in the scaling process. The COB can extend these performance measures to the next lower level for each product using the same method.
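The same kind of numerical check used for the automobile example applies to the bank; the short sketch below (again an illustration we add, in Python with NumPy) reproduces the bank's global lower-level weights from the stated local weight vectors.

import numpy as np

W_m = np.array([0.304, 0.136, 0.060, 0.375, 0.125])   # global mid-level weights
L_o = np.array([[0.367, 0.238, 0.183, 0.109, 0.065, 0.038],
                [0.206, 0.163, 0.179, 0.162, 0.143, 0.146],
                [0.155, 0.231, 0.220, 0.103, 0.169, 0.122],
                [0.307, 0.197, 0.167, 0.117, 0.117, 0.094],
                [0.266, 0.133, 0.164, 0.159, 0.154, 0.124]])
print(np.round(W_m @ L_o, 3))   # approx. [0.297 0.199 0.176 0.125 0.112 0.089]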
29.3.4 Summary and Conclusions

A performance measurement system should incorporate nonfinancial as well as financial measures to foster strategic success factors for a bank in the new environment. Generally, a good performance measurement system should monitor employees' behavior in a positive way and be flexible enough to adapt to the changing environment. To motivate employees, a bank should communicate each individual employee's performance information in relation to overall business goals. This characteristic of performance measurement requires a significant amount of feedback, both between and within levels, and corrective action in the practice of accounting information (Nanni et al. 1990). The AHP model proposed in this study is flexible enough to incorporate the "continuous improvement" philosophy of today's business environment by changing the weighting values of measures. In addition, the integrated structure of the AHP allows group performance evaluation, consistent with the emphasis on teamwork in today's business world. The iterative process of gathering input data in the AHP procedure also helps each manager and employee become aware of the importance of the strategic factors behind each of the bank's performance measures.
29.4 Optimal Trade-offs of Multiple Factors in International Transfer Pricing Problems

Ever since DuPont and General Motors Corporation of the USA initiated transfer pricing systems for the interdivisional transfer of resources among their divisions, many large organizations, with the creation of profit centers, have used a transfer pricing system in one way or another. Recently, transfer pricing problems have become more important because most corporations have dramatically increased transfers of goods or services among their divisions as a result of restructuring or downsizing their organizations (Tang 1992). Therefore, designing a good transfer pricing strategy should be a major concern for both top management and divisional managers (Curtis 2010).

Transfer pricing problems have been extensively studied by a number of scholars. Many of them have recognized that a successful transfer pricing strategy should consider multiple criteria (objectives), such as overall profit, total market share, divisional autonomy, performance evaluation, and utilized production capacity (Abdel-khalik and Lusk 1974; Bailey and Boe 1976; Merville and Petty 1978; Yunker 1983; Lecraw 1985; Lin et al. 1993). However, few of the methods developed
have the capability to deal with all possible optimal trade-offs of multiple criteria in the optimal solutions of the models when multiple decision makers are involved.

In this chapter, we propose a multiple factor model to provide managers from different backgrounds, who are involved in the transfer price decision making of a multidivisional corporation, with a systematic and comprehensive picture of all possible optimal transfer prices depending on both multiple criteria and multiple-constraint levels (in short, multiple factors). The trade-offs among the optimal transfer prices, which have rarely been considered in the literature, can be used as a basis for corporate managers to make high-quality decisions in selecting their transfer pricing systems for business competition.

This chapter proceeds as follows. First, existing transfer pricing models are reviewed. Then, the methodology for formulating and solving a transfer pricing model with multiple factors is described. A prototype of a transfer pricing problem in a corporation is illustrated to explain the implications of the multiple factor transfer pricing model. Finally, conclusions and remaining research problems are presented.
29.4.1 Existing Transfer Pricing Models

In the transfer pricing literature, the various approaches can be categorized into four groups: (i) market-based pricing, (ii) accounting-based pricing, (iii) marginal cost pricing (or opportunity cost pricing), and (iv) negotiation-based pricing.

Market-based prices are ideal transfer prices when external market prices are available. Even though empirical research has found that some corporations prefer cost-based prices to market-based prices, the market-based pricing method is recommended when the emphasis is on the motivation of divisional managers (Borkowski 1990). Pricing intermediate goods at the market price will motivate the supplying division to reduce its costs to achieve efficiency and will preserve divisional autonomy for both the supplying division and the purchasing division. Statistics show that almost a third of corporations actually use market-based transfer pricing (Tang 1992). However, if there is no outside market for the intermediate goods or services, then accounting-based pricing, marginal cost pricing (economic models and mathematical programming techniques), or negotiation-based pricing (a behavioral approach) is commonly recommended for finding a transfer pricing system.

In the accounting-based pricing approach, the divisional managers simply use accounting measurements of the divisions, such as full costs or variable costs, as their transfer prices. Thus, the transfer price of one division may differ from that of another division, and these transfer prices may not be globally optimal for the corporation as a whole (Abdel-khalik and Lusk 1974; Eccles 1983).

In marginal cost pricing approaches, Hirshleifer (1956) recommended the use of an economic model that sets the transfer price at the manufacturing division's marginal cost to achieve the globally optimal output. A problem with this economic model is that it can destroy the divisional manager's autonomy, and the supplying division may not
get the benefit of its efficiencies. Moreover, the manufacturing division manager in this case should be evaluated based on cost, not on profit. Similarly, Gould (1964) and Naert (1973) recommended economic models based on current entry and current exit prices. Their models also focus on global profit maximization and have the same problems as Hirshleifer's. Note that when the transfer price is set based on marginal costs, the division should be either controlled as a standard cost center or merged into a larger profit center with a division that processes the bulk of its output.

Ronen and McKinney (1970) suggested dual prices in which a subsidy is given to the manufacturing division by the central office in addition to marginal costs. This subsidy would not be added to the price charged to the purchasing division. They believed that autonomy is enhanced because the corporate office is only a transmitter of information, not a price setter, and that the supplying division has the same autonomy as an independent supplier. However, there is a potential for gaming in which all divisions are winners but the central office is a loser. There is also a "marginal cost plus" approach that charges variable manufacturing costs (usually standard variable costs), supplemented by periodic lump-sum charges to the transferees to cover the transferor's fixed costs, its fixed costs plus profit, or some kind of subsidy. It is difficult, however, to set a fixed fee that will satisfy both the supplying division and the purchasing division. This method makes the purchasing division absorb all the uncertainties caused by fluctuations in the marketplace. Moreover, the system begins to break down if the supplying division is operating at or above normal capacity, since variable costs no longer represent the opportunity costs of additional transfers of goods and services (Onsi 1970).

As an alternative method for identifying marginal costs as the transfer prices, a mathematical programming approach becomes more attractive for the transfer pricing problem because it handles complex situations in a trade setting (Dopuch and Drake 1964; Bapna et al. 2005). The application of linear programming to the transfer pricing problem is based on the relationship between the primal and dual solutions of the linear programming problem. Shadow prices, which reflect the input values of scarce resources (or opportunity costs) implied in the primal problem, can be used as the basis for a transfer price system. However, these transfer prices have the following limitations for decision making: (i) transfer prices based on the dual values of a solution tend to reward divisions with scarce resources, (ii) the linear formulation requires a great deal of local information, and (iii) transfer prices based on shadow prices do not provide a guide for the performance evaluation of divisional managers.

Based on the mathematical decomposition algorithm developed by Dantzig and Wolfe (1960), Baumol and Fabian (1964) demonstrated how to reduce complex optimization problems into sets of smaller problems solvable by the divisions and the central office (corporation). Although the final analysis of output decisions is made by the central manager, the calculation process is sufficiently localized that central management does not have to know anything about the internal technological arrangements of the divisions. However, this approach does not permit divisional autonomy.
Ruefli (1971) proposed a generalized goal decomposition model for incorporating multiple criteria (objectives) and some behavioral aspects within a three-level
hierarchical organization into the mathematical formulation of transfer pricing problems. Bailey and Boe (1976) suggested another goal programming model as a supplement to Ruefli's model. Both models overcome the shortcoming of linear programming in dealing with multiple criteria and the organizational hierarchy. Merville and Petty (1978) used a dual formulation of goal programming to directly find the shadow price as the optimal transfer price for multinational corporations. However, the optimal solution of these models that yields the transfer prices is determined by a particular distance (norm) function because of the mathematical structure of goal programming. Thus, the optimal solution represents only a single optimal trade-off of the multiple criteria, not all possible optimal trade-offs.

Watson and Baumler (1975) and Thomas (1980) criticized the lack of behavioral considerations in the mathematical programming approaches. They suggested negotiated transfer pricing to further integration in the presence of differentiation within the organization. The differentiation of the organization exists because, for example, different managers can interpret the same organizational problem differently. Generally, negotiated transfer pricing will be successful under conditions such as the existence of some form of outside market for the intermediate product, freedom to buy or sell outside, and the sharing of all market information among the negotiators. Negotiated prices may require iterative exchanges of information with the central office as part of a mathematical programming algorithm.
29.4.2 Methodology

29.4.2.1 MC2 Linear Programming
Practically speaking, since linear programming has only a single criterion (objective) and a single resource availability level (right-hand side), it has limitations in handling real-world transfer pricing problems. For instance, linear programming cannot be used to solve a problem in which a corporation tries to maximize overall profit and total market share simultaneously. This dilemma is overcome by a technique called multiple-criteria (MC) linear programming (Zeleny 1974; Goicoechea et al. 1982; Steuer 1986; Yu 1985). To extend the framework of MC linear programming, Seiford and Yu (1979) and Yu (1985) formulated a model of multiple-criteria and multiple-constraint level (MC2) linear programming. This model is rooted in two observations. First, from the point of view of the linear system structure, the criteria and constraints may be "interchangeable." Thus, like multiple criteria, multiple-constraint (resource availability) levels can be considered. Second, from the point of view of applications, it is more realistic to consider multiple resource availability levels (discrete right-hand sides) than a single resource availability level in isolation. The philosophy behind this perspective is that the availability of resources can fluctuate with the forces of the decision situation, such as the resource levels believed desirable by different managers. For example, if the differentiation of the budget among managers in transfer pricing problems (Watson and Baumler 1975) is represented by different budget levels, then this differentiation can be resolved by identifying some best compromise of the budget levels as the consensus budget.
The theoretical connections between MC linear programming and MC2 linear programming can be found in Gyetvan and Shi (1992). Decision problems related to MC2 linear programming have been extensively studied in Lee et al. (1990), Shi (1991), and Shi and Yu (1992). Key ideas of MC2 linear programming, a primary theoretical foundation of this chapter, are outlined as follows. An MC2 linear programming problem can be formulated as

Max λᵗCx
s.t. Ax ≤ Dγ
     x ≥ 0,

where C ∈ R^(q×n), A ∈ R^(m×n), and D ∈ R^(m×p) are matrices of q×n, m×n, and m×p dimensions, respectively; x ∈ R^n are the decision variables; λ ∈ R^q is called the criteria parameter; and γ ∈ R^p is called the constraint level parameter. Both (γ, λ) are assumed unknown. The above MC2 problem has q criteria (objectives) and p constraint levels. If the constraint level parameter γ is known, then the MC2 problem reduces to an MC linear programming problem (e.g., Yu and Zeleny 1975). In addition, if the criteria parameter λ is also known, it reduces to a linear programming problem (e.g., Charnes and Cooper 1961; Dantzig 1963).

Denote the index set of the basic variables {x_j1, . . ., x_jm} for the MC2 problem by J = {j1, . . ., jm}. Note that the basic variables may contain some slack variables. Without confusion, J is also called a basis for the MC2 problem. Since a basic solution J depends on the parameters (γ, λ), define that (i) a basic solution J is feasible for the MC2 problem if and only if there exists a γ0 > 0 such that J is a feasible solution for the MC2 problem with respect to γ0 and (ii) J is potentially optimal for the MC2 problem if and only if there exist a γ0 > 0 and a λ0 > 0 such that J is an optimal solution for the MC2 problem with respect to (γ0, λ0). Let G(J) be the constraint level parameter set of all γ such that the basis J is feasible, and let L(J) be the criteria parameter set of all λ such that the basis J is dual feasible. Then, for a given basis J of the MC2 problem, (i) J is a feasible solution if and only if the set G(J) is not empty and (ii) J is potentially optimal if and only if both sets G(J) and L(J) are not empty.

For an MC2 problem, there may exist a number of potentially optimal solutions {J} as the parameters (γ, λ) vary depending on decision situations. Seiford and Yu (1979) derived a simplex method to systematically locate the set of all potentially optimal solutions {J}. The computer software for this simplex method (called the MC2 software) was developed by Chien et al. (1989). The software consists of five subroutines in each iteration: (i) pivoting, (ii) determining primal potential bases, (iii) determining dual potential bases, (iv) determining the effective constraints for the primal weight set, and (v) determining the effective constraints for the dual weight set (Chap. 8 of Yu 1985). It is written in PASCAL and operates in a man-machine interactive fashion. The user can not only view the tableau of each iteration but also trace the past iterations. In the next section, the framework of the MC2 problem, as well as its software, will be used to formulate and solve the multiple factor transfer pricing problems.
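To make the reduction concrete, the following sketch (an illustration only, not the PASCAL MC2 software described above) shows how, once a particular (γ, λ) pair is fixed, the MC2 problem collapses to an ordinary linear program that a standard solver can handle; all matrices in the example are hypothetical placeholders.

```python
# Minimal sketch: for fixed (gamma, lambda) the MC2 problem
# Max lambda' C x  s.t.  A x <= D gamma, x >= 0  becomes an ordinary LP.
import numpy as np
from scipy.optimize import linprog

def solve_mc2_for_weights(C, A, D, lam, gamma):
    """Scalarize the MC2 problem for fixed (gamma, lambda) and solve it as an LP."""
    c = C.T @ lam          # aggregate the q criteria into a single objective
    b = D @ gamma          # aggregate the p constraint levels into a single RHS
    # linprog minimizes, so negate the objective to maximize lambda' C x
    res = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * A.shape[1], method="highs")
    return res.x, -res.fun

# Hypothetical 2-criteria, 2-constraint-level example (placeholder data)
C = np.array([[3.0, 2.0], [1.0, 4.0]])      # q x n criteria matrix
A = np.array([[1.0, 1.0], [2.0, 1.0]])      # m x n constraint matrix
D = np.array([[10.0, 8.0], [15.0, 12.0]])   # m x p constraint-level matrix
x_opt, value = solve_mc2_for_weights(C, A, D,
                                     lam=np.array([0.6, 0.4]),
                                     gamma=np.array([0.5, 0.5]))
print(x_opt, value)
```

The MC2 simplex method of Seiford and Yu goes further than this sketch by characterizing the sets G(J) and L(J) for every basis rather than solving for one weight pair at a time.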
29.4.2.2 Multiple Factor Transfer Pricing Model
The review of existing transfer pricing models shows that two major shortcomings, from a management point of view, need to be overcome in previous mathematical models of transfer pricing problems. First, neither the linear programming approach nor the goal programming approach can provide a comprehensive scenario of all possible optimal trade-offs between the multiple objectives under consideration for a given transfer pricing problem, such as maximizing the overall profit of a corporation and minimizing the underutilization of production capacity (Tang 1992). A transfer pricing scheme derived by linear programming reflects only a single objective of the corporation. As a result, the linear programming approach cannot help the corporation seek to simultaneously achieve several objectives, some in conflict, in business competition. The transfer price determined by the goal programming approach is one optimal compromise (i.e., trade-off) among several objectives of the corporation. However, it misses other possible optimal compromises of the objectives that result from other linear combinations of the objective weights. These compromises lead to different optimal transfer prices for the different decision situations that the corporation may face. Second, none of the past mathematical models can deal with the organizational differentiation problem, as Watson and Baumler (1975) pointed out. In real-life cases, when a corporation designs its transfer prices for the divisions, the decision makers involved (executives or the members of a task force) can give different opinions on the same issue, such as production capacity and customer demand. In mathematical models, these different interpretations can be represented by different "constraint levels." Because both linear programming and goal programming presume a fixed single constraint level, they fail to describe such an organizational differentiation problem mathematically.

The MC2 linear programming framework can resolve the above shortcomings inherent in previous transfer pricing models. Based on Yunker (1983) and Tang (1992), four important objectives of transfer pricing problems in most corporations are considered: (i) maximizing the overall profit, (ii) maximizing the total market share, (iii) maximizing the subsidiary profit (note that subsidiary profit maximization is used to reflect the degree of subsidiary autonomy in decision making; it may differ from the overall profit), and (iv) maximizing the utilized production capacity. Even though the following model contains only these four specific objectives, the generality of the modeling process fits all transfer pricing problems with multiple criteria and multiple-constraint levels.

Let k be the number of divisions in the corporation under consideration and t be the number of products that each division of the corporation produces. Define x_ij as the units of the jth product made by the ith division, i = 1, . . ., k; j = 1, . . ., t. For the coefficients of the objectives, let p_ij be the unit overall profit generated from the jth product made by the ith division, m_ij be the market share value of the jth product made by the ith division, s_ij be the unit subsidiary profit generated from the jth product made by the ith division, and c_ij be the unit utilized production capacity of the ith division to produce the jth product.
For the coefficients of the constraints, let b_ij be the budget allocation rate for producing the jth product by the ith division. For the coefficients of the constraint levels, let b_ij^s
be the budget availability level believed by the sth manager (or executive) for producing the jth product by the ith division, s = 1, . . ., h; d_i^s be the production capacity level believed by the sth manager for the ith division; d_ij^s be the production capacity level believed by the sth manager for the ith division to produce the jth product; and e_ij^s be the initial inventory level believed by the sth manager for the ith division to hold the jth product. Then, the multiple factor transfer pricing model is

Max Σ_{i=1}^{k} Σ_{j=1}^{t} p_ij x_ij
Max Σ_{i=1}^{k} Σ_{j=1}^{t} m_ij x_ij
Max Σ_{i=1}^{k} Σ_{j=1}^{t} s_ij x_ij
Max Σ_{i=1}^{k} Σ_{j=1}^{t} c_ij x_ij

Subject to
Σ_{i=1}^{k} Σ_{j=1}^{t} b_ij x_ij ≤ (b_ij^1, . . ., b_ij^h)
Σ_{j=1}^{t} x_ij ≤ (d_i^1, . . ., d_i^h)
x_ij ≤ (d_ij^1, . . ., d_ij^h)
−x_ij + x_{i+1, j} ≤ (e_ij^1, . . ., e_ij^h)
x_ij ≥ 0, i = 1, . . ., k; j = 1, . . ., t.     (29.1)

In the next section, a prototype of the transfer pricing model in a corporation will be illustrated to demonstrate the implications for decision makers.
29.4.3 Model Implications

29.4.3.1 Numerical Example
As an illustration of the multiple factor transfer pricing model, consider the United Chemical Corporation, which has two divisions that process raw materials into intermediate or final products. Division 1, located in Kansas City, manufactures two kinds of chemicals, called Products 1 and 2. Product 1 in Division 1 is an intermediate product and cannot be sold externally. It can, however, be processed further by Division 2 into a final product. Division 2, located in Atlanta, manufactures Product 2 and finishes Division 1's Product 1. The executives (president, vice president for production, and vice president for finance) and all divisional managers agree on the following multiple objectives: (i) maximize the overall company's profit; (ii) maximize the market share of Product 2 in Division 1 and of Products 1 and 2 in Division 2; and (iii) maximize the utilized production capacity of the company so that each division manager can avoid any underutilization of normal production capacity. The data related to these objectives are given in Table 29.2. In Table 29.2, Product 1 of Division 1 is a by-product that has no sale value in Division 1 at all. Its unit profit of −$4 represents its production cost.
Table 29.2 Data of the objectives in the United Chemical Corporation

                                         Division 1             Division 2
                                         Product 1   Product 2  Product 1   Product 2
Unit profit ($)                          −4          8          13          5
Market price                             0           40         46.2        38
Unit utilized production capacity (h)    4           4          3           2
All other products generate profits. We use the market price of each product to represent the market share objective; note that the market prices of the three salable products in Table 29.2 differ. In the company, besides the president, the vice presidents for production and for finance are the main decision makers, and they may interpret the same resource availability across the divisions differently. The vice president for production views the constraint levels in terms of the material, manpower, and equipment under control, while the vice president for finance views them in terms of the available cash flow. The president will make the final decision on the company's transfer price setting on the basis of a compromise between the two vice presidents. All divisional managers will carry out the president's decision, although they retain the autonomy to provide the executives with information about divisional profits and costs. The interpretations of the vice presidents regarding the resource constraint levels are summarized in Table 29.3. All executives and divisional managers agree on the units of resources consumed to produce the products; these data are given in Table 29.4. Let x_ij be the units of the jth product produced by the ith division, i = 1, 2; j = 1, 2. Using the information in Tables 29.2, 29.3, and 29.4, the multiple factor transfer pricing model is formulated as

Max −4x11 + 8x12 + 13x21 + 5x22
Max 40x12 + 46.2x21 + 38x22
Max 4x11 + 4x12 + 3x21 + 2x22

Subject to
−x11 + x21 ≤ (0; 100)
0.4x12 + 0.4x21 + 0.4x22 ≤ (45,000; 40,000)
x12 ≤ (38,000; 12,000)
x21 ≤ (45,000; 50,000)
x22 ≤ (36,000; 10,000)
x_ij ≥ 0, i = 1, 2; j = 1, 2.     (29.2)
Since this multiple factor transfer pricing problem is a typical MC2 problem, the MC2 software of Chien et al. (1989) can be used to solve it. Let λ = (λ1, λ2, λ3) be the weight parameter for the objectives, where λ1 + λ2 + λ3 = 1 and λ1, λ2, λ3 ≥ 0. Let γ = (γ1, γ2) be the weight parameter for the constraint levels, where γ1 + γ2 = 1 and γ1, γ2 ≥ 0. Because both weight parameters (γ, λ) are unknown at design time, the solution procedure of MC2 linear programming must be used to locate all possible potentially optimal solutions as (γ, λ) vary. The implications of the potentially optimal solutions for accounting decision makers will be explained in the next subsection. After putting (γ, λ) into the above model, it becomes
Table 29.3 The constraint levels of the vice presidents

                                                  Vice president    Vice president
                                                  for production    for finance
Transfer product constraint                       0                 100
Budget constraint ($)                             45,000            40,000
Production capacity of Product 2 in Division 1    38,000            12,000
Production capacity of Product 1 in Division 2    45,000            50,000
Production capacity of Product 2 in Division 2    36,000            10,000
Table 29.4 The unit consumptions of resources

                                                  Division 1             Division 2
                                                  Product 1   Product 2  Product 1   Product 2
Transfer product constraint                       −1.0        0          1.0         0
Budget constraint ($)                             0           0.4        0.4         0.4
Production capacity of Product 2 in Division 1    0           1.0        0           0
Production capacity of Product 1 in Division 2    0           0          1.0         0
Production capacity of Product 2 in Division 2    0           0          0           1.0

Max λ1(−4x11 + 8x12 + 13x21 + 5x22) + λ2(40x12 + 46.2x21 + 38x22) + λ3(4x11 + 4x12 + 3x21 + 2x22)

Subject to
−x11 + x21 ≤ 100γ2
0.4x12 + 0.4x21 + 0.4x22 ≤ 45,000γ1 + 40,000γ2
x12 ≤ 38,000γ1 + 12,000γ2
x21 ≤ 45,000γ1 + 50,000γ2
x22 ≤ 36,000γ1 + 10,000γ2
x_ij ≥ 0, i = 1, 2; j = 1, 2.     (29.3)

Let s_q, q = 1, . . ., 5, be the slack variables corresponding to the five constraints. The MC2 software yields two potentially optimal solutions {J1, J2} and their associated sets of (γ, λ) values, as shown in Table 29.5. Here, V(Ji) denotes the objective value of the potentially optimal solution Ji, which is a function of (γ, λ); when (γ, λ) are specified, V(Ji) is the payoff of using Ji. Table 29.5 means that if γ takes a value from G(J1) and λ takes a value from L(J1), then J1 is the optimal solution of the transfer pricing problem. In this case, the products {x11, x12, x21, x22} will be produced to achieve the objective payoff V(J1), and the constraint on x22 is not binding, leaving the slack amount s5 unused. The potentially optimal solution J2 can be interpreted similarly. However, J2 is different from J1
Table 29.5 All potentially optimal solutions

J1 = (x11, x12, x21, x22, s5):
  G(J1) = {γ: γ1 + γ2 = 1, 65γ1 − 280γ2 ≥ 0, γ1, γ2 ≥ 0}
  L(J1) = {λ: λ1 + λ2 + λ3 = 1, λ1 ≥ λ3, λ1, λ2, λ3 ≥ 0}
  V(J1) = (856,500γ1 + 736,400γ2)λ1 + (4,720,000γ1 + 4,234,000γ2)λ2 + (526,000γ1 + 47,444,000γ2)λ3

J2 = (x11, x12, x21, x22, s2):
  G(J2) = {γ: γ1 + γ2 = 1, 65γ1 − 280γ2 ≤ 0, γ1, γ2 ≥ 0}
  L(J2) = {λ: λ1 + λ2 + λ3 = 1, λ1 ≥ λ3, λ1, λ2, λ3 ≥ 0}
  V(J2) = (889,000γ1 + 596,400γ2)λ1 + (4,967,000γ1 + 3,170,000γ2)λ2 + (539,000γ1 + 41,184,000γ2)λ3
Table 29.6 All optimal transfer prices

J1 = (x11, x12, x21, x22, s5):
  p1(J1) = 4λ1 − 4λ3,  p2(J1) = 12.5λ1 + 95λ2 + 5λ3,  p3(J1) = 3λ1 + 2λ2 + 2λ3,
  p4(J1) = 4λ1 + 8.2λ2 + 5λ3,  p5(J1) = 0

J2 = (x11, x12, x21, x22, s2):
  p1(J2) = 4λ1 − 4λ3,  p2(J2) = 0,  p3(J2) = 8λ1 + 40λ2 + 4λ3,
  p4(J2) = 9λ1 + 46.2λ2 + 7λ3,  p5(J2) = 5λ1 + 38λ2 + 2λ3
because J2 has unused budget s2 and the parameter set G(J1) differs from G(J2). Because Product 1 in Division 1 is a by-product whose unit profit is −$4, whenever λ1 < λ3, neither J1 nor J2 is optimal. Let pq(Ji), q = 1, . . ., 5; i = 1, 2, be the shadow price of the qth constraint at Ji. According to the marginal cost pricing approach, the optimal transfer prices for {J1, J2} are designated as the shadow prices of {J1, J2}. These optimal transfer prices are given in Table 29.6. For example, Table 29.6 shows that under J1, (i) the relative transfer price between x11 and x21 is p1(J1) = 4λ1 − 4λ3; (ii) the transfer price for the budget shared across the divisions is p2(J1) = 12.5λ1 + 95λ2 + 5λ3; (iii) the transfer price for x12 is p3(J1) = 3λ1 + 2λ2 + 2λ3; (iv) the transfer price for x21 is p4(J1) = 4λ1 + 8.2λ2 + 5λ3; and (v) the transfer price for x22 is p5(J1) = 0.
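For readers who wish to check these prices numerically, the following sketch (an illustration only, assuming the reconstructed signs of model (29.3)) fixes the weights λ = (0.5, 0.3, 0.2) and γ = (1, 0), which are arbitrary choices lying in G(J1) × L(J1), solves the scalarized problem with an LP solver, and estimates each shadow price by relaxing the corresponding right-hand side by one unit. The printed values should agree with the J1 formulas of Table 29.6 evaluated at the chosen weights.

```python
# Sketch: reproduce the transfer prices of Table 29.6 at one assumed weight pair.
import numpy as np
from scipy.optimize import linprog

# Objective coefficients for (x11, x12, x21, x22): profit, market share, capacity
profit   = np.array([-4.0,  8.0, 13.0,  5.0])
market   = np.array([ 0.0, 40.0, 46.2, 38.0])
capacity = np.array([ 4.0,  4.0,  3.0,  2.0])

# Constraints of model (29.3): transfer, budget, and three capacity limits
A = np.array([[-1.0, 0.0, 1.0, 0.0],
              [ 0.0, 0.4, 0.4, 0.4],
              [ 0.0, 1.0, 0.0, 0.0],
              [ 0.0, 0.0, 1.0, 0.0],
              [ 0.0, 0.0, 0.0, 1.0]])
# Constraint levels believed by the two vice presidents (columns of D)
D = np.array([[     0.0,    100.0],
              [ 45000.0,  40000.0],
              [ 38000.0,  12000.0],
              [ 45000.0,  50000.0],
              [ 36000.0,  10000.0]])

lam   = np.array([0.5, 0.3, 0.2])   # assumed weights on the three objectives
gamma = np.array([1.0, 0.0])        # assumed weights on the two constraint levels
c = lam[0] * profit + lam[1] * market + lam[2] * capacity
b = D @ gamma

def max_value(rhs):
    res = linprog(-c, A_ub=A, b_ub=rhs, bounds=[(0, None)] * 4, method="highs")
    return -res.fun

base = max_value(b)
for q in range(5):
    bumped = b.copy()
    bumped[q] += 1.0                 # relax constraint q by one unit
    print(f"shadow price p{q + 1} ~ {max_value(bumped) - base:.2f}")
```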
29.4.4 Optimal Trade-offs and Their Accounting Implications

Optimal trade-offs related to the transfer prices of the multiple factor model consist of three components: (i) trade-offs among the multiple objectives, (ii) trade-offs among the multiple-constraint levels, and (iii) trade-offs between the multiple objectives and the multiple-constraint levels. The trade-offs among the multiple objectives imply that all possible optimal compromises of the multiple objectives are determined by locating all possible weights of importance of these objectives. Similarly,
Fig. 29.1 Optimal trade-offs between profit (λ1) and production capacity (λ3): the feasible segment for J1 and J2 lies on λ1 + λ3 = 1 on the side of the line λ1 = λ3 where λ1 ≥ λ3
the trade-offs among the multiple-constraint levels imply that all possible optimal compromises of the multiple-constraint levels, which represent the executives' different opinions, are determined by locating all possible weights of importance of these opinions. The trade-offs between the multiple objectives and the multiple-constraint levels measure the effects of the interaction of the objectives and constraint levels on the transfer pricing problem. These three cases of optimal trade-offs can be analyzed through the potentially optimal solutions of the multiple factor model, and the optimal transfer prices also result from those solutions. In the following, the three trade-off cases and the accounting implications of the corresponding optimal transfer prices are explored in detail using the above numerical example. Because there are two potentially optimal solutions {J1, J2} in the example, the optimal trade-offs should be studied in terms of both J1 and J2. For the trade-offs among the three objectives, namely the overall company's profit, the market share of the company, and the utilized production capacity of the company: (i) If the overall company's profit is not considered (this implies λ1 = 0 and λ3 ≠ 0), then neither J1 nor J2 is optimal, since λ1 − λ3 = −λ3 < 0 (see Table 29.5). (ii) If the market share of the company is not considered (i.e., λ2 = 0), then both J1 and J2 are optimal for λ1 + λ3 = 1, λ1 ≥ λ3, and λ1, λ3 ≥ 0. The graphical representation is shown in Fig. 29.1. From Fig. 29.1, any weighting values of λ1 and λ3 taken from the feasible segment guarantee that the utility values of both the overall company's profit and the utilized production capacity of the company are maximized. The resulting optimal transfer prices associated with J1 are p1(J1) = 4λ1 − 4λ3, p2(J1) = 12.5λ1 + 5λ3, p3(J1) = 3λ1 + 2λ3, p4(J1) = 4λ1 + 5λ3, and p5(J1) = 0, respectively. The resulting optimal transfer
Fig. 29.2 Optimal trade-offs between profit (λ1) and market share (λ2)
prices associated with J2 are p1(J2) = 4λ1 − 4λ3, p2(J2) = 0, p3(J2) = 8λ1 + 4λ3, p4(J2) = 9λ1 + 7λ3, and p5(J2) = 5λ1 + 2λ3, respectively. The admissible weighting values of λ1 and λ3 in the transfer prices for J1 and J2 are given in Fig. 29.1, because L(J1) = L(J2). (iii) If the utilized production capacity of the company is not considered (i.e., λ3 = 0), then both J1 and J2 are optimal for λ1 + λ2 = 1 and λ1, λ2 ≥ 0; the graphical representation is shown in Fig. 29.2. Figure 29.2 implies that any weighted combination of λ1 and λ2 taken from the indicated feasible segment maximizes the utility values of both the overall company's profit and the market share of the company. The resulting optimal transfer prices of J1 are p1(J1) = 4λ1, p2(J1) = 12.5λ1 + 95λ2, p3(J1) = 3λ1 + 2λ2, p4(J1) = 4λ1 + 8.2λ2, and p5(J1) = 0, while the resulting optimal transfer prices of J2 are p1(J2) = 4λ1, p2(J2) = 0, p3(J2) = 8λ1 + 40λ2, p4(J2) = 9λ1 + 46.2λ2, and p5(J2) = 5λ1 + 38λ2, respectively.

For the trade-offs between the two constraint levels (the different opinions of the two vice presidents), the weight of the vice president for production is γ1 while that of the vice president for finance is γ2, such that γ1 + γ2 = 1 and γ1, γ2 ≥ 0. The range of γ1 and γ2 is decomposed into two subsets: G(J1) = {γ1, γ2 ≥ 0 | 65γ1 − 280γ2 ≥ 0 and γ1 + γ2 = 1} and G(J2) = {γ1, γ2 ≥ 0 | 65γ1 − 280γ2 ≤ 0 and γ1 + γ2 = 1} (see Table 29.5). The graphical representation of G(J1) and G(J2) is shown in Fig. 29.3; on the simplex γ1 + γ2 = 1, the boundary 65γ1 = 280γ2 corresponds to γ1 = 280/345 ≈ 0.81. Whenever the weighting values of γ1 and γ2 are taken from G(J1), the corresponding compromise of the two vice presidents results in the optimal transfer prices of J1 in Table 29.6. Similarly, the decision situation of constraint levels for J2 can be explained. Finally, there are many optimal trade-off situations between the three objectives and the two constraint levels in the transfer pricing problem. For example, two cases are illustrated as follows:
Fig. 29.3 Optimal trade-offs between V.P. for production (γ1) and V.P. for finance (γ2): the line 65γ1 = 280γ2 separates the region where J1 applies from the region where J2 applies
Fig. 29.4 Optimal trade-offs between V.P. for production (γ1) and total profit (λ1): the boundary between J1 and J2 lies at γ1 ≈ .81
(i) In the case that the utilized production capacity of the company is not considered (i.e., λ3 = 0), while the weighting value of the market share and that of the vice president for finance remain free (i.e., 0 ≤ λ2 ≤ 1 and 0 ≤ γ2 ≤ 1), the optimal trade-offs between the overall company's profit and the constraint level believed by the vice president for production for J1 and J2 are shown in Fig. 29.4. Here, if the values of (γ1, λ1) are taken from 0 ≤ λ1 ≤ 1 and .81 ≤ γ1 ≤ 1, then the optimal transfer prices
Fig. 29.5 Optimal trade-offs between V.P. for production (γ1) and production capacity (λ3) (boundaries at λ3 = .5 and γ1 ≈ .81)
associated with J1 will be chosen. They are p1(J1) = 4λ1, p2(J1) = 12.5λ1 + 95λ2, p3(J1) = 3λ1 + 2λ2, p4(J1) = 4λ1 + 8.2λ2, and p5(J1) = 0. Otherwise, the values of (γ1, λ1) are taken from 0 ≤ λ1 ≤ 1 and 0 ≤ γ1 ≤ .81, and the optimal transfer prices associated with J2, namely p1(J2) = 4λ1, p2(J2) = 0, p3(J2) = 8λ1 + 40λ2, p4(J2) = 9λ1 + 46.2λ2, and p5(J2) = 5λ1 + 38λ2, will be chosen. (ii) In the case that the weighting value of the overall company's profit is fixed at .5 (i.e., λ1 = .5) and the weighting value of the market share and that of the vice president for finance may take any values 0 ≤ λ2 ≤ 1 and 0 ≤ γ2 ≤ 1, respectively, Fig. 29.5 shows the optimal trade-offs between the utilized production capacity of the company and the constraint level believed by the vice president for production for J1 and J2. In Fig. 29.5, if the values of (γ1, λ3) are taken from 0 ≤ λ3 ≤ .5 and .81 ≤ γ1 ≤ 1, then the optimal transfer prices associated with J1 are p1(J1) = 2 − 4λ3, p2(J1) = 6.25 + 95λ2 + 5λ3, p3(J1) = 1.5 + 2λ2 + 2λ3, p4(J1) = 2 + 8.2λ2 + 5λ3, and p5(J1) = 0, respectively. If the values of (γ1, λ3) are taken from 0 ≤ λ3 ≤ .5 and 0 ≤ γ1 ≤ .81, then the optimal transfer prices associated with J2 are p1(J2) = 2 − 4λ3, p2(J2) = 0, p3(J2) = 4 + 40λ2 + 4λ3, p4(J2) = 4.5 + 46.2λ2 + 7λ3, and p5(J2) = 2.5 + 38λ2 + 2λ3, respectively. Note that when the weighting values fall in the range .5 ≤ λ3 ≤ 1 and 0 ≤ γ1 ≤ 1, there is no optimal trade-off because neither J1 nor J2 is optimal (recall that this is caused by by-product 1 in Division 1, which has −$4 as its unit profit).

It is worth noting some important implications of the above trade-off analysis for accounting decision makers. First, the multiple factor transfer pricing model is capable of systematically locating all possible optimal transfer prices through the optimal trade-offs of the multiple objectives and multiple-constraint levels. Since the set of all possible optimal transfer prices found by this model describes
every possible decision situation within the model framework, the optimal transfer price obtained from either a linear programming or a goal programming model is included in this set as a special case. Second, the multiple factor transfer pricing model can be applied to transfer pricing problems that involve not only a complex structure but also organizational behavior considerations, such as organizational differentiation (see Table 29.3 for the different constraint levels). Consequently, this model can foster more autonomous flexibility than other mathematical programming models by allowing central management and local managers to express their own preference structures and weights. Third, the proposed method facilitates decision makers' participation, which may promote the achievement of organizational goals (Locke et al. 1981). The model can aid the coordination and synthesis of multiple conflicting views. This may be quite effective in a transfer pricing situation in which many objectives contradict each other and are measured differently by a number of decision participants. Fourth, in most multiple-criteria solution techniques, including the goal programming approach, if the decision makers are not satisfied with the optimal solution obtained using their preferred weights of importance for the multiple objectives, an iterative process has to be conducted to incorporate the decision makers' new preferences on the weights and to find new solutions until the decision makers are satisfied. The proposed model eliminates such a time- and cost-consuming iterative process since it already considers all possible optimal solutions with respect to changes in the parameters (γ, λ). Whenever decision makers want to change their preferences on the weights, the corresponding optimal transfer prices can be identified immediately from results such as those in Table 29.6. This model, in turn, allows the performance evaluation of the optimal transfer prices. Finally, the optimal transfer prices obtained by the multiple factor model have a twofold significance in terms of decision characteristics. If the problem is viewed as a deterministic decision problem, then whenever the preferred weighting values of the objectives and constraint levels are known, the resulting optimal solutions can be identified from the potentially optimal solutions of the model, and the corresponding optimal transfer prices can be adopted to handle the business situation (recall the above trade-off analysis). If the problem is viewed as a probabilistic decision problem, it involves assessing the likelihood of (γ, λ) occurring at various points of the range. With proper assumptions, this uncertainty may be represented by random variables with some known probability distribution. A number of known criteria, such as maximizing the expected payoff, minimizing the variance of the payoff, the maximin payoff, maximizing the probability of achieving a targeted payoff, stochastic dominance, probability dominance, and mean-variance dominance, can be used to choose the optimal transfer prices (see Shi 1991). In summary, the multiple factor transfer pricing model fosters flexibility in designing optimal transfer prices that allow the corporation to cope with all possible changes in business competition. This model is therefore likely to be a better aid for executives and managers in understanding and dealing with their current or future transfer pricing problems.
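As a small illustration of the probabilistic view, the following sketch (an assumption-laden example, not part of the original model) treats (γ, λ) as uniformly distributed over their simplices and estimates how often the parameters fall into the regions where J1 or J2 is the relevant potentially optimal solution, using the sets G(Ji) and L(Ji) reported in Table 29.5.

```python
# Monte Carlo sketch: estimate how likely each potentially optimal basis applies
# when the weights are random (uniform on the simplices is an assumption).
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
gamma = rng.dirichlet([1, 1], size=n)      # (gamma1, gamma2) on the simplex
lam = rng.dirichlet([1, 1, 1], size=n)     # (lambda1, lambda2, lambda3)

in_L = lam[:, 0] >= lam[:, 2]                        # lambda1 >= lambda3 (shared by J1 and J2)
in_G1 = 65 * gamma[:, 0] - 280 * gamma[:, 1] >= 0    # G(J1)
print("P(J1 applies) ~", np.mean(in_L & in_G1))
print("P(J2 applies) ~", np.mean(in_L & ~in_G1))
print("P(neither)    ~", np.mean(~in_L))
```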
29.4.5 Conclusions

A multiple factor transfer pricing model has been developed to solve transfer pricing problems in a multidivisional corporation. This model can provide a systematic and comprehensive scenario of all possible optimal transfer prices depending on the multiple criteria and multiple-constraint levels. The trade-offs of optimal transfer prices offer a broad basis for the managers of a corporation to implement the optimal transfer pricing strategy flexibly and to cope with various business situations. Furthermore, this method also aids global optimization, division autonomy, and performance evaluation. Some research problems remain to be explored. From a practical point of view, the framework of this model can be applied to other accounting areas, such as capital budgeting, cost allocation, audit sampling objectives, and personnel planning for an audit firm, if the decision variables and formulation are expressed appropriately. From a theoretical point of view, the decomposition algorithm of linear programming (Dantzig and Wolfe 1960) can be incorporated into the MC2-simplex method (Seiford and Yu 1979) to sharpen the multiple factor transfer pricing model's capability of solving large-scale transfer pricing problems. This incorporation may result in the development of more effective solution procedures. How to incorporate other trade-off techniques, such as the satisficing trade-off method (Nakayama 1994), into the MC2 framework for designing optimal transfer pricing strategies is another interesting research problem.
29.5 Capital Budgeting with Multiple Criteria and Multiple Decision Makers
Capital budgeting is not a trivial task if a firm is to maintain competitive advantages by adopting new information or manufacturing systems. A firm may implement innovative accounting systems such as activity-based costing (ABC) or balanced scorecard to generate more useful information for better economic decision making in the ever-changing business environment. ABC can provide value-adding and non-value-adding activity information about new capital investments. Investment justification in the new manufacturing environment, however, requires a comprehensive decision-making process that involves competitive analysis, overall firm strategy, and evaluation of uncertain cash flows (Howell and Schwartz 1994). The challenge here is to measure cash flows as well as intangible benefits that these new systems will bring. Furthermore, conflicts of goals, limited resources, and uncertain risk factors may complicate the capital budgeting problem (see Hillier (1963), Lee (1993), Karanovic et al. (2010) for details). These problems of conflicts of goals among decision makers and limited resources in a typical organization support the use of multiple-criteria and multiple-constraint levels (MC2) linear programming (Seiford and Yu 1979). While traditional techniques such as payback or accounting rate of return are used as a secondary method, discounted cash flow (DCF) methods, including net
present value (NPV) and internal rate of return (IRR), are the primary quantitative methods in capital budgeting (Kim and Farragher 1981). The payback method estimates how long it will take to recover the original investment. However, it incorporates neither the cash flows after the payback period nor the variability of those cash flows (Boardman et al. 1982). The accounting rate of return method measures a return on the original cost of the investment. Both of these methods ignore the time value of money. DCF methods may not be adequate for evaluating new manufacturing or information systems because of a bias in favor of short-term investments with quantifiable benefits (Mensah and Miranti 1989). A current trend in capital budgeting methods utilizes mathematical programming and higher discount rates to incorporate higher risk factors (Pike 1983; Palliam 2005). Hillier (1963) and Huang (2008) suggested useful ways to evaluate risky investments by estimating the expected values and standard deviations of net cash flows for each alternative investment. They showed that the standard deviation of cash flows is easily obtainable. With this information, a complete description of the risky investment is possible via the probability distribution of the IRR, NPV, or annual cost of the proposed investment, under the assumption that the net cash flows from the investment are normally distributed. Similarly, Turney (1990) suggested a stochastic dynamic adjustment model to incorporate greater risk premiums when significant additional funds are required in multiple time periods. Lin (1993) also proposed a multiple-criteria capital budgeting model under risk; his model used chance constraints on uncertain cash flows and accounting earnings as risk factors. Pike (1988) empirically tested the correlation between sophisticated capital budgeting techniques and decision-making effectiveness and found that management believed sophisticated investment techniques improve effectiveness in the evaluation and control of large capital projects. Weingartner (1963) introduced a mathematical programming approach to the capital budgeting problem, and other researchers have extended Weingartner's work in different directions (e.g., Baumol and Quandt 1965; Bernard 1969; Howe and Patterson 1985). The development of chance-constrained programming (CCP) by Charnes and Cooper (1961) also enriched the application of mathematical programming models to the capital budgeting problem; they developed an approximate solution method for CCP with zero-one variables using a linear constraint. A typical mathematical capital budgeting approach maximizes DCFs that measure a project's desirability on the basis of its expected net present value as the primary goal. DCF analysis, however, ignores strategic factors such as future growth opportunities (Cheng 1993). Furthermore, management can change its plans if operating conditions change; for example, it can change input and output mixes or abandon the project in a multi-period setting. The increasing involvement of stakeholders other than shareholders in a business organization supports a multiple-objective approach (Bhaskar 1979). Other empirical studies have also found that firms use multiple criteria in their capital budgeting problems (e.g., Bhaskar and McNamee 1983; Thanassoulis 1985). The goal
programming approach has been used to handle multiple-objective problems. It emphasizes weights of importance for the multiple objectives with respect to the decision maker's (DM) preferences (Bhaskar 1979; Deckro et al. 1985). Within the framework of hierarchical goal optimization, several goal programming models have been suggested to unravel multiple-objective capital budgeting problems (e.g., Ignizio 1976; Lee and Lerro 1974). Similarly, Santhanam et al. (1989) used a zero-one goal programming approach for information system project selection, but their article lacks a multiple time horizon (learning curve) effect. Choi and Levary (1989) investigated the use of a chance-constrained goal programming approach to reflect multiple goals in a multinational capital budgeting problem. Reeves and Hedin (1993) suggested interactive goal programming (IGP), which allows DMs more flexibility in considering trade-offs and adjusting goal target levels. Because of the difficulties of measuring the preferences and priorities of decision makers, Reeves and Franz (1985) developed a simplified interactive multiple-objective linear programming (SIMOLP) method, which uses an interactive procedure for a DM to identify a preferred solution, and Gonzalez et al. (1987) applied this procedure to a capital budgeting problem (see Corner et al. (1993) and Reeves et al. (1988) for other examples). Interactive multiple-objective techniques such as SIMOLP and IGP can reduce the difficulty of the solution process. Such studies suggest that most capital budgeting problems require the analysis of multiple criteria so as to better reflect the real-world situation. Thanassoulis (1985) addressed multiple objectives such as the maximization of shareholder wealth, maximization of firm growth, minimization of financial risk, maximization of firm liquidity, and minimization of environmental pollution. However, these existing multiple-criteria approaches, including goal programming, implicitly assume that there is only one decision maker setting the constraint (budget availability) level of a capital budgeting problem. This assumption is not realistic because in most real-world capital budgeting problems, such as constructing a major highway or a shopping mall, multiple decision makers are involved in setting the constraint levels (see Sect. 29.5.2 for a detailed discussion). To remove this assumption of a single decision maker from models of capital budgeting with multiple criteria, this article attempts to incorporate multiple decision makers' preferences in a capital budgeting situation using (1) the analytic hierarchy process (AHP) approach and (2) MC2 linear programming. Our approach shows its strength in quantifying strategic and nonfinancial factors that are important in the current competitive business environment. The problem of incommensurable units in the selection criteria, arising from nonfinancial and qualitative measures, can be resolved by using the AHP approach. The MC2 approach fosters modeling flexibility by incorporating decision makers' preferences as multiple-constraint levels. The rest of this section is organized as follows. Sect. 29.5.1 introduces the AHP and MC2 framework. Sect. 29.5.2 demonstrates the managerial significance and implications of capital budgeting problems by illustrating an example. Sect. 29.5.3 concludes with several future research avenues.
29.5.1 AHP and MC2 Framework

In general, an organization has limited resources. Furthermore, each manager's risk assessments and preferences regarding a new project may differ. For example, a financial manager may think that the company has only 50 million dollars available for a new project and will not approve the project unless substantial financial benefits are assured. In contrast, a production manager may think the company should commit at least 60 million dollars to implement just-in-time (JIT) manufacturing, strongly believing that this new system will produce high-quality products at lower cost and maintain competitive advantages against competitors. Clearly there is a conflict of interests between the managers, and even top management's goal may differ from other managers'. To increase other managers' involvement and motivation, top management should solicit their inputs. There must be trade-offs between the goals of different managers in this group decision-making process. AHP and MC2 linear programming can be used to resolve this type of group dilemma.
29.5.1.1 Analytical Hierarchy Process (AHP)
AHP is a practical measurement technique that has been widely applied in modeling the human judgment process (Saaty 1980). AHP enables decision makers to structure a complex problem as a hierarchy of its elements according to an organization's structure or ranks of management levels. It captures managerial decision preferences through a series of comparisons of the relevant criteria. This feature of AHP minimizes the risk of inconsistent decisions due to incommensurable units in the selection criteria. AHP has been applied to several accounting problems (e.g., capital budgeting (Liberatore et al. 1992), real estate investment (Kamath and Khaksari 1991), and municipal government capital investment (Chan 2004)). The preference of each manager may be analyzed through the use of AHP or the multiple attribute utility technique (MAUT). Both AHP and MAUT have their own strengths and weaknesses; for a debate regarding the two methods, readers are referred to Dyer and Forman (1991). In this article, AHP is employed mainly because of its capability to reduce computational complexity and the availability of software.

29.5.1.2 MC2 Linear Programming
Linear programming has been applied to capital budgeting problems such as maximizing NPV with a single objective. However, the linear programming approach has limitations in handling multiple conflicting real-world goals. For instance, linear programming cannot solve the problem in which a firm tries to maximize overall profits and total market share simultaneously. This dilemma is overcome by a technique called multiple-criteria (MC) linear programming (Goicoechea et al. 1982; Steuer 1986; Yu 1985; Zeleny 1974). To extend the framework of MC linear programming, Seiford and Yu (1979) and Yu (1985) formulated a model of MC2 linear programming. This model is based on two
premises. First, from the perspective of the linear system structure, the criteria and constraint levels may be interchangeable. Thus, like multiple criteria, multiple-constraint (resource availability) levels can be considered. Second, from the perspective of applications, it is more realistic to consider multiple resource availability levels (discrete right-hand sides) than a single resource availability level in isolation. This recognizes that the availability of resources can fluctuate with the forces of the decision situation, such as the preferences of the different managers. The concept of multiple resource levels corresponds to the typical characteristic of capital budgeting situations, where the decision-making process should reflect each manager's preference regarding the new project. A theoretical connection between MC linear programming and MC2 linear programming can be found in Gyetvan and Shi (1992), and decision problems related to MC2 linear programming have been extensively studied (see, e.g., Lee et al. (1990), Shi (1991), and Shi and Yu (1992)). Key ideas of MC2 linear programming, the primary theoretical foundation of this article, are described as follows. An MC2 linear programming problem can be formulated as

max λᵗCx
s.t. Ax ≤ Dγ
     x ≥ 0,

where C ∈ R^(q×n), A ∈ R^(m×n), and D ∈ R^(m×p) are matrices of q×n, m×n, and m×p dimensions, respectively; x ∈ R^n are the decision variables; λ ∈ R^q is called the criteria parameter; and γ ∈ R^p is called the constraint level parameter. Both (γ, λ) are assumed unknown. The above MC2 problem has q criteria (objectives) and p constraint levels. If the constraint level parameter γ is known, then the MC2 problem reduces to an MC linear programming problem (e.g., Yu and Zeleny 1975). In addition, if the criteria parameter λ is also known, it reduces to a linear programming problem (e.g., Charnes and Cooper 1961; Dantzig 1963). Denote the index set of the basic variables {x_j1, . . ., x_jm} for the MC2 problem by J = {j1, . . ., jm}. Note that the basic variables may contain some slack variables. Without confusion, J is also called a basis for the MC2 problem. Since a basic solution J depends on the parameters (γ, λ), define that (1) a basic solution J is feasible for the MC2 problem if and only if there exists a γ0 > 0 such that J is a feasible solution for the MC2 problem with respect to γ0 and (2) J is potentially optimal for the MC2 problem if and only if there exist a γ0 > 0 and a λ0 > 0 such that J is an optimal solution for the MC2 problem with respect to (γ0, λ0). For an MC2 problem, there may exist a number of potentially optimal solutions {J} as the parameters (γ, λ) vary depending on decision situations. Seiford and Yu (1979) derived a simplex method to systematically locate the set of all potentially optimal solutions {J}. In summary, a model within the framework of AHP and MC2 is proposed to formulate and solve the multiple-objective capital budgeting problem with
multiple decision makers as follows. Note that because of the complexity of the problem, we employ AHP to derive the weights of λ and γ; this solution procedure decreases the computational complexity.
29.5.1.3 A Model of MC2 Decision-Making Capital Budgeting
The linear programming approach has limitations for solving real-world problems with multiple objectives, and the goal programming or MOLP approach provides only one optimal trade-off among several objectives of the firm. Moreover, none of the past mathematical models supports multiple decision makers in the capital budgeting decision-making process. As discussed before, in a real-world situation the decision makers (DMs) can have different opinions on the same issue. These different interpretations can be represented by different constraint levels in mathematical models. Because both linear programming and goal programming presume a fixed single constraint level, they fail to describe such an organizational differentiation problem mathematically. The MC2 linear programming framework can resolve the above shortcomings in current capital budgeting models. The framework is flexible enough to include any objectives, depending on the problem situation. However, for the sake of clear presentation, we address four groups of objectives that are common in capital budgeting: (1) maximization of net present value, (2) profitability, (3) growth, and (4) flexibility of financing. Some objective groups may span multiple time periods. Net present value measures the expected net monetary gain or loss from a project by discounting all expected future cash flows to the present point in time, using the desired rate of return. If we want to incorporate risk factors, we can estimate the standard deviation of net cash flows as Hillier (1963) suggested. Profitability can be measured by return on investment (ROI). Growth can be measured by sales growth or by a nonfinancial measure of market share growth, each of which focuses on the long-term success of a firm. Flexibility of financing (leverage) can be measured by the debt to equity ratio; as a firm's debt to equity ratio goes up, the firm's borrowing becomes more expensive. Even though our model contains only these four specific objective groups, the flexibility of the modeling process fits all capital budgeting problems with multiple-criteria and multiple-constraint levels. This model integrates AHP with a mathematical model. The strengths of such an integration have been shown in several areas (e.g., advertisement media choice (Dyer and Forman 1991), R&D portfolio selection (Suh et al. 1993), and telecommunication hub design (Lee et al. 1995)). The combined use of AHP and MC2, as we explore in this article, is the first in the decision-making literature. In general, let i be the time period over which a firm considers capital budgeting and j be the index number of the projects that the firm can select. Define x_j as the jth project that can be selected by the firm, j = 1, . . ., t. For the coefficients of the objectives, let v_j be the net present value generated by the jth project; g_ij be the net sales increase generated by the jth project in the ith time period, to measure sales growth, i = 1, . . ., s; r_ij be the ROI generated by the jth project in the ith time period; and l_ij be the firm's equity to debt ratio in the ith time period after the jth project is selected, to measure leverage. Here, the optimum debt to
equity ratio for the firm is assumed to be 1, and the inverse of the debt to equity ratio is used to measure leverage. For the coefficients of the constraints, let b_ij be the cash outlay required by the jth project in the ith time period. For the coefficients of the constraint levels, let c_i^k be the budget availability level believed by the kth manager (or executive) for the firm in the ith time period, k = 1, . . ., u. In this article, for illustration, if we take t = 10, s = 5, and u = 5, then the model is

max Σ_{j=1}^{10} v_j x_j
max Σ_{j=1}^{10} g_ij x_j,  for all i = 1, . . ., 5
max Σ_{j=1}^{10} r_ij x_j,  for all i = 1, . . ., 5
max Σ_{j=1}^{10} l_ij x_j,  for all i = 1, . . ., 5

subject to
Σ_{j=1}^{10} b_ij x_j ≤ (c_i^1, . . ., c_i^5),  for all i = 1, . . ., 5
x_j ∈ {0, 1},

where k = 1, . . ., 5, in c_i^k, represents the five possible DMs (i.e., president, controller, production manager, marketing manager, and engineering manager). The weights of the 16 objectives in total can be expressed by (λ1, . . ., λ16) with Σλq = 1, q = 1, . . ., 16, 0 < λq < 1, and the weights of the five DMs can be expressed by (γ1, . . ., γ5) with Σγk = 1, k = 1, . . ., 5, 0 < γk < 1. The above model is an integer MC2 problem with 16 objectives and five constraint levels. Solving this problem by using a currently available solution technique (Seiford and Yu 1979) is not a trivial task because of its substantial computational complexity. To overcome this difficulty, we propose a two-phased solution procedure as depicted in Fig. 29.6. The first phase is referred to as the AHP phase in the sense that AHP is applied to derive the weights of λ and γ, which reduces the model's complexity. The computation of the weights is effectively handled by Expert Choice (Forman et al. 1985), commercial software for AHP. In this phase, we first elicit preferences about the objectives from all five DMs involved. Note that each DM may have different preferences about the four goals. The president may think maximization of ROI is the most important goal, while the controller may think maximization of NPV is the most important goal. AHP generates relative weights for each goal. Then, the multiple-constraint levels are incorporated using the MC2 framework. For instance, the DMs may estimate different budget availability levels for each time period. The controller may believe that $40 million is available for the first year, while the production manager may believe that the company can spend $50 million in this time period. This scenario is more realistic if we consider the characteristics of
Fig. 29.6 A two-phased capital budgeting model: initial data → AHP weights of λ and γ → MC2 framework → IP capital budgeting → accept? (if no, revise the weights; if yes, final solution)
a capital budgeting problem from the perspective of the different utility functions or risk factors of each DM. An example using AHP to generate relative weights for the five DMs’ constraint levels (preferences) can be found in Appendix 1. The second phase is called the integer programming (IP) phase. After applying the AHP phase to the model, the model is reduced to a linear IP problem, in which the potentially optimal projects must be selected. To solve this problem, we employ the ZOOM software (Singhal et al. 1989) that was originally introduced for solving zero-one integer linear programming problems.
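As a rough illustration of the AHP phase (the chapter itself relies on the Expert Choice package and the comparison matrices of Appendix 1, which are not reproduced here), the following sketch computes priority weights as the normalized principal eigenvector of a hypothetical pairwise comparison matrix for the five DMs' budget estimates.

```python
# Minimal AHP sketch: priority weights from a pairwise comparison matrix.
# The matrix P below is hypothetical, not data taken from Appendix 1.
import numpy as np

def ahp_weights(pairwise):
    """Priority weights = normalized principal eigenvector of the comparison matrix."""
    eigenvalues, eigenvectors = np.linalg.eig(pairwise)
    k = np.argmax(eigenvalues.real)            # index of the principal eigenvalue
    w = np.abs(eigenvectors[:, k].real)
    return w / w.sum()

# Hypothetical pairwise comparisons among the five DMs
# (entry [i, j] = how much more weight DM i's estimate carries than DM j's)
P = np.array([[1.0, 3.0, 3.0, 5.0, 5.0],
              [1/3, 1.0, 1.0, 3.0, 3.0],
              [1/3, 1.0, 1.0, 3.0, 3.0],
              [1/5, 1/3, 1/3, 1.0, 1.0],
              [1/5, 1/3, 1/3, 1.0, 1.0]])
gamma = ahp_weights(P)
print(gamma)   # the gamma_k used to blend the five budget availability levels
```

A consistency check (Saaty's consistency ratio) would normally accompany this computation before the weights are accepted.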
29.5.2 Model Implications

29.5.2.1 Numerical Example
A prototype of our capital budgeting model demonstrates the implications of multiple criteria and multiple decision makers. We use a Lorie-Savage (1955) type of problem as follows. The Lorie-Savage Corporation has ten projects under consideration, and all projects have 5-year time horizons. The executives (president, controller, production manager, marketing manager, and engineering manager) agree on the following multiple objectives:
1. Maximize the net present value of each project. Net present value is computed as the sum of all the discounted, estimated future cash flows, using the desired rate of return, minus the initial investment (a short computational sketch of this calculation follows the data description below).
Table 29.7 Net present value data for the Lorie-Savage Corporation (in millions)

                                Cash outlays for each period
Project   Net present value   Period 1   Period 2   Period 3   Period 4   Period 5
1         $20                 $24        $16        $6         $6         $8
2         16                  12         10         2          5          4
3         11                  8          6          6          6          4
4         4                   6          4          7          5          3
5         4                   1          6          9          2          3
6         18                  18         18         20         15         15
7         7                   13         8          10         8          8
8         19                  14         8          12         10         8
9         24                  16         20         24         16         16
10        4                   4          6          8          6          4
2. Maximize the growth of the firm due to each project. Growth can be measured by the net sales increase from each project; it measures a strategic success factor of the firm.
3. Maximize the profitability of the company. Profitability can be measured by the ROI generated by each project in each time period.
4. Maximize the flexibility of financing. The flexibility of financing (leverage) can be measured by the debt to equity ratio. If a firm's debt to equity ratio is higher than the industry average or the optimum level, the firm's cost of debt financing will become more expensive. In this example, we use the inverse of the debt to equity ratio and assume 1 as the optimum debt to equity ratio.

The data related to these objectives are given in Table 29.7. In this example, all projects are assumed to be independent. In Table 29.7, the firm's cost of capital is assumed to be known a priori and to be independent of the investment decisions. Based on these assumptions, the net present value of each project can be defined as the sum of the cash flows discounted by the cost of capital. Cash outlay is the amount of expenditure required for project j, j = 1, 2, 3, . . ., 10, in each time period. To measure growth, the net sales increase for each time period for each project is estimated; these data are provided in Table 29.8. To measure the profitability of each project, ROI is estimated after reflecting the additional income and capital expenditures from each investment in each time period; these data are provided in Table 29.9. To measure leverage, the inverse of the debt to equity ratio after adopting each project is used, with the optimum debt to equity ratio assumed to be one for the Lorie-Savage Corporation; these data are provided in Table 29.10. The five key DMs in this company (president, controller, production manager, marketing manager, and engineering manager) have different beliefs regarding resource availability. For example, each DM may have a different opinion about the budget availability level. Of course, the president will make the final decision based on the opinions of the other managers; nevertheless, the process of collecting the DMs' preferences should improve the quality of the final decision. Budget availability level data are provided in Table 29.11.
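The NPV sketch promised under objective 1 is given below. It is illustrative only: Table 29.7 reports net present values and cash outlays directly and does not list the underlying cash inflows, so the inflow stream and the 12 % discount rate used here are hypothetical.

```python
# Minimal NPV sketch: discounted future inflows minus the initial investment.
def npv(initial_outlay, cash_inflows, rate):
    """Sum of discounted future cash inflows minus the initial investment."""
    return sum(cf / (1 + rate) ** t
               for t, cf in enumerate(cash_inflows, start=1)) - initial_outlay

# Hypothetical example: a $24M outlay followed by five annual inflows at a 12% rate
print(round(npv(24.0, [10.0, 10.0, 9.0, 8.0, 8.0], 0.12), 2))
```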
Table 29.8 Net sales increase data for the Lorie-Savage Corporation (in millions)

          Net sales increase for each project
Project   Period 1   Period 2   Period 3   Period 4   Period 5
1         $120       $130       $145       $150       $170
2         100        120        140        150        160
3         80         90         95         95         100
4         40         50         50         55         60
5         40         45         50         55         60
6         110        120        140        150        165
7         60         70         65         70         80
8         110        120        100        110        120
9         150        170        180        190        200
10        35         40         40         50         50
Table 29.9 Return on investment data for the Lorie-Savage Corporation (in percentage)

          Return on investment for each project
Project   Period 1   Period 2   Period 3   Period 4   Period 5
1         10         12         14         15         17
2         10         12         18         16         17
3         12         15         15         15         18
4         8          15         10         8          12
5         15         10         8          20         18
6         12         12         10         15         15
7         8          12         10         12         12
8         14         16         13         15         16
9         12         10         9          12         12
10        10         8          9          8          12
Table 29.10 Debt to equity data for the Lorie-Savage Corporation

          Inverse of debt to equity ratio
Project   Period 1   Period 2   Period 3   Period 4   Period 5
1         0.85       0.90       0.98       0.95       0.95
2         0.90       0.98       0.95       0.95       0.96
3         0.96       0.97       0.98       0.92       0.92
4         0.98       0.95       0.96       0.92       0.95
5         0.90       0.95       0.95       0.98       0.95
6         0.90       0.95       0.94       0.95       0.95
7         0.96       0.98       0.98       0.98       0.98
8         0.96       0.95       0.90       0.92       0.95
9         0.90       0.88       0.85       0.95       0.95
10        0.98       0.95       0.95       0.98       0.95
Table 29.11 Budget availability level data for the Lorie-Savage Corporation (in millions)

                        Estimated budget availability
Decision maker          Period 1   Period 2   Period 3   Period 4   Period 5
President               $50        $30        $30        $35        $30
Controller              40         45         30         30         20
Production manager      55         40         20         30         35
Marketing manager       45         30         40         45         30
Engineering manager     50         40         45         30         35
The parameters for the budget availability levels are derived by using AHP; here, a one-level AHP is used. The aggregated objective function can also be obtained by using AHP. The difference is that a two-level AHP must be used to provide preferences for the objective functions: the first level corresponds to the four groups of objectives, and the second level corresponds to the time periods within each objective group. Note that the second level is not required for the objective of maximizing net present value. Hence, four pairwise comparison matrices are required for the eigenvalue computation. More detailed information about the formulation of this problem is presented in the appendix. In sum, the objective function aggregated from the 16 objectives is written as

Maximize 56.34x1 + 51.74x2 + 40.54x3 + 25.43x4 + 25.44x5 + 53.63x6 + 31.98x7 + 50.59x8 + 67.62x9 + 23.17x10.
The optimum IP solution for this numerical example is selecting projects 2, 3, 4, and 8. This solution reflects the preferences of multiple DMs and satisfies the budget constraints. If the DMs are not satisfied with the solution, we have to use AHP to assess and compute new values of the weights γ and λ according to Fig. 29.6. This process terminates whenever all DMs agree on the compromise solution.
29.5.2.2 Managerial Implications The model and solution procedure proposed in this study have several managerial implications. First, the model integrates the multiple-objective capital budgeting problem with multiple decision makers. The most common problem in a capital investment decision-making situation is how to estimate cash flows and to incorporate the risk factors of decision makers (Hillier 1963; Lee 1993). In our model, we allow multiple-constraint levels to incorporate each DM’s preference about budget availability. Neither linear programming nor goal programming models can handle these characteristics. Second, our model can be applied to solve capital budgeting problems not only with the representation of a complex structure but also with motivational implications. Our method facilitates DMs’ participation in order to minimize suboptimization of overall company goals (Locke et al. 1981). The model can aid coordination and synthesis of multiple objectives, some in conflict. This feature can be quite effective in a capital budgeting situation, in which many objectives are
contradictory to each other and these objectives can be measured differently by a number of decision participants. This model can foster more autonomous control than any other mathematical programming model by allowing DMs to express their own priority structures and weights. Third, adoption of the AHP reduces the solution complexity of the resulting MC2 IP program. The MC2 IP is a very complex problem to solve. Real-life size MC2 IP problems are intractable with current computational technology. Furthermore, the computing software for MC2 IP is still under development (Shi and Lee 1992). Even though the use of AHP may lose some possible trade-offs among objectives and/or DMs' preferences, the AHP reduces the MC2 IP to a traditional zero-one IP that is much easier to solve using currently available IP software. Lastly, our two-phased framework (Fig. 29.6) for the capital budgeting problem is highly flexible. The DMs can reach an agreement interactively. For example, an alternative budgeting policy can be obtained by providing a different preference matrix of objectives and/or resource availability levels. Furthermore, the two-phased framework may be applied iteratively until all of the DMs are satisfied with the final budgeting policy.
29.5.3 Conclusions

A decision-making process with a multiple-criteria model has been addressed to solve capital budgeting problems. This model can foster a high-quality decision in capital budgeting under multiple criteria and multiple decision makers. This decision-making strategy reflects each decision maker's preference and limits suboptimization of overall company goals. The method can also better handle real-world problems that may include uncertain factors. By incorporating information from each influential manager, our model is more likely to provide better budgeting solutions than the previous linear programming or goal programming approaches. There are other research problems remaining to be explored. From a practical point of view, the estimation of future cash flows, determining a firm's cost of capital, measuring intangible benefits, and measuring the residual value of assets are still important issues. Models like ours can reduce the chance of ineffective decision making by incorporating multiple decision makers' preferences. This framework can be applied to other accounting problems, such as cost allocation, audit sampling objectives, and personnel planning for an audit, if the decision variables and formulation are expressed appropriately.
29.6 Conclusions
We have introduced group decision-making tools that can be applied in accounting and finance. In today's dynamic business environment, there is typically more than one decision maker, and business conditions keep changing. We showed
AHP and MC2 applications in performance evaluation, banking performance evaluation, international transfer pricing, and capital budgeting. Our last paper shows the combination of AHP and MC2 in capital budgeting to deal with multiple objectives and multiple constraints. Our performance evaluation model offers advantages such as flexibility, feedback, group evaluation, and computational simplicity. A prototype was built on a personal computer so that the model can be applied to any business situation. As our second paper shows, the iterative process of collecting input data in the AHP procedure helps each manager, as well as each employee, become aware of the importance of the strategic factors behind each performance measure of the bank. Our transfer pricing model can provide a systematic and comprehensive scenario of all possible optimal transfer prices depending on multiple-criteria and multiple-constraint levels. The trade-offs of optimal transfer prices offer a broad basis for managers of a corporation to flexibly implement the optimal transfer pricing strategy and cope with various business situations. Furthermore, this method also aids global optimization, division autonomy, and performance evaluation. Our last paper shows that our capital budgeting model is more likely to provide better budgeting solutions than the previous approaches by incorporating information from each influential manager.
Appendix 1

For illustrative purposes, we show how to use AHP to induce the five DMs' preferences of budget availability and to compute the relative weights. Generally, AHP collects input judgments of DMs in the form of a matrix by pairwise comparisons of criteria (i.e., their budget availability levels). An eigenvalue method is then used to scale the weights of such criteria. That is, the relative importance of each criterion is computed. The results of all pairwise comparisons are stored in a 5 × 5 input matrix whose rows and columns correspond to the five decision makers: President, Controller, Production Manager, Marketing Manager, and Engineering Manager.
Applying an eigenvalue method to the above input matrix results in a vector W_i = (0.477, 0.251, 0.154, 0.070, 0.048). In addition to the vector, the inconsistency ratio (γ) is obtained to estimate the degree of inconsistency in the pairwise comparisons. In this example, the inconsistency ratio is 0.047. A common guideline is that if the ratio surpasses 0.1, a new input matrix must be generated. Therefore, this input matrix is acceptable.
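To make the eigenvalue computation concrete, the following sketch shows how the priority weights and inconsistency ratio of a pairwise comparison matrix can be computed with standard numerical tools. The matrix entries below are a hypothetical example (the chapter's actual input matrix is not reproduced here); only the procedure — principal eigenvector, consistency index, and Saaty's random-index table — is what the appendix describes.

```python
import numpy as np

# Hypothetical 5x5 pairwise comparison matrix over the five decision makers
# (President, Controller, Production, Marketing, Engineering); illustration only.
A = np.array([
    [1,   3,   4,   5,   6],
    [1/3, 1,   2,   4,   5],
    [1/4, 1/2, 1,   3,   4],
    [1/5, 1/4, 1/3, 1,   2],
    [1/6, 1/5, 1/4, 1/2, 1],
], dtype=float)

n = A.shape[0]
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                   # principal eigenvalue
lam_max = eigvals.real[k]
w = np.abs(eigvecs[:, k].real)
w = w / w.sum()                               # normalized priority weights

CI = (lam_max - n) / (n - 1)                  # consistency index
RI = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24}[n]  # Saaty's random index
CR = CI / RI                                  # (in)consistency ratio

print("weights:", np.round(w, 3))
print("lambda_max = %.3f, CR = %.3f" % (lam_max, CR))
# Common guideline: if CR exceeds 0.1, the pairwise judgments should be revised.
```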
A similar computing process can be applied for the 16 objective functions. However, two hierarchical levels are required for this case. The first level of AHP corresponds to the four groups of objectives and the second level corresponds to the time periods within each objective group. Let xj be the jth project that can be selected by the firm. Using data in Tables 29.7, 29.8, 29.9, and 29.10, the model for capital budgeting with multiple criteria and multiple DMs is formulated as maximize
20x1 + 16x2 + 11x3 + 4x4 + 4x5 + 18x6 + 7x7 + 9x8 + 24x9 + 4x10
maximize 120x1 + 100x2 + 80x3 + 40x4 + 40x5 + 110x6 + 60x7 + 110x8 + 150x9 + 35x10
maximize 130x1 + 120x2 + 90x3 + 50x4 + 45x5 + 120x6 + 70x7 + 120x8 + 170x9 + 40x10
maximize 145x1 + 140x2 + 95x3 + 50x4 + 50x5 + 140x6 + 65x7 + 100x8 + 180x9 + 40x10
maximize 150x1 + 150x2 + 95x3 + 55x4 + 55x5 + 150x6 + 70x7 + 110x8 + 190x9 + 50x10
maximize 170x1 + 160x2 + 100x3 + 60x4 + 60x5 + 165x6 + 80x7 + 120x8 + 200x9 + 50x10
maximize 10x1 + 10x2 + 12x3 + 8x4 + 15x5 + 12x6 + 8x7 + 14x8 + 12x9 + 10x10
maximize 12x1 + 12x2 + 15x3 + 15x4 + 10x5 + 12x6 + 12x7 + 16x8 + 10x9 + 8x10
maximize 14x1 + 18x2 + 15x3 + 10x4 + 8x5 + 10x6 + 10x7 + 13x8 + 9x9 + 9x10
maximize 15x1 + 16x2 + 15x3 + 8x4 + 20x5 + 15x6 + 12x7 + 15x8 + 12x9 + 8x10
maximize 17x1 + 17x2 + 8x3 + 12x4 + 18x5 + 15x6 + 12x7 + 16x8 + 12x9 + 12x10
maximize 0.85x1 + 0.90x2 + 0.96x3 + 0.98x4 + 0.90x5 + 0.90x6 + 0.96x7 + 0.96x8 + 0.90x9 + 0.98x10
maximize 0.90x1 + 0.98x2 + 0.97x3 + 0.95x4 + 0.95x5 + 0.95x6 + 0.98x7 + 0.95x8 + 0.88x9 + 0.95x10
maximize 0.98x1 + 0.95x2 + 0.98x3 + 0.96x4 + 0.95x5 + 0.94x6 + 0.98x7 + 0.90x8 + 0.85x9 + 0.95x10
maximize 0.95x1 + 0.95x2 + 0.92x3 + 0.92x4 + 0.98x5 + 0.95x6 + 0.98x7 + 0.92x8 + 0.95x9 + 0.98x10
maximize 0.95x1 + 0.96x2 + 0.92x3 + 0.95x4 + 0.95x5 + 0.95x6 + 0.98x7 + 0.95x8 + 0.95x9 + 0.95x10

subject to

24x1 + 12x2 + 8x3 + 6x4 + x5 + 18x6 + 13x7 + 14x8 + 16x9 + 4x10 ≤ 49.37
16x1 + 10x2 + 6x3 + 4x4 + 6x5 + 18x6 + 8x7 + 8x8 + 20x9 + 6x10 ≤ 35.30
6x1 + 2x2 + 6x3 + 7x4 + 9x5 + 20x6 + 10x7 + 12x8 + 24x9 + 8x10 ≤ 28.91
6x1 + 5x2 + 6x3 + 5x4 + 2x5 + 15x6 + 8x7 + 10x8 + 16x9 + 6x10 ≤ 33.44
8x1 + 4x2 + 4x3 + 3x4 + 3x5 + 15x6 + 8x7 + 8x8 + 16x9 + 4x10 ≤ 29.96

and xj ∈ {0, 1} for j = 1, . . . , 10.
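A minimal sketch of solving the aggregated single-objective 0–1 program is given below. It simply enumerates all 2^10 project selections against the five AHP-weighted budget constraints; the objective coefficients and constraint data are those listed above, and brute-force enumeration is used only because the problem is small (a general IP solver would be used in practice).

```python
from itertools import product

# Aggregated objective coefficients for projects 1..10 (AHP-weighted model above)
c = [56.34, 51.74, 40.54, 25.43, 25.44, 53.63, 31.98, 50.59, 67.62, 23.17]

# Cash outlays per period (rows) for projects 1..10, and aggregated budget levels
A = [
    [24, 12, 8, 6, 1, 18, 13, 14, 16, 4],
    [16, 10, 6, 4, 6, 18,  8,  8, 20, 6],
    [ 6,  2, 6, 7, 9, 20, 10, 12, 24, 8],
    [ 6,  5, 6, 5, 2, 15,  8, 10, 16, 6],
    [ 8,  4, 4, 3, 3, 15,  8,  8, 16, 4],
]
b = [49.37, 35.30, 28.91, 33.44, 29.96]

best_val, best_x = -1.0, None
for x in product((0, 1), repeat=10):                      # 1,024 candidate selections
    feasible = all(sum(a * xj for a, xj in zip(row, x)) <= bi
                   for row, bi in zip(A, b))
    if feasible:
        val = sum(cj * xj for cj, xj in zip(c, x))
        if val > best_val:
            best_val, best_x = val, x

chosen = [j + 1 for j, xj in enumerate(best_x) if xj]
print("selected projects:", chosen, "objective value: %.2f" % best_val)
# The chapter reports the optimal selection as projects 2, 3, 4, and 8.
```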
References Abdel-khalik, A. R., & Lusk, E. J. (1974). Transfer pricing – A synthesis. The Accounting Review, 8–24. Arrington, E. E., Hillison, W., & Jensen, R. E. (1984). An application of analytical hierarchy process to model expert judgment on analytic review procedures. Journal of Accounting Research, 22, 298–312. Ayvaz, E., & Pehlivanli, D. (2011). The use of time driven activity based costing and analytic hierarchy process method in the balanced scorecard implementation. International Journal of Business and Management, 6, 146–158. Bailey, A. D., & Boe, W. J. (1976). Goal and resource transfers in the multigoal organization. The Accounting Review, 559–573. Bapna, R., Goes, P., & Gupta, A. (2005). Pricing and allocation for quality-differentiated online services. Management Science, 51, 1141–1150. Baumol, W. J., & Fabian, T. (1964). Decomposition, pricing for decentralization and external economies. Management Science, 11, 1–32.
Baumol, W. J., & Quandt, R. E. (1965). Investment and discount rate under capital rational – A programming approach. Economic Journal, 75(298), 173–329. Bernard, R. H. (1969). Mathematical programming models for capital budgeting – A survey generalization and critique. Journal of Financial and Quantitative Analysis, 4, 111–158. Bhaskar, K. A. (1979). Multiple objective approach to capital budgeting. Accounting and Business Research, 10, 25–46. Bhaskar, I. C., & McNamee, P. (1983). Multiple objectives accounting and finance. Journal of Business Finance and Accounting, 10, 595–621. Boardman, W., Reinhart, I., & Celec, S. E. (1982). The role of payback period in the theory and application of duration to capital budgeting. Journal of Business Finance and Accounting, 9, 511–522. Borkowski, S. C. (1990). Environmental and organizational factors affecting transfer pricing: A survey. Journal of Management Accounting Research, 2, 78–99. Boucher, T. O., & MacStravic, E. L. (1991). Multi-attribute evaluation within a present value framework and its relation to the analytic hierarchy process. The Engineering Economist, 37, 1–32. Calvert, J. (1990). Quality customer service: A strategic marketing weapon for high performance banking. Industrial Engineering, 54–57. Campi, J. P. (1992). It’s not as easy as ABC. Journal of Cost Management Summer, 6, 5–11. Chan, Y. L. (2004). Use of capital budgeting techniques and an analytic approach to capital investment decisions in Canadian municipal governments. Public Budgeting & Finance, 24, 40–58. Chan, Y. L., & Lynn, B. E. (1991). Performance evaluation and the analytic hierarchy process. Journal of Management Accounting Research, 3, 57–87. Charnes, A., & Cooper, W. W. (1961). Management models and industrial applications of linear programming (Vols. I & II). New York: Wiley. Cheng, J. K. (1993). Managerial flexibility in capital investment decision: Insights from the realoptions. Journal of Accounting Literature, 12, 29–66. Chien, I. S., Shi, Y., & Yu, P. L. (1989). MC2 program: A Pascal program run on PC or Vax (revised version). Lawrence: School of Business, University of Kansas. Choi, T. S., & Levary, R. R. (1989). Multi-national capital budgeting using chance-constrained goal programming. International Journal of Systems Science, 20, 395–414. Corner, I. L., Decicen, R. F., & Spahr, R. W. (1993). Multiple-objective linear programming in capital budgeting. Advances in Mathematical Programming and Financial Planning, 3, 241–264. Curtis, S. L. (2010). Intercompany licensing rates: Implications from principal-agent theory. International Tax Journal, 36, 25–46. Dantzig, G. B. (1963). Linear programming and extensions. New Jersey: Princeton University Press. Dantzig, G. B., & Wolfe, P. (1960). Decomposition principles for linear programs. Operations Research, 8, 101–111. Deckro, R. F., Spahr, R. W., & Herbert, J. E. (1985). Preference trade-offs in capital budgeting decisions. IIE Transactions, 17, 332–337. DeWayne, L. S. (2009). Developing a lean performance score. Strategic Finance, 91, 34–39. Dopuch, N., & Drake, D. F. (1964). Accounting implications of a mathematical programming approach to the transfer price problem. Journal of Accounting Research, 2, 10–24. Drucker, P. F. (1993). We need to measure, not count. Wall Street Journal, April 13. Also Expert Choice Voice, July. Dyer, R. F., & Forman, E. H. (1991). An analytic approach to marketing decisions. Englewood, NJ: Prentice Hall. Eccles, R. G. (1983). Control with fairness in transfer pricing. 
Harvard Business Review, 61, 149–161. Eccles, R. G. (1991). The performance measurement manifesto. Harvard Business Review, 69, 131–137.
Eccles, R. G., & Pyburn, P. J. (1992). Creating a comprehensive system to measure performance. Management Accounting, 74, 41–44. Eddie, W., Cheng, L., & Li, H. (2001). Analytic hierarchy process: An approach to determine measures for business performance. Measuring Business Excellence, 5, 30–36. Fisher, J. (1992). Use of nonfinancial performance measures. Journal of Cost Management, 6, 31–38. Forman, E. H., Saaty, T. L., Selly, M. A., & Waldron, R. (1985). Expert choice. Pittsburgh: Decision Support Software. Goicoechea, A., Hansen, D. R., & Duckstein, L. (1982). Multi-objective decision analysis applications. NewYork: Wiley. Gonzalez, G., Reeves, R., & Franz, L. S. (1987). Capital budgeting decision making: An interactive multiple objective linear integer programming search procedure. Advances in Mathematical Programming and Financial Planning, 1, 21–44. Gould, J. R. (1964). Internal pricing in corporations when there are costs of using an outside market. The Journal of Business, 37, 61–67. Gyetvan, F., & Shi, Y. (1992). Weak duality theorem and complementary slackness theorem for linear matrix programming problems. Operations Research Letters, 11, 244–252. Hardy, C., & Reeve, R. (2000). A study of the internal control structure for electronic data interchange systems using the analytic hierarchy process. Accounting and Finance, 40, 191–210. Harker, P. T., & Vargas, L. G. (1987). The theory of ratio scale estimation: Saaty’s analytic hierarchy process. Management Science, 33, 1383–1403. Hillier, F. S. (1963). The derivation of probabilities information for the evaluation of risky investments. Management Science, 443–457. Hirshleifer, J. (1956). On the economies of transfer pricing. The Journal of Business, 172–184. Howe, K. M., & Patterson, I. H. (1985). Capital investment decisions under economies of scale in flotation costs. Financial Management, 14, 61–69. Howell, R. A., & Schwartz, W. A. (1994). Asset deployment and investment justification, In Handbook of cost management. Boston: Warren Gorham Lamont. Huang, X. (2008). Mean-variance model for fuzzy capital budgeting. Computers & Industrial Engineering, 55, 34. Huang, S., Hung, W., Yen, D. C., Chang, I., & Jiang, D. (2011). Building the evaluation model of the IT general control for CPAs under enterprise risk management. Decision Support Systems, 50, 692. Ignizio, J. P. (1976). An approach to the capital budgeting problem with multiple objectives. The Engineering Economist, 21, 259–272. Ishizaka, A., Balkenborg, D., & Kaplan, T. (2011). Influence of aggregation and measurement scale on ranking a compromise alternative in AHP. The Journal of the Operational Research Society, 62, 700–711. Kamath, R. R., & Khaksari, S. Z. (1991). Real estate investment: An application of the analytic hierarchy process. The Journal of Financial and Strategic Decisions, 4, 73–100. Kaplan, R. S. (1986). Accounting lag: The obsolescence of cost accounting systems. California Management Review, 28, 175–199. Kaplan, R. S., & Norton, D. P. (1992). The balanced scorecard-measures that drive performance. Harvard Business Review, 69, 71–79. Karanovic, G., Baresa, S., & Bogdan, S. (2010). Techniques for managing projects risk in capital budgeting process. UTMS Journal of Economics, 1, 55–66. Kim, S. H., & Farragher, E. I. (1981). Current capital budgeting practices. Management Accounting, 6, 26–30. Lecraw, D. J. (1985). Some evidence on transfer pricing by multinational corporations. In A. M. Rugman & L. Eden (Eds.), Multinationals and transfer pricing. 
New York: St. Martin’s Press. Lee, C. F. (1993). Statistics for business and financial economics. Lexington: Heath. Lee, H. (1993). A structured methodology for software development effort prediction using the analytic hierarchy process. Journal of Systems and Software, 21, 179–186.
Lee, S. M., & Lerro, A. (1974). Capital budgeting for multiple objectives. Financial Management, 3, 51–53. Lee, Y. R., Shi, Y., & Yu, P. L. (1990). Linear optimal designs and optimal contingency plans. Management Science, 36, 1106–1119. Lee, H., Nazem, S. M., & Shi, Y. (1995). Designing rural area telecommunication networks via hub cities. Omega: International Journal of Management Science, 22(3), 305–314. Li, Y., Huang, M., Chin, K., Luo, X., & Han, Y. (2011). Integrating preference analysis and balanced scorecard to product planning house of quality. Computers & Industrial Engineering, 60, 256–268. Liberatore, M. J., Mohnhan, T. F., & Stout, D. E. (1992). A framework for integrating capital budgeting analysis with strategy. The Engineering Economist, 38, 31–43. Lin, T. W. (1993). Multiple-criteria capital budgeting under risk. Advances in Mathematical Programming and Financial Planning, 3, 231–239. Lin, L., Lefebvre, C., & Kantor, J. (1993). Economic determinants of international transfer pricing and the related accounting issues, with particular reference to Asian Pacific countries. The International Journal of Accounting, 49–70. Locke, E. A., Shaw, K. N., & Lathom, G. P. (1981). Goal setting and task performance: 1969–1980. Psychological Bulletin, 90, 125–152. Lorie, J. H., & Savage, L. I. (1955). Three problems in rationing capital. Journal of Business, 28, 229–239. Mensah, Y. M., & Miranti, P. J., Jr. (1989). Capital expenditure analysis and automated manufacturing systems: A review and synthesis. Journal of Accounting Literature, 8, 181–207. Merville, L. J., & Petty, W. J. (1978). Transfer pricing for the multinational corporation. The Accounting Review, 53, 935–951. Muralidhar, K., Santhanam, R., & Wilson, R. L. (1990). Using the analytic hierarchy process for information system project selection. Information and Management, 18, 87–95. Naert, P. A. (1973). Measuring performance in a decentralized corporation with interrelated divisions: Profit center versus cost center. The Engineering Economist, 99–114. Nakayama, H. (1994). Some remarks on trade-off analysis in multi-objective programming. The XIth International conference on multiple criteria decision making, Coimbra, Portugal, Aug. 1–6. Nanni, A. J., Dixon, J. R., & Vollmann, T. E. (1990). Strategic control and performance measurement. Journal of Cost Management, 4, 33–42. Natarajan, T., Balasubramanian, S. A., & Manickavasagam, S. (2010). Customer’s choice amongst self-service technology (SST) channels in retail banking: A study using analytical hierarchy process (AHP). Journal of Internet Banking and Commerce, 15, 1–15. Onsi, M. (1970). A transfer pricing system based on opportunity cost. The Accounting Review, 45, 535–543. Palliam, R. (2005). Application of a multi-criteria model for determining risk premium. The Journal of Risk Finance, 6, 341–348. Pike, R. (1983). The capital budgeting behavior and corporate characteristics of capitalconstrained firms. Journal of Business Finance and Accounting, 10, 663–671. Pike, R. (1988). An empirical study of the adoption of sophisticated capital budgeting practices and decision-making effectiveness. Accounting and Business Research, 18, 341–351. Reeves, G. R., & Franz, L. S. (1985). A simplified interactive multiple objective linear programming procedure. Computers and Operations Research, 12, 589–601. Reeves, G. A., & Hedin, S. R. (1993). A generalized interactive goal programming procedure. Computers and Operations Research, 20, 747–753. Reeves, G. F., Lawrence, S. M., & Gonzalez, J. 
J. (1988). A multiple criteria approach to aggregate industrial capacity expansion. Computers and Operations Research, 15, 333–339. Ronen, J., & McKinney, G. (1970). Transfer pricing for divisional autonomy. Journal of Accounting Research, 8, 99–112. Rotch, W. (1990). Activity-based costing in service industries. Journal of Cost Management, 4–14.
Ruefli, T. W. (1971). A generalized goal decomposition model. Management Science, 17, 505–518. Saaty, T. L. (1980). The analytic hierarchy process. New York: McGraw-Hill. Santhanam, R., Muralidhar, K., & Schniederjans, M. (1989). Zero-one goal programming approach for information system project selection. Omega: International Journal of Management Science, 17, 583–598. Schniederjans, M. J., & Garvin, T. (1997). Using the analytic hierarchy process and multiobjective programming for the selection of cost drivers in activity-based costing. European Journal of Operational Research, 100, 72–80. Schniederjans, M. J., & Wilson, R. L. (1991). Using the analytic hierarchy process and goal programming for information system project selection. Information and Management, 20, 333–342. Seiford, L., & Yu, P. L. (1979). Potential solutions of linear systems: The multi-criteria multiple constraint level program. Journal of Mathematical Analysis and Applications, 69, 283–303. Shi, Y. (1991). Optimal linear production systems: Models, algorithms, and computer support systems. Ph.D. dissertation, School of Business, University of Kansas. Shi, Y., & Lee, H. (1992). A binary integer linear programming with multi-criteria and multiconstraint levels (Working paper #92–21). School of Business, University of Nebraska at Omaha. Shi, Y., & Yu, P. L. (1992). Selecting optimal linear production systems in multiple criteria environments. Computer and Operations Research, 19, 585–608. Singhal, J., Marsten, R. E., & Morin, T. L. (1989). Fixed order brand-and-bound methods for mixed-integer programming: The ZOOM system. ORSA Journal on Computing, 1, 44–51. Steuer, R. (1986). Multiple criteria optimization: Theory. Computation and application. New York: Wiley. Suh, C.K., Suh, E.H., Choi, J. (1993). A two-phased DSS for R&D portfolio selection. Proceedings of International DSI, 351–354. Tang, R. Y. W. (1992). Transfer pricing in the 1990s. Management Accounting, 73, 22–26. Tapinos, E., Dyson, R. G., & Meadows, M. (2011). Does the balanced scorecard make a difference to the strategy development process? The Journal of the Operational Research Society, 62, 888–899. Thanassoulis, E. (1985). Selecting a suitable solution method for a multi-objective programming capital budgeting problem. Journal of Business Finance and Accounting, 12, 453–471. Thomas, A. L. (1980). A behavioral analysis of joint-cost allocation and transfer pricing. Champaign: Stipes Publishing. Turney, S. T. (1990). Deterministic and stochastic dynamic adjustment of capital investment budgets. Mathematical Computation and Modeling, 13, 1–9. Watson, D. J. H., & Baumler, J. V. (1975). Transfer pricing: A behavioral context. The Accounting Review, 50, 466–474. Weingartner, H. M. (1963). Mathematical programming and the analysis of capital budgeting problems. Englewood Cliffs: Prentice-Hall. Wu, C.-R., Lin, C.-T., & Tsai, P.-H. (2011). Financial service sector performance measurement model: AHP sensitivity analysis and balanced scorecard approach. The Service Industries Journal, 31, 695. Yu, P. L. (1985). Multiple criteria decision making: Concepts, techniques and extensions. New York: Plenum. Yu, Y. L., & Zeleny, M. (1975). The set of all nondominated solutions in the linear cases and a multicriteria simplex method. Journal of Mathematical Analysis and Applications, 49, 430–458. Yunker, P. J. (1983). Survey study of subsidiary, autonomy, performance evaluation and transfer pricing in multinational corporations. Columbia Journal of World Business, 19, 51–63. 
Zeleny, M. (1974). Linear multi-objective programming. Berlin: Springer.
Statistics Methods Applied in Employee Stock Options
30
Li-jiun Chen and Cheng-der Fuh
Contents
30.1 Introduction  842
30.2 Preliminary Knowledge  844
  30.2.1 Change of Measure  844
  30.2.2 Hierarchical Clustering with K-Means Approach  845
  30.2.3 Standard Errors in Finance Panel Data  846
30.3 Employee Stock Options  847
  30.3.1 Model-Based Approach to Subjective Value  847
  30.3.2 Compensation-Based Approach to Subjective Value  851
30.4 Simulation Results  852
  30.4.1 Exercise Behavior  852
  30.4.2 Factors' Effects on ESOs  853
  30.4.3 Offsetting Effect of Sentiment and Risk on ESO  857
30.5 Empirical Study  857
  30.5.1 Data  857
  30.5.2 Preliminary Findings  859
  30.5.3 Implications of Regression Results and Variable Sensitivities  861
  30.5.4 Subjective Value and Risk  864
30.6 Conclusion  866
Appendix 1: Derivation of Risk-Neutral Probability by Change of Measure  867
Appendix 2: Valuation of European ESOs  869
Appendix 3: Hierarchical Clustering with a K-Means Approach  870
References  871
L.-j. Chen (*)
Department of Finance, Feng Chia University, Taichung City, Taiwan
e-mail: [email protected]

C.-d. Fuh
Graduate Institute of Statistics, National Central University, Zhongli City, Taiwan
e-mail: [email protected]
Abstract
This study presents model-based and compensation-based approaches to pricing the subjective value of employee stock options (ESOs). In the model-based approach, we consider a utility-maximizing model in which the employees allocate their wealth among company stock, a market portfolio, and risk-free bonds, and then we derive the ESO formulas, which take into account illiquidity and sentiment effects. By using the method of change of measure, the derived formulas are simply like those of the market value with altered parameters. To calculate the compensation-based subjective value, we group employees by hierarchical clustering with a K-means approach and back out the option value in an equilibrium competitive employment market. Further, we test illiquidity and sentiment effects on ESO values by running regressions that account for the structure of standard errors in finance panel data. Using executive stock options and compensation data paid between 1992 and 2004 for firms covered by the Compustat Executive Compensation Database, we find that subjective value is positively related to sentiment and negatively related to illiquidity in all specifications, consistent with the offsetting roles of sentiment and risk aversion. Moreover, executives value ESOs at a 48 % premium to the Black-Scholes value, and ESO premiums are explained by a sentiment level of 12 % in risk-adjusted, annualized excess return, suggesting a high level of executive overconfidence.
Keywords
Employee stock option • Sentiment • Subjective value • Illiquidity • Change of measure • Hierarchical clustering with K-means approach • Standard errors in finance panel data • Exercise boundary • Jump diffusion model
30.1 Introduction
Employee stock options (ESOs) are a popular method of compensation. According to the Compustat Executive Compensation database, the number of option grants increased from roughly 0.25 billion in 1992 to 1.4 billion in 2001. Specifically, in fiscal 2001, 53 % of total pay came from granted options, compared with 33 % in 1992 (option values as estimated in the Compustat Executive Compensation database). Moreover, after 1998 executives received, on average, more than ten thousand options. ESOs can help firms retain talent and reduce agency costs (Jensen and Meckling 1976). They also mitigate risk-related incentive problems (Agrawal and Mandelker 1987; Hemmer et al. 2000) and provide an alternative to cash compensation, which is especially important for firms facing
financial constraints (Core and Guay 2001). In addition, ESOs attract highly motivated and able employees (Core and Guay 2001; Oyer and Schaefer 2005). All of these factors contribute to the importance of ESOs for corporate governance and finance research. The illiquidity problem of ESOs cannot be neglected. ESOs usually have a vesting period during which they cannot be exercised, and hence they cannot be redeemed for a fixed period of time. Furthermore, ESOs are generally not publicly traded and their sale is not permitted. Standard methods for valuing options are difficult to apply to ESOs. Because of the illiquidity of ESOs, many employees hold undiversified portfolios that include large stock options of their own firms. Because of the impossibility of full diversification, the value perceived by employees (subjective value) may be quite different from the traded option. We consider the subjective value to be what a constrained agent would pay for the ESOs and the market value to be the value perceived by an unconstrained agent. Many papers address the differences between subjective and market value, each concluding that the subjective value of ESOs should be less than the market value (Lambert et al. 1991; Hall and Murphy 2002; Ingersoll 2006). If ESOs are generally worth less than market value, why do employees continue to accept and, indeed, sometimes prefer ESO compensation? One reasonable explanation is employee sentiment. Sentiment means positive private information or behavioral overconfidence regarding the future risk-adjusted returns of the firm. Simply, the employee believes that he possesses private information and can benefit from it. Or the employee overestimates the return of the firm and believes ESOs are valuable. Some empirical evidence supports this conjecture. Oyer and Schaefer (2005) and Bergman and Jenter (2007) posit that employees attach a sentiment premium to their stock options; firms exploit this sentiment premium to attract and retain optimistic employees. Hodge et al. (2009) provide survey evidence and find that managers subjectively value the stock option greater than its Black-Scholes value. Three statistics methods are applied in our ESO study (Chen and Fuh 2011; Chang et al. 2013), including change of measure, hierarchical clustering with a K-means approach, and estimation of standard errors in finance panel data. We derive a solution for ESO value that is a function of both illiquidity and sentiment in a world where employees balance their wealth between the company’s stock, the market portfolio, and a risk-free asset. By using the method of change of measure, we find a probability measure and then the ESO formulas are derived easily. In addition, from the ESO pricing formulas, we are able to not only estimate the subjective values but also study the exercise policies. Early exercise is a pervasive phenomenon and, importantly, the early exercise effect is critical in valuation of ESOs, especially for employees who are more risk averse and when there are more restrictions on stock holding. Applying a comprehensive set of executive options and compensation data, this study empirically determines subjective value, grouping employees by hierarchical clustering with a K-means approach and backing out the option
value in an equilibrium competitive employment market. Specifically, we group executives according to position, the firm's total market value, nonoption compensation, and the immediate exercise value of the options for each industry by hierarchical clustering with a K-means approach. We then back out the empirical ESO value that each executive implicitly places on their options, chosen so that total compensation is equivalent within the same cluster. Further, we model both illiquidity and sentiment effects and test them with executive options data. We regress the empirical ESO values on the proportion of total wealth held in illiquid firm-specific holdings and sentiment, which is estimated from our pricing formula or the Capital Asset Pricing Model (CAPM) risk-adjusted alpha, while controlling for key option pricing variables such as moneyness, time to maturity, volatility, and dividend payout. As is well known, when the residuals are correlated across observations, the standard errors of estimated coefficients produced by Ordinary Least Squares (OLS) may be biased and lead to incorrect inference. Petersen (2009) compares the different methods used in the literature and gives researchers guidance for their use. The data we collected are from multiple firms over several years. Hence, we consider the problem of standard errors in finance panel data and run the regressions with standard errors clustered by firm, by year, and by both firm and year. The remainder of this chapter is organized as follows. Section 30.2 introduces some preliminary knowledge. Section 30.3 develops our model and addresses the approaches to pricing the subjective value of ESOs. Section 30.4 presents the simulation results. Section 30.5 shows the empirical study; Sect. 30.6 concludes the chapter.
30.2 Preliminary Knowledge
30.2.1 Change of Measure

In our ESO study, we assume the stock price follows a jump-diffusion process. Here, we briefly introduce the change of measure for a compound Poisson process and Brownian motion. Suppose that we have a probability space $(\Omega, \mathcal{F}, \mathbb{P})$ on which is defined a Brownian motion $W_t$. Suppose that on this same probability space there is defined a compound Poisson process

$$Q_t = \sum_{i=0}^{N_t} Y_i$$

with intensity $\lambda$ and jumps having density function f(y). Assume that there is a single filtration $\mathcal{F}_t$, $t \ge 0$, for both the Brownian motion and the compound Poisson process. In this case, the Brownian motion and compound Poisson process must be independent (see Shreve 2008).
Let $\tilde{\lambda}$ be a positive number, let $\tilde{f}(y)$ be another density function with the property that $\tilde{f}(y) = 0$ whenever f(y) = 0, and let Y(t) be an adapted process. We define

$$Z_1(t) = \exp\left\{-\int_0^t Y(u)\,dW_u - \frac{1}{2}\int_0^t Y^2(u)\,du\right\},$$

$$Z_2(t) = e^{(\lambda - \tilde{\lambda})t}\prod_{i=0}^{N_t} \frac{\tilde{\lambda}\,\tilde{f}(Y_i)}{\lambda\, f(Y_i)},$$

$$Z(t) = Z_1(t)\,Z_2(t).$$

It can be verified that the process Z(t) is a martingale.

Lemma 30.1 Define $\tilde{\mathbb{P}}(A) = \int_A Z(T)\,d\mathbb{P}$ for all $A \in \mathcal{F}$. Under the probability measure $\tilde{\mathbb{P}}$, the process

$$\tilde{W}_t = W_t + \int_0^t Y(s)\,ds$$

is a Brownian motion, $Q_t$ is a compound Poisson process with intensity $\tilde{\lambda}$ and independent, identically distributed jump sizes having density $\tilde{f}(y)$, and the processes $\tilde{W}_t$ and $Q_t$ are independent.

The proof of Lemma 30.1 is presented by Shreve (2008). This lemma is useful for us to derive the ESO formula in Sect. 30.3.
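The martingale property of Z(t) that underlies Lemma 30.1 can be checked numerically. The sketch below simulates the Brownian and compound Poisson components and verifies that the Monte Carlo mean of Z(T) is close to one. The choice of a constant process Y(t) ≡ θ and normal jump densities f and f̃ is an illustrative assumption, not part of the chapter's model.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_paths = 1.0, 20000
theta = 0.4                        # constant adapted process Y(t) = theta (assumption)
lam, lam_tilde = 2.0, 3.0          # original and new jump intensities
mu_f, mu_ft, sd = 0.0, 0.1, 0.2    # jump densities f ~ N(mu_f, sd^2), f~ ~ N(mu_ft, sd^2)

def normal_pdf(x, m, s):
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))

Z = np.empty(n_paths)
for p in range(n_paths):
    # Z1(T): Girsanov factor for the Brownian part with constant theta
    W_T = rng.normal(0.0, np.sqrt(T))
    Z1 = np.exp(-theta * W_T - 0.5 * theta ** 2 * T)
    # Z2(T): likelihood ratio for the compound Poisson part
    N_T = rng.poisson(lam * T)
    Y = rng.normal(mu_f, sd, size=N_T)          # jumps drawn under the original measure
    ratio = (lam_tilde * normal_pdf(Y, mu_ft, sd)) / (lam * normal_pdf(Y, mu_f, sd))
    Z2 = np.exp((lam - lam_tilde) * T) * np.prod(ratio)
    Z[p] = Z1 * Z2

print("Monte Carlo E[Z(T)] = %.4f (should be close to 1)" % Z.mean())
```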
30.2.2 Hierarchical Clustering with K-Means Approach

The main assumption of the compensation-based subjective value is that all executives within the same group receive the same total compensation. For each executive in the group, the implied subjective value is derived by comparing the difference between nonoption compensation and the average compensation. Grouping executives appropriately is therefore essential in the compensation-based approach. Clustering is an unsupervised technique for analyzing data and dividing patterns (observations, data items, or feature vectors) into groups (clusters). There are two kinds of clustering algorithms: hierarchical and partitional approaches. Hierarchical methods produce a nested structure of partitions, whereas partitional methods produce only one partition (Jain et al. 1999). Hierarchical clustering algorithms repeat the cycle of either merging smaller clusters into larger ones or dividing larger clusters into smaller ones. An agglomerative clustering strategy uses the bottom-up approach of merging clusters into larger ones, whereas a divisive clustering strategy uses the top-down approach of splitting larger clusters into smaller ones. Typically, a greedy approach is used in deciding which larger/smaller clusters are used for merging/dividing. Euclidean distance, Manhattan distance, and cosine similarity are some of the most
commonly used metrics of similarity for numeric data. For non-numeric data, metrics such as the Hamming distance are used. It is important to note that the actual observations (instances) are not needed for hierarchical clustering, because only the matrix of distances is sufficient. The user can obtain different clustering depending on the level that is cut. A partitional clustering algorithm obtains a single partition of the data instead of a clustering structure. A problem accompanying the use of a partitional algorithm is the choice of the number of desired output clusters (usually called k). One of the most commonly used partitional clustering algorithms is the K-means clustering algorithm. It starts with a random initial partition and keeps reestimating cluster centers and reassigning the patterns to clusters based on the similarity between the pattern and the cluster centers. These two steps are repeated until a certain intra-cluster similarity objective function and inter-cluster dissimilarity objective function are optimized. Therefore, sensible initialization of centers is an important factor in obtaining quality results from partitional clustering algorithms. Hierarchical and partitional approaches each have their advantages and disadvantages. Therefore, we apply a hybrid approach combining hierarchical and partitional approaches in this study.
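The hybrid grouping can be sketched with standard Python tooling: an agglomerative (Ward) step supplies initial centers, which K-means then refines. The feature set below (position code, firm market value, nonoption compensation, immediate exercise value) mirrors the grouping variables described in the chapter, but the synthetic data and the choice of k are assumptions made only for illustration.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# Synthetic executive-level features within one industry (illustration only):
# [position code, firm market value, nonoption compensation, immediate exercise value]
X = rng.lognormal(mean=0.0, sigma=1.0, size=(300, 4))
Xs = StandardScaler().fit_transform(X)

k = 5  # assumed number of executive groups

# Step 1: hierarchical (Ward) clustering gives a nested partition and initial centers
Z = linkage(Xs, method="ward")
labels_h = fcluster(Z, t=k, criterion="maxclust")
centers0 = np.vstack([Xs[labels_h == g].mean(axis=0) for g in range(1, k + 1)])

# Step 2: K-means refinement starting from the hierarchical centers
km = KMeans(n_clusters=k, init=centers0, n_init=1, random_state=0).fit(Xs)

# Within each final cluster, the compensation-based approach would back out the ESO
# value that equalizes total compensation across executives in that cluster.
print("cluster sizes:", np.bincount(km.labels_)[:k])
```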
30.2.3 Standard Errors in Finance Panel Data

In finance panel data, the residuals may be correlated across firms or across time, and OLS standard errors can be biased. Petersen (2009) compares the different methods used in the literature and provides guidance to researchers as to which method should be used. The standard regression for panel data is

$$Y_{it} = X_{it}\beta + \varepsilon_{it}, \qquad i = 1, \ldots, N; \; t = 1, \ldots, T,$$

where there are observations on firms i across years t. X and ε are assumed to be independent of each other and to have finite variance. OLS standard errors are unbiased when the residuals are independent and identically distributed. However, they may lead to incorrect inference when the residuals are correlated across observations. In finance studies, there are two general forms of dependence: time-series dependence (the firm effect), in which the residuals of a given firm may be correlated across years, and cross-sectional dependence (the time effect), in which the residuals of a given year may be correlated across different firms. Considering the firm effect, the residuals and the independent variable are specified as

$$\varepsilon_{it} = \gamma_i + \nu_{it}, \qquad X_{it} = \mu_i + v_{it}.$$

Both the independent variable and the residual are correlated across observations of the same firm but are independent across firms. Petersen (2009) shows that OLS,
Fama-MacBeth, and Newey-West standard errors are biased, and only clustered standard errors are unbiased as they account for the residual dependence created by the firm effect. Considering the time effect, the residuals and the independent variable are specified as

$$\varepsilon_{it} = \delta_t + \eta_{it}, \qquad X_{it} = \zeta_t + \nu_{it}.$$

OLS standard errors still underestimate the true standard errors. The clustered standard errors are much more accurate, but unlike the results with the firm effect, they underestimate the true standard error. However, the bias in the clustered standard error estimates declines with the number of clusters. Because the Fama-MacBeth procedure is designed to address a time effect, the Fama-MacBeth standard errors are unbiased. With both firm and time effects, the residuals and the independent variable are specified as

$$\varepsilon_{it} = \gamma_i + \delta_t + \eta_{it}, \qquad X_{it} = \mu_i + \zeta_t + \nu_{it}.$$

Standard errors clustered by only one dimension are biased downward. Clustering by two dimensions produces less biased standard errors. However, clustering by firm and time does not always yield unbiased estimates. We only introduce the methods we use in this study; readers wanting to know more methods for the estimation of standard errors in finance panel data should refer to Petersen (2009).
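A sketch of the firm- and time-clustered standard errors discussed above, using statsmodels, is given below. The data-generating step is synthetic; the two-way covariance is formed by the add-and-subtract combination of one-way clustered covariances (cluster by firm, plus cluster by year, minus the heteroskedasticity-robust term), which is one common way to implement clustering in both dimensions.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
N, T = 200, 10                                   # firms and years (synthetic panel)
firm = np.repeat(np.arange(N), T)
year = np.tile(np.arange(T), N)

x = rng.normal(size=N * T) + rng.normal(size=N)[firm]        # regressor with a firm effect
e = rng.normal(size=N * T) + rng.normal(size=N)[firm] + rng.normal(size=T)[year]
y = 1.0 + 0.5 * x + e
X = sm.add_constant(x)

ols = sm.OLS(y, X).fit()                                      # iid (OLS) standard errors
by_firm = sm.OLS(y, X).fit(cov_type="cluster", cov_kwds={"groups": firm})
by_year = sm.OLS(y, X).fit(cov_type="cluster", cov_kwds={"groups": year})
white = sm.OLS(y, X).fit(cov_type="HC0")

# Two-way (firm and year) clustered covariance: V_firm + V_year - V_White
V2 = by_firm.cov_params() + by_year.cov_params() - white.cov_params()
se_two_way = np.sqrt(np.diag(V2))

print("OLS SE      :", np.round(ols.bse, 4))
print("firm-cluster:", np.round(by_firm.bse, 4))
print("year-cluster:", np.round(by_year.bse, 4))
print("two-way     :", np.round(se_two_way, 4))
```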
30.3 Employee Stock Options
This section introduces two methods to price the subjective value of ESOs. We call them the model-based approach and the compensation-based approach. To derive the model-based subjective value, we use the technique of change of measure and find a probability measure P* such that the option value can be generated simply. To calculate compensation-based subjective value, we group employees by hierarchical clustering with a K-means approach and back out the option value in an equilibrium-competitive employment market.
30.3.1 Model-Based Approach to Subjective Value

From Chen and Fuh (2011), we have a three-asset economy in which the employee allocates their wealth among three assets: company stock S, market portfolio M, and risk-free bond B. Because of the illiquidity of ESOs, the employee is constrained to allocate a fixed fraction α of their wealth to company stock (via some form of ESO). Define the jump-diffusion processes for the three assets as follows:
$$\frac{dS}{S} = \mu_s\,dt + \sigma_s\,dW_m + \nu\,dW_s + d\left(\sum_{i=0}^{N_t}(Y_i - 1)\right), \qquad \frac{dM}{M} = (\mu_m - \delta_m)\,dt + \sigma_m\,dW_m, \qquad \frac{dB}{B} = r\,dt,$$
(30.1)
where μ_s = μ − δ − λk. μ, μ_m, r are instantaneous expected rates of return for the stock, market portfolio, and risk-free bond, respectively. δ and δ_m are dividends for the stock and market portfolio, respectively. The Brownian motion process W_m represents the normal systematic risk of the market portfolio. The Brownian motion process W_s and the jump process N_t are the idiosyncratic risk of the company stock, where N_t captures the jump risk of the company stock and follows a Poisson distribution with average frequency λ. Y_i − 1 represents the percentage of stock variation when the ith jump occurs. Denote E(Y_i − 1) = k and E(Y_i − 1)² = k₂ for all i. σ_s and σ_m are the normal systematic portions of total volatility for the stock and the market portfolio, respectively, whereas ν is the normal unsystematic volatility of the stock. The two Brownian motions and the jump process are presumed independent. For simplicity, we assume that the CAPM holds so that the efficient portfolio is the market. Following the idea of Ingersoll (2006), we solve the constrained optimization problem and determine the employee's indirect utility function. The marginal utility is then used as a subjective state price density to value compensation. We assume that the employee's utility function U(·) is set as U(C) = C^γ/γ with a coefficient of relative risk aversion 1 − γ. The process of the employee's marginal utility, or the pricing kernel, can be derived as²:

$$\frac{dJ_W}{J_W} = -\hat{r}\,dt - \hat{s}\,dW_m - (1-\gamma)\alpha\nu\,dW_s + d\left(\sum_{i=0}^{N_t}\left\{\left[\alpha(Y_i - 1) + 1\right]^{\gamma-1} - 1\right\}\right),$$
(30.2)
i¼0 ∂J ½W ðtÞ, t ∂W ðtÞ
is the marginal utility, J[W(t), t] and W(t) are the where J W ¼ ^r ¼ r ð1 gÞ employee’s total utility and wealth at time t, 2 2 1 2 mm r ^ ¼ sm . a n þ 2 a g l þ a lk , and s The rational equilibrium value of the ESO at time t, F(St, t), satisfies the Euler equation FðSt ; tÞ ¼
Et fJ W ½W ðT Þ, T FðST ; T Þg , J W ½W ðtÞ, t
(30.3)
where F(ST, T) is the payoff at the maturity T. To easily calculate the ESO value, we find a probability measure P* by using change of measure method and then the second equality in the following Eq. (30.4) is satisfied. 2
Because the process can be similarly derived from Chang et al. (2008), it is omitted.
30
Statistics Methods Applied in Employee Stock Options
F ð St ; t Þ ¼ ¼
Et fJ W ½W ðT Þ, T FðST ; T Þg J W ½W ðtÞ, t
849
(30.4)
erðTtÞ Et ½FðST ; T Þ,
Z ðT Þ JW[W(t), t], r* is subjective bond yield, and E*t is the where dP dP ¼ ZðtÞ , Z(t) ¼ e * expectation under P and information at time t. Under P*, the stock process can be expressed as
r*t
Nt X dS ¼ ðr d Þdt þ sN dW t þ d ðY i 1Þ, S i¼0
where 1 r ¼ r ð1 gÞða lk þ g lk2 a2 þ a2 n2 2 lðx 1Þ, 1 d ¼ d ð1 gÞ alk þ g lk2 a2 ð1 a a n2 2 lðx 1Þ þ lk, s2N ¼ s2s þ n2 , l ¼ l x, x ¼ E½aðY i 1Þ þ 1g1 , W*t is the standard Brownian motion and Nt is a Poisson process with rate l*. A detailed explanation is given in Appendix 1.
30.3.1.1 European ESO First, we consider the simple ESO contract, the European ESO. The price formula is presented in Theorem 30.1. Theorem 30.1 The value of the European ESO with strike price K and time to
maturity t, written on the jump-diffusion process in Eq. (30.1), is as follows: CE ðSt ; tÞ ¼
1 X ðl tÞj elt CðjÞ j! j¼0
(
"
where CðjÞ ¼
St e
j Y E Y i F d1
#
d t
i¼0
Ker t E F d2 Yj ln St i¼0 Y i =K þ r d þ 12 s2N t pffiffiffi d1 ¼ , sN t pffiffiffi d2 ¼ d1 sN t: The proof of Theorem 30.1 is in Appendix 2.
(30.5)
850
L.-j. Chen and C.-d. Fuh
30.3.1.2 American ESO Suppose that the option can be exercised at n time instants. These time instants are assumed to be regularly spaced at intervals of Dt, and denoted by ti, 0 i n, where t0 ¼ 0, tn ¼ T, and ti+1 ti ¼ Dt for all i. Denote CA as the value of the American call option, CE as the value of the European call option, K as the strike price, and Si ¼ Sti . The critical price at these time points is denoted by S*i , 0 i n, and is the price at which the agent is indifferent between holding the option and exercising. Denote E*i as the expectation under P* and information at time ti. Theorem 30.2 (Chen and Fuh 2011) The value of the American ESO exercisable at
n time instants, when the ESO is not exercised, written on the jump-diffusion process in Eq. (30.1) is as follows: CA ðS0 ; T Þ
n1 X r‘Dt ¼ CE ðS0 ; T Þ þ e E0 ½S‘ 1 edDt ‘¼1
K 1 erDt I fS‘ S g ‘ n X rjDt e E0 ½CA ðSj , ðn jÞDtÞ Sj K I fSj1 S g I fSj and y(t)> ¼ (b0, b1, . . ., bp, g1, . . ., gq)F1(t). Since Zt contains stk(k ¼ 1, , q) which in turn depends on unknown parameters y ¼ (b0, b1, . . ., bp, g1, . . ., gq), we may write Zt as Zt(y) to emphasize the nonlinearity and its dependence on y. If we use the following nonlinear quantile regression min y
X
rt ut y> Z t ðyÞ ,
(41.21)
t
for a fixed t in isolation, consistent estimate of y cannot be obtained since it ignores the global dependence of the stk’s on the entire function y(·). If the dependence structure of ut is characterized by (1) and (1), we can consider the following restricted quantile regression instead of Eq. 41.21: ^ , ^y ¼ p
(
XX arg minp, y i t rti ut p> i Z t ðyÞ s:t: pi ¼ yðti Þ ¼ yF1 ðti Þ:
Estimation of this global restricted nonlinear quantile regression is complicated. Xiao and Koenker (2009) propose a simpler two-stage estimator that both incorporates the global restrictions and also focuses on the local approximation around the specified quantile. The proposed estimation consists of the following two steps: (i) The first step considers a global estimation to incorporate the global dependence of the latent stk’s on y. (ii) Then, using results from the first step, we
1156
Z. Xiao et al.
focus on the specified quantile to find the best local estimate for the conditional quantile. Let AðLÞ ¼ 1 b1 L bp Lp , BðLÞ ¼ g1 þ þ gq Lq1 ; under regularity assumptions ensuring that A(L) is invertible, we obtain an ARCH(1) representation for st: st ¼ a0 þ
1 X aj utj :
(41.22)
j¼1
For identification, we normalize a0 ¼ 1. Substituting the above ARCH(1) representation into (1) and (1), we have ut ¼
! 1 X a0 þ aj utj et ,
(41.23)
j¼1
and Qut ðtjF t1 Þ ¼ a0 ðtÞ þ
1 X
aj ðtÞutj ,
j¼1
where aj ðtÞ ¼ aj Qet ðtÞ, j ¼ 0, 1, 2, . . .. Let m ¼ m(n) be a truncation parameter; we may consider the following truncated quantile autoregression: Qut ðtjF t1 Þ a0 ðtÞ þ a1 ðtÞjut1 j þ þ am ðtÞjutm j: By choosing m suitably small relative to the sample size n, but large enough to avoid serious bias, we obtain a sieve approximation for the GARCH model. One could estimate the conditional quantiles simply using a sieve approximation: ∨
Qut (t | Ft−1) = aˆ 0 (t ) + aˆ1 (t ) | ut−1 | + ⋅⋅⋅ + aˆ m (t ) | ut−m |, where ^ a j ðtÞ are the quantile autoregression estimates. Under regularity assumptions ∨
Qut (t | Ft−1 ) = Qut (t | Ft−1 ) + Op (m / n ). However, Monte Carlo evidence indicates that the simple sieve approximation does not directly provide a good estimator for the GARCH model, but it serves as an adequate preliminary estimator. Since the first step estimation focuses on the global model, it is desirable to use information over multiple quantiles in estimation.
41
Quantile Regression and Value at Risk
1157
Combining information over multiple quantiles helps us to obtain globally coherent estimate of the scale parameters. Suppose that we estimate the m-th order quantile autoregression e a ðtÞ ¼ arg min a
n X
rt
m X ut a0 aj utj
t¼mþ1
! (41.24)
j¼1
at quantiles (t1, . . . , tK) and obtain estimates e a ðtk Þ, k ¼ 1, . . . , K. Let e a 0 ¼ 1 in accordance with the identification assumption. Denote a ¼ ½a1 ; . . . ; am ; q1 ; . . . ; qK > ,
h i> p¼ e a ðt1 Þ> , . . . , e a ðtK Þ> ,
where qk ¼ Qet ðtk Þ, and fðaÞ ¼ g a ¼ ½q1 , a1 q1 , . . . , am q1 , . . . , qK , a1 qK , . . . , am qK > , where g ¼ [q1, . . ., qK]> and a ¼ [1, a1, a2, . . ., am]>; we consider the following estimator for the vector a that combines information over the K quantile estimates based on the restrictions aj ðtÞ ¼ aj Qet ðtÞ: e a ¼ arg min ðp fðaÞÞ> An ðp fðaÞÞ, a
(41.25)
a¼ where An is a (K(m + 1)) (K(m + 1)) positive definite matrix. Denoting e a0; . . . ; e ðe a m Þ, st can be estimated by e a0 þ st ¼ e
m X e a j utj : j¼1
In the second step, we perform a quantile regression of ut on > e Z t ¼ 1, e s t1 , . . . e s tp , jut1 j, . . . , utq by X et ; rt ut y> Z min (41.26) y
t
the two-step estimator of y(t)> ¼ (b0(t), b1(t), . . ., bp(t), g1(t), . . ., gq(t)) is then _
given by the solution of Eq. 41.26, y ðtÞ, and the t-th conditional quantile of ut can be estimated by ^ u ðtjF t1 Þ ¼ ^y ðtÞ> Z e t: Q t Iteration can be applied to the above procedure for further improvement.
1158
Z. Xiao et al.
Let e a ðtÞ be the solution of Eq. 41.24; then under appropriate assumptions, we have
a ðtÞ aðtÞ 2 ¼ Op ðm=nÞ, ke
(41.27)
and for any l 2 Rm+1, pffiffiffi > nl ðe a ðtÞ aðtÞÞ ) N ð0; 1Þ, sl 1 2 > 1 where s2l ¼ fe(F1 e (t)) l Dn ∑n(t)Dn l, and
"
# n n 1 X xt x> 1 X t xt xT Y 2 ðutt Þ, Dn ¼ , S n ð tÞ ¼ n t¼mþ1 st n t¼mþ1 t t where xt ¼ (1,jut1j, . . ., jutmj)>. Define 2 3 Qet ðt1 Þ @fðaÞ G¼ ¼ f_ ða0 Þ ¼ ½g J m ⋮I K a0 , g0 ¼ 4 5, @a> a¼a0 Q ðt Þ et
K
where g0 and a0 are the true values of vectors g ¼ [q1, . . ., qK]> and a ¼ [1, a1, a2, . . ., am]>, and 2
0 6 1 Jm ¼ 6 4⋮ 0
⋱
3 0 0 7 7 ⋮5 1
is an (m + 1) m matrix and IK is a K-dimensional identity matrix; under regularity assumptions, the minimum distance estimator e a solving (Eq. 41.25) has the following asymptotic representation:
1 pffiffiffi pffiffiffi nð^a a0 Þ ¼ G> An G G> An nðp pÞ þ op ð1Þ where 2 6 6 n 6 pffiffiffi 1 X 6 nðp pÞ ¼ pffiffiffi n t¼mþ1 6 6 4
D1 n xt
y ðutt Þ t1 1 f e F1 e ðt1 Þ
!3
7 7 7 7 ! 7 þ op ð1Þ, 7 ytm ðuttm Þ 5 x D1 t n 1 f e F e ð tm Þ
41
Quantile Regression and Value at Risk
1159
and the two-step estimator ^y ðtÞ based on Eq. 41.26 has asymptotic representation: ( ) pffiffiffi pffiffiffi^ 1 1 X 1 pffiffiffi Z t yt ðutt Þ þ O1 G nðe a aÞ þ op ð1Þ, n y ðtÞ yðtÞ ¼ 1 O n t f e Fe ðtÞ
where a ¼ [a1, a2, . . ., am]>, O ¼ E[ZtZt>/st], and Zt yk Ck , Ck ¼ E ðjutk1 j; . . . ; jutkm jÞ G¼ : st k¼1 p X
41.6
An Empirical Application
41.6.1 Data and the Empirical Model In this section, we apply the quantile regression method to five major world equity market indexes. The data used in our application are the weekly return series, from September 1976 to June 2008, of five major world equity market indexes: the US S&P 500 Composite Index, the Japanese Nikkei 225 Index, the UK FTSE 100 Index, the Hong Kong Hang Seng Index, and the Singapore Strait Times Index. The FTSE 100 Index data are from January 1984 to June 2008. Table 41.1 reports some summary statistics of the data. The mean weekly returns of the five indexes are all over 0.1 % per week, with the Hang Seng Index producing an average return of 0.23 % per week, an astonishing Table 41.1 Summary statistics of the data Mean Std. Dev. Max Min Skewness Excess kurtosis AC(1) AC(2) AC(3) AC(4) AC(5) AC(10)
S&P 500 0.0015 0.0199 0.1002 0.1566 0.4687 3.3494 0.0703 0.0508 0.0188 0.0039 0.0189 0.0446
Nikkei 225 0.0010 0.0253 0.1205 0.1289 0.2982 2.9958 0.0306 0.0665 0.0328 0.0418 0.0053 0.0712
FTSE 100 0.0017 0.0237 0.1307 0.2489 1.7105 12.867 0.0197 0.0916 0.0490 0.0202 0.0069 0.0138
Hang Seng 0.0023 0.0376 0.1592 0.5401 3.0124 9.8971 0.0891 0.0803 0.0171 0.0122 0.0386 0.0345
Singapore ST 0.0012 0.0291 0.1987 0.4551 1.5077 19.3154 0.0592 0.0081 0.0336 0.0099 0.0519 0.0227
This table shows the summary statistics for the weekly returns of five major equity indexes of the world. AC(k) denotes autocorrelation of order k. The source of the data is the online data service Datastream
1160
Z. Xiao et al.
increase in the index level over the sample period. In comparison, the average return of the Nikkei 225 Index is only 0.1 %. The Hang Seng's phenomenal rise does not come without risk. The weekly sample standard deviation of the index is 3.76 %, the highest of the five indexes. In addition, over the sample period the Hang Seng suffered four drops of more than 15 % in the weekly index level, with the maximum loss reaching 35 %, and there were 23 weekly returns below −10 %! As has been documented extensively in the literature, all five indexes display negative skewness and excess kurtosis. The excess kurtosis of the Singapore Strait Times Index reached 19.31, to a large extent driven by the huge 1-week loss of 47.47 % during the 1987 market crash. The autocorrelation coefficients for all five indexes are fairly small. The Hang Seng Index seems to display the strongest autocorrelation, with the AR(1) coefficient equal to 0.0891.

We consider an AR-linear ARCH model in the empirical analysis. Thus, the return process is modeled as

r_t = a_0 + a_1 r_{t-1} + \cdots + a_s r_{t-s} + u_t,    (41.28)

where u_t = \sigma_t \varepsilon_t, \sigma_t = \gamma_0 + \gamma_1 |u_{t-1}| + \cdots + \gamma_q |u_{t-q}|, and the \tau-th conditional VaR (conditional quantile) of u_t is given by

Q_{u_t}(\tau \mid \mathcal{F}_{t-1}) = \gamma(\tau)^\top Z_t, \qquad \gamma(\tau) = \big(\gamma_0(\tau), \gamma_1(\tau), \ldots, \gamma_q(\tau)\big)^\top, \quad Z_t = \big(1, |u_{t-1}|, \ldots, |u_{t-q}|\big)^\top.

For each time series, we first conduct model specification analysis and choose the appropriate lags for the mean equation and the quantile ARCH component. Based on the selected model, we use Eq. 41.28 to obtain a time series of residuals. The residuals are then used in the ARCH VaR estimation using a quantile regression.
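The two-step procedure just described can be sketched in a few lines of R, the language the chapter points to in its concluding section. The following is only a minimal illustration, not the authors' code: it assumes a numeric return vector r, uses the lag orders selected below for the S&P 500 (an AR(1) mean equation and an ARCH(7) quantile equation), and relies on rq() from the quantreg package.

library(quantreg)

tau <- 0.05   # VaR probability level
p   <- 1      # AR order of the mean equation (AR(1) selected for the S&P 500 below)
q   <- 7      # ARCH order of the quantile equation (ARCH(7) selected for the S&P 500 below)

## Step 1: fit the AR mean equation by least squares and keep the residuals u_t.
lags     <- embed(r, p + 1)                 # columns: r_t, r_{t-1}, ..., r_{t-p}
mean_fit <- lm(lags[, 1] ~ lags[, -1])
u        <- residuals(mean_fit)

## Step 2: quantile regression of u_t on lagged absolute residuals, i.e., the
## linear ARCH specification Q_{u_t}(tau | F_{t-1}) = gamma(tau)' Z_t.
Z      <- embed(abs(u), q + 1)[, -1, drop = FALSE]   # |u_{t-1}|, ..., |u_{t-q}|
u_t    <- u[(q + 1):length(u)]
qr_fit <- rq(u_t ~ Z, tau = tau)

## Fitted tau-quantiles of u_t; adding the fitted AR mean and flipping the sign
## gives the estimated VaR of the return expressed as a positive loss number.
var_u <- fitted(qr_fit)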
41.6.2 Model Specification Analysis

We conduct sequential tests for the significance of the coefficients on lags. The inference procedures we use here are asymptotic inferences. For estimation of the covariance matrix, we use the robust HAC (heteroskedasticity and autocorrelation consistent) covariance matrix estimator of Andrews (1991) with the data-dependent automatic bandwidth parameter estimator recommended in that paper. First of all, we choose the lag length in the autoregression,

r_t = a_0 + a_1 r_{t-1} + \cdots + a_s r_{t-s} + u_t,
using a sequential test of significance on lag coefficients. The maximum lag length that we start with is s = 9, and the procedure is repeated until a rejection occurs. Table 41.2 reports the sequential testing results for the S&P 500 Index. The t-statistics of all the coefficients are listed for nine rounds of the test. We see that the t-statistic of the coefficient with the maximum number of lags does not become significant until s = 1, the ninth round. The preferred model is an AR(1) model. The selected mean equations for all five indexes are reported in Table 41.4.

Our next task is to select the lag length in the ARCH effect

u_t = \big(\gamma_0 + \gamma_1 |u_{t-1}| + \cdots + \gamma_q |u_{t-q}|\big)\, \varepsilon_t.

Again, a sequential test is conducted. To calculate the t-statistic, we need to estimate \omega^2 = \tau(1-\tau)/f\big(F^{-1}(\tau)\big)^2. There are many studies on estimating f(F^{-1}(\tau)), including Siddiqui (1960), Bofinger (1975), Sheather and Maritz (1983), and Welsh (1987). Notice that

\frac{dF^{-1}(\tau)}{d\tau} = \frac{1}{f\big(F^{-1}(\tau)\big)};    (41.29)
following Siddiqui (1960), we may estimate (Eq. 41.29) by a simple difference quotient of the empirical quantile function. As a result,

\widehat{f\big(F^{-1}(\tau)\big)} = \frac{2 h_n}{\hat{F}^{-1}(\tau + h_n) - \hat{F}^{-1}(\tau - h_n)},    (41.30)
where \hat{F}^{-1}(\tau) is an estimate of F^{-1}(\tau) and h_n is a bandwidth which goes to zero as n \to \infty. A bandwidth choice has been suggested by Hall and Sheather (1988) based on an Edgeworth expansion for studentized quantiles. This bandwidth is of order n^{-1/3} and has the following representation:

h_{HS} = z_\alpha^{2/3} \left[ 1.5\, s(\tau)/s''(\tau) \right]^{1/3} n^{-1/3},

where z_\alpha satisfies \Phi(z_\alpha) = 1 - \alpha/2 for the construction of 1 - \alpha confidence intervals. In the absence of additional information, s(\tau) is evaluated using the normal density. (A short R sketch of this sparsity estimator is given at the end of this subsection.) Starting with q_{max} = 10, a sequential test was conducted, and results for the 5 % VaR model of the S&P 500 Index are reported in Table 41.3. We see that in the fourth round, the t-statistic on lag 7 becomes significant. The sequential test stops here, and it suggests that ARCH(7) is appropriate. Based on the model selection tests, we decide to use the AR(1)-ARCH(7) regression quantile model to estimate the 5 % VaR for the S&P 500 Index. We also conduct similar tests on the 5 % VaR models for the other four indexes. To conserve space, we do not report the entire testing process in the paper. Table 41.4 provides a summary of the selected models based on the tests. The mean equations
Table 41.2 VaR model mean specification test for the S&P 500 Index

Round   1st      2nd      3rd      4th      5th      6th      7th      8th      9th
a0      3.3460   3.3003   3.2846   3.3248   3.2219   3.7304   3.1723   3.0650   3.8125
a1      1.6941   1.7693   1.8249   1.9987   1.9996   2.0868   2.1536   2.097    2.2094
a2      1.2950   1.3464   1.1555   1.0776   0.0872   1.3106   1.2089   1.0016
a3      0.9235   0.9565   0.9774   1.5521   0.8123   0.8162   0.9553
a4      1.0414   1.0080   0.9947   1.0102   0.9899   0.1612
a5      0.7776   0.7642   0.7865   0.8288   0.7662
a6      0.2094   0.5362   0.7166   0.8931
a7      1.5594   1.5426   1.5233
a8      0.8926   0.8664
a9      0.3816

This table reports the test results for the VaR model mean equation specification for the S&P 500 Index. The number of lags in the AR component of the ARCH model is selected according to the sequential test. The table reports the t-statistic for the coefficient with the maximum lag in the mean equation
Table 41.3 5 % VaR model ARCH specification test for the S&P 500 Index

Round   1st      2nd      3rd      4th
g0      16.856   15.263   17.118   15.362
g1      2.9163   3.1891   3.2011   3.1106
g2      1.9601   2.658    2.533    2.321
g3      1.0982   1.0002   0.9951   1.0089
g4      0.6807   0.8954   1.1124   1.5811
g5      0.7456   0.8913   0.9016   0.9156
g6      0.3362   0.3456   0.4520   0.3795
g7      1.9868   2.0197   1.8145   2.1105
g8      0.4866   0.4688   1.5631
g9      1.2045   1.0108
g10     1.1326

This table reports the test results for the 5 % VaR model specification for the S&P 500 Index. The number of lags in the volatility component of the ARCH model is selected according to the test. The table reports the t-statistic for the coefficient with the maximum lag in the ARCH equation

Table 41.4 ARCH VaR models selected by the sequential test

Index          S&P 500   Nikkei 225   FTSE 100   Hang Seng   Singapore ST
Mean Lag       1         1            1          3           2
5 % ARCH Lag   6         7            6          6           7

This table summarizes the preferred ARCH VaR models for the five global market indexes. The number of lags in the mean equation and the volatility component of the ARCH model is selected according to the test
generally have one or two lags, except for the Hang Seng Index, which has a lag of 3 and displays a more persistent autoregressive effect. For the ARCH equations, at least six lags are needed for the indexes.
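As referenced above, the following is a minimal R sketch of the Siddiqui difference-quotient estimate of f(F^{-1}(\tau)) in Eq. 41.30, with the Hall and Sheather (1988) bandwidth evaluated under the normal reference density. It is an illustration under stated assumptions, not the authors' code: u is assumed to be the residual series from the mean equation; a comparable bandwidth rule is also available in the quantreg package (bandwidth.rq()).

sparsity_siddiqui <- function(u, tau, alpha = 0.05) {
  n  <- length(u)
  za <- qnorm(1 - alpha / 2)
  q  <- qnorm(tau)
  ## Hall-Sheather bandwidth h = z_a^{2/3} [1.5 s(tau)/s''(tau)]^{1/3} n^{-1/3},
  ## with s(tau)/s''(tau) = dnorm(q)^2 / (2 q^2 + 1) under the normal reference.
  h  <- za^(2/3) * (1.5 * dnorm(q)^2 / (2 * q^2 + 1))^(1/3) * n^(-1/3)
  ## Difference quotient of the empirical quantile function (Eq. 41.30).
  Fq <- quantile(u, probs = c(tau + h, tau - h), names = FALSE)
  2 * h / (Fq[1] - Fq[2])
}

## Example, with u taken from the fitted mean equation:
## f_hat <- sparsity_siddiqui(u, tau = 0.05)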
41.6.3 Estimated VaRs

The estimated parameters of the mean equations for all five indexes are reported in Table 41.5. The constant term for the five indexes is between 0.11 % for the Nikkei and 0.24 % for the Hang Seng. As suggested by Table 41.1, the Hang Seng seems to display the strongest autocorrelation, and this is reflected in the longer lag length chosen by the sequential test. Table 41.6 reports the estimated quantile regression ARCH parameters for the 5 % VaR model.

USA – S&P 500 Index. The estimated 5 % VaRs generally range between 2.5 % and 5 %, but during very volatile periods they could jump over 10 %, as happened in October 1987. During high-volatility periods, there is high variation in estimated VaRs.
Table 41.5 Estimated mean equation parameters

Parameter   S&P 500           Nikkei 225        FTSE 100          Hang Seng         Singapore ST
a0          0.0019 (0.0006)   0.0011 (0.0006)   0.0022 (0.0008)   0.0024 (0.001)    0.0014 (0.0009)
a1          0.0579 (0.0233)   0.0827 (0.0305)   0.0617 (0.0283)   0.1110 (0.0275)   0.0555 (0.0225)
a2                                                                0.0796 (0.0288)   0.0751 (0.0288)
a3                                                                0.0985 (0.0238)

This table reports the estimated parameters of the mean equation for the five global equity indexes. The standard errors are in parentheses

Table 41.6 Estimated ARCH equation parameters for the 5 % VaR model

Parameter   S&P 500           Nikkei 225        FTSE 100          Hang Seng         Singapore ST
g0          0.0351 (0.0016)   0.0421 (0.0023)   0.0346 (0.0013)   0.0646 (0.0031)   0.0428 (0.0027)
g1          0.2096 (0.0711)   0.0651 (0.0416)   0.0518 (0.0645)   0.1712 (0.0803)   0.1119 (0.0502)
g2          0.1007 (0.0531)   0.1896 (0.0415)   0.0588 (0.0665)   0.0922 (0.0314)   0.1389 (0.0593)
g3          0.0101 (0.0142)   0.1109 (0.0651)   0.0311 (0.0242)   0.2054 (0.0409)   0.0218 (0.0379)
g4          0.1466 (0.0908)   0.0528 (0.0375)   0.0589 (0.0776)   0.0671 (0.0321)   0.1102 (0.0903)
g5          0.0105 (0.0136)   0.0987 (0.0448)   0.0119 (0.0123)   0.0229 (0.0338)   0.1519 (0.0511)
g6          0.0318 (0.0117)   0.0155 (0.0297)   0.0876 (0.0412)   0.0359 (0.0136)   0.0311 (0.0215)
g7                            0.2323 (0.0451)                                       0.1123 (0.0517)

This table reports the estimated parameters of the ARCH equation for the 5 % VaR model for the five global indexes. The standard errors are in parentheses
Japan – Nikkei 225 Index. The estimated VaR series is quite stable and remains between the 4 % and 7 % levels from 1976 till 1982. Then the Nikkei 225 Index took off and appreciated about 450 % over the next 8 years, reaching its highest level at the end of 1989. This quick rise in stock value is accompanied by high risk, manifested here by the more volatile VaR series. In particular, the VaRs fluctuated dramatically, ranging from a low of 3 % to a high of 15 %. This volatility in VaR may reflect both an optimistic market outlook at times and worry about high valuation and the possibility of a market crash. That crash did come in 1990, and
10 years later, the Nikkei 225 Index still hovers at a level which is about half off the record high in 1989. The 1990s were far from a rewarding decade for investors in the Japanese equity market. The average weekly 5 % VaR is about 5 %, and the variation is also very high.

UK – FTSE 100 Index. The 5 % VaRs are very stable and average about 3 %. They stay very much within the 2–4 % band, except on a few occasions, such as the 1987 global market crash.

Hong Kong – Hang Seng Index. The Hang Seng Index produces an average return of 0.23 % per week. The Hang Seng's phenomenal rise does not come without risk. We mentioned above that the weekly sample standard deviation of the index is 3.76 %, the highest of the five indexes. In addition, the Hong Kong stock market has had more than its fair share of market crashes.

Singapore – Strait Times Index. Interestingly, the estimated VaRs display a pattern very similar to that of the UK FTSE 100 Index, although the former is generally larger than the latter. The higher risk in the Singapore market did not result in higher return over the sample period. Among the five indexes, the Singapore market suffered the largest loss during the 1987 crash, a 47.5 % drop in a week. The market has since recovered much of the loss. Over this period, the Singapore market outperformed only the Nikkei 225 Index.
41.6.4 Performance of the ARCH Quantile Regression Model

In this section we conduct an empirical analysis to compare the VaRs estimated by the regression quantile method with those estimated by RiskMetrics and by volatility models under the conditional normality assumption. There is extensive empirical evidence supporting the use of GARCH models in conditional volatility estimation; Bollerslev et al. (1992) provide a nice overview of the issue. Therefore, we compare VaRs estimated by RiskMetrics, by a GARCH(1,1) model, and by the ARCH-based quantile regression. To measure the relative performance more accurately, we compute the percentage of realized returns that are below the negative estimated VaRs. The results are reported in Table 41.7. The top panel of the table presents the percentages for the VaRs estimated by the ARCH quantile regression model, the middle panel for the VaRs estimated by the GARCH model with the conditional normal return distribution assumption, and the bottom panel for the VaRs estimated by the RiskMetrics method. We estimate VaRs using these methods at the 1 %, 2 %, 5 %, and 10 % levels, a total of four percentage levels. The regression quantile method produces the closest percentages in general. Both the RiskMetrics method and the GARCH method seem to underestimate VaRs for the smaller percentages and overestimate VaRs for the larger percentages. The five indexes we analyzed are quite different in their risk characteristics, as discussed above. The quantile regression approach seems to be relatively robust and can consistently produce reasonably good estimates of the VaRs at different
Table 41.7 VaR model performance comparison

% VaR                  1 %      2 %      5 %       10 %
Quantile regression
  S&P 500              1.319    1.925    5.3108    9.656
  Nikkei 225           1.350    2.011    5.7210    10.56
  FTSE 100             0.714    1.867    5.6019    9.016
  Hang Seng            0.799    2.113    4.9011    9.289
GARCH
  S&P 500              1.3996   1.7641   4.0114    7.6151
  Nikkei 225           1.4974   1.7927   4.3676    8.4098
  FTSE 100             1.1980   1.6133   3.3891    6.7717
  Hang Seng            1.8962   2.8658   3.6653    7.6439
RiskMetrics
  S&P 500              0.3790   0.5199   1.1180    3.2563
  Nikkei 225           0.5877   0.9814   1.358     4.1367
  FTSE 100             0.2979   0.5796   0.9984    3.5625
  Hang Seng            0.7798   0.9822   1.4212    4.1936

This table reports the coverage ratios, i.e., the percentage of realized returns that is below the estimated VaRs. The top panel reports the performance of the VaRs estimated by the quantile regression model. The middle panel reports the results for VaRs estimated by the GARCH model based on the conditionally normal return distribution assumption. The bottom panel reports the results for VaRs estimated by the RiskMetrics method
percentage (probability) levels. The GARCH model with the normality assumption, although a good volatility model, is not able to produce good VaR estimates. The quantile regression model does not assume normality and is well suited to handle negative skewness and heavy tails.
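The coverage ratios in Table 41.7 are straightforward to compute once a VaR series is in hand. The following is a minimal R sketch with hypothetical vector names (ret for realized returns and VaR for the matching positive VaR estimates); it is an illustration, not the authors' code.

## Share of realized returns falling below the negative of the estimated VaR,
## expressed in percent as in Table 41.7.
coverage_ratio <- function(ret, VaR) 100 * mean(ret < -VaR)

## For a well-calibrated 5 % VaR model the ratio should be close to 5:
## coverage_ratio(ret, VaR_5pct)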
41.7
Conclusion
Quantile regression provides a convenient and powerful method of estimating VaR. The quantile regression approach not only provides a method of estimating the conditional quantiles (VaRs) of existing time-series models; it also substantially expands the modeling options for time-series analysis. Estimating Value at Risk using the quantile regression does not assume a particular conditional distribution for the returns. Numerical evidence indicates that the quantile-based methods have better performance than the traditional J. P. Morgan’s RiskMetrics method and other methods based on normality. The quantile regression based method provides an important tool in risk management. There are several existing programs for quantile regression applications. For example, both parametric and nonparametric quantile regression estimations can be implemented by the function rq() and rqss() in the package quantreg in the computing language R, and SAS now has a suite of procedures modeled closely on the functionality of the R package quantreg.
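For concreteness, here is an illustrative call of the quantreg functions named above, with hypothetical variables y and x; it is a sketch rather than part of the chapter's estimation code.

library(quantreg)
fit <- rq(y ~ x, tau = c(0.01, 0.02, 0.05, 0.10))   # linear quantile regression at several tau
summary(fit)
## fit_np <- rqss(y ~ qss(x), tau = 0.05)            # nonparametric fit via rqss()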
References

Andrews, D. W. K. (1991). Heteroskedasticity and autocorrelation consistent covariance matrix estimation. Econometrica, 59, 817–858.
Barrodale, I., & Roberts, F. D. K. (1974). Solution of an overdetermined system of equations in the l1 norm. Communications of the ACM, 17, 319–320.
Blanley, A., Lamb, R., & Schroeder, R. (2000). Compliance with SEC disclosure requirements about market risk. Journal of Derivatives, 7, 39–50.
Bofinger, E. (1975). Estimation of a density function using order statistics. Australian Journal of Statistics, 17, 1–7.
Bollerslev, T., Chou, R. Y., & Kroner, K. F. (1992). ARCH modeling in finance. Journal of Econometrics, 52, 5–59.
Bouyé, E., & Salmon, M. (2008). Dynamic copula quantile regressions and tail area dynamic dependence in forex markets [Manuscript]. Coventry: Financial Econometrics Research Centre, Warwick Business School.
Chen, X., Koenker, R., & Xiao, Z. (2009). Copula-based nonlinear quantile autoregression. The Econometrics Journal, 12, 50–67.
Dowd, K. (1998). Beyond value at risk: The new science of risk management. England: Wiley.
Dowd, K. (2000). Assessing VaR accuracy. Derivatives Quarterly, 6, 61–63.
Duffie, D., & Pan, J. (1997). An overview of value at risk. Journal of Derivatives, 4, 7–49.
Engle, R. F., & Manganelli, S. (1999). CAViaR: Conditional autoregressive value at risk by regression quantiles (Working paper). San Diego: University of California.
Engle, R. F., & Manganelli, S. (2004). CAViaR: Conditional autoregressive value at risk by regression quantiles (Working paper). San Diego: University of California.
Guo, H., Wu, G., & Xiao, Z. (2007). Estimating value at risk for defaultable bond portfolios by regression quantile. Journal of Risk Finance, 8(2), 166–185.
Hall, P., & Sheather, S. J. (1988). On the distribution of a studentized quantile. Journal of the Royal Statistical Society B, 50, 381–391.
Koenker, R., & Bassett, G. (1978). Regression quantiles. Econometrica, 46, 33–50.
Koenker, R., & d'Orey, V. (1987). Computing regression quantiles. Applied Statistics, 36, 383–393.
Koenker, R., & Xiao, Z. (2006). Quantile autoregression. Journal of the American Statistical Association, 101(475), 980–1006.
Koenker, R., & Zhao, Q. (1996). Conditional quantile estimation and inference for ARCH models. Econometric Theory, 12, 793–813.
Portnoy, S. L., & Koenker, R. (1997). The Gaussian hare and the Laplacian tortoise: Computability of squared-error vs. absolute-error estimators. Statistical Science, 12, 279–300.
Sarma, M., Thomas, S., & Shah, A. (2000). Performance evaluation of alternative VaR models (Working paper). Indira Gandhi Institute of Development Research.
Saunders, A. (1999). Financial institutions management: A modern perspective (Irwin series in finance). New York: McGraw-Hill.
Sheather, S. J., & Maritz, J. S. (1983). An estimate of the asymptotic standard error of the sample median. Australian Journal of Statistics, 25, 109–122.
Siddiqui, M. (1960). Distribution of quantiles in samples from a bivariate population. Journal of Research of the National Bureau of Standards, 64B, 145–150.
Welsh, A. (1987). Asymptotically efficient estimation of the sparsity function at a point. Statistics and Probability Letters, 6, 427–432.
Wu, G., & Xiao, Z. (2002). An analysis of risk measures. Journal of Risk, 4(4), 53–75.
Xiao, Z., & Koenker, R. (2009). Conditional quantile estimation and inference for GARCH models. Journal of the American Statistical Association, 104(488), 1696–1712.
42 Earnings Quality and Board Structure: Evidence from South East Asia

Kin-Wai Lee
Division of Accounting, Nanyang Business School, Nanyang Technological University, Singapore
e-mail: [email protected]
Contents
42.1 Introduction ................................................................ 1170
42.2 Prior Research and Hypotheses Development .................................. 1174
     42.2.1 Equity Ownership of Outside Directors ............................... 1174
     42.2.2 Board Independence .................................................. 1175
     42.2.3 Equity-Based Compensation of Outside Directors and Control Divergence 1176
     42.2.4 Board Independence and Control Divergence ........................... 1176
42.3 Data ........................................................................ 1177
42.4 Results ..................................................................... 1177
     42.4.1 Descriptive Statistics .............................................. 1177
     42.4.2 Discretionary Accruals .............................................. 1178
     42.4.3 Accrual Quality ..................................................... 1181
     42.4.4 Earnings Informativeness ............................................ 1182
     42.4.5 Robustness Tests .................................................... 1183
42.5 Conclusion .................................................................. 1184
Appendix 1: Discretionary Accruals .............................................. 1184
Appendix 2: Accruals Quality .................................................... 1186
Appendix 3: Earnings Informativeness ............................................ 1188
Appendix 4: Adjusting for Standard Errors in Panel Data ......................... 1189
References ...................................................................... 1192
Abstract
Using a sample of listed firms in Southeast Asian countries, this paper examines how board structure and corporate ownership structure interact in affecting earnings quality. I find that the negative association between separation
of control rights from cash flow rights and earnings quality varies systematically with board structure. I find that the negative association between separation of control rights from cash flow rights and earnings quality is less pronounced in firms with high equity ownership by outside directors. I also document that in firms with high separation of control rights from cash flow rights, those firms with a higher proportion of outside directors on the board have higher earnings quality. Overall, my results suggest that outside directors' equity ownership and board independence are associated with better financial reporting outcomes, especially in firms with high expected agency costs arising from misalignment of control rights and cash flow rights. The econometric method employed is regressions of panel data. In a panel data setting, I address both cross-sectional and time-series dependence. Gow et al. (2010, The Accounting Review 85(2), 483–512) find that in the presence of both cross-sectional and time-series dependence, the two-way clustering method, which allows for both cross-sectional and time-series dependence, produces well-specified test statistics. Following Gow et al. (2010), I employ the two-way clustering method, where the standard errors are clustered by both firm and year, in my regressions of panel data. Johnston and DiNardo (1997, Econometric Methods. New York: McGraw-Hill) and Greene (2000, Econometric Analysis. Upper Saddle River: Prentice-Hall) are two econometric textbooks that contain a detailed discussion of the econometric issues relating to panel data.

Keywords
Earnings quality • Board structure • Corporate ownership structure • Panel data regressions • Cross-sectional and time-series dependence • Two-way clustering method of standard errors
42.1
Introduction
In Asia, corporate ownership concentration is high, and many listed firms are mainly controlled by a single large shareholder (La Porta et al. 1999; Claessens et al. 2000). Asian firms also show a high divergence between control rights and cash flow rights, which allows the largest shareholder to control a firm’s operations with a relatively small direct stake in its cash flow rights. Control is often increased beyond ownership stakes through pyramid structures, cross-holdings among firms, and dual class shares (Claessens et al. 2000). It is argued that concentrated ownership facilitates transactions in weak property rights environment by providing the controlling shareholders the power and incentive to negotiate and enforce contracts with various stakeholders (Shleifer and Vishny 1997). As a result of concentrated ownership, the main agency problem in listed firms in Asia is the conflict of interest between the controlling shareholder and minority shareholder. Specifically, controlling shareholder has incentives to
expropriate the wealth of minority shareholders by engaging in rent-seeking activities and to mask their private benefits of control by supplying low-quality financial accounting information. Empirical evidence also shows that the quality and credibility of financial accounting information are lower in firms with high separation of control rights and cash flow rights (Fan and Wong 2002; Haw et al. 2004). An important question is how effective are corporate governance mechanisms in mitigating the agency problems in Asia firms, especially in improving corporate transparency in firms with concentrated ownership. Controlling shareholders in Asia typically face limited disciplinary pressures from the market for corporate control because hostile takeovers are infrequent (La Porta et al. 1999; Fan and Wong 2002). Furthermore, controlling shareholders face little monitoring pressure from analysts because analysts are less likely to follow firms with potential incentives to withhold or manipulate information, such as when the family/ management group is the largest control rights blockholder (Lang et al. 2004). In these environments, external corporate governance mechanisms, in particular the market for corporate control and analysts’ scrutiny, exert limited disciplinary pressure on controlling shareholders. Consequently, internal corporate governance mechanisms such as the board of directors may be important to mitigate the agency costs associated with the ownership structure of Asian firms. Thus, the primary research questions in this paper are: (1) Do board of directors play a corporate governance role over the financial reporting process in listed firms in Asia? (2) How does the board of director affect financial reporting quality in firms with high expected agency costs arising from the separation of control rights and cash flow rights? Specifically, this paper examines the relation among outside directors’ equity ownership, board independence, and separation of control rights from cash flow rights of controlling shareholder in affecting earnings quality. My empirical strategy is as follows: First, I examine the main effect between earnings quality and (i) outside directors’ equity ownership, (ii) the proportion of outside directors on the board, and (iii) the separation of control rights from cash flow rights of the largest ultimate shareholder. This sheds light on my first research question on whether the board of directors plays a corporate governance role over the financial reporting process in listed firms in Asia. Second, I examine the (i) interaction between outside directors’ equity ownership and the separation of control rights from cash flow rights and (ii) interaction between the proportion of outside directors on the board and the separation of control rights from cash flow rights, in shaping earnings quality. This addresses the second research question on the effect of board structure (in particular, board independence and equity ownership of outside directors) on financial reporting quality in firms with high expected agency costs arising from the separation of control rights and cash flow rights. In this paper, I focus on two important attributes of board monitoring – outside directors’ equity ownership and board independence – and their association with financial reporting quality. These attributes are important for two reasons.
First, prior research generally finds that in developed economies such as the United States and the United Kingdom, there is a positive association between board independence and earnings quality (Dechow et al. 1996; Klein 2002; Peasnell et al. 2005). However, there is limited evidence on the effect of board independence on the financial accounting process in Asia. My paper attempts to fill this gap. Second, recent research on the monitoring incentives of the board suggests that equity ownership of outside directors plays an important role in mitigating managerial entrenchment (Perry 2000; Ryan and Wiggins 2004). An implication of this stream of research is that even in firms with high board independence, entrenched managers can weaken the monitoring incentives of outside directors by reducing their equity-based compensation. In other words, board independence that is not properly augmented with incentive compensation may hamper the monitoring effectiveness of independent directors over management. My sample consists of 2,875 firm-year observations during the period 2004–2008 in five Asian countries comprising Indonesia, Malaysia, the Philippines, Singapore, and Thailand. These countries provide a good setting to test the governance potential of the board of directors because shareholders in these countries typically suffer from misaligned managerial incentives, ineffective legal protection, and underdeveloped markets for corporate control (La Porta et al. 1999; Claessens et al. 2000; Fan and Wong 2002). I measure earnings quality with three financial reporting metrics: (i) discretionary accruals, (ii) mapping of accruals to cash flow, and (iii) informativeness of reported earnings. My results are robust across alternative earnings quality metrics. I find that earnings quality is higher when outside directors have higher equity ownership. This result suggests that internal monitoring of the quality and credibility of accounting information is improved through aligning shareholders’ and directors’ incentives. Consistent with the monitoring role of outside directors (Fama and Jensen 1983), I also find that earnings quality is positively associated with the proportion of outside directors on the board. This result supports the notion that outside directors have incentives to be effective monitors in order to maintain the value of their reputational capital. Consistent with prior studies (Fan and Wong 2002; Haw et al. 2004), I also document that earnings quality is negatively associated with the separation of control rights from cash flow rights of the largest ultimate shareholder. More importantly, I document that the negative association between separation of control rights from cash flow rights and earnings quality is less pronounced in firms with high equity ownership by outside directors. This result suggests equity ownership improves the incentives and monitoring intensity of outside directors in firms with high expected agency costs arising from the divergence of control rights from cash flow rights. Furthermore, my result indicates the negative association between separation of control rights from cash flow rights and earnings quality is mitigated by the higher proportion of outside directors on the board. This result provides evidence supporting the corporate governance role of outside directors in
constraining managerial discretion over financial accounting process in firms with high levels of misalignment between control rights and cash flow rights. Collectively, my results suggest that strong internal governance structures can alleviate agency problems between the controlling shareholder and minority shareholders. More generally, my results highlight the interplay between board structure and corporate ownership structure in shaping earnings quality. I perform several robustness tests. My results are robust across different economies. In addition, year-by-year regressions yield qualitatively similar results, suggesting my inferences are not time-period specific. I also include additional country-level institutional variables such as legal origin, country investor protection, and enforcement of shareholder rights. My results are qualitatively similar. Specifically, after controlling for country-level legal institutions, firm-specific internal governance mechanisms, namely – outside directors’ equity ownership and board independence – continue to be important in mitigating the negative effects of the divergence between control rights and cash flow rights on earnings quality. My study has several contributions. First, prior studies find that the divergence of control rights from cash flow rights reduces the informativeness of reported earnings (Fan and Wong 2002) and induces earnings management (Haw et al. 2004). I extend these studies by demonstrating two specific channels at the firm level – equity ownership by outside directors and proportion of outside directors on the board – that mitigate the negative association between earnings quality and divergence of control rights from cash flow rights. This result suggests that the board of directors play an important corporate governance role to alleviate agency problems in firms with entrenched insiders. My findings also complement Fan and Wong’s (2005) result that given concentrated ownership, a controlling owner may introduce some monitoring or bonding mechanisms that limit his ability to expropriate minority shareholders and hence mitigate agency conflicts. In the Fan and Wong’s study, high-quality external auditors alleviate agency problems in firms with concentrated ownership, whereas in my study, strong board of directors augmented with proper monitoring incentives mitigate agency problems in firms with concentrated ownership. Second, my results suggest that there is an incremental role for firm-specific internal governance mechanisms, beyond country-level institutions, in improving the quality of financial information. Haw et al. (2004) find that earnings management that is induced by the divergence between control rights and cash flow rights is less pronounced in countries where (i) legal institutions protect minority shareholder rights (such as legal tradition, minority shareholder rights, efficiency of judicial system, or disclosure system) and (ii) in countries with effective extralegal institutions (such as the effectiveness of competition law, diffusion of the press, and tax compliance). My study shows that after controlling for both country-level legal and extralegal institutions, firm-specific internal governance mechanisms, namely, outside directors’ incentive compensation and board independence, continue to be
important in constraining management opportunism over the financial reporting process in firms with high expected agency costs arising from the divergence between control rights and cash flow rights. To the extent that changes in country-level legal institutions are relatively more costly and more difficult than changes in firm-level governance mechanisms, my result suggests that improvement in firm-specific governance mechanisms can be effective to reduce private benefits of control. My results complement finding in prior studies (Johnson et al. 2000; La Porta et al. 1998; Lang et al. 2004) that firms in countries with weak legal protection substitute with strong firm-level internal governance mechanisms to attract investors. My results also extend the finding in Leuz et al. (2003) that firms located in countries with weaker investor protection have higher earnings management. An important question is what factors may constrain managerial opportunism when country-level investor protection is weak? Because my sample consists of countries with generally weak investor protection, I shed light on this question by documenting that firm-level governance structures matter in improving earnings quality in countries with weak investor protection. The rest of the paper proceeds as follows. Section 42.2 develops the hypotheses and places my paper in the context of related research. Section 42.3 describes the sample and method. Section 42.4 presents my results. I conclude the paper in Sect. 42.5.
42.2
Prior Research and Hypotheses Development
42.2.1 Equity Ownership of Outside Directors Recent research examines the compensation structure of outside directors, who play an important monitoring role over management’s actions. The central theme in this body of research is that incentive compensation leading to share ownership improves the outside directors’ incentives to monitor. Mehran (1995) finds firm performance is positively associated with the proportion of directors’ equity-based compensation. Perry (2000) finds that the likelihood of CEO turnover following poor performance increases when directors receive higher equity-based compensation. Shivdasani (1993) finds that probability of a hostile takeover is negatively associated with the percentage of shares owned by outside directors in target firms. He interprets this finding as suggesting that board monitoring may substitute for monitoring from the market of corporate control. Hermalin and Weisbach (1988) and Gillette et al. (2003) develop models where incentive compensation for directors increases their monitoring efforts and effectiveness. Ryan and Wiggins (2004) find that directors in firms with entrenched CEOs receive a significantly smaller proportion of compensation in the form of equity-based awards. Their result suggests that entrenched CEOs use their position to influence directors’
compensation, which results in contracts that provide directors with weaker incentives to monitor management. Internal monitoring is improved through aligning shareholders’ and directors’ incentives. If higher equity-based compensation contracts provide outside directors with stronger incentives to act in the interests of shareholders, I predict that managerial opportunism over the financial reporting process is reduced when outside directors have higher equity-based compensation. My first hypothesis is: H1 Earnings quality is positively associated with the outside directors’ equity
ownership.
42.2.2 Board Independence There is considerable literature on the role of outside directors in reducing agency problems between managers and shareholders. Fama and Jensen (1983) argue that outside directors have strong incentives to be effective monitors in order to maintain their reputational capital. Prior studies support the notion that board effectiveness in protecting shareholders’ wealth is positively associated with the proportion of outside directors on the board (Weisbach 1988; Rosenstein and Wyatt 1990). In the United States, Klein (2002) finds that firms with high proportion of outside directors on the board have lower discretionary accruals. Using a sample of listed firms in the United Kingdom, Peasnell et al. (2005) document that the greater the board independence, the lower the propensity of managers making income-increasing discretionary accruals to avoid reporting losses and earnings reductions. Using US firms subjected to SEC enforcement action for alleged earnings manipulation, Dechow et al. (1996) and Beasley (1996) find that the probability of financial reporting fraud is negatively associated with the proportion of outside directors on the board. In contrast, in emerging markets, conventional wisdom suggests that the agency conflicts between controlling owners and the minority shareholders may be difficult to mitigate through conventional corporate control mechanisms such as boards of directors (La Porta et al. 1998; Claessens et al. 2000; Fan and Wong 2005; Lee 2007; Lee et al. 2009). However, since the Asian economic crisis in 1997, many countries in Asia took steps to improve their corporate governance environment such as implementing country-specific code of corporate governance. For example, the Stock Exchange of Thailand Code of Best Practice for Directors of Listed Companies was implemented in 1998, the Code of Proper Practices for Directors for the Philippines was implemented in 2000, the Malaysian Code of Corporate Governance was implemented in 2000, and the Singapore Code of Corporate Governance was implemented in 2001. Among the key provisions of the code of corporate governance in these countries is the recommendation to have sufficient independent directors on the board to improve monitoring of management. For example, the 2001 Code of Corporate Governance for Singapore stated that there should be a strong and independent element on the board, which is able to exercise objective judgment on corporate affairs independently from management.
Although compliance with the code of corporate governance is not legally mandatory, listed companies are required to explain deviations from the recommendations of the code of corporate governance.1 I posit that in the waves of corporate governance reform in emerging markets in the early 2000s and the guidelines of country-specific code of corporate governance in emphasizing the importance of board independence, there is heightened awareness among outside directors on their increased monitoring responsibilities. To the extent that outside directors in listed firms in Asia perform a corporate governance role, I predict that: H2 Earnings quality is positively associated with the proportion of outside direc-
tors on the board.
42.2.3 Equity-Based Compensation of Outside Directors and Control Divergence The preceding discussion suggests that higher equity-based incentive compensation for outside directors improves their monitoring efforts. Greater monitoring from outside directors reduces managerial discretion over the financial reporting process (Lee et al. 2008). The benefits of more effective monitoring arising from higher equity ownership are likely to be concentrated in firms with high agency problems arising from the separation of control rights from cash flow rights. Thus, I predict that: H3 The negative association between separation of control rights from cash flow
rights and earnings quality is less pronounced in firms with high equity ownership by outside directors.
42.2.4 Board Independence and Control Divergence

Outside directors play an important corporate governance role in resolving agency problems between managers and shareholders. Following prior studies (Beasley 1996; Dechow et al. 1996; Klein 2002; Lee et al. 2007), a higher proportion of outside directors on the board is associated with greater constraints on management discretion over the financial reporting process. I extend this notion to posit that the greater monitoring efforts from a high proportion of outside directors on the board are likely to mitigate the negative effects of the separation of control rights from cash flow rights on earnings quality. Thus, I predict that:

H4 The negative association between separation of control rights from cash flow rights and earnings quality is mitigated by the proportion of outside directors.

1 To illustrate, the Singapore Exchange Listing Rules require "listed companies to describe in the annual reports their corporate governance practices with specific reference to the principles of the Code, as well as disclose and explain any deviation from any guideline of the Code. Companies are also encouraged to make a positive confirmation at the start of the corporate governance section of the annual report that they have adhered to the principles and guidelines of the Code, or specify each area of non-compliance. Many of these guidelines are recommendations for companies to disclose their corporate governance arrangements."
42.3
Data
I begin with the Worldscope database to identify listed firms in five Asian countries comprising Indonesia, Malaysia, the Philippines, Singapore, and Thailand during the period 2004–2008. I exclude financial institutions because of their unique financial structure and regulatory requirements. I eliminate observations with extreme values of control variables such as return-on-assets and leverage. I obtain stock price data from the Datastream database. I obtain annual reports for the period 2004–2008 from the Global Report database and company websites. The sample consists of 617 firms for 2,875 firm-year observations during the period 2004–2008 in five Asian countries comprising Indonesia, Malaysia, the Philippines, Singapore, and Thailand. I collect data on board characteristics such as board size, the number of independent directors, and equity ownership of directors from the annual report. I also examine the annual report to trace the ultimate owners of the firms. The procedure of identifying ultimate owners is similar to the one used in La Porta et al. (1999).2 In this study, I measure earnings quality with three financial reporting metrics: (i) discretionary accruals, (ii) mapping of accruals to cash flow, and (iii) informativeness of reported earnings. Appendix 1 contains a detailed description of the econometric method.

2 In summary, an ultimate owner is defined as the shareholder who has the determining voting rights of the company and who is not controlled by anyone else. If a company does not have an ultimate owner, it is classified as widely held. To economize on the data collection task, the ultimate owner's voting right level is set at 50 % and not traced any further once that level exceeds 50 %. Although a company can have more than one ultimate owner, we focus on the largest ultimate owner. We also identify the cash flow rights of the ultimate owners. To facilitate the measurement of the separation of cash flow and voting rights, the maximum cash flow rights level associated with any ultimate owner is also set at 50 %. However, there is no minimum cutoff level for cash flow rights.
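The regressions in the next section report t-statistics based on standard errors clustered by both firm and year, as described in the abstract and in Appendix 4. The following is a minimal R sketch of such two-way clustering in the spirit of Petersen (2009) and Gow et al. (2010): cluster by firm, cluster by year, and subtract the firm-year intersection. The data frame panel and its columns (discac, ebc, outdir, vote, firm, year) are assumed names for illustration; this is not the author's code, and control variables are omitted for brevity.

library(sandwich)
library(lmtest)

m <- lm(discac ~ ebc + outdir + vote, data = panel)

V_firm <- vcovCL(m, cluster = panel$firm)                            # clustered by firm
V_year <- vcovCL(m, cluster = panel$year)                            # clustered by year
V_int  <- vcovCL(m, cluster = interaction(panel$firm, panel$year))   # firm-year cells
V_2way <- V_firm + V_year - V_int                                    # two-way clustered covariance

coeftest(m, vcov = V_2way)   # t-statistics with firm-and-year clustered standard errors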
42.4
Results
42.4.1 Descriptive Statistics

Table 42.1 presents the descriptive statistics. Mean absolute discretionary accrual as a proportion of lagged assets is 0.062. Mean equity ownership of outside directors (computed as common stock and stock options held by outside directors divided by
Table 42.1 Descriptive statistics. The sample consists of 617 firms for 2,875 firm-year observations during the period 2004–2008 in five Asian countries comprising Indonesia, Malaysia, the Philippines, Singapore, and Thailand

             Mean     25th percentile   Median   75th percentile   Standard deviation
DISCAC       0.062    0.009             0.038    0.085             0.053
AQ           0.068    0.029             0.035    0.063             0.037
EBC (%)      2.041    0.837             1.752    2.663             1.035
OUTDIR       0.489    0.206             0.385    0.520             0.217
VOTE         1.198    1.000             1.175    1.326             0.638
BOARDSIZE    7        5                 8        10                3
CEODUAL      0.405    0                 0        1                 –
LNASSET      11.722   9.867             11.993   13.078            2.115
MB           1.851    0.582             1.272    2.195             0.833
LEV          0.261    0.093             0.211    0.335             0.106
ROA          0.086    0.027             0.0705   0.109             0.071

DISCAC = absolute value of discretionary accruals estimated based on the modified Jones model
AQ = accrual quality measured by Dechow and Dichev's (2002) measure of mapping of accruals to past, present, and future cash from operations
DIROWN = common stock and stock options held by outside directors divided by number of ordinary shares outstanding in the firm
OUTDIR = proportion of outside directors on the board
VOTECASH = voting rights divided by cash flow rights of the largest controlling shareholder
CEODUAL = a dummy variable that equals 1 if the CEO is chairman of the board and 0 otherwise
BOARDSIZE = number of directors on the board
LNASSET = natural logarithm of total assets
MB = market value of equity divided by book value of equity
LEV = long-term debt divided by total assets
ROA = net profit after tax divided by total assets
number of ordinary shares outstanding in the firm) is 2.04 %. The mean board size and proportion of outside directors on the board are 7 and 0.489, respectively. CEO chairs the board in 40 % of the firms. Consistent with Fan and Wong’s (2002) study of East Asian economies, the firms in my sample also have high divergence of control rights from cash flow rights (mean VOTE ¼ 1.198).
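The DISCAC measure summarized above is based on the modified Jones model, whose detailed estimation is described in Appendix 1. As a rough illustration only (hypothetical column names, not the author's code), the cross-sectional regression behind it can be sketched in R as follows; in practice the model is estimated within industry-year (and country) groups.

## Total accruals scaled by lagged assets regressed on the inverse of lagged assets,
## the change in revenue less the change in receivables, and gross PPE (both scaled
## by lagged assets); unsigned discretionary accruals are the absolute residuals.
jones_fit <- lm(ta_scaled ~ 0 + inv_lag_assets + d_rev_less_rec + ppe_scaled,
                data = firms)
discac <- abs(residuals(jones_fit))   # assumes no missing values in the inputs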
42.4.2 Discretionary Accruals

Table 42.2 presents the estimates of regressions of unsigned discretionary accruals on equity-based compensation, the proportion of outside directors on the board, and the separation of control rights from cash flow rights. Following Gow et al. (2010), I employ the two-way clustering method where the standard errors are clustered by both firm and year in my regressions. In column (1), I document a negative association between the absolute value of discretionary accruals and the equity ownership of outside directors. Results also indicate that firms with a higher proportion of outside directors on the board have lower discretionary accruals. I find that
Table 42.2 Regressions of unsigned discretionary accruals. The sample consists of 617 firms for 2,875 firm-year observations during the period 2004–2008 in five Asian countries comprising Indonesia, Malaysia, the Philippines, Singapore, and Thailand. The dependent variable is absolute discretionary accruals computed based on the modified Jones model. All variables are defined in Table 42.1. The t-statistics (in parentheses) are adjusted based on standard errors clustered by firm and year (Petersen 2009). The symbols *, **, and *** denote statistical significance at the 10 %, 5 %, and 1 % levels (two-tailed), respectively

                  (1)                  (2)
EBC               0.2513 (3.07)***     0.2142 (2.86)***
OUTDIR            0.1862 (2.41)**      0.1053 (2.19)**
VOTE              0.8173 (2.85)***     0.9254 (2.94)***
VOTE*EBC                               0.4160 (2.73)***
VOTE*OUTDIR                            0.1732 (2.29)**
BOARDSIZE         0.0359 (1.57)        0.0817 (1.42)
CEODUAL           0.4192 (1.61)        0.2069 (1.45)
LNASSET           0.5311 (8.93)***     0.5028 (8.01)***
MB                0.1052 (4.25)***     0.2103 (3.02)***
LEV               1.2186 (5.38)***     1.1185 (5.19)***
ROA               3.877 (4.83)***      4.108 (4.72)***
Adjusted R2       12.5 %               14.1 %
earnings management (as proxied by absolute discretionary accruals) increases as the separation between control rights and cash flow rights of controlling shareholders increases. This result is consistent with the finding in Haw et al. (2004). In column (2), I test whether the positive association between discretionary accruals and the separation between control rights and cash flow rights of controlling shareholder is mitigated by the equity ownership of outside directors. The coefficient on the interaction term between the separation of control rights from cash flow rights and the equity ownership of outside directors (VOTE* DIROWN) is negative and significant at the 1 % level, supporting the hypothesis that in firms with high separation of control rights from cash flow rights, earnings management is reduced when outside directors have higher equity ownership. This finding suggests that greater equity-based compensation increases the monitoring effectiveness of outside directors over the financial reporting process in firms with agency conflicts arising from their control rights from cash flow rights. In addition, the coefficient on the interaction term between the separation of control rights from cash flow rights and the proportion of outside directors (VOTE*OUTDIR) is negative and significant at the 5 % level, supporting the hypothesis that in firms with high separation of control rights from cash flow rights, earnings management is reduced in firms with high proportion of outside directors on the board. This result is consistent with the monitoring role of independent directors to improve the credibility of accounting information in firms with agency problems arising from their concentrated corporate ownership structure.
Table 42.3 Regressions of signed discretionary accruals. The sample consists of 617 firms for 2,875 firm-year observations during the period 2004–2008 in five Asian countries comprising Indonesia, Malaysia, the Philippines, Singapore, and Thailand. In column (1), the sample consists of firms with income-increasing discretionary accruals, and the dependent variable is positive discretionary accruals. In column (2), the sample consists of firms with income-decreasing discretionary accruals, and the dependent variable is negative discretionary accruals. All variables are defined in Table 42.1. All regressions contain dummy control variables for country, year, and industry. The t-statistics (in parentheses) are adjusted based on standard errors clustered by firm and year (Petersen 2009). The symbols *, **, and *** denote statistical significance at the 10 %, 5 %, and 1 % levels (two-tailed), respectively

                  (1) Positive DISCAC   (2) Negative DISCAC
EBC               0.1865 (2.21)**       0.2017 (2.09)**
OUTDIR            0.1172 (2.08)**       0.1302 (2.11)**
VOTE              0.8103 (3.25)***      0.6735 (2.23)**
VOTE*EBC          0.3952 (2.49)***      0.3064 (2.05)**
VOTE*OUTDIR       0.1732 (2.13)**       0.1105 (2.01)**
BOARDSIZE         0.0533 (1.27)         0.0681 (1.09)
CEODUAL           0.1860 (1.32)         0.1562 (1.26)
LNASSET           0.7590 (5.22)***      0.4463 (6.12)***
MB                0.2019 (2.08)**       0.1085 (1.93)**
LEV               0.9781 (3.20)***      0.7701 (2.10)**
ROA               3.087 (4.13)***       4.253 (3.62)***
Adjusted R2       13.8 %                12.4 %
I partition the sample into two groups based on the sign of the firms’ discretionary accruals. Table 42.3 column (1) presents the results using the subsample of firms with income-increasing discretionary accruals. Results indicate firms with higher equity ownership by outside directors, higher board independence, and lower divergence of control rights from cash flow rights, have lower incomeincreasing discretionary accruals. More importantly, I find that outside directors’ equity ownership and proportion of outside directors mitigate the propensity of firms with high separation of control rights from cash flow rights to make higher income-increasing discretionary accruals. Table 42.3 column (2) presents the results using the subsample of firms with income-decreasing discretionary accruals. I find firms with higher equity ownership, higher board independence, and lower divergence of control rights from cash flow rights, have lower income-decreasing discretionary accruals. Furthermore, I find that equity ownership by outside directors and proportion of outside directors mitigate the propensity of firms with high separation of control rights from cash flow rights to make higher income-decreasing discretionary accruals. In summary, when outside directors have equity ownership and when board independence is high, firms have both lower income-increasing and incomedecreasing discretionary accruals, apparently mitigating earnings management
Table 42.4 Regressions of accrual quality. The sample consists of 617 firms for 2,875 firm-year observations during the period 2004–2008 in five Asian countries comprising Indonesia, Malaysia, the Philippines, Singapore, and Thailand. The dependent variable is AQ measured by Dechow and Dichev's (2002) measure of mapping of accruals to past, present, and future cash from operations with higher values of AQ denoting better accrual quality. All variables are defined in Table 42.1. All regressions contain dummy control variables for country, year, and industry. The t-statistics (in parentheses) are adjusted based on standard errors clustered by firm and year (Petersen 2009). The symbols *, **, and *** denote statistical significance at the 10 %, 5 %, and 1 % levels (two-tailed), respectively

                  (1)                  (2)
EBC               0.3725 (3.11)***     0.2133 (3.26)***
OUTDIR            0.2049 (2.15)**      0.1557 (2.86)***
VOTE              0.5108 (3.74)***     0.4352 (3.05)***
VOTE*EBC                               0.1751 (2.80)***
VOTE*OUTDIR                            0.1163 (2.09)**
BOARDSIZE         0.1003 (1.29)        0.0642 (1.17)
CEODUAL           0.1890 (0.83)        0.2173 (1.56)
LNASSET           3.2513 (5.11)***     2.8764 (4.82)***
OPERCYCLE         3.2941 (4.75)***     3.0185 (5.01)***
NETPPE            1.1802 (3.35)***     0.9926 (3.72)***
STDCFO            1.2981 (2.83)***     1.8344 (3.32)***
STDSALE           1.0203 (2.77)***     0.7845 (2.09)**
NEGEARN           0.8306 (2.02)**      1.0345 (1.77)*
Adjusted R2       9.2 %                10.5 %
both on the upside and downside. For firms with greater separation of control rights from cash flow rights of controlling shareholders, those with high equity ownership by outside directors and those with high proportion of outside directors have lower income-increasing and lower income-decreasing discretionary accruals.
42.4.3 Accrual Quality

Table 42.4 presents regressions of accrual quality on corporate ownership structure and board characteristics. Following Gow et al. (2010), I employ the two-way clustering method where the standard errors are clustered by both firm and year in my regressions. In column (1), the coefficient on EBC is positive and significant, suggesting that firms whose directors receive higher equity ownership have higher accrual quality. Firms with a high proportion of outside directors have higher accrual quality. The coefficient on VOTE is negative and significant, indicating that firms with high misalignment between control rights and cash flow rights have lower accrual quality. In column (2), the interaction term VOTE*DIROWN is positive and significant at the 1 % level. This finding suggests that firms with
Table 42.5 Regressions of returns on earnings. The sample consists of 617 firms for 2,875 firm-year observations during the period 2004–2008 in five Asian countries comprising Indonesia, Malaysia, the Philippines, Singapore, and Thailand. The dependent variable (RET) is 12-month cumulative raw return ending 3 months after the fiscal year-end. All regressions contain dummy control variables for country, year, and industry. The t-statistics (in parentheses) are adjusted based on standard errors clustered by firm and year (Petersen 2009). The symbols *, **, and *** denote statistical significance at the 10 %, 5 %, and 1 % levels (two-tailed), respectively

                       (1)                  (2)
EARN                   1.1735 (3.85)***     1.2811 (3.62)***
EARN*EBC               0.3122 (2.87)***     0.2983 (2.80)***
EARN*OUTDIR            0.1094 (2.13)**      0.1105 (2.08)**
EARN*VOTE              0.6817 (3.72)***     0.7019 (3.50)***
EARN*VOTE*EBC                               0.2602 (2.83)***
EARN*VOTE*OUTDIR                            0.1925 (2.15)**
EARN*BOARDSIZE         0.0836 (1.50)        0.0801 (1.22)
EARN*CEODUAL           0.0405 (1.42)        0.0215 (1.30)
EARN*LNASSET           0.2011 (2.89)***     0.3122 (3.07)***
EARN*MB                0.1573 (1.80)*       0.1806 (1.81)*
EARN*LEV               0.7814 (2.03)**      0.6175 (2.84)***
N                      3,172                3,172
Adjusted R2            11.8 %               13.3 %
higher equity ownership by outside directors have a less pronounced negative association between accrual quality and the separation of control rights from cash flow rights of controlling shareholders. I then test whether board independence attenuates the negative association between accrual quality and the separation of control rights from cash flow rights of controlling shareholders. The interaction term VOTE*OUTDIR is positive and significant at the 5 % level. Hence, for firms with high separation of control rights from cash flow rights of the controlling shareholder, those with a higher proportion of outside directors have higher accrual quality. Collectively, my results suggest that stronger directors' equity ownership and higher board independence are associated with better financial reporting outcomes, especially in firms with high expected agency costs arising from the misalignment of control rights and cash flow rights.
42.4.4 Earnings Informativeness

Table 42.5 presents the regression results on earnings informativeness. The coefficient on EARN*DIROWN is positive and significant, indicating that the greater the equity ownership by outside directors, the higher the informativeness of reported earnings. The coefficient on EARN*OUTDIR is positive and significant, implying that firms
with a higher proportion of outside directors have higher informativeness of reported earnings. Consistent with prior studies (Fan and Wong 2002), the coefficient on EARN*VOTE is negative and significant, indicating that the separation of control rights from cash flow rights of the controlling shareholder reduces the informativeness of reported earnings. In column (2), I examine the interaction between the effectiveness of board monitoring and the divergence of control rights from cash flow rights in affecting earnings informativeness. The interaction term EARN*VOTE*DIROWN is positive and significant at the 1 % level. In firms with high misalignment between control rights and cash flow rights, the informativeness of earnings is higher when outside directors have higher equity ownership. The interaction term EARN*VOTE*OUTDIR is positive and significant at the 5 % level. The negative association between earnings informativeness and the separation of control rights from cash flow rights of the controlling shareholder is less pronounced in firms with a higher proportion of outside directors. In other words, in firms with high misalignment between control rights and cash flow rights, the informativeness of earnings is higher in firms with a higher proportion of outside directors.
42.4.5 Robustness Tests

As a sensitivity analysis, I repeat all my tests at the economy level. The economy-by-economy results indicate that earnings quality is positively associated with equity ownership by outside directors and board independence and negatively associated with the separation of cash flow rights from control rights. More importantly, the mitigating effects of equity ownership and board independence on the association between the separation of cash flow rights from control rights and earnings quality are not concentrated in any given economy. Year-by-year regressions yield qualitatively similar results, suggesting my inferences are not time-period specific. As a robustness test, I follow Haw et al. (2004) to include legal institutions that protect minority shareholder rights (proxied by legal tradition, minority shareholder rights, efficiency of the judicial system, or the disclosure system) and extralegal institutions (proxied by the effectiveness of competition law, diffusion of the press, and tax compliance) in my tests. I continue to document that firm-specific internal governance mechanisms, namely, outside directors' equity ownership and board independence, still matter in constraining management opportunism over the financial reporting process, especially in firms with high expected agency costs arising from the divergence between control rights and cash flow rights. Thus, my results suggest that there is an incremental role for firm-specific internal governance mechanisms, beyond country-level institutions, in improving the quality of financial information by mitigating insiders' entrenchment.
42.5 Conclusion
Publicly reported accounting information, which measures a firm's financial position and performance, can be used as important input information in various corporate governance mechanisms such as managerial incentive plans. Whether and how reported accounting information is used in the governance of a firm depends on the quality and credibility of such information. I provide evidence that the board of directors plays an important corporate governance role in improving the quality and credibility of accounting information in firms with high agency conflicts arising from their concentrated ownership structure. I examine the relation among outside directors' equity ownership, board independence, the separation of control rights from cash flow rights of the controlling shareholder, and earnings quality. I measure earnings quality with three financial reporting metrics: (i) discretionary accruals, (ii) mapping of accruals to cash flow, and (iii) informativeness of reported earnings. I find that earnings quality is positively associated with outside directors' equity ownership and the proportion of outside directors on the board. I document that firms with higher agency problems arising from the separation of control rights from cash flow rights of controlling shareholders have lower earnings quality. The negative association between the separation of control rights from cash flow rights and earnings quality is less pronounced in firms with higher equity ownership by outside directors. This finding suggests that equity ownership that aligns outside directors' and shareholders' interests is associated with more effective monitoring of managerial discretion on reported earnings. In addition, the low earnings quality induced by the separation of control rights from cash flow rights is mitigated by the proportion of outside directors on the board. Overall, my results suggest that directors' equity ownership and board independence are associated with better financial reporting outcomes, especially in firms with high expected agency costs arising from misalignment of control rights and cash flow rights.
Appendix 1: Discretionary Accruals

My first proxy for earnings quality is discretionary accruals. A substantial stream of prior studies uses absolute discretionary accruals as a proxy for earnings management (Ashbaugh et al. 2003; Warfield et al. 1995; Klein 2002; Kothari et al. 2005). Absolute discretionary accruals reflect corporate insiders' propensity to inflate reported income to conceal private benefits of control and to understate income in good performance years to create reserves for poor performance in the future. Accruals are estimated by taking the difference between net income and cash flow from operations. I employ the modified cross-sectional Jones (1991) model to decompose total accruals into non-discretionary accruals and
discretionary accruals. Specifically, I estimate the following model for each country in each year at the one-digit SIC industry level:

ACC = g1 (1/LAG1ASSET) + g2 (CHGSALE - CHGREC) + g3 (PPE)   (42.1)

where:
ACC = total accruals, which are calculated as net income minus operating cash flows, scaled by beginning-of-year total assets.
LAG1ASSET = total assets at the beginning of the fiscal year.
CHGSALE = sales change, which is net sales in year t less net sales in year t-1, scaled by beginning-of-year-t total assets.
CHGREC = change in accounts receivables scaled by beginning-of-year-t total assets.
PPE = gross property, plant, and equipment in year t scaled by beginning-of-year-t total assets.
I use the residuals from the annual cross-sectional country-industry regression model in Eq. 42.1 as the modified Jones model discretionary accruals. I use the following regression model to test the association between discretionary accruals and board structure:

DISCAC = b0 + b1 EBC + b2 OUTDIR + b3 VOTE + b4 VOTE*EBC + b5 VOTE*OUTDIR + b6 BOARDSIZE + b7 CEODUAL + b8 LNASSET + b9 MB + b10 LEV + b11 ROA + Year controls + Country controls   (42.2)

where:
DISCAC = absolute value of discretionary accruals estimated based on the modified Jones model (see Eq. 42.1).
DIROWN = common stock and stock options held by outside directors divided by the number of ordinary shares outstanding in the firm.
OUTDIR = proportion of outside directors on the board.
VOTE = control rights divided by cash flow rights of the largest controlling shareholder.
BOARDSIZE = number of directors on the board.
CEODUAL = a dummy variable that equals 1 if the CEO is chairman of the board and 0 otherwise.
LNASSET = natural logarithm of total assets.
MB = market value of equity divided by book value of equity.
LEV = long-term debt divided by total assets.
ROA = net profit after tax divided by total assets.
Country Controls = a set of country dummy variables.
Year Controls = a set of year dummy variables.
If high equity ownership for outside directors improves the board monitoring of managerial discretion over the financial accounting process, I predict the coefficient b1 to be negative. Similarly, a negative coefficient for b2 suggests that board independence curtails managerial opportunism on financial reporting. A positive coefficient b3 indicates that greater separation of control rights from cash flow rights of the largest controlling shareholder induces greater earnings management. I predict the positive association between absolute discretionary accruals and the separation of control rights from cash flow rights to be less pronounced in firms with high equity ownership by outside directors. Thus, I expect coefficient b4 to be negative. Furthermore, I predict the positive association between absolute discretionary accruals and the separation of control rights from cash flow rights to be less pronounced in firms with a high proportion of outside directors on the board. Thus, I expect coefficient b5 to be negative. Other board characteristics include the total number of directors (BOARDSIZE) and CEO-chairman duality (CEODUAL). The evidence is mixed on whether board size and CEO duality impair board effectiveness. Thus, ex ante, there is no prediction on the sign of either variable. The model controls for the effects of firm size, growth opportunities, and leverage on discretionary accruals. Large firms have greater external monitoring, have more stable operations and stronger control structures, and hence report smaller abnormal accruals (Dechow and Dichev 2002). Firm size (LNASSET) is measured based on book value of total assets. Because discretionary accruals are higher for firms with higher growth opportunities, I employ the market-to-book equity (MB) ratio to control for the effect of growth opportunities on discretionary accruals (Kothari et al. 2005). I also include financial leverage (LEV), defined as long-term debt divided by total assets, to control for the managerial discretion over the financial accounting process to mitigate constraints of accounting-based debt covenants (Smith and Watts 1992). To control for the effect of firm performance on discretionary accruals, I include firm profitability (ROA), defined as net income divided by total assets. Finally, I include country dummy variables to capture country-specific factors that may affect the development of capital markets and financial accounting quality. I include dummy variables for years and industries to control for time effects and industry effects, respectively.
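As an illustration of how the estimation in Eqs. 42.1 and 42.2 could be organized, the following Python sketch estimates the modified Jones model by country-year-industry cell and recovers absolute discretionary accruals. It is only a minimal sketch: the DataFrame and its column names (country, year, sic1, acc, lag1asset, chgsale, chgrec, ppe) are hypothetical and assume the scaling described above has already been applied.

```python
import pandas as pd
import statsmodels.api as sm

def modified_jones_discretionary_accruals(df):
    """Estimate Eq. 42.1 within each country-year-industry cell and return the
    data with the modified Jones residual and its absolute value (DISCAC).

    Assumed columns (hypothetical names): country, year, sic1 (one-digit SIC),
    acc, lag1asset, chgsale, chgrec, ppe -- scaled as described in Appendix 1.
    """
    pieces = []
    for _, grp in df.groupby(["country", "year", "sic1"]):
        if len(grp) < 10:                      # skip thin cells; cutoff is arbitrary
            continue
        X = pd.DataFrame({
            "inv_assets": 1.0 / grp["lag1asset"],
            "chgsale_less_chgrec": grp["chgsale"] - grp["chgrec"],
            "ppe": grp["ppe"],
        })
        fit = sm.OLS(grp["acc"], X).fit()      # no intercept, as in Eq. 42.1
        out = grp.copy()
        out["disc_accrual"] = fit.resid        # modified Jones model residual
        out["DISCAC"] = out["disc_accrual"].abs()
        pieces.append(out)
    return pd.concat(pieces)
```

The resulting DISCAC column would then serve as the dependent variable in a pooled regression such as Eq. 42.2, with country, industry, and year dummies added.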
Appendix 2: Accruals Quality

My second proxy for earnings quality is accruals quality. Dechow and Dichev (2002) propose a measure of earnings quality that captures the mapping of current accruals into last-period, current-period, and next-period cash flows. Francis et al. (2005) find that this measure (which they term accrual quality) is associated with measures of cost of equity capital. My measure of accrual quality is based on Dechow and Dichev's (2002) model relating current accruals to last-period, current-period, and next-period cash flows:
TCA_{j,t}/Assets_{j,t} = g_{0,j} + g_{1,j}(CFO_{j,t-1}/Assets_{j,t}) + g_{2,j}(CFO_{j,t}/Assets_{j,t}) + g_{3,j}(CFO_{j,t+1}/Assets_{j,t}) + e_{j,t}   (42.3)
where: TCAj,t ¼ firm j’s total current accruals in year t ¼ DCAj,t – DCLj,t – DCASHj,t + DSTDj,t Assets ¼ firm j’s average total assets in year t 1 and year t CFOj,t ¼ cash flow from operations in year t is calculated as net income less total accruals (TA) where: TAj,t ¼ DCAj,t – DCLj,t – DCASHj,t + DSTDj,t – DEPNj,t where DCAj,t ¼ firm j’s change in current assets between year t 1 and year t DCLj,t ¼ firm j’s change in current liabilities between year t 1 and year t DCASHj,t ¼ firm j’s change in cash between year t 1 and year t DSTDj,t ¼ firm j’s change in debt in current liabilities between year t 1 and year t DEPNj,t ¼ firm j’s change in depreciation and amortization expense in year t I estimate Eq. 42.3 for each one-digit SIC industry for each country-year combination. These estimations yield firm- and year-specific residuals, ejt, which form the basis for the accrual quality metric. AQ is the standard deviation of firm j’s estimated residuals multiplied by 1. Hence, large values of AQ correspond to high accrual quality. I employ the following model to test the association between accrual quality and board characteristics: AQ ¼ b0 þ b1 EBC þ b2 OUTDIR þ b3 VOTE þ b4 VOTE DIROWN þ b5 VOTE OUTDIR þ b6 CEODUAL þ b7 BOARDSIZE þ b8 LNASSET þ b9 OPERCYCLE þ b10 NETPPE þ b11 STDSALE þ b12 STDCFO þ b13 NEGEARN þ Country controls þ Industry Controls þ Year Controls: (42.4) where: AQ ¼ the standard deviation of firm j’s residuals from a regression of current accruals on lagged, current, and future cash flows from operations. I multiply the variable by 1 so that higher AQ measure denotes higher accrual quality. OPERCYCLE ¼ log of the sum of the firm’s days accounts receivable and days inventory. NETPPE ¼ ratio of the net book value of PP&E to total assets. STDCFO ¼ standard deviation of the firm’s rolling 5-year cash flows from operations. STDSALE ¼ standard deviation of the firm’s rolling 5-year sales revenue. NEGEARN ¼ the firm’s proportion of losses over the prior 5 years. All other variables are previously defined.
If high equity ownership for outside directors improves the board monitoring of managerial discretion over the financial accounting process, I predict coefficient b1 to be positive. Similarly, a positive coefficient for b2 suggests that higher board independence is associated with higher accrual quality. If greater agency costs arise from the higher separation of control rights from cash flow rights of the largest controlling shareholder, coefficient b3 should be negative. I predict that the negative association between accrual quality and the separation of control rights from cash flow rights is mitigated in firms with high equity ownership by outside directors. Thus, I expect coefficient b4 to be positive. Furthermore, the negative effect of the separation of control rights from cash flow rights on accrual quality should be attenuated in firms with a high proportion of outside directors on the board. Thus, I expect coefficient b5 to be positive. In Eq. 42.4, the control variables include innate determinants of accrual quality. Briefly, Dechow and Dichev (2002) find that accrual quality is positively associated with firm size and negatively associated with cash flow variability, sales variability, operating cycle, and incidence of losses. Firm size is measured by the natural logarithm of total assets (LNASSET). Operating cycle (OPERCYCLE) is the log of the sum of the firm's days accounts receivable and days inventory. Capital intensity, NETPPE, is proxied by the ratio of the net book value of PP&E to total assets. Cash flow variability (STDCFO) is the standard deviation of the firm's rolling 5-year cash flows from operations. Sales variability (STDSALE) is the standard deviation of the firm's rolling 5-year sales revenue. Incidence of negative earnings realizations, NEGEARN, is measured as the firm's proportion of losses over the prior 5 years.
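A minimal Python sketch of the accrual quality construction in Eq. 42.3 is given below. The column names (firm, country, year, sic1, tca, cfo_lag, cfo, cfo_lead, assets) are hypothetical; the sketch computes AQ as the negative of the standard deviation of each firm's Dechow-Dichev residuals, as described above.

```python
import pandas as pd
import statsmodels.api as sm

def accrual_quality(df):
    """Estimate Eq. 42.3 within each country-year-industry cell and return a
    firm-level AQ measure: -1 times the standard deviation of the firm's residuals.

    Assumed columns (hypothetical names): firm, country, year, sic1, tca,
    cfo_lag, cfo, cfo_lead, assets (average total assets in t-1 and t).
    """
    data = df.copy()
    for col in ["tca", "cfo_lag", "cfo", "cfo_lead"]:
        data[col + "_s"] = data[col] / data["assets"]     # scale by average assets

    resid_list = []
    for _, grp in data.groupby(["country", "year", "sic1"]):
        if len(grp) < 10:                                  # skip thin cells
            continue
        X = sm.add_constant(grp[["cfo_lag_s", "cfo_s", "cfo_lead_s"]])
        fit = sm.OLS(grp["tca_s"], X).fit()
        resid_list.append(pd.Series(fit.resid, index=grp.index))

    data["dd_resid"] = pd.concat(resid_list)               # aligns on the original index
    aq = -1.0 * data.groupby("firm")["dd_resid"].std()     # higher AQ = better quality
    return aq.rename("AQ")
```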
Appendix 3: Earnings Informativeness

My third proxy of earnings quality is earnings informativeness, measured by the earnings response coefficients (Warfield et al. 1995; Fan and Wong 2002; Francis et al. 2005). The following model is adopted to investigate the relation between earnings informativeness and equity-based compensation, board independence, and the separation of control rights from cash flow rights:

RET = b0 + b1 EARN + b2 EARN*DIROWN + b3 EARN*OUTDIR + b4 EARN*VOTE + b5 EARN*VOTE*EBC + b6 EARN*VOTE*OUTDIR + b7 EARN*BOARDSIZE + b8 EARN*CEODUAL + b9 EARN*LNASSET + b10 EARN*MB + b11 EARN*LEV + b12 EARN*ROA + Year controls + Country controls + Industry controls + e   (42.5)

where:
RET = 12-month cumulative raw return ending 3 months after the fiscal year-end.
EARN = net income for year t, scaled by the market value of equity at the end of year t-1.
All other variables are as previously defined. The estimated coefficient b1 reflects the earnings response coefficient. A positive estimate of b2 is consistent with the notion that equity ownership for outside directors is associated with more informative earnings. A positive estimate of b3 indicates that the greater the proportion of outside directors on the board, the greater the informativeness of earnings. From Fan and Wong (2002), I expect coefficient b4 to be negative, indicating that reported earnings are less informative when the ultimate shareholder's control rights exceed his cash flow rights. If high equity ownership for outside directors improves their monitoring of management, the negative effects of the divergence of control rights from cash flow rights should be mitigated in firms with high equity-based compensation. I expect coefficient b5 to be positive. If monitoring intensity is positively associated with the proportion of outside directors on the board, the reduced informativeness of reported earnings in firms with high divergence of control rights from cash flow rights should be mitigated in firms with higher board independence. I expect coefficient b6 to be positive.
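For illustration, Eq. 42.5 could be estimated along the following lines with statsmodels. The lower-case column names and the DataFrame `df` are hypothetical, and this sketch clusters standard errors by firm only; the two-way firm-and-year clustering used in the chapter's tables is sketched in Appendix 4.

```python
import statsmodels.formula.api as smf

# Hypothetical DataFrame `df` with lower-case columns: ret, earn, dirown, outdir,
# vote, boardsize, ceodual, lnasset, mb, lev, roa, country, industry, year, firm.
formula = (
    "ret ~ earn + earn:dirown + earn:outdir + earn:vote"
    " + earn:vote:dirown + earn:vote:outdir"
    " + earn:boardsize + earn:ceodual + earn:lnasset"
    " + earn:mb + earn:lev + earn:roa"
    " + C(country) + C(industry) + C(year)"
)
# One-way clustering by firm for simplicity; the chapter clusters by firm and year.
results = smf.ols(formula, data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["firm"]}
)
print(results.summary())
```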
Appendix 4: Adjusting for Standard Errors in Panel Data

Gow et al. (2010) examine several approaches to address issues of cross-sectional and time-series dependence in accounting research. They identify a number of common approaches: Fama-MacBeth, Newey-West, the Z2 statistic, and standard errors clustered by firm, industry, or time. Gow et al. (2010) review each of these approaches and discuss the circumstances in which they produce valid inferences. This section draws heavily on Gow et al. (2010), "Correcting for cross-sectional and time-series dependence in accounting research," The Accounting Review 85(2), 483–512; readers should refer to that paper for details.
(i) OLS and White Standard Errors
OLS standard errors assume that errors are both homoskedastic and uncorrelated across observations. While White (1980) standard errors are consistent in the presence of heteroskedasticity, both OLS and White standard errors produce misspecified test statistics when either form of dependence is present.
(ii) Newey-West
Newey and West (1987) generalize the White (1980) approach to yield a covariance matrix estimator that is robust to both heteroskedasticity and serial correlation. Gow et al. (2010) find that the Newey-West procedure produces slightly biased estimates of standard errors when time-series dependence alone is present. However, Gow et al. (2010) find that, in the presence of both cross-sectional and time-series dependence, the Newey-West method produces misspecified test statistics with even moderate levels of cross-sectional dependence.
(iii) Fama-MacBeth
The Fama-MacBeth approach (Fama and MacBeth 1973) is designed to address concerns about cross-sectional correlation. The Fama-MacBeth
approach (FM-t) involves estimating T cross-sectional regressions (one for each period) and basing inferences on a t-statistic calculated as

t = \bar{b}/se(\bar{b}), where \bar{b} = (1/T) \sum_{t=1}^{T} \hat{b}_t,   (42.6)
and se(\bar{b}) is the standard error of the coefficients based on their empirical distribution. When there is no cross-regression (time-series) dependence, this approach yields consistent estimates of the standard error of the coefficients as T goes to infinity. Two common variants of the Fama-MacBeth approach appear in the accounting literature. The first variant, FM-i, involves estimating firm- or portfolio-specific time-series regressions with inferences based on the cross-sectional distribution of coefficients. This modification of the Fama-MacBeth approach is appropriate if there is time-series dependence but not cross-sectional dependence. However, FM-i is frequently used when cross-sectional dependence is likely, such as when returns are the dependent variable. The second common variant of the FM-t approach, FM-NW, is intended to correct for serial correlation in addition to cross-sectional correlation. FM-NW modifies FM-t by applying a Newey-West adjustment in an attempt to correct for serial correlation. Gow et al. (2010) suggest two reasons to believe that FM-NW may not correct for serial correlation. First, FM-NW involves applying Newey-West to a limited number of observations, a setting in which Newey-West is known to perform poorly. Second, FM-NW applies Newey-West to a time-series of coefficients, whereas the dependence is in the underlying data.
(iv) Z2 Statistic
The Z2-t (Z2-i) statistic is calculated using t-statistics from separate cross-sectional (time-series) regressions for each time period (cross-sectional unit) and is given by the expression:

Z2 = \bar{t}/se(\bar{t}), where \bar{t} = (1/T) \sum_{t=1}^{T} \hat{t}_t,   (42.7)
se(\bar{t}) is the standard error of the t-statistics based on their empirical distribution, and T is the number of time periods (cross-sectional units) in the sample. Gow et al. (2010) suggest Z2 may suffer from cross-regression dependence in the same way as the Fama-MacBeth approach does.
(v) One-Way Cluster-Robust Standard Errors
A number of studies in Gow et al.'s (2010) survey use cluster-robust standard errors, with clustering either along a cross-sectional dimension (e.g., analyst, firm, industry, or country) or along a time-series dimension (e.g., year); the former is referred to as CL-i and the latter as CL-t. Cluster-robust standard errors
(also referred to as Huber-White or Rogers standard errors) generalize the heteroskedasticity-robust standard errors of White (1980). With observations grouped into G clusters of N_g observations, for g in {1, . . ., G}, the covariance matrix is estimated using the following expression:

\hat{V} = (X'X)^{-1} \hat{B} (X'X)^{-1}, where \hat{B} = \sum_{g=1}^{G} X_g' \hat{u}_g \hat{u}_g' X_g,   (42.8)
where X_g is the N_g × K matrix of regressors, and u_g is the N_g-vector of residuals for cluster g. While one-way cluster-robust standard errors allow for correlation of unknown form within a cluster, it is assumed that errors are uncorrelated across clusters. For example, clustering by time (firm) allows observations to be cross-sectionally (serially) correlated but assumes independence over time (across firms). While some studies consider both CL-i and CL-t separately, separate consideration of CL-t and CL-i does not correct for both cross-sectional and time-series dependence. Gow et al. (2010) find that t-statistics for CL-t are inflated in the presence of time-series dependence and t-statistics for CL-i are inflated in the presence of cross-sectional dependence. Thus, when both forms of dependence are present, both CL-t and CL-i produce overstated t-statistics.
(vi) Two-Way Cluster-Robust Standard Errors
An extension of cluster-robust standard errors is to allow for clustering along more than one dimension. In contrast to one-way clustering, two-way clustering (CL-2) allows for both time-series and cross-sectional dependence. For example, two-way clustering by firm and year allows for within-firm (time-series) dependence and within-year (cross-sectional) dependence (e.g., the observation for firm j in year t can be correlated with that for firm j in year t + 1 and that for firm k in year t). To estimate two-way cluster-robust standard errors, the expression in Eq. 42.8 is evaluated using clusters along each dimension (e.g., clustered by industry and clustered by year) to yield V1 and V2. Then the same expression is calculated using the "intersection" clusters (in the example, observations within an industry-year) to yield VI. The two-way cluster-robust estimator V is calculated as V = V1 + V2 - VI. Standard econometric software packages (e.g., Stata and SAS) contain routines for calculating one-way cluster-robust standard errors, making it relatively straightforward to implement two-way cluster-robust standard errors. Gow et al. (2010) find that, in the presence of both cross-sectional and time-series dependence, the two-way clustering method (by year and by firm), which allows for both cross-sectional and time-series dependence, produces well-specified test statistics. Johnston and DiNardo (1997) and Greene (2000) are two econometric textbooks that contain a detailed discussion of the econometric issues relating to panel data.
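The following Python sketch implements Eq. 42.8 directly and combines the three one-way estimates as V = V1 + V2 - VI. It is a bare-bones illustration: inputs are plain NumPy arrays, and the finite-sample (degrees-of-freedom) corrections applied by packaged routines are omitted.

```python
import numpy as np

def cluster_cov(X, u, groups):
    """One-way cluster-robust covariance, Eq. 42.8:
    (X'X)^-1 B (X'X)^-1 with B = sum_g X_g' u_g u_g' X_g."""
    XtX_inv = np.linalg.inv(X.T @ X)
    B = np.zeros((X.shape[1], X.shape[1]))
    for g in np.unique(groups):
        members = groups == g
        score = X[members].T @ u[members]      # X_g' u_g, a k-vector
        B += np.outer(score, score)
    return XtX_inv @ B @ XtX_inv

def two_way_cluster_cov(X, u, firm, year):
    """Two-way (firm and year) cluster-robust covariance: V = V1 + V2 - VI,
    where VI clusters on the firm-year intersection."""
    intersection = np.array([f"{f}|{t}" for f, t in zip(firm, year)])
    return (cluster_cov(X, u, firm)
            + cluster_cov(X, u, year)
            - cluster_cov(X, u, intersection))

# Usage sketch: X is the n-by-k regressor matrix (including a constant), y the n-vector.
# beta = np.linalg.solve(X.T @ X, X.T @ y)
# u = y - X @ beta
# V = two_way_cluster_cov(X, u, firm_ids, year_ids)
# se = np.sqrt(np.diag(V))
```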
Acknowledgment I appreciate the research funding provided by the Institute of Certified Public Accountants of Singapore. I would like to dedicate this paper to my late father, Yew-Ming Lee.
References Ashbaugh, H., LaFond, R., & Mayhew, B. W. (2003). Do nonaudit services compromise auditor independence? Further evidence. The Accounting Review, 78(3), 611–639. Beasley, M. S. (1996). An empirical analysis of the relation between the board of director composition and financial statement fraud. The Accounting Review, 71, 443–465. Claessens, S., Djankov, S., & Lang, L. H. P. (2000). The separation of ownership and control in East Asian corporations. Journal of Financial Economics, 58, 81–112. Claessens, S., Djankov, S., Fan, J. P. H., & Lang, L. H. P. (2002). Disentangling the incentive and entrenchment effects of large shareholdings. Journal of Finance, 57(2), 2741–2771. Dechow, P. M., & Dichev, I. (2002). The quality of accruals and earnings: The role of accrual estimation errors. The Accounting Review, 77, 35–59. Dechow, P. M., Sloan, R. G., & Sweeney, A. P. (1996). Causes and consequences of earnings manipulation: An analysis of firms subject to enforcement actions by the SEC. Contemporary Accounting Research, 13, 1–36. Fama, E., & Jensen, M. (1983). Agency problems and residual claims. Journal of Law and Economics, 26, 327–349. Fama, E., & MacBeth, J. (1973). Risk, return, and equilibrium: Empirical tests. The Journal of Political Economy, 81(3), 607–636. Fan, J., & Wong, T. J. (2002). Corporate ownership structure and the informativeness of accounting earnings in East Asia. Journal of Accounting & Economics, 33, 401–425. Fan, J., & Wong, T. J. (2005). Do external auditors perform a corporate governance role in emerging markets? Journal of Accounting Research, 43(1), 35–72. Francis, J., Lafond, R., Olsson, P., & Schipper, K. (2005). The market pricing of accrual quality. Journal of Accounting & Economics, 39, 295–327. Gillette, A., Noe, T., & Rebello, M. (2003). Corporate board composition, protocols and voting behavior: Experimental evidence. Journal of Finance, 58, 1997–2032. Gow, I. D., Ormazabal, G., & Taylor, D. J. (2010). Correcting for cross-sectional and time series dependence in accounting research. The Accounting Review, 85(2), 483–512. Greene, W. (2000). Econometrics analysis. Upper Saddle River: Prentice-Hall. Haw, I., Hu, B., Hwang, L., & Wu, W. (2004). Ultimate ownership, income management, and legal and extra-legal institutions. Journal of Accounting Research, 42, 423–462. Hermalin, B., & Weisbach, M. (1988). The determinants of board composition. Rand Journal of Economics, 19, 589–606. Johnson, S., Boone, P., Breach, A., & Friedman, E. (2000). Corporate governance in the Asian financial crisis. Journal of Financial Economics, 58, 141–186. Johnston, J., & DiNardo, J. (1997). Econometrics method. New York: Mc-Graw Hill. Klein, A. (2002). Audit committee, board of director characteristics, and earnings management. Journal of Accounting and Economics, 33(3), 375–400. Kothari, S. P., Leone, A. J., & Wasley, C. E. (2005). Performance matched discretionary accruals. Journal of Accounting and Economics, 39, 163–197. La Porta, R., Lopez-de-Silanes, F., Shleifer, A., & Vishny, R. (1998). Law and finance. Journal of Political Economy, 106, 1113–1155. La Porta, R., Lopez-De-Silanes, F., & Shleifer, A. (1999). Corporate ownership around the world. Journal of Finance, 54, 471–518. Lang, M., Lins, K., & Miller, D. (2004). Concentrated control, analyst following, and valuation: Do analysts matter most when investors are protected least? Journal of Accounting Research, 42(3), 589–623.
Lee, K. W. (2007). Corporate voluntary disclosure and the separation of cash flow rights from control rights. Review of Quantitative Finance and Accounting, 28, 393–416. Lee, K. W., Lev, B., & Yeo, H. H. (2007). Organizational structure and earnings management. Journal of Accounting, Auditing and Finance, 22(2), 293–391. Lee, K. W., Lev, B., & Yeo, H. H. (2008). Executive pay dispersion, corporate governance and firm performance. Review of Quantitative Finance and Accounting, 30, 315–338. Lee, C. F., Lee, K. W., & Yeo, H. H. (2009). Investor protection and convertible debt design. Journal of Banking and Finance, 33(6), 985–995. Leuz, C., Nanda, D., & Wysocki, P. (2003). Earnings management and investor protection: An international comparison. Journal of Financial Economics, 69, 505–527. Mehran, H. (1995). Executive compensation structure, ownership, and firm performance. Journal of Financial Economics, 38(2), 163–184. Morck, R., Shleifer, A., & Vishny, R. W. (1988). Management ownership and market valuation: An empirical analysis. Journal of Financial Economics, 20, 293–315. Newey, W., & West, D. (1987). A simple, positive semi-definite, heteroskedasticity and autocorrelation consistent covariance matrix. Econometrica, 55(3), 703–708. Peasnell, K., Pope, P., & Young, S. (2005). Board monitoring and earnings management: Do outside directors influence abnormal accruals? Journal of Business Finance & Accounting, 32(7/8), 1311–1345. Perry, T. (2000). Incentive compensation for outside directors and CEO turnover. Working paper, Arizona State University. Petersen, M. (2009). Estimating standard errors in finance panel data sets: Comparing approaches. Review of Financial Studies, 22(1), 435–480. Rosenstein, S., & Wyatt, J. (1990). Outside directors, board independence, and shareholder wealth. Journal of Financial Economics, 26, 175–192. Ryan, H., & Wiggins, R. (2004). Who is whose pocket? Director compensation, board independence, barriers to effective monitoring. Journal of Financial Economics, 73, 497–524. Shivdasani, A. (1993). Board composition, ownership structure, and hostile takeovers. Journal of Accounting & Economics, 16, 167–188. Shleifer, A., & Vishny, R. (1997). A survey of corporate governance. Journal of Finance, 52, 737–783. Smith, C., & Watts, R. (1992). The investment opportunity set and corporate financing, dividend and compensation policies. Journal of Financial Economics, 32, 263–292. Warfield, T., Wild, J. J., & Wild, K. (1995). Managerial ownership, accounting choices, and informativeness of earnings. Journal of Accounting and Economics, 20, 61–91. Weisbach, M. (1988). Outside directors and CEO turnover. Journal of Financial Economics, 20, 431–460. White, H. (1980). A heteroskedasticity-consistent covariance matrix estimator and a direct test for heteroskedasticity. Econometrica, 48(4), 817–838.
Rationality and Heterogeneity of Survey Forecasts of the Yen-Dollar Exchange Rate: A Reexamination
43
Richard Cohen, Carl S. Bonham, and Shigeyuki Abe
Contents
43.1 Introduction ..... 1196
43.2 Background: Testing Rationality in the Foreign Exchange Market ..... 1199
43.2.1 Why Test Rational Expectations with Disaggregated Survey Forecast Data? ..... 1200
43.2.2 Rational Reasons for the Failure of the Rational Expectations Hypothesis Using Disaggregated Data ..... 1201
43.3 Description of Data ..... 1202
43.4 Empirical Tests of Rationality ..... 1203
43.4.1 Joint Tests of Unbiasedness and Weak Efficiency ..... 1204
43.4.2 Pretests for Rationality: The Stationarity of the Forecast Error ..... 1212
43.4.3 Univariate Tests for Unbiasedness ..... 1218
43.4.4 Unbiasedness Tests Using Error Correction Models ..... 1222
43.4.5 Explicit Tests of Weak Efficiency ..... 1228
43.5 Micro-homogeneity Tests ..... 1230
43.5.1 Ito's Heterogeneity Tests ..... 1239
43.6 Conclusions ..... 1241
Appendix 1: Testing Micro-homogeneity with Survey Forecasts ..... 1244
References ..... 1245
R. Cohen University of Hawaii Economic Research Organization and Economics, University of Hawaii at Manoa, Honolulu, HI, USA e-mail: [email protected] C.S. Bonham (*) College of Business and Public Policy, University of Alaska Anchorage, Anchorage, AK, USA e-mail: [email protected] S. Abe Faculty of Policy Studies, Doshisha University, Kyoto, Japan e-mail: [email protected] C.-F. Lee, J. Lee (eds.), Handbook of Financial Econometrics and Statistics, DOI 10.1007/978-1-4614-7750-1_43, # Springer Science+Business Media New York 2015
Abstract
This chapter examines the rationality and diversity of industry-level forecasts of the yen-dollar exchange rate collected by the Japan Center for International Finance. In several ways we update and extend the seminal work by Ito (1990, American Economic Review 80, 434–449). We compare three specifications for testing rationality: the "conventional" bivariate regression, the univariate regression of a forecast error on a constant and other information set variables, and an error correction model (ECM). We find that the bivariate specification, while producing consistent estimates, suffers from two defects: first, the conventional restrictions are sufficient but not necessary for unbiasedness; second, the test has low power. However, before we can apply the univariate specification, we must conduct pretests for the stationarity of the forecast error. We find a unit root in the 6-month horizon forecast error for all groups, thereby rejecting unbiasedness and weak efficiency at the pretest stage. For the other two horizons, we find much evidence in favor of unbiasedness but not weak efficiency. Our ECM rejects unbiasedness for all forecasters at all horizons. We conjecture that these results, too, occur because the restrictions test sufficiency, not necessity. We extend the analysis of industry-level forecasts to a SUR-type structure using an innovative GMM technique (Bonham and Cohen 2001, Journal of Business & Economic Statistics, 19, 278–291) that allows for forecaster cross-correlation due to the existence of common shocks and/or herd effects. Our GMM tests of micro-homogeneity uniformly reject the hypothesis that forecasters exhibit similar rationality characteristics.
Keywords
Rational expectations • Unbiasedness • Weak efficiency • Micro-homogeneity • Heterogeneity • Exchange rate • Survey forecasts • Aggregation bias • GMM • SUR
43.1 Introduction
This chapter examines the rationality of industry-level survey forecasts of the yen-dollar exchange rate collected by the Japan Center for International Finance (JCIF). Tests of rationality take on additional significance when performed on asset market prices, since rational expectations is a necessary condition for market efficiency. In the foreign exchange market, tests of forward rate unbiasedness simultaneously test a zero risk premium in the exchange rate; hence this joint hypothesis is also called the risk-neutral efficient market hypothesis (RNEMH). The practical significance of such a hypothesis is that if the forward rate is indeed an unbiased predictor of the future spot rate, then exchange risk can be costlessly hedged in the forward market. However, the RNEMH has been rejected nearly universally. Since the risk premium is unobservable, insight into the reason for the rejection of the RNEMH can be gained by separately testing for rationality using survey data on expectations. Because forecasters cannot be assumed to have
identical information sets, we must use individual survey forecasts to avoid the aggregation bias inherent in the use of mean or median forecasts. We use data from the same source as Ito (1990), the seminal study recognizing the importance of using individual data to test rationality hypotheses about the exchange rate. To achieve stationarity of the realizations and forecasts (which each have a unit root), Ito (1990) followed the conventional specification at the time of subtracting the current realization from each. These variables are then referred to as being in "change" form. To test unbiasedness he regressed the future rate of depreciation on the forecasted return and tested the joint restrictions that the intercept equalled zero and the slope coefficient equalled one. At the industry level he found approximately twice as many rejections (at the 1 % level) at the longest horizon (6 months) as at the two shorter horizons (1 and 3 months). We extend Ito's analysis in two principal respects: the specification of unbiasedness tests and inference in tests for micro-homogeneity of forecasters. One problem with the change specification of unbiasedness tests is that, since there is much more variation in the change in the realization than in the forecast, there is a tendency to under-reject the part of the joint hypothesis that the coefficient on the forecast equals one. This is precisely what we would expect in tests of variables which are near random walks. Second, and more fundamentally, Ito's (1990) bivariate (joint) regression test of unbiasedness is actually a test of sufficiency only, not of necessity as well as sufficiency. Following Holden and Peel (1990), the necessary and sufficient condition for unbiasedness is a mean zero forecast error. This is tested in a univariate regression by imposing a coefficient of unity on the forecast and testing the restriction that the intercept equals zero. This critique applies whether or not the forecast and realization are integrated in levels. However, when the realization and forecast are integrated in levels, we must conduct a pretest to determine whether the forecast error is stationary. If the forecast and realization are both integrated and cointegrated, then a necessary and sufficient condition for unbiasedness is that the intercept and slope in the cointegrating regression (using levels of the realization and forecast) are zero and one, respectively. We test this hypothesis using Liu and Maddala's (1992) method of imposing the (0, 1) vector, then testing the "restricted" cointegrating residual for stationarity.1,2 Third, we use the result from Engle and Granger (1987) that cointegrated variables have an error correction representation. First, we employ the specification and unbiasedness restrictions originally proposed by Hakkio and Rush (1989). However, the unbiasedness tests using the ECM specification produce more rejections over industry groups and horizons than the univariate or bivariate specifications.
1
If in addition the residuals from the cointegrating regression are white noise, this supports a type of weak efficiency. 2 Pretesting the forecast error for stationarity is a common practice in testing the RNEMH, but the only study we know of that applies this practice to survey forecasts of exchange rates is Osterberg (2000), and he does not test for a zero intercept in the cointegrating regression.
We conjecture that one possible explanation for this apparent anomaly is that, similar to the joint restrictions in the bivariate test, the ECM restrictions test sufficient conditions for unbiasedness, while the univariate restriction only tests a necessary and sufficient condition. Thus, the ECM has a tendency to over-reject. We then respecify the ECM, so that only the necessary and sufficient conditions are tested. We compare our results to those obtained using the sufficient conditions represented by the joint restrictions as well as the necessary and sufficient condition represented by the univariate restriction. The second direction in which we extend Ito's (1990) analysis has to do with testing for differences among forecasters' ability to produce rational predictions.3 We recognize, as does Ito, that differences among forecasters over time indicate that at least some individuals form biased forecasts. (The converse does not necessarily hold, since a failure to reject micro-homogeneity could conceivably be due to the same degree of irrationality of each individual in the panel.) Ito's heterogeneity test is a single-equation test of deviations of individual forecasts from the mean forecast, where the latter may or may not be unbiased. In contrast, we test for differences in individual forecast performance using a micro-homogeneity test, i.e., testing for equal coefficients across the system of individual univariate rationality equations. In our tests for micro-homogeneity, we expect cross-forecaster error correlation due to the possibility of common macro shocks and/or herd effects in expectations. To this end, we incorporate two innovations not previously used by investigators studying survey data on exchange rate expectations. First, in our micro-homogeneity tests, we use a GMM system with a variance-covariance matrix that allows for cross-sectional as well as moving average and heteroscedastic errors. Here we follow the widely used practice of modeling the individual regression residuals as an MA process of order h - 1, where h is the number of periods in the forecast horizon. However, no other researchers have actually tested whether an MA process of this length is required to model the cross-sectional behavior of rational forecast errors. Thus, second, to investigate the nature of the actual MA processes, we use Pesaran's (2004) CD test to examine the statistical significance of the cross-sectional dependence of forecast errors, both contemporaneous and lagged. The organization of the rest of the chapter is as follows: in Sect. 43.2 we review some fundamental issues in testing rationality in the foreign exchange market. In Sects. 43.3 and 43.4, we conduct various rationality tests on the JCIF data. Section 43.5 contains our micro-homogeneity tests. Section 43.6 summarizes and discusses areas for future research.
3
Market microstructure theories assume that there is a minimum amount of forecaster (as well as cross-sectional forecast) diversity. Also, theories of exchange rate determination that depend upon the interaction between chartists (or noise traders) and fundamentalists by definition require a certain structure of forecaster heterogeneity.
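As an illustration of the Pesaran (2004) CD test referred to above, the statistic can be computed from a T-by-N panel of forecast errors as in the following Python sketch. The function is only illustrative of the calculation, not of the chapter's full testing procedure, and it assumes a balanced panel held in a NumPy array.

```python
import numpy as np
from itertools import combinations

def pesaran_cd(errors):
    """Pesaran (2004) CD statistic for cross-sectional dependence.

    errors: T-by-N array of forecast errors (rows = survey dates, columns =
    forecasters or industry groups); a balanced panel is assumed. Under the
    null of no cross-sectional dependence, CD is asymptotically N(0, 1).
    """
    T, N = errors.shape
    demeaned = errors - errors.mean(axis=0)
    rho_sum = 0.0
    for i, j in combinations(range(N), 2):
        num = np.sum(demeaned[:, i] * demeaned[:, j])
        den = np.sqrt(np.sum(demeaned[:, i] ** 2) * np.sum(demeaned[:, j] ** 2))
        rho_sum += num / den                   # pairwise correlation rho_ij
    return np.sqrt(2.0 * T / (N * (N - 1))) * rho_sum
```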
43.2 Background: Testing Rationality in the Foreign Exchange Market
The Rational Expectations Hypothesis (REH) assumes that economic agents know the true data-generating process (DGP) for the forecast variable. This implies that the market's subjective probability distribution of the variable is identical to the objective probability distribution, conditional on a given information set, F_t. Equating first moments of the market, E^m(s_{t+h}|F_t), and objective, E(s_{t+h}|F_t), distributions,

E^m(s_{t+h}|F_t) = E(s_{t+h}|F_t),   (43.1)
where the right-hand side can be shortened to E_t(s_{t+h}). It follows that the REH implies that forecast errors have both unconditional and conditional means equal to zero. A forecast is unbiased if its forecast error has an unconditional mean of zero. A forecast is efficient if its error has a conditional mean of zero. The condition that forecast errors be serially uncorrelated is a subset of the efficiency condition where the conditioning information set consists of past values of the realization and current as well as past values of the forecast.4 In this chapter we focus on testing whether forecasters can form rational expectations of future depreciation. If not, then at least part of the explanation for the failure of the RNEMH is due to the failure of the REH. There are two related interest parity conditions. Covered interest parity, an arbitrage condition, holds if f_{t,h} - s_t = i_t - i_t^*, i.e., the forward premium is equal to the interest differential between domestic and foreign risk-free assets. Uncovered interest parity holds if s^e_{t+h} - s_t = i_t - i_t^*. Because uncovered interest parity assumes both unbiased expectations and risk neutrality, some authors view it as equivalent to the RNEMH (see Phillips and Maynard 2001). The ability to decompose deviations from UIP into time-varying risk premium and systematic forecast error components also has implications for policymakers. Consider first the possibility of a violation of the risk neutrality hypothesis. According to the portfolio balance model, if a statistically significant time-varying risk premium component is found, this means that i_t - i_t^* is time-varying, which in turn implies that foreign and domestic bonds are not perfect substitutes; changes in relative quantities (which are reflected in changes in current account balances) will affect the interest rate differential. In this way, sterilized official intervention can have significant effects on exchange rates. Second, consider the possibility of a violation of the REH. If a statistically significant expectational error of the destabilizing (e.g., "bandwagon") type is found, and policymakers are more rational than speculators, a policy of "leaning against the wind" could have a stabilizing
4
It is important to note that the result from one type of rationality test does not have implications for the results from any other types of rationality tests. In this chapter we test for unbiasedness and weak efficiency, leaving the more stringent tests of efficiency with respect to publicly available information for future analysis.
effect on exchange rate movements. (See Cavaglia et al. 1994.) More generally, monetary models of the exchange rate (in which the UIP condition is embedded), which assume model-consistent (i.e., rational) expectations with risk neutrality, generally have not performed well empirically, especially in out-of-sample forecasting. (See, e.g., Bryant 1995.) One would like to be able to attribute the model failure to some combination of a failure of the structural assumptions (including risk neutrality) and a failure of the expectational assumption.
43.2.1 Why Test Rational Expectations with Disaggregated Survey Forecast Data?

Beginning with Frankel and Froot (1987) and Froot and Frankel (1989), much of the literature examining exchange rate rationality in general, and the decomposition of deviations from the RNEMH in particular, has employed the representative agent assumption to justify using the mean or median survey forecast as a proxy for the market's expectation. In both studies, Frankel and Froot found significant evidence of irrationality. Subsequent research has found mixed results. Liu and Maddala (1992, p. 366) articulate the mainstream justification for using aggregated forecasts in tests of the REH. "Although . . . data on individuals are important to throw light on how expectations are formed at the individual level, to analyze issues relating to market efficiency, one has to resort to aggregates." In fact, Muth's (1961, p. 316) original definition of rational expectations seemed to allow for the possibility that rationality could be applied to an aggregate (e.g., mean or median) forecast. '. . . [E]xpectations of firms (or, more generally, the subjective probability distribution of outcomes) tend to be distributed, for the same information set, about the predictions of the theory (or the "objective" probability distribution of outcomes).' (Emphasis added.) However, if individual forecasters have different information sets, Muth's definition does not apply. To take the simplest example, the (current) mean forecast is not in any forecaster's information set, since all individuals' forecasts must be made before a mean can be calculated. Thus, current mean forecasts contain private information (see MacDonald 1992) and therefore cannot be tested for rationality.5 Using the mean forecast may also result in inconsistent parameter estimates. Figlewski and Wachtel (1983) were the first to show that, in the traditional bivariate
5
A large theoretical literature relaxes Muth's assumption that all information relevant for forming a rational forecast is publicly available. Instead, this literature examines how heterogeneous individual expectations are mapped into an aggregate market expectation, and whether the latter leads to market efficiency. (See, e.g., Figlewski 1978, 1982, 1984; Kirman 1992; Haltiwanger and Waldman 1989.) Our paper focuses on individual rationality but allows for the possibility of synergism by incorporating not only heteroscedasticity and autocorrelation consistent standard errors in individual rationality tests but also cross-forecaster correlation in tests of micro-homogeneity. The extreme informational requirement of the REH led Pesaran and Weale (2006) to propose a weaker form of the REH that is based on the (weighted) average expectation using only publicly available (i.e., common) information.
unbiasedness equation, the presence of private information variables in the mean forecast error sets up a correlation with the mean forecast. This inconsistency occurs even if all individual forecasts are rational. In addition, Keane and Runkle (1990) pointed out that, when some forecasters are irrational, using the mean forecast may lead to false acceptance of the unbiasedness hypothesis, in the unlikely event that offsetting individual biases allow parameters to be consistently estimated. See also Bonham and Cohen (2001), who argue that, in the case of cointegrated targets and predictions, inconsistency of estimates in rationality tests using the mean forecast can be avoided if corresponding coefficients in the individual rationality tests pass a test for micro-homogeneity.6 Nevertheless, until the 1990s, few researchers tested for the rationality of individual forecasts, even when those data were available.
43.2.2 Rational Reasons for the Failure of the Rational Expectations Hypothesis Using Disaggregated Data

Other than a failure to process available information efficiently, there are numerous explanations for a rejection of the REH. One set of reasons relates to measurement error in the individual forecast. Researchers have long recognized that forecasts of economic variables collected from public opinion surveys should be less informed than those sampled from industry participants. However, industry participants, while relatively knowledgeable, may not be properly motivated to devote the time and resources necessary to elicit their best responses. The opposite is also possible.7 Having devoted substantial resources to produce a forecast of the price of a widely traded asset, such as foreign exchange, forecasters may be reluctant to reveal their true forecast before they have had a chance to trade for their own account.8 Second, some forecasters may not have the symmetric quadratic loss function embodied in typical measures of forecast accuracy, e.g., minimum mean squared error. (See Zellner 1986; Stockman 1987; Batchelor and Peel 1998.) In this case, the optimal forecast may not be the minimum-MSE forecast. In one scenario, related to
The extent to which private information influences forecasts is more controversial in the foreign exchange market than in the equity or bond markets. While Chionis and MacDonald (1997) maintain that there is little or no private information in the foreign exchange market, Lyons (2002) argues that order flow explains much of the variation in prices. To the extent that one agrees with the market microstructure emphasis on the importance of the private information embodied in dealer order flow, the Figlewski-Wachtel critique remains valid in the returns regression. 7 Elliott and Ito (1999) show that, although a random walk forecast frequently outperforms the JCIF survey forecasts using an MSE criterion, survey forecasts generally outperform the random walk, based on an excess profits criterion. This supports the contention that JCIF forecasters are properly motivated to produce their best forecasts. 8 To mitigate the confidentiality problem in this case, the survey typically withholds individual forecasts until the realization is known or (as with the JCIF) masks the individual forecast by only reporting some aggregate forecast (at the industry and total level) to the public.
the incentive aspect of the measurement error problem, forecasters may have strategic incentives involving product differentiation.9 In addition to strategic behavior, another scenario in which forecasters may deviate from the symmetric quadratic loss function is simply to maximize trading profits. This requires predicting the direction of change, regardless of MSE.10 Third, despite their best efforts, forecasters may find it difficult to distinguish between a temporary and permanent shift in the DGP. This difficulty underlies at least three theories of rational forecast errors: the peso problem, learning about past regime changes, and bubbles. Below we conduct tests for structural change in estimated unbiasedness coefficients. When unbiasedness cannot be rejected, the structural change test may show certain subperiods in which unbiasedness did not hold. In the obverse case, when unbiasedness can be rejected, the structural change test may show certain subperiods in which unbiasedness cannot be rejected. Either situation would lend some support to the theories attributing bias to the difficulty of distinguishing temporary from permanent shifts.
43.3 Description of Data
Every 2 weeks, the JCIF in Tokyo conducts telephone surveys of yen/dollar exchange rate expectations from 44 firms. The forecasts are for the future spot rate at horizons of 1 month, 3 months, and 6 months. Our data cover the period May 1985 to March 1996. This data set has very few missing observations, making it close to a true panel. For reporting purposes, the JCIF currently groups individual firms into four industry categories: (1) banks and brokers, (2) insurance and trading companies, (3) exporters, and (4) life insurance companies and importers. On the day after the survey, the JCIF announces overall and industry average forecasts. (For further details concerning the JCIF database, see the descriptions in Ito (1990, 1994), Bryant (1995), and Elliott and Ito (1999).) Figure 43.1 shows that, over the sample period (one of flexible exchange rates and no capital controls), the yen appreciated dramatically relative to the dollar, from a spot rate of approximately 270 yen/dollar in May 1985 to approximately 90 yen/dollar in March 1996. The path of appreciation was not steady, however. In the first 2 years of the survey alone, the yen appreciated to about 140 per dollar.
9 Laster et al. (1999) called this practice "rational bias." Prominent references in this growing literature include Lamont (2002), Ehrbeck and Waldmann (1996), and Batchelor and Dua (1990a, b, 1992). Because we have access only to forecasts at the industry average level, we cannot test the strategic incentive hypotheses.
10 See Elliott and Ito (1999), Boothe and Glassman (1987), LeBaron (2000), Leitch and Tanner (1991), Lai (1990), Goldberg and Frydman (1996), and Pilbeam (1995). This type of loss function may appear to be relevant only for relatively liquid assets such as foreign exchange, but not for macroeconomic flows. However, the directional goal is also used in models to predict business cycle turning points. Also, trends in financial engineering may lead to the creation of derivative contracts in macroeconomic variables, e.g., CPI futures.
Fig. 43.1 Yen-dollar exchange rate (spot rate s_t) versus time
The initial rapid appreciation of the yen is generally attributed to the Plaza meeting in September 1985, in which the Group of Five countries decided to let the dollar depreciate, relative to the other currencies. At the Louvre meeting in February 1987, the Group of Seven agreed to stabilize exchange rates by establishing soft target zones. These meetings may well be interpreted as unanticipated regime changes, since, as we will see below, forecasters generally underestimated the rapid appreciation following the Plaza meeting, then overestimated the value of the yen following the Louvre meeting. Thus, forecasts during these periods may have been subject to peso and learning problems. The period of stabilization lasted until about 1990, when yen appreciation resumed and continued through the end of the sample period.
43.4 Empirical Tests of Rationality
Early studies of the unbiasedness aspect of rationality regressed the level of the realization on the level of the forecast, testing the joint hypothesis that the intercept equalled zero and the slope equalled one.11 However, since many macroeconomic variables have unit roots, and realizations and forecasts typically share a common stochastic trend, a rational forecast will be integrated and cointegrated with the target series. (See Granger 1991, pp. 69–70.) According to the modern theory of regressions
11 The efficiency aspect of rationality is sometimes tested by including additional variables in the forecaster's information set, with corresponding hypotheses of zero coefficients on these variables. See, e.g., Keane and Runkle (1990) for a more recent study using the level specification and Bonham and Cohen (1995) for a critique of Keane and Runkle's integration accounting.
with integrated processes (see, inter alia, Banerjee et al. 1993), conventional OLS estimation and inference produce a slope coefficient that is biased toward one and, therefore, a test statistic that is biased toward accepting the null of unbiasedness. The second generation of studies of unbiasedness addressed this inference problem by subtracting the current realization from the forecast as well as from the future realization, transforming the levels regression into a "changes" regression. In this specification of stationary variables, unbiasedness was still tested using the same (0, 1) joint hypothesis as in the levels regression. (Ito (1990) is an example of this methodology.) However, an implication of Engle and Granger (1987) is that the levels regression is now interpreted as a cointegrating regression, with conventional t-statistics following nonstandard distributions which depend on nuisance parameters. After establishing that the realization and forecast are integrated and cointegrated, we perform two types of rationality tests. The first is a "restricted cointegration" test due to Liu and Maddala (1992). This is a cointegration test imposing the (0, 1) restriction on the levels regression. It is significant that, if realization and forecast are cointegrated, Liu and Maddala's (1992) technique is equivalent to regressing a stationary forecast error on a constant and then testing whether the coefficient equals zero (to test unbiasedness) and/or whether the residuals are white noise (to test a type of weak efficiency). Pretests for unit roots in the realization, forecast, and forecast error are required for at least three reasons. First, univariate tests of unbiasedness are invalid if the forecast error is not stationary. Second, following Holden and Peel (1990), we show below (in Sect. 43.4.1.1) that nonrejection of the joint test in the bivariate regression is sufficient but not necessary for unbiasedness, since the joint test is also an implicit test of weak efficiency with respect to the lagged forecast error. A zero intercept in the (correctly specified) univariate test is a necessary as well as sufficient condition for unbiasedness. Third, the Engle and Granger (1987) representation theorem proves that a cointegrating regression such as the levels joint regression (Eq. 43.2 below) has an error correction form that includes both differenced variables and an error correction term in levels. Under the joint null, the error correction term is the forecast error. While the change form of the bivariate regression is not, strictly speaking, misspecified (since the regressor subtracts $s_t$, not $s_{t-1}$, from $s^{e}_t$), the ECM specification may produce a better fit to the data and, therefore, a more powerful test of the unbiasedness restrictions. We conduct such tests using a form of the ECM due to Hakkio and Rush (1989).
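To see the inference problem in the levels regression, consider a small simulation (ours, not the chapter's): even a "forecast" that merely adds noise to the current spot rate yields a levels slope close to one, because realization and forecast share the same stochastic trend.

```python
# Simulation sketch: an integrated target and an uninformative forecast that only
# tracks the current spot rate still produce a levels-regression slope near one.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
T, h = 300, 2
s = np.cumsum(rng.normal(scale=0.03, size=T))   # random-walk log spot rate
forecast = s + rng.normal(scale=0.03, size=T)   # current spot plus noise, no foresight

res = sm.OLS(s[h:], sm.add_constant(forecast[:-h])).fit()
print(res.params)  # slope is typically very close to one despite the forecast's lack of content
```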
43.4.1 Joint Tests of Unbiasedness and Weak Efficiency

43.4.1.1 The Lack of Necessity Critique
Many, perhaps most, empirical tests of the "unbiasedness" of survey forecasts are conducted using the bivariate regression equation

$s_{t+h} - s_t = a_{i,h} + b_{i,h}\,(s^{e}_{i,t,h} - s_t) + e_{i,t,h}$   (43.2)

It is typical for researchers to interpret their nonrejection of the joint null (a_{i,h}, b_{i,h}) = (0, 1) as a necessary condition for unbiasedness. However,
Fig. 43.2 Unbiasedness with a ≠ 0, b ≠ 1 (s_{t+h} − s_t plotted against s^e_{i,t,h} − s_t, with the 45° ray)
Holden and Peel (1990) show that this result is a sufficient, though not a necessary, condition for unbiasedness. The intuition for the lack of necessity comes from interpreting the right-hand side of the bivariate unbiasedness regression as a linear combination of two potentially unbiased forecasts: a constant equal to the unconditional mean forecast plus a variable forecast, i.e., $s_{t+h} - s_t = (1 - b_{i,h})\,E(s^{e}_{i,t,h} - s_t) + b_{i,h}\,(s^{e}_{i,t,h} - s_t) + e_{i,t,h}$. Then the intercept is $a_{i,h} = (1 - b_{i,h})\,E(s^{e}_{i,t,h} - s_t)$. The necessary and sufficient condition for unbiasedness is that the unconditional mean of the subjective expectation, E[s^e_{i,t,h} − s_t], equal the unconditional mean of the objective expectation, E[s_{t+h} − s_t]. However, this equality can be satisfied without a_{i,h} being equal to zero (equivalently, without b_{i,h} = 1). Figure 43.2 shows that an infinite number of (a_{i,h}, b_{i,h}) estimates are consistent with unbiasedness. The only constraint is that the regression line intersects the 45° ray from the origin at the point where the sample means of the forecast and target are equal. Note that, in the case of differenced variables, this can occur at the origin, so that a_{i,h} = 0, but b_{i,h} is unrestricted (see Fig. 43.3). It is easy to see why unbiasedness holds: in Figs. 43.2 and 43.3 the sum of all horizontal deviations from the 45° line to the regression line, i.e., the forecast errors, equals zero. However, when a_{i,h} ≠ 0 and a_{i,h} ≠ (1 − b_{i,h}) E(s^e_{i,t,h} − s_t), there is bias regardless of the value of b_{i,h}. See Fig. 43.4, where the bias, E(s_{t+h} − s^e_{i,t,h}), implies systematic underforecasts.
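The lack-of-necessity point is easy to reproduce numerically. The simulation below (our illustration, with made-up parameters) generates a forecast that is unbiased but uninformative: the mean forecast error is essentially zero, yet the joint test of (a, b) = (0, 1) in the bivariate regression rejects decisively.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
T, mu = 2000, 0.01
actual = mu + rng.normal(scale=0.03, size=T)     # realized depreciation
forecast = mu + rng.normal(scale=0.03, size=T)   # unbiased but uninformative forecast

print("mean forecast error:", (actual - forecast).mean())   # approximately zero

res = sm.OLS(actual, sm.add_constant(forecast)).fit()
print(res.params)                        # intercept near mu, slope near zero
print(res.f_test("const = 0, x1 = 1"))   # joint (0, 1) test rejects despite unbiasedness
```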
Fig. 43.3 Unbiasedness with a = 0, b ≠ 1
Fig. 43.4 Bias with a ≠ 0, b = 1
To investigate the rationality implications of different values for a_{i,h} and b_{i,h}, we follow Clements and Hendry (1998) and rewrite the forecast error in the bivariate regression framework of Eq. 43.2 as

$\varepsilon_{i,t,h} = s_{t+h} - s^{e}_{i,t,h} = a_{i,h} + (b_{i,h} - 1)\,(s^{e}_{i,t,h} - s_t) + e_{i,t,h}$   (43.3)

A special case of weak efficiency occurs when the forecast and forecast error are uncorrelated, i.e.,

$E[\varepsilon_{i,t,h}\,(s^{e}_{i,t,h} - s_t)] = 0 = a_{i,h}\,E[s^{e}_{i,t,h} - s_t] + (b_{i,h} - 1)\,E[(s^{e}_{i,t,h} - s_t)^2] + E[e_{i,t,h}\,(s^{e}_{i,t,h} - s_t)]$   (43.4)

Thus, satisfaction of the joint hypothesis (a_{i,h}, b_{i,h}) = (0, 1) is also sufficient for weak efficiency with respect to the current forecast. However, it should be noted that Eq. 43.4 may still hold even if the joint hypothesis is rejected. Thus, satisfaction of the joint hypothesis represents sufficient conditions for both unbiasedness and this type of weak efficiency, but necessary conditions for neither. If b_{i,h} = 1, then, whether or not a_{i,h} = 0, the variance of the forecast error equals the variance of the bivariate regression residual, since then $var(\varepsilon_{i,t,h}) = (b_{i,h} - 1)^2\,var(s^{e}_{i,t,h} - s_t) + var(e_{i,t,h}) + 2(b_{i,h} - 1)\,cov[(s^{e}_{i,t,h} - s_t), e_{i,t,h}] = var(e_{i,t,h})$. Figure 43.4 illustrates this point. Mincer and Zarnowitz (1969) required only that b_{i,h} = 1 in their definition of forecast efficiency. If, in addition to b_{i,h} = 1, a_{i,h} = 0, then the mean square forecast error also equals the variance of the forecast error. Mincer and Zarnowitz emphasized that, as long as the loss function is symmetric, as is the case with a minimum mean square error criterion, satisfaction of the joint hypothesis implies optimality of forecasts.
43.4.1.2 Empirical Results of Joint Tests
Since Hansen and Hodrick (1980), researchers have recognized that, when data are sampled more frequently than the forecast horizon (h), forecast errors may follow an (h-1)-period moving average process. The typical procedure has been to use a variance-covariance matrix which allows for generalized serial correlation. Throughout this chapter, we use the Newey-West (1987) procedure, with the number of lagged residuals set to h-1. To ensure a positive semi-definite VCV matrix, we use a Bartlett window (see Hamilton 1994, pp. 281–84). In Tables 43.1, 43.2, and 43.3 we report results for the joint unbiasedness tests. We reject the joint hypothesis (a_{i,h}, b_{i,h}) = (0, 1) at the 5 % significance level for all groups except banks and brokers at the 1-month horizon (indicating the possible role of inefficiency with respect to the current forecast), but only for the exporters at the 3- and 6-month horizons. Now consider the results of the separate tests of the joint hypothesis. The significance of the a_{i,h}'s in the joint regressions (Eq. 43.2) generally deteriorates
Table 43.1 Joint unbiasedness tests (1-month forecasts) Individual regressions st + h st ¼ ai,h + bi,h(sei;t;h st) + ei,t,h for h ¼ 2 Degrees of freedom ¼ 260 i¼1 i¼2 i¼3 Banks and Insurance and brokers trading companies Export industries ai,h 0.003 0.004 0.007 t (NW) 1.123 1.428 2.732 p-value 0.262 0.153 0.006 bi,h 0.437 0.289 0.318 t (NW) 1.674 1.382 1.237 R2 0.014 0.008 0.007 H0 : bi,h ¼ 1, for i ¼ 1,2,3,4 w2 4.666 4.666 4.666 p-value 0.031 0.001 0.000 Unbiasedness tests: H0 : ai,h ¼ 0, bi,h ¼ 1, for i ¼ 1,2,3,4 w2(NW) 4.696 11.682 29.546 p-value 0.096 0.003 0.000 MH tests H0 : ai,h ¼ aj,h, bi,h ¼ bj,h for all i, j 6¼ i w2(GMM) 9.689 p-value 0.138
(43.2) i¼4 Life insurance and import companies 0.006 1.903 0.057 0.008 0.038 0.000 4.666 0.000 19.561 0.000
See Appendix 1 for structure of GMM variance-covariance matrix
with horizon. There are only two rejections at the 5 % level for each of the two shorter horizons. However, the a_{i,h}'s are all rejected at the 6.7 % significance level for the 6-month horizon. The test results for the b_{i,h}'s follow the opposite pattern with respect to horizon. The null that b_{i,h} = 1 is rejected for all groups at the 1-month horizon, but only for the exporters at the 3-month horizon. There are no rejections at the 6-month horizon. Thus, it appears that the pattern of rejection of the joint hypothesis is predominantly influenced by tests of whether the slope coefficient equals one. In particular, tests of the joint hypothesis at the 1-month horizon are rejected due to failure of this type of weak efficiency, not simple unbiasedness. For this reason, Mincer and Zarnowitz (1969) and Holden and Peel (1990) suggest that, if one begins by testing the joint hypothesis, rejections in this first stage should be followed by tests of the simple unbiasedness hypothesis in a second stage. Only if unbiasedness is rejected in this second stage should one conclude that forecasts are biased. For reasons described below (in Sect. 43.4.2), our treatment eliminates the first stage, so that unbiasedness and weak efficiency are separately assessed using the forecast error as the dependent variable. Finding greater efficiency at the longer horizon is unusual, because forecasting difficulty is usually thought to increase with horizon. However, the longer horizon result may not be as conclusive as the b_{i,h} statistics suggest. For all tests at all horizons, in only one case can the null hypothesis that b_{i,h} equals zero not be rejected. Thus, for the longer two horizons (with just the one exception for exporters at the
Table 43.2 Joint unbiasedness tests (3-month forecasts) Individual regressions st + h st ¼ ai,h + bi,h(sei;t;h st) + ei,t,h for h ¼ 6 Degrees of freedom ¼ 256 i¼1 i¼2 i¼3 Banks and Insurance and brokers trading companies Export industries ai,h 0.013 0.011 0.017 t (NW) 1.537 1.362 2.060 p-value 0.124 0.173 0.039 bi,h 0.521 0.611 0.082 t (NW) 1.268 1.868 0.215 R2 0.018 0.026 0.001 H0 : bi,h ¼ 1, for i ¼ 1, 2, 3, 4 w2 1.362 1.415 5.822 p-value 0.243 0.234 0.016 Unbiasedness tests: H0 : ai,h ¼ 0, bi,h ¼ 1, for i ¼ 1, 2, 3, 4 w2(NW) 2.946 2.691 11.156 p-value 0.229 0.260 0.004 MH tests H0 : ai,h ¼ aj,h, bi,h ¼ bj,h for all i, j 6¼ i w2(GMM) 5.783 p-value 0.448
(43.2) i¼4 Life insurance and import companies 0.014 1.517 0.129 0.484 1.231 0.016 1.728 0.189 3.023 0.221
See Appendix 1 for structure of GMM variance-covariance matrix
3-month horizon), hypothesis testing cannot distinguish between the null hypotheses that b_{i,h} equals one or zero. Therefore, we cannot conclude that weak efficiency with respect to the current forecast holds while unbiasedness may not. The failure to precisely estimate the slope coefficient also produces R2s that are below 0.05 in all regressions.12 The conclusion is that testing only the joint hypothesis has the potential to obscure the difference in performance between the unbiasedness and weak efficiency tests. This conclusion is reinforced by an examination of Figs. 43.5, 43.6, and 43.7, the scatterplots and regression lines for the bivariate regressions.13 All three scatterplots have a strong vertical orientation. With this type of data, it is easy to find the vertical midpoint and test whether it is different from zero. Thus, (one-parameter) tests of simple unbiasedness are feasible. However, it is difficult to fit a precisely
12 As we report in Sect. 43.5, this lack of power is at least consistent with the failure to reject micro-homogeneity at all three horizons.
13 Note that, for illustrative purposes only, we compute the expectational variable as the four-group average percentage change in the forecast. However, recall that, despite the failure to reject micro-homogeneity at any horizon, the Figlewski-Wachtel critique implies that these parameter estimates are inconsistent in the presence of private information. (See the last paragraph in this subsection.)
Table 43.3 Joint unbiasedness tests (6-month forecasts) Individual regressions st + h st ¼ ai,h + bi,h(sei;t;h st) + ei,t,h for h ¼ 12 Degrees of freedom ¼ 256 i¼1 i¼2 i¼3 Banks and Insurance and brokers trading companies Export industries ai,h 0.032 0.032 0.039 t (NW) 1.879 1.831 2.099 p-value 0.060 0.067 0.036 bi,h 0.413 0.822 0.460 t (NW) 0.761 1.529 0.911 R2 0.01 0.044 0.021 H0 : bi,h ¼ 1, for i ¼ 1, 2, 3, 4 w2 1.166 0.110 1.147 p-value 0.280 0.740 0.284 Unbiasedness tests: H0 : ai,h ¼ 0, bi,h ¼ 1, for i ¼ 1, 2, 3, 4 w2(NW) 4.332 3.5 7.899 p-value 0.115 0.174 0.019 MH tests H0 : ai,h ¼ aj,h, bi,h ¼ bj,h for all i, j 6¼ i w2(GMM) 7.071 p-value 0.314
(43.2) i¼4 Life insurance and import companies 0.034 1.957 0.050 0.399 0.168 0.012 1.564 0.211 5.006 0.082
See Appendix 1 for structure of GMM variance-covariance matrix
estimated regression line to this scatter, because the small variation in the forecast variable inflates the standard error of the slope coefficient. This explains why the b_{i,h}'s are so imprecisely estimated that the null hypotheses that b_{i,h} = 1 and b_{i,h} = 0 are simultaneously not rejected. This also explains why the R2s are so low. Thus, examination of the scatterplots also reveals why bivariate regressions are potentially misleading about weak efficiency as well as simple unbiasedness. Therefore, in contrast to both Mincer and Zarnowitz (1969) and Holden and Peel (1990), we prefer to separate tests for unbiasedness from tests for (all types of) weak efficiency at the initial stage. This obviates the need for a joint test. In the next section, we conduct such tests, making use of cointegration between forecast and realization where it exists.14 More fundamentally, the relatively vertical scatter of the regression observations around the origin is consistent with an approximately unbiased forecast of a random
14 However, in the general case of biased and/or inefficient forecasts, Mincer and Zarnowitz (1969, p. 11) also viewed the bivariate regression "as a method of correcting the forecasts ... to improve [their] accuracy ... Theil (1966, p. 33) called it the 'optimal linear correction.'" That is, the correction would involve (1) subtracting a_{i,h} and then (2) multiplying by 1/b_{i,h}. Graphically, this is a translation of the regression line followed by a rotation, until the regression line coincides with the 45° line.
Fig. 43.5 Actual versus expected depreciation, 1-month-ahead forecast (s^e_{i,t,h} − s_t on the horizontal axis, s_{t+h} − s_t on the vertical axis)
Fig. 43.6 Actual versus expected depreciation, 3-month-ahead forecast
Fig. 43.7 Actual versus expected depreciation, 6-month-ahead forecast
walk in exchange rate levels.15 In Figs. 43.8, 43.9, and 43.10, we observe a corresponding time series pattern of variation between the forecasts and realizations in return form. As Bryant lamented in reporting corresponding regressions using a shorter sample from the JCIF, "the regression ... is ... not one to send home proudly to grandmother" (Bryant 1995, p. 51). He drew the conclusion that "analysts should have little confidence in a model specification [e.g., uncovered interest parity] setting [the average forecast] exactly equal to the next-period value of the model ... [M]odel-consistent expectations ... presume a type of forward-looking behavior [e.g., weak efficiency] that is not consistent with survey data on expectations" (Bryant 1995, p. 40).
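As an illustration of the joint test just discussed, the sketch below estimates Eq. 43.2 for one group with Newey-West (Bartlett) standard errors using h − 1 lags and applies the Wald test of (a, b) = (0, 1). It is a minimal sketch, assuming the hypothetical file and column names introduced earlier, not the authors' code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

h = 2
df = pd.read_csv("jcif_survey.csv", parse_dates=["date"], index_col="date")  # hypothetical file
s = np.log(df["spot"])
se = np.log(df["forecast_banks"])                  # banks-and-brokers forecast (assumed column)

reg = pd.DataFrame({"dep": s.shift(-h) - s,        # s_{t+h} - s_t
                    "fdev": se - s}).dropna()      # s^e_{i,t,h} - s_t
res = smf.ols("dep ~ fdev", data=reg).fit(cov_type="HAC", cov_kwds={"maxlags": h - 1})
print(res.wald_test("Intercept = 0, fdev = 1", use_f=False))  # joint null (a, b) = (0, 1)
```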
43.4.2 Pretests for Rationality: The Stationarity of the Forecast Error

To test the null hypothesis of a unit root, we estimate the augmented Dickey-Fuller (1979) (ADF) regression

$\Delta y_{t+1} = a + b\,y_t + g\,t + \sum_{k=1}^{p} \theta_k\,\Delta y_{t+1-k} + e_{t+1}$   (43.5)
15 Other researchers (e.g., Bryant 1995) have found similar vertical scatters for regressions where the independent variable, e.g., the forward premium/discount f_{t,h} − s_t, the "exchange risk premium" f_{t,h} − s_{t+h}, or the difference between domestic and foreign interest rates (i − i*), exhibits little variation.
Fig. 43.8 Actual and expected depreciation, 1-month-ahead forecast (s^e_{i,t,h} − s_t and s_{t+h} − s_t plotted against the survey date)
Fig. 43.9 Actual and expected depreciation, 3-month-ahead forecast
where y is, in turn, the level and first difference of the spot exchange rate; the level and first difference of each group forecast; the residual from the (unrestricted) cointegrating regression; and the forecast error (i.e., the residual from the "restricted" cointegrating equation). The number of lagged differences to include in Eq. 43.5
Fig. 43.10 Actual and expected depreciation, 6-month-ahead forecast
is chosen by adding lags until a Lagrange multiplier test fails to reject the null hypothesis of no serial correlation (up to lag 12). We test the null hypothesis of a unit root (i.e., b = 0) with the ADF t and z tests. We also test the joint null hypothesis of a unit root and no linear trend (i.e., b = 0 and g = 0). As can be seen in Tables 43.4, 43.5, and 43.6, we fail to reject the null of a unit root in the log of the spot rate in two of the three unit root tests (the exception being the joint null), but we reject the unit root in the h-th difference for all three horizons. We conclude that the log of the spot rate is integrated of order one. Similarly, we conclude that the log of the forecast of each spot rate is integrated of order one. Thus, we can conduct cointegration tests on the spot rate and each corresponding forecast. The null of a unit root in the (unrestricted) residual in the "cointegrating regression" is rejected at the 10 % level or less for all groups and horizons except group three (exporters) at the 6-month horizon. Thus, we can immediately reject unbiasedness for the latter group and horizon. Next, since a stationary forecast error is a necessary condition for unbiasedness, we test for unbiasedness (as well as weak efficiency) in levels using Liu and Maddala's (1992) method of "restricted cointegration." This specification imposes the joint restriction a_{i,h} = 0, b_{i,h} = 1 on the bivariate regression

$s_{t+h} = a_{i,h} + b_{i,h}\,s^{e}_{i,t,h} + e_{i,t,h}$   (43.6)

and tests whether the residual (the forecast error) is nonstationary. In a bivariate regression, any cointegrating vector is unique. Therefore, if we find that the forecast
Table 43.4 Unit root tests (1-month forecasts h ¼ 2) Dyt ¼ a + byt1 + gt + ∑pk¼1 ytDytk + et for h ¼ 2 Lags ADF t test Log of spot rate (n ¼ 276) 0 2.828 Hth difference log spot rate 12 4.306*** Group 1 Banks and brokers Log of forecast 0 2.274* Hth difference log forecast 0 7.752*** Forecast error Restricted CI eq. 1 11.325*** Unrestricted CI eq. 1 65.581*** Group 2 Insurance and trading companies Log of forecast 0 2.735* Hth difference log forecast 0 7.895*** Forecast error Restricted CI eq. 1 11.624*** Unrestricted CI eq. 1 11.750*** Group 3 Export industries Log of forecast 1 2.372 Hth difference log forecast 0 8.346*** Forecast error Restricted CI eq. 1 10.324*** Unrestricted CI eq. 1 10.392*** Group 4 Life insurance and import companies Log of forecast 0 2.726* Hth difference log forecast 1 5.216*** Forecast error Restricted CI eq. 1 10.977*** Unrestricted CI eq. 1 10.979***
ADF z test 9.660 167.473***
(43.5) Joint test 6.820** 9.311***
5.072 96.639***
6.327** 30.085***
249.389***
64.676***
5.149 94.986***
6.705*** 31.252***
270.302***
68.053***
4.806 111.632***
5.045** 34.889***
211.475***
53.757***
5.009 52.911***
6.438** 3.630***
231.837***
60.820***
* Rejection at 10 % level; ** Rejection at 5 % level; *** Rejection at 1 % level
errors are stationary, then the joint restriction is not rejected, and (0, 1) must be the unique cointegrating vector.16 The advantage of the one-step restricted cointegration is that if the joint hypothesis is true, then tests which impose this cointegrating vector have greater power than those which estimate a cointegrating vector. See, e.g., Maynard and Phillips (2001). Note that the Holden and Peel (1990) critique does not apply in the I(1) case, because the intercept cannot be an unbiased forecast of a nonstationary variable. Thus, the cointegrating regression line of the level realization on the level forecast 16
It is also possible to estimate the cointegrating parameters and jointly test whether they are zero and one. A variety of methods, such as those due to Saikkonen (1991) or Phillips and Hansen (1990), exist that allow for inference in cointegrated bivariate regressions.
1216
R. Cohen et al.
Table 43.5 Unit root tests (3-month forecasts h ¼ 6) Dyt ¼ a + byt 1 + gt + ∑pk¼1 ytDyt k + et for h ¼ 6 Lags 0 2
Log of spot rate (n ¼ 276) Hth difference log spot rate Group 1 Banks and brokers Log of forecast 0 Hth difference log forecast 0 Forecast error Restricted CI eq. 6 Unrestricted CI eq. 6 Group 2 Insurance and trading companies Log of forecast 0 Hth difference log forecast 0 Forecast error Restricted CI eq. 6 Unrestricted CI eq. 2 Group 3 Export industries Log of forecast 0 Hth difference log forecast 1 Forecast error Restricted CI eq. 6 Unrestricted CI eq. 5 Group 4 Life insurance and import companies Log of forecast 0 Hth difference log forecast 1 Forecast error Restricted CI eq. 5 Unrestricted CI eq. 4
ADF t test 2.828 4.760***
ADF z test 9.660 49.769***
(43.5) Joint test 6.820** 11.351***
2.840* 5.092***
4.852 48.707***
7.610*** 12.990***
3.022** 3.343*
29.429***
4.673**
2.778* 6.514***
4.533 71.931***
8.858*** 21.588***
3.068** 4.539***
31.038***
4.956**
3.105** 4.677***
4.549 41.524***
9.090*** 10.944***
3.317** 5.115***
31.207***
5.659**
2.863* 4.324***
4.400 39.870***
8.161*** 9.352***
4.825*** 5.123***
118.586***
11.679***
* Rejection at 10 % level; ** Rejection at 5 % level; *** Rejection at 1 % level
must have both a = 0 and b = 1 for unbiasedness to hold. This differs from Fig. 43.3, the scatterplot in differences, where a_{i,h} = 0 but b_{i,h} ≠ 1. Intuitively, the reason for the difference in results is that the scatterplot in levels must lie in the first quadrant, i.e., no negative values of the forecast or realization. At the 1-month horizon, the null of a unit root in the residual of the restricted cointegrating regression (i.e., the forecast error) is rejected at the 1 % level for all groups. We find nearly identical results at the 3-month horizon; the null of a unit root in the forecast error is rejected at the 5 % level for all groups. Thus, for these regressions we can conduct rationality tests by regressing the forecast error on a constant (hypothesized equal to zero for unbiasedness) and other information set variables (whose coefficients are hypothesized equal to zero for efficiency). (Recall just above that we failed to reject the null of a unit root in the unrestricted residual
Table 43.6 Unit root tests (6-month forecasts h ¼ 12) Dyt ¼ a + byt 1 + gt + ∑pk¼1 ytDyt k + et for h ¼ 12 Lags ADF t test Log of spot rate (n ¼ 276) 0 2.828 Hth difference log spot rate 17 3.189** Group 1 Banks and brokers Log of forecast 0 2.947** Hth difference log forecast 0 4.772*** Forecast error Restricted CI eq. 1 2.373 Unrestricted CI eq. 7 3.285** Group 2 Insurance and trading companies Log of forecast 0 2.933** Hth difference log forecast 0 6.007*** Forecast error Restricted CI eq. 1 2.114 Unrestricted CI eq. 1 2.684* Group 3 Export industries Log of forecast 0 3.246** Hth difference log forecast 12 4.961*** Forecast error Restricted CI eq. 12 1.515 Unrestricted CI eq. 0 2.931 Group 4 Life insurance and import companies Log of forecast 0 3.133** Hth difference log forecast 0 4.795*** Forecast error Restricted CI eq. 2 2.508 Unrestricted CI eq. 1 2.851*
ADF z test 9.660 26.210***
(43.5) Joint test 6.820** 5.500**
4.254 44.018***
9.131*** 11.389***
13.577*
4.004 64.923*** 11.464*
4.059 44.532*** 5.601
4.196 44.062*** 14.535**
2.947
9.531*** 18.044*** 2.399
10.704*** 12.331*** 1.466
9.549*** 11.537*** 3.148
* Rejection at 10 % level; ** Rejection at 5 % level; *** Rejection at 1 % level
for the 6-month forecasts of exporters.) Now, in the case of the restricted residual, the other three groups failed to reject a unit root at the 10 % level in two out of three of the unit root tests.17 (See Figs. 43.11, 43.12, and 43.13.) Thus, in contrast to the results for the two shorter horizons, at the 6-month horizon, the evidence is clearly in favor of a unit root in the forecast error for all four groups. Therefore, we reject the null of simple unbiasedness because a forecast error with a unit root cannot be mean zero. In fact, given our finding of a unit root in the forecast errors, rationality tests regressing the forecast error on a constant and/or other information set variables would be invalid.
17 As expected, exporters failed to reject at the 10 % level in all three tests.
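The pretest for a stationary forecast error can be sketched as follows: with the (0, 1) vector imposed, the "restricted" cointegrating residual is simply the forecast error, which is then subjected to an ADF regression with constant and trend as in Eq. 43.5. The snippet is illustrative (assumed file and columns; statsmodels' AIC lag selection stands in for the chapter's LM-based lag choice).

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller

h = 2
df = pd.read_csv("jcif_survey.csv", parse_dates=["date"], index_col="date")
fe = (np.log(df["spot"]).shift(-h) - np.log(df["forecast_banks"])).dropna()  # s_{t+h} - s^e

stat, pvalue, usedlag, nobs, crit, _ = adfuller(fe, regression="ct", autolag="AIC")
print(f"ADF t = {stat:.3f}, p = {pvalue:.3f}, lags = {usedlag}")
# Rejection of the unit root supports a stationary forecast error, a necessary
# condition for unbiasedness at this horizon.
```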
Fig. 43.11 Actual versus expected exchange rate: 1-month-ahead forecast (s^e_{i,t,h} on the horizontal axis, s_{t+h} on the vertical axis)
43.4.3 Univariate Tests for Unbiasedness

The unbiasedness equation is specified as

$\varepsilon_{i,t,h} = s_{t+h} - s^{e}_{i,t,h} = a_{i,h} + e_{i,t,h},$   (43.7)

where $\varepsilon_{i,t,h}$ is the forecast error of individual i for an h-period-ahead forecast made at time t. The results are reported in Tables 43.7 and 43.8. For the 1-month horizon, unbiasedness cannot be rejected at conventional significance levels for any group. For the 3-month horizon, unbiasedness is rejected only for exporters (at a p-value of 0.03). As we saw in the previous subsection, rationality is rejected for all groups at the 6-month horizon, due to nonstationary forecast errors.18 In these unbiasedness tests, as well as all others, it is possible that coefficient estimates for the entire sample are not stable over subsamples. The lower panels of Tables 43.7 and 43.8 contain results of the test for equality of intercepts in four equal subperiods, each consisting of approximately 75 biweekly forecasts:

$\varepsilon_{i,t,h} = s_{t+h} - s^{e}_{i,t,h} = a_{i,h,1}D_{1,t} + a_{i,h,2}D_{2,t} + a_{i,h,3}D_{3,t} + a_{i,h,4}D_{4,t} + e_{i,t,h},$   (43.8)

where $D_{j,t}$ equals one when observation t falls in subperiod j and zero otherwise.
18 The direction of the bias for exporters is negative; that is, they systematically underestimate the value of the yen, relative to the dollar. Ito (1990) found the same tendency using only the first two years of survey data (1985–1987). He characterized this depreciation bias as a type of "wishful thinking" on the part of exporters.
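A sketch of the univariate test of Eq. 43.7: regress the forecast error on a constant only and judge the intercept with Newey-West (h − 1 lag) standard errors. File and column names remain the hypothetical ones used above.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

h = 2
df = pd.read_csv("jcif_survey.csv", parse_dates=["date"], index_col="date")
fe = (np.log(df["spot"]).shift(-h) - np.log(df["forecast_banks"])).dropna()

res = sm.OLS(fe.values, np.ones((len(fe), 1))).fit(cov_type="HAC",
                                                   cov_kwds={"maxlags": h - 1})
print(res.params[0], res.pvalues[0])   # estimate of a_{i,h} and its Newey-West p-value
```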
Fig. 43.12 Actual versus expected exchange rate: 3-month-ahead forecast
Fig. 43.13 Actual versus expected exchange rate: 6-month-ahead forecast
Table 43.7 Simple unbiasedness tests on individuals (1-month forecasts) st + h sei;t;h ¼ ai,h + ei,t,h h ¼ 2, degrees of freedom ¼ 261 i¼1 i¼2 Banks and Insurance and brokers trading companies ai,h 0.000 0.001 t (NW) 0.115 0.524 p-value 0.909 0.600 MH tests H0 : ai,h ¼ aj,h, for all i, j 6¼ i 41.643 w2(GMM) st + h sei;t;h ¼ ai,h,1 + ai,h,2 + ai,h,3 + ai,h,4 + ei,t,h i¼1 i¼2 Banks and Insurance and 1985:05:29–1988.03:16 brokers trading companies 0.007 0.005 ai,h,1 p-value 0.213 0.353 1988:03:30–1991:01:16 ai,h,2 0.009 0.008 p-value 0.134 0.164 1991:01:29–1993:11:16 ai,h,3 0.002 0.001 p-value 0.598 0.810 1993:11:30–1996:10:15 ai,h,4 0.002 0.004 p-value 0.675 0.433 Structural break tests H0 : ai,h,1 ¼ ai,h,2 ¼ · · · ¼ ai,h,4 w2 4.245 3.267 p-value 0.236 0.352
(43.7) i¼3 Export industries 0.002 0.809 0.418
i¼4 Life insurance and import companies 0.002 0.720 0.472
p-value
0.000
i¼3 Export industries 0.013 0.017
(43.8) i¼4 Life insurance and import companies 0.003 0.573
0.010 0.090
0.010 0.114
0.004 0.374
0.001 0.757
0.001 0.814
0.003 0.519
8.425 0.038
2.946 0.400
See Appendix 1 for structure of GMM variance-covariance matrix
For both 1- and 3-month horizons, all four forecaster groups undervalued the yen in the first and third subperiods. This is understandable, as both these subperiods were characterized by overall yen appreciation. (See Fig. 43.1.) Evidently, forecasters underestimated the degree of appreciation. Exporters were the only group to undervalue the yen in the last subperiod as well, although that was not one of overall yen appreciation. This is another perspective on the "wishful thinking" of exporters.19 The main difference between the two horizons is in the significance of the test for structural breaks. For the 1-month horizon, the estimates of the individual break dummies generally do not reach statistical significance, and the test for their
19 Ito (1994) conducted a similar analysis for the aggregate of all forecasters, but without an explicit test for structural breaks.
Table 43.8 Simple unbiasedness tests on individuals (3-month forecasts) st+h sei;t;h ¼ ai,h + ei,t,h h ¼ 6, degrees of freedom ¼ 257 i¼1 i¼2 Banks and Insurance and brokers trading companies ai,h 0.01 0.008 t (NW) 1.151 0.929 p-value 0.25 0.353 MH tests H0 : ai,h ¼ aj,h, for all i, j 6¼ i w2(GMM) 40.16 st+h sei;t;h ¼ ai,h,1 + ai,h,2 + ai,h,3 + ai,h,4 + ei,t,h
(43.7) i¼3 Export industries 0.019 2.165 0.03
i¼4 Life insurance and import companies 0.01 1.121 0.262
p-value
i¼1 i¼2 Banks and Insurance and 1985:05:29–1988.02:24 brokers trading companies 0.040 0.039 ai,h,1 p-value 0.005 0.007 1988:03:16–1990:12:11 ai,h,2 0.023 0.025 p-value 0.169 0.111 1990:12:25–1993:09:28 ai,h,3 0.020 0.018 p-value 0.064 0.125 1993:10:12–1996:0730 ai,h,4 0.000 0.001 p-value 0.994 0.941 Structural break test H0 : ai,h,1 ¼ ai,h,2 ¼ · · · ¼ ai,h,4 w2 9.319 9.925 p-value 0.025 0.019
0.000
i¼3 Export industries 0.057 0.000
(43.8) i¼4 Life insurance and import companies 0.039 0.008
0.022 0.187
0.021 0.229
0.029 0.023
0.020 0.094
0.013 0.466
0.001 0.970
13.291 0.004
7.987 0.046
See Appendix 1 for structure of GMM variance-covariance matrix
equality rejects only for the exporters. Thus, the exporters’ bias was not constant throughout the sample. In contrast, for the 3-month horizon, the test for no structural breaks is rejected at the 5 % level for all groups, even though unbiasedness itself is rejected for the full sample only for exporters. Even setting aside the bias and variability of exporters’ forecasts, our structural break tests allow us to conclude that there is considerably more variation around roughly zero mean forecast errors at the longer horizon. This probably reflects the additional uncertainty inherent in longer-term forecasts.20
20 This is consistent with the finding of nonstationary forecast errors for all groups at the 6-month horizon.
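The structural-break version of the test (Eq. 43.8) can be sketched by replacing the single constant with four subperiod intercepts and testing their equality with a Wald test. Subperiod boundaries are set mechanically here (four equal-sized blocks); the actual break dates used in Tables 43.7 and 43.8 are those listed in the tables.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

h = 2
df = pd.read_csv("jcif_survey.csv", parse_dates=["date"], index_col="date")
fe = (np.log(df["spot"]).shift(-h) - np.log(df["forecast_banks"])).dropna()

# four equal subperiod dummies D_1 .. D_4
q = pd.qcut(np.arange(len(fe)), 4, labels=["a1", "a2", "a3", "a4"])
D = pd.get_dummies(q).set_index(fe.index).astype(float)

res = sm.OLS(fe.values, D.values).fit(cov_type="HAC", cov_kwds={"maxlags": h - 1})
R = np.array([[1., -1., 0., 0.], [0., 1., -1., 0.], [0., 0., 1., -1.]])
print(res.wald_test((R, np.zeros(3)), use_f=False))  # H0: a1 = a2 = a3 = a4
```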
43.4.4 Unbiasedness Tests Using Error Correction Models

As mentioned at the beginning of the previous subsection, the Error Correction Model provides an alternate specification for representing the relationship between cointegrated variables:

$s_{t+h} - s_t = a_{i,h}\,(s_t - \gamma_{i,h}\,s^{e}_{i,t-h,h}) + b_{i,h}\,(s^{e}_{i,t,h} - s^{e}_{i,t-h,h}) + \delta_i\,(\text{lags of } s_{t+h} - s_t) + \phi_i\,(\text{lags of } s^{e}_{i,t,h} - s^{e}_{i,t-h,h}) + e_{i,t,h}$   (43.9)

According to this specification of the ECM, the change in the spot rate is a function of the change in the forecast, interpreted as a short-run effect, and the current forecast error, interpreted as a long-run adjustment to past disequilibria. a_{i,h}, the coefficient of the error correction term, represents the fraction of the forecast error observed at t-h that is corrected by time t. A negative coefficient indicates a stabilizing adjustment of expectations. This formulation of the ECM has the advantage that the misspecification (due to omitted variable bias) of the regression of the differenced future spot rate on the differenced current forecast can be gauged by the statistical significance of the error correction term.21 The regressors include the smallest number of lagged dependent variables required such that we do not reject the hypothesis that the residuals are white noise. We impose γ_{i,h} = 1 when "restricted" cointegration of s_{t+h} and s^e_{i,t,h} is not rejected. Recall that 1- and 3-month forecast errors were found to be stationary, so it was for these two horizons that estimation of the simple unbiasedness equation was possible. Although it would be valid to estimate the ECM at the 6-month horizon using the (unrestricted) stationary cointegrating residual (i.e., for all groups but exporters), we elect not to, because the nonstationarity of the forecast error itself implies a failure of the unbiasedness restrictions.22 Then, as first asserted by Hakkio and Rush (1989), the unbiasedness restriction is represented by the joint hypothesis that a_{i,h} = -1, b_{i,h} = 1, and all δ and φ coefficients equal zero.23 (The hypothesized coefficient on the error correction term of -1
21 Zacharatos and Sutcliffe (2002) note that the inclusion of the contemporaneous spot forecast (in their paper, the forward rate) as a regressor assumes that the latter is weakly exogenous; that is, deviations from unbiasedness are corrected only by movements in the realized spot rate. These authors prefer a bivariate ECM specification, in which the change in the future spot rate and the change in the contemporaneous forecast are functions of an error correction term and lags of the dependent variables. However, Zivot (2000) points out that if the spot rate and forecast are contemporaneously correlated, then our single-equation specification does not make any assumptions about the weak exogeneity of the forecast.
22 Our empirical specification of the ECM also includes an intercept. This will help us to determine whether there are structural breaks in the ECM.
23 Since we include an intercept, we also test the restriction that the intercept equals zero – both individually and as part of the joint unbiasedness hypothesis.
reflects the unbiasedness requirement that the entire forecast error is corrected within the forecast horizon h.) We also test unbiasedness without including lagged dependent variables but incorporating robust standard errors which allow for generalized serial correlation and heteroscedasticity. This allows comparison with the univariate and bivariate unbiasedness equations. First, we compare the ECM results to the joint unbiasedness restrictions in the change regressions, using robust standard errors in both cases. Although the estimated coefficient of the error correction term is generally negative, indicating a stable error correction mechanism,24 the coefficient does not reach a 5 % significance level in any of the regressions. Thus, there is little evidence that the error correction term plays a significant role in the long-run dynamics of exchange rate changes. The ECM test results are nearly identical to the joint unbiasedness test results in Table 43.9. In both specifications, unbiasedness is rejected for three of four groups at the 1-month horizon and not rejected for three of four groups at the 3-month horizon. However, even though the EC term is not (individually) significant in the ECMs, it does provide explanatory power, relative to the joint unbiasedness specification. The R2s in the ECM, while never more than 0.044, still are greater than in the joint unbiasedness specification, typically by factors of three to five (Tables 43.10, 43.11, and 43.12). Second, we compare the ECM results to the univariate simple unbiasedness regressions, again using robust standard errors in both cases. The ECM unbiasedness restrictions are rejected at a 5 % level more often than in the simple unbiasedness tests. Whereas the only rejection of simple unbiasedness at the shorter two horizons is for exporters at the 3-month horizon, the ECM restrictions are rejected for three out of four groups at the 1-month horizon as well as for exporters at the 3-month horizon. While it is uncontroversial that, for testing unbiasedness, the ECM is preferred to the conventional bivariate specification in returns, it is not at all clear that the ECM is preferred to the simple univariate test of unbiasedness. Can the more decisive rejections of unbiasedness using the ECM versus the simple univariate specification be reconciled?25 One way to proceed is to determine whether the unbiasedness restrictions imposed on the ECM are necessary as well as sufficient, as is the case for the simple unbiasedness test, or just sufficient, as is the case for the bivariate unbiasedness test. Thus, it is possible that the stronger rejections of unbiasedness in the ECM specification are due to the implicit test of weak efficiency with respect to the current forecast. That is, the Holden and Peel (1990) critique applies to the Hakkio and Rush (1989) test in Eq. 43.9, as well as the joint unbiasedness test in the returns regression. Setting bi,h, the coefficient of the contemporaneous differenced forecast, equal to one produces an ECM in which the dependent variable is the forecast error:
24 The only exception is for exporters at the 1-month horizon.
25 The standard errors in the univariate regression are about the same as those for the ECM. (By definition, of course, the R2s for the univariate regression equal zero.)
Table 43.9 Error correction models (1-month forecasts) Group 1 Banks and brokers st+h st ¼ ci + ai(st gisei;th;h ) + bi(sei;t;h sei;th;h ) + ei,h With robust standard errors (R ¼ 0.0195) Constant ai,h ¼ 0 ai,h ¼ 1 bi,h ¼ 1 Constant ¼ 0 and ai,h ¼ 1 and bi,h ¼ 1 With whitened residuals (R2 ¼ 0.605) Constant ai,h ¼ 1 bi,h ¼ 1 Constant ¼ 0 and ai,h ¼ 1 and bi,h ¼ 1 Constant ¼ 0 and ai,h ¼ 1 and bi,h ¼ 1 and all lags of realizations and forecasts ¼ 0 st+h sei;t;h ¼ di + (1 + ai,h)(st sei;th;h ) 2
Coeff 0.002 0.465 0.465 0.491 Coeff 0.001 0.025 0.453
bi,h ¼ 1 imposed, robust standard errors Constant ai,h ¼ 1 Constant ¼ 0 and ai,h ¼ 1
w (n) 0.377 2.884 3.813 3.847 3.910 F(n,d f) 0.238 14.696 4.895 6.790 51.115
n 1 1 1 1 3 n 1 1 1 3 12
p-value 0.539 0.089 0.051 0.050 0.271 p-value 0.627 0.000 0.028 0.000 0.000
w2(n) 0.293 0.015 0.294
n 1 1 2
p-value 0.582 0.903 0.863
w2(n) 1.168 1.111 8.807 10.703 10.965 F(n,d f) 0.563 13.614 12.639 6.792 54.557
n 1 1 1 1 3 n 1 1 1 3 11
p-value 0.280 0.292 0.003 0.001 0.012 p-value 0.454 0.000 0.001 0.000 0.000
(43.10)
bi,h ¼ 1 imposed, robust standard errors Coeff Constant 0.002 ai,h ¼ 1 0.991 Constant ¼ 0 and ai,h ¼ 1 Group 2 Insurance and trading companies st+h st ¼ ci + ai(st gisei;th;h ) + bi(sei;t;h sei;th;h ) + ei,h With robust standard errors (R2 ¼ 0.009) Constant ai,h ¼ 0 ai,h ¼ 1 bi,h ¼ 1 Constant ¼ 0 and ai,h ¼ 1 and bi,h ¼ 1 With whitened residuals (R2 ¼ 0.596) Constant ai,h ¼ 1 bi,h ¼ 1 Constant ¼ 0 and ai,h ¼ 1 and bi,h ¼ 1 Constant ¼ 0 and ai,h ¼ 1 and bi,h ¼ 1 and all lags of realizations and forecasts ¼ 0 st+h sei;t;h ¼ di + (1 + ai,h)(st sei;th;h )
(43.9) 2
Coeff 0.003 0.262 0.262 0.278 Coeff 0.002 0.036 0.207
(43.9)
(43.10) Coeff 0.003 1.026
w (n) 0.787 0.113 1.052 2
n 1 1 2
p-value 0.3751 0.7368 0.591
Table 43.10 Error correction models (1-month forecasts) Group 3 Export industries st+h st ¼ ci + ai(st gisei;th;h ) + bi(sei;t;h sei;th;h ) + ei,h With robust standard errors (R ¼ 0.009) Constant ai,h ¼ 0 ai,h ¼ 1 bi,h ¼ 1 Constant ¼ 0 and ai,h ¼ 1 and bi,h ¼ 1 With whitened residuals (R2 ¼ 0.602) Constant ai,h ¼ 1 bi,h ¼ 1 Constant ¼ 0 and ai,h ¼ 1 and bi,h ¼ 1 Constant ¼ 0 and ai,h ¼ 1 and bi,h ¼ 1 and all lags of realizations and forecasts ¼ 0 st+h sei;t;h ¼ di + (1 + ai,h)(st sei;th;h ) 2
Coeff 0.006 0.305 0.305 0.256 Coeff 0.002 0.107 0.055
bi,h ¼ 1 imposed, robust standard errors Constant ai,h ¼ 1 Constant ¼ 0 and ai,h ¼ 1
w (n) 4.202 1.335 24.402 20.516 27.207 F(n, d f) 0.632 18.043 17.987 8.455 55.054
n 1 1 1 1 3 n 1 1 1 3 10
p-value 0.040 0.248 0.000 0.000 0.000 p-value 0.428 0.000 0.000 0.000 0.000
w2(n) 0.082 2.321 2.578
n 1 1 2
p-value 0.7749 0.1277 0.276
w2(n) 1.734 0.083 16.501 16.086 17.071 F(n, d f) 0.392 20.268 12.020 10.794 57.702
n 1 1 1 1 3 n 1 1 1 3 11
p-value 0.188 0.773 0.000 0.000 0.001 p-value 0.532 0.000 0.001 0.000 0.000
(43.10)
bi,h ¼ 1 imposed, robust standard errors Coeff Constant 0.001 ai,h ¼ 1 0.887 Constant ¼ 0 and ai,h ¼ 1 Group 4 Life Insurance and import companies st+h st ¼ ci + ai(st gisei;th;h ) + bi(sei;t;h sei;th;h ) + ei,h With robust standard errors (R2 ¼ 0.003) Constant ai,h ¼ 0 ai,h ¼ 1 bi,h ¼ 1 Constant ¼ 0 and ai,h ¼ 1 and bi,h ¼ 1 With whitened residuals (R2 ¼ 0.607) Constant ai,h ¼ 1 bi,h ¼ 1 Constant ¼ 0 and ai,h ¼ 1 and bi,h ¼ 1 Constant ¼ 0 and ai,h ¼ 1 and bi,h ¼ 1 and all lags of realizations and forecasts ¼ 0 st+h sei;t;h ¼ di + (1 + ai,h)(st sei;th;h )
(43.9) 2
Coeff 0.004 0.066 0.066 0.112 Coeff 0.002 0.026 0.226
(43.9)
(43.10) Coeff 0.003 0.949
w (n) 1.254 0.481 1.628 2
n 1 1 2
p-value 0.2629 0.4879 0.443
Table 43.11 Error correction models (3-month forecasts) Group 1 Banks and brokers st+h st ¼ ci + ai(st gisei;th;h ) + bi(sei;t;h sei;th;h ) + ei,h Tests with robust standard errors (R ¼ 0.036) Constant ai,h ¼ 0 ai,h ¼ 1 bi,h ¼ 1 Constant ¼ 0 and ai,h ¼ 1 and bi,h ¼ 1 Tests with whitened residuals (R2 ¼ 0.863) Constant ai,h ¼ 1 bi,h ¼ 1 Constant ¼ 0 and ai,h ¼ 1 and bi,h ¼ 1 Constant ¼ 0 and ai,h ¼ 1 and bi,h ¼ 1 and all lags of realizations and forecasts ¼ 0 st+h sei;t;h ¼ di + (1 + ai,h)(st sei;th;h ) 2
Coeff 0.010 0.377 0.377 0.501 Coeff 0.008 0.233 0.178
Tests with robust standard errors (R ¼ 0.044) Constant ai,h ¼ 0 ai,h ¼ 1 bi,h ¼ 1 Constant ¼ 0 and ai,h ¼ 1 and bi,h ¼ 1 Tests with whitened residuals (R2 ¼ 0.844) Constant ai,h ¼ 1 bi,h ¼ 1 Constant ¼ 0 and ai,h ¼ 1 and bi,h ¼ 1 Constant ¼ 0 and ai,h ¼ 1 and bi,h ¼ 1 and all lags of realizations and forecasts ¼ 0 st+h sei;t;h ¼ di + (1 + ai,h)(st sei;th;h ) bi,h ¼ 1 imposed, robust standard errors Constant ai,h ¼ 1 Constant ¼ 0 and ai,h ¼ 1
w (n) 1.306 0.590 1.604 1.268 2.348 F(n, d f) 4.173 56.755 51.113 28.974 121.851
n 1 1 1 1 3 n 1 1 1 3 14
p-value 0.253 0.443 0.205 0.260 0.503 p-value 0.044 0.000 0.000 0.000 0.000 (43.10)
bi,h ¼ 1 imposed, robust standard errors Coeff Constant 0.006 ai,h ¼ 1 0.889 Constant ¼ 0 and ai,h ¼ 1 Group 2 Insurance and trading companies st+h st ¼ ci + ai(st gisei;th;h ) + bi(sei;t;h sei;th;h ) + ei,h 2
(43.9) 2
Coeff 0.008 0.556 0.556 0.663 Coeff 0.005 0.080 0.167
w (n) 0.556 0.896 1.330 2
n 1 1 2
p-value 0.456 0.344 0.514 (43.9)
w (n) 0.874 2.061 1.310 0.965 1.833 F(n, d f) 1.400 31.425 40.346 23.551 148.338 2
n 1 1 1 1 3 n 1 1 1 3 10
p-value 0.350 0.151 0.252 0.326 0.608 p-value 0.239 0.000 0.000 0.000 0.000 (43.10)
Coeff 0.005 0.897
w (n) 0.291 0.773 0.945 2
n 1 1 2
p-value 0.589 0.379 0.623
Table 43.12 Error correction models (3-month forecasts) Group 3 Export industries st+h si,t,h ¼ ci + ai(st gisei;th;h ) + bi(sei;t;h sei;th;h ) + ei,h Tests with robust standard errors (R2 ¼ 0.026) Constant ai,h ¼ 0 ai,h ¼ 1 bi,h ¼ 1 Constant ¼ 0 and ai,h ¼ 1 and bi,h ¼ 1 Tests with whitened residuals (R2 ¼ 0.856) Constant ai,h ¼ 1 bi,h ¼ 1 Constant ¼ 0 and ai,h ¼ 1 and bi,h ¼ 1 Constant ¼ 0 and ai,h ¼ 1 and bi,h ¼ 1 and all lags of realizations and forecasts ¼ 0 st+h sei;t;h ¼ di + (1 + ai,h)(st sei;th;h )
Coeff 0.013 0.253 0.253 0.411 Coeff 0.003 0.006 0.205
bi,h ¼ 1 imposed, robust standard errors Constant ai,h ¼ 1 Constant ¼ 0 and ai,h ¼ 1
n 0 1 1 1 3 n 1 1 1 3 10
p-value 0.129 0.531 0.064 0.147 0.054 p-value 0.361 0.000 0.000 0.000 0.000 (43.10)
bi,h ¼ 1 imposed, robust standard errors Coeff Constant 0.012 ai,h ¼ 1 0.775 Constant ¼ 0 and ai,h ¼ 1 Group 4 Life insurance and import companies st+h si,t,h ¼ ci + ai(st gisei;th;h ) + bi(sei;t;h sei;th;h ) + ei,h Tests with robust standard errors (R2 ¼ 0.038) Constant ai,h ¼ 0 ai,h ¼ 1 bi,h ¼ 1 Constant ¼ 0 and ai,h ¼ 1 and bi,h ¼ 1 Tests with whitened residuals (R2 ¼ 0.845) Constant ai,h ¼ 1 bi,h ¼ 1 Constant ¼ 0 and ai,h ¼ 1 and bi,h ¼ 1 Constant ¼ 0 and ai,h ¼ 1 and bi,h ¼ 1 and all lags of realizations and forecasts ¼ 0 st+h sei;t;h ¼ di + (1 + ai,h)(st sei;th;h )
w2(n) 2.303 0.393 3.422 2.102 7.663 F(n, d f) 0.840 29.512 40.582 16.290 182.912
Coeff 0.009 0.478 0.478 0.604 Coeff 0.003 0.050 0.062
w (n) 1.971 3.791 6.337
n 1 1 2
p-value 0.160 0.052 0.042
w2(n) 0.993 1.250 1.488 0.919 2.451 F(n, d f) 0.510 32.000 32.469 21.673 169.286
n 1 1 1 1 3 n 1 1 1 3 9
p-value 0.319 0.264 0.223 0.338 0.484 p-value 0.477 0.000 0.000 0.000 0.000
2
(43.10) Coeff 0.006 0.865
w (n) 0.455 1.405 1.726 2
n 1 1 2
p-value 0.500 0.236 0.422
$s_{t+h} - s^{e}_{i,t,h} = (1 + a_{i,h})\,(s_t - s^{e}_{i,t-h,h})$   (43.10)
Thus, in the ECM the necessary and sufficient condition for unbiasedness is that a_{i,h} equals -1.26 Table 43.9 contains tests of this conjecture. Here the joint hypothesis that the intercept equals zero and a_{i,h} equals minus one produces exactly the same results as in the simple unbiasedness tests.27 It is interesting that, even when we can decouple the test for weak efficiency with respect to the current forecast from the unbiasedness test, the test of unbiasedness using this ECM specification still requires weak efficiency with respect to the current forecast error.28
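A sketch of the Hakkio-Rush ECM test (Eq. 43.9 with γ = 1 imposed and, for brevity, no lag terms): estimate the error correction form with Newey-West standard errors and test the joint restriction that the intercept is zero, the error-correction coefficient is −1, and the coefficient on the differenced forecast is 1. As before, the data layout is a hypothetical assumption.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

h = 2
df = pd.read_csv("jcif_survey.csv", parse_dates=["date"], index_col="date")
s = np.log(df["spot"])
se = np.log(df["forecast_banks"])

reg = pd.DataFrame({
    "dep":  s.shift(-h) - s,     # s_{t+h} - s_t
    "ec":   s - se.shift(h),     # error correction term: s_t - s^e_{i,t-h,h}
    "dfor": se - se.shift(h),    # s^e_{i,t,h} - s^e_{i,t-h,h}
}).dropna()

res = smf.ols("dep ~ ec + dfor", data=reg).fit(cov_type="HAC", cov_kwds={"maxlags": h - 1})
print(res.wald_test("Intercept = 0, ec = -1, dfor = 1", use_f=False))
```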
43.4.5 Explicit Tests of Weak Efficiency

The literature on rational expectations exhibits even less consensus as to the definition of efficiency than it does for unbiasedness. In general, an efficient forecast incorporates all available information – private as well as public. It follows that there should be no relationship between the forecast error and any information variables known to the forecaster at the time of the forecast. Weak efficiency commonly denotes the orthogonality of the forecast error with respect to functions of the target and prediction. For example, there is no contemporaneous relationship between forecast and forecast error which could be exploited to reduce the error. Strong efficiency denotes orthogonality with respect to the remaining variables in the information set. Below we perform two types of weak efficiency tests. In the first type, we regress each group's forecast error on three sets of weak efficiency variables29:
26 Since we estimate the restricted ECM with an intercept, unbiasedness also requires the intercept to be equal to zero.
27 Since the intercept in Eq. 43.10 is not significant in any regression, the simple hypothesis that a_{i,h} equals minus one also fares the same as the simple unbiasedness tests.
28 For purposes of comparison with both the bivariate joint and simple unbiasedness restrictions, we have used the ECM results using the robust standard errors. In all cases testing the ECM restrictions using F-statistics based on whitened residuals produces rejections of all restrictions, simple and joint, except a zero intercept. Hakkio and Rush (1989) found similarly strong rejections of Eq. 43.9, where the forecast was the forward rate.
1. Single and cumulative lags of the mean forecast error (lagged one period):

$s_{t+h} - s^{e}_{i,t,h} = a_{i,h} + \sum_{k=h+1}^{h+7} b_{i,t+h-k}\,(s_{t+h-k} - s^{e}_{m,t+h-k,h}) + e_{i,t,h}$   (43.11)

2. Single and cumulative lags of mean expected depreciation (lagged one period):

$s_{t+h} - s^{e}_{i,t,h} = a_{i,h} + \sum_{k=h+1}^{h+7} b_{i,t+h-k}\,(s^{e}_{m,t+h-k,h} - s_{t-k}) + e_{i,t,h}$   (43.12)

3. Single and cumulative lags of actual depreciation (see the sketch following this list):

$s_{t+h} - s^{e}_{i,t,h} = a_{i,h} + \sum_{k=h}^{h+6} b_{i,t+h-k}\,(s_{t+h-k} - s_{t-k}) + e_{i,t,h}$   (43.13)
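As referenced in item 3 above, the sketch below estimates the Eq. 43.13 regression (forecast error on a constant and seven lags of actual depreciation) and applies the Wald test that the intercept and all lag coefficients are zero, with Newey-West standard errors. It is an illustration under the same hypothetical data layout as the earlier sketches.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

h, nlags = 2, 7
df = pd.read_csv("jcif_survey.csv", parse_dates=["date"], index_col="date")
s = np.log(df["spot"])
fe = (s.shift(-h) - np.log(df["forecast_banks"])).rename("fe")   # s_{t+h} - s^e_{i,t,h}
dep = s - s.shift(h)                                             # actual h-period depreciation

X = pd.concat({f"lag{k}": dep.shift(k - h) for k in range(h, h + nlags)}, axis=1)
data = pd.concat([fe, X], axis=1).dropna()

res = sm.OLS(data["fe"], sm.add_constant(data.drop(columns="fe"))).fit(
    cov_type="HAC", cov_kwds={"maxlags": h - 1})
print(res.wald_test(np.eye(len(res.params)), use_f=False))  # H0: a = b_h = ... = b_{h+6} = 0
```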
For each group and forecast horizon, we regress the forecast error on the most recent seven lags of the information set variable, both singly and cumulatively. We use a Wald test of the null hypothesis a_{i,h} = b_{i,t+h-k} = 0 and report chi-square test statistics, with degrees of freedom equal to the number of regressors excluding the intercept. If we were to perform only simple regressions (i.e., on each lag individually), estimates of coefficients and tests of significance could be biased toward rejection due to the omission of relevant variables. If we were to perform only multivariate regressions, tests for joint significance could be biased toward nonrejection due to the inclusion of irrelevant variables. It is also possible that joint tests are significant but individual tests are not. This will be the case when the linear combination of (relatively uncorrelated) regressors spans the space of the dependent variable, but individual regressors do not. In the only reported efficiency tests on JCIF data, Ito (1990) separately regressed the forecast error (average, group, and individual firm) on a single lagged forecast error, lagged forward premium, and lagged actual change. He found that, for the 51 biweekly forecasts between May 1985 and June 1987, rejections increased from a relative few at the 1- or 3-month horizons to virtual unanimity at the 6-month horizon. When he added a second lagged term for actual depreciation, rejections increased "dramatically" for all horizons. The second type of weak efficiency tests uses the Breusch (1978)-Godfrey (1978) LM test for the null of no serial correlation of order k = h or greater, up to order k = h + 6, in the residuals of the forecast error regression, Eq. 43.11. Specifically, we estimate

$\hat{e}_{i,t,h} = a_{i,h} + \sum_{k=1}^{h-1} b_{i,k}\,(s_{t+h-k} - s^{e}_{i,t-k,h}) + \sum_{l=h}^{h+6} \phi_{i,l}\,\hat{e}_{i,t-l,h} + \upsilon_{i,t,h}$   (43.14)
and test the null hypothesis H0: φ_{i,h} = ... = φ_{i,h+6} = 0 for h = 2, 6.30,31 Results for all efficiency tests for the 1- and 3-month horizons are presented in Tables 43.13, 43.14, 43.15, 43.16, 43.17, 43.18, 43.19, and 43.20. (Recall that the nonstationarity of the forecast errors at the 6-month horizon is an implicit rejection of weak efficiency.) For each group, horizon, and variable, there are seven individual tests, i.e., on a single lag, and six joint tests, i.e., on multiple lags. These 13 tests are multiplied by four groups times two horizons times three weak efficiency variables for a total of 312 efficiency tests. Using approximately nine more years of data than Ito (1990), we find many rejections. In some cases, nearly all single lag tests are rejected, yet few, if any, joint tests are rejected. (See, e.g., expected depreciation at the 3-month horizon.) In other cases, nearly all joint tests are rejected, but few individual tests. (See, e.g., actual depreciation at the 3-month horizon.) Remarkably, all but one LM test for serial correlation at a specified lag produces a rejection at less than a 10 % level, with most at less than a 5 % level. Thus, it appears that the generality of the alternative hypothesis in the LM test permits it to reject at a much greater rate than the conventional weak efficiency tests, in which the variance-covariance matrix incorporates the Newey-West-Bartlett correction for heteroscedasticity and serial correlation. Finally, unlike Ito (1990), we find no strong pattern between horizon length and number of rejections.
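The second type of test can be sketched as the auxiliary regression of Eq. 43.14: residuals from the forecast-error regression are regressed on lagged forecast errors (orders 1 to h − 1) and on their own lags of orders h to h + 6, and the latter block is tested with an F-test. The snippet uses the demeaned forecast error as the residual series and the same hypothetical data layout as before.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

h = 2
df = pd.read_csv("jcif_survey.csv", parse_dates=["date"], index_col="date")
fe = (np.log(df["spot"]).shift(-h) - np.log(df["forecast_banks"])).rename("fe")
ehat = (fe - fe.mean()).rename("ehat")   # residual from regressing fe on a constant only

X = pd.DataFrame({f"fe_lag{k}": fe.shift(k) for k in range(1, h)})               # lags 1 .. h-1
X = X.join(pd.DataFrame({f"e_lag{l}": ehat.shift(l) for l in range(h, h + 7)}))  # lags h .. h+6
data = pd.concat([ehat, X], axis=1).dropna()

res = sm.OLS(data["ehat"], sm.add_constant(data.drop(columns="ehat"))).fit()
hyp = ", ".join(f"e_lag{l} = 0" for l in range(h, h + 7))   # H0: phi_h = ... = phi_{h+6} = 0
print(res.f_test(hyp))
```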
43.5 Micro-homogeneity Tests
In addition to testing the rationality hypotheses at the individual level, we are interested in the degree of heterogeneity of coefficients across forecasters. Demonstrating that individual forecasters differ systematically in their forecasts (and forecast-generating processes) has implications for the market microstructure research program. As Frankel and Froot (1990, p. 182) noted, “the tremendous volume of foreign exchange trading is another piece of evidence that reinforces the idea of heterogeneous expectations, since it takes differences among market participants to explain why they trade.” Micro-homogeneity should have implications for rationality as well. Intuitively, if all forecasters pass rationality tests, then their corresponding regression coefficients should be equal. However, the converse is not necessarily true: if all forecasters have equal regression coefficients, they will not satisfy rationality conditions if they are all biased or inefficient to the same degree with respect to
30 This is a general test, not only because it allows for an alternative hypothesis of higher-order serial correlation of specified order but also because it allows for serial correlation to be generated by AR, MA, or ARMA processes.
31 We use the F-statistic because the χ2 test statistics tend to over-reject, while the F-tests have more appropriate significance levels (see Kiviet 1987).
Table 43.13 Weak efficiency tests (1-month forecasts)
s_{t+h} − s^e_{i,t,h} = a_{i,h} + Σ_{p=h}^{h+6} b_{i,t+h−p}(s_{t+h−p} − s^e_{m,t+h−p,h}) + e_{i,t,h}, for h = 2    (43.11)
Entries are χ² statistics with p-values in parentheses, for i = 1 Banks and brokers, i = 2 Insurance and trading companies, i = 3 Export industries, i = 4 Life insurance and import companies.

Lags        i = 1            i = 2            i = 3            i = 4
Single 2    0.132 (0.895)    0.139 (0.709)    1.871 (0.171)    0.334 (0.563)
Single 3    0.914 (0.361)    1.971 (0.160)    0.027 (0.869)    0.186 (0.667)
Single 4    0.160 (0.689)    0.006 (0.938)    1.634 (0.201)    0.714 (0.398)
Single 5    0.450 (0.502)    0.050 (0.823)    1.749 (0.186)    1.180 (0.277)
Single 6    0.046 (0.831)    0.104 (0.747)    0.686 (0.408)    0.188 (0.665)
Single 7    0.002 (0.967)    0.282 (0.595)    0.069 (0.793)    0.001 (0.970)
Single 8    0.091 (0.763)    0.436 (0.509)    0.022 (0.883)    0.300 (0.584)
Cum. 3      0.765 (0.682)    1.778 (0.411)    1.746 (0.418)    0.585 (0.746)
Cum. 4      4.626 (0.201)    3.463 (0.326)    8.763 (0.033)    5.349 (0.148)
Cum. 5      4.747 (0.314)    4.382 (0.357)    7.680 (0.104)    5.081 (0.279)
Cum. 6      5.501 (0.358)    5.592 (0.348)    7.652 (0.176)    5.768 (0.329)
Cum. 7      6.252 (0.396)    6.065 (0.416)    8.879 (0.180)    6.677 (0.352)
Cum. 8      5.927 (0.548)    5.357 (0.617)    8.390 (0.299)    6.087 (0.530)

Selected micro-homogeneity tests, H0: a_{i,h} = a_{j,h}, b_{i,t+h−p} = b_{j,t+h−p} for all i, j ≠ i (χ²(GMM), p-value, n):
Single 2    122.522    0.000    6
Single 8    43.338     0.000    6
Cum. 3      136.830    0.000    9
Cum. 8      201.935    0.000    24

See Appendix 1 for the structure of the GMM VCV matrix incorporating the Newey-West correction for serial correlation. Degrees of freedom (n) represent the number of regressors, excluding the intercept (n = 1 for a single lag, n = max. lag − 2 for cumulative lags).
the same variables. For the univariate unbiasedness regressions, the null of micro-homogeneity is given by H0: a_{i,h} = a_{j,h}, for all i, j ≠ i. Before testing for homogeneous intercepts in Eq. 43.7, we must specify the form for our GMM system variance-covariance matrix. Keane and Runkle (1990) first accounted for cross-sectional correlation (in price level forecasts) using a GMM estimator on pooled data. Bonham and Cohen (2001) tested the pooling specification by replacing Zellner's (1962) SUR variance-covariance matrix with a GMM counterpart that incorporates the Newey-West single-equation corrections (used in our individual equation tests above) plus allowances for corresponding cross-covariances, both contemporaneous and lagged. Bonham and Cohen (2001)
Table 43.14 Weak efficiency tests (1-month forecasts)
s_{t+h} − s^e_{i,t,h} = a_{i,h} + Σ_{p=h}^{h+6} b_{i,t+h−p}(s^e_{m,t+h−p,h} − s_{t−p}) + e_{i,t,h}, for h = 2    (43.12)
Entries are χ² statistics with p-values in parentheses, for i = 1 Banks and brokers, i = 2 Insurance and trading companies, i = 3 Export industries, i = 4 Life insurance and import companies.

Lags        i = 1            i = 2            i = 3            i = 4
Single 2    2.325 (0.020)    5.641 (0.018)    3.658 (0.056)    7.011 (0.008)
Single 3    4.482 (0.106)    3.519 (0.061)    3.379 (0.066)    5.877 (0.015)
Single 4    3.162 (0.075)    2.580 (0.108)    2.805 (0.094)    4.911 (0.027)
Single 5    3.956 (0.047)    2.993 (0.084)    3.102 (0.078)    7.467 (0.006)
Single 6    6.368 (0.012)    4.830 (0.028)    5.952 (0.015)    9.766 (0.002)
Single 7    8.769 (0.003)    6.786 (0.009)    7.755 (0.005)    12.502 (0.000)
Single 8    5.451 (0.020)    4.114 (0.043)    4.417 (0.036)    7.564 (0.006)
Cum. 3      5.592 (0.061)    6.138 (0.046)    4.116 (0.128)    7.508 (0.023)
Cum. 4      5.638 (0.131)    5.896 (0.117)    4.283 (0.232)    7.888 (0.048)
Cum. 5      5.189 (0.268)    4.964 (0.291)    3.784 (0.436)    8.009 (0.091)
Cum. 6      6.025 (0.304)    5.068 (0.408)    4.847 (0.435)    8.401 (0.136)
Cum. 7      7.044 (0.317)    5.746 (0.452)    5.940 (0.430)    9.434 (0.151)
Cum. 8      10.093 (0.183)   8.494 (0.291)    7.919 (0.340)    12.530 (0.084)

Selected micro-homogeneity tests, H0: a_{i,h} = a_{j,h}, b_{i,t+h−p} = b_{j,t+h−p} for all i, j ≠ i (χ²(GMM), p-value, n):
Single 2    40.462    0.000    6
Single 8    30.739    0.000    6
Cum. 3      42.047    0.000    6
Cum. 8      46.124    0.004    24

See Appendix 1 for the structure of the GMM VCV matrix incorporating the Newey-West correction for serial correlation. Degrees of freedom (n) represent the number of regressors, excluding the intercept (n = 1 for a single lag, n = max. lag − 2 for cumulative lags).
constructed a Wald statistic for testing the micro-homogeneity of individual forecaster regression coefficients in a system.32 Keane and Runkle (1990) provided some empirical support for their modeling of cross-sectional correlations, noting that the average covariance between a pair of
32
Elliott and Ito (1999) used single-equation estimation that incorporated a White correction for heteroscedasticity and a Newey-West correction for serial correlation. (See the discussion below of Ito’s tests of forecaster heterogeneity.)
Table 43.15 Weak efficiency tests (1-month forecasts)
s_{t+h} − s^e_{i,t,h} = a_{i,h} + Σ_{p=h}^{h+6} b_{i,t+h−p}(s_{t+h−p} − s_{t−p}) + e_{i,t,h}, for h = 2    (43.13)
Entries are χ² statistics with p-values in parentheses, for i = 1 Banks and brokers, i = 2 Insurance and trading companies, i = 3 Export industries, i = 4 Life insurance and import companies.

Lags        i = 1            i = 2            i = 3            i = 4
Single 2    0.328 (0.743)    0.639 (0.424)    1.249 (0.264)    0.023 (0.879)
Single 3    1.621 (0.203)    3.060 (0.080)    0.000 (0.993)    0.550 (0.458)
Single 4    0.002 (0.964)    0.335 (0.562)    0.819 (0.366)    0.146 (0.702)
Single 5    0.086 (0.770)    0.042 (0.837)    1.001 (0.317)    0.344 (0.557)
Single 6    0.165 (0.685)    0.916 (0.339)    0.029 (0.864)    0.095 (0.758)
Single 7    0.850 (0.357)    1.861 (0.172)    0.329 (0.566)    1.152 (0.283)
Single 8    0.597 (0.440)    1.088 (0.297)    0.317 (0.574)    1.280 (0.258)
Cum. 3      1.978 (0.372)    3.169 (0.205)    1.940 (0.379)    1.132 (0.568)
Cum. 4      3.304 (0.347)    3.501 (0.321)    5.567 (0.135)    3.318 (0.345)
Cum. 5      3.781 (0.436)    4.248 (0.373)    5.806 (0.214)    3.598 (0.463)
Cum. 6      3.651 (0.601)    4.646 (0.461)    5.756 (0.331)    3.819 (0.576)
Cum. 7      4.493 (0.610)    5.609 (0.468)    6.608 (0.359)    5.040 (0.539)
Cum. 8      5.619 (0.585)    6.907 (0.439)    7.907 (0.341)    6.521 (0.480)

Selected micro-homogeneity tests, H0: a_{i,h} = a_{j,h}, b_{i,t+h−p} = b_{j,t+h−p} for all i, j ≠ i (χ²(GMM), p-value, n):
Single 2    150.698    0.000    6
Single 8    45.652     0.000    6
Cum. 3      161.950    0.000    9
Cum. 8      214.970    0.000    24

See Appendix 1 for the structure of the GMM VCV matrix incorporating the Newey-West correction for serial correlation. Degrees of freedom (n) represent the number of regressors, excluding the intercept (n = 1 for a single lag, n = max. lag − 2 for cumulative lags).
forecasters is 58 % of the average forecast variance. In contrast, we use Pesaran's (2004) CD (cross-sectional dependence) test to check for lagged as well as contemporaneous correlations of forecast errors among pairs of forecasters:

CD = sqrt(2T / (N(N − 1))) Σ_{i=1}^{N−1} Σ_{j=i+1}^{N} ρ̂_{ij},    (43.15)

where T is the number of time periods, N = 4 is the number of individual forecasters, and ρ̂_{ij} is the sample correlation coefficient between forecasters i and j, i ≠ j.
Table 43.16 LM test for serial correlation (1-month forecasts)
H0: φ_{i,h} = . . . = φ_{i,h+6} = 0, for h = 2, in
ê_{i,t,h} = a_{i,h} + Σ_{k=1}^{h−1} b_{i,k}(s_{t+h−k} − s^e_{i,t−k,h}) + Σ_{l=h}^{h+6} φ_{i,l} ê_{i,t−l,h} + ε_{i,t,h},
where ê is generated from s_{t+h} − s^e_{i,t,h} = a_{i,h} + Σ_{k=1}^{h−1} b_{i,k}(s_{t+h−k} − s^e_{i,t−k,h}) + e_{i,t,h}.
Entries are F(k, n − k) statistics with p-values in parentheses.

Cum. lags (k)    2        3        4        5        6        7        8
n − k            219      205      192      179      166      153      144
i = 1 Banks and brokers:
  29.415 (0.000)  18.339 (0.000)  14.264 (0.000)  11.180 (0.000)  9.699 (0.000)  7.922 (0.000)  6.640 (0.000)
i = 2 Insurance and trading companies:
  30.952 (0.000)  19.506 (0.000)  15.372 (0.000)  11.661 (0.000)  9.695 (0.000)  8.120 (0.000)  7.050 (0.000)
i = 3 Export industries:
  32.387 (0.000)  20.691 (0.000)  16.053 (0.000)  12.951 (0.000)  10.628 (0.000)  9.418 (0.000)  7.520 (0.000)
i = 4 Life insurance and import companies:
  29.694 (0.000)  18.606 (0.000)  14.596 (0.000)  11.093 (0.000)  9.586 (0.000)  9.154 (0.000)  7.937 (0.000)
Under the null hypothesis of no cross-correlation, CD is asymptotically distributed N(0, 1).33 See Table 43.21 for CD test results. We tested for cross-correlation in forecast errors from lag zero up to lags four and eight for the 1- and 3-month forecast horizons, respectively. (The nonstationarity of the 6-month forecast error precludes using the CD test at that horizon.) At the 1-month horizon, cross-correlations from lags zero to four are each significant at the 5 % level. Since rational forecasts allow for (individual) serial correlation of forecast errors at lags of h−1 or less, and h = 2 for the 1-month horizon, the cross-correlations at lags two through four indicate violations of weak efficiency. Similarly, at the 3-month horizon, where h−1 = 5, there is significant cross-correlation at lag six.34 However, it should be noted that, for many lags shorter than h, one cannot reject the null hypothesis that there are no cross-correlated forecast errors.

33 Unlike Breusch and Pagan's (1980) LM test for cross-sectional dependence, Pesaran's (2004) CD test is robust to multiple breaks in slope coefficients and error variances, as long as the unconditional means of the variables are stationary and the residuals are symmetrically distributed.
34 There are three instances of statistically significant negative test statistics for lags greater than h−1, none for lags less than or equal to h−1. Thus, some industries produce relatively high forecast errors several periods after others produce relatively low forecast errors, and this information is not fully incorporated in some current forecasts.
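A minimal sketch of the CD statistic in Eq. 43.15, allowing the pairwise correlations to be computed at a chosen lag as in Table 43.21; the errors matrix and the two-sided normal p-value are illustrative assumptions of ours, not part of the original study:

import numpy as np
from scipy import stats

def cd_statistic(errors, lag=0):
    """CD = sqrt(2T / (N(N-1))) * sum_{i<j} rho_hat_ij, with rho_hat_ij the sample
    correlation between e_{i,t} and e_{j,t-lag}; asymptotically N(0, 1) under the null."""
    T, N = errors.shape
    rho_sum = 0.0
    for i in range(N - 1):
        for j in range(i + 1, N):
            x = errors[lag:, i]
            y = errors[:T - lag, j] if lag > 0 else errors[:, j]
            rho_sum += np.corrcoef(x, y)[0, 1]
    cd = np.sqrt(2.0 * (T - lag) / (N * (N - 1))) * rho_sum
    return cd, 2.0 * (1.0 - stats.norm.cdf(abs(cd)))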
Table 43.17 Weak efficiency tests (3-month forecasts)
s_{t+h} − s^e_{i,t,h} = a_{i,h} + Σ_{p=h}^{h+6} b_{i,t+h−p}(s_{t+h−p} − s^e_{m,t+h−p,h}) + e_{i,t,h}, for h = 6    (43.11)
Entries are χ² statistics with p-values in parentheses, for i = 1 Banks and brokers, i = 2 Insurance and trading companies, i = 3 Export industries, i = 4 Life insurance and import companies.

Lags        i = 1            i = 2            i = 3            i = 4
Single 6    0.667 (0.414)    0.954 (0.329)    4.493 (0.034)    1.719 (0.190)
Single 7    0.052 (0.820)    0.071 (0.789)    1.434 (0.231)    0.268 (0.605)
Single 8    0.006 (0.940)    0.010 (0.921)    0.382 (0.537)    0.001 (0.976)
Single 9    0.055 (0.814)    0.043 (0.836)    0.140 (0.708)    0.060 (0.806)
Single 10   0.264 (0.607)    0.278 (0.598)    0.001 (0.980)    0.432 (0.511)
Single 11   0.299 (0.585)    0.381 (0.537)    0.020 (0.888)    0.598 (0.439)
Single 12   0.172 (0.678)    0.336 (0.562)    0.011 (0.918)    0.633 (0.426)
Cum. 7      8.966 (0.011)    11.915 (0.003)   19.663 (0.000)   12.350 (0.002)
Cum. 8      12.288 (0.006)   16.263 (0.001)   23.290 (0.000)   15.146 (0.002)
Cum. 9      11.496 (0.022)   15.528 (0.004)   22.417 (0.000)   14.778 (0.005)
Cum. 10     8.382 (0.136)    12.136 (0.033)   16.839 (0.005)   12.014 (0.035)
Cum. 11     11.596 (0.072)   18.128 (0.006)   23.782 (0.001)   15.330 (0.032)
Cum. 12     11.527 (0.117)   15.983 (0.025)   21.626 (0.003)   13.038 (0.071)

Selected micro-homogeneity tests, H0: a_{i,h} = a_{j,h}, b_{i,t+h−p} = b_{j,t+h−p} for all i, j ≠ i (χ²(GMM), p-value, n):
Single 6    188.738    0.000    6
Single 12   63.364     0.000    6
Cum. 7      217.574    0.000    9
Cum. 12     229.567    0.000    24

See Appendix 1 for the structure of the GMM VCV matrix incorporating the Newey-West correction for serial correlation. Degrees of freedom (n) represent the number of regressors, excluding the intercept (n = 1 for a single lag, n = max. lag − 2 for cumulative lags).
Nevertheless, in our micro-homogeneity tests, we follow Bonham and Cohen (2001), allowing for an MA(h−1) residual process, both individually and among pairs of forecast errors. (See Appendix 1 for details.) By more accurately describing the panel's residual variance-covariance structure, we expect this systems approach to improve the consistency of our estimates. Consider first the four bivariate regressions in Tables 43.1, 43.2, and 43.3. Recall that we rejected the joint hypothesis (a_{i,h}, b_{i,h}) = (0, 1) at the 5 % significance level for all groups at the 1-month horizon (indicating the possible role of inefficiency with respect to the
Table 43.18 Weak efficiency tests (3-month forecasts)
s_{t+h} − s^e_{i,t,h} = a_{i,h} + Σ_{p=h}^{h+6} b_{i,t+h−p}(s^e_{m,t+h−p,h} − s_{t−p}) + e_{i,t,h}, for h = 6    (43.12)
Entries are χ² statistics with p-values in parentheses, for i = 1 Banks and brokers, i = 2 Insurance and trading companies, i = 3 Export industries, i = 4 Life insurance and import companies.

Lags        i = 1            i = 2            i = 3            i = 4
Single 6    3.457 (0.063)    2.947 (0.086)    3.470 (0.062)    3.681 (0.055)
Single 7    4.241 (0.039)    3.834 (0.050)    4.390 (0.036)    4.370 (0.037)
Single 8    5.748 (0.017)    5.177 (0.023)    5.410 (0.020)    6.053 (0.014)
Single 9    6.073 (0.014)    5.843 (0.016)    5.968 (0.015)    6.474 (0.011)
Single 10   8.128 (0.004)    7.868 (0.005)    7.845 (0.005)    8.521 (0.004)
Single 11   8.511 (0.004)    8.004 (0.005)    8.308 (0.004)    8.429 (0.004)
Single 12   6.275 (0.012)    6.691 (0.010)    6.635 (0.010)    6.079 (0.014)
Cum. 7      4.717 (0.095)    4.985 (0.083)    4.954 (0.084)    4.928 (0.085)
Cum. 8      5.733 (0.125)    5.209 (0.157)    5.045 (0.168)    6.736 (0.081)
Cum. 9      5.195 (0.268)    5.411 (0.248)    5.112 (0.276)    6.053 (0.195)
Cum. 10     7.333 (0.197)    9.245 (0.100)    9.456 (0.092)    7.872 (0.163)
Cum. 11     8.539 (0.201)    6.658 (0.354)    7.488 (0.278)    7.955 (0.241)
Cum. 12     8.758 (0.271)    6.747 (0.456)    7.796 (0.351)    8.698 (0.275)

Selected micro-homogeneity tests, H0: a_{i,h} = a_{j,h}, b_{i,t+h−p} = b_{j,t+h−p} for all i, j ≠ i (χ²(GMM), p-value, n):
Single 6    57.130     0.000    6
Single 12   58.230     0.000    6
Cum. 7      63.917     0.000    9
Cum. 12     126.560    0.000    24

See Appendix 1 for the structure of the GMM VCV matrix incorporating the Newey-West correction for serial correlation. Degrees of freedom (n) represent the number of regressors, excluding the intercept (n = 1 for a single lag, n = max. lag − 2 for cumulative lags).
current forecast), but only for the exporters at the 3- and 6-month horizons. However, there are no rejections of micro-homogeneity for any horizon.35 The micro-homogeneity test results are very different for both the 1- and 3-month systems of univariate unbiasedness regressions in Tables 43.7 and 43.8. (Recall that
35
The nonrejection of micro-homogeneity in bivariate regressions does not, however, mean that one can avoid aggregation bias by using the mean forecast. Even if the bivariate regressions were correctly interpreted as joint tests of unbiasedness and weak efficiency with respect to the current forecast, and even if the regressions had sufficient power to reject a false null, the micro-homogeneity tests would be subject to additional econometric problems. According to the Figlewski-Wachtel (1983) critique, successfully passing a pretest for micro-homogeneity does not ensure that estimated coefficients from such consensus regressions will be consistent. See Sect. 43.2.1.
Table 43.19 Weak efficiency tests (3-month forecasts)
s_{t+h} − s^e_{i,t,h} = a_{i,h} + Σ_{p=h}^{h+6} b_{i,t+h−p}(s_{t+h−p} − s_{t−p}) + e_{i,t,h}, for h = 6    (43.13)
Entries are χ² statistics with p-values in parentheses, for i = 1 Banks and brokers, i = 2 Insurance and trading companies, i = 3 Export industries, i = 4 Life insurance and import companies.

Lags        i = 1            i = 2            i = 3            i = 4
Single 6    0.268 (0.604)    0.450 (0.502)    3.657 (0.056)    1.065 (0.302)
Single 7    0.055 (0.814)    0.037 (0.848)    0.599 (0.439)    0.003 (0.957)
Single 8    0.331 (0.565)    0.305 (0.581)    0.029 (0.864)    0.230 (0.632)
Single 9    0.513 (0.474)    0.482 (0.488)    0.022 (0.883)    0.577 (0.448)
Single 10   1.038 (0.308)    1.077 (0.299)    0.318 (0.573)    1.344 (0.246)
Single 11   1.335 (0.248)    1.532 (0.216)    0.563 (0.453)    1.872 (0.171)
Single 12   1.184 (0.276)    1.620 (0.203)    0.616 (0.433)    1.979 (0.159)
Cum. 7      6.766 (0.034)    8.767 (0.012)    15.683 (0.000)   10.052 (0.007)
Cum. 8      8.752 (0.033)    11.784 (0.008)   18.330 (0.000)   11.162 (0.011)
Cum. 9      8.654 (0.070)    11.588 (0.021)   18.929 (0.001)   11.309 (0.023)
Cum. 10     9.421 (0.093)    12.890 (0.024)   19.146 (0.002)   12.275 (0.031)
Cum. 11     9.972 (0.126)    13.137 (0.041)   19.597 (0.003)   13.003 (0.043)
Cum. 12     8.581 (0.284)    11.823 (0.107)   17.670 (0.014)   11.431 (0.121)

Selected micro-homogeneity tests, H0: a_{i,h} = a_{j,h}, b_{i,t+h−p} = b_{j,t+h−p} for all i, j ≠ i (χ²(GMM), p-value, n):
Single 6    151.889    0.000    6
Single 12   66.313     0.000    6
Cum. 7      164.216    0.000    9
Cum. 12     193.021    0.000    24

See Appendix 1 for the structure of the GMM VCV matrix incorporating the Newey-West correction for serial correlation. Degrees of freedom (n) represent the number of regressors, excluding the intercept (n = 1 for a single lag, n = max. lag − 2 for cumulative lags).
unbiasedness was rejected for all groups at the 6-month horizon due to the nonstationarity of the forecast error.) Despite having only one failure of unbiasedness at the 5 % level for the two shorter horizons, micro-homogeneity is rejected at a level of virtually zero for both horizons. The rejection of micro-homogeneity at the 1-month horizon occurs despite the failure to reject unbiasedness for any of the industry groups. We hypothesize that the consistent rejection of micro-homogeneity regardless of the results of individual unbiasedness tests is the result of sufficient variation in individual bias estimates as well as precision in these estimates. According to these tests, aggregation of individual forecasts into a mean forecast is invalid at all horizons.
Table 43.20 LM test for serial correlation (3-month forecasts)
H0: φ_{i,h} = . . . = φ_{i,h+6} = 0, for h = 6, in
ê_{i,t,h} = a_{i,h} + Σ_{k=1}^{h−1} b_{i,k}(s_{t+h−k} − s^e_{i,t−k,h}) + Σ_{l=h}^{h+6} φ_{i,l} ê_{i,t−l,h} + ε_{i,t,h},
where ê is generated from s_{t+h} − s^e_{i,t,h} = a_{i,h} + Σ_{k=1}^{h−1} b_{i,k}(s_{t+h−k} − s^e_{i,t−k,h}) + e_{i,t,h}.
Entries are F(k, n − k) statistics with p-values in parentheses.

Cum. lags (k)    6        7        8        9        10       11       12
n − k            126      117      108      99       94       89       84
i = 1 Banks and brokers:
  3.452 (0.003)  2.856 (0.009)  3.023 (0.004)  2.951 (0.004)  2.599 (0.008)  2.652 (0.006)  2.921 (0.002)
i = 2 Insurance and trading companies:
  3.499 (0.003)  2.850 (0.009)  3.408 (0.002)  2.907 (0.004)  2.492 (0.011)  2.584 (0.007)  2.341 (0.012)
i = 3 Export industries:
  4.687 (0.000)  3.956 (0.001)  4.409 (0.000)  3.572 (0.001)  2.928 (0.003)  2.819 (0.003)  2.605 (0.005)
i = 4 Life insurance and import companies:
  2.352 (0.035)  2.482 (0.021)  2.501 (0.016)  2.168 (0.031)  1.866 (0.060)  1.794 (0.067)  1.811 (0.059)
In addition to testing the weak efficiency hypothesis at the individual level, we are interested in the degree of heterogeneity of coefficients across forecasters. Here the null of micro-homogeneity is given by H0: φ_{i,l} = φ_{j,l}, for l = h, . . ., h + 6, for all i, j ≠ i. As explained in the section on efficiency tests, there are 312 tests (not 468, due to a nonstationary forecast error for all four groups at the 6-month horizon)/four groups = 83 micro-homogeneity tests. The null hypothesis of equal coefficients is H0: a_{i,h} = a_{j,h}, b_{i,t+h−k} = b_{j,t+h−k} for all i, j ≠ i. As with the micro-homogeneity tests for unbiasedness, our GMM variance-covariance matrix accounts for serial correlation of order h−1 or less, generalized heteroscedasticity, and cross-sectional correlation of order h−1 or less. We report χ²(n) statistics, where n is the number of coefficient restrictions, with corresponding p-values. Rather than perform all 83 micro-homogeneity tests, we choose a sample consisting of the shortest and longest lag for which there are corresponding individual and joint tests (i.e., for the k = h + 1st and k = h + 6th lag). Thus, there are four tests (two individual and two corresponding joint tests) times two horizons times three variables for a total of 24 tests. Every one of the micro-homogeneity tests is rejected at the 0 % level. As pointed out by Bryant (1995), a finding of micro-heterogeneity in unbiasedness and weak efficiency tests also casts doubt on the assumption of a rational representative agent commonly used in macroeconomic and asset-pricing models (Table 43.22).
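The following sketch illustrates the form such a micro-homogeneity Wald test might take. It assumes a stacked coefficient vector and a joint covariance matrix V for those coefficients are already available (how V would be built from the GMM VCV with Newey-West and cross-covariance corrections is described in Appendix 1); the function and variable names are hypothetical:

import numpy as np
from scipy import stats

def micro_homogeneity_wald(theta_hat, V, n_coef=2):
    """Wald chi-square test that each block of n_coef coefficients is equal
    across the N forecasters stacked in theta_hat = (a_1, b_1, ..., a_N, b_N)'."""
    N = len(theta_hat) // n_coef
    R = np.zeros(((N - 1) * n_coef, N * n_coef))
    for i in range(1, N):                      # compare group i to group 1
        rows = slice((i - 1) * n_coef, i * n_coef)
        R[rows, :n_coef] = np.eye(n_coef)
        R[rows, i * n_coef:(i + 1) * n_coef] = -np.eye(n_coef)
    diff = R @ np.asarray(theta_hat)
    stat = float(diff @ np.linalg.solve(R @ V @ R.T, diff))
    df = (N - 1) * n_coef
    return stat, df, 1.0 - stats.chi2.cdf(stat, df)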
Table 43.21 CD tests for cross-sectional dependence
s_{t+h} − s^e_{i,t,h} = a_{i,h} + e_{i,t,h}    (43.7)
CD = sqrt(2T / (N(N − 1))) Σ_{i=1}^{N−1} Σ_{j=i+1}^{N} ρ̂_{ij}, asymptotically N(0, 1)    (43.15)

Lag length (3-month horizon)    CD        p-value
0                               31.272    0.000
1                               2.461     0.014
2                               0.387     0.699
3                               2.322     0.020
4                               1.594     0.111
h − 1 = 5                       1.461     0.144
6                               5.887     0.000
7                               0.340     0.734
8                               1.456     0.145

N = 24, T = 276; ρ̂_{ij} is the sample correlation coefficient between forecasters i and j, i ≠ j.
43.5.1 Ito’s Heterogeneity Tests In Table 43.23, we replicate Ito’s (1990) and Elliott and Ito’s (1999) test for forecaster “heterogeneity.” This specification regresses the deviation of the individual forecast from the cross-sectional average forecast on a constant. Algebraically, Ito’s regression can be derived from the individual forecast error regression by subtracting the mean forecast error regression. Thus, because it simply replaces the forecast error with the individual deviation from the mean forecast, it does not suffer from aggregation bias (c.f. Figlewski and Wachtel (1983)) or pooling bias (c.f. Zarnowitz 1985) (Table 43.24).36, 37 sei, t, h sem, t, h ¼ ai, h am þ ei, t, h em, t
(43.16)
As above, we use the Newey-West-Bartlett variance-covariance matrix. One may view Ito's "heterogeneity" tests as complementary to our micro-homogeneity tests. On the one hand, one is not certain whether a single (or pair of) individual rejection(s) of, say, the null hypothesis of a zero mean deviation in Ito's test would result in a rejection of micro-homogeneity overall. On the other hand, a rejection of micro-homogeneity does not tell us which groups are the most significant violators of the null hypothesis. It turns out that Ito's mean deviation test produces rejections at a level of 6 % or less for all groups at all horizons except for
36 Recall that our group results are not entirely comparable to Ito's (1990), since our data set, unlike his, combines insurance companies and trading companies into one group and life insurance companies and import-oriented companies into another group.
37 Chionis and MacDonald (1997) performed an Ito-type test on individual expectations data from Consensus Forecasts of London.
Table 43.22 Ito tests (1-month forecasts)
Individual regressions s^e_{i,t,h} − s^e_{m,t,h} = (a_{i,h} − a_m) + (e_{i,t,h} − e_{m,t}) for h = 2; degrees of freedom = 263

           i = 1 Banks     i = 2 Insurance and     i = 3 Export     i = 4 Life insurance and
           and brokers     trading companies       industries       import companies
a_{i,h}    0               0.001                   0.003            0.002
t (NW)     0.173           2.316                   5.471            3.965
p-value    0.863           0.021                   0                0
MH test, H0: a_{i,h} = a_{j,h} for all i, j ≠ i: χ²(GMM) = 40.946, p-value = 0

Table 43.23 Ito tests (3-month forecasts)
Individual regressions s^e_{i,t,h} − s^e_{m,t,h} = (a_{i,h} − a_m) + (e_{i,t,h} − e_{m,t}) for h = 6; degrees of freedom = 263

           i = 1           i = 2                   i = 3            i = 4
a_{i,h}    0.002           0.003                   0.008            0.002
t (NW)     2.307           3.986                   5.903            1.883
p-value    0.021           0                       0                0.06
MH test, H0: a_{i,h} = a_{j,h} for all i, j ≠ i: χ²(GMM) = 37.704, p-value = 0

Table 43.24 Ito tests (6-month forecasts)
Individual regressions s^e_{i,t,h} − s^e_{m,t,h} = (a_{i,h} − a_m) + (e_{i,t,h} − e_{m,t}) for h = 12; degrees of freedom = 263

           i = 1           i = 2                   i = 3            i = 4
a_{i,h}    0.004           0.003                   0.01             0
t (NW)     3.52            2.34                    4.549            0.392
p-value    0               0.019                   0                0.695
MH test, H0: a_{i,h} = a_{j,h} for all i, j ≠ i: χ²(GMM) = 23.402, p-value = 0.001
banks and brokers at the 1-month horizon and life insurance and import companies at the 6-month horizon.38 Since Ito’s regressions have a similar form (though not a similar economic interpretation) to the tests for univariate unbiasedness in Tables 43.7 and 43.8, it is not surprising that micro-homogeneity tests on the four-equation system of Ito equations produce rejections at a level of virtually zero for all three horizons.
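A hedged sketch of the Ito-type regression in Eq. 43.16, with a Newey-West t-test on the intercept; the input arrays and the statsmodels HAC options are illustrative assumptions rather than the authors' code:

import numpy as np
import statsmodels.api as sm

def ito_deviation_test(group_fcst, mean_fcst, h):
    """Regress the deviation of a group forecast from the cross-sectional mean
    forecast on a constant and return the Newey-West intercept estimate,
    t-statistic, and p-value."""
    dev = np.asarray(group_fcst) - np.asarray(mean_fcst)
    res = sm.OLS(dev, np.ones_like(dev)).fit(
        cov_type="HAC", cov_kwds={"maxlags": h - 1})
    return float(res.params[0]), float(res.tvalues[0]), float(res.pvalues[0])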
43.6
Conclusions
In this chapter, we undertake a reexamination of the rationality and diversity of JCIF forecasts of the yen-dollar exchange rate. In several ways we update and extend the seminal paper by Ito (1990). In particular, we have attempted to explore the nature of rationality tests on integrated variables. We show that tests based on the “conventional” bivariate regression in change form, while correctly specified in terms of integration accounting, have two major shortcomings. First, following Holden and Peel (1990), they are misspecified as unbiasedness tests, because rejection of the (0, 1) restriction on the slope and intercept is a sufficient, not a necessary, condition for unbiasedness. Only a zero restriction on the intercept in a regression of the forecast error on a constant is both necessary and sufficient for unbiasedness. Second, tests using the bivariate specification suffer from a lack of power. Yet, this is exactly what we would expect in an asset market whose price is a near random walk: the forecasted change is nearly unrelated to (and varies much less than) the actual change. In contrast, we conduct pretests for rationality based on determining whether the realization and forecast are each integrated and cointegrated. In this case, following Liu and Maddala (1992), a “restricted” cointegration test, which imposes a (0, 1) restriction on the cointegrating vector, is necessary for testing unbiasedness. (We show that the Holden and Peel (1990) critique does not apply if the regressor and regressand are cointegrated.) If a unit root in the restricted residual is rejected, then the univariate test which regresses the forecast error on a constant is equivalent to the restricted cointegration test. Testing this regression for white noise residuals is one type of weak efficiency test. Testing other stationary regressors in the information set for zero coefficients produces additional efficiency tests. In the univariate specification, we find that, for each group, the ability to produce unbiased forecasts deteriorates with horizon length: no group rejects unbiasedness at the 1-month horizon, but all groups reject at the 6-month horizon, because the forecast errors are nonstationary. Exporters consistently perform worse than the other industry groups, with a tendency toward depreciation bias. 38
Elliott and Ito (1999), who have access to forecasts for the 42 individual firms in the survey, find that, for virtually the same sample period as ours, the null hypothesis of a zero deviation from the mean forecast is rejected at the 5 % level by 17 firms at the 1-month horizon, 13 firms for the 3-month horizon, and 12 firms for the 6-month horizon. These authors do not report results by industry group.
Using only 2 years of data, Ito (1990) found the same result for exporters, which he described as a type of “wishful thinking.” The unbiasedness results are almost entirely reversed when we test the hypothesis using the conventional bivariate specification. That is, the joint hypothesis of zero intercept and unit slope is rejected for all groups at the 1-month horizon, but only for exporters and the 3- and 6-month horizons. Thus, in stark contrast to the univariate unbiasedness tests, as well as Ito’s (1990) bivariate tests, forecast performance does not deteriorate with increases in the horizon. Also, since Engle and Granger (1987) have showed that cointegrated variables have an error correction representation, we impose joint “unbiasedness” restrictions first used by Hakkio and Rush (1989) on the ECM. However, we show that these restrictions also represent sufficient, not necessary, conditions, so these tests could tend to over-reject. We then develop and test restrictions which are both necessary and sufficient conditions for unbiasedness. The test results confirm that the greater rate of rejections of the joint “unbiasedness” restrictions in the ECM is caused by the failure of the implicit restriction of weak efficiency with respect to the lagged forecast. When we impose the restriction that the coefficient of the forecast equals one, the ECM unbiasedness test results mimic those of the simple univariate unbiasedness tests. For this data set, at least, it does not appear that an ECM provides any value added over the simple unbiasedness test. Furthermore, since the error correction term is not statistically significant in any regressions, it is unclear whether the ECM provides any additional insight into the long-run adjustment mechanism of exchange rate changes. The failure of more general forms of weak efficiency is borne out by two types of explicit tests for weak efficiency. In the first type, we regress the forecast error on single and cumulative lags of mean forecast error, mean forecasted depreciation, and actual depreciation. We find many rejections of weak efficiency. In the second type, we use the Godfrey (1978) LM test for serial correlation of order h through h + 6 in the residuals of the forecast error regression. Remarkably, all but one LM test at a specified lag length produces a rejection at less than a 10 % level, with most at less than a 5 % level. (As in the case of the univariate unbiasedness test, all weak efficiency tests at the 6-month horizon fail due to the nonstationarity of the forecast error.) Whereas Ito (1990) and Elliott and Ito (1999) measured diversity as a statistically significant deviation of an individual’s forecast from the crosssectional average forecast, we perform a separate test of micro-homogeneity for each type of rationality test – unbiasedness as well as weak efficiency – that we first conducted at the industry level. In order to conduct the systems estimation and testing required for the micro-homogeneity test, our GMM estimation and inference make use of an innovative variance-covariance matrix that extends the Keane and Runkle (1990) counterpart from a pooled to an SUR-type structure. Our variancecovariance matrix takes into account not only serial correlation and heteroscedasticity at the individual level (via a Newey-West-Bartlett correction) but also forecaster cross-correlation up to h-1 lags. We document the statistical significance of the cross-sectional correlation using Pesaran’s (2004) CD test.
In the univariate unbiasedness tests, we find that, irrespective of the ability to produce unbiased forecasts at a given horizon, micro-homogeneity is rejected at virtually a 0 % level for all horizons. We find this result to be somewhat counterintuitive, in light of our prior belief that micro-homogeneity would be more likely to obtain if there were no rejections of unbiasedness. Evidently, there is sufficient variation in the estimated bias coefficient across groups and/or high precision of these estimates to make the micro-homogeneity test quite sensitive. Microhomogeneity is also strongly rejected in the weak efficiency tests. In contrast to the results with the univariate unbiasedness specification, microhomogeneity is not rejected at any horizon in the bivariate regressions. We conjecture that the imprecise estimation of the slope coefficient makes it difficult to reject joint hypotheses involving this coefficient. In conclusion, we recommend that all rationality tests be undertaken using simple univariate specifications at the outset (rather than only if the joint bivariate test is rejected, as suggested by Mincer and Zarnowitz (1969) and Holden and Peel (1990) and employed by Gavin (2003)). Before conducting such tests, one should test the restricted cointegrated regression residuals, i.e., the forecast error, for stationarity. Clearly, integration accounting and regression specification matter for rationality testing. While our rationality tests do not attempt to explain cross-sectional dispersion, the widespread rejection of micro-homogeneity in different specifications of unbiasedness and weak efficiency tests39 provides more motivation for the classification of forecasters into types (e.g., fundamentalist and chartist/noise traders) than for simply assuming a representative agent (with rational expectations). There are characteristics of forecasts other than rationality which are of intrinsic interest. Given our various rejections of rational expectations, it is natural to explore what expectational mechanism the forecasters use. Ito (1994) tested the mean JCIF forecasts for extrapolative and regressive expectations, as well as a mixture of the two.40 Cohen and Bonham (2006) extend this analysis using individual forecast-generating processes and additional learning model specifications. And, much of the literature on survey forecasts has analyzed the accuracy of predictions, typically ranking forecasters by MSE. One relatively unexplored issue is the statistical significance of the ranking, regardless of loss function. However, other loss functions, especially nonsymmetric ones, are also reasonable. For example, Elliott and Ito (1999) have ranked individual JCIF forecasters using a profitability criterion. As mentioned in Sect. 43.2.2, the loss function may incorporate strategic considerations that result in “rational bias.” Such an exploration would require more disaggregated data than the JCIF industry forecasts to which we have access.
39 We put less weight on the results of the weaker tests for micro-homogeneity in the bivariate regressions.
40 He also included regressors for adaptive expectations and the forward premium.
Appendix 1: Testing Micro-homogeneity with Survey Forecasts

The null hypothesis of micro-homogeneity is that the slope and intercept coefficients in the equation of interest are equal across individuals. This chapter considers the case of individual unbiasedness regressions such as Eq. 43.2 in the text, repeated here for convenience,

s_{t+h} − s_t = a_{i,h} + b_{i,h}(s^e_{i,t,h} − s_t) + e_{i,t,h}    (43.17)

and tests H0: a_1 = a_2 = . . . = a_N and b_1 = b_2 = . . . = b_N. Stack all N individual regressions into the Seemingly Unrelated Regression system

S = Fθ + e    (43.18)

where S is the NT × 1 stacked vector of realizations, s_{t+h}, and F is an NT × 2N block-diagonal data matrix:

F = diag(F_1, . . ., F_N)    (43.19)

Each F_i = [ι  s^e_{i,t,h}] is a T × 2 matrix of ones and individual i's forecasts, θ = [a_1 b_1 . . . a_N b_N]', and e is an NT × 1 vector of stacked residuals. The vector of restrictions, Rθ = r, corresponding to the null hypothesis of micro-homogeneity is normally distributed, with Rθ − r ~ N[0, R(F'F)^{-1}F'ΩF(F'F)^{-1}R'], where R is the 2(N−1) × 2N matrix whose pairs of rows equate the intercept and slope coefficients of adjacent forecasters,

R = [ 1  0  −1   0   0  . . .  0   0
      0  1   0  −1   0  . . .  0   0
      ⋮              ⋱              ⋮
      0  . . .  0   1   0  −1  0   0
      0  . . .  0   0   1   0  0  −1 ]    (43.20)

and r is a 2(N−1) × 1 vector of zeros. The corresponding Wald test statistic, (Rθ̂ − r)'[R(F'F)^{-1}F'Ω̂F(F'F)^{-1}R']^{-1}(Rθ̂ − r), is asymptotically distributed as a chi-square random variable with degrees of freedom equal to the number of restrictions, 2(N−1).

For most surveys, there are a large number of missing observations. Keane and Runkle (1990), Davies and Lahiri (1995), Bonham and Cohen (1995, 2001), and to the best of our knowledge all other papers which make use of pooled regressions in tests of the REH have dealt with the missing observations using the same approach. The pooled or individual regression is estimated by eliminating the missing data points in both the forecasts and the realization. The regression residuals are then padded with zeros in place of missing observations to allow for the calculation of own and cross-covariances. As a result, many individual variances and cross-covariances are calculated with relatively few pairs of residuals. These individual cross-covariances are then averaged. In Keane and Runkle (1990) and Bonham and Cohen (1995, 2001) the assumption of 2(k + 1) second moments, which are common to all forecasters, is made for analytical tractability and for increased reliability.

In contrast to the forecasts from the Survey of Professional Forecasters used in Keane and Runkle (1990) and Bonham and Cohen (1995, 2001), the JCIF data set contains virtually no missing observations. As a result, it is possible to estimate each individual's variance-covariance matrix (and cross-covariance matrix) rather than average over all individual variances and cross-covariance pairs as in the aforementioned papers. We assume that for each forecast group i,

E[e_{i,t,h} e_{i,t,h}] = σ²_{i,0} for all i, t,
E[e_{i,t,h} e_{i,t+k,h}] = σ²_{i,k} for all i, t, k such that 0 < k ≤ h,    (43.21)
E[e_{i,t,h} e_{i,t+k,h}] = 0 for all i, t, k such that k > h.

Similarly, for each pair of forecasters i and j, we assume

E[e_{i,t,h} e_{j,t,h}] = d_{i,j}(0) for all i, j, t,
E[e_{i,t,h} e_{j,t+k,h}] = d_{i,j}(k) for all i, j, t, k such that k ≠ 0 and −h ≤ k ≤ h,    (43.22)
E[e_{i,t,h} e_{j,t+k,h}] = 0 for all i, j, t, k such that |k| > h.

Thus, each pair of forecasters has a different T × T cross-covariance matrix:

P_{i,j} = [ d_{i,j}(0)    d_{i,j}(1)   . . .       d_{i,j}(h)    0        . . .  0
            d_{i,j}(−1)   d_{i,j}(0)   d_{i,j}(1)  . . .         ⋮
            ⋮             ⋱            ⋱           ⋱             d_{i,j}(1)
            0             . . .        d_{i,j}(−h) . . .         d_{i,j}(−1)     d_{i,j}(0) ]    (43.23)

Finally, note that P_{i,j} ≠ P_{j,i}; rather, P'_{i,j} = P_{j,i}. The complete variance-covariance matrix, denoted Ω, has dimension NT × NT, with matrices Q_i on the main diagonal and P_{i,j} off the diagonal. The individual Q_i variance-covariance matrices are calculated using the Newey and West (1987) heteroscedasticity-consistent, MA(j)-corrected form. The P_{i,j} matrices are estimated in an analogous manner.
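As a rough illustration of the banded own- and cross-covariance blocks in Eqs. 43.21-43.23, the sketch below builds a T × T block from a pair of residual series, truncating at lag h and applying Bartlett weights. It uses constant covariances per lag rather than the heteroscedasticity-consistent form actually employed, so it is an approximation of the structure, not the authors' estimator:

import numpy as np

def banded_covariance_block(e_i, e_j, h):
    """Return a T x T banded block built from cross-covariances d_{i,j}(k) of the
    residual series e_i and e_j, set to zero beyond lag h and weighted with
    Newey-West (Bartlett) weights; the own block Q_i obtains when e_j is e_i."""
    e_i, e_j = np.asarray(e_i), np.asarray(e_j)
    T = len(e_i)
    block = np.zeros((T, T))
    for k in range(-h, h + 1):
        w = 1.0 - abs(k) / (h + 1.0)                 # Bartlett weight (an assumption here)
        if k >= 0:
            d_k = np.mean(e_i[:T - k] * e_j[k:])     # d_{i,j}(k) = E[e_{i,t} e_{j,t+k}]
        else:
            d_k = np.mean(e_i[-k:] * e_j[:T + k])
        block += w * d_k * np.eye(T, k=k)            # place on the k-th diagonal band
    return block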
References Banerjee, A., Dolado, J. J., Galbraith, J. W., & Hendry, D. F. (1993). Co-integration, error correction and the econometric analysis of non-stationary data. Oxford: Oxford University Press. Batchelor, R., & Dua, P. (1990a). Forecaster ideology, forecasting technique, and the accuracy of economic forecasts. International Journal of Forecasting, 6, 3–10.
Batchelor, R., & Dua, P. (1990b). Product differentiation in the economic forecasting industry. International Journal of Forecasting, 6, 311–316. Batchelor, R., & Dua, P. (1992). Conservatism and consensus-seeking among economic forecasters. Journal of Forecasting, 11, 169–181. Batchelor, R., & Peel, D. A. (1998). Rationality testing under asymmetric loss. Economics Letters, 61, 49–54. Bonham, C. S., & Cohen, R. H. (1995). Testing the rationality of price forecasts: Comment. American Economic Review, 85(1), 284–289. Bonham, C. S., & Cohen, R. H. (2001). To aggregate, pool, or neither: Testing the rationalexpectations hypothesis using survey data. Journal of Business & Economic Statistics, 19, 278–291. Boothe, P., & Glassman, D. (1987). Comparing exchange rate forecasting models. International Journal of Forecasting, 3, 65–79. Breusch, T. S. (1978). Testing for autocorrelation in dynamic linear models. Australian Economic Papers, 17, 334–355. Breusch, T. S., & Pagan, A. R. (1980). The Lagrange multiplier test and its application to model specifications in econometrics. Review of Economic Studies, 47, 239–253. Bryant, R. C. (1995). The “exchange risk premium,” uncovered interest parity, and the treatment of exchange rates in multicountry macroeconomic models. Technical report, The Brookings Institution. Cavaglia, S. M. F. G., Verschoor, W. F. C., & Wolff, C. C. P. (1994). On the biasedness of forward foreign exchange rates: Irrationality or risk premia? Journal of Business, 67, 321–343. Chionis, D., & MacDonald, R. (1997). Some tests of market microstructure hypotheses in the foreign exchange market. Journal of Multinational Financial Management, 7, 203–229. Clements, M. P., & Hendry, D. F. (1998). Forecasting economic time series. Cambridge: Cambridge University Press. Cohen, R. H., & Bonham, C. S. (2006). Specifying the forecast generating process for exchange rate survey forecasts. Working paper, Department of Economics, University of Hawaii Manoa. Davies, A., & Lahiri, K. (1995). A new framework for testing rationality and measuring aggregate shocks using panel data. Journal of Econometrics, 68, 205–227. Dickey, D. A., & Fuller, W. A. (1979). Distribution of the estimators for autoregressive time series with a unit root. Journal of the American Statistical Association, 74, 427–431. Ehrbeck, T., & Waldmann, R. (1996). Why are professional forecasters biased? Agency versus behavioral explanations. The Quarterly Journal of Economics, 111, 21–40. Elliott, G., & Ito, T. (1999). Heterogeneous expectations and tests of efficiency in the yen/dollar forward exchange rate market. Journal of Monetary Economics, 43, 435–456. Engle, R., & Granger, C. W. S. (1987). Cointegration and error correction representation, estimation and testing. Econometrica, 55, 251–276. Figlewski, S. (1978). Market “efficiency” in a market with heterogeneous information. Journal of Political Economy, 86, 581–597. Figlewski, S. (1982). Information diversity and market behavior. Journal of Finance, 37, 87–102. Figlewski, S. (1984). Information diversity and market behavior: A reply. Journal of Finance, 39, 299–302. Figlewski, S., & Wachtel, P. (1983). Rational expectations, informational efficiency, and tests using survey data: A reply. Review of Economics and Statistics, 65, 529–531. Frankel, J. A., & Froot, K. A. (1987). Using survey data to test standard propositions regarding exchange rate expectations. American Economic Review, 77, 133–153. Frankel, J. A., & Froot, K. A. (1990). 
Chartists, fundamentalists, and trading in the foreign exchange market. American Economic Review, 80, 181–185. Froot, K. A., & Frankel, J. A. (1989). Forward discount bias: Is it an exchange risk premium? Quarterly Journal of Economics, 104, 139–161. Gavin, W. T. (2003) FOMC forecasts: Is all the information in the central tendency? Economic Review, Federal Reserve Bank of St. Louis, May/June, 27–46.
Godfrey, L. G. (1978). Testing for higher order serial correlation in regression equations when the regressors include lagged dependent variables. Econometrica, 46, 1303–1310. Goldberg, M. D., & Frydman, R. (1996). Imperfect knowledge and behaviour in the foreign exchange market. Economic Journal, 106, 869–893. Granger, C. W. J. (1991). Developments in the study of cointegrated economic variables. In R. F. Engle & C. W. J. Granger (Eds.), Long run economic relationships: Readings in Cointegration (pp. 65–80). Oxford: Oxford University Press. Hakkio, C. S., & Rush, M. (1989). Market efficiency and cointegration: An application to the sterling deutschemark exchange markets. Journal of International Money and Finance, 8, 75–88. Haltiwanger, J. C., & Waldman, M. (1989). Rational expectations in the aggregate. Economic Inquiry, 27, 619–636. Hamilton, J. (1994). Time series analysis. Princeton: Princeton University Press. Hansen, L. P., & Hodrick, R. J. (1980). Forward exchange rates as optimal predictors of future spot rates: An econometric analysis. Journal of Political Economy, 88, 829–853. Holden, K., & Peel, D. A. (1990). On testing for unbiasedness and efficiency of forecasts. Manchester School, 58, 120–127. Ito, T. (1990). Foreign exchange rate expectations: Micro survey data. American Economic Review, 80, 434–449. Ito, T. (1994). Short-run and long-run expectations of the yen/dollar exchange rate. Journal of the Japanese and International Economies, 8, 119–143. Keane, M. P., & Runkle, D. E. (1990). Testing the rationality of price forecasts: New evidence from panel data. American Economic Review, 80, 714–735. Kirman, A. P. (1992). Whom or what does the representative individual represent? Journal of Economic Perspectives, 6, 117–136. Kiviet, J. F. (1987). Testing linear econometric models. Ph.D. Thesis, University of Amsterdam. Lai, K. (1990). An evaluation of survey exchange rate forecasts. Economics Letters, 32, 61–65. Lamont, O. (2002). Macroeconomic forecast and microeconomic forecasters. Journal of Economic Behavior and Organization, 48, 265–280. Laster, D., Bennett, P., & Geoum, I. S. (1999). Rational bias in macroeconomic forecasts. Quarterly Journal of Economics, 293–318. LeBaron, B. (2000) Technical trading profitability in foreign exchange markets in the 1990’s. Technical report, Brandeis U. Leitch, G., & Tanner, J. E. (1991). Economic forecast evaluation: Profits versus the conventional error measures. The American Economic Review, 81, 580–590. Liu, P., & Maddala, G. S. (1992). Rationality of survey data and tests for market efficiency in the foreign exchange markets. Journal of International Money and Finance, 11, 366–381. Lyons, R. K. (2002). Foreign exchange: Macro puzzles, micro tools. Economic Review, Federal Reserve Bank of San Francisco, 51–69. MacDonald, R. (1992). Exchange rate survey data: A disaggregated G-7 perspective. The Manchester School, LX(Supplement), 47–62. Maynard, A., & Phillips, P. C. B. (2001). Rethinking and old empirical puzzle: Econometric evidence on the forward discount anomaly. Journal of Applied Econometrics, 16, 671–708. Mincer, J., & Zarnowitz, V. (1969). The evaluation of economic forecasts. In Economic forecasts and expectations. New York: National Bureau of Economic Research. Muth, J. (1961). Rational expectations and the theory of price movements. Econometrica, 29, 315–335. Newey, W., & West, K. (1987). A simple positive semi-definite heteroscedasticity and autocorrelation consistent covariance matrix. Econometrica, 55, 703–708. 
Osterberg, W. P. (2000). New results on the rationality of survey measures of exchange-rate expectations. Federal Reserve Bank of Cleveland Economic Review, 36, 14–21. Pesaran, H. M. (2004). General diagnostic tests for cross section dependence in panels. CESifo working paper series 1229, CE-Sifo Group Munich.
Pesaran, H. M., & Weale, M. (2006). Survey expectations. In G. E. C. W. J. Granger & A. Timmermann (Eds.), Handbook of economic forecasting. Amsterdam: North-Holland. Phillips, P., & Hansen, B. E. (1990). Statistical inference in instrumental variables regression with I(1) processes. Review of Economic Studies, 57, 99–125. Phillips, P. C. B., & Maynard, A. (2001). Rethinking an old empirical puzzle: Econometric evidence on the forward discount anomaly. Journal of Applied Econometrics, 16, 671–708. Pilbeam, K. (1995). Exchange rate models and exchange rate expectations: an empirical investigation. Applied Economics, 27, 1009–1015. Saikkonen, P. (1991). Asymptotically efficient estimation of cointegration regressions. Econometric Theory, 7, 1–21. Stockman, A. C. (1987). Economic theory and exchange rate forecasts. International Journal of Forecasting, 3, 3–15. Theil, H. (1966). Applied economic forecasting. Amsterdam: North-Holland. Zacharatos, N., & Sutcliffe, C. (2002). Is the forward rate for the Greek Drachma unbiased? A VECM analysis with both overlapping and non-overlapping data. Journal of Financial Management and Analysis, 15, 27–37. Zarnowitz, V. (1985). Rational expectations and macroeconomic forecasts. Journal of Business & Economic Statistics, 3, 293–311. Zellner, A (1962). An efficient method of estimating seemingly unrelated regressions and tests for aggregation bias. Journal of the American Statistical Association, June, 348–368. Zellner, A. (1986). Biased predictors, rationality and the evaluation of forecasts. Economics Letters, 21, 45–48. Zivot, E. (2000). Cointegration and forward and spot exchange rate regressions. Journal of International Money and Finance, 19, 785–812.
44 Stochastic Volatility Structures and Intraday Asset Price Dynamics
Gerard L. Gannon
Contents
44.1 Introduction  1250
44.2 Stochastic Volatility and GARCH  1252
44.3 Serial Correlation in Returns  1255
44.4 Persistence, Co-Persistence, and Non-Normality  1258
  44.4.1 Case 1  1260
  44.4.2 Case 2  1260
44.5 Weighted GARCH  1260
44.6 Empirical Examples  1261
  44.6.1 Index Futures, Market Index, and Stock Price Data  1261
  44.6.2 Estimates of the Autoregressive Parameters  1263
  44.6.3 Conditional Variance Estimates  1264
  44.6.4 Weighted GARCH Estimates  1267
  44.6.5 Discussion  1270
44.7 Conclusion  1273
Appendix  1274
References  1275
Abstract
The behavior of financial asset price data when observed intraday is quite different from these same processes observed from day to day and longer sampling intervals. Volatility estimates obtained from intraday observed data can be badly distorted if anomalies and intraday trading patterns are not accounted for in the estimation process. In this paper I consider conditional volatility estimators as special cases of a general stochastic volatility structure. The theoretical asymptotic distribution of the measurement error process for these estimators is considered for particular
G.L. Gannon
Deakin University, Burwood, VIC, Australia
e-mail: [email protected]; [email protected]
features observed in intraday financial asset price processes. Specifically, I consider the effects of (i) induced serial correlation in returns processes, (ii) excess kurtosis in the underlying unconditional distribution of returns, (iii) market anomalies such as market opening and closing effects, and (iv) failure to account for intraday trading patterns. These issues are considered with applications in option pricing/trading strategies and the constant/dynamic hedging frameworks in mind. Empirical examples are provided from transactions data sampled into 5-, 15-, 30-, and 60-min intervals for heavily capitalized stock market, market index, and index futures price processes. Keywords
ARCH • Asymptotic distribution • Autoregressive parameters • Conditional variance estimates • Constant/dynamic hedging • Excess kurtosis • Index futures • Intraday returns • Market anomalies • Maximum likelihood estimates • Misspecification • Mis-specified returns • Persistence • Serial correlation • Stochastic volatility • Stock/futures • Unweighted GARCH • Volatility co-persistence
44.1 Introduction
One issue considered in Nelson (1990a) is whether it is possible to formulate an ARCH data generation process that is similar to the true process, in the sense that the distribution of the sample paths generated by the ARCH structure and the underlying diffusion process becomes “close” for increasingly finer discretizations of the observation interval. Maximum likelihood estimates are difficult to obtain from stochastic differential equations of time-varying volatility common in the finance literature. If the results in Nelson hold for “real-time” data when ARCH structures approximate a diffusion process, then these ARCH structures may be usefully employed in option pricing equations. In this paper I consider the ARCH structure as a special case of a general stochastic volatility structure. One advantage of an ARCH structure over a general stochastic volatility structure lies in computational simplicity. In the ARCH structure, it is not necessary for the underlying processes to be stationary or ergodic. The crucial assumption in an option pricing context is that these assumed processes approach a diffusion limit. These assumed diffusion limits have been derived for processes assumed to be observed from day-to-day records. Given that market anomalies such as market opening and market closing effects exist, any volatility structure based on observations sampled on a daily basis will provide different volatility estimates. Evidence of these intraday anomalies and effects on measures of constant volatility is reported in Edwards (1988) and Duffie et al. (1990). Brown (1990) argues that the use of intraday data in estimating volatility within an option pricing framework leads to volatility estimates that are too low. This can be overcome by rescaling, assuming the anomalies are accounted for.
Better estimates of volatility may then be obtained by employing intraday observations and allowance made for anomalies and trading activity within conditional volatility equations. However, mis-specifications in either or both first- and second-moment equations may mitigate against satisfying the conditions for an approximate diffusion limit. Then it is important to investigate cases where the diffusion limit is not attainable and identify the factors which help explain the behavior of the process. If these factors can be accounted for in the estimation process then these formulations can be successfully employed in options pricing and trading strategies. The specific concern in this paper is the effect on the asymptotic distribution of the measurement error process and on parameter estimates, obtained from the Generalized ARCH (GARCH(1,1)) equations for the conditional variance, as the observation interval approaches transactions records (d!0). Three issues are considered for cases where the diffusion limit may not be achieved at these observation intervals. The first issue is the effect of mis-specifying the dynamics of the first-moment-generating equation on resultant GARCH(1,1) parameter estimates. The second issue is the effect on measures of persistence obtained from the GARCH structure when increasing kurtosis is induced in the underlying unconditional distribution as d!0. This leads to a third issue which is concerned with evaluating effects of inclusion of weighting (mixing) variables on parameter estimates obtained from these GARCH(1,1) equations. If these mixing variables are important then standard, GARCH equation estimates will be seriously distorted. These mixing variables may proxy the level of activity within particular markets or account for common volatility of assets trading in the same market. Sampling the process too finely does result in induced positive or negative serial correlation in return processes. The main distortion to the basis change is generated from cash index return equations. However, the dominant factor distorting unweighted GARCH estimates is induced excess kurtosis in unconditional distributions of returns. Many small price changes are dominated by occasional large price changes. This effect leads to large jumps in the underlying distribution causing continuity assumptions for higher derivatives of the conditional variance function to break down. These observations do not directly address issues related to intraday market trading activity and possible contemporaneous volatility effects transmitted to and from underlying financial asset price processes. This effect was considered within the context of a structural Simultaneous Volatility (SVL) model in Gannon (1994). Further results for the SVL model are documented, along with results of parameter estimates, in Gannon (2010). In this paper the intraday datasets on cash index and futures price processes from Gannon (2010) are again employed to check the effects on parameter estimates obtained from GARCH and weighted GARCH models. A set of intraday sampled stock prices are also employed in this paper. The relative importance of mis-specification of the second-moment equation dynamics over mis-specification of first-moment equation dynamics is the most important issue. If intraday trading effects are important, this has
implications for smoothness and continuity assumptions necessary in deriving diffusion limit results for the unweighted GARCH structure. If these effects are severe then an implied lower sampling boundary needs to be imposed in order to obtain sensible results. This is because the measure of persistence obtained from GARCH structures may approach the Integrated GARCH (IGARCH) boundary and become explosive or conditional heteroskedasticity may disappear. This instability can be observed when important intraday anomalies such as market opening and closing effects are not accounted for within the conditional variance specifications. Distortions to parameter estimates are most obvious when conditional second-moment equations are mis-specified by failure to adequately account for observed intraday trading patterns. These distortions can be observed across a wide class of financial assets and markets. If these financial asset price processes have active exchange traded options contracts written on these “underlying assets,” it is important to study the intraday behavior of these underlying financial asset price processes. If systematic features of the data series can be identified then it is possible to account for these features in the estimation process. Then intraday estimates of volatility obtained from conditional variance equations, which incorporate structural effects in first and/or conditional second-moment equations, can be usefully employed in option pricing equations. Identifying intraday anomalies and linking trading activity to contemporaneous volatility effects mean the improved estimator can be employed within a trading strategy. This can involve analysis of the optimal time to buy options within the day to minimize premium cost. Alternatively, optimal buy or sell straddle strategies based on comparison of estimated volatility estimates relative to market implied volatility can be investigated. In this paper I focus on theoretical results which can explain the empirically observed behavior of these estimators when applied to intraday financial asset price processes. I start by summarizing relevant results which are currently available for conditional variance structures as the observation interval reduces to daily records (h!0). These results are modified and extended in order to accommodate intraday observation intervals. The alternative first-moment-generating equations are described and the basis change defined and discussed within the context of the co-persistence structure. I then focus on the general GARCH structure and state some further results for specific cases of the GARCH and weighted GARCH (GARCH-W) structure.
44.2 Stochastic Volatility and GARCH
Nelson and Foster (1994) derive and discuss properties for the ARCH process as the observation interval reduces to daily records (h → 0) when the underlying process is driven by an assumed continuous diffusion process. Nelson and Foster (1991) generalized a Markov process with two state variables, ${}_hX_t$ and ${}_h\sigma^2_t$, only one of which, ${}_hX_t$, is ever directly observable. The conditional variance ${}_h\sigma^2_t$ is defined conditional on the increments in ${}_hX_t$ per unit time and conditional on an information set ${}_hB_t$. Modifying the notation from h to d (to account for intraday discretely
observed data), and employing the notation $d \to 0$ to indicate reduction in the observation interval from above, when their assumptions 2, 3, and 10 hold and $d$ is small, $({}_dX_t, \varphi({}_d\sigma^2_t))$ is referred to as a near diffusion if for any $T$, $0 \le T < \infty$, $({}_dX_t, \varphi({}_d\sigma^2_t))_{0 \le t \le T} \Rightarrow (X_t, \varphi(\sigma^2_t))_{0 \le t \le T}$. If we assume these data-generating processes are near diffusions, then the general discrete time stochastic volatility structure, defined in Nelson and Foster (1991), may be described using the following modified notation:

$$
\begin{bmatrix} {}_dX_{(k+1)d} \\ \varphi\big({}_d\sigma^2_{(k+1)d}\big) \end{bmatrix}
=
\begin{bmatrix} {}_dX_{kd} \\ \varphi\big({}_d\sigma^2_{kd}\big) \end{bmatrix}
+ d
\begin{bmatrix} \mu({}_dX_{kd},\,{}_d\sigma_{kd}) \\ \lambda({}_dX_{kd},\,{}_d\sigma_{kd}) \end{bmatrix}
+ d^{1/2}
\begin{bmatrix} {}_d\sigma^2_{kd} & \Lambda_{\varphi,x}({}_dX_{kd},\,{}_d\sigma_{kd}) \\ \Lambda_{\varphi,x}({}_dX_{kd},\,{}_d\sigma_{kd}) & \Lambda^2({}_dX_{kd},\,{}_d\sigma_{kd}) \end{bmatrix}^{1/2}
\begin{bmatrix} {}_dZ_{1,kd} \\ {}_dZ_{2,kd} \end{bmatrix}
\tag{44.1}
$$

where $({}_dZ_{1,kd}, {}_dZ_{2,kd})_{k=0,1,\ldots}$ is i.i.d. with mean zero and identity covariance matrix. In Eq. 44.1, $d$ is the size of the observation interval, $X$ may describe the asset price return, and $\sigma^2$ the volatility of the process. It is not necessary to assume the data-generating processes are stationary or ergodic, but the crucial assumption is that the data-generating processes are near diffusions. In the ARCH specification, ${}_dZ_{2,kd}$ is a function of ${}_dZ_{1,kd}$ so that ${}_d\sigma^2_{kd}$ (${}_dh_{kd}$) can be inferred from past values of the one observable process ${}_dX_{kd}$. This is not true for a general stochastic volatility structure where there are two driving noise terms.

For the first-order Markov ARCH structure, estimates ${}_d\hat\sigma^2_t$ (${}_d\hat h_t$) of the conditional variance process ${}_d\sigma^2_t$ (${}_dh_t$) are defined through a strictly increasing function $f(\sigma^2)$, and estimates of the conditional mean per unit of time of the increments in $X$ and $f(\sigma^2)$ are defined as $\hat\mu(x,\sigma)$ and $\hat\kappa(x,\sigma)$. Estimates of ${}_d\sigma^2_{kd}$ are updated by the recursion:

$$
f\big({}_d\hat\sigma^2_{(k+1)d}\big) = f\big({}_d\hat\sigma^2_{kd}\big)
+ d\,\hat\kappa\big({}_dX_{kd},\,{}_d\hat\sigma_{kd}\big)
+ d^{1/2}\,\hat a\big({}_dX_{kd},\,{}_d\hat\sigma_{kd}\big)\, g\big({}_dZ_{1,kd};\,{}_dX_{kd},\,{}_d\hat\sigma_{kd}\big)
\tag{44.2}
$$

where $\hat\kappa(\cdot)$, $\hat a(\cdot)$, $\hat\mu(\cdot)$, and $g(\cdot)$ are continuous on bounded $(\varphi(\sigma^2), x)$ sets and $g(z_1, x, \sigma^2)$ is assumed continuous everywhere, with the first three derivatives of $g$ with respect to $z_1$ well defined and bounded. The function $g({}_dZ_{1,kd}, \cdot)$ is normalized to have mean zero and unit conditional variance. Nonzero drifts in $f({}_d\sigma^2_{kd})$ are allowed for in the $\hat\kappa(\cdot)$ term, and non-unit conditional variances are accounted for in the $\hat a(\cdot)$ term. The second term on the right measures the change in $f({}_d\sigma^2_{kd})$ forecast by the ARCH structure, while the last term measures the surprise change.
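To make the discretization concrete, the following is a minimal sketch, not the author's code, of simulating a two-equation system in the spirit of Eq. 44.1 and filtering it with a GARCH(1,1)-type recursion of the form of Eq. 44.2. The interval length, mean-reversion speed, noise correlation, and filter parameters are all illustrative assumptions.

```python
import numpy as np

# Minimal sketch: simulate a discrete-time stochastic volatility system in the
# spirit of Eq. 44.1 (with phi(x) = ln(x)) and filter it with a GARCH(1,1)-type
# recursion in the spirit of Eq. 44.2.  All parameter values are assumptions.

rng = np.random.default_rng(0)
d = 1.0 / 78          # observation interval (e.g., 5-min bars in a 6.5-hour day)
n = 10_000            # number of intraday observations
rho = 0.5             # correlation between the two driving noises

sig2 = np.empty(n); sig2[0] = 0.04    # true (unobservable) conditional variance
x = np.zeros(n)                        # observable log price
for k in range(n - 1):
    z1 = rng.standard_normal()
    z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal()
    x[k + 1] = x[k] + np.sqrt(d * sig2[k]) * z1                       # return equation
    log_s2 = (np.log(sig2[k])
              + d * 2.0 * (np.log(0.04) - np.log(sig2[k]))            # mean reversion
              + np.sqrt(d) * 0.5 * z2)                                # volatility shock
    sig2[k + 1] = np.exp(log_s2)

returns = np.diff(x, prepend=0.0)

# GARCH(1,1)-type filter for the per-interval variance: h_{t+1} = omega + alpha*e_t^2 + beta*h_t
omega, alpha, beta = 0.05 * 0.04 * d, 0.05, 0.90
h = np.empty(n); h[0] = returns.var()
for t in range(n - 1):
    h[t + 1] = omega + alpha * returns[t]**2 + beta * h[t]

err = h - d * sig2                     # filter error relative to the true variance
print("mean absolute filter error:", np.mean(np.abs(err)))
```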
The GARCH(1,1) structure is obtained by setting $f(\sigma^2) = \sigma^2$, $\hat\kappa(x,\sigma) = \omega - \theta\sigma^2$, $g(z_1) = (z_1^2 - 1)/\mathrm{SD}(z_1^2)$, and $a(x,\sigma) = \alpha\,\sigma^2\,\mathrm{SD}(z_1^2)$. The parameters $\omega$, $\theta$, and $\alpha$ index a family of GARCH(1,1) structures. In a similar manner the EGARCH volatility structure can be defined. That is, by selecting $f(\cdot)$, $\hat\kappa(\cdot)$, $g(\cdot)$, and $a(\cdot)$, various "filters" can be defined within this framework. Other conditions in Nelson and Foster (1991) define the rate at which the normalized measurement error process ${}_dQ_t$ mean reverts relative to $({}_dX_t, {}_d\sigma^2_t)$. It becomes white noise as $d \to 0$ on a standard time scale but operates on a faster time scale, mean-reverting more rapidly. The asymptotically optimal choice of $a(\cdot)$ and $f(\cdot)$ given $g(\cdot)$ can be considered with respect to minimizing the asymptotic variance of the measurement error. This is considered on a faster time scale $(T + t)$ than $T$. The asymptotically optimal choice of $g(\cdot)$ depends upon the assumed relationship between $Z_1$ and $Z_2$. In the ARCH structure, $Z_2$ is a function of $Z_1$ so that the level driving ${}_d\sigma^2_t$ can be recovered from shocks driving ${}_dX^2_t$. Without further structure in the equation specified for $\sigma^2_t$, we are unable to recover information about changes in $\sigma^2_t$. Their discussion is strictly in terms of constructing a sequence of optimal ARCH filters which minimize the asymptotic variance of the asymptotic distribution of the measurement errors. This approach is not the same as choosing an ARCH structure that minimizes the measurement error variance for each $d$. The asymptotic distribution of the measurement error process, for large $t$ and small $d$,
$$
{}_dQ_{T+td^{1/2}} \equiv d^{-1/2}\Big({}_d\hat\sigma^2_{T+td^{1/2}} - {}_d\sigma^2_{T+td^{1/2}}\Big),
\qquad \big({}_dQ_T,\,{}_d\sigma^2_T,\,{}_dX_T\big) = \big(q,\,\sigma^2,\,x\big),
$$

with derivatives evaluated as $f'({}_d\sigma^2_T)$, $\varphi'({}_d\sigma^2_T)$, etc., and the notation simplified to $f'$ and $\varphi'$, is approximately normal with mean

$$
d^{1/2}\,\big(2\sigma^2\varphi'\big)\,
\frac{\big[\hat\kappa_d/\varphi' - a^2\varphi''/2(\varphi')^3\big] - \big[\lambda_d/f' + \Lambda^2 f''/2(f')^3\big] + \big(\mu_d - \hat\mu_d\big)\,E[g_z]}
{a\,E[Z_1\, g_z]}
\tag{44.3}
$$

and variance

$$
d^{1/2}\,\big(2\sigma^2 f'\big)\,
\frac{a^2/f' + \Lambda^2/(\varphi')^2 - 2a\Lambda\,\mathrm{Cov}(Z_2, g)\big/\big[f'\varphi'\big]}
{a\,E[Z_1\, g_z]}
\tag{44.4}
$$
General results in Nelson and Foster (1991, 1994) for the GARCH(1,1) structure are that the conditional variance can be more accurately measured, first, the less variable and the smaller is ${}_d\sigma^2_t$; second, the thinner the tails of $Z_1$; and third, the more the true data-generating mechanism resembles an ARCH
structure as opposed to a stochastic volatility structure. If the true data-generating process is GARCH(1,1), then Corr$(Z_1^2, Z_2) = 1$. As $d \to 0$, the first result will generally hold, and the second can be checked from the unconditional distribution of the returns process. The latter result is the most difficult to evaluate. Now reconsider some assumptions necessary to obtain these results and the reasons these assumptions may not hold when $d \to 0$.

(i) The difference between the estimated and true drift in the mean, $\big[{}_d\hat\mu_t - {}_d\mu_t\big]$, is assumed fixed as $d \to 0$, so that the effect of mis-specifying this drift vanishes at rate $d^{1/2}$ and is negligible asymptotically. These terms do not appear in the expression for the variance of the asymptotic distribution of the measurement error. As $d \to 0$, the effect of bid/ask bounce and order splitting in futures price processes, and of non-trading-induced effects on market indices, becomes more severe. Mis-specification of the drift in the mean is not constant. Whether this effect transfers to estimates of conditional variances is an empirical issue.

(ii) The conditional variance of the increments in ${}_d\sigma^2_t$ involves the fourth moment of ${}_dZ_{1,kd}$, so that the influence of this fourth moment remains as the diffusion limit is approached. Excess kurtosis is a feature of intraday financial price changes.

(iii) Values of $\hat\kappa_d$ and $\lambda_d$ are considered fixed as $d \to 0$, so that the effect of mis-specifying the drift in $\sigma^2$ also vanishes at rate $d^{1/2}$. As well, although these drift terms enter the expression for the asymptotic bias of the measurement error, they do not appear in the expression for the asymptotic variance. The term $g_z$ represents part of the "surprise" change in the recursion defined in Eq. 44.2 and is directly linked to the departures from normality observed in point (ii). These departures from normality can be generated by extremes in $Z_1$ induced by large jumps in the underlying distribution. In this case first and second derivatives of $f$ may be discontinuous throughout the sample space as well. Then the expression for the bias in this asymptotic distribution of the measurement errors may be explosive.

(iv) The ARCH specification of the drift in mean and variance only enters the $O_p(d^{1/2})$ terms of the measurement error. Asymptotically, the differences in the conditional variance specifications are more important, appearing in the $O_p(d^{1/4})$ terms. If the conditional variance specification is not correct, then the measurement error variance is affected. This is because matching the ARCH and the true variance of the variance cannot proceed.

I will consider these issues further by generalizing a theoretical framework in which to address each in the above order. First, I consider the relationship between serial correlation in returns on the market index, the index futures, and the basis change as $d \to 0$. Second, these effects are considered in the context of the co-persistence structure for the basis change. Finally, I consider effects on conditional variance parameter estimates when mixing and weighting variables are included in the equations for the conditional variance.
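Point (ii), that excess kurtosis builds up as the sampling interval shrinks, can be checked directly from the unconditional distribution of returns. The sketch below, written under an assumed tick-data layout (a pandas Series of trade prices indexed by timestamp, not the chapter's dataset), resamples prices at several intraday intervals and reports the excess kurtosis of the resulting log returns.

```python
import numpy as np
import pandas as pd

def excess_kurtosis(x):
    # sample excess kurtosis: E[(x - mean)^4] / (E[(x - mean)^2])^2 - 3
    x = x - x.mean()
    return (x**4).mean() / (x**2).mean()**2 - 3.0

def kurtosis_by_interval(ticks: pd.Series, intervals=("30min", "15min", "5min")):
    out = {}
    for freq in intervals:
        px = ticks.resample(freq).last().dropna()   # last trade in each interval
        ret = np.log(px).diff().dropna()
        out[freq] = excess_kurtosis(ret.to_numpy())
    return out

# Example with simulated ticks (replace with actual transaction data):
rng = np.random.default_rng(0)
idx = pd.date_range("1992-01-02 10:00", periods=50_000, freq="5s")
ticks = pd.Series(100 * np.exp(np.cumsum(0.0002 * rng.standard_t(4, size=len(idx)))), index=idx)
print(kurtosis_by_interval(ticks))
```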
44.3 Serial Correlation in Returns
Consider a first-order autoregressive process for the spot asset price return (an AR(D) representation):

$$
s_t = \rho_1 s_{t-1} + e_t
\tag{44.5}
$$

where $s_t$ is the return on the asset (the difference in the natural logarithm of the spot asset price levels or the difference in the spot asset price levels) between time $t$ and $t-1$, $\rho_1$ is the first-order serial correlation coefficient between $s$ at time $t$ and $t-1$, and $e_t$ is an assumed homoskedastic disturbance term with variance $\sigma^2_e$, with $-1 < \rho_1 < 1$. This equation provides an approximation to the time-series behavior of the spot asset price return. An alternative specification to Eq. 44.5 is a simple autoregressive equation for the level of the spot asset price $S$ (or natural logarithm of the levels) at time $t$ and $t-1$ (an AR(L) representation):

$$
S_t = \phi_1 S_{t-1} + a_t
\tag{44.6}
$$

where $a_t$ is an assumed homoskedastic disturbance term with variance $\sigma^2_a$, and the value of $\phi_1$ may be greater or less than one. The assumed bid/ask bounce in futures price changes is approximated by an MA(1) process in Miller et al. (1994). For the index futures price, this specification is

$$
f_t = a_t + \theta_1 a_{t-1},
\tag{44.7}
$$

where $f_t$ is the index futures price change and $a_t$ is an assumed mean zero, serially uncorrelated shock variable with a homoskedastic variance $\sigma^2_a$, with $-1 < \theta_1 < 0$. The basis change is defined as

$$
b_t = f_t - i_t,
\tag{44.8}
$$

where $f$ and $i$ are the index futures return and index portfolio return, respectively. Whether shocks generated from Eqs. 44.5 or 44.6 generate differences in parameter estimates and measures of persistence obtained from conditional second-moment equations is an issue. Measures of persistence for the basis change may be badly distorted from employing index futures and index portfolio changes. Measures of persistence for index futures and index portfolios may be badly distorted by employing observed price changes. The simplest GARCH structure derived from Eq. 44.1 for the conditional variance is the GARCH(1,1):

$$
h_t = \omega + \alpha_1 e^2_{t-1} + \beta_1 h_{t-1}
\tag{44.9}
$$

where $h_t$ ($\sigma^2_t$) is the conditional variance at time $t$ and $e^2_{t-1}$ are squared unconditional shocks generated from any assumed first-moment equation, with $0 \le \alpha_1, \beta_1 \le 1$ and
$\alpha_1 + \beta_1 \le 1$. This parameterization is a parsimonious representation of an ARCH($p'$) process where a geometrically declining weighting pattern on lags of $e^2$ is imposed. This is easily seen by successive substitution for $h_{t-j}$ ($j = 1, \ldots, J$) as $J \to \infty$:

$$
h_t = \omega\big(1 + \beta_1 + \cdots + \beta_1^{J-1}\big)
+ \alpha_1\big(e^2_{t-1} + \beta_1 e^2_{t-2} + \cdots + \beta_1^{J-1} e^2_{t-J}\big)
+ \text{Remainder}.
\tag{44.10}
$$

Now consider Eq. 44.6, with $\phi_1$ fixed at 1, as representing one mis-specified spot asset price process, Eq. 44.5 representing the mis-specified differenced autoregressive process, and Eq. 44.7 representing the mis-specified differenced moving average process. Taking expected values, the unconditional variance when Eq. 44.6 is the mis-specified representation and $\phi_1$ is set equal to one is

$$
E\big[s^2_t\big] = E\big[a^2_t\big].
\tag{44.11}
$$

When Eq. 44.7 is the moving average (MA) representation, the unconditional variance relative to shocks generated via Eq. 44.11 is

$$
E\big[s^2_t\big]_{MA} = E\big[\big(1 + \theta^2\big)a^2_t\big],
\tag{44.12}
$$

and when Eq. 44.5 is the autoregressive (AR) representation, the unconditional variance relative to shocks generated via Eq. 44.11 is

$$
E\big[s^2_t\big]_{AR} = E\left[\frac{1 - \rho_1}{1 + \rho_1}\, a^2_t\right].
\tag{44.13}
$$

The conditional variance from a GARCH(1,1) structure for Eq. 44.11 can be rewritten as

$$
h_t(s) = \omega\big(1 + \beta_1 + \cdots + \beta_1^{J-1}\big)
+ \alpha_1\big(a^2_{t-1} + \beta_1 a^2_{t-2} + \cdots + \beta_1^{J-1} a^2_{t-J}\big)
+ \text{Remainder}.
\tag{44.14}
$$

If Eq. 44.7 is the representation, then, relative to the conditional variance equation from Eq. 44.11,

$$
h_t(s)_{MA} = \omega\big(1 + \beta_1 + \cdots + \beta_1^{J-1}\big)
+ \alpha_1\Big[\big(1 + \theta^2\big)a^2_{t-1} + \beta_1\big(1 + \theta^2\big)a^2_{t-2} + \cdots + \beta_1^{J-1}\big(1 + \theta^2\big)a^2_{t-J}\Big]
+ \text{Remainder},
\tag{44.15}
$$
and if Eq. 44.5 is the representation, then, relative to the conditional variance equation from Eq. 44.11,

$$
h_t(s)_{AR} = \omega\big(1 + \beta_1 + \cdots + \beta_1^{J-1}\big)
+ \alpha_1\left[\frac{1 - \rho_1}{1 + \rho_1}\,a^2_{t-1} + \beta_1\,\frac{1 - \rho_1}{1 + \rho_1}\,a^2_{t-2} + \cdots + \beta_1^{J-1}\,\frac{1 - \rho_1}{1 + \rho_1}\,a^2_{t-J}\right]
+ \text{Remainder}.
\tag{44.16}
$$

If $\omega$, $\alpha_1$, and $\beta_1$ were equivalent in Eqs. 44.14, 44.15, and 44.16, then when the conditional variance is driven by Eq. 44.7 with $\theta_1$ negative, $h_t(s)_{MA} > h_t(s)$; when the conditional variance is driven by Eq. 44.5 with $\rho_1$ negative, $h_t(s)_{AR} > h_t(s)$, and with $\rho_1$ positive, $h_t(s)_{AR} < h_t(s)$. However, given the scaling factor in Eq. 44.15 relative to Eq. 44.16, the potential for distortions to GARCH parameter estimates is greater when the underlying process is driven by Eq. 44.5 relative to Eq. 44.7.
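The argument in Eqs. 44.5–44.16 can be illustrated with a hedged simulation: generate GARCH(1,1) shocks, contaminate the observed returns with MA(1) bid/ask bounce or AR(1) smoothing, and refit GARCH(1,1) to each series. The `arch` package is used here as one possible estimator; all parameter values are illustrative assumptions, not the chapter's.

```python
import numpy as np
from arch import arch_model

rng = np.random.default_rng(1)
n, omega, alpha, beta = 20_000, 0.02, 0.05, 0.90

# true GARCH(1,1) shocks
h = np.empty(n); e = np.empty(n); h[0] = omega / (1 - alpha - beta)
for t in range(n):
    if t:
        h[t] = omega + alpha * e[t-1]**2 + beta * h[t-1]
    e[t] = np.sqrt(h[t]) * rng.standard_normal()

theta, rho = -0.4, 0.3
r_ma = e + theta * np.roll(e, 1)              # MA(1) bid/ask bounce (Eq. 44.7)
r_ar = np.empty(n); r_ar[0] = e[0]
for t in range(1, n):                          # AR(1) smoothing (Eq. 44.5)
    r_ar[t] = rho * r_ar[t-1] + e[t]

for name, r in [("true shocks", e), ("MA(1) bounce", r_ma), ("AR(1) smoothing", r_ar)]:
    res = arch_model(r, mean="Zero", vol="GARCH", p=1, q=1).fit(disp="off")
    a1, b1 = res.params["alpha[1]"], res.params["beta[1]"]
    print(f"{name:16s} alpha={a1:.3f} beta={b1:.3f} persistence={a1 + b1:.3f}")
```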
44.4 Persistence, Co-Persistence, and Non-Normality
Now define $e_t$ as shocks from any of the assumed first-moment equations from Sect. 44.3, with the following simplified representation of a GARCH(1,1) structure obtained from Eq. 44.1, where $h_t$ ($\sigma^2_t$) represents the conditional variance and $z_t$ (${}_dZ_{1,kd}$) the stochastic part:

$$
e_t = \sqrt{h_t}\, z_t, \qquad z_t \sim \mathrm{NID}(0, 1).
\tag{44.17}
$$

In the univariate GARCH(1,1) structure, $h_t$ converges and is strictly stationary if $E[\ln(\beta_1 + \alpha_1 z^2_{t-i})] < 0$. Then $\sum_i \ln(\beta_1 + \alpha_1 z^2_{t-i})$ is a random walk with negative drift which diverges to $-\infty$ as the observation interval reduces. Now consider the co-persistence structure in the context of the constant hedging model. Defining the true processes for the differences in the natural logarithm of the spot index price and the natural logarithm of the futures price as

$$
i_t = \gamma_1 x_t + \eta_{i,t}, \qquad
f_t = \gamma_2 x_t + \eta_{f,t},
\tag{44.18}
$$

the common "news" factor $x_t$ is IGARCH in the co-persistence structure, while the idiosyncratic parts are assumed jointly independent, independent of $x_t$, and not IGARCH. The individual processes have infinite unconditional variance. If a linear combination is not IGARCH, then the unconditional variance of the linear combination is finite and a constant hedge ratio (defined below) leads to a substantial reduction in portfolio risk. A time-varying hedge ratio can lead to greater reduction in portfolio risk under conditions discussed in Ghose and Kroner (1994) when the processes are
co-persistent in variance. From a practical perspective, account needs to be taken of the rebalancing costs of portfolio adjustment. A nonoptimal restricted linear combination is the basis change, defined as the difference between the change in the log of the index futures price and the change in the log of the spot index level. This implied portfolio is short 1 unit of the spot for every unit long in the futures. For the futures and spot price processes reported in McCurdy and Morgan (1987), the basis change is co-persistent in variance. If there are "news factors" $x_{f,t} \ne x_{i,t}$, then the constant hedge ratio may not exist. Define these processes as

$$
i_t = \gamma_1 x_{i,t} + \eta_{i,t}, \qquad
f_t = \gamma_2 x_{f,t} + \eta_{f,t},
\tag{44.19}
$$
then the estimated constant hedge ratio, which is short $\hat g$ units of the spot for every 1 unit long in the futures, is

$$
\hat g = \left[\frac{\widehat{\mathrm{cov}}(f, i)}{\widehat{\mathrm{var}}(i)}\right]_t
= \left[\hat\gamma_2\,\frac{\hat\rho\,\sqrt{\widehat{\mathrm{var}}[x_f]\;\widehat{\mathrm{var}}[x_i]}}{\widehat{\mathrm{var}}[x_i] + \widehat{\mathrm{var}}[\eta_i]}\right]_t
\tag{44.20}
$$
where $\hat\rho$ is the correlation between $x_{f,t}$ and $x_{i,t}$. When both $x_f$ and $x_i$ follow IGARCH processes and no common factor structure exists, then the estimated constant hedge diverges. Ghose and Kroner (1994) investigate this case. When $x_{f,t}$ follows an IGARCH process but $x_{i,t}$ is weak GARCH, then the estimated constant hedge ratio cannot be evaluated. There are two problems:

(a) The estimated sample variance of $x_f$ in Eq. 44.20 is infinite as $T \to \infty$.

(b) $\hat\rho = \widehat{\mathrm{cov}}(x_f, x_i)\big/\sqrt{\widehat{\mathrm{var}}[x_f]\;\widehat{\mathrm{var}}[x_i]}$, so that there is no linear combination of $x_f$ and $x_i$ which can provide a stationary unconditional variance.

This last observation has a direct parallel in the literature on cointegration in the means of two series. If $x_f$ is an approximate I(1) process and $x_i$ is I(0), then there is no definable linear combination of $x_f$ and $x_i$. When observing spot index and futures prices over successively finer intervals, the co-persistence structure may not hold for at least two further reasons. This argument relates directly to the horizon $t + kd$ for the hedging strategy. This argument also relates directly to distortions possibly induced onto a dynamic hedging strategy as $d \to 0$. Perverse behavior can be observed in spot index level changes as oversampling becomes severe. The smoothing effect due to a large proportion of the portfolio entering the non- and thin-trading group generates a smoothly evolving process with short-lived shocks generated by irregular news effects.
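A small sketch, under assumed factor loadings and noise levels, of the constant hedge ratio in Eq. 44.20 estimated as cov(f, i)/var(i) over rolling windows built from the factor structure of Eq. 44.19.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5_000
x_common = 0.01 * rng.standard_normal(n)         # shared "news" factor (assumed)
gamma1, gamma2 = 1.0, 1.1
i_ret = gamma1 * x_common + 0.004 * rng.standard_normal(n)   # index return
f_ret = gamma2 * x_common + 0.006 * rng.standard_normal(n)   # futures return

def constant_hedge_ratio(f, i):
    # Eq. 44.20 computed directly as sample cov(f, i) / var(i)
    return np.cov(f, i)[0, 1] / np.var(i, ddof=1)

window = 500
ratios = [constant_hedge_ratio(f_ret[t - window:t], i_ret[t - window:t])
          for t in range(window, n, window)]
print("estimated hedge ratios per window:", np.round(ratios, 3))
print("basis-change variance:", np.var(f_ret - i_ret, ddof=1))
```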
General results assume that the $z_t$'s are drawn from a continuous distribution. When sampling futures price data at high frequency, the discrete nature of the price recording mechanism guarantees that there are discontinuities in return-generating processes. The distribution of the $z_t$'s can become extremely peaked due to multiple small price changes and can have very long thin tails due to abrupt shifts in the distribution. As $d \to 0$, $-\infty < E[\ln(\beta_1 + \alpha_1 z^2_{t-i})] < +\infty$ for a large range of values of $0 \le \alpha_1, \beta_1 \le 1$ with $\alpha_1 + \beta_1 < 1$, and this depends on the distribution induced by oversampling and the resultant reduction in $(\alpha_1 + \beta_1)$. In the limit $E[(z_t)^4]/[E(z^2_t)]^2 \to \infty$, so even-numbered higher moments of $z_t$ are unbounded as $d \to 0$. This oversampling can lead to two extreme perverse effects generated by bid/ask bounce or zero price changes. The effect depends upon liquidity in the respective markets.
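The stationarity condition $E[\ln(\beta_1 + \alpha_1 z^2)] < 0$ can be evaluated by Monte Carlo for different shock distributions. The sketch below contrasts thin-tailed and fat-tailed standardized shocks under assumed parameter pairs; the distributions are illustrative, not estimates from the chapter's data.

```python
import numpy as np

rng = np.random.default_rng(3)
m = 1_000_000

def e_log(alpha1, beta1, z):
    # Monte Carlo estimate of E[ln(beta1 + alpha1 * z^2)]
    return np.mean(np.log(beta1 + alpha1 * z**2))

z_normal = rng.standard_normal(m)
z_fat = rng.standard_t(3, size=m)
z_fat /= z_fat.std()                    # rescale to unit variance

for label, z in [("normal", z_normal), ("fat-tailed", z_fat)]:
    for alpha1, beta1 in [(0.05, 0.90), (0.30, 0.75), (0.60, 0.50)]:
        print(f"{label:10s} alpha={alpha1:.2f} beta={beta1:.2f} "
              f"E[ln(beta + alpha z^2)] = {e_log(alpha1, beta1, z):+.4f}")
```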
44.4.1 Case 1

$\alpha_1 \to 1$, $\alpha_1 + \beta_1 > 1$, and $E\big[\ln\big(\beta_1 + \alpha_1 z^2_{t-i}\big)\big] > 0$.

The intuitive explanation for this result relies on oversampling (not over-differencing) in highly liquid markets. The oversampling approaches analysis of transactions. At this level bid/ask bounce and order splitting require an appropriate model. Any arbitrary autoregressive model for unconditional first moments generates unconditional shocks relating predominantly to the behavior of the most recent shock. These effects carry through to conditional squared innovations.
44.4.2 Case 2

Oversampling can produce many zero price changes in thin markets. In this case, as $d \to 0$, $\alpha_1 + \beta_1 \to 0$. This explanation can apply to relatively illiquid futures (and spot asset) price changes. That is, conditional heteroskedasticity disappears as oversampling becomes severe. The basis change may therefore be badly distorted when there is a relatively illiquid futures market and oversampling. As well, failure to account for anomalies in conditional variance equations can severely distort estimates.
44.5 Weighted GARCH
Recall from Eq. 44.1 that $\hat\kappa$ is a function defining the estimated drift in $f({}_d\sigma^2_t)$, so that $\lambda$ is a function defining the true drift in $\{\varphi({}_d\sigma^2_t)\}$. In the ARCH structure, the drift in $h_t$ ($\sigma^2_t$) in the diffusion limit is represented by $\hat\kappa/f' - a^2 f''/2(f')^3$, whereas for the stochastic differential equation defined from assumptions 2, 3, and 10 in Nelson and Foster (1991, 1994), this diffusion limit is $\lambda/\varphi' + \Lambda^2\varphi''/2(\varphi')^3$. The effect on the expression for the bias in the asymptotic distribution of the measurement error process can be explosive if derivatives in the terms $a^2 f''/2(f')^3 - \Lambda^2\varphi''/2(\varphi')^3$ cannot be evaluated because of discontinuities in the process. This can happen when important intraday effects are neglected in the conditional variance equation specification. As well, the bias can diverge as $d \to 0$ if the ${}_dZ_{1,kd}$ terms are badly distorted. Occasional large jumps in the underlying distribution contribute large $O_p(1)$ movements while the near diffusion components contribute small $O_p(d^{1/2})$ increments. When sampling intraday financial data, there are often many small price changes which tend to be dominated by occasional large shifts in the underlying distribution. Failure to account for intraday effects (large shocks to the underlying distribution) can lead to a mixture of $O_p(1)$ and $O_p(d^{1/2})$ effects in the process. One approach is to specify this mixed process as a jump diffusion. An alternative is to account for these effects by incorporating activity measures in the specification of conditional variance equations. It follows that failure to account for these $O_p(1)$ effects can lead to an explosive measurement error $({}_d\hat h_t - {}_dh_t)$. However, in empirical applications this measurement error is unobservable since ${}_dh_t$ is unobservable. Failure to account for these jumps in the underlying distribution implies that the unweighted GARCH(1,1) structure cannot satisfy the necessary assumptions required to approximate a diffusion limit.
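The weighting idea can be sketched as a conditional variance equation augmented with an activity measure. The specification below, with volume entering the variance equation linearly, is one plausible reading of the GARCH-W structure and not necessarily the exact form estimated in this chapter; the optimizer settings, starting values, and the function name fit_garch_w are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Hedged sketch of a "weighted" GARCH(1,1):
#   h_t = omega + alpha * e_{t-1}^2 + beta * h_{t-1} + delta * V_{t-1}
# where V is an intraday activity measure (e.g., contracts traded per interval).

def garch_w_negloglik(params, e, vol):
    omega, alpha, beta, delta = params
    if omega <= 0 or alpha < 0 or beta < 0 or delta < 0:
        return 1e10                           # crude positivity penalty
    n = len(e)
    h = np.empty(n)
    h[0] = e.var()
    for t in range(1, n):
        h[t] = omega + alpha * e[t-1]**2 + beta * h[t-1] + delta * vol[t-1]
    return 0.5 * np.sum(np.log(h) + e**2 / h)  # Gaussian quasi-likelihood

def fit_garch_w(e, vol):
    x0 = np.array([0.1 * e.var(), 0.05, 0.80, 1e-6])
    res = minimize(garch_w_negloglik, x0, args=(e, vol), method="Nelder-Mead",
                   options={"maxiter": 5000, "xatol": 1e-8, "fatol": 1e-8})
    return res.x  # omega, alpha, beta, delta

# usage: e = mean-equation residuals, vol = activity per interval
# omega, alpha, beta, delta = fit_garch_w(e, vol)
```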
44.6 Empirical Examples
The first issue is the effect of possible mis-specification of the first-moment equation dynamics and the resultant effect on estimates of persistence of individual processes as d → 0. If the effect is not important (mean irrelevance), then the focus of attention is on the estimates from the co-persistence structure and the implications for a constant hedge ratio. The second issue is the effect of the inclusion of variables to proxy intraday activity on measures of persistence. However, it is still important to consider possible effects from mis-specifying the first-moment equation on parameter estimates obtained from conditional variance equations as d → 0. If there is strong conditioning from measures of activity onto the market price processes, the conditioning should be independent of the specification of the dynamics of the first-moment equation (mean irrelevance).
44.6.1 Index Futures, Market Index, and Stock Price Data

The data have been analyzed for the common trading hours for 1992 from the ASX and SFE, i.e., from 10.00 a.m. to 12.30 p.m. and from 2.00 p.m. to 4.00 p.m.
1262
G.L. Gannon
This dataset was first employed in examples reported in Gannon (1994) and a similar analysis undertaken in Gannon (2010) for the SVL models as is undertaken in this paper for the GARCH and GARCH-W models. Further details of the sampling and institutional rules operating in these markets are briefly reported in the Appendix of this paper. Transactions for the Share Price Index futures (SPI) were sampled from the nearest contract 3 months to expiration. The last traded price on the SPI levels and stock price levels were then recorded for each observation interval. The All Ordinaries Index (AOI) is the recorded level at the end of each observation interval. During these common trading hours, the daily average number of SPI futures contracts traded 3 months to expiration was 816. As well, block trades were extremely rare in this series. Transactions on the heavily capitalized stock prices were extremely dense within the trading day. These seven stocks were chosen from the four largest market capitalized groupings according to the ASX classification code in November 1991, i.e., general industrial, banking, manufacturing, and mining. The largest capitalized stocks were chosen from the first three categories as well as the four largest capitalized mining stocks. This selection provides for a diversified portfolio of very actively traded stocks which comprised 32.06 % of total company weights from the 300 stocks comprising the AOI. All datasets were carefully edited in order to exclude periods where the transaction capturing broke down. The incidence of this was rare. As well, lags were generated and therefore the effects of overnight records removed. A natural logarithmic transformation of the SPI and AOI prices is undertaken prior to analysis. Opening market activity for the SPI is heaviest during the first 40 min of trading. Trade in the SPI commences at 9.50 a.m. but from 10.30 a.m. onwards volume of trade tapers off until the lunchtime close. During the afternoon session, there is a gradual increase in volume of trade towards daily market close at 4.10 p.m. Excluding the market opening provides the familiar U-shaped pattern of intraday trading volume observed on other futures markets. SPI price volatility is highest during the market opening period with two apparent reverse J-shaped patterns for the two daily trading sessions (small J effect in afternoon session). The first and last 10 min of trade in the SPI are excluded from this dataset. Special features govern the sequence at which stocks open for trade on the ASX. Individual stocks are allocated a random opening time to within plus or minus 30 s of a fixed opening time. Four fixed opening times, separated by 3-min intervals starting at 10.00 a.m., operated throughout 1992. Four alphabetically ordered groups then separately opened within the first 10 min of trading time. The last minute of trading on the ASX is also subject to a random closing time between 3.59 p.m. and 4.00 p.m. The effect of these institutional procedures on observed data series can be potentially severe. Both of these activity effects in market opening prices of trading on the SFE, and ASX should be accounted for in the estimation process.
44
Stochastic Volatility Structures and Intraday Asset Price Dynamics
1263
Table 44.1 Autoregressive parameter estimates for SPI, AOI, and basis change

Interval                    ft               it               bt
Full day
  30 min              0.0324 (1.54)    0.0909 (4.33)    0.1699 (8.19)
  15 min              0.0497 (3.45)    0.0953 (6.42)    0.1348 (9.15)
  05 min              0.0441 (5.14)    0.3317 (41.2)    0.0137 (1.60)
Excluding market open
  30 min              0.0299 (1.34)    0.1213 (5.48)    0.1947 (8.89)
  15 min              0.0301 (1.91)    0.2531 (16.6)    0.2145 (13.9)
  05 min              0.0532 (5.84)    0.3284 (38.2)    0.0968 (10.7)

Asymptotic t-statistics in brackets. ft = Ft − Ft−1 is the difference in the observed log level of the SPI; it = It − It−1 is the difference in the observed log level of the AOI; bt = ft − it is the basis change. Equation 44.5, i.e., an autoregressive specification for the differences (an AR(D)), is estimated for both data series with one lag only for the autoregressive parameter. Dummy variables are included, for the first data series, in order to account for market opening effects for the SPI and the institutional features governing market opening on the ASX, and therefore the effects transmitted to the basis change; two separate dummy variables are included for the first pair of 5-min intervals. In the lower panel, results are reported from synchronized trading from 10.30 a.m.; that is, the first 40 min and first 30 min of normal trade in the SPI and AOI, respectively, are excluded.
44.6.2 Estimates of the Autoregressive Parameters

In Table 44.1 the first-order autoregressive parameter estimate is reported for the observed differenced series for the SPI (f), AOI (i), and basis change (b). In the top panel, these parameter estimates are from observations for the full (synchronized) trading day. These equations include dummy variables to account for institutional market opening effects. In the following tables of results, SING refers to singularities in the estimation process. For the SPI futures price process, low first-order negative serial correlation is initially detected in the log of the price change. As the sampling interval is reduced, low first-order positive serial correlation can be detected in the series. This feature of the data accords with order splitting and non-trading-induced effects. Low positive first-order serial correlation can be detected in differences of the log of the market index, and serial correlation increases. Positive serial correlation is high at 15-min intervals for the opening excluded set. When sampling the market index at 5-min intervals, substantial positive first-order serial correlation is detected
in the log of the price change process. Miller et al. (1994) demonstrate that thin trading and non-trading in individual stocks induce a positive serial correlation in the observed spot index price change process. However, the smoothing effect from non-trading in individual stocks, averaging of bid/ask bounce effects in heavily traded stocks, and under-differencing the aggregate process also contribute. Of interest is the reduction in serial correlation of the basis spread as d → 0. As the log futures price change moves into the order splitting/non-price change region, positive serial correlation is induced in the log of the futures price change. This, in part, helps offset the increasing positive serial correlation induced in the log of the spot index.
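The autoregressive estimates in Table 44.1 can be reproduced in outline, under an assumed tick-data layout (pandas Series of trade prices indexed by trade time, not the chapter's files), by resampling the futures and index records at each interval and computing first-order autocorrelations of the log changes and of the basis change.

```python
import numpy as np
import pandas as pd

def ar1_by_interval(spi: pd.Series, aoi: pd.Series, intervals=("30min", "15min", "5min")):
    rows = []
    for freq in intervals:
        f = np.log(spi.resample(freq).last()).diff().dropna()
        i = np.log(aoi.resample(freq).last()).diff().dropna()
        f, i = f.align(i, join="inner")
        b = f - i                                   # basis change, Eq. 44.8
        rows.append({"interval": freq,
                     "f_t": f.autocorr(lag=1),
                     "i_t": i.autocorr(lag=1),
                     "b_t": b.autocorr(lag=1)})
    return pd.DataFrame(rows)

# usage: ar1_by_interval(spi_ticks, aoi_ticks) with Series indexed by trade time
```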
44.6.3 Conditional Variance Estimates

Allowing for and excluding market open/closing effects in first-moment equations make little difference to GARCH parameter estimates. This observation holds for alternative specifications of the dynamics of the first-moment equation at 30- and 15-min intervals. However, mis-specifying the first-moment equation dynamics is important in the conditional second-moment equations for the AOI at 5-min intervals. The GARCH(1,1) estimates for the SPI, AOI, and basis are generated from Eq. 44.6, i.e., an autoregressive specification of the levels, AR(L), and Eq. 44.5, the AR(D), respectively (Table 44.2). Opening dummy variables are incorporated in both first-moment and conditional second-moment equations for the full-day data series. Two dummy variables, corresponding to the first two 5-min intervals, are included in both first and conditional second-moment equations for the full-day data series. Results in the lower panel are obtained from excluding all trade in the SPI and AOI prior to 10.30 a.m. At 5-min sampling intervals, a transient effect is introduced into the unconditional distributions which carries through to the conditional variance estimates. Values for a1 and b1 differ both within the same data series for alternative first-moment equations and across data series for the same form of first-moment equation. For one extreme case (with exclusion of market opening), the GARCH parameter estimates for the AOI, and therefore the basis change, do depend on the mis-specification of the first-moment equation. From the first set of results (full day), it would appear as if the SPI is persistent in variance, while the AOI is not and neither is the basis change. A second extreme case occurs at 30- and 15-min intervals (with exclusion of market opening). In this case GARCH parameter estimates are almost identical for alternative specifications of the dynamics of the first-moment equation. However, for the AOI the sum of the GARCH parameter estimates is near the IGARCH boundary, and the same feature is observed in the basis change. It would appear that, given anomalies are adequately accounted for, mis-specification of the (weak) dynamic structure of price changes in these processes is only relevant for estimating these GARCH equations for the AOI at 5-min intervals. The important issue is the correct specification of the market opening effects.
Table 44.2 GARCH estimates for the SPI (Ft), AOI (It), and basis change (Bt) for AR(L) and AR(D) specifications

                      Ft                                 It                                 Bt
                      AR(L)            AR(D)             AR(L)            AR(D)             AR(L)            AR(D)
Full day
30 min   a1     0.0527 (7.84)    0.0517 (7.58)     0.1019 (7.23)    0.1002 (6.91)     0.0904 (9.04)    0.0741 (7.70)
         b1     0.9266 (115)     0.9284 (111)      0.0196 (2.09)    0.0227 (2.24)     0.0204 (2.06)    0.0251 (2.31)
15 min   a1     0.0602 (14.7)    0.0592 (14.5)     0.1922 (13.2)    0.1514 (11.4)     0.1050 (13.6)    0.1000 (13.6)
         b1     0.9151 (224)     0.9167 (225)      0.0893 (9.89)    0.1100 (10.9)     0.0852 (5.45)    0.1186 (7.17)
05 min   a1     0.0362 (34.6)    0.0358 (34.0)     0.3540 (35.8)    0.2338 (27.9)     0.1705 (39.6)    0.1685 (38.5)
         b1     0.9530 (1087)    0.9535 (1075)     0.2074 (26.8)    0.2428 (29.3)     0.4345 (43.3)    0.4469 (45.6)
Excluding market open
30 min   a1     0.0448 (8.31)    0.0444 (8.12)     0.0668 (8.71)    0.0693 (9.22)     0.0684 (11.5)    0.0631 (10.5)
         b1     0.9473 (155)     0.9478 (150)      0.9074 (92.9)    0.9050 (93.3)     0.9114 (114)     0.9172 (109)
15 min   a1     0.0385 (14.8)    0.0394 (14.5)     0.0867 (12.9)    0.0835 (13.2)     0.0310 (12.9)    0.0403 (18.8)
         b1     0.9551 (382)     0.9564 (369)      0.8832 (113)     0.8872 (115)      0.9499 (232)     0.9537 (374)
05 min   a1     5.8E-5 (33.7)    0.0329 (30.3)     0.3326 (37.5)    0.2229 (33.6)     4.6E-5 (27.4)    0.0236 (35.9)
         b1     1.000 SING       0.9638 (906)      0.4470 (35.2)    0.5795 (46.4)     1.000 SING       0.9742 (1353)

Asymptotic t-statistics in brackets
These preliminary results have been obtained from observations sampled for all four futures contracts and a continuous series constructed for 1992. The AOI is not affected by contract expiration. In Table 44.3, GARCH(1,1) estimates for the separate SPI futures contracts, 3 months to expiration, and synchronously sampled observations on the AOI are recorded. The full data series corresponding to synchronized trading on the SFE and ASX is employed. The same form of dummy variable set was imposed in both first and secondmoment equations. As well, a post-lunchtime dummy is included to account for the
Table 44.3 GARCH estimates for the SPI and AOI 3 months to expiration MAR Ft 30 min a1 0.042 (2.37) b1 0.933 (35.6) 15 min a1 0.028 (3.11) b1 0.958 (72.6) 05 min a1 0.051 (7.86) b1 0.881 (52.9)
It
JUN Ft
It
SEP Ft
It
DEC Ft
It
0.026 (0.94) 0.054 (2.05)
0.145 (3.95) 0.116 (0.56)
0.015 (0.76) 0.004 (0.21)
0.019 (2.96) 0.977 (118)
0.089 (3.65) 0.021 (1.12)
0.060 (3.24) 0.918 (35.0)
0.147 (3.70) 0.037 (1.30)
0.186 (5.85) 0.108 (6.65)
SING
0.107 (4.75) 0.035 (2.11)
0.037 (4.96) 0.956 (120)
0.183 (6.84) 0.055 (3.75)
0.036 (6.08) 0.958 (114)
0.195 (5.80) 0.102 (3.67)
0.307 (14.8) 0.118 (7.16)
0.174 (22.2) 0.649 (38.5)
0.378 (17.0) 0.161 (10.6)
0.039 (11.0) 0.956 (247)
0.409 (19.3) 0.259 (16.9)
0.240 (12.4) 0.231 (12.1)
0.998 (3235) SING 0.999 (21170)
GARCH(1,1) parameter estimates for the 3 months corresponding to expiration of the March, June, September, and December contracts for 1992. Full-day data series are employed with opening and post-lunchtime dummy variables in both first and conditional second-moment equations. An AR(L) specification of the (log) mean equation is employed for these results
break in daily market trade at the SFE during 1992. The log of levels is specified for the first-moment equations. There is some instability within this set of parameter estimates. However, a similar pattern emerges within the set of futures and the set of market index estimates as was observed for the full-day series for 1992. The futures conditional variance parameter estimates are close to the IGARCH boundary while the index conditional variance estimates are not. This again implies that these processes cannot be co-persistent in variance for these samples and observation intervals. In order to obtain further insight into the, seemingly, perverse results for the market index, a similar analysis was undertaken on seven of the largest market capitalized stocks which comprised the AOI during 1992. Relevant market opening and closing dummy variables were included in both first and conditional second-moment equations accordingly. These effects were not systematic in the first-moment equations and are not reported. The stock price processes have not been transformed to natural logarithmic form for these estimations. This is because the weighted levels of the stock prices are employed in construction of the market index. As the observation interval is reduced for these stock prices: (i) The autoregressive parameter estimates for the price levels equation converges to a unit root. (ii) The first-order serial correlation coefficient for the price difference equation moves progressively into the negative region.
If these were the only stocks comprising the construction of the index, then we might expect to see increasing negative serial correlation in the index. But this observation would only apply if these processes were sampled from a continuous process which was generated by a particular form of ARMA specification. Only in the case of data observed from a continuous process could the results on temporal aggregation of ARMA processes be applied. These results should not be surprising as the combination of infrequent price change and “price bounce” from bid/ask effects starts to dominate the time series. As well, these bid/ask boundaries can shift up and down. When these processes are aggregated, the effects of price bounce can cancel out. As well, the smoothing effect of thinly and zero traded stocks within the observation intervals dampens and offsets individual negative serial correlation observed in these heavily traded stocks (Table 44.4). Some of these autoregressive parameter estimates are quite high for the AR(D) specifications. However, it is apparent that there is almost no difference in GARCH(1,1) estimates from either the AR(L) or AR(D) specifications at each observation interval for any stock price. In some instances estimation breaks down at 5 min intervals. If the conditional variances of these stock price movements contain common news and announcement effects, then it should not be surprising that the weighted aggregated process is not persistent in variance. This can happen when news affects all stocks in the same market. As well, smoothing effects from thin-traded stocks help dampen volatility shocks observed in heavily traded stocks. These news and announcement effects may be irrelevant when observing these same processes at daily market open to open or close to close. These news and announcement effects may be due to private information filtering onto the market prior to and following market open. It is during this period that overnight information effects can be observed in both price volatility and volume. As well, day traders and noise traders are setting positions. However, the ad hoc application of dummy variables is not sufficient to capture the interaction between volatility and volume. In the absence of specific measures of these news “variables,” the effects cannot be directly incorporated into a structural model. However, these effects are often captured in the price volatility and reflected in increased trading activity.
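The aggregation argument can be illustrated with a hedged simulation: constituent returns share a persistent common news factor, some are smoothed by thin trading, and the fitted GARCH(1,1) persistence of the resulting index is then compared with that of the constituents. All parameter values, the equal weighting, and the use of the `arch` package are assumptions for illustration only; the simulation simply reports the fitted numbers.

```python
import numpy as np
from arch import arch_model

rng = np.random.default_rng(4)
n, n_stocks = 20_000, 8
omega, alpha, beta = 0.01, 0.08, 0.90           # persistent common-news volatility

h = np.empty(n); x = np.empty(n); h[0] = omega / (1 - alpha - beta)
for t in range(n):
    if t:
        h[t] = omega + alpha * x[t-1]**2 + beta * h[t-1]
    x[t] = np.sqrt(h[t]) * rng.standard_normal()

stocks = []
for j in range(n_stocks):
    r = x + 0.5 * rng.standard_normal(n)         # idiosyncratic noise
    if j % 2:                                     # half the stocks trade thinly:
        smoothed = np.copy(r)                     # stale prices smooth the return
        for t in range(1, n):
            smoothed[t] = 0.6 * smoothed[t-1] + 0.4 * r[t]
        r = smoothed
    stocks.append(r)
index_ret = np.mean(stocks, axis=0)               # equally weighted "index"

def persistence(r):
    p = arch_model(r, mean="Zero", vol="GARCH", p=1, q=1).fit(disp="off").params
    return p["alpha[1]"] + p["beta[1]"]

print("constituent persistence:", [round(persistence(r), 3) for r in stocks[:3]])
print("index persistence:", round(persistence(index_ret), 3))
```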
44.6.4 Weighted GARCH Estimates

Weighted GARCH estimates for the futures (log) price process, with the accumulated number of futures contracts traded within the interval t to t−1 as the weighting variable, are reported in Table 44.5. This choice ensures that volume measures are recorded within the interval that actual prices define. The weighting variable employed in the index (log) price process is the squared shock from the futures price mean equation within the interval t to t−1. There is no natural "volume" of trade variable available for the market index. The form of mean
Table 44.4 Unconditional mean and GARCH(1,1) estimates: Australian stock prices AR(L) AR(D) p1 r1
Interval NAB 60 min
0.999
0.160
30 min
0.999
0.135
15 min
0.999
0.199
05 min
1.00
0.287
BHP 60 min
1.00
0.034
30 min
1.00
0.087
15 min
1.00
0.104
05 min
1.00
0.123
BTR 60 min
0.995
0.035
30 min
0.998
0.109
15 min
0.999
0.109
05 min
1.00
0.074
WMC 60 min
1.00
0.052
30 min
1.00
0.016
15 min
1.00
0.008
05 min CRA 60 min
1.00
0.065
0.999
0.031
30 min
0.999
0.008
15 min
1.00
0.003
AR(L) a1
b1
AR(D) a1
b1
0.125 (17.2) 0.086 (25.3) 0.226 (22.5) 2.4E-5 (0.94)
0.762 (39.2) 0.862 (123) 0.627 (70.0) 0.042 (0.28)
0.122 (16.4) 0.081 (24.4) 0.220 (22.2) 2.3E-5 (0.53)
0.774 (40.1) 0.876 (135) 0.629 (61.1) 0.049 (0.24)
0.096 (9.26) 0.067 (14.2) 0.054 (20.5) 0.132 (66.0)
0.843 (45.6) 0.898 (129) 0.923 (281) 0.795 (361)
0.098 (9.14) 0.066 (13.7) 0.054 (20.2) 0.130 (61.9)
0.839 (44.2) 0.899 (128) 0.924 (279) 0.799 (363)
0.194 (9.68) 0.232 (14.3) 0.136 (17.6) 0.076 (45.9)
SING (SING) 0.294 (7.36) 0.615 (31.5) 0.831 (231)
0.196 (9.68) 0.231 (13.9) 0.131 (17.5) 0.074 (45.5)
SING (SING) 0.297 (7.55) 0.626 (32.8) 0.835 (233)
0.015 (6.58) 0.272 (13.7) 0.185 (19.6) 0.013
0.981 (368) 0.234 (5.39) 0.594 (33.7) 0.003
0.015 (6.54) 0.265 (13.70 0.186 (19.3) SING
0.981 (363) 0.251 (5.85) 0.597 (34.0) SING
0.332 (13.0) 0.231 (10.2) 0.132 (24.0)
0.058 (1.52) 0.456 (5.89) 0.687 (58.8)
0.333 (12.5) 0.231 (9.98) 0.132 (23.8)
0.074 (1.88) 0.458 (5.69) 0.688 (58.5) (continued)
Table 44.4 (continued) Interval 05 min
AR(L) AR(D) p1 r1 1.00 0.006
MIM 60 min
0.998
0.015
30 min
0.999
0.090
15 min
1.00
0.091
05 min
1.00
0.084
CML 60 min
1.00
0.033
30 Min
1.00
0.045
15 Min
1.00
0.056
05 min
1.00
0.078
AR(L) a1 0.087 (84.1)
b1 0.840 (433)
AR(D) a1 0.087 (80.1)
b1 0.841 (437)
0.016 (5.22) 0.166 (25.30) 0.151 (23.9) 0.001 (2.66)
0.979 (227) 0.279 (123) 0.612 (40.9) 0.841 (9.86)
0.037 (6.47) .160 (24.4) 0.150 (23.5) 0.001 (2.35)
0.934 (94.4) 0.281 (135) 0.614 (40.9) 0.814 (9.46)
0.088 (8.88) 0.037 (14.4) 0.196 (28.2) 0.095 (74.6)
0.867 (52.1) 0.953 (52.1) 0.647 (62.4) 0.844 (456)
0.085 (9.02) 0.039 (14.4) 0.189 (27.3) 0.110 (75.90
0.873 (56.2) 0.950 (269) 0.658 (64.6) 0.846 (471)
Asymptotic t-statistics in brackets Column 1 contains the observation interval Column 2 contains the autoregressive parameter from an AR(L) specification of the price levels equation Column 3 contains the first-order autoregressive parameter estimate from an AR(D) specification of the price change Columns 4 and 5 contain the GARCH(1,1) parameter estimates from an AR(L) specification of the price levels equation Columns 6 and 7 contain the corresponding estimates from an AR(D) specification of the price changes
equation is the same as that used to generate the corresponding results in Table 44.3, i.e., an AR(L). However, the results are almost identical when alternative forms for the mean equation are employed, i.e., AR(L) and AR(D). The specifications generating the reported stock price estimates are augmented to include a measure of trade activity within the observation interval. These measures are the accumulated number of shares traded in each individual stock within the interval t to t−1. The estimates are reported in Table 44.6. The conditional variance parameter estimates are almost identical from the AR(L) and AR(D) specifications of the mean equation. The logarithmic transformation has not been taken for these stock prices. This is because the weighted levels of the stock prices are employed in the construction of the market index. Direct comparison of these GARCH parameter estimates (a1 and b1) with those from the unweighted GARCH(1,1) estimates demonstrates the importance of this measure of activity. The change in GARCH parameter estimates is striking.
Table 44.5 Weighted GARCH estimates for the SPI and AOI 3 months to expiration March Ft 30 min a1 0.093 (2.07) b1 0.014 (0.24) 15 min a1 0.052 (1.84) b1 0.004 (0.19) 05 min a1 0.000 (4.04) b1 0.000 (5.06)
It
June Ft
It
September Ft It
December Ft It
SING
0.079 (2.73) 0.033 (1.91)
0.030 (1.01) SING
0.007 (0.40) 0.025 (1.24)
0.040 (1.51) 0.039 (1.62)
0.134 (4.58) SING
0.027 (1.10) 0.068 (2.74)
0.185 (3.97) 0.086 (3.82)
0.175 (6.67) 0.000 (0.00)
0.022 (1.50) 0.163 (7.61)
0.039 (2.47) 0.000 (0.00)
0.042 (2.35) 0.106 (4.99)
0.076 (2.89) 0.001 (0.01)
0.058 (3.04) 0.136 (5.67)
0.156 (8.34) 0.238 (11.9)
0.095 (7.79) SING
0.170 (9.55) 0.232 (11.8)
0.004 (32.1) 0.000 (0.12)
0.162 (9.58) 0.270 (16.0)
0.030 (8.32) 0.000 (7.79)
0.128 (7.30) 0.327 (17.8)
SING
The measures of persistence from these weighted estimates are never near the IGARCH boundary. These effects are summarized in Table 44.7. These results are generated from an AR(L) specification of the first-moment equation. By adequately accounting for contemporaneous intraday market activity, the time persistence of volatility shocks becomes less relevant. It follows that deviations of the estimated conditional variance from the true (unobservable) conditional variance are reduced (Table 44.8).
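A sketch of how the persistence comparisons in Tables 44.7 and 44.8 can be computed, measuring persistence as a1 + b1 with and without the activity weighting. It reuses the illustrative fit_garch_w helper sketched in Sect. 44.5, which is not the chapter's estimator; the `arch` package supplies the unweighted fit.

```python
import numpy as np
from arch import arch_model

def persistence_garch(e):
    # a1 + b1 from an unweighted GARCH(1,1) fit
    p = arch_model(e, mean="Zero", vol="GARCH", p=1, q=1).fit(disp="off").params
    return p["alpha[1]"] + p["beta[1]"]

def persistence_garch_w(e, vol):
    # a1 + b1 from the weighted specification (fit_garch_w defined in the
    # Sect. 44.5 sketch; an illustrative helper, not the chapter's code)
    _, alpha, beta, _ = fit_garch_w(e, vol)
    return alpha + beta

# usage, per asset and sampling interval:
# print(persistence_garch(residuals), persistence_garch_w(residuals, volume))
```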
44.6.5 Discussion

In this section some empirical evidence is documented on the behavior of unconditional first and conditional second-moment effects for the market index, futures contracts written on the market index, and heavily traded stock prices. These results are for Australian financial assets sampled on an intraday basis as the observation interval approaches transactions time, d → 0. The specific empirical findings were:
1. The autoregressive parameter estimate from a difference equation for the log of the index futures is initially negative but moves into the positive region. This can be attributed to bid/ask bounce being dominated by order splitting and non-trading effects.
2. The autoregressive parameter estimate from a difference equation for the log of the market index displays increasing positive serial correlation. This can be attributed to non-trading smoothing effects in low capitalized stocks, averaging of bid/ask bounce effects, and under-differencing the aggregated index.
3. The basis change between the log futures price change and the log index level change displayed two surprising effects:
(i) The autoregressive parameter for the basis change is initially negative, but the strength of this effect weakens. This can be attributed to the log of the

Table 44.6 Weighted GARCH estimates: Australian stock prices
Stock NAB
Interval 60 min 30 min 15 min
BHP
60 min 30 min 15 min
BTR
60 min 30 min 15 min
WMC
60 min 30 min 15 min
CRA
60 min 30 min 15 min
MIM
60 min 30 min 15 min
AR(L) a1 0.147 (11.7) 0.168 (11.4) 0.140 (15.7) 0.125 (8.97) 0.108 (10.8) 0.132 (13.9) 0.045 (3.06) 0.116 (9.74) 0.070 (14.0) 0.110 (5.68) 0.107 (8.35) 0.078 (14.6) 0.116 (7.04) 0.056 (8.30) 0.000 (0.0) 0.074 (5.36) 0.071 (5.84) 0.040 (6.67)
b1 0.028 (2.89) 0.143 (12.9) 0.135 (17.1) 0.053 (2.94) 0.133 (11.2) 0.110 (13.9) 0.024 (1.29) 0.051 (3.94) 0.023 (5.28) 0.040 (2.18) 0.008 (0.65) 0.053 (6.47) 0.020 (2.06) 0.023 (4.95) 0.000 (4.04) SING 0.074 (4.13) 0.068 (9.29)
X 3.0E-9 (36.9) 2.8E-9 (47.0) 3.1E-9 (114) 9.4E-9 (27.7) 8.7E-9 (32.8) 9.9E-9 (56.6) 7.8E-9 (19.0) 8.0E-10 (24.8) 1.9E-9 (38.80 1.9E-9 (25.3) 1.9E-9 (30.8) 1.8E-9 (38.0) 4.3E-8 (26.8) 5.4E-8 (38.4) 2.9E-7 (207) 6.4E-10 (21.7) 6.1E-10 (24.6) 8.7E-10 (40.0)
AR(D) a1 0.144 (11.1) 0.171 (11.6) 0.131 (15.6) 0.124 (8.49) 0.095 (9.95) 0.121 (13.2) 0.042 (2.85) 0.106 (9.46) 0.058 (11.60 0.110 (5.67) 0.108 (8.39) 0.074 (14.2) 0.116 (7.07) 0.054 (8.11) 0.000 (1.02) 0.077 (5.24) 0.070 (6.28) 0.030 (4.98)
b1 0.027 (2.35) 0.139 (11.6) 0.142 (16.80 0.057 (3.22) 0.137 (12.8) 0.113 (12.6) 0.011 (0.64) 0.056 (4.34) 0.026 (5.45) 0.040 (2.22) 0.009 (0.76) 059 (7.21) 0.015 (1.69) 0.024 (5.14) 0.000 (6.88) 0.000 (0.00) 0.087 (4.90) 0.074 (9.68)
X 3.0E-9 (36.2) 2.9E-9 (46.8) 3.1E-9 (112) 9.5E-9 (28.2) 8.9E-9 (34.1) 10E-9 (58/8) 8.0E-9 (19.2) 8E-10 (25.3) 1.9E-9 (38.9) 1.9E-9 (25.5) 1.9E-9 (30.7) 1.8E-9 (38.1) 4.3E-8 (26.7) 5.5E-8 (38.4) 2.9E-7 (208) 6.E-10 (21.6) 6.E-10 (25.0) 9.E-10 (40.5) (continued)
Table 44.6 (continued) Stock CML
Interval 60 min 30 min 15 min
AR(L) a1 0.125 (5.92) 0.108 (8.65) 0.056 (16.3)
b1 SING 0.029 (3.44) 0.000 (0.00)
X 1.9E-8 (17.8) 2.5E-8 (30.8) 4.0E-8 (56.2)
AR(D) a1 0.122 (5.81) 0.101 (8.27) 0.057 (17.4)
b1 SING 0.034 (4.16) SING
X 1.9E-8 (17.8) 2.6E-8 (31.1) 4.4E-8 (59.2)
Asymptotic t-statistics in brackets Columns 3 and 4 contain the weighted GARCH parameter estimates from an AR(L) specification of the price levels equation with the volume parameter estimate in column 5 Columns 6–8 contain the corresponding estimates from an AR(D) specification of the price changes Estimates are quite unstable at 5-min intervals
Table 44.7 Measures of persistence from GARCH(1,1) and weighted GARCH equations for the SPI 3 months to expiration

            March            June             September        December
Interval    G       G-W      G       G-W      G       G-W      G       G-W
30 min      0.975   0.107    0.261   SING     0.996   SING     0.978   SING
15 min      0.986   0.056    SING    0.175    0.993   0.039    0.994   0.077
05 min      0.932   0.000    SING    SING     0.823   0.004    0.995   0.030
futures price change behavior, where the autoregressive parameter moved into the positive region and price change became less frequent.
(ii) The unweighted log of the futures price change was close to the IGARCH boundary. For the full-day data series, the log of the market index change was not close to the IGARCH boundary and neither was the basis change. When market opening trade was excluded, the log of the futures price change, the log of the market index price change, and the basis change were close to the IGARCH boundary, although this effect dissipated as d → 0. However, volume of trade on both the SFE and ASX is heaviest within this excluded interval. It follows that any conclusions concerning the co-persistence structure would be misleading from this intraday data.
4. The autoregressive parameter from a levels equation for the stock prices converges to a unit root. This can be attributed to small and zero price change effects.
5. The autoregressive parameter from a difference equation for the stock prices displays an increasing tendency towards and into the negative serial correlation region. This can be attributed to price bounce effects where the boundaries are tight and relatively stable.
6. GARCH(1,1) or weighted GARCH conditional variance parameter estimates do not depend on the specification of the dynamics of the first-moment equation for 30- and 15-min intervals of these futures, market index, and stock price processes.
Table 44.8 Measures of persistence from GARCH(1,1) and weighted GARCH equations for Australian stock price processes

         60 min           30 min           15 min
         G       G-W      G       G-W      G       G-W
NAB      0.887   0.175    0.948   0.311    0.853   0.275
BHP      0.939   0.178    0.965   0.241    0.977   0.242
BTR      SING    0.069    0.526   0.167    0.751   0.093
WMC      0.996   0.150    0.506   0.115    0.779   0.131
CRA      0.390   0.136    0.687   0.079    0.819   0.000
MIM      0.995   SING     0.445   0.145    0.763   0.108
CML      0.995   SING     0.990   0.137    0.843   0.056

An AR(L) specification has been employed to generate these results. The measure of persistence is calculated as a + b from the conditional variance equations. G represents persistence obtained from a GARCH(1,1) specification; G-W represents persistence obtained from a weighted GARCH specification. SING indicates that one or more of the GARCH parameters could not be evaluated due to singularities.
The GARCH(1,1) parameter estimates for the AOI at 5-min intervals were different when alternative forms of first-moment equations were specified. There was no perceivable difference in the other processes at this sampling frequency. It would then appear that increasing positive serial correlation (smoothing) in the observed returns process has a greater distorting effect on GARCH(1,1) parameter estimates than increasing negative serial correlation (oscillation) in observed returns processes. The most important effect is mis-specification of the conditional variance equations from failure to adequately account for the interaction between market activity and conditional variance (volatility). Some implications of these results are: 7. When aggregating stock prices, which may be close to the IGARCH boundary, persistence in variance can be low for the market index because (i) there is common persistence present in heavily traded stock prices which when aggregated do not display persistence in variance because heavily traded stock prices react to market specific news instantaneously, (ii) the smoothing effect of less heavily traded stocks dampens the volatility clustering which is often observed in other financial assets such as exchange rates, and (iii) high volatility and volume of trade effects within these markets following opening is better measured by employing relevant measures of activity than an ad hoc approach. 8. There is a strong and quantifiable relationship between activity in these markets and volatility.
44.7 Conclusion
The behavior of financial asset price data observed intraday is quite different from these data observed at longer sampling intervals such as day to day. Market anomalies which distort intraday observed data mean that volatility estimates
obtained from data observed from day-to-day trades will provide different volatility estimates. This latter feature then depends upon when the data is sampled within the trading day. If these anomalies and intraday trading patterns are accounted for in the estimation process, then better estimates of volatility are obtainable by employing intraday observed data. However, this is dependent on a sampling interval that is not so fine that these estimators break down. The specific results indicate that serial correlation in returns processes can distort parameter estimates obtainable from GARCH estimators. However, induced excess kurtosis may be a more important factor in distortions to estimates. The most important factor is mis-specification of the conditional variance (GARCH) equation from omission of relevant variables which explain the anomalies and trading patterns observed in intraday data. Measures of activity do help explain systematic shifts in the underlying returns distribution and in this way help explain “jumps” in the volatility process. This effect can be observed in the likelihood function and in asymptotic standard errors of weighting (mixing) variables. One feature is that the measure of volatility persistence observed in unweighted univariate volatility estimators is reduced substantially with inclusion of weighting variables.
Appendix At the time the ASX data were collected, the exchange had just previously moved from floor to screen trading with the six main capital city exchanges linked via satellite and trade data streamed to trading houses and brokers instantaneously via a signal G feed. The SFE maintained Pit trading for all futures and options on futures contracts at the time. Legal restrictions on third party use and development of interfaces meant the ASX had a moratorium on such usage and development. The author was required to obtain special permission from the ASX to capture trade data from a live feed from broking house Burdett, Buckeridge, and Young (BBY). There was a further delay in reporting results of research following the legal agreement obtained from the ASX. Trade data for stock prices and volume of trade were then sampled into 5-min files and subsequently into longer sampling interval files. The market index was refreshed at 1-min intervals and the above sampling scheme repeated. Futures price trades were supplied in two formats via feed: Pit (voice recorded) data and Chit data. Although the Pit data provides an instantaneous record of trade data during the trading day, some trades are lost during frantic periods of activity. The Chit records are of every trade (price, volume, buyer, seller, time stamped to the nearest second, etc.). The recorded Chits are placed in a wire basket on a carriageway and transferred up the catwalk where recorders on computers enter details via a set of simplified keystrokes. The average delay from trade to recording is around 30 s for the Chit trades. These are then fed online to trading houses and brokers. At the end of the trading day, these recorded trades are supplemented with a smaller set of records that
were submitted to the catwalk late, e.g., morning trades that may have gone to lunch in a brokers pocket and submitted during the afternoon session and also some late submitted trades. We created the intraday sampled files from both the Pit and Chit records. However, we employed the Chit trades for analysis in this paper so as to have the correct volume of trade details for each trading interval. All trades were reallocated using the time stamps to the relevant time of trade, including trades not submitted on time but supplied as an appendix to the trading day data. In this study the average number of late Chits were not a high proportion of daily trades. These futures price records were then sampled into relevant 5-min records and longer sampling frames generated in the same manner as was employed for the stock prices. For all series the first price and last price closest to the opening and closing nodes for each sampling interval were recorded with volume of trade the accumulated volume of trade within the interval defined by first and last trade.
References

Brown, S. (1990). Estimating volatility. In S. Figlewski et al. (Eds.), Financial options: From theory to practice (pp. 516–537). Homewood: Business One Irwin.
Duffee, G., Kupiec, P., & White, A. P. (1990). A primer on program trading and stock price volatility: A survey of the issues and evidence (Working Paper No. 109). FEDS, Board of Governors of the Federal Reserve System.
Edwards, F. R. (1988). Futures trading and cash market volatility: Stock index and interest rate futures. Journal of Futures Markets, 8, 421–439.
Gannon, G. L. (1994). Simultaneous volatility effects in index futures. Review of Futures Markets, 13, 1027–1066.
Gannon, G. L. (2010). Simultaneous volatility transmissions and spillovers: Theory and evidence. Review of Pacific Basin Financial Markets and Policies, 13, 127–156.
Ghose, D., & Kroner, K. F. (1994). Common persistence in conditional variances: Implications for optimal hedging. Paper presented at the 1994 Meeting of the Australasian Econometric Society.
McCurdy, T., & Morgan, I. G. (1987). Tests of the martingale hypothesis for foreign currency futures with time varying volatility. International Journal of Forecasting, 3, 131–148.
Miller, M. H., Muthuswamy, J., & Whaley, R. E. (1994). Mean reversion of Standard & Poor's 500 index basis changes: Arbitrage-induced or statistical illusion? Journal of Finance, 49, 479–513.
Nelson, D. B. (1990a). ARCH models as diffusion approximations. Journal of Econometrics, 45, 7–38.
Nelson, D. B. (1990b). Stationarity and persistence in the GARCH(1,1) model. Econometric Theory, 6, 318–334.
Nelson, D. B., & Foster, D. P. (1991). Estimating conditional variances with misspecified ARCH models: Asymptotic theory. Graduate School of Business, University of Chicago, mimeo.
Nelson, D. B., & Foster, D. P. (1994). Asymptotic filtering theory for univariate ARCH models. Econometrica, 62, 1–41.
45 Optimal Asset Allocation Under VaR Criterion: Taiwan Stock Market
Ken Hung and Suresh Srivastava
Contents
45.1 Introduction 1278
45.2 Value at Risk 1279
45.3 Empirical Results 1281
45.4 Conclusion 1288
Appendix 1: Value at Risk 1288
Appendix 2: Optimal Portfolio Under a VaR Constraint 1289
References 1290
Abstract
Value at risk (VaR) measures the worst expected loss over a given time horizon under normal market conditions at a specific level of confidence. Today, VaR is the benchmark for measuring, monitoring, and controlling downside financial risk. VaR is determined by the left tail of the cumulative probability distribution of expected returns. The expected return distribution can be generated by assuming a normal distribution, by historical simulation, or by Monte Carlo simulation. Further, a VaR-efficient frontier is constructed, and an asset allocation model subject to a target VaR constraint is examined. This paper examines the riskiness of the Taiwan stock market by determining the VaR from the expected return distribution generated by historical simulation. Our results indicate that the cumulative probability distribution has a fatter left tail compared with the left tail of a normal distribution. This implies a riskier market.
K. Hung (*)
Division of International Banking & Finance Studies, Texas A&M International University, Laredo, TX, USA
e-mail: [email protected]
S. Srivastava
University of Alaska Anchorage, Anchorage, AK, USA
e-mail: [email protected]
C.-F. Lee, J. Lee (eds.), Handbook of Financial Econometrics and Statistics, DOI 10.1007/978-1-4614-7750-1_45, © Springer Science+Business Media New York 2015
We also examined a two-sector asset allocation model subject to a target VaR constraint. The VaR-efficient frontier of the TAIEX traded stocks recommended mostly a corner portfolio.

Keywords
Value at risk • Asset allocation • Cumulative probability distribution • Normal distribution • VaR-efficient frontier • Historical simulation • Expected return distribution • Two-sector asset allocation model • Delta • Gamma • Corner portfolio • TAIEX
45.1 Introduction
Risk is defined as the standard deviation of unexpected outcomes, also known as volatility. Financial market risks are of four types: interest rate risk, exchange rate risk, equity risk, and commodity risk. For a fixed-income portfolio, the linear exposure to interest rate movements is measured by duration, and the second-order exposure is measured by convexity. In the equity market, linear exposure to market movements is measured by the systematic risk or beta coefficient. In the derivative markets, the first-order sensitivity to the value of the underlying asset is measured by delta, and the second-order exposure is measured by gamma. Innovations in the financial markets have introduced complicated portfolio choices. Hence, it is becoming more difficult for managers to obtain useful and practical tools for market risk measurement. Simple linear measures, such as basis point value or first- or second-order volatility, are inadequate because they cannot accurately reflect risk during dramatic price fluctuations. VaR (value at risk) has become a popular benchmark for downside risk measurement.1 VaR converts the risks of different financial products into one common standard, potential loss, so it can estimate market risk for various kinds of investment portfolios. VaR is used to estimate the market risk of financial assets. Of special concern is the downside risk of portfolio values resulting from fluctuations in interest rates, exchange rates, stock prices, or commodity prices. VaR provides a consistent estimate of financial risk: it indicates the risk of a dollar loss in portfolio value. The risk exposures of different investment portfolios (such as equity and fixed income) or different financial products (such as interest rate swaps and common stock) thus have a common basis for direct comparison. For decision makers, VaR is not only a statistical summary; it can also be used as a management and risk control tool to decide capital adequacy, asset allocation, synergy-based salary policy, and so on.
1 Extensive discussion of value at risk can be found in Basak and Shapiro (2001), Beder (1995), Dowd (1998), Fong and Vasicek (1997), Hendricks (1996), Hoppe (1999), Jorion (1997, 2001), Schachter (1998), Smithson and Minton (1996a, b), and Talmor (1996).
VaR is a concept widely accepted by dealers, investors, and legislative authorities. J.P. Morgan has advocated VaR and incorporated it in RiskMetrics (Morgan 1996). RiskMetrics contain most of the data and formulas used to estimate daily VaR, including daily updated fluctuation estimations for hundreds of bonds, securities, currencies, commodities, and financial derivatives. Regulatory authorities and central bankers from various countries at Basel Committee meetings agreed to use VaR as the risk-monitoring tool for the management of capital adequacy. VaR has also been widely accepted and employed by securities corporations, investment banks, commercial banks, retirement funds, and nonfinancial institutions. Risk managers have employed VaR in ex-post evaluation, that is, to estimate and justify the current market risk exposure.2 Confidence-based risk measure was first proposed by Roy (1952). The inclusion of VaR into the asset allocation model means the inclusion of downside risk into model constraints. Within the feasible scope of investment portfolio that meets shortfall constraints, the optimal investment portfolio is decided by maximum expected return. The definition of shortfall constraint is that the probability of investment portfolio value dropping to a certain level is set as the specific disaster probability. The asset allocation framework that takes VaR as one of its constraints has increased the importance of VaR and has employed VaR as an ex-ante control tool of market risk.
45.2 Value at Risk
One difficulty in estimating VaR is the choice among the various VaR methods and their corresponding hypotheses. There are three major methods to estimate VaR: variance-covariance analysis, historical simulation, and Monte Carlo simulation. Variance-covariance analysis assumes that market returns for financial products are normally distributed, so VaR can be determined from the variance and covariance of market returns. The normal distribution hypothesis of variance-covariance analysis makes it easy to estimate VaR at different reliability levels and holding periods (see Appendix 1). Its major disadvantage is that returns in financial markets are usually not normally distributed and have fat tails. This means that extreme losses occur more frequently than estimated by variance-covariance analysis. Historical simulation assumes that the future market return of the investment portfolio is distributed identically to past returns; hence, the attributes of the current market can be used to simulate the future market return (Hendricks 1996; Hull and White 1998). The historical simulation approach does not suffer from the tail-bias problem, for it does not assume a normal distribution. It relies on the actual market return distribution, and the estimate reflects what happened during the past sample period. It has another advantage over variance-covariance analysis: it can be used for nonlinear products, such as commodity derivatives. However, the problem with historical
2 Institutional use of VaR can be found in Basel (1995, 1998a, b, c, 1999), Danielsson et al. (1998), and Danielsson, Hartmann, and de Vries (1998).
simulation is its sensitivity to sample data. Many scholars pointed out that if October 1987 is included into the observation period, then it would make great difference to the estimation of VaR. Another problem with historical simulation is that the left tail of actual return distribution is at zero stock prices. In other words, it would not be accurate to assume a zero probability of loss that is greater than the past loss. Lastly, the estimation of historical simulation is more complicated than that of variance-covariance analysis. The VaR needs to be reestimated every time the level of reliability or holding period changes. Monte Carlo simulation can be used to generate future return distribution for a wide range of financial products. It is done in two steps. First, a stochastic process is specified for each financial variable along with appropriate parameters. Second, simulated prices are determined for each variable, and portfolio loss is calculated. This process is repeated 1,000 times to produce a probability distribution of losses. Monte Carlo simulation is the most powerful tool for generating the entire probability distribution function and can be used to calculate VaR for a wide range of financial products. However, it is time consuming and expensive to implement. Modern investment portfolio theories try to achieve optimal asset allocation via maximizing the risk premium per unit risk, also known as the Sharpe ratio (Elton and Gruber 1995). Within the framework of mean-variance, market risk is defined as the expected probable variance of investment portfolio. To estimate risk with standard deviation implies investors pay the same attention to the probabilities of negative and positive returns. Yet investors have different aversion to investment’s downside risk than to capital appreciation. Some investors may use semi-variance to estimate the downside risk of investment. However, semi-variance has not become popular. Campbell et al. (2001) have developed an asset allocation model that takes VaR as one of its constraints. This model takes the maximum expected loss preset by risk managers (VaR) as a constraint to maximize expected return. In other words, the optimal investment portfolio deduced from this model meets the constraint of VaR. This model is similar to the mean-variance model that generates the Sharpe index. If the expected return is a normal distribution, then this model is identical with mean-variance model. Details of this model are presented in the Appendix. Other researchers have examined four models to introduce VaR for ex-ante asset allocation of optimal investment portfolio: mean-variance (MV) model, mini-max (MM) model, scenario-based stochastic programming (SP) model, and a model that combines stochastic programming and aggregation/convergence (SP-A). The investment portfolio constructed using the SP-A model has a higher return in all empirical and simulation tests. Robustness test indicates that VaR strategy results in higher risk tolerance than risk assessment that takes severe loss into consideration. Basak and Shapiro (2001) pointed out that the drawback of risk management lies in its focus on loss probability instead of loss severity. Although loss probability is a constant, when severe loss occurs, it has greater negative consequence than non-VaR risk management.
Lucas and Klaassen (1998) pointed out the importance of correctly assessing the fat-tailed nature of the return distribution. If the real return is not normally distributed, asset allocation under the hypothesis of normality will be inefficient or infeasible. An excellent discussion of VaR and risk measurement is presented by Jorion (2001).
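As a minimal illustration of the first two estimation methods, the following Python sketch computes the absolute VaR by historical simulation and by the variance-covariance (normal) approach. The simulated fat-tailed return series, parameter values, and function names are ours and merely stand in for the TSE data.

import numpy as np
from scipy.stats import norm

def var_historical(returns, confidence=0.95, w0=1.0):
    """Absolute VaR by historical simulation: the (1 - c) quantile of the
    empirical return distribution, expressed as a positive loss on wealth w0."""
    cutoff = np.quantile(returns, 1.0 - confidence)   # e.g., the 5th percentile return
    return -w0 * cutoff

def var_normal(returns, confidence=0.95, w0=1.0):
    """Absolute VaR under the variance-covariance (normal) assumption."""
    mu, sigma = returns.mean(), returns.std(ddof=1)
    cutoff = mu + norm.ppf(1.0 - confidence) * sigma   # R* = mu + z*sigma, z < 0
    return -w0 * cutoff

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical fat-tailed daily returns (Student-t, rescaled to unit variance)
    r = 0.0006 + 0.0167 * rng.standard_t(df=4, size=5000) / np.sqrt(2)
    for c in (0.95, 0.975, 0.99):
        print(c, round(var_historical(r, c), 4), round(var_normal(r, c), 4))

With fat tails, the gap between the two estimates widens as the confidence level rises, which is the pattern discussed above.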
45.3 Empirical Results
In July 1997, a financial crisis broke out in the Southeast Asian nations and then spread to other Asian regions, leading to a series of economic problems. Taiwan was also hit by the financial crisis in late 1998. Domestic financial markets fluctuated sharply, and corporations and individuals suffered greatly. The spread of financial crises within and among countries makes it impossible for corporations and individuals to ignore market risk, and risk measurement is the first step before investing. The Taiwan market general weighted stock index, the individual weighted stock price indexes, and the interbank short loan interest rate used in this research are obtained from the AREMOS database. We divide the historical period into two groups, from 1980 to 1999 and from 1991 to 1999, so as to analyze the impact of violent stock market fluctuations on VaR estimation, such as the New York stock market collapse in October 1987 and the dramatic run-up of the Taiwan stock market from 1988 to 1990. The first period is rather long and can indicate the nature of dramatic stock fluctuations. The second period is rather short and can reflect the change in stock market tendency. This paper employs historical simulation to reproduce the daily fluctuations of returns for electrical machinery, cement, food, pulp and paper, plastics and petroleum, and textile and fiber stocks trading in the Taiwan stock market during the periods 1980–1999 and 1991–1999. We estimate their VaRs at reliability levels of 95 %, 97.5 %, and 99 %. The expected return of the investment portfolio in 1999 is the sum of the annual mean returns of the various stocks multiplied by their respective weights. We use this expected return to estimate the optimal stock holding proportions for 1999 and analyze the impact of different historical simulation periods on optimal asset allocation. Table 45.1 presents the summary of the TSE general weighted stock index (daily data) and estimated VaR for the periods 1980–1999 and 1991–1999. Table 45.2 presents the cumulative probability distribution of the TSE daily index return for the 1980–1999 period and of daily returns under the assumption of normality. It shows that at confidence levels lower than 95.8 % (e.g., 90 %), the left-tail probability for

Table 45.1 Summary of Taiwan Stock Exchange (TSE) daily index
Period       Mean return   Standard deviation   Kurtosis   VaR*
1980–1999    0.06 %        1.67 %               2.735      0.0263
1991–1999    0.04 %        1.60 %               2.464      0.0249
Annualized return for the two periods is 23.64 % and 11.87 %, respectively. VaR* is the maximum expected loss for a 1-day holding period at a reliability level of 95 %.
Table 45.2 Cumulative probability distribution of TSE daily index (period: 1980–1999)
Return (%)   Historical data, cumulative probability   Normal distribution, cumulative probability
-7           0                                         1.20946E-05
-6           0.004723                                  0.000144816
-5           0.01102                                   0.001236891
-4           0.021165                                  0.007577989
-3           0.038482                                  0.033571597
-2.9         0.040581                                  0.0382868
-2.8         0.042855                                  0.043528507
-2.7         0.045828                                  0.049334718
-2.5         0.054749                                  0.062791449
-2.3         0.066993                                  0.078949736
-2           0.082736                                  0.108825855
-1           0.185062                                  0.262753248
0            0.477873                                  0.485257048
1            0.775407                                  0.712585369
2            0.905895                                  0.876745371
3            0.960994                                  0.960522967
4            0.982158                                  0.990731272
5            0.992304                                  0.998424485
6            0.995977                                  0.999807736
7            1                                         0.999983254
historical distribution is higher than the normal return probability (4.2 %). Hence, under the normal distribution assumption, the VaR is overestimated, and this leads to an overcautious investment decision. At confidence level higher than 95.8 % (e.g., 97.5 %), the left-tail probability for historical distribution is lower than the normal return probability (4.2 %). Hence, under the normal distribution assumption, the VaR is underestimated, and this leads to an overactive investment decision. Figure 45.1 is the graphical presentation of the data in Table 45.2. The solid blue line represents cumulative probability distributions of TSE daily index return, and the dashed red line represents the normal distribution. The bottom panel is the enlarged view of the left tail. This graph also indicates that VaR estimated using extreme values of historical distribution will lead to an overactive investment decision. Table 45.3 reports annualized returns and standard deviations for TSE daily index and six selected industries: cement, electrical machinery, food, pulp and paper, plastics and petroleum, and textile and fiber. For the 1980–1999 period, the food industry had the greatest risk with a standard deviation of 53.47 % and 15.13 % annual return, whereas the overall market had a standard deviation of 50.80 % with 23.22 % annual return. Textile and fiber was the least risky industry with a standard deviation of 41.21 % and 12.78 % annual return. For the 1991–1999 period, the electrical machinery industry had the greatest risk with a standard
Fig. 45.1 Cumulative probability distribution of the Taiwan daily stock index (empirical vs. normal), period 1980–1999. The lower panel enlarges the left tail of the cumulative probability of the Taiwan weighted daily stock index and shows a fat left tail.
Table 45.3 TSE index and selected industry's returns and standard deviations
Industry                 1980–1998 Annual return (%)   Standard deviation (%)   1991–1998 Annual return (%)   Standard deviation (%)
TSE index                23.22                         50.80                    9.40                          36.58
Cement                   12.48                         48.37                    0.45                          24.92
Electrical machinery     19.49                         46.76                    26.12                         42.26
Food                     15.13                         53.47                    9.55                          32.29
Pulp and paper           8.50                          45.78                    4.64                          39.41
Plastics and petroleum   13.32                         43.99                    11.39                         36.26
Textile and fiber        12.78                         41.21                    7.88                          36.62
Fig. 45.2 VaR-efficient frontier (expected return versus risk). The upper figure refers to the investment portfolio of electrical machinery and plastics and petroleum stocks, and the lower figure refers to the investment portfolio of cement and food stocks. VaR is set at a reliability level of 95 %. Expected return and VaR are estimated from TSE industry indexes from 1980 to 1998.
deviation of 42.267 % and 26.12 % annual return, whereas the overall market had a standard deviation of 36.58 % with 9.40 % annual return. The cement industry was the least risky industry with a standard deviation of 24.92 % and 0.45 % annual return. Next we constructed a two-industry optimal portfolio subject to VaR constraint. The optimal asset allocation for the two-industry portfolio is obtained by maximizing S(p) (derivation discussed in Appendix). The resulting VaR-efficient frontiers are plotted in Fig. 45.2. The upper panel in Fig. 45.2 refers
Table 45.4 Optimal asset allocation for the two-industry portfolio obtained by maximizing S(p) at different level of confidence Portfolio choices Electrical machinery cement Electrical machinery food Electrical machinery pulp and paper Electrical machinery plastics and petroleum Electrical machinery textile and fiber Cement food Cement pulp and paper Cement plastics and petroleum Cement textile and fiber Food pulp and paper Food plastics and petroleum Food textile and fiber Pulp and paper plastics and petroleum Pulp and paper textile and fiber Plastics and petroleum textile and fiber
Confidence level 95 % {1,0}; {1,0}a {1,0}; {1,0} {1,0}; {1,0} {1,0}; {1,0}
97.5 % {1,0}; {1,0} {1,0}; {1,0} {1,0}; {1,0} {1,0}; {1,0}
99 % {1,0}; {1,0} {1,0}; {1,0} {1,0}; {1,0} {1,0}; {1,0}
{1,0}; {1,0} {0,1}; {0,1} {0,1}; {0,1} {0.12, 0.88}; {0,1} {0.67, 0.33}; {0,1} {1,0}; {1,0}
{1,0}; {1,0} {0,1}; {0,1} {0,1}; {0,1} {0.04, 0.96}; {0,1} {0.6, 0.4}; {0,1}
{1,0}; {1,0} {1,0}; {1,0} {0,1}; {0,1} {0,1}; {0,1} {0.99, 0.01}; {1,0}
{1,0}; {1,0} {1,0}; {1,0} {0,1}; {0,1} {0,1}; {0,1} {0.89, 0.11}; {1,0}
{1,0}; {1,0} {0,1}; {0,1} {0,1}; {0,1} {0.17, 0.83}; {0,1} {0.15, 0.85}; {0,1} {0.98, 0.02}; {0,1} {1,0}; {1,0} {1,0}; {1,0} {0,1}; {0,1} {0,1}; {0,1} {1,0}; {1,0}
{1,0}; {1,0}
The first set {x, y} refers to the historical simulation for the period 1980–1998, and the second set {x, y} refers to the historical simulation for the period 1991–1998.
a {1, 0} represents 100 % investment in the electrical machinery industry and 0 % investment in the cement industry
to VaR-efficient portfolios of electrical machinery and plastics and petroleum stocks, and the lower panel in Fig. 45.2 refers to VaR-efficient portfolios of cement and food stocks. VaR is set at the 95 % confidence level. Table 45.4 presents portfolio weights for different combinations of industry stocks at different levels of confidence. Asset allocations for most of the industry combinations represent corner solutions, i.e., 100 % investment in one industry. For example, when electrical machinery stocks are combined with stocks from any other industry, the optimal portfolio is 100 % investment in electrical machinery stocks, for both time periods and all three confidence levels. Asset allocation for cement stocks is dominated by the other five industry stocks. Allocations of food stocks dominate all other stock weights except electrical machinery. The general nature of asset allocation is the same for both time periods and all confidence levels. Suppose an investor selects the VaR constraint for maximizing S(p) at a specified level of confidence (say 95 %) and the actual VaR(c, p*) is at a higher level (97.5 %); then the portfolio VaR will be greater than the target VaR*. In this case investors will have to invest a portion of the fund in T-bills (B > 0, defined in Appendix 2). This will make the investment VaR(c, p*) equal to the VaR* in the preset
Table 45.5 Optimal allocation for two-industry portfolio (historical simulation, period 1980–1998) Confidence level (%) 95 97.5 99 95 97.5 99 95 97.5 99 Confidence level (%) 95 97.5 99 Confidence level (%) 95 97.5 99
Confidence level (%) 95 97.5 99 Confidence level (%) 95 97.5 99
Cement Food Portfolio VaR Lending, Cement Food (%) (%) (c, p*) B yuan VaR* (%) (%) 100 0 31.12 0 31.12 100 0 100 0 42.57 121 31.12 87.9 0 100 0 53.92 216 31.12 78.4 0 0 100 27.49 0 27.49 0 100 0 100 40.22 139 27.49 0 86.1 0 100 56.27 267 27.49 0 73.3 12 88 27.52 0 27.52 12 88 4 96 41.99 154 27.52 3.4 81.2 17 83 53.42 246 27.52 12.8 62.6 Pulp and Textile Portfolio Pulp and Textile paper and fiber VaR(c, Lending, paper and fiber (%) (%) p*) B yuan VaR* (%) (%) 0 100 29.58 0 29.58 0 100 0 100 41.97 132 29.58 0 86.8 0 100 53.85 230 29.58 0 77 Textile Textile Cement and fiber Portfolio Lending, Cement and fiber (%) (%) VaR(c, p*) B yuan VaR* (%) (%) 67 33 25.86 0 25.86 12 88 60 40 41.97 172 25.86 49.7 33.1 15 85 52.51 256 25.86 11.2 63.2 Plastics Plastics and Textile Portfolio and Textile Lending, petroleum and fiber VaR petroleum and fiber B yuan VaR* (%) (%) (%) (c, p*) (%) 99 1 28.43 0 28.43 99 1 89 11 41.35 139 28.43 76.6 9.5 100 0 55.50 253 28.43 74.7 0 Food Textile and Portfolio Lending, Food Textile and (%) fiber (%) VaR(c, p*) B yuan VaR* (%) fiber (%) 100 0 27.49 0 27.49 100 0 100 0 40.22 121 27.49 87.9 0 100 0 56.27 267 27.49 73.3 0
Cash (%) 0 12.1 21.6 0 13.9 26.7 0 15.4 24.6 Cash (%) 0 13.2 23 Cash (%) 0 17.2 25.6
Cash (%) 0 13.9 25.3 Cash (%) 0 12.1 26.7
Two-industry portfolio with initial investment of 1,000 yuan. Allocations for other industry combinations are available to interested readers.
constraint. In the opposite case, where the VaR* constraint is specified at a higher level than the portfolio VaR(c, p*), investors will borrow money to invest in risky assets (B < 0). Tables 45.5 and 45.6 list examples of investment in two-industry stocks and T-bills for the periods 1980–1999 and 1991–1999, respectively. In each case, the target VaR* in the preset constraint is at a 95 % level of confidence, and 1,000 yuan is invested in the portfolio. In the first panel of Table 45.5, 1,000 yuan is invested in
Table 45.6 Optimal allocation for two-industry portfolio (historical simulation, period 1991–1998) Confidence level (%) 95 97.5 99 95 97.5 99 95 97.5 99 Confidence level (%) 95 97.5 99 Confidence level (%) 95 97.5 99
Confidence level (%) 95 97.5 99 Confidence level (%) 95 97.5 99
Cement Food (%) (%) 100 0 100 0 100 0 0 100 0 100 0 100 0 100 0 100 0 100 Pulp and Textile paper and fiber (%) (%) 0 100 0 100 0 100 Textile Cement and fiber (%) (%) 0 100 0 100 0 100 Plastics Textile and and petroleum fiber (%) (%) 100 0 100 0 100 0 Food Textile and (%) fiber (%) 100 0 100 0 100 0
Portfolio VaR Lending, Cement Food (c, p*) B yuan VaR* (%) (%) 30.55 0 30.55 100 0 42.57 128 30.55 87.2 0 53.92 221 30.55 77.9 0 24.57 0 24.57 0 100 34.68 117 24.57 0 88.3 48.41 238 24.57 0 76.2 27.19 0 27.19 0 100 35.95 100 27.19 0 90 48.59 213 27.19 0 78.7 Portfolio Pulp and Textile VaR(c, Lending, paper and fiber p*) B yuan VaR* (%) (%) 27.75 0 27.75 0 100 38.98 124 27.75 0 87.6 49.06 212 27.75 0 78.8 Textile Portfolio Lending, Cement and fiber VaR(c, p*) B yuan VaR* (%) (%) 27.75 0 27.75 0 100 38.98 124 27.75 0 87.6 49.06 212 27.75 0 78.8 Plastics Textile and and Portfolio Lending, petroleum fiber VaR(c, p*) B yuan VaR* (%) (%) 27.19 0 27.19 0 100 35.95 100 27.19 0 90 48.59 213 27.19 0 78.7 Portfolio Lending, Food Textile and fiber (%) VaR(c, p*) B yuan VaR* (%) 24.57 0 24.57 100 0 34.68 117 24.57 88.3 0 48.41 238 24.57 76.2 0
Cash (%) 0 12.8 22.1 0 11.7 23.8 0 10 21.3 Cash (%) 0% 12.4 21.2 Cash (%) 0 12.4 21.2
Cash (%) 0 10 21.3 Cash (%) 0 11.7 23.8
Two-industry portfolio with initial investment of 1,000 yuan. Allocations for other industry combinations are available to interested readers.
electrical machinery stocks and 0 in cement stocks. These allocations are from Table 45.4. Portfolio VaR(c, p*) is 31.12, 42.57, and 53.92 yuan at 95 %, 97.5 %, and 99 %, respectively. This leads to lending of 0, 121, and 216 yuan to meet the target VaR of 31.12. Thus, the 97.5 % portfolio consists of 87.9 % in electrical machinery stocks, 0 in cement stocks, and 12.1 % in T-bills.
45.4 Conclusion
The TSE daily index was found to be riskier than implied by the assumption of normally distributed market returns: the left tail of the cumulative return distribution is fatter and the value at risk is higher, so relying on the normal assumption would lead to overactive investment activity. For the asset allocation model under a constrained VaR framework, most of the optimal portfolios have predominant investment in stocks from one industry. Hence, it would be inappropriate to comment on the optimal allocation of a future investment portfolio based on the past stock performance of the unique period studied.
Appendix 1: Value at Risk

Let W0 be the initial investment and R be the rate of return of a portfolio. The value of the portfolio at the end of the target horizon will be W = W0(1 + R). Let μ and σ be the expected return and standard deviation of R. The lowest portfolio value at the confidence level c is defined as W* = W0(1 + R*). The relative VaR is the dollar loss relative to the mean:

VaR(mean) = E(W) − W* = −W0(R* − μ)    (45.1)

The absolute VaR is the dollar loss relative to zero:

VaR(zero) = W0 − W* = −W0 R*    (45.2)
W* and R* are the minimum value and the cutoff return, respectively. In this paper we are discussing absolute VaR. The general form of VaR can be derived from the probability distribution of the future portfolio value f(w). For a given confidence level c, the worst possible portfolio value W* is such that the probability of exceeding W* is c:

c = ∫_{W*}^{∞} f(w) dw    (45.3)

The probability of a value lower than W*, p = P(w ≤ W*), is 1 − c:

p = P(w ≤ W*) = 1 − c = ∫_{−∞}^{W*} f(w) dw    (45.4)
A typical confidence level c is 95 %. This computation of VaR does not require estimation of the variance-covariance matrix. When portfolio returns are normally distributed, the distribution f(w) can be translated into a standard normal distribution Φ(ε), where ε has mean zero and standard deviation of one, and VaR can be determined from tables of the cumulative standard normal distribution:

1 − c = ∫_{−∞}^{z} φ(ε) dε = N(z)    (45.5)

and the cutoff return is R* = zσ + μ, where z = N⁻¹(1 − c) is the standard normal quantile (negative for c > 50 %). This Appendix is based on Jorion (2001). Details of the normal distribution can be found in Johnson and Wichern (2007).
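A short Python sketch of these formulas (an illustration under the normality assumption, using the 1980–1999 moments reported in Table 45.1; the function name is ours):

from scipy.stats import norm

def normal_var(w0, mu, sigma, confidence=0.95):
    """Cutoff return and VaR under normality (Appendix 1 formulas).
    Returns (cutoff return R*, absolute VaR, relative VaR)."""
    z = norm.ppf(1.0 - confidence)      # negative quantile, e.g., about -1.645 at 95 %
    r_star = mu + z * sigma             # cutoff return R*
    var_zero = -w0 * r_star             # absolute VaR: loss relative to zero
    var_mean = -w0 * (r_star - mu)      # relative VaR: loss relative to the mean
    return r_star, var_zero, var_mean

# Example with the 1980-1999 TSE daily moments from Table 45.1 (mean 0.06 %, std 1.67 %)
print(normal_var(w0=1.0, mu=0.0006, sigma=0.0167, confidence=0.95))

At the 95 % level this gives an absolute VaR of roughly 0.027, which can be compared with the historical-simulation estimate of 0.0263 in Table 45.1.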
Appendix 2: Optimal Portfolio Under a VaR Constraint

We present an asset allocation model under a value-at-risk constraint. This model sets the maximum expected loss not to exceed the VaR for a selected investment horizon T at a given confidence level. Then asset proportions are allocated across the portfolio such that the wealth at the end of the investment horizon is maximized. Suppose W0 is the investor's initial wealth and B is the amount that the investor can borrow (B > 0) or lend (B < 0) at the risk-free interest rate rf. Let n be the number of risky assets, γi be the fraction invested in risky asset i, and P(i, t) be the price of asset i at time t. Then the initial value of the portfolio

W0 + B = Σ_{i=1}^{n} γ_i P(i, 0)    (45.6)
represents the budget constraint. Let VaR* be the target VaR consistent with the investor's risk aversion and WT be the wealth at the end of the holding period T. The downside risk constraint can be written as

Pr{(W0 − WT) ≥ VaR*} ≤ (1 − c)    (45.7)

where Pr{.} denotes the expected probability conditioned on information available at time t = 0, and c is the confidence level. Equation 45.7 can be written as

Pr{WT ≤ (W0 − VaR*)} ≤ (1 − c)    (45.8)

Let rp be the total portfolio return at the end of the holding period T; then the expected wealth at the end of the holding period can be written as

E(WT) = (W0 + B)(1 + rp) − B(1 + rf)    (45.9)

The investor's constrained wealth-maximizing objective can be written as

Max E(WT)  s.t.  Pr{WT ≤ (W0 − VaR*)} ≤ (1 − c)    (45.10)
The performance measure S(p) and the borrowed amount can be deduced from Eq. 45.10. Let p* be the maximizing portfolio and q(c, p) the quantile that corresponds to probability (1 − c), which can be obtained from the portfolio return's cumulative density function. The maximizing portfolio p* is defined as

p*: max_p S(p) = (r_p − r_f) / (W0 r_f − W0 q(c, p))    (45.11)
The initial wealth in the denominator of Eq. 45.11 is a scale constant and does not affect the asset allocation. Let VaR(c, p) = −W0 q(c, p) denote portfolio p's VaR; then the denominator of Eq. 45.11 can be written as

F(c, p) = W0 r_f + VaR(c, p)    (45.12)

If we consider rf as the benchmark return, then F(c, p) represents the potential for portfolio losses at the confidence level c. The performance measure S(p) is a Sharpe-like reward-risk ratio, and the optimization problem becomes

Optimal portfolio:  p*: max_p S(p) = (r_p − r_f) / F(c, p)
The optimal portfolio allocation is independent of the initial wealth. It is also independent of the target VaR*. The risk measure F(c, p*) depends on VaR(c, p*) and not on VaR*. Investors first allocate the wealth among the risky assets and then decide on borrowing or lending depending on the value of {VaR* − VaR(c, p*)}. The borrowed amount B can be written as

B = W0 (VaR* − VaR(c, p*)) / F(c, p*)

If {VaR* − VaR(c, p*)} is positive, then there is an opportunity to increase the portfolio return by borrowing at the risk-free rate and investing the proceeds in the risky assets. If {VaR* − VaR(c, p*)} is negative, then the portfolio risk needs to be reduced by investing a portion of the initial wealth in the risk-free asset. In either case the relative proportion of funds invested in the risky assets remains the same. Since VaR(c, p*) depends on the choice of holding period, confidence level, VaR estimation technique, and the assumption regarding the expected return distribution, the borrowing (B > 0) or lending (B < 0) will also change. This Appendix is based on Campbell et al. (2001).
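The following Python sketch illustrates, under our reconstruction of Eqs. 45.11 and 45.12 above, how the optimal two-asset weights and the borrowing or lending amount B might be computed by a simple grid search; the function, its inputs, and the grid-search approach are hypothetical and are not part of Campbell et al. (2001).

import numpy as np

def optimal_two_asset(returns_a, returns_b, rf, w0=1000.0, confidence=0.95,
                      target_var=None, n_grid=101):
    """Grid search over two-asset weights maximizing
    S(p) = (r_p - r_f) / (W0*r_f - W0*q(c, p)), then size B against a target VaR*."""
    best = None
    for w in np.linspace(0.0, 1.0, n_grid):
        port = w * returns_a + (1.0 - w) * returns_b
        q = np.quantile(port, 1.0 - confidence)       # return quantile at (1 - c)
        r_p = port.mean()
        denom = w0 * rf - w0 * q                      # = W0*rf + VaR(c, p)
        s = (r_p - rf) / denom
        if best is None or s > best[0]:
            best = (s, w, -w0 * q)                    # store S(p), weight, VaR(c, p)
    s, w_star, var_cp = best
    b = None
    if target_var is not None:
        f = w0 * rf + var_cp                          # F(c, p*)
        b = w0 * (target_var - var_cp) / f            # B > 0: borrow; B < 0: lend
    return w_star, var_cp, b

# Example call (r_a, r_b would be arrays of industry index returns):
# optimal_two_asset(r_a, r_b, rf=0.05, w0=1000.0, confidence=0.95, target_var=None)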
References

Basak, S., & Shapiro, A. (2001). Value-at-risk based risk management: Optimal policies and asset prices. The Review of Financial Studies, 14(2), 371–405.
Beder, T. S. (1995). VAR: Seductive but dangerous. Financial Analysts Journal, 51(5), 12–24.
Campbell, R., Huisman, R., & Koedijk, K. (2001). Optimal portfolio selection in a value-at-risk framework. Journal of Banking and Finance, 25, 1789–1804.
Dowd, K. (1998). Beyond value at risk: The new science of risk management. New York: Wiley.
Elton, E. J., & Gruber, M. J. (1995). Modern portfolio theory and investment analysis (5th ed.). New York: Wiley.
Fong, G., & Vasicek, O. A. (1997). A multidimensional framework for risk analysis. Financial Analysts Journal, 53, 51–57.
Hendricks, D. (1996). Evaluation of value-at-risk models using historical data. FRBNY Economic Policy Review, 2, 39–69.
Hoppe, R. (1999). It's time we buried value-at-risk. Risk Professional, 1, October, p. 18. London.
Hull, J., & White, A. (1998). Incorporating volatility updating into the historical simulation method for value-at-risk. Journal of Risk, 1, 5–19.
Johnson, R., & Wichern, D. (2007). Applied multivariate statistical analysis (6th ed.). New York: Pearson Education.
Jorion, P. (1997). In defense of VAR. Derivatives Strategy, 2, 20–23.
Jorion, P. (2001). Value at risk (2nd ed.). New York: McGraw Hill.
Lucas, A., & Klaassen, P. (1998). Extreme returns, downside risk, and optimal asset allocation. Journal of Portfolio Management, 25(1), 71–79.
Morgan, J. P. (1996). RiskMetrics™ – technical document. New York: Morgan Guaranty Trust Company.
Roy, A. D. (1952). Safety first and the holding of assets. Econometrica, 20, 431–449.
Schachter, B. (1998). An irreverent guide to value at risk. Risks and Rewards, (1), 17–18.
Smithson, C., & Minton, L. (1996a). Value-at-risk (1): The debate on the use of VaR. Risk, 9, 25–27.
Smithson, C., & Minton, L. (1996b). Value-at-risk (2): The debate on the use of VaR. Risk, 9, 38–39.
Talmor, S. (1996). History repeat itself. The Banker, (1), 75–76.
46 Alternative Methods for Estimating Firm's Growth Rate
Ivan E. Brick, Hong-Yi Chen, and Cheng-Few Lee
Contents
46.1 Introduction 1294
46.2 The Discounted Cash Flow Model and the Gordon Growth Model 1295
46.3 Internal Growth Rate and Sustainable Growth Rate Models 1300
46.4 Statistical Methods 1303
46.5 Conclusion 1309
References 1309
Abstract
The most common valuation model is the dividend growth model. The growth rate is found by taking the product of the retention rate and the return on equity. What is less well understood are the basic assumptions of this model. In this paper, we demonstrate that the model makes strong assumptions regarding the financing mix of the firm. In addition, we discuss several methods suggested in the literature on estimating growth rates and analyze whether these approaches are consistent with the use of a constant discount rate to evaluate the firm's assets and equity.

This chapter is a slightly revised version of Chapter 64 of Encyclopedia of Finance, 2nd Edition and Brick et al. (2014).
I.E. Brick (*)
Department of Finance and Economics, Rutgers, The State University of New Jersey, Newark/New Brunswick, NJ, USA
e-mail: [email protected]
H.-Y. Chen
Department of Finance, National Central University, Taoyuan, Taiwan
e-mail: [email protected]
C.-F. Lee
Department of Finance and Economics, Rutgers Business School, Rutgers, The State University of New Jersey, Piscataway, NJ, USA
Graduate Institute of Finance, National Chiao Tung University, Hsinchu, Taiwan
e-mail: [email protected]
C.-F. Lee, J. Lee (eds.), Handbook of Financial Econometrics and Statistics, DOI 10.1007/978-1-4614-7750-1_46, © Springer Science+Business Media New York 2015
The literature has also suggested estimating the growth rate by using the average percentage change method, the compound-sum method, and/or regression methods. We demonstrate that the average percentage change method is very sensitive to extreme observations. Moreover, on average, the regression method yields similar but somewhat smaller estimates of the growth rate compared to the compound-sum method. We also discuss the inferred method suggested by Gordon and Gordon (1997) to estimate the growth rate. Advantages, disadvantages, and the interrelationship among these estimation methods are also discussed in detail.
Keywords
Growth rate • Discount cash flow model • Internal growth rate • Sustainable growth rate • Compound sum method
46.1 Introduction
One of the more widely used valuation models is that developed by Gordon and Shapiro (1956) and Gordon (1962), known as the dividend growth model. In security analysis and portfolio management, growth rate estimates of earnings, dividends, and price per share are important factors in determining the value of an investment or a firm. These publications demonstrate that the growth rate is found by taking the product of the retention rate and the return on equity. What is less well understood are the basic assumptions of this model. In this paper, we demonstrate that the model makes strong assumptions regarding the financing mix of the firm. In addition, we will also discuss several methods suggested in the literature on estimating growth rates. We will analyze whether these approaches are consistent with the use of a constant discount rate to evaluate the firm's assets and equity. In particular, we will demonstrate that the underlying assumptions of the internal growth rate model (whereby no external funds are used to finance growth) are incompatible with the constant discount rate model of valuation. The literature has also suggested estimating the growth rate by taking the average percentage change of dividends over a sample period, taking the geometric average of the change in dividends, or using regression analysis to estimate the growth rate (e.g., Lee et al. 2009; Lee et al. 2012; Lee et al. 2000; and Ross et al. 2010). Gordon and Gordon (1997) suggest first using the Capital Asset Pricing Model (CAPM) to determine the cost of equity of the firm and then using the dividend growth model to infer the growth rate. Advantages, disadvantages, and the interrelationship among these estimation methods are also discussed in detail. This paper is organized as follows. In Sect. 46.2 we present the Gordon and Shapiro (1956) model. We discuss the inherent assumptions of the model and its implied method to estimate the growth rate. Section 46.3 analyzes the internal growth rate and sustainable growth rate models. Section 46.4 describes leading statistical methods for estimating a firm's growth rate. We will also present the
inferred method suggested by Gordon and Gordon (1997) to estimate the growth rate. Concluding remarks appear in Sect. 46.5.
46.2 The Discounted Cash Flow Model and the Gordon Growth Model
The traditional academic approach to evaluate a firm's equity is based upon the constant discount rate method. One approach uses the after-tax weighted average cost of capital as a discount rate. This model is expressed as:

Value of Equity = Σ_{t=1}^{∞} CFu_t / (1 + ATWACOC)^t − Debt_t    (46.1)
where CFu_t is the expected unlevered cash flow of the firm at time t and Debt_t is the market value of debt outstanding. ATWACOC equals L(1 − t)R_d + (1 − L)r, where L is the market value proportion of debt, t is the corporate tax rate, R_d is the cost of debt, and r is the cost of equity. The first term on the right-hand side of Eq. 46.1 is the value of the assets. Subtracting out the value of debt yields the value of equity. The price per share is therefore the value of equity divided by the number of shares outstanding. Alternatively, the value of equity can be directly found by discounting the dividends per share by the cost of equity, or more formally:

Value of Common Stock (P_0) = Σ_{t=1}^{∞} d_t / (1 + r)^t    (46.2)
where d_t is the dividend per share at time t. Boudreaux and Long (1979) and Chambers et al. (1982) demonstrate the equivalence of these two approaches assuming that the level of debt is a constant percentage of the value of the firm.1 Accordingly:

[Σ_{t=1}^{∞} X_t / (1 + ATWACOC)^t − Debt_t] / (# of Shares Outstanding) = Σ_{t=1}^{∞} d_t / (1 + r)^t    (46.3)
If we assume that dividends per share grow at a constant rate g, then Eq. 46.2 reduces to the basic dividend growth model2:

P_0 = d_1 / (r − g)    (46.4)

1 See Brick and Weaver (1984, 1997) concerning the magnitude of the error in valuation using a constant discount rate when the firm does not maintain a constant market-based leverage ratio.
2 Gordon and Shapiro's (1956) model assumes that dividends are paid continuously and hence P_0 = d_1/(r − g).
Gordon and Shapiro (1956) demonstrate that if b is the fraction of earnings retained within the firm and r is the rate of return the firm will earn on all new investments, then g = br. Let I_t denote the level of new investment at time t. Because growth in earnings arises from the return on new investments, earnings can be written as:

E_t = E_{t−1} + r·I_{t−1}    (46.5)
where E_t is the earnings in period t.3 If the firm's retention rate is constant and retained earnings are used for new investment, then the earnings at time t are

E_t = E_{t−1} + rb·E_{t−1} = E_{t−1}(1 + rb)    (46.6)
The growth rate in earnings is the percentage change in earnings and can be expressed as

g_E = (E_t − E_{t−1}) / E_{t−1} = (E_{t−1}(1 + rb) − E_{t−1}) / E_{t−1} = rb    (46.7)
If a constant proportion of earnings is assumed to be paid out each year, the growth in earnings equals the growth in dividends, implying g = br. It is worthwhile to examine the implication of this model for the growth in stock prices over time. The growth in the stock price is

g_P = (P_{t+1} − P_t) / P_t    (46.8)
Recognizing that P_t and P_{t+1} can be defined by Eq. 46.4, while noting that d_{t+2} is equal to d_{t+1}(1 + br), then:

g_P = [d_{t+2}/(k − rb) − d_{t+1}/(k − rb)] / [d_{t+1}/(k − rb)] = (d_{t+2} − d_{t+1}) / d_{t+1} = (d_{t+1}(1 + br) − d_{t+1}) / d_{t+1} = br    (46.9)
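A small Python sketch illustrating this result (illustrative inputs only, loosely based on the numerical example that follows): with a constant retention rate b and return r on reinvested earnings, earnings, dividends, and the price implied by Eq. 46.4 all grow at g = br.

def gordon_path(e1, payout, roe, k, periods=5):
    """With constant retention b = 1 - payout and return roe on reinvested earnings,
    earnings, dividends, and the Gordon price P_t = d_{t+1}/(k - g) all grow at g = b*roe."""
    b = 1.0 - payout
    g = b * roe
    rows = []
    e = e1
    for t in range(1, periods + 1):
        d = payout * e
        price = d * (1 + g) / (k - g)     # price at t, based on next period's dividend
        rows.append((t, round(e, 2), round(d, 2), round(price, 2)))
        e *= 1 + g
    return g, rows

# Illustrative (hypothetical) inputs: E1 = 9.38, payout 84 %, ROE 25 %, k = 25 %
print(gordon_path(9.38, 0.84, 0.25, 0.25))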
Thus, under the assumption of a constant retention rate, for a one-period model, dividends, earnings, and prices are all expected to grow at the same rate. The relationship between the growth rate, g, the retention rate, b, and the return on equity, r, can be expanded to a multi-period setting as the following numerical example illustrates. In this example, we assume that the book value of the firm’s assets equal the market value of the firm. We will assume that the growth rate of the firm sales and assets is 4 % and the tax rate is equal to 40 %. The book value of the
3 Earnings in this model are defined using the cash basis of accounting and not on an accrual basis.
assets at time 0 is $50, and we assume a depreciation rate of 10 % per annum. The amount of debt outstanding is $12.50 and the amount of equity outstanding is $37.50. We assume that the cost of debt, Rd, is 12 % and the cost of equity, r, is 25 %, implying an ATWACOC of 20.55 %. The expected dividend at t = 1, d1, must satisfy Eq. 46.4. That is, 37.50 = d1/(0.25 − 0.04). The unlevered cash flow is defined as Sales less Costs (excluding the depreciation expense) less Investment less the tax paid. Tax paid is defined as the tax rate (which we assume to be 40 %) times Sales minus Costs minus the Depreciation Expense. Recognizing that the value of the firm is given by CFu1/(ATWACOC − g), if firm value is $50, g = 4 %, and ATWACOC is 20.55 %, then the expected unlevered cash flow at time 1 is $8.28. We assume that the asset turnover ratio is 1.7. Hence, if assets at time 0 are $50, the expected sales at time 1 are $85. To obtain the level of investment, note that the depreciation expense at time 1 is $5. If the book value of assets equals $52, then the firm must invest $7. To obtain an expected unlevered cash flow at t = 1 of $8.28, the Gross Profit Margin is assumed to be approximately 26.03 %, resulting in expected costs at time 1 of $62.88. The interest expense at time 1 is the cost of debt times the amount of debt outstanding at time zero, or $1.50. The Earnings Before Taxes (EBT) is defined as Sales − Costs − Interest Expense − Depreciation Expense, which equals $15.63 at time 1. 40 % of EBT is the taxes paid, or $6.25, resulting in a net income (NI) of $9.38. ROE, which equals Net Income/Book Value of Equity at the beginning of the period, is 25 %. Since the aggregate level of dividends at time 1 is $7.88, the dividend payout ratio (1 − b) is 84 %. Note that b is therefore equal to 16 % and b × ROE = 4 %.4 Further note that the firm will increase its book value of equity via retention of NI by $1.50 (RE in the table). In order to maintain a leverage ratio of 25 %, the firm must increase the level of debt from time 0 to time 1 by $0.50. The entries for time periods 2–5 follow the logical extension of the above discussion, and as shown in the table, the retention rate b is 16 % and ROE = 25 % for each period. Again the product of b and ROE results in the expected growth rate of 4 %. Further note that g = 4 % implies that sales, costs, book value of assets, depreciation, unlevered cash flow, cash flow to stockholders, value of debt, and value of equity all increase by 4 % per annum. Investors may use a one-period model in selecting stocks, but the future profitability of investment opportunities plays an important role in determining the value of the firm and its EPS and dividend per share. The rate of return on new investments can be expressed as a fraction, c (perhaps larger than 1), of the rate of return security holders require (r):

k = cr    (46.10)
4 Generally, practitioners define ROE as the ratio of the Net Income to the end-of-year Stockholders Equity. Here we define ROE as the ratio of the Net Income to the beginning-of-year Stockholders Equity. Brick et al. (2012) demonstrate that the practitioner's definition is one of the sources of the Bowman Paradox reported in the Organization Management literature.
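The mechanics of this example, which are tabulated in Table 46.1 below, can be sketched in Python as follows (a sketch using the parameter values stated above; the function name and structure are ours):

def pro_forma(assets0=50.0, leverage=0.25, rd=0.12, re=0.25, tax=0.40,
              turnover=1.7, gpm=0.26029, growth=0.04, dep_rate=0.10, years=5):
    """Pro forma underlying Table 46.1 (book value = market value), with the
    chapter's assumed growth rate, leverage, turnover, and gross profit margin."""
    rows = []
    assets = assets0
    debt = leverage * assets0
    equity = (1 - leverage) * assets0
    for t in range(1, years + 1):
        sales = turnover * assets
        costs = (1 - gpm) * sales
        dep = dep_rate * assets
        interest = rd * debt
        ebt = sales - costs - dep - interest
        ni = (1 - tax) * ebt
        new_assets = assets * (1 + growth)
        investment = new_assets - assets + dep          # growth plus replacement
        retained = equity * growth                      # keeps leverage constant
        dividends = ni - retained
        roe, b = ni / equity, retained / ni
        rows.append((t, round(sales, 2), round(ni, 2), round(dividends, 2),
                     round(roe, 3), round(b, 3), round(b * roe, 4)))
        assets = new_assets
        debt = leverage * new_assets
        equity = (1 - leverage) * new_assets
    return rows

for row in pro_forma():
    print(row)

Running the sketch reproduces the pattern in Table 46.1: ROE stays at 25 %, the retention rate at 16 %, and b × ROE at 4 % in every period.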
Table 46.1 The book value of the firm's assets equals the market value of the firm (growth rate is 4 %); columns are t = 0, 1, 2, 3, 4, 5
Assets: $50.00, $52.00, $54.08, $56.24, $58.49, $60.83
Debt: $12.50, $13.00, $13.52, $14.06, $14.62, $15.21
Equity: $37.50, $39.00, $40.56, $42.18, $43.87, $45.62
Rd: 0.12, 0.12, 0.12, 0.12, 0.12, 0.12
r: 0.25, 0.25, 0.25, 0.25, 0.25, 0.25
ATWACOC: 0.2055, 0.2055, 0.2055, 0.2055, 0.2055, 0.2055
Asset turnover: -, 1.7, 1.7, 1.7, 1.7, 1.7
GPM: -, 0.26029, 0.26029, 0.26029, 0.26029, 0.26029
Sales: -, $85.00, $88.40, $91.94, $95.61, $99.44
Cost: -, $62.88, $65.39, $68.01, $70.73, $73.55
Depreciation: -, $5.00, $5.20, $5.41, $5.62, $5.85
Interest exp.: -, $1.50, $1.56, $1.62, $1.69, $1.75
EBT: -, $15.63, $16.25, $16.90, $17.58, $18.28
Tax: -, $6.25, $6.50, $6.76, $7.03, $7.31
NI: -, $9.38, $9.75, $10.14, $10.55, $10.97
DIV: -, $7.88, $8.19, $8.52, $8.86, $9.21
New debt: -, $0.50, $0.52, $0.54, $0.56, $0.59
CFu: -, $8.28, $8.61, $8.95, $9.31, $9.68
Firm value: $50.00, $52.00, $54.08, $56.24, $58.49, $60.83
Investment: -, $7.00, $7.28, $7.57, $7.87, $8.19
Vequity: $37.50, $39.00, $40.56, $42.18, $43.87, $45.62
RE: -, $1.50, $1.56, $1.62, $1.69, $1.75
ROE: -, 0.25, 0.25, 0.25, 0.25, 0.25
1-b: -, 0.84, 0.84, 0.84, 0.84, 0.84
g: -, 0.04, 0.04, 0.04, 0.04, 0.04

Substituting this into the well-known relationship that r = d_1/P_0 + g and rearranging, we have

k = (1 − b)E_1 / [(1 − cb)P_0]    (46.11)
If a firm has no extraordinary investment opportunities (r = k), then c = 1 and the rate of return that security holders require is simply the inverse of the stock's price-to-earnings ratio. In our example of Table 46.1, NI at time 1 is $9.38 and the value of equity at time 0 is $37.50. The ratio of these two numbers (which is equivalent to EPS/P) is ROE, or 25 %. On the other hand, if the firm has investment opportunities that are expected to offer a return above that required by the firm's stockholders (c > 1), the earnings-to-price ratio at which the firm sells will be below the rate of return required by investors. To illustrate, consider the following example whereby the market value of the
Table 46.2 The market value of the firm and equity is greater than its book value; columns are t = 0, 1, 2, 3, 4, 5
Assets: $50.00, $52.00, $54.08, $56.24, $58.49, $60.83
Firm value: $60.00, $62.40, $64.90, $67.49, $70.19, $73.00
Debt: $12.50, $13.00, $13.52, $14.06, $14.62, $15.21
Equity: $47.50, $49.40, $51.38, $53.43, $55.57, $57.79
Rd: 0.12, 0.12, 0.12, 0.12, 0.12, 0.12
r: 0.25, 0.25, 0.25, 0.25, 0.25, 0.25
ATWACOC: 0.2129, 0.2129, 0.2129, 0.2129, 0.2129, 0.2129
Asset turnover: -, 1.7, 1.7, 1.7, 1.7, 1.7
GPM: -, 0.3093, 0.3093, 0.3093, 0.3093, 0.3093
Sales: -, $85.00, $88.40, $91.94, $95.61, $99.44
Cost: -, $58.71, $61.06, $63.50, $66.04, $68.68
Depreciation: -, $5.00, $5.20, $5.41, $5.62, $5.85
Interest exp.: -, $1.50, $1.56, $1.62, $1.69, $1.75
EBT: -, $19.79, $20.58, $21.41, $22.26, $23.15
Tax: -, $7.92, $8.23, $8.56, $8.91, $9.26
NI: -, $11.88, $12.35, $12.84, $13.36, $13.89
DIV: -, $9.98, $10.37, $10.79, $11.22, $11.67
New debt: -, $0.50, $0.52, $0.54, $0.56, $0.59
CFu: -, $10.38, $10.79, $11.22, $11.67, $12.14
Firm value: $60.00, $62.40, $64.90, $67.49, $70.19, $73.00
Investment: -, $7.40, $7.70, $8.00, $8.32, $8.66
Vequity: $47.50, $49.40, $51.38, $53.43, $55.57, $57.79
RE: -, $1.90, $1.98, $2.06, $2.14, $2.22
Market based ROE: -, 0.25, 0.25, 0.25, 0.25, 0.25
1-b: -, 0.84, 0.84, 0.84, 0.84, 0.84
g: -, 0.04, 0.04, 0.04, 0.04, 0.04
firm and equity is greater than its book value. This example is depicted in Table 46.2. The basic assumptions of the model are as follows. We will assume that the growth rate of the firm's sales and the book value of the assets is 4 %. The book value of the assets at time 0 is again $50, and we assume a depreciation rate of 10 % per annum. However, note that the market value of the firm is $60. The entries for Debt and Equity represent market values. The amount of debt outstanding is $12.50 and the amount of equity outstanding is now $47.50. We assume that the cost of debt, Rd, is 12 % and the cost of equity, r, is 25 %, implying an ATWACOC of 21.29 %. For the valuation of the firm to be internally consistent, the unlevered cash flow at time 1 is $10.38. Similarly, for the value of equity to be internally consistent, the expected dividend at t = 1 is $9.98. Note that net income is $11.88, implying a dividend payout ratio of 84 % and a retention rate of 16 %. The book-value-based ROE, k, is found by taking the net income divided by the book value of equity. In our example, the implied book value of equity is $37.50. Hence, k = 31.68 %, implying
that the book value ROE is greater than the cost of equity, which is the required rate of return. But g is given by the market value based ROE, which is defined as Net Income over the market value of equity. That is, r = 25 %. Note again that br is 4 %. An investor could predict next year's dividends, the firm's long-term growth rate, and the rate of return stockholders require (perhaps using the CAPM to estimate r) for holding the stock. Equation 46.4 could then be solved for the theoretical price of the stock that could be compared with its present price. Stocks that have theoretical prices above actual price are candidates for purchase; those with theoretical prices below their actual price are candidates for sale or for short sale.
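A brief Python sketch of this inference, using the numbers from the Table 46.1 example (the function names are ours):

def inferred_required_return(d1, p0, g):
    """Gordon-model identity r = d1/P0 + g, used to back out the required return
    (or, given r from the CAPM, to infer g = r - d1/P0)."""
    return d1 / p0 + g

def required_return_from_earnings(e1, p0, b, c):
    """Equation 46.11 as printed: k = (1 - b)*E1 / ((1 - c*b)*P0); with c = 1
    this collapses to the earnings-price ratio E1/P0."""
    return (1 - b) * e1 / ((1 - c * b) * p0)

# Chapter example: d1 = 7.88, P0 = 37.50, g = 4 %  ->  about 25 %
print(inferred_required_return(7.88, 37.50, 0.04))
print(required_return_from_earnings(9.38, 37.50, 0.16, 1.0))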
46.3 Internal Growth Rate and Sustainable Growth Rate Models
The internal growth rate model assumes that the firm can finance its growth only with internal funds. Consequently, the cash to finance growth must come only from retained earnings. Therefore, retained earnings can be expressed as

Retained Earnings = Earnings − Dividends
= Profit Margin × Total Sales − Dividends
= p(S + ΔS) − p(S + ΔS)(1 − b) = pb(S + ΔS)    (46.12)
where p = the profit margin on all sales, S = annual sales, and ΔS = the increase in sales during the year. Because retained earnings are the only source of new funds, the use of cash represented by the increase in assets must equal the retained earnings:

Uses of Cash = Sources of Cash
Increase in Assets = Retained Earnings
ΔS·T = p(S + ΔS)b = pbS + pbΔS
ΔS[T − pb] = pbS
ΔS/S = pb / (T − pb)    (46.13)
where T = the ratio of total assets to sales. If we divide both the numerator and the denominator of Eq. 46.13 by T and rearrange the terms, then we can show that the internal growth rate is:

g = ΔS/S = (pb/T) / (1 − pb/T) = b·ROA / (1 − b·ROA)    (46.14)
where ROA is the return on assets. The internal growth rate is the maximum growth rate that can be achieved without any external debt or equity financing. But note that this assumption of not issuing new debt or common stock to finance growth is inconsistent with the basic assumption of the constant discount rate models that the firm maintains a constant market-based leverage ratio. Hence, this model cannot be used to estimate the growth rate to be employed in the Gordon Growth Model. Higgins (1977, 1981, 2008) has developed a sustainable growth rate under the assumption that firms can generate new funds by using retained earnings or issuing debt, but not by issuing new shares of common stock. Growth and its management present special problems in financial planning. From a financial perspective, growth is not always a blessing. Rapid growth can put considerable strain on a company's resources, and unless management is aware of this effect and takes active steps to control it, rapid growth can lead to bankruptcy. Assuming a company is not raising new equity, the cash to finance growth must come from retained earnings and new borrowings. Further, because the company wants to maintain a target debt-to-equity ratio equal to L, each dollar added to the owners' equity enables it to increase its indebtedness by $L. Since the owners' equity will rise by an amount equal to retained earnings, the new borrowing can be written as:

New Borrowings = Retained Earnings × Target Debt-to-Equity Ratio = pb(S + ΔS)L

The use of cash represented by the increase in assets must equal the two sources of cash (retained earnings and new borrowings)5:

Uses of Cash = Sources of Cash
Increase in Assets = Retained Earnings + New Borrowing
ΔS·T = pb(S + ΔS) + pb(S + ΔS)L = pb(1 + L)S + pb(1 + L)ΔS
ΔS[T − pb(1 + L)] = pb(1 + L)S
g = ΔS/S = pb(1 + L) / (T − pb(1 + L))    (46.15)
5 Increase in Assets is the net increase in assets. The total investment should also include the depreciation expense, as can be seen in our examples delineated in Tables 46.1 and 46.2. But the depreciation expense is also a source of funding. Hence, it is netted out in the relationship between the increase in assets and retained earnings and new borrowings.
In Eq. 46.15, ΔS/S, or g, is the firm's sustainable growth rate assuming no infusion of new equity. Therefore, a company's growth rate in sales must equal the indicated combination of four ratios, p, b, L, and T. In addition, if the company's growth rate differs from g, one or more of the ratios must change. For example, suppose a company grows at a rate in excess of g; then it must either use its assets more efficiently or alter its financial policies. Efficiency is represented by the profit margin and the assets-to-sales ratio. The company therefore would need to increase its profit margin (p) or decrease its assets-to-sales ratio (T) in order to increase efficiency. Financial policies are represented by the payout and leverage ratios. In this case, a decrease in its payout ratio (1 − b) or an increase in its leverage (L) would be necessary to alter its financial policies to accommodate a different growth rate. It should be noted that increasing efficiency is not always possible and altering financial policies is not always wise. If we divide both the numerator and denominator of Eq. 46.15 by T and rearrange the terms, then the sustainable growth rate can be shown as

$$g = \frac{\Delta S}{S} = \frac{pb(1 + L)/T}{1 - pb(1 + L)/T} = \frac{b \cdot ROE}{1 - b \cdot ROE}. \qquad (46.16)$$
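To make the ratio definitions concrete, the following short Python sketch (ours, not part of the original chapter) computes the internal growth rate of Eq. 46.14 and Higgins' sustainable growth rate of Eqs. 46.15–46.16 from the four ratios p, b, T, and L, together with the beginning-of-period variant of Ross et al. (2010) discussed next. The illustrative inputs are taken from year 1 of Table 46.3; for these inputs the beginning-of-period variant reproduces the 4.17 % growth rate of that table (up to rounding).

```python
def internal_growth_rate(p, b, T):
    """Internal growth rate, Eq. 46.14: g = (pb/T) / (1 - pb/T)."""
    b_roa = p * b / T            # b * ROA, since ROA = p / T
    return b_roa / (1.0 - b_roa)

def sustainable_growth_rate(p, b, T, L):
    """Higgins' sustainable growth rate, Eqs. 46.15-46.16 (end-of-period financing)."""
    b_roe = p * b * (1.0 + L) / T   # b * ROE, since ROE = p(1 + L)/T
    return b_roe / (1.0 - b_roe)

def sustainable_growth_rate_bop(p, b, T, L):
    """Beginning-of-period variant (Ross et al. 2010): g = b * ROE."""
    return p * b * (1.0 + L) / T

if __name__ == "__main__":
    # Hypothetical inputs roughly consistent with year 1 of Table 46.3:
    p = 9.38 / 85.00     # net profit margin (NI / Sales)
    b = 1.56 / 9.38      # retention ratio (RE / NI)
    T = 50.00 / 85.00    # beginning assets-to-sales ratio
    L = 12.50 / 37.50    # target debt-to-equity ratio
    print("internal growth rate        :", internal_growth_rate(p, b, T))
    print("sustainable (end-of-period) :", sustainable_growth_rate(p, b, T, L))
    print("sustainable (beginning)     :", sustainable_growth_rate_bop(p, b, T, L))
```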
Please note that, in the framework of the internal growth rate and sustainable growth rate presented above, the sources of cash are taken from end-of-period values of assets, and the required financing is assumed to occur at the end of the period. However, Ross et al. (2010) show that if the source of cash is measured at the beginning of the period, the relationship between the uses and sources of cash can be expressed as ΔS·T = pbS for the internal growth rate model and as ΔS·T = pbS + pbSL for the sustainable growth rate model. Such relationships result in an internal growth rate of b·ROA and a sustainable growth rate of b·ROE. For example, Table 46.3 uses assumptions identical to those of Table 46.1, but now we assume a growth rate of 4.1667 % and use total assets, total equity, and total debt from the beginning-of-period balance sheet to calculate net income. Recall that ROE is the net income divided by stockholders' equity at the beginning of the period. Note that the product of ROE and b yields 4.1667 %. Note also that Higgins' sustainable growth rate allows only internal sources and external debt financing. Chen et al. (2013) combine the Higgins (1977) and Lee et al. (2011) frameworks, allowing the company to use both external debt and equity, and derive a generalized sustainable growth rate as

$$g(t) = \frac{b \cdot ROE}{1 - b \cdot ROE} + \frac{\lambda\,\Delta n\,(P/E)}{1 - b \cdot ROE}, \qquad (46.17)$$

where λ = degree of market imperfection; Δn = number of shares of new equity issued; P = price per share of new equity; and E = total equity.
Table 46.3 The book value of the firm's assets equal the market value of the firm (sustainable growth rate is 4.1667 %)

                   0         1         2         3         4         5
Assets           $50.00    $52.08    $54.25    $56.51    $58.87    $61.32
Value            $50.00    $52.08    $54.25    $56.51    $58.87    $61.32
Debt             $12.50    $13.02    $13.56    $14.13    $14.72    $15.33
Equity           $37.50    $39.06    $40.69    $42.39    $44.15    $45.99
R                 0.12      0.12      0.12      0.12      0.12      0.12
Re                0.25      0.25      0.25      0.25      0.25      0.25
ATWACOC           0.2055    0.2055    0.2055    0.2055    0.2055    0.2055
Asset turnover              1.7       1.7       1.7       1.7       1.7
GPM                         0.26029   0.26029   0.26029   0.26029   0.26029
Sales                       $85.00    $88.54    $92.23    $96.07    $100.08
Cost                        $62.88    $65.49    $68.22    $71.07    $74.03
Depreciation                $5.00     $5.21     $5.43     $5.65     $5.89
Interest exp.               $1.50     $1.56     $1.63     $1.70     $1.77
EBT                         $15.63    $16.28    $16.95    $17.66    $18.40
Tax                         $6.25     $6.51     $6.78     $7.06     $7.36
NI                          $9.38     $9.77     $10.17    $10.60    $11.04
DIV                         $7.81     $8.14     $8.48     $8.83     $9.20
New debt                    $7.60     $7.92     $8.25     $8.59     $8.95
CFu                         $8.19     $8.53     $8.89     $9.26     $9.64
Value            $50.00    $52.08    $54.25    $56.51    $58.87    $61.32
Investment                  $7.08     $7.38     $7.69     $8.01     $8.34
RE                          $1.56     $1.63     $1.70     $1.77     $1.84
ROE                         0.25      0.25      0.25      0.25      0.25
(1-b)                       0.833333  0.833333  0.833333  0.833333  0.833333
g                           0.041667  0.041667  0.041667  0.041667  0.041667
Compared with Higgins' measure, the generalized sustainable growth rate in Eq. 46.17 has an additional positive term, λΔn(P/E)/(1 − b·ROE), when the new equity issue is taken into account. Therefore, Chen et al. (2013) show that Higgins' (1977) sustainable growth rate is underestimated because it omits the source of growth related to the new equity issue.
46.4 Statistical Methods
Instead of relying on financial ratios, one may use statistical methods to estimate a firm's growth rate. A simple growth rate can be estimated by calculating the percentage change in earnings over a time period and taking the arithmetic average. For instance, the growth rate in earnings over one period can be expressed as:
$$g_t = \frac{E_t - E_{t-1}}{E_{t-1}}. \qquad (46.18)$$
The arithmetic average is given by

$$\bar{g} = \frac{1}{n}\sum_{t=1}^{n} g_t. \qquad (46.19)$$
A more accurate estimate can be obtained by solving for the compounded growth rate:

$$X_t = X_0(1 + g)^t, \qquad (46.20)$$

or

$$g = \left(\frac{X_t}{X_0}\right)^{1/t} - 1, \qquad (46.21)$$
where X0 = the measure in the current period (the measure can be sales, earnings, or dividends) and Xt = the measure in period t. This method is called the discrete compound-sum method of growth-rate estimation. For this approach to be consistent with the dividend growth model, the duration of each period (e.g., quarterly or yearly) must be consistent with the compounding period used in the dividend growth model. Another method of estimating the growth rate uses the continuous compounding process. The concept of continuous compounding can be expressed mathematically as

$$X_t = X_0 e^{gt}. \qquad (46.22)$$
Equation 46.21 describes a discrete compounding process and Eq. 46.22 describes a continuous compounding process. The relationship between Eqs. 46.21 and 46.22 can be illustrated by using an intermediate expression such as

$$X_t = X_0\left(1 + \frac{g}{m}\right)^{mt}, \qquad (46.23)$$

where m is the frequency of compounding in each year. If m = 4, Eq. 46.23 implies a quarterly compounding process; if m = 365, it describes a daily process; and if m approaches infinity, it describes a continuous compounding process. Thus Eq. 46.22 can be derived from Eq. 46.23 based upon the definition
$$\lim_{m \to \infty}\left(1 + \frac{1}{m}\right)^{m} = e. \qquad (46.24)$$
Then the continuous analog of Eq. 46.20 can be rewritten as

$$\lim_{m \to \infty} X_t = \lim_{m \to \infty} X_0\left(1 + \frac{g}{m}\right)^{mt} = X_0 \lim_{m \to \infty}\left[\left(1 + \frac{1}{m/g}\right)^{m/g}\right]^{gt} = X_0 e^{gt}. \qquad (46.25)$$
Therefore, the growth rate estimated by the continuous compound-sum method can be expressed by

$$g = \frac{1}{t}\ln\left(\frac{X_t}{X_0}\right). \qquad (46.26)$$
If you estimate the growth rate via Eq. 46.26, you are implicitly assuming that dividends grow continuously, and therefore that the continuous form of the dividend growth model applies. In this case, according to Gordon and Shapiro's (1956) model, P0 = d0/(r − g). To use all the information available to security analysts, two regression equations can be employed. These equations can be derived from Eqs. 46.20 and 46.22 by taking the logarithm (ln) of both sides:

$$\ln X_t = \ln X_0 + t\ln(1 + g). \qquad (46.27)$$
If Eq. 46.27 is used to estimate the growth rate, then the antilog of the regression slope estimate, minus one, equals the growth rate. For the continuous compounding process,

$$\ln X_t = \ln X_0 + gt. \qquad (46.28)$$
Both Eqs. 46.27 and 46.28 indicate that ln Xt is linearly related to t, so the growth rate can be estimated by ordinary least squares (OLS) regression. For example, growth rates for EPS and DPS can be obtained from OLS regressions of the form

$$\ln\left(\frac{EPS_t}{EPS_0}\right) = a_0 + a_1 T + e_{1t}, \qquad (46.29)$$

and

$$\ln\left(\frac{DPS_t}{DPS_0}\right) = b_0 + b_1 T + e_{2t}, \qquad (46.30)$$

where EPSt and DPSt are earnings per share and dividends per share, respectively, in period t, and T is the time indicator (i.e., T = 1, 2, . . ., n).
Table 46.4 Dividend behavior of firms Pepsico and Wal-Mart in dividends per share (DPS)

Year  T   PEP   WMT      Year  T   PEP   WMT
1981  1   3.61  1.73     1996  16  0.72  1.33
1982  2   2.4   2.5      1997  17  0.98  1.56
1983  3   3.01  1.82     1998  18  1.35  1.98
1984  4   2.19  1.4      1999  19  1.4   1.25
1985  5   4.51  1.91     2000  20  1.51  1.41
1986  6   1.75  1.16     2001  21  1.51  1.49
1987  7   2.30  1.59     2002  22  1.89  1.81
1988  8   2.90  1.11     2003  23  2.07  2.03
1989  9   3.40  1.48     2004  24  2.45  2.41
1990  10  1.37  1.9      2005  25  2.43  2.68
1991  11  1.35  1.14     2006  26  3.42  2.92
1992  12  1.61  1.4      2007  27  3.48  3.17
1993  13  1.96  1.74     2008  28  3.26  3.36
1994  14  2.22  1.02     2009  29  3.81  3.73
1995  15  2.00  1.17     2010  30  3.97  4.2
We denote â1 and b̂1 as the estimated coefficients for Eqs. 46.29 and 46.30. The estimated growth rates for EPS and DPS, therefore, are exp(â1) − 1 and exp(b̂1) − 1 in terms of the discrete compounding process and â1 and b̂1 in terms of the continuous compounding process.6 Table 46.4 provides the dividends per share of Pepsico and Wal-Mart during the period from 1981 to 2010. Using the data in Table 46.4 for Pepsico and Wal-Mart, we can estimate the growth rates of their respective dividend streams. Table 46.5 presents the estimated growth rates for Pepsico and Wal-Mart by the arithmetic average method, the geometric average method, the compound-sum method, and the regression method in terms of discrete and continuous compounding processes. Graphs of the regression equations for Pepsico and Wal-Mart are shown in Fig. 46.1. The estimated slope of the regression for Pepsico is 0.0056, and the estimated slope for Wal-Mart is 0.0704. The estimated growth rates for Pepsico and Wal-Mart, therefore, are 0.56 % and 7.29 % in terms of the discrete compounding process. Figure 46.1 also shows the true DPS and predicted DPS for Pepsico and Wal-Mart. We find that the regression method, to some extent, can estimate the growth rate for Wal-Mart more precisely than for Pepsico. Compared with the geometric average method, the regression method yields a similar value of the estimated growth rate for Wal-Mart, but not for Pepsico. There are some complications to be aware of when employing the arithmetic average, the geometric average, and the regression model in estimating the growth rate. The arithmetic average is quite sensitive to extreme values. The arithmetic average, therefore, has an upward bias that increases directly with the variability of the data.

6 If the earnings (or dividend) process follows Eq. 46.27, we can obtain the same results from the non-restricted model as from Eqs. 46.29 and 46.30.
Table 46.5 Estimated dividend growth rates for Pepsico and Wal-Mart

                                  Pepsico (%)   Wal-Mart (%)
Arithmetic average                4.64          8.99
Geometric average                 0.99          5.45
Compound-sum method               0.99          5.30
Regression method (continuous)    0.56          7.04
Regression method (discrete)      0.56          7.29
Fig. 46.1 Regression models for Pepsico and Wal-Mart. The two panels plot the true DPS and the predicted DPS against time (T) for each company. The fitted regressions (standard errors in parentheses) are:

Pepsico:  ln(DPSt/DPS0) = −0.6236 + 0.0056 T + εt   (0.1947) (0.0113)
Wal-Mart: ln(DPSt/DPS0) = −0.9900 + 0.0704 T + εt   (0.1286) (0.0075)
Consider the following situation. Dividends in years 1, 2, and 3 are $2, $4, and $2. The arithmetic average growth rate is 25 %, but the true growth rate is 0 %. The difference between the two averaging techniques will be greater when the variability of the data is larger. Therefore, it is not surprising that we find differences in the estimated growth rates using the arithmetic average and geometric average methods for Pepsico and Wal-Mart in Table 46.5.
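To make the comparison of estimators concrete, the following Python sketch (ours, not from the chapter) computes the arithmetic average, compound-sum, and regression-based growth estimates for a series such as DPS. Applied to the $2, $4, $2 example above it returns 25 % for the arithmetic average and 0 % for the compound-sum and regression estimates; as discussed later in the text, the logarithm-based estimators fail when the series contains zero or negative values.

```python
import numpy as np

def growth_estimates(x):
    """Growth-rate estimates for a positive series x, by the methods of Sect. 46.4."""
    x = np.asarray(x, dtype=float)
    t = np.arange(len(x))                         # time index 0, 1, ..., n
    period_growth = x[1:] / x[:-1] - 1.0          # Eq. 46.18
    arithmetic = period_growth.mean()             # Eq. 46.19
    compound = (x[-1] / x[0]) ** (1.0 / (len(x) - 1)) - 1.0   # Eq. 46.21
    # OLS regression of ln(x_t / x_0) on the time index, as in Eqs. 46.29-46.30
    slope, intercept = np.polyfit(t, np.log(x / x[0]), 1)
    return {
        "arithmetic": arithmetic,
        "compound_sum": compound,
        "regression_continuous": slope,              # continuous compounding
        "regression_discrete": np.exp(slope) - 1.0,  # discrete compounding
    }

# The $2, $4, $2 dividend example from the text.
print(growth_estimates([2.0, 4.0, 2.0]))
```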
Table 46.6 Estimated dividend growth rates for 50 randomly selected companies

                                  50 Firms (%)   Firms with positive        Firms with negative
                                                 growth (35 firms) (%)      growth (15 firms) (%)
Arithmetic average                4.95           7.27                       -0.47
Geometric average                 0.93           3.00                       -3.88
Compound-sum method               0.83           2.91                       -4.02
Regression method (continuous)    0.66           2.32                       -3.22
Regression method (discrete)      0.71           2.37                       -3.15
The regression method uses more of the available information than the geometric average, discrete compounding, and continuous compounding methods in that it takes into account the observed growth rates between the first and last periods of the sample. A null hypothesis test can be used to determine whether the growth rate obtained from the regression method is statistically significantly different from zero. However, logarithms cannot be taken of zero or negative numbers; under this circumstance the arithmetic average will be a better alternative. We further randomly select 50 companies from the S&P 500 index firms, which paid dividends during 1981–2010, to estimate their dividend growth rates by the arithmetic average method, the geometric average method, the compound-sum method, and the regression method in terms of discrete and continuous compounding processes. Table 46.6 shows the averages of the estimated dividend growth rates for the 50 randomly selected companies by the different methods. As we discussed before, the arithmetic average is sensitive to extreme values and has an upward bias. We, therefore, find a larger average of the estimated dividend growth rates using the arithmetic average method. We also find that, on average, the regression methods yield relatively smaller growth rate estimates than those obtained using the geometric average and compound-sum methods. However, it appears that the estimates obtained using the geometric, compound-sum, and regression methods are very similar. Finally, Gordon and Gordon (1997) suggest that one can infer the growth rate using the dividend growth model. In particular, the practitioner can use regression analysis to calculate the beta of the stock and use the CAPM to estimate the cost of equity. Since

$$P_0 = \frac{d_0(1 + g)}{r - g} \qquad (46.31)$$
and the price of the stock is given by the market, the cost of equity is obtained using the CAPM, and d0, the current dividend, is known, one can infer the growth rate using Eq. 46.31. If the inferred growth rate is less than the practitioner's estimate, then the recommendation will be to buy the stock. On the other hand, if the inferred
growth is greater than the practitioner’s estimate, the recommendation will be to sell the stock. However, it should be noted that the explanatory power of the CAPM to explain the relationship between stock returns and risk has been extensively questioned in the literature. See for example, Fama and French (1992).
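The inversion of Eq. 46.31 is a one-line calculation. The sketch below (ours, with hypothetical numbers for the price, dividend, and CAPM cost of equity) solves P0 = d0(1 + g)/(r − g) for g and notes the buy/sell rule described above.

```python
def implied_growth(price, dividend, cost_of_equity):
    """Invert Eq. 46.31, P0 = d0(1 + g)/(r - g), for g:
       g = (P0*r - d0) / (P0 + d0)."""
    return (price * cost_of_equity - dividend) / (price + dividend)

# Hypothetical example: price $40, current dividend $1.50, CAPM cost of equity 9 %.
g_hat = implied_growth(40.0, 1.50, 0.09)
print(f"market-implied growth rate: {g_hat:.4f}")
# Decision rule from Gordon and Gordon (1997): buy if the analyst's own growth
# estimate exceeds g_hat; sell if it is below g_hat.
```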
46.5 Conclusion

The most common valuation model is the dividend growth model. The growth rate is found by taking the product of the retention rate and the return on equity. What is less well understood are the basic assumptions of this model. In this paper, we demonstrate that the model makes strong assumptions regarding the financing mix of the firm. In addition, we discuss several methods suggested in the literature for estimating growth rates and analyze whether these approaches are consistent with the use of a constant discount rate to evaluate the firm's assets and equity. In particular, we demonstrate that the underlying assumptions of the internal growth rate model (whereby no external funds are used to finance growth) are incompatible with the constant discount rate model of valuation. The literature has also suggested estimating the growth rate by using the average percentage change method, the compound-sum method, and/or regression methods. We demonstrate that the average percentage change is very sensitive to extreme observations. Moreover, on average, the regression method yields similar but somewhat smaller estimates of the growth rate compared to the compound-sum method. We also discussed the inferred method suggested by Gordon and Gordon (1997) to estimate the growth rate. Advantages, disadvantages, and the interrelationships among these estimation methods are also discussed in detail. Choosing an appropriate method to estimate a firm's growth rate can yield a more precise estimate and be helpful for security analysis and valuation. However, all of these methods use historical information to obtain growth estimates. The extent to which the future differs from the past will ultimately determine the efficacy of any of these methods.
References

Boudreaux, K. J., & Long, R. W. (1979). The weighted average cost of capital as a cutoff rate: A further analysis. Financial Management, 8, 7–14.
Brick, I. E., & Weaver, D. G. (1984). A comparison of capital budgeting techniques in identifying profitable investments. Financial Management, 13, 29–39.
Brick, I. E., & Weaver, D. G. (1997). Calculating the cost of capital of an unlevered firm for use in project evaluation. Review of Quantitative Finance and Accounting, 9, 111–129.
Brick, I. E., Palmon, O., & Venezia, I. (2012). The risk-return (Bowman) paradox and accounting measurements. In I. Venezia & Z. Wiener (Eds.), Bridging the GAAP: Recent advances in finance and accounting. London: World Scientific Publishing Company.
Brick, I. E., Chen, H. Y., Hsieh, C. H., & Lee, C. F. (2014, forthcoming). A comparison of alternative models for estimating firm's growth rate. Review of Quantitative Finance and Accounting.
Chambers, D. R., Harris, R. S., & Pringle, J. J. (1982). Treatment of financing mix in analyzing investment opportunities. Financial Management, 11, 24–41.
Chen, H. Y., Gupta, M. C., Lee, A. C., & Lee, C. F. (2013). Sustainable growth rate, optimal growth rate, and optimal payout ratio: A joint optimization approach. Journal of Banking and Finance, 37, 1205–1222.
Fama, E. F., & French, K. R. (1992). The cross-section of expected stock returns. The Journal of Finance, 47, 427–465.
Gordon, J., & Gordon, M. (1997). The finite horizon expected return model. The Financial Analysts Journal, 53, 52–61.
Gordon, M., & Shapiro, E. (1956). Capital equipment analysis: The required rate of profit. Management Science, 3, 102–110.
Higgins, R. C. (1977). How much growth can a firm afford? Financial Management, 6, 7–16.
Higgins, R. C. (1981). Sustainable growth under inflation. Financial Management, 10, 36–40.
Higgins, R. C. (2008). Analysis for financial management (9th ed.). New York: McGraw-Hill.
Lee, C. F., & Lee, A. C. (2013). Encyclopedia of finance (2nd ed.). New York: Springer.
Lee, C. F., Lee, J. C., & Lee, A. C. (2000). Statistics for business and financial economics (2nd ed.). Singapore: World Scientific Publishing Company.
Lee, A. C., Lee, J. C., & Lee, C. F. (2009). Financial analysis, planning and forecasting: Theory and application (2nd ed.). Singapore: World Scientific Publishing Company.
Lee, C. F., Finnerty, J. E., Lee, A. C., Lee, J., & Wort, D. H. (2012a). Security analysis, portfolio management, and financial derivatives (2nd ed.). New York: Springer.
Lee, C. F., Gupta, M. C., Chen, H. Y., & Lee, A. C. (2012b). Optimal payout ratio under uncertainty and the flexibility hypothesis: Theory and empirical evidence. Journal of Corporate Finance, 17, 483–501.
Ross, S. A., Westerfield, R. W., & Jaffe, J. (2010). Corporate finance (9th ed.). New York: McGraw-Hill/Irwin.
Econometric Measures of Liquidity
47
Jieun Lee
Contents

47.1 Introduction  1312
47.2 Low-Frequency Liquidity Proxies  1313
  47.2.1 The Roll Measure  1313
  47.2.2 Effective Tick  1314
  47.2.3 Amihud (2002)  1315
  47.2.4 Lesmond, Ogden, and Trzcinka (LOT 1999)  1316
47.3 High-Frequency Liquidity Proxies  1318
  47.3.1 Percent Quoted Spread  1318
  47.3.2 Percent Effective Spread  1318
47.4 Empirical Analysis  1319
  47.4.1 Data  1319
  47.4.2 Empirical Results  1319
47.5 Conclusions  1321
Appendix 1: Solution to LOT (1999) Model  1321
References  1322
Abstract
A security is liquid to the extent that an investor can trade significant quantities of the security quickly, at or near the current market price, and bearing low transaction costs. As such, liquidity is a multidimensional concept. In this chapter, I review several widely used econometrics- or statistics-based measures that researchers have developed to capture one or more dimensions of a security's liquidity (i.e., the limited dependent variable model (Lesmond, D. A. et al., Review of Financial Studies, 12(5), 1113–1141, 1999) and the autocovariance of price changes (Roll, R., Journal of Finance, 39, 1127–1139, 1984)). These alternative proxies have been designed to be estimated using either low-frequency or high-frequency
data, so I discuss four liquidity proxies that are estimated using low-frequency data and two proxies that require high-frequency data. Low-frequency measures permit the study of liquidity over relatively long time horizons; however, they do not reflect actual trading processes. To overcome this limitation, high-frequency liquidity proxies are often used as benchmarks to determine the best low-frequency proxy. In this chapter, I find that estimates from the effective tick measure perform best among the four low-frequency measures tested. Keywords
Liquidity • Transaction costs • Bid-ask spread • Price impact • Percent effective spread • Market model • Limited dependent variable model • Tobin’s model • Log likelihood function • Autocovariance • Correlation analysis
47.1 Introduction
A security is liquid to the extent that an investor can trade significant quantities of the security quickly, at or near the current market price, and bearing low transaction costs. A security’s liquidity is an important characteristic variable, relevant in asset pricing studies, studies of market efficiency, and even corporate finance. In the asset pricing literature, researchers have considered whether liquidity is a priced risk factor (e.g., Amihud and Mendelson 1986; Brennan and Subrahmanyam 1996; Amihud 2002; Pastor and Stambaugh 2003). In corporate finance, researchers have found that liquidity is related to capital structure, mergers and acquisitions, and corporate governance (e.g., Lipson 2003; Lipson and Mortal 2007, 2009; Bharath 2009; Chung et al. 2010). In these and many other studies, researchers have chosen from a variety of liquidity measures that have been developed. In turn, the variety of available liquidity measures reflects the multidimensional aspect of liquidity. Note that the definition of liquidity given above features four dimensions of liquidity: trading quantity, trading speed, price impact, and trading cost. Some extant measures focus on a single dimension of liquidity, while others encompass several dimensions. For instance, the bid-ask spread measure in Amihud and Mendelson (1986), the estimator of the effective spread in Roll (1984), and the effective tick estimator in Goyenko et al. (2009) relate to the trading cost dimension. The turnover measure of Datar et al. (1998) captures the trading quantity dimension. The measures in Amihud (2002) and Pastor and Stambaugh (2003) are relevant to price impact. The number of zero trading volume days in Liu (2006) emphasizes trading speed. Finally, and different from the others, the measure in Lesmond et al. (1999) encompasses several dimensions of liquidity. Among the available measures, this chapter focuses on six liquidity proxies, including four that are commonly estimated using low-frequency data (i.e., daily closing prices) and two that are commonly estimated using high-frequency data (i.e., intraday trades and quotes). The low-frequency measures are in Roll (1984), Goyenko et al. (2009), Lesmond et al. (1999), and Amihud (2002).
The high-frequency measures are the percent quoted spread and the percent effective spread. The low-frequency proxies are advantageous because they are more amenable to the study of liquidity over relatively long time horizons and across countries. However, they are limited because they do not directly reflect actual trading processes, while the high-frequency measures do. Thus, high-frequency liquidity proxies are often used as benchmarks to determine the best low-frequency proxy. This is not a universal criterion, however, because each measure captures a different dimension of liquidity and may lead to different results in specific cross-sectional or time-series applications. The remainder of this chapter is organized as follows. In Sects. 47.2 and 47.3, I introduce and briefly discuss each of the low-frequency and high-frequency liquidity measures, respectively. Section 47.4 provides an empirical analysis of these liquidity measures, including the aforementioned test of the best low-frequency measure. Section 47.5 concludes.
47.2 Low-Frequency Liquidity Proxies
Below I describe four widely used measures of liquidity: the Roll (1984) measure; effective tick; the Amihud (2002) measure; and the Lesmond et al. (1999) measure.
47.2.1 The Roll Measure

Roll (1984) develops a measure of the effective bid-ask spread. He assumes that the true value of a stock follows a random walk and that Pt, the observed closing price on day t, is equal to the stock's true value plus or minus half of the effective spread. He also assumes that a security trades at either the bid price or the ask price, with equal frequency. This relationship can be expressed as follows:

$$P_t = P_t^* + Q_t\,\frac{s}{2}, \qquad Q_t \overset{IID}{\sim} \begin{cases} +1 & \text{with probability } 1/2 \ (\text{buyer initiated}) \\ -1 & \text{with probability } 1/2 \ (\text{seller initiated}) \end{cases}$$
where Qt is an order-type indicator variable, indicating whether the transaction at time t is at the ask (buyer-initiated) or at the bid (seller-initiated) price. His assumption that P*t is the fundamental value of the security implies that E[Qt] = 0; hence, Pr(Qt = 1) = Pr(Qt = −1) = 1/2. Also, there are no changes in the fundamental value of the security (i.e., over a short horizon). It follows that the process for price changes ΔPt is

$$\Delta P_t = \Delta P_t^* + \frac{s}{2}\left(Q_t - Q_{t-1}\right) = \frac{s}{2}\left(Q_t - Q_{t-1}\right).$$
Under the assumption that Qt is IID, the variance and covariance of ΔPt can be easily calculated:

$$\operatorname{var}[\Delta P_t] = \frac{s^2}{2}, \qquad \operatorname{cov}[\Delta P_t, \Delta P_{t-1}] = -\frac{s^2}{4}:$$

$$\begin{aligned}
\operatorname{cov}[\Delta P_t, \Delta P_{t-1}] &= \operatorname{cov}\!\left[\frac{s}{2}(Q_t - Q_{t-1}),\, \frac{s}{2}(Q_{t-1} - Q_{t-2})\right] \\
&= \frac{s^2}{4}\big[\operatorname{cov}(Q_t, Q_{t-1}) - \operatorname{cov}(Q_{t-1}, Q_{t-1}) + \operatorname{cov}(Q_{t-1}, Q_{t-2}) - \operatorname{cov}(Q_t, Q_{t-2})\big] \\
&= -\frac{s^2}{4}\operatorname{var}(Q_{t-1}) = -\frac{s^2}{4}\left[\frac{1}{2}(1-0)^2 + \frac{1}{2}(-1-0)^2\right] = -\frac{s^2}{4},
\end{aligned}$$

$$\operatorname{cov}[\Delta P_t, \Delta P_{t-k}] = 0, \quad k > 1.$$

Solving for s yields Roll's effective spread estimator:

$$S = 2\sqrt{-\operatorname{Cov}\big(\Delta P_t, \Delta P_{t-1}\big)}.$$
Roll’s measure is simple and intuitive: If P* is fixed so that prices take only two values, bid or ask, and if the current price is the bid, then the change between current price and previous price must be either 0 or –s and the price change between current price and next price must be either 0 or s. Analogous possible price changes apply when the current price is the ask. The Roll measure S is generally estimated using daily data on price changes. Roll (1984) and others have found that for some individual stocks, the autocovariance that defines S is positive, rather than negative, so that S is undefined. In this case, researchers generally choose one of three solutions: (1) treat the observation as missing, (2) set the Roll spread estimate to zero, or (3) multiply the covariance by negative one, calculate S, and multiply this estimate by negative one to produce a negative spread estimate. In my empirical analysis to follow, I find that results are insensitive to the alternative solutions, so I only report results of setting S to zero when the observed autocovariance is positive.
47.2.2 Effective Tick

Goyenko et al. (2009) and Holden (2009) develop an effective tick measure that is based on price clustering and changes in tick size. Below I describe the effective tick measure in Goyenko et al. (2009), which is elegant in its simplicity.
Consider four possible bid-ask spreads for a stock: $1/8, $1/4, $1/2, and $1. If the spread is $1/4, the authors assume that bid and ask prices are associated with only even quarters. Thus, if an odd-eighth transaction price shows up, it is instead inferred that the spread is $1/8. The number of quotes that occur at the $1/8 spread is given by N1. The number of quotes at odd-quarter fractions ($1/8 and $1/4) is N2. The number of quotes at odd-half ($1/8, $1/4, and $1/2) is N3. Finally, the number of whole dollar quotes is given by N4. The following is the proportion of each price fraction observed during the day:

$$F_i = \frac{N_i}{\sum_{i=1}^{I} N_i} \qquad \text{for } i = 1, \ldots, I.$$
Next, suppose that the unconstrained probability of the ith effective spread is

$$U_i = \begin{cases} 2F_i & i = 1 \\ 2F_i - F_{i-1} & i = 2, \ldots, I - 1 \\ F_i - F_{i-1} & i = I. \end{cases}$$

The effective tick is simply a probability-weighted average of the effective spread sizes divided by the average price in a given time interval:

$$\text{Effective Tick}_{it} = \frac{\sum_{i=1}^{I} \hat{a}_i S_i}{\bar{P}},$$
where the probability a is constrained to be nonnegative and to be no more than 1 minus the probability of a finer spread, S is the spread, and P is the average price in the time interval. To obtain estimates of effective tick in this chapter, I must deal with changes in minimum tick size that were instituted over time in the US equity markets. For NYSE, AMEX, and NASDAQ stocks from 1/93 to 5/97, I used a fractional grid accounting for price increments as small as $1/8. For NYSE and AMEX (NASDAQ) stocks from 6/97 to 1/01 (6/97 to 3/01), I used a minimum tick size increment of $1/16. Thereafter, I used a decimal grid for all stocks.
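The following Python sketch illustrates our reading of the effective tick calculation for a single stock-day under decimal pricing. The spread grid, the rule assigning each price to the coarsest grid it sits on, and the sequential constraint on the probabilities are illustrative assumptions on our part, not code from the chapter.

```python
import numpy as np

def effective_tick(prices, spreads=(0.01, 0.05, 0.10, 0.25, 1.00)):
    """Effective tick estimate; `spreads` is an assumed decimal grid, finest to coarsest."""
    prices = np.asarray(prices, dtype=float)
    I = len(spreads)
    grid_cents = [int(round(s * 100)) for s in spreads]
    counts = np.zeros(I)
    for p in prices:
        cents = int(round(p * 100))
        # price clustering: assign the price to the coarsest grid it sits on
        for i in range(I - 1, -1, -1):
            if cents % grid_cents[i] == 0:
                counts[i] += 1
                break
    F = counts / counts.sum()
    # unconstrained probabilities U_i
    U = np.empty(I)
    U[0] = 2 * F[0]
    U[1:I-1] = 2 * F[1:I-1] - F[0:I-2]
    U[I-1] = F[I-1] - F[I-2]
    # constrain each probability to [0, 1 - sum of finer-spread probabilities]
    a_hat = np.zeros(I)
    for i in range(I):
        a_hat[i] = min(max(U[i], 0.0), 1.0 - a_hat[:i].sum())
    return (a_hat * np.array(spreads)).sum() / prices.mean()

# Hypothetical day of trade prices clustered mostly on nickels and dimes.
px = [20.00, 20.05, 20.10, 20.15, 20.10, 20.07, 20.05, 20.25, 20.13, 20.10]
print("effective tick estimate:", effective_tick(px))
```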
47.2.3 Amihud (2002)

The measures in Amihud (2002) and Pastor and Stambaugh (2003) both purport to capture the price impact dimension of liquidity. Goyenko et al. (2009) show that the Amihud (2002) measure performs well in measuring price impact while the Pastor and Stambaugh (2003) measure is dominated by other measures. Pastor and Stambaugh (2003, p. 679) also caution against their measure as a liquidity measure for individual stocks, reporting large sampling errors in individual estimates.
Referring also to Hasbrouck (2009), I do not discuss the Pastor and Stambaugh (2003) measure. The Amihud (2002) measure is a representative proxy for price impact, i.e., the daily price response associated with one dollar of trading volume:

$$Ami_{it} = \frac{|Ret_{it}|}{Vol_{it}},$$
where Retit is the stock i’s return on day t and Volit is the stock i’s dollar volume on day t. The average is calculated over all positive-volume days, since the ratio is undefined for zero volume days.
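Because the measure is a simple daily average, it is straightforward to compute. The sketch below (our own, with hypothetical data) averages |return|/dollar volume over positive-volume days only.

```python
import numpy as np

def amihud_illiquidity(returns, dollar_volume):
    """Amihud (2002) price-impact proxy: mean of |return| / dollar volume
    over positive-volume days."""
    returns = np.asarray(returns, dtype=float)
    dollar_volume = np.asarray(dollar_volume, dtype=float)
    mask = dollar_volume > 0                       # drop zero-volume days
    return np.mean(np.abs(returns[mask]) / dollar_volume[mask])

# Hypothetical daily returns and dollar volumes (volume in millions of dollars).
rets = [0.01, -0.02, 0.005, 0.0, -0.01]
vol = [5.0, 3.0, 0.0, 4.0, 6.0]
print("Amihud measure:", amihud_illiquidity(rets, vol))
```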
47.2.4 Lesmond, Ogden, and Trzcinka (LOT 1999)

The Lesmond et al. (LOT 1999) liquidity measure is based on the idea that an informed trader observing a mispriced stock will execute a trade only if the difference between the current market price and the true price exceeds the trader's transaction costs; otherwise, no trade will occur. Therefore, they argue that a stock with high transaction costs will have less frequent price movements and more zero returns than a stock with low transaction costs. Based on this relationship, they develop a measure of the marginal trader's effective transaction costs for an individual stock. Their measure utilizes the limited dependent variable regression model of Tobin (1958) and Rosett (1959) applied to the "market model."
47.2.4.1 Market Model

The basic market model is a regression of the return Rit, on security i in period t, on the contemporaneous market return Rmt:

$$R_{it} = a_i + b_i R_{mt} + e_{it} \qquad (47.1)$$
The market model implies that a security’s return reflects the effect of new information on the value of the stock, which can be divided into two components: contemporaneous market-wide information (biRmt) and firm-specific information eit. In an ideal market without frictions such as transaction costs, new information will be immediately reflected into the security’s price, so Rit is the true return on security i.
47.2.4.2 Relationship Between Observed and True Returns

In the presence of transaction costs, investors will trade only when the marginal profits exceed the marginal transaction costs. In this context, transaction costs would include various dimensions such as bid-ask spread, commissions, and price impact, as well as taxes or short-selling costs, because investors make trading decisions after considering overall transaction costs. Transaction costs inhibit informative trades and therefore drive a wedge between observed returns and true returns.
Fig. 47.1 This figure illustrates the relationship between the observed return on a stock in the presence of transaction costs that inhibit trading, Rit, and its true return in the absence of transaction costs, R*it, where the latter reflects the true effects of new market-wide or firm-specific information. The relationship can be divided into three regions: (1) Region 1, where the value of new information is negative and exceeds transaction costs; (2) Region 2, where the transaction costs exceed the value of new information regardless of the direction of the value of information; and (3) Region 3, where the value of new information is positive and exceeds transaction costs
In the presence of transaction costs, Lesmond et al. (1999) propose the following relationship between observed and true returns:

$$R_{it} = R^*_{it} - mS_i, \qquad (47.2)$$

where mSi is the spread adjustment for security i, Rit is the observed return, and R*it is the true return. Specifically, the relationship is as follows:

$$R_{it} = \begin{cases} R^*_{it} - a_{1i} & \text{if } R^*_{it} < a_{1i} \\ 0 & \text{if } a_{1i} < R^*_{it} < a_{2i} \\ R^*_{it} - a_{2i} & \text{if } R^*_{it} > a_{2i} \end{cases} \qquad (47.3)$$
where a1i < 0 and a2i > 0. a1i is the transaction costs for the marginal investor when information has a negative shock (selling), and a2i is the transaction costs for the marginal investor when information has a positive shock (buying). Consequently, the difference between a1i and a2i is a measure of round-trip transaction costs. If the true return exceeds transaction costs, the marginal investor will continuously trade, and the market price will respond until, for the next trade, marginal profit is equal to marginal transaction costs. If the transaction costs are greater than true returns, then the marginal investors will not trade, price will not move, and consequently the zero returns will occur. Therefore, in this model the frequency of zero returns is a simple alternative measure of transaction costs. The relationship between observed returns and true returns is illustrated in Fig. 47.1.
47.3 High-Frequency Liquidity Proxies
Next I describe two well-known spread proxies that can be estimated using high-frequency data. These are the percent quoted spread and the percent effective spread.
47.3.1 Percent Quoted Spread

The ask (bid) quotation is the price at which shares can be purchased (sold) with immediacy. The difference, known as the percent quoted spread, is the cost of a round-trip transaction and is generally expressed as a proportion of the average of the bid and ask prices:

$$\text{Percent quoted spread}_{it} = \frac{Ask_{it} - Bid_{it}}{M_{it}}.$$
In the empirical analysis in the next section, I estimate percent quoted spreads using high-frequency data. Following convention, for each stock and trading day, I find the highest bid and lowest ask prices over all venues at every point during the day, denoting these “inside” ask and bid prices as Askit and Bidit, respectively. Mit is then the average of, or midpoint between, Askit and Bidit. I then calculate the average percent quoted spread for a stock and day as the time-weighted average of all spreads observed for that stock during the day. Finally, percent quoted spread for each stock is calculated by averaging the daily estimates across all trading days within a given month.
47.3.2 Percent Effective Spread

Some trades occur within the range of inside bid and ask quotes, as when simultaneous buy and sell market orders are simply crossed. Thus, the inside bid-ask spread may overestimate the realized amount of this component of transaction costs. Hasbrouck's (2009) measure of percent effective spread attempts to adjust for this bias. For a given stock, percent effective spread is computed for all trades relative to the prevailing quote midpoint:

$$\text{Percent effective spread}_{it} = 2 D_{it}\,\frac{P_{it} - M_{it}}{M_{it}},$$

where, for stock i, Dit is the buy-sell indicator variable which takes a value of +1 (−1) for buyer-initiated (seller-initiated) trades, Pit is the transaction price, and Mit is the midpoint of the most recently posted bid and ask quotes. The average percent effective spread for each day is a trade-weighted average across all trades during the day. The monthly percent effective spread for each security is calculated by averaging across all trading days within a given month.
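A compact Python sketch of the two high-frequency proxies follows. It is our own illustration with hypothetical quote and trade data; the quote durations used for time weighting and the use of trade size as the trade weight are assumptions, since the weighting variables are not spelled out above.

```python
import numpy as np

def percent_quoted_spread(ask, bid, durations):
    """Time-weighted percent quoted spread over one trading day.
    `durations` are the lengths of time each inside quote was in force."""
    ask, bid, w = map(lambda x: np.asarray(x, dtype=float), (ask, bid, durations))
    mid = (ask + bid) / 2.0
    return np.average((ask - bid) / mid, weights=w)

def percent_effective_spread(price, mid, direction, size):
    """Percent effective spread 2*D*(P - M)/M, averaged across a day's trades
    (here weighted by trade size)."""
    price, mid, d, s = map(lambda x: np.asarray(x, dtype=float),
                           (price, mid, direction, size))
    return np.average(2.0 * d * (price - mid) / mid, weights=s)

# Hypothetical intraday observations.
print(percent_quoted_spread(ask=[20.02, 20.03], bid=[19.98, 20.00],
                            durations=[3600, 1800]))
print(percent_effective_spread(price=[20.01, 19.99], mid=[20.00, 20.00],
                               direction=[+1, -1], size=[200, 500]))
```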
47.4 Empirical Analysis
47.4.1 Data

I estimate liquidity measures for NYSE, AMEX, and NASDAQ common stocks over the years 1993–2008. I estimate the low-frequency measures using daily data from the Center for Research in Security Prices (CRSP) database. I estimate the high-frequency measures using the New York Stock Exchange Trades and Automated Quotes (TAQ) database. TAQ data are available only since 1993, which is therefore the binding constraint in terms of starting year. In order to be included in the sample, a stock must have at least 60 days of past return data. I discard certificates, American Depositary Receipts (ADRs), shares of beneficial interest, units, companies incorporated outside the United States, American Trust components, closed-end funds, preferred stocks, and Real Estate Investment Trusts (REITs). Regarding estimation of the high-frequency measures, I determine the highest bid and lowest ask across all quoting venues at every point during the day (NBBO quotes) and then apply filters following Huang and Stoll (1997) and Brownlees and Gallo (2006). To reduce errors and outliers, I remove (1) quotes for which either the bid or ask price is negative; (2) quotes for which either the bid or ask size is negative; (3) quotes for which the bid-ask spread is greater than $5 or negative; (4) quotes for which the transaction price is negative; (5) before-the-open and after-the-close trades and quotes; (6) quotes for which the bid, ask, or trade price differs by more than 20 % from the previous quote or trade price; (7) quotes originating in markets other than the primary exchange, because regional quotes tend to closely follow the quotes posted by the primary exchange; and (8) observations for which %effective spread/%quoted spread > 4.0.
47.4.2 Empirical Results

Table 47.1 reports correlations among the various liquidity estimates. In this table, observations are pooled across all stocks and all months. All correlations are reliably positive and substantial in magnitude, ranging from 0.382 to 0.971. The two high-frequency measures, percent effective spread and percent quoted spread, are very highly correlated (0.971). Using percent effective spread as our high-frequency "benchmark," its correlations with the low-frequency measures are 0.742 (Roll), 0.757 (effective tick), 0.621 (Amihud), and 0.586 (LOT). Based on the aforementioned criterion, these results indicate that the effective tick and Roll measures are the "best" low-frequency measures, as they have the highest correlation with percent effective spread. Table 47.2 presents the time-series means of monthly correlations of percent effective spread with each of the low-frequency measures for the full sample period as well as the subperiods 1993–2000 (pre-decimalization) and 2001–2008 (post-decimalization). For three of the four low-frequency measures, the correlation with percent effective spread is higher in the first subperiod than in the second subperiod, which may reflect differential effects of decimalization on the various
Table 47.1 Pooled correlations

            ES      QS      Roll    Eff. tick   Ami     LOT
ES          1       0.971   0.742   0.757       0.621   0.586
QS                  1       0.744   0.760       0.631   0.562
Roll                        1       0.638       0.512   0.710
Eff. tick                           1           0.472   0.541
Ami                                             1       0.382
LOT                                                     1
This table presents correlations among the liquidity estimates based on the pooled sample of monthly time-series and cross-sectional observations. Observations can be dropped if there are fewer than 60 days of observations for the firm or if liquidity estimates are missing.

Table 47.2 Average cross-sectional correlations with percent effective spread, monthly estimates

              Roll    Eff. tick   Ami     LOT
1993–2008     0.662   0.685       0.684   0.561
1993–2000     0.748   0.754       0.662   0.670
2001–2008     0.576   0.615       0.706   0.452
For each month, I estimate the cross-sectional correlation between the liquidity proxies from the low-frequency data and percent effective spread from TAQ. This table presents the average cross-sectional correlations across all months. A stock is excluded only if it trades for less than 60 days prior to an observation or if liquidity estimates are missing.
Table 47.3 Summary statistics for stock-by-stock time-series correlations

           N        Roll    Eff. tick   Ami     LOT
Full       13,374   0.319   0.578       0.603   0.308
NYSE       3,137    0.180   0.694       0.690   0.201
AMEX       1,597    0.231   0.517       0.643   0.310
NASDAQ     9,870    0.353   0.534       0.567   0.335

For each stock, I estimate the time-series correlation between the estimated liquidity measure and percent effective spread from TAQ. The table presents the average time-series correlation across all stocks. Observations are dropped if there are fewer than 60 days of observations for the firm or if a spread estimate is missing.
dimensions of liquidity. For the full period as well as the first subperiod, effective tick has the highest correlation with percent effective spread, while for the second subperiod the Amihud measure has the highest correlation with percent effective spread. Table 47.3 shows stock-by-stock time-series correlations between the high-frequency measure percent effective spread and each of the low-frequency measures, using the full-period data but also breaking the observations down by the exchange on which a stock trades. For the full sample as well as every exchange, the Amihud and effective tick estimates have relatively high correlations with percent effective spread, while the correlations are relatively low for the Roll and LOT measures.
Overall, based on the suggested criterion of correlation with a high-frequency measure, the effective tick measure is best among the low-frequency measures tested, as it exhibits higher correlations with percent effective spread based on time-series, cross-sectional, and pooled tests. Again, though, I caution that, since each liquidity measure captures only part of the multidimensional nature of liquidity, it is difficult to judge which measure is best.
47.5 Conclusions
This chapter discusses several popular measures of liquidity that are based on econometric approaches and compares them via correlation analysis. Among the four low-frequency liquidity proxies, I find that the effective tick measure is generally more highly correlated with the high-frequency measure (i.e., percent effective spread). Thus, by this criterion the effective tick measure is the “best” low-frequency measure of liquidity. However, since each liquidity measure captures only part of the multidimensional nature of liquidity, it is difficult to judge which measure is best. Consequently, from among the available measures of liquidity, a researcher should choose the measure that is consistent with their research purpose or perhaps consider employing several of them.
Appendix 1: Solution to LOT (1999) Model

To estimate transaction costs based on their model in Eq. 47.3, Lesmond et al. (1999) introduce the limited dependent variable regression model of Tobin (1958) and Rosett (1959). Tobin's model specifies that data are available for the explanatory variable, x, for all observations, while data are only partly observable for the dependent variable, y; for the unobservable region, the only information given is whether or not the data are above a certain threshold. Considering this aspect of Tobin's model, the limited dependent variable model is an appropriate econometric method for the LOT model because a nonzero observed return occurs only when marginal profit exceeds marginal transaction costs. Assuming that the market model is correct in the presence of transaction costs, Lesmond et al. (1999) estimate transaction costs on the basis of Eqs. 47.1 and 47.3. The equation system is

$$R^*_{it} = b_i R_{mt} + e_{it},$$

where

$$R_{it} = \begin{cases} R^*_{it} - a_{1i} & \text{if } R^*_{it} < a_{1i} \\ 0 & \text{if } a_{1i} < R^*_{it} < a_{2i} \\ R^*_{it} - a_{2i} & \text{if } R^*_{it} > a_{2i} \end{cases} \qquad (47.4)$$
The solution to this limited dependent variable regression model requires a likelihood function to be maximized with respect to a1i, a2i, bi, and σi:

$$\begin{aligned}
L(a_{1i}, a_{2i}, b_i, \sigma_i \mid R_{it}, R_{mt}) = \; & \prod_{1} \frac{1}{\sigma_i}\,\phi\!\left(\frac{R_{it} + a_{1i} - b_i R_{mt}}{\sigma_i}\right) \\
& \times \prod_{2} \left[\Phi\!\left(\frac{R_{it} + a_{2i} - b_i R_{mt}}{\sigma_i}\right) - \Phi\!\left(\frac{R_{it} + a_{1i} - b_i R_{mt}}{\sigma_i}\right)\right] \\
& \times \prod_{3} \frac{1}{\sigma_i}\,\phi\!\left(\frac{R_{it} + a_{2i} - b_i R_{mt}}{\sigma_i}\right),
\end{aligned} \qquad (47.5)$$

where ϕ refers to the standard normal density function and Φ refers to the cumulative normal distribution. The products are over the Region 1, 2, and 3 observations for which R*it < a1i, a1i < R*it < a2i, and R*it > a2i, respectively. The log likelihood function is

$$\begin{aligned}
\log L = \; & \sum_{1} \log\!\left[\frac{1}{(2\pi\sigma_i^2)^{1/2}}\right] - \sum_{1} \frac{(R_{it} + a_{1i} - b_i R_{mt})^2}{2\sigma_i^2} + \sum_{2} \log\left[\Phi_2 - \Phi_1\right] \\
& + \sum_{3} \log\!\left[\frac{1}{(2\pi\sigma_i^2)^{1/2}}\right] - \sum_{3} \frac{(R_{it} + a_{2i} - b_i R_{mt})^2}{2\sigma_i^2}.
\end{aligned} \qquad (47.6)$$

Given Eq. 47.6, a1i, a2i, bi, and σi can be estimated. The difference between a2i and a1i is the proxy of a round-trip transaction cost in the LOT model.
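As a sketch of how Eq. 47.6 can be maximized in practice, the following Python code (ours; the function names, starting values, and simulated example are hypothetical, and the sign constraints a1i < 0 < a2i are not imposed) estimates a1i, a2i, bi, and σi by numerical optimization, classifying observations into Regions 1–3 by the sign of the observed return, and reports the round-trip cost a2i − a1i.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def lot_negloglik(params, r, rm):
    """Negative of the log likelihood in Eq. 47.6 (our parameterization)."""
    a1, a2, b, log_sigma = params
    sigma = np.exp(log_sigma)            # keep sigma positive
    reg1 = r < 0                         # negative nonzero observed returns
    reg3 = r > 0                         # positive nonzero observed returns
    reg2 = ~reg1 & ~reg3                 # zero returns
    ll = norm.logpdf(r[reg1] + a1 - b * rm[reg1], scale=sigma).sum()
    ll += norm.logpdf(r[reg3] + a2 - b * rm[reg3], scale=sigma).sum()
    phi2 = norm.cdf((a2 - b * rm[reg2]) / sigma)   # R_it = 0 in Region 2
    phi1 = norm.cdf((a1 - b * rm[reg2]) / sigma)
    ll += np.log(np.clip(phi2 - phi1, 1e-12, None)).sum()
    return -ll

def lot_transaction_cost(r, rm):
    """Round-trip transaction cost estimate a2 - a1 for one stock."""
    x0 = np.array([-0.01, 0.01, 1.0, np.log(r.std() + 1e-8)])
    res = minimize(lot_negloglik, x0, args=(r, rm), method="Nelder-Mead",
                   options={"maxiter": 5000})
    a1, a2, _, _ = res.x
    return a2 - a1

# Simulated illustration with true costs a1 = -1 % and a2 = +1 %.
rng = np.random.default_rng(1)
rm = rng.normal(0.0, 0.01, 250)
r_true = 1.0 * rm + rng.normal(0.0, 0.02, 250)
r_obs = np.where(r_true < -0.01, r_true + 0.01,
                 np.where(r_true > 0.01, r_true - 0.01, 0.0))
print("estimated round-trip cost:", lot_transaction_cost(r_obs, rm))
```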
References

Amihud, Y. (2002). Illiquidity and stock returns: Cross-section and time-series effects. Journal of Financial Markets, 5, 31–56.
Amihud, Y., & Mendelson, H. (1986). Asset pricing and the bid-ask spread. Journal of Financial Economics, 17(2), 223–249.
Bharath, S. T., Pasquariello, P., et al. (2009). Does asymmetric information drive capital structure decisions? Review of Financial Studies, 22(8), 3211–3243.
Brennan, M. J., & Subrahmanyam, A. (1996). Market microstructure and asset pricing: On the compensation for illiquidity in stock returns. Journal of Financial Economics, 41, 441–464.
Brennan, M., Chordia, T., et al. (1998). Alternative factor specifications, security characteristics, and the cross-section of expected stock returns. Journal of Financial Economics, 49(3), 345–373.
Brownlees, C. T., & Gallo, G. M. (2006). Financial econometric analysis at ultra-high frequency: Data handling concerns. Computational Statistics & Data Analysis, 51, 2232–2245.
Chung, K. H., Elder, J., & Kim, J. (2010). Corporate governance and liquidity. Journal of Financial and Quantitative Analysis, 45(02), 265–291.
Datar, V. T., Naik, N. Y., & Radcliffe, R. (1998). Liquidity and stock returns: An alternative test. Journal of Financial Markets, 1, 203–219.
Goyenko, R. Y., Holden, C. W., et al. (2009). Do liquidity measures measure liquidity? Journal of Financial Economics, 92(2), 153–181.
Hasbrouck, J. (2009). Trading costs and returns for US equities: Estimating effective costs from daily data. Journal of Finance, 64(3), 1445–1477.
Holden, C. W. (2009). New low-frequency spread measures. Journal of Financial Markets, 12(4), 778–813.
Huang, R. D., & Stoll, H. R. (1997). The components of the bid-ask spread: A general approach. Review of Financial Studies, 10(4), 995–1034.
Lesmond, D. A., Ogden, J. P., et al. (1999). A new estimate of transaction costs. Review of Financial Studies, 12(5), 1113–1141.
Lipson, M. L. (2003). Market microstructure and corporate finance. Journal of Corporate Finance, 9(4), 377–384.
Lipson, M. L., & Mortal, S. (2007). Liquidity and firm characteristics: Evidence from mergers and acquisitions. Journal of Financial Markets, 10(4), 342–361.
Lipson, M. L., & Mortal, S. (2009). Liquidity and capital structure. Journal of Financial Markets, 12(4), 611–644.
Liu, W. (2006). A liquidity-augmented capital asset pricing model. Journal of Financial Economics, 82, 631–671.
Maddala, G. S. (1986). Limited-dependent and qualitative variables in econometrics. Cambridge: Cambridge University Press.
Pástor, L., & Stambaugh, R. (2003). Liquidity risk and expected stock returns. Journal of Political Economy, 111, 642–685.
Roll, R. (1984). A simple implicit measure of the effective bid-ask spread in an efficient market. Journal of Finance, 39, 1127–1139.
Rosett, R. N. (1959). A statistical model of friction in economics. Econometrica: Journal of the Econometric Society, 27, 263–267.
Tobin, J. (1958). Estimation of relationships for limited dependent variables. Econometrica: Journal of the Econometric Society, 26, 24–36.
A Quasi-Maximum Likelihood Estimation Strategy for Value-at-Risk Forecasting: Application to Equity Index Futures Markets
48
Oscar Carchano, Young Shin (Aaron) Kim, Edward W. Sun, Svetlozar T. Rachev, and Frank J. Fabozzi
Contents

48.1 Introduction  1326
48.2 ARMA-GARCH Model with Normal and Tempered Stable Innovations  1328
48.3 VaR for the ARMA-GARCH Model  1329
  48.3.1 VaR and Backtesting  1329
48.4 Introduction of Trading Volume  1332
  48.4.1 Different Variants of Trading Volume  1332
  48.4.2 Lagged Relative Change of Trading Volume  1334
  48.4.3 Lagged Trading Volume or Forecasting Contemporaneous Trading Volume  1334
48.5 Conclusions  1337
Appendix: VaR on the CTS Random Variable  1337
References  1339
Abstract
We present the first empirical evidence for the validity of the ARMA-GARCH model with tempered stable innovations to estimate 1-day-ahead value at risk in futures markets for the S&P 500, DAX, and Nikkei. We also provide empirical support that GARCH models based on normal innovations appear not to be as well suited as infinitely divisible models for predicting financial crashes. The results are compared with the predictions based on data in the cash market. We also provide the first empirical evidence on how adding trading volume to the GARCH model improves its forecasting ability. In our empirical analysis, we forecast 1 % value at risk in both spot and futures markets using normal and tempered stable GARCH models following a quasi-maximum likelihood estimation strategy. In order to determine the accuracy of forecasting for each specific model, backtesting using Kupiec’s proportion of failures test is applied. For each market, the model with a lower number of violations is preferred. Our empirical result indicates the usefulness of classical tempered stable distributions for market risk management and asset pricing. Keywords
Infinitely divisible models • Tempered stable distribution • GARCH models • Value at risk • Kupiec’s proportion of failures test • Quasi-maximum likelihood estimation strategy
48.1 Introduction
Predicting future financial market volatility is crucial for the risk management of financial institutions. The empirical evidence suggests that a suitable market risk model must be capable of handling the idiosyncratic features of volatility, that is, the time-varying amplitude of daily returns and volatility clustering. There is a well-developed literature in financial econometrics that demonstrates how autoregressive conditional heteroskedastic (ARCH) and generalized ARCH (GARCH) models – developed by Engle (1982) and Bollerslev (1986), respectively – can be employed to explain the clustering effect of volatility.1 Moreover, the selected model should consider the stylized fact that asset return distributions are not normally distributed, but instead have been shown to exhibit patterns of leptokurtosis and skewness.
1
For a description of ARCH and GARCH modeling, see Chap. 8 in Rachev et al. (2007). The chapter of the same reference describes ARCH and GARCH modeling with infinite variance innovations. Engle et al. (2008) provide the basics of ARCH and GARCH modeling with applications to finance.
Taking a different tack than the ARCH/GARCH with normal innovations approach for dealing with the idiosyncratic features of volatility, Kim et al. (2010) formulate an alternative model based on subclasses of the infinitely divisible (ID) distributions. More specifically, for the S&P 500 return, they empirically investigate five subclasses of the ID distribution, comparing their results to those obtained using GARCH models based on innovations that are assumed to follow a normal distribution. They conclude that, due to their failure to focus on the distribution in the tails, GARCH models based on normal innovations may not be as well suited as ID models for predicting financial crashes. Because of its popularity, most empirical studies have examined value at risk (VaR) as a risk measure. These studies have focused on stock indices. For example, Kim et al. (2011) and Asai and McAleer (2009) examine the S&P 500, DAX 30, and Nikkei 225 stock indices, respectively. A few researchers have studied this risk measure for stock index futures contracts: Huang and Lin (2004) (Taiwan stock index futures) and Tang and Shieh (2006) (S&P 500, Nasdaq 100, and Dow Jones stock index futures). As far as we know, there are no empirical studies comparing VaR for spot and futures indices. For this reason, we compare the predictive performance of 1-day-ahead VaR forecasts in these two markets. We then introduce trading volume into the model, particularly within the GARCH framework. There are several studies that relate trading volume and market volatility for equities and equity futures markets. Studies by Epps and Epps (1976), Smirlock and Starks (1985), and Schwert (1989) document a positive relation between volume and market volatility. Evidence that supports the same relation for futures is provided by Clark (1973), Tauchen and Pitts (1983), Garcia et al. (1986), Ragunathan and Peker (1997), and Gwilym et al. (1999). Collectively, these studies clearly support the theoretical prediction of a positive and contemporaneous relationship between trading volume and volatility. This result is a common empirical finding for most financial assets, as Karpoff (1987) showed when he summarized the results of several studies on the positive relation between price changes and trading volume for commodity futures, currency futures, common stocks, and stock indices. Foster (1995) concluded that not only is trading volume important in determining the rate of information (i.e., any news that affects the market), but also lagged volume plays a role. Although contemporaneous trading volume is positively related to volatility, lagged trading volume presents a negative relationship. Empirically, investigating daily data for several indices such as the S&P 500 futures contract, Wang and Yau (2000) observe that there is indeed a negative link between lagged trading volume and intraday price volatility. This means that an increase in trading volume today (as a measure of liquidity) will imply a reduction in price volatility tomorrow. In their study of five currency futures contracts, Fung and Patterson (1999) do in fact find a negative relationship between return volatility and past trading volume. In their view, the reversal behavior of volatility with trading volume is generally consistent with the overreaction hypothesis (see Conrad et al. 1994) and supports the sequential information hypothesis (see Copeland 1976), which explains the relationship between return volatility and trading volume.
Despite the considerable amount of research in this area, there are no studies that use trading volume in an effort to improve the capability of models to forecast 1-day-ahead VaR. Typically, in a VaR context, trading volume is only employed as a proxy for "liquidity risk" – the risk associated with trying to close out a position. In this paper, in contrast to prior studies, we analyze the impact of introducing trading volume on the ability to enhance performance in forecasting VaR 1 day ahead. We empirically test whether the introduction of trading volume will reduce the number of violations (i.e., the number of times when the observed loss exceeds the estimated one) in the spot and futures equity markets in the USA, Germany, and Japan. The remainder of this paper is organized as follows. ARMA-GARCH models with normal and tempered stable innovations are reviewed in Sect. 48.2. In Sect. 48.3, we discuss parameter estimation of the ARMA-GARCH models and forecasting daily return distributions. VaR values and backtesting of the ARMA-GARCH models are also reported in Sect. 48.3, along with a comparison of the results for (1) the spot and futures markets and (2) the normal and tempered stable innovations. Trading volume is introduced into the ARMA-GARCH model with tempered stable innovations in Sect. 48.4. VaR and backtesting of the ARMA-GARCH models with different variants of trading volume are presented and compared to the results for models with and without trading volume. We summarize our principal findings in Sect. 48.5.
48.2 ARMA-GARCH Model with Normal and Tempered Stable Innovations
In this section, we provide a review of the ARMA-GARCH models with normal and tempered stable innovations. For a more detailed discussion, see Kim et al. (2011). Let $(S_t)_{t\ge 0}$ be the asset price process and $(y_t)_{t\ge 0}$ be the return process of $(S_t)_{t\ge 0}$ defined by $y_t = \log(S_t/S_{t-1})$. The ARMA(1,1)-GARCH(1,1) model is

$$y_t = a\,y_{t-1} + b\,\sigma_{t-1}\varepsilon_{t-1} + \sigma_t\varepsilon_t + c, \qquad \sigma_t^2 = \alpha_0 + \alpha_1\,\sigma_{t-1}^2\varepsilon_{t-1}^2 + \beta_1\,\sigma_{t-1}^2, \qquad (48.1)$$
where $\varepsilon_0 = 0$ and $(\varepsilon_t)_{t\in\mathbb{N}}$ is a sequence of independent and identically distributed (iid) real random variables. The innovation $\varepsilon_t$ is assumed to follow the standard normal distribution. This ARMA(1,1)-GARCH(1,1) model is referred to as the "normal-ARMA-GARCH model." If the $\varepsilon_t$'s are assumed to be tempered stable innovations, then we obtain a new ARMA(1,1)-GARCH(1,1) model. In this paper, we will consider the standard classical tempered stable (denoted by stdCTS) distributions. This ARMA(1,1)-GARCH(1,1) model is defined as follows: CTS-ARMA-GARCH
model, with $\varepsilon_t \sim \mathrm{stdCTS}(\alpha, \lambda_+, \lambda_-)$. This distribution does not have a closed-form expression for its probability density function. Instead, it is defined by its characteristic function as follows. Let $\alpha \in (0,2)\setminus\{1\}$, $C, \lambda_+, \lambda_- > 0$, and $m \in \mathbb{R}$. Then a random variable $X$ is said to follow the classical tempered stable (CTS) distribution if the characteristic function of $X$ is given by

$$\phi_X(u) = \phi_{CTS}(u;\,\alpha, C, \lambda_+, \lambda_-, m) = \exp\!\Big( ium - iuC\Gamma(1-\alpha)\big(\lambda_+^{\alpha-1} - \lambda_-^{\alpha-1}\big) + C\Gamma(-\alpha)\big[(\lambda_+ - iu)^{\alpha} - \lambda_+^{\alpha} + (\lambda_- + iu)^{\alpha} - \lambda_-^{\alpha}\big] \Big), \qquad (48.2)$$

and we denote $X \sim CTS(\alpha, C, \lambda_+, \lambda_-, m)$. The cumulants of $X$ are defined by

$$c_n(X) = \frac{1}{i^n}\,\frac{\partial^n}{\partial u^n}\,\log E\big[e^{iuX}\big]\Big|_{u=0}, \qquad n = 1, 2, 3, \ldots.$$

For the tempered stable distribution, we have $E[X] = c_1(X) = m$. The cumulants of the tempered stable distribution for $n = 2, 3, \ldots$ are

$$c_n(X) = C\,\Gamma(n-\alpha)\big(\lambda_+^{\alpha-n} + (-1)^n\,\lambda_-^{\alpha-n}\big).$$

By substituting the appropriate values for the two parameters $m$ and $C$ into the tempered stable distribution, we can obtain a tempered stable distribution with zero mean and unit variance. That is, $X \sim CTS(\alpha, C, \lambda_+, \lambda_-, 0)$ has zero mean and unit variance by substituting

$$C = \Big(\Gamma(2-\alpha)\big(\lambda_+^{\alpha-2} + \lambda_-^{\alpha-2}\big)\Big)^{-1}. \qquad (48.3)$$

The random variable $X$ is referred to as the standard CTS distribution with parameters $(\alpha, \lambda_+, \lambda_-)$ and denoted by $X \sim \mathrm{stdCTS}(\alpha, \lambda_+, \lambda_-)$.
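As a quick illustration of these formulas, the following sketch (an illustrative helper, not code from the chapter) evaluates the CTS characteristic function of Eq. 48.2, the cumulants, and the standardizing constant C of Eq. 48.3; the parameter values are arbitrary examples.

```python
import numpy as np
from scipy.special import gamma as G

def cts_char_fn(u, alpha, C, lam_p, lam_m, m):
    """Characteristic function of CTS(alpha, C, lam_p, lam_m, m), Eq. 48.2."""
    u = np.asarray(u, dtype=complex)
    return np.exp(
        1j * u * m
        - 1j * u * C * G(1 - alpha) * (lam_p ** (alpha - 1) - lam_m ** (alpha - 1))
        + C * G(-alpha) * ((lam_p - 1j * u) ** alpha - lam_p ** alpha
                           + (lam_m + 1j * u) ** alpha - lam_m ** alpha)
    )

def cts_cumulant(n, alpha, C, lam_p, lam_m):
    """n-th cumulant (n >= 2) of the CTS distribution."""
    return C * G(n - alpha) * (lam_p ** (alpha - n) + (-1) ** n * lam_m ** (alpha - n))

def std_cts_C(alpha, lam_p, lam_m):
    """Value of C that gives unit variance (Eq. 48.3), used for the stdCTS distribution."""
    return 1.0 / (G(2 - alpha) * (lam_p ** (alpha - 2) + lam_m ** (alpha - 2)))

alpha, lam_p, lam_m = 0.8, 30.0, 25.0           # illustrative parameters, not estimates
C = std_cts_C(alpha, lam_p, lam_m)
print(cts_cumulant(2, alpha, C, lam_p, lam_m))  # variance equals 1.0 by construction
```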
48.3 VaR for the ARMA-GARCH Model
In this section, we discuss VaR for the ARMA-GARCH model with normal and tempered stable innovations.
48.3.1 VaR and Backtesting

The definition of VaR for a significance level $\eta$ is

$$\mathrm{VaR}_\eta(X) = -\inf\{x \in \mathbb{R} \mid P(X \le x) > \eta\}.$$
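Numerically, for a simulated forecast distribution this amounts to taking the negative $\eta$-quantile of the simulated returns; the following minimal sketch is purely illustrative, with made-up simulated draws standing in for a model-based return forecast.

```python
import numpy as np

def var_from_draws(simulated_returns, eta=0.01):
    """VaR at significance level eta: the negative eta-quantile of simulated returns."""
    return -np.quantile(simulated_returns, eta)

rng = np.random.default_rng(0)
draws = rng.standard_t(df=4, size=100_000) * 0.01   # stand-in for a fat-tailed return forecast
print(var_from_draws(draws, eta=0.01))               # 1% one-day VaR of the simulated distribution
```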
If we take the ARMA-GARCH model described in Sect. 48.2, we can define VaR for the information until time $t$ with significance level $\eta$ as2

$$\mathrm{VaR}_{t,\eta}(y_{t+1}) = -\inf\{x \in \mathbb{R} \mid P_t(y_{t+1} \le x) > \eta\},$$

where $P_t(A)$ is the conditional probability of a given event $A$ for the information until time $t$. Two models are considered: normal-ARMA(1,1)-GARCH(1,1) and stdCTS-ARMA(1,1)-GARCH(1,1). For both models, the parameters have been estimated for the time series between December 14, 2004 and December 31, 2008. For each daily estimation, we worked with 10 years of historical daily performance for the S&P 500, DAX 30, and Nikkei 225 spot and futures indices. More specifically, we used daily returns calculated based on the closing prices of those indices. In the case of the futures indices, we constructed a unique continuous time series using the different maturities of each futures index following the methodology proposed by Carchano and Pardo (2009).3 Then, we computed VaRs for both models. The maximum likelihood estimation method (MLE) is employed to estimate the parameters of the normal-ARMA(1,1)-GARCH(1,1) model. For the CTS distribution, the parameters are estimated as follows4:
1. Estimate the parameters $\alpha_0$, $\alpha_1$, $\beta_1$, $a$, $b$, $c$ with normal innovations by MLE. Volatility clustering is captured by the GARCH model.
2. Extract the residuals using those parameters. The residual distribution still presents fat tails and skewness.
3. Fit the parameters of the innovation distribution (CTS) to the extracted residuals using MLE. The fat-tailed and skewed features of the residual distribution are captured.
In order to determine the accuracy of VaR for the two models, backtesting using Kupiec's proportion of failures test (Kupiec 1995) is applied. We first calculate the number of violations. Then, we compare the number of violations with the conventional number of exceedances at a given significance level. In Table 48.1 the number of violations and p-values for Kupiec's backtest for the three stock indices over the four 1-year periods are reported. Finally, we sum up the number of violations and their related p-values for 1 % VaRs for the normal and CTS-ARMA-GARCH models.
2 VaR on the CTS distribution is described in the Appendix.
3 Thus, the last trading day of the front contract is chosen as the rollover date. Then, the return of the day after the rollover date is calculated as the quotient between the closing price of the following contract and the previous closing price of such contract. By doing so, all the returns are taken from the same maturity.
4 A quasi-MLE strategy is followed because the ARMA-GARCH CTS model has too many parameters. If all the parameters are estimated at once, then the GARCH parameters go to zero. This strategy is also followed in Kim et al. (2009, 2010, 2011). For a discussion of the quasi-MLE methodology, see Rachev et al. (2007, pp. 292–293) or Verbeek (2004, pp. 182–184).
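For reference, Kupiec's proportion-of-failures backtest used throughout the following tables can be computed as a likelihood-ratio test on the violation count. The sketch below is an illustrative implementation (not the authors' code), with the chi-square p-value obtained from scipy.

```python
import numpy as np
from scipy.stats import chi2

def kupiec_pof(n_violations, n_obs, var_level=0.01):
    """Kupiec (1995) proportion-of-failures LR test.

    H0: the observed violation rate equals the nominal VaR level (e.g., 1%).
    Returns the LR statistic and its p-value (chi-square with 1 degree of freedom).
    """
    x, n, p = n_violations, n_obs, var_level
    phat = x / n
    ll0 = x * np.log(p) + (n - x) * np.log(1.0 - p)                 # log-likelihood under H0
    ll1 = (x * np.log(phat) if x > 0 else 0.0) + (n - x) * np.log(1.0 - phat)
    lr = -2.0 * (ll0 - ll1)
    return lr, 1.0 - chi2.cdf(lr, df=1)

# Example: 6 violations of a 1% VaR over a 255-day year
print(kupiec_pof(6, 255, 0.01))   # p-value of about 0.06, matching the 6(0.0646) cells in Table 48.1
```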
Table 48.1 Normal-ARMA-GARCH versus CTS-ARMA-GARCH
Model Normal-ARMAGARCH CTS-ARMA-GARCH Normal-ARMAGARCH CTS-ARMA-GARCH Normal-ARMAGARCH CTS-ARMA-GARCH Normal-ARMAGARCH CTS-ARMA-GARCH Normal-ARMAGARCH CTS-ARMA-GARCH Normal-ARMAGARCH CTS-ARMA-GARCH
1 year (255 days) Dec. 14, 2004 Dec. 16, 2005 Dec. 15, 2005 Dec. 20, 2006 N(p-value) N(p-value) S&P 500 spot 1(0.2660) 3(0.7829)
Dec. 21, 2006 Dec. 28, 2007 Dec. 27, 2007 Dec. 31, 2008 N(p-value) N(p-value) 8(0.0061)
10(0.0004)
6(0.0646)
4(0.3995)
7(0.0211)
9(0.0016)
3(0.7829)
4(0.3995)
5(0.1729)
4(0.3995)
3(0.7829)
6(0.0646)
4(0.3995) 4(0.3995) DAX 30 futures 3(0.7829) 5(0.1729)
3(0.7829)
4(0.3995)
6(0.0646)
6(0.0646)
6(0.0646)
3(0.7829)
5(0.1729)
5(0.1729)
0 2(0.7190) S&P 500 futures 3(0.7829) 3(0.7829) 1(0.2660) DAX 30 spot 4(0.3995)
3(0.7829) 4(0.3995) Nikkei 225 spot 2(0.7190) 4(0.3995) 1(0.2660) 3(0.7829) Nikkei 225 futures 2(0.7190) 2(0.7190)
4(0.3995)
5(0.1729)
7(0.0211)
5(0.1729)
5(0.1729)
6(0.0646)
6(0.0646)
5(0.1729)
The number of violations (N) and p-values of Kupiec's proportion of failures test for the S&P 500, DAX 30, and Nikkei 225 spot and futures indices are shown. The normal-ARMA-GARCH and CTS-ARMA-GARCH models are compared
Based on Table 48.1, we conclude the following for the three stock indices. First, a comparison of the normal and tempered stable models indicates that there are no rejections of the tempered stable model at the 5 % significance level, whereas the normal model is rejected five times. This evidence is consistent with the findings of Kim et al. (2011). Second, a comparison of the spot and futures indices indicates that spot data produce fewer than or the same number of violations as futures data. One potential explanation is that futures markets are more volatile, particularly when the market falls.5 This overreaction to bad news could cause the larger number of violations.
5
We compared the spot and futures series when the markets discount bad news (negative returns). We find that for the three stock indices, futures volatility is significantly greater than spot volatility at a 5 % significance level. Moreover, for all three stock indices, the minimum return and the 1 % percentile return are also lower for futures data than spot data.
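The comparison described in this footnote can be reproduced along the following lines; this is a hedged sketch with hypothetical return series, using an F-test of equal variances on negative-return days (the chapter does not specify which equality-of-variance test was used).

```python
import numpy as np
from scipy.stats import f as f_dist

def downside_stats(spot_ret, fut_ret):
    """Compare spot vs. futures volatility on days with negative returns."""
    s, u = spot_ret[spot_ret < 0], fut_ret[fut_ret < 0]
    f_stat = u.var(ddof=1) / s.var(ddof=1)                        # H0: equal variances
    p_val = 1.0 - f_dist.cdf(f_stat, len(u) - 1, len(s) - 1)       # one-sided: futures more volatile
    return {
        "futures_to_spot_var_ratio": f_stat,
        "p_value": p_val,
        "min_return": (spot_ret.min(), fut_ret.min()),
        "1pct_quantile": (np.quantile(spot_ret, 0.01), np.quantile(fut_ret, 0.01)),
    }

# hypothetical illustration with simulated data standing in for spot/futures returns
rng = np.random.default_rng(1)
spot = rng.normal(0, 0.010, 2500)
fut = rng.normal(0, 0.012, 2500)
print(downside_stats(spot, fut))
```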
48.4 Introduction of Trading Volume
In the previous section, we showed the usefulness of the tempered stable model for stock index futures. Motivated by the vast literature linking trading volume and volatility, for the first time we investigate whether the introduction of trading volume in the CTS model could improve its ability to forecast 1-day-ahead VaR. Let $(S_t)_{t\ge 0}$ be the asset price process and $(y_t)_{t\ge 0}$ be the return process of $(S_t)_{t\ge 0}$ defined by $y_t = \log(S_t/S_{t-1})$. We propose the following ARMA(1,1)-GARCH(1,1) model with trading volume:
$$y_t = a\,y_{t-1} + b\,\sigma_{t-1}\varepsilon_{t-1} + \sigma_t\varepsilon_t + c, \qquad \sigma_t^2 = \alpha_0 + \alpha_1\,\sigma_{t-1}^2\varepsilon_{t-1}^2 + \beta_1\,\sigma_{t-1}^2 + \gamma_1\,\mathrm{Vol}_{t-1}, \qquad (48.4)$$
where $\varepsilon_0 = 0$ and $(\varepsilon_t)_{t\in\mathbb{N}}$ is a sequence of iid real random variables. The innovation $\varepsilon_t$ is assumed to be a tempered stable innovation. We will consider the standard classical tempered stable distribution. This new ARMA(1,1)-GARCH(1,1)-V model is defined as the CTS-ARMA-GARCH-V model: $\varepsilon_t \sim \mathrm{stdCTS}(\alpha, \lambda_+, \lambda_-)$. The inclusion of lagged volume as an independent variable along with lagged volatility in the model may cause a problem of multicollinearity. In order to determine the seriousness of the problem, we estimated the model without volume, extracted the GARCH series, and determined the degree of collinearity between the two variables. The measure most recommended in the literature is the condition index of Belsley et al. (1980); collinearity is considered severe if the index exceeds 20. In our case, the calculated values were 4.9268, 3.2589, and 4.5569 for the S&P, DAX, and Nikkei, respectively. Therefore, we conclude that collinearity is a minor problem. Moreover, multicollinearity affects only the coefficients of the GARCH equation (i.e., the volume variable can appear insignificant when it is in fact significant), not the numerical estimate of the variance or the forecasting power of the overall model. As our objective is to forecast VaR, we believe that the multicollinearity problem can be ignored because the results will not be affected.
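The collinearity diagnostic mentioned above can be computed as follows; this is a generic sketch of the Belsley et al. (1980) condition index for a design matrix whose columns are the lagged conditional variance and lagged volume. The data below are hypothetical, not the chapter's series.

```python
import numpy as np

def condition_index(X):
    """Belsley et al. (1980) condition index: scale columns to unit length, then take
    the ratio of largest to smallest singular value (sqrt of the eigenvalue ratio of X'X)."""
    Xs = X / np.linalg.norm(X, axis=0)        # unit column lengths
    s = np.linalg.svd(Xs, compute_uv=False)   # singular values of the scaled matrix
    return s.max() / s.min()                  # values above ~20 indicate severe collinearity

# hypothetical regressors: lagged conditional variance and a correlated lagged-volume proxy
rng = np.random.default_rng(2)
sigma2_lag = rng.lognormal(mean=-9, sigma=0.5, size=1000)
vol_lag = 0.3 * sigma2_lag + rng.normal(0, sigma2_lag.std(), 1000)
print(condition_index(np.column_stack([sigma2_lag, vol_lag])))
```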
48.4.1 Different Variants of Trading Volume

For the S&P 500 cash and futures markets, we test the following versions of trading volume in order to determine which one would be the most appropriate:
• Lagged trading volume in levels: V(t − 1)
• Logarithm of lagged trading volume: log [V(t − 1)]
• Relative change of lagged trading volume: log [V(t − 1)/V(t − 2)]
Table 48.2 CTS-ARMA-GARCH with lagged volume
Model CTS-ARMA-GARCHV(t 1) CTS-ARMA-GARCHlog [V(t 1)] CTS-ARMA-GARCHln [V(t 1)/V(t 2)]
1 year (255 days) Dec. 14, 2004 Dec. 16, 2005 Dec. 15, 2005 Dec. 20, 2006 N(p-value) N(p-value) S&P 500 spot 16(0.0000) 10(0.0004) 1(0.2660) 0
Dec. 21, 2006 Dec. 28, 2007 Dec. 27, 2007 Dec. 31, 2008 N(p-value) N(p-value) 23(0.0000)
26(0.0000)
4(0.3995)
10(0.0004)
6(0.0646)
2(0.7190)
6(0.0646)
5(0.1729)
4(0.3995)
6(0.0646)
4(0.3995)
5(0.1729)
3(0.7829)
5(0.1729)
15(0.0000)
3(0.7829)
4(0.3995)
5(0.1729)
8(0.0061)
8(0.0061)
S&P 500 futures CTS-ARMA-GARCH1(0.2660) 11(0.0001) V(t 1) CTS-ARMA-GARCH1(0.2660) 3(0.7829) log [V(t 1)] CTS-ARMA-GARCH1(0.2660) 3(0.7829) ln [V(t 1)/V(t 2)] CTS-ARMA-GARCH- 0 1(0.2660) V$(t 1) CTS-ARMA-GARCH3(0.7829) 3(0.7829) log [V$(t 1)] CTS-ARMA-GARCH2(0.7190) 0 ln [V$(t 1)/V$(t 2)]
The number of violations (N) and p-values of Kupiec's proportion of failures test for the S&P 500 spot and futures indices with the different variants of volume using the stdCTS-ARMA(1,1)-GARCH(1,1) model are shown. V(t − 1), log [V(t − 1)], and ln [V(t − 1)/V(t − 2)] stand for the level, logarithm, and relative change of the lagged trading volume, respectively. V$(t − 1), log [V$(t − 1)], and ln [V$(t − 1)/V$(t − 2)] stand for the level, logarithm, and relative change of the lagged trading volume in dollars, respectively
The spot series trading volume is in dollars; for the futures series, the trading volume is in number of contracts. We can also calculate the volume of the futures market in dollars. The tick value of the S&P 500 futures contract is 0.1 index points or $25. Multiplying the number of contracts by the price and then by $250 (the contract's multiplier), we obtain the trading volume series for the futures contract in dollars. Thus, for the futures contract we get three new versions of trading volume to test:
• Lagged trading volume in dollars: V$(t − 1)
• Logarithm of lagged trading volume in dollars: log [V$(t − 1)]
• Relative change of lagged trading volume in dollars: log [V$(t − 1)/V$(t − 2)]
By doing that, we can determine which series (in dollars or in contracts) seems to be more useful for the futures index. In Table 48.2 we report the number of violations and p-values of Kupiec's backtest for the different versions of the CTS-ARMA-GARCH-V model for the S&P 500 spot
and futures indices. We count the number of violations and the corresponding p-values for 1 % VaRs of both markets. From Table 48.2, we conclude the following:
• The model with the lagged trading volume in levels is rejected at the 1 % significance level in all four periods for the S&P 500 spot, and for the second period (2005–2006) for the S&P 500 futures.
• The logarithm of trading volume in the model is rejected at the 5 % significance level for the spot market for the third period (2006–2007), but it is not rejected in any period for the futures market.
• The relative change of the lagged volume is not rejected at the 5 % significance level in any period in either market. Of the three versions of trading volume tested, this version seems to be the most useful for both spot and futures markets.
• The results for trading volume in contracts and trading volume in dollars in the futures market indicate that the former is rejected at the 1 % significance level only for the lagged trading volume in levels in the second period (2005–2006). Trading volume in dollars is rejected three times: for the lagged trading volume in levels in the third period (2006–2007) and for the lagged relative trading volume change in the last two periods (2006–2007 and 2007–2008). These findings suggest that trading volume in contracts is the preferred measure.
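The lagged-volume regressors compared above can be constructed along the following lines; this is a hedged sketch with hypothetical data, assuming pandas is available and an S&P 500 futures multiplier of $250 as stated in the text.

```python
import numpy as np
import pandas as pd

def volume_variants(volume, price=None, multiplier=250.0):
    """Build the lagged-volume regressors tested in the chapter.

    volume: daily futures volume in contracts (pandas Series).
    If price is given, a dollar-volume series is also built as
    contracts * price * multiplier.
    """
    out = pd.DataFrame({
        "V_lag": volume.shift(1),                                   # V(t-1), levels
        "logV_lag": np.log(volume).shift(1),                        # log V(t-1)
        "dlogV_lag": np.log(volume / volume.shift(1)).shift(1),     # log[V(t-1)/V(t-2)]
    })
    if price is not None:
        dollar = volume * price * multiplier
        out["Vusd_lag"] = dollar.shift(1)
        out["dlogVusd_lag"] = np.log(dollar / dollar.shift(1)).shift(1)
    return out

# hypothetical example data
idx = pd.date_range("2006-01-02", periods=5, freq="B")
vol = pd.Series([1.2e6, 1.1e6, 1.5e6, 1.3e6, 1.4e6], index=idx)
px = pd.Series([1270.0, 1268.5, 1275.0, 1280.2, 1278.9], index=idx)
print(volume_variants(vol, px).round(4))
```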
48.4.2 Lagged Relative Change of Trading Volume

As we have just seen, the variant of trading volume that seems most useful for forecasting 1-day-ahead VaR using the CTS-ARMA-GARCH model is the relative change of trading volume. Next, we compare the original CTS-ARMA-GARCH model with the new CTS-ARMA-GARCH-V model, where V is the lagged relative change of trading volume. Table 48.3 shows the number of violations and p-values of Kupiec's backtest for the two models for the three stock indices and both markets. We sum up the number of violations and the corresponding p-values for 1 % VaRs for each case. Our conclusions from Table 48.3 are as follows. For the spot markets, the introduction of trading volume does not lead to a reduction in the number of violations in any period for any index. However, for the futures markets, the numbers of violations for the model with trading volume are the same as or lower than those for the original model. Thus, by introducing trading volume, we get a slightly more conservative model, increasing the VaR forecast for futures equity markets.
48.4.3 Lagged Trading Volume or Forecasting Contemporaneous Trading Volume

Although there is some evidence supporting the relationship between lagged trading volume and volatility, this literature is not as extensive as the studies that establish a strong link between volatility and contemporaneous trading
Table 48.3 CTS-ARMA-GARCH versus CTS-ARMA-GARCH-V
Model CTS-ARMA-GARCH CTS-ARMA-GARCHln [V(t 1)/V(t 2)] CTS-ARMA-GARCH CTS-ARMA-GARCHln [V(t 1)/V(t 2)] CTS-ARMA-GARCH CTS-ARMA-GARCHln [V(t 1)/V(t 2)] CTS-ARMA-GARCH CTS-ARMA-GARCHln [V(t 1)/V(t 2)] CTS-ARMA-GARCH CTS-ARMA-GARCHln [V(t 1)/V(t 2)] CTS-ARMA-GARCH CTS-ARMA-GARCHln [V(t 1)/V(t 2)]
1 year (255 days) Dec. 14, 2004 Dec. 16, 2005 Dec. 15, 2005 Dec. 20, 2006 N(p-value) N(p-value) S&P 500 spot 0 2(0.7190) 0 2(0.7190)
Dec. 21, 2006 Dec. 28, 2007 Dec. 27, 2007 Dec. 31, 2008 N(p-value) N(p-value) 6(0.0646) 6(0.0646)
4(0.3995) 5(0.1729)
S&P 500 futures 1(0.2660) 3(0.7829) 1(0.2660) 3(0.7829)
4(0.3995) 3(0.7829)
5(0.1729) 5(0.1729)
DAX 30 spot 4(0.3995) 5(0.1729)
4(0.3995) 4(0.3995)
3(0.7829) 3(0.7829)
4(0.3995) 5(0.1729)
DAX 30 futures 3(0.7829) 4(0.3995) 3(0.7829) 4(0.3995)
6(0.0646) 5(0.1729)
3(0.7829) 3(0.7829)
Nikkei 225 spot 1(0.2660) 3(0.7829) 3(0.7829) 3(0.7829)
4(0.3995) 4(0.3995)
5(0.1729) 5(0.1729)
Nikkei 225 futures 5(0.1729) 5(0.1729) 4(0.3995) 4(0.3995)
6(0.0646) 6(0.0646)
6(0.0646) 6(0.0646)
The number of violations (N) and p-values of Kupiec's proportion of failures test for the S&P 500, DAX 30, and Nikkei 225 spot and futures indices are reported. The CTS-ARMA-GARCH model and the CTS-ARMA-GARCH model with the lagged relative change of trading volume are compared
volume. As there are countless ways to try to forecast trading volume, we begin by introducing the contemporaneous relative change of trading volume into the model as a benchmark to assess whether it is worthwhile to forecast trading volume. In Table 48.4 we show the number of violations and p-values of Kupiec's backtest for the CTS-ARMA-GARCH model with contemporaneous and with lagged relative change of trading volume for the three stock indices in both markets. We count the number of violations and the corresponding p-values for 1 % VaRs for the six indices. Our conclusions based on the results reported in Table 48.4 are as follows. First, with the exception of the S&P 500 futures, the model with the contemporaneous relative change of trading volume is rejected at the 1 % significance level for the last period analyzed (2007–2008). In the case of the S&P
Table 48.4 CTS-ARMA-GARCH with contemporaneous volume versus CTS-ARMA-GARCH with lagged volume 1 year (255 days) Dec. 14, 2004 Dec. 16, 2005 Dec. 15, 2005 Dec. 20, 2006 N(p-value) N(p-value) Model S&P 500 spot CTS-ARMA-GARCH- 0 3(0.7829) ln [V(t)/V(t 1)] CTS-ARMA-GARCH- 0 5(0.1729) ln [V(t 1)/V(t 2)] S&P 500 futures CTS-ARMA-GARCH- 0 1(0.2660) ln [V(t)/V(t 1)] CTS-ARMA-GARCH- 1(0.2660) 3(0.7829) ln [V(t 1)/V(t 2)] DAX 30 spot CTS-ARMA-GARCH- 0 1(0.2660) ln [V(t)/V(t 1)] CTS-ARMA-GARCH- 5(0.1729) 4(0.3995) ln [V(t 1)/V(t 2)] DAX 30 futures CTS-ARMA-GARCH- 1(0.2660) 1(0.2660) ln [V(t)/V(t 1)] CTS-ARMA-GARCH- 3(0.7829) 4(0.3995) ln [V(t 1)/V(t 2)] Nikkei 225 spot CTS-ARMA-GARCH- 3(0.7829) 5(0.1729) ln [V(t)/V(t 1)] CTS-ARMA-GARCH- 3(0.7829) 3(0.7829) ln [V(t 1)/V(t 2)] Nikkei 225 futures CTS-ARMA-GARCH- 1(0.2660) 1(0.2660) ln [V(t)/V(t 1)] CTS-ARMA-GARCH- 4(0.3995) 4(0.3995) ln [V(t 1)/V(t 2)]
Dec. 21, 2006 Dec. 28, 2007 Dec. 27, 2007 Dec. 31, 2008 N(p-value) N(p-value) 3(0.7829)
8(0.0061)
6(0.0646)
5(0.1729)
7(0.0211)
6(0.0646)
3(0.7829)
5(0.1729)
3(0.7829)
11(0.0001)
3(0.7829)
5(0.1729)
2(0.7190)
8(0.0061)
5(0.1729)
3(0.7829)
7(0.0211)
8(0.0061)
4(0.3995)
5(0.1729)
3(0.7829)
11(0.0001)
6(0.0646)
6(0.0646)
The number of violations (N) and p-values of Kupiec's proportion of failures test for the S&P 500, DAX 30, and Nikkei 225 spot and futures indices are shown. The CTS-ARMA-GARCH model with the contemporaneous relative change of trading volume and the CTS-ARMA-GARCH model with the lagged relative change of trading volume are compared
500 futures, it is rejected at the 5 % significance level for the third period (2006–2007). Second, the model with the lagged relative change of trading volume is not rejected for any stock index or market. It appears to be more robust than the contemporaneous specification (although, in general, there are fewer violations when using the contemporaneous variable).
Our results suggest that it is not worth the effort to predict contemporaneous trading volume, because the forecasts will be flawed and two variables would have to be predicted (VaR and contemporaneous trading volume). Similarly, the lagged relative change of trading volume appears to be more robust because it is not rejected in any case, although it provides only a modest improvement to the model.
48.5 Conclusions
Based on an empirical analysis of spot and futures trading for the S&P 500, DAX 30, and Nikkei 225 stock indices, in this paper we provide empirical evidence about the usefulness of classical tempered stable distributions for predicting 1-day-ahead VaR. Unlike prior studies that investigated CTS models in the cash equity markets, we analyzed their suitability for both spot markets and futures markets. We find that in both markets the CTS models perform better in forecasting 1-day-ahead VaR than models that assume innovations follow the normal law. Second, we introduced trading volume into the CTS model. Our empirical evidence suggests that the lagged relative change of trading volume provides a slightly more conservative model (i.e., reduces the number of violations) for predicting 1-day-ahead VaR for stock index futures contracts. We cannot state the same for the cash market because the results are mixed depending on the index. After that, we introduced contemporaneous trading volume to try to improve the forecasting ability of the model, but in the end, it did not seem to be worth the effort. That is, trading volume appeared not to offer enough information to improve forecasts. Finally, we compared the number of violations of the estimated VaR in the spot and futures equity markets. For the CTS model without volume, in general, we find fewer violations for the spot indices than for the equivalent futures contracts. In addition, our results suggest that the number of violations in futures markets is lower for the CTS model with trading volume than for the CTS model that ignores trading volume. But if we compare spot and futures equity markets, violations are still more frequent for futures than for spot markets. A possible reason is that futures markets exhibit extra volatility, or an overreaction when the market falls, relative to their corresponding spot markets.
Appendix: VaR on the CTS Random Variable

Let $X$ be a CTS random variable. Since the CTS random variable is continuous and infinitely divisible, we obtain $\mathrm{VaR}_\eta(X) = -F_X^{-1}(\eta)$, where the cumulative distribution function $F_X$ of $X$ is provided by the following proposition.

Proposition. Let $X$ be an infinitely divisible random variable and $\phi_X(u)$ be the characteristic function of $X$. If there is a $\rho > 0$ such that $|\phi_X(z)| < \infty$ for all complex $z$ with $\Im(z) = \rho$, then
$$F_X(x) = \frac{e^{x\rho}}{\pi}\,\Re\!\left(\int_0^{\infty} e^{-ixu}\,\frac{\phi_X(u+i\rho)}{\rho - iu}\,du\right), \qquad x \in \mathbb{R}, \qquad (48.5)$$
where $\Re(z)$ is the real part of a complex number $z$.

Proof. By the definition of the cumulative distribution function, we have

$$F_X(x) = P(X \le x) = \int_{-\infty}^{x} f_X(t)\,dt,$$

where $f_X(x)$ is the density function of $X$. The probability density function $f_X(t)$ can be obtained from the characteristic function $\phi_X$ by the complex inversion formula (see Doetsch 1970); that is,

$$f_X(t) = \frac{1}{2\pi}\int_{-\infty+i\rho}^{\infty+i\rho} e^{-itz}\,\phi_X(z)\,dz,$$

and we have

$$F_X(x) = \int_{-\infty}^{x}\frac{1}{2\pi}\int_{-\infty+i\rho}^{\infty+i\rho} e^{-itz}\,\phi_X(z)\,dz\,dt = \frac{1}{2\pi}\int_{-\infty+i\rho}^{\infty+i\rho}\left(\int_{-\infty}^{x} e^{-itz}\,dt\right)\phi_X(z)\,dz.$$

Note that if $\rho > 0$, then

$$\lim_{t\to-\infty}\left|e^{-it(a+i\rho)}\right| = \lim_{t\to-\infty} e^{\rho t} = 0, \qquad a \in \mathbb{R},$$

and hence

$$\int_{-\infty}^{x} e^{-itz}\,dt = \left[-\frac{1}{iz}\,e^{-itz}\right]_{-\infty}^{x} = -\frac{1}{iz}\,e^{-ixz},$$

where $z \in \mathbb{C}$ with $\Im(z) = \rho$. Thus, we have

$$F_X(x) = -\frac{1}{2\pi}\int_{-\infty+i\rho}^{\infty+i\rho}\frac{1}{iz}\,e^{-ixz}\,\phi_X(z)\,dz = -\frac{1}{2\pi}\int_{-\infty}^{\infty}\frac{1}{i(u+i\rho)}\,e^{-ix(u+i\rho)}\,\phi_X(u+i\rho)\,du = \frac{e^{x\rho}}{2\pi}\int_{-\infty}^{\infty} e^{-ixu}\,\frac{\phi_X(u+i\rho)}{\rho - iu}\,du.$$
Let

$$g_\rho(u) = \frac{\phi_X(u+i\rho)}{\rho - iu}.$$

Then we can show that $g_\rho(-u) = \overline{g_\rho(u)}$ for $u \in \mathbb{R}$, and hence we have

$$\int_{-\infty}^{\infty} e^{-ixu}\,g_\rho(u)\,du = 2\,\Re\!\left(\int_0^{\infty} e^{-ixu}\,g_\rho(u)\,du\right).$$
Therefore, we obtain Eq. 48.5.
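A direct numerical implementation of Eq. 48.5 can be written for a generic characteristic function; the sketch below is illustrative (not the authors' code) and is sanity-checked against the standard normal, whose characteristic function is known in closed form. For the CTS case one would plug in Eq. 48.2 and choose ρ small enough that the characteristic function remains finite on the shifted contour; the truncation of the integral and the simple quadrature rule are crude choices for illustration only.

```python
import numpy as np
from scipy.integrate import quad

def cdf_from_cf(x, cf, rho=0.5, upper=200.0):
    """Evaluate F_X(x) = exp(x*rho)/pi * Re( int_0^inf e^{-ixu} cf(u+i*rho)/(rho - i*u) du )."""
    def integrand(u):
        z = complex(u, rho)
        val = np.exp(-1j * x * u) * cf(z) / (rho - 1j * u)
        return val.real                      # real part can be taken inside the integral
    integral, _ = quad(integrand, 0.0, upper, limit=500)
    return np.exp(x * rho) / np.pi * integral

# sanity check against the standard normal characteristic function cf(z) = exp(-z^2/2)
normal_cf = lambda z: np.exp(-0.5 * z * z)
print(cdf_from_cf(0.0, normal_cf))       # approximately 0.5
print(cdf_from_cf(1.6449, normal_cf))    # approximately 0.95
```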
References

Asai, M., & McAleer, M. (2009). The structure of dynamic correlations in multivariate stochastic volatility models. Journal of Econometrics, 150, 182–192.
Belsley, D., Kuh, E., & Welsch, R. (1980). Regression diagnostics: Identifying influential data and sources of collinearity. New York: Wiley.
Bollerslev, T. (1986). Generalized autoregressive conditional heteroscedasticity. Journal of Econometrics, 31, 307–327.
Carchano, O., & Pardo, A. (2009). Rolling over stock index futures contracts. Journal of Futures Markets, 29, 684–694.
Clark, P. K. (1973). A subordinated stochastic process model with finite variance for speculative prices. Econometrica, 41, 135–155.
Conrad, J. S., Hameed, A., & Niden, C. (1994). Volume and autocovariances in short-horizon individual security returns. Journal of Finance, 49, 1305–1329.
Copeland, T. E. (1976). A model of asset trading under the assumption of sequential information arrival. Journal of Finance, 34, 1149–1168.
Doetsch, G. (1970). Introduction to the theory and application of the Laplace transformation. New York: Springer.
Engle, R. (1982). Autoregressive conditional heteroskedasticity with estimates of the variance of United Kingdom inflation. Econometrica, 50, 987–1007.
Engle, R., Focardi, S. M., & Fabozzi, F. J. (2008). ARCH/GARCH models in applied financial econometrics. In F. J. Fabozzi (Ed.), Handbook of finance (Vol. III). Hoboken: Wiley.
Epps, T. W., & Epps, L. M. (1976). The stochastic dependence of security price changes and transaction volumes: Implications for the mixture-of-distribution hypothesis. Econometrica, 44, 305–321.
Foster, F. J. (1995). Volume-volatility relationships for crude oil futures markets. Journal of Futures Markets, 15, 929–951.
Fung, H. G., & Patterson, G. A. (1999). The dynamic relationship of volatility, volume, and market depth in currency futures markets. Journal of International Financial Markets, Institutions and Money, 17, 33–59.
Garcia, P., Leuthold, M. R., & Zapata, H. (1986). Lead-lag relationships between trading volume and price variability: New evidence. Journal of Futures Markets, 6, 1–10.
Gwilym, O., MacMillan, D., & Speight, A. (1999). The intraday relationship between volume and volatility in LIFFE futures markets. Applied Financial Economics, 9, 593–604.
Huang, C. Y., & Lin, J. B. (2004). Value-at-risk analysis for Taiwan stock index futures: Fat tails and conditional asymmetries in return innovations. Review of Quantitative Finance and Accounting, 22, 79–95.
Karpoff, J. M. (1987). The relation between price changes and trading volume: A survey. Journal of Financial and Quantitative Analysis, 22, 109–126.
Kim, Y. S., Rachev, S. T., Chung, D. M., & Bianchi, M. L. (2009). The modified tempered stable distribution, GARCH-models and option pricing. Probability and Mathematical Statistics, 29, 91–117.
Kim, Y. S., Rachev, S. T., Bianchi, M. L., & Fabozzi, F. J. (2010). Tempered stable and tempered infinitely divisible GARCH models. Journal of Banking and Finance, 34, 2096–2109.
Kim, Y. S., Rachev, S. T., Bianchi, M. L., Mitov, I., & Fabozzi, F. J. (2011). Time series analysis for financial market meltdowns. Journal of Banking and Finance, 35, 1879–1891.
Kupiec, P. (1995). Techniques for verifying the accuracy of risk measurement models. Journal of Derivatives, 6, 6–24.
Ragunathan, V., & Peker, A. (1997). Price variability, trading volume and market depth: Evidence from the Australian futures market. Applied Financial Economics, 7, 447–454.
Schwert, G. W. (1989). Why does stock market volatility change over time? Journal of Finance, 44, 1115–1153.
Smirlock, M., & Starks, L. (1985). A further examination of stock price changes and transactions volume. Journal of Financial Research, 8, 217–225.
Tang, L. T., & Shieh, J. S. (2006). Long memory in stock index futures markets: A value-at-risk approach. Physica A: Statistical Mechanics and its Applications, 366, 437–448.
Tauchen, G., & Pitts, M. (1983). The price variability-volume relationship on speculative markets. Econometrica, 51, 485–505.
Verbeek, M. (2004). A guide to modern econometrics. Hoboken: Wiley.
Wang, G., & Yau, J. (2000). Trading volume, bid-ask spread, and price volatility in futures markets. Journal of Futures Markets, 20, 943–970.
49 Computer Technology for Financial Service

Fang-Pang Lin, Cheng-Few Lee, and Huimin Chung
Contents
49.1 Introduction ............................................................. 1342
  49.1.1 Information Technology (IT) for Financial Services ............... 1342
  49.1.2 Competitiveness Through IT Performance .......................... 1343
49.2 Performance Enhancement ................................................ 1346
  49.2.1 High-End Computing Technology ................................... 1346
  49.2.2 Compute Intensive IT Systems .................................... 1349
  49.2.3 Data-Intensive IT Systems ....................................... 1352
49.3 Distributed and Parallel Financial Simulation .......................... 1354
  49.3.1 Financial Simulation ............................................ 1355
  49.3.2 Monte Carlo Simulations ......................................... 1357
  49.3.3 Distribution and Parallelism Based on Random Number Generation .. 1360
49.4 Case Study and Discussions ............................................. 1367
  49.4.1 Case Study ...................................................... 1367
  49.4.2 Grid Platforms Tests ............................................ 1369
49.5 Conclusions ............................................................ 1375
References .................................................................. 1377
F.-P. Lin (*) National Center for High Performance Computing, Hsinchu, Taiwan e-mail: [email protected] C.-F. Lee Department of Finance and Economics, Rutgers Business School, Rutgers, The State University of New Jersey, Piscataway, NJ, USA Graduate Institute of Finance, National Chiao Tung University, Hsinchu, Taiwan e-mail: [email protected] H. Chung Graduate Institute of Finance, National Chiao Tung University, Hsinchu, Taiwan e-mail: [email protected] C.-F. Lee, J. Lee (eds.), Handbook of Financial Econometrics and Statistics, DOI 10.1007/978-1-4614-7750-1_49, # Springer Science+Business Media New York 2015
Abstract
Securities trading is one of the few business activities where a few seconds of processing delay can cost a company a fortune. The growing competition in the market exacerbates the situation and pushes trading further towards instantaneous execution, even within a split second. The key lies in the performance of the underlying information system. Following the computing evolution in financial services, processing began as a centralized activity and was gradually decentralized into a distribution of actual application logic across service networks. The financial services industry has a tradition of doing most of its heavy-duty financial analysis in overnight batch cycles. In securities trading, however, this cannot satisfy the need, due to its ad hoc nature and requirement of fast response. New computing paradigms, such as grid and cloud computing, aiming at scalable and virtually standardized distributed computing resources, are well suited to the challenge posed by capital market practices. Both consolidate computing resources by introducing a layer of middleware to orchestrate the use of geographically distributed powerful computers and large storage via fast networks. It is nontrivial to harvest the most from this kind of architecture. The Wiener process plays a central role in modern financial modeling. Its scaled random walk feature, in essence, allows millions of financial simulations to be conducted simultaneously. This sheer scale can only be tackled via grid or cloud computing. In this study the core computing competence for financial services is examined. Grid and cloud computing are briefly described, and how the underlying algorithms for financial analysis can take advantage of a grid environment is presented. One of the most widely practiced algorithms, Monte Carlo simulation, is used in our case study for option pricing and risk management. Various distributed computational platforms are carefully chosen to demonstrate the performance issue for financial services.
Keywords
Financial service • Grid and cloud computing • Monte Carlo simulation • Option pricing • Risk management • Cyberinfrastructure • Random number generation • High-end computing • Financial simulation • Information technology
49.1 Introduction

49.1.1 Information Technology (IT) for Financial Services

The financial services industry involves a broad range of organizations such as banks, credit card companies, insurance companies, consumer finance companies, stock brokerages, investment funds, and some government-sponsored enterprises. The industry represents a significant share of the global market. Information technology (IT) in the financial services industry is considered an indispensable tool for productivity as well as competitiveness in the market. IT spending in the financial services industry grows constantly across different industry verticals (banking, insurance, and securities and investments). The direct impact of advanced IT on the financial services industry continues to rise.
The structure of the industry has changed significantly in the last two decades as companies that are not traditionally viewed as financial service providers have taken advantage of opportunities created by technology to enter the market. New technology-based services keep emerging. These changes are a direct result of the interaction of technology with the industrial environment, such as the economic atmosphere, societal pressures, and the legal/regulatory environment in which the financial services industry operates. The effects of IT on the internal operations, the structure, and the types of services offered by the financial services industry have been particularly profound (Phillips et al. 1984; Hauswald and Marquez 2003; Griffiths and Remenyi 2003). IT has been and continues to be both a motivator and a facilitator of change in the financial services industry, which ultimately leads to the competitiveness of the industry. The change has been particularly radical since 1991, when the World Wide Web was invented by Tim Berners-Lee and his group for information sharing in the high-energy physics community. It was later introduced to the rest of the world, which subsequently changed the face of how people do business today. Informational considerations have long been recognized to determine not only the degree of competition but also the pricing and profitability of financial services and instruments. Recent technological progress has dramatically affected the production and availability of information, thereby changing the nature of competition in such informationally sensitive markets. Hauswald and Marquez (2003) investigate how advances in information technology (IT) affect competition in the financial services industry, particularly credit, insurance, and securities markets. Two aspects of improvement in IT are the focus: better processing and easier dissemination of information. In other words, two dimensions of technological progress that affect competition in financial services can be defined: advances in the ability to process and evaluate information, and the ease of obtaining information generated by competitors. While better technology may result in improved information processing, it might also lead to low-cost or even free access to information through, for example, informational spillovers. They show that in the context of credit screening, better access to information decreases interest rates and the returns from screening. On the other hand, an improved ability to process information increases interest rates and bank profits. Hence predictions regarding the pricing of financial claims hinge on the overall effect ascribed to technological progress. Their results conclude that, in general, informational asymmetries in financial markets drive profitability. The viewpoint of Hauswald and Marquez is adopted in this work. Assuming competitors in the dynamics of the financial market possess similar capacity, informational asymmetries can sometimes be created within seconds and can now be achieved through the outperformance of the underlying IT platforms.
49.1.2 Competitiveness Through IT Performance

Following the computing evolution in financial services, it was a centralized process to begin with and gradually decentralized into a distribution of practical
Fig. 49.1 The trend history from Google Trend according to global Search Volume and global News Reference Volume, in which the alphabetic letters represent the specific events that relate to each curve
trading application logic across service networks. The financial services industry has a tradition of doing most of its heavy-lifting financial analysis in overnight batch cycles. However, in securities trading this cannot satisfy the need, due to its ad hoc nature and requirement of fast response. New computing paradigms, grid computing and cloud computing, subsequently emerged in the last decade. Grid computing was initially incorporated into the core context of the well-referenced Atkins report of the National Science Board of the United States, namely, "Revolutionizing Science and Engineering Through Cyberinfrastructure" (Atkins et al. 2003), which lays down a visionary path for future IT platform development worldwide. One may observe this trend from Google Trends statistics on the global Search Volume and global News Reference Volume of the key phrases "cluster computing," "grid computing," "cloud computing," and "Big Data" (Fig. 49.1), which represent four mainstream computing paradigms in high-end quantitative analysis. Cluster computing is a group of coupled computers that work closely together so that in many respects they can be viewed as a single computer. They are connected with high-speed local area networks, and the purpose is usually to gain more compute cycles with better cost performance and higher availability. Grid computing aims at virtualizing scalable, geographically distributed computing and observatory resources to maximize compute cycles and data transaction rates with minimum cost. Cloud computing is a more recent development owing to the similar technology used by global information service providers, such as Google and Amazon. In the simplest terms, the cloud can be thought of as a subset of the Internet. Within the cloud, computers talk with servers instead of communicating directly with each other as in peer-to-peer computing (Milojicic et al. 2002). There are no definitive definitions for the above
terminology. However, people tend to view clusters as one of the foundational components of grids, or grids as a meta-cluster on wide area networks. This is also known as horizontal integration. The cloud further virtualizes compute, storage, and network resources in a utility sense and provides an interface between users and grids. We refer to Foster et al. (2008) and Yelick et al. (2011) for a comparison. This perspective considers grids as a backbone of cyberinfrastructure to support the clouds. Similarly, in the early days of grid development there were so-called "@HOME"-style PC grids (Korpela et al. 2001), which run on at least tens of thousands of PCs whose owners donate CPU time when their machines are idle. Such PC grids can be specifically categorized as clouds. Figure 49.1 shows that there is a gradual drop in the search volume curves for grid computing and cluster computing, and, for cloud computing, many surges and growth but a recent quick drop since its introduction in mid-2007. The newly rising technology is Big Data (Bughin et al. 2010), which implies a gradual paradigm shift from compute-centric and network-centric to data-centric computing. However, the size of the search volume strongly relates to the degree of maturity of each computing paradigm. This is obvious in cluster computing. Clusters are the major market products, either as supercomputers from big vendors, such as IBM, HP, SGI, and NEC, or as aggregations of PCs in university research laboratories. Figure 49.1 also implies a constant market need for high-end computing. Performance and security issues are fundamental to general distributed and parallel computing and remain a challenge for cluster, grid, cloud, and Big Data computing (Lauret et al. 2010; Ghoshal et al. 2011; Ramakrishnan et al. 2011). Performance models in a compute-based grid environment, which is also cloud-like, are adopted in this work. The general definitions of grid and cloud computing will be introduced and briefly compared. To tackle the core performance issue, grids are chosen to demonstrate how fundamental financial calculations can be improved, hence leveraging financial services. Grid computing, followed by cloud computing as shown in Fig. 49.1, has matured to serve as a production environment for financial services in recent years. Grid computing is well suited to the challenge posed by capital market practices. In this study the core computing competence for financial services will be examined, and how underlying algorithms for financial analysis can take advantage of a grid environment will be scrutinized. One of the most widely practiced algorithms is Monte Carlo simulation (MCS), and it will be specifically used in our case study for calculations of option pricing and of value at risk (VaR) in risk management. Three grid platforms are carefully chosen to explore the performance issue for financial services. The first one is a traditional grid platform with heterogeneous and distributed resources, in which digital packets usually travel via optical fibers. For long distances, depending on network traffic, this produces approximately 150–300 milliseconds of latency across the Pacific Ocean; this is the physical constraint of the speed of light traveling through the fiber channels (light in fiber covers roughly 200,000 km/s, so a trans-Pacific span on the order of 10,000 km alone costs tens of milliseconds one way). Therefore, even within a split second, packets can still travel to anywhere in the world. The Pacific Rim Applications and Grid Middleware Assembly (PRAGMA) grid is a typical example, which links 36 sites in 14 countries.
The system is highly heterogeneous. The computer nodes mounted to the PRAGMA grid range
from usual PC clusters to high-end supercomputers. The second one is a special Linux PC cluster based on DRBL (Diskless Remote Boot in Linux). It converts the systems into a homogeneous Linux cluster and exploits the compute cycles of the cluster. The intention is to provide dynamic and flexible resources to cope better with the uncertainty of traders' demand for compute cycles. Finally, a PC grid is chosen to demonstrate financial services that can be effectively conducted through cloud-style computing. The usefulness of the PC grid is based on the fact that roughly 90 % of PC CPU time is idle.
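To make the scale argument concrete, the embarrassingly parallel nature of Monte Carlo option pricing can be illustrated with a small sketch that splits simulation paths evenly across worker processes; this is a single-machine multiprocessing stand-in for the grid/cloud distribution discussed in this chapter, with purely illustrative contract parameters.

```python
import numpy as np
from multiprocessing import Pool

def price_chunk(args):
    """Simulate n_paths GBM terminal prices on one worker and return the discounted payoff sum."""
    n_paths, s0, k, r, sigma, t, seed = args
    rng = np.random.default_rng(seed)                 # independent random stream per worker
    z = rng.standard_normal(n_paths)                  # Wiener-process increments
    st = s0 * np.exp((r - 0.5 * sigma**2) * t + sigma * np.sqrt(t) * z)
    return np.exp(-r * t) * np.maximum(st - k, 0.0).sum()

if __name__ == "__main__":
    n_workers, n_total = 8, 8_000_000
    chunk = n_total // n_workers                      # work-balanced split of scenarios
    jobs = [(chunk, 100.0, 105.0, 0.03, 0.2, 1.0, seed) for seed in range(n_workers)]
    with Pool(n_workers) as pool:
        payoff_sums = pool.map(price_chunk, jobs)
    print("MC European call price:", sum(payoff_sums) / n_total)
```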
49.2 Performance Enhancement

In this section two types of grid systems, compute intensive and data intensive, are introduced. The classification is based on the various grid applications. Traditionally, grid systems provide a general platform to harvest, or to scavenge when resources are otherwise idle, compute cycles from a collection of resources across the boundaries of institutional administration. In the real world most applications are in fact data centric. For example, a trading center collects tick-by-tick volume data from all related financial markets and is driven by informational flows, hence is typically data centric. However, as noted in Sect. 49.3.2.1, the core competence still lies in the performance enhancement of the IT system. The following two subsections give more details of compute-intensive as well as data-intensive grid systems through a survey of current grid developments specifically for financial services. In some cases, e.g., high-frequency data with real-time analysis, the two systems have to work together to get better performance. Our emphasis will be more on compute-intensive grid systems.
49.2.1 High-End Computing Technology 49.2.1.1 Definitions of High-End Computing Grid was coined by Ian Foster (Foster and Kessleman 2004) who gave the essence of the definitions as below: The sharing that we are concerned with is not primarily file exchange but rather direct access to computers, software, data, and other resources, as is required by a range of collaborative problem solving and resource-brokering strategies emerging in industry, science, and engineering. This sharing is, necessarily, highly controlled, with resource providers and consumers defining clearly and carefully just what is shared, who is allowed to share, and the conditions under which sharing occurs. A set of individuals and/or institutions defined by such sharing rules form what we call a virtual organization.
The definition is centered on the concept of virtual organization, but it is not explicit enough to explain what the grid is. Foster then provides additional checklist as below to safeguard the possible logic pitfalls of the definition. Hereby, grid is a system that: 1. Coordinates resources that are not subject to centralized control. A grid integrates and coordinates resources and users that live within different control domains – for example, the user’s desktop vs. central computing, different
administrative units of the same company, or different companies – and addresses the issues of security, policy, payment, membership, and so forth that arise in these settings. Otherwise, we are dealing with a local management system. 2. Using standard, open, general-purpose protocols and interfaces. A grid is built from multipurpose protocols and interfaces that address such fundamental issues as authentication, authorization, resource discovery, and resource access. As I discuss further below, it is important that these protocols and interfaces be standard and open. Otherwise, we are dealing with an application specific system. 3. To deliver nontrivial qualities of service. A grid allows its constituent resources to be used in a coordinated fashion to deliver various qualities of service, relating, for example, to response time, throughput, availability, and security, and/or co-allocation of multiple resource types to meet complex user demands, so that the utility of the combined system is significantly greater than that of the sum of its parts. The definition of grid thus far is well accepted and has been stably used up to now. The virtual organization (VO) has strong implication of community driven and collaborative sharing of distributed resources. The advance of development of optical fiber network in recent years plays a critical role of why grids can be a reality. It is also the reason why now the computing paradigm shifts to distributed/grid computing. Additionally, perhaps the most generally useful definition is that a grid consists of shared heterogeneous computing and data resources networked across administrative boundaries. Given such a definition, a grid can be thought of as both an access method and a platform, with grid middleware being the critical software that enables grid operation and ease of use. The term “cloud computing” has been used to refer to different concepts, models, and services over the last few years. The definition for cloud computing provided by the National Institute of Standards and Technology (NIST) is well received in the IT community, which defines cloud computing as a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort service provider interaction (Mell and Grance 2011). The model gains popularity in the industry for its emphasis on pay-as-you-go and elasticity, the ability to quickly expand and collapse the utilized service as demand requires. Thus new approaches to distributed computing and data analysis have also emerged in conjunction with the growth of cloud computing. These include models like MapReduce (Dean and Ghemawat 2004) and scalable key-value stores like Big Table (Chang et al. 2006). From the high-end computing perspective, cloud computing technology allows users to have the ability to get on-demand access to resources to replace or supplement existing systems, as well as the ability to control the software environment. Yet the core competence still lies on the performance of financial calculation and further of the transactions of financial processes. This work will focus on the
core competence in financial calculation based on a grid environment. Grid computing technology will be used to explain how core financial calculations can be significantly accelerated in various distributed and parallel computing environments. The calculation models in this work can be easily migrated to pure cloud environments.
49.2.1.2 Essence of IT Technology To realize the above goal, it needs to handle technically interoperability of middleware that is capable of communicating between heterogeneous computer systems across institutional boundaries. The movement of grid began in 1996 by Ian Foster and Kessleman (2004). Before their development, another branch of high-performance computing that focuses on connecting geographically distributed supercomputers to achieve one single grand task had been developed by Smarr and Catlett (1992). They coined such a methodology as metacomputing and their query has been how we can have infinite computing power under the physical limit, such as Moore’s Law. However, it remains to be less useful because its limit goal on pursuing top performance without noticing practical use in real world. The idea lives on and generates many tools dedicated to high-performance/high-throughput computing, such as Condor (Litzkow et al. 1988), Legion (Grimshaw and Wulf 1997), and UNICORE (Almond and Snelling 1999). Condor, as suggested by the name of the project, is devised to scavenge a large cluster of idle workstations. Legion is closer to the development of worldwide virtual computer. The goal of UNICORE is even much simpler and practical. It was developed when Germany government decided to consolidate their five national supercomputer centers into a virtual one to reduce the management cost and needed a software tool to integrate them, hence the UNICORE. These tools were successful under their development scope. However they fail to meet the first and the second items in Foster’s checklist in the previous section. The emergency of grids follows the similar path as that of Condor and Legion at the first place, in which its development aims at resources sharing in high-performance computing. However, its vision in open standards and the concept of virtual organization allows its development go far beyond merely cluster supercomputers together. It gives a broader view of resources sharing, in which it is not only limited to the sizable computing cycles and storage space to be shared but also extended virtually to calculable machines that are able to hook up to the Internet, such as sensors and sensor loggers, storage servers, and computers. Since 1996, Foster and his team have been developing software tools to achieve the purpose. Their software Globus Toolkit (Foster and Kessleman 2004) is now a de facto middleware for grids. However, the ambitious development is still considered insufficient to meet the ever-growing complexity of grid systems. As mentioned earlier that grid is based on open specifications and standards, they allow all stakeholders within the virtual organization/grid to communicate with each other with ease and enable ones more to focus on integrated value creation activities. The open specifications and standards are made by the community of Open Grid Forum (OGF), which plays as a standard body and made, discussed, and announced new standards during regular OGF meetings. Grid Specifications and Standards
include architecture, scheduling, resource management, system configuration, data, data movement, security, and grid security infrastructure. In 2004, OGF announced a new version of the Globus Toolkit, which adopts both the open grid standard, the Open Grid Services Architecture (OGSA), and the more widely adopted World Wide Web standard, the Web Services Resource Framework (WSRF), which ultimately enables grids to tackle issues of both scalability and complexity of very large grid systems.
49.2.2 Compute Intensive IT Systems The recent development of computational finance based on grids is hereby scrutinized and remarks given. Our major interest is to see if the split-second performance is well justified under the grid architecture. Also, real-time issue with real market parametric data should be used as input for practical simulation. In addition, issues of intersystem, interdisciplinary and geographically distribution of resources, and the degree of virtualization are crucial to the success of such a grid. The chosen projects are reviewed and discussed as follows: 1. PicsouGrid This is a French grid project for financial service. It provides a general framework for computation finance and targets on applications of option trading, option pricing, Monte Carlo simulation, aggregation of statistics, etc. (StokesRees et al. 2007). The key for this development is the implementation of the middleware ProActive. ProActive is an in-house Java library for distributed computing developed by INRIA Sophia Antipolis, France. It provides transparent asynchronous distributed method calls and is implemented on top of Java RMI. It is also used in commercial applications. It also provides fault tolerance mechanism. The architecture is shown in Fig. 49.2, which is very similar to most of grid applications apart from the software stack used. The option pricing was tested in an approximately 894 CPUs. The underlying computer systems are heterogeneous. The system is used for metacomputing. As a result, the system has to specifically design to orchestrate and to synchronize and re-synchronize the whole distributed processes for one calculation. Once the grid system requires synchronization between processes, which implies stronger coupling of algorithm of interest, the performance will be seriously affected. There is no software treatment to solve such problems and should be tackled by physical infrastructure, e.g., optical fiber network with Layer 2 light path. 2. FinGrid FinGrid stands for Financial Information Grid. Its study includes components of bootstrapping, sentimental analysis, and multi-scale analysis, which focuses on information integration and analysis, e.g., data mining. It takes advantage of the huge collection of numerical and textual data simultaneously to emphasize the study of societal issues (Amad et al. 2004; Ahmad et al. 2005; Gillam et al. 2005). The architecture of FinGrid is shown in Fig. 49.3. It is a typical 3tier system, in which the first tier facilitates the client in sending a request to one of the services: Text Processing Service or Time Series Service; the second tier
(Figure 49.2 elements: client, server, subservers, and workers connected through ProActive, with a JavaSpace virtual shared memory and a database; message flows include option pricing requests, MC simulation packets, MC results, worker reservation, and a heartbeat monitor.)
Fig. 49.2 Architecture of PicsouGrid for option pricing based on Monte Carlo simulation (Stokes-Rees et al. 2007)
(Figure 49.3 elements: client, main cluster, slaves 0-3, Text Processing Service, Time Series Service, data providers at location X (Reuters news source) and location Y (numerical data), CoG, Java SSL, Globus, and Triarch SSL; flow: 1. send service request, 2. distribute the tasks, 3. receive results, 4. notify the user about the results.)
Fig. 49.3 The architecture of Financial Information Grid (FinGrid)
facilitates the execution of parallel tasks in the main cluster, distributed to a set of slave machines (nodes); and the third tier comprises the connection of the slave machines to the data providers. This work focuses on a small-scale, dedicated grid system. It pumps in real, live numerical and textual data from, say, Reuters and performs sophisticated real-time data mining analysis. This is a good prototype for a financial grid. However, it will encounter problems similar to those of PicsouGrid if it is to scale up. The model is more successful in automatically combining real data with the analysis. 3. IBM Japan collaborates with a life insurance company and adopts the PC grid concept to scavenge more compute cycles (Tanaka 2003). In this work an integrated risk management system (see Fig. 49.4) is modified, in which the future scenarios highlighted in Fig. 49.4 are sent via grid middleware to a cluster of PCs. According to the size of the given PC cluster, the
(Figure 49.4 elements: current portfolio; present value of portfolio and PV of each asset; market price of risk, risk premium adjustment rate, and basic equations (stochastic differential equations); future scenarios 1...N over default-free interest rates (domestic, foreign), hazard rate, stock price, and exchange rate; future valuation giving prices 1...N; distribution of the future value of the portfolio (individual assets); return valuation and risk valuation.)
Fig. 49.4 Architecture of Integrated Risk Management System (Tanaka 2003)
number of scenarios is then divided in a work-balanced manner among the PCs. This is the most typical use of compute-intensive grid systems and a good practice for a production system. However, the key issues discussed in the above two cases cannot be answered by this study. A similar architecture can also be found in EGrid (Leto et al. 2005). 4. UK e-Science developed a grid service discovery in the financial market sector focusing on the integration of different knowledge flows (Bell and Ludwig 2005). From the application's viewpoint, the business and technical architecture of financial service applications may be segmented by product, process, or geographic concerns. Segmented inventories make inter-silo reuse difficult. A service integration model is adopted, with a loosely coupled inventory containing differing explicit
(Figure 49.5 elements: SEDI4G component repository SCR (Apache JNDI or MDS), Web Start client, SEDI4G client view SCV, SEDI4G discovery control service SDCS, SEDI4G matching service SMAS, an ontology, and grid service descriptions 1...n, each described in grid services 1...n; numbered arrows 1-6 mark the discovery flow.)
Fig. 49.5 The semantic discovery for Grid Services Architecture (SEDI4G) (Bell and Ludwig 2005)
capability knowledge. Three use cases were specifically chosen in this work to explore the use of semantic searching: Use case 1 – searching for trades executed with a particular counterparty; Use case 2 – valuing a portfolio of interest rate derivative products; Use case 3 – valuing an option-based product. The use cases were chosen to provide examples of three distinct patterns of use – aggregation, standard selection, and multiple selection. The architecture (see Fig. 49.5) is bound specifically to the use cases. The advantage of the grid in this case is that it can be easily tailored to specific user needs to integrate different applications, which is a crucial strength of using grids.
49.2.3 Data-Intensive IT Systems
Grids in financial services can also be viewed from the perspective of web services for the financial services industry, a perspective that is more on the transactional side. Once the bottleneck of compute cycles is solved, the data-centric nature of the business will again play the key role.
The knowledge that flows back to the customized business logic should provide the best path for users to access the live data of interest. So far there has been no strong development focus on such data-intensive grid systems. Even in FinGrid, which claims to stream live data for real-time analysis, the data issue remains part of the compute grid. However, the need for dynamic data management is obvious, as mentioned in Amad et al. (2004). Here we would like to introduce and implement a dynamic data management software, the Ring Buffer Network Bus (RBNB) DataTurbine, to serve this purpose. RBNB DataTurbine was used recently to support a global environmental observatory network, which involves linking tens of thousands of sensors and is able to obtain the observed data online. It meets grid/cyberinfrastructure (CI) requirements with regard to data acquisition, instrument management, and state-of-health monitoring, including reliable data capture and transport, persistent monitoring of numerous data channels, automated processing, event detection and analysis, integration across heterogeneous resources and systems, real-time tasking and remote operations, and secure access to system resources. To that end, streaming data middleware provides the framework for application development and integration. Use cases of RBNB DataTurbine include adaptive sampling rates, failure detection and correction, quality assurance, and simple observation (see Tilak et al. 2007). Real-time data access can be used to generate interest and buy-in from various stakeholders. Real-time streaming data is a natural model for many applications in observing systems, in particular event detection and pattern recognition. Many of these applications involve filters over data values, or more generally, functions over sliding temporal windows. The RBNB DataTurbine middleware provides a modular, scalable, robust environment while providing security, configuration management, routing, and data archival services. The RBNB DataTurbine system acts as an intermediary between dissimilar data monitoring and analysis devices and applications. As shown in Fig. 49.6, a modular architecture is used, in which a source or "feeder" program is a Java application that acquires data from external live data sources and feeds it into the RBNB server. Additional modules display and manipulate data fetched from the RBNB server. This allows a flexible configuration where RBNB serves as a coupling between relatively simple and "single purpose" suppliers of data and consumers of data, both of which are presented with a logical grouping of physical data sources. RBNB supports the modular addition of new sources and sinks with a clear separation of design, coding, and testing (ref. Fig. 49.6). From the perspective of distributed systems, the RBNB DataTurbine is a "black box" to which applications and devices send data and from which they receive data. RBNB DataTurbine handles all data management operations between data sources and sinks, including reliable transport, routing, scheduling, and security. RBNB accomplishes this through the innovative use of memory and file-based ring buffers combined with flexible network objects. Ring buffers are a programmer-configurable mixture of memory and disk, allowing system tuning to meet application-dependent data management requirements. Network bus elements perform data stream multiplexing and routing. These elements
(Figure 49.6 elements: live data sources feeding the RBNB server through feeder programs and client APIs over USB, TCP/IP, and HTTP; Tomcat, DAV, and plugin clients consume the data.)
Fig. 49.6 RBNB DataTurbine use scenario for collaborative applications
combine to support seamless real-time data archiving and distribution over existing local and wide area networks. Ring buffers also connect directly to client applications to provide streaming-related services including data stream subscription, capture, rewind, and replay. This presents clients with a simple, uniform interface to real-time and historical (playback) data.
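To make the ring-buffer idea concrete, the following is a minimal, self-contained Python sketch of a single-channel ring buffer with streaming and rewind/replay semantics. It is only a conceptual illustration of the pattern described above; the class and method names are hypothetical and do not correspond to the actual RBNB DataTurbine (Java) API.

```python
from collections import deque

class RingBufferChannel:
    """Conceptual single-channel ring buffer: feeders append timestamped
    samples, sinks read the latest data or rewind/replay history."""

    def __init__(self, capacity=1000):
        self.buffer = deque(maxlen=capacity)   # oldest samples are dropped automatically

    def put(self, timestamp, value):
        """Feeder side: append one timestamped observation."""
        self.buffer.append((timestamp, value))

    def latest(self, n=1):
        """Sink side: fetch the n most recent samples (streaming subscription)."""
        return list(self.buffer)[-n:]

    def replay(self, start, end):
        """Sink side: rewind/replay all samples with start <= timestamp <= end."""
        return [(t, v) for t, v in self.buffer if start <= t <= end]

# Example: a feeder pushes simulated ticks, a sink replays a window.
channel = RingBufferChannel(capacity=5)
for t, price in enumerate([100.0, 100.2, 99.9, 100.5, 100.4, 100.7]):
    channel.put(t, price)

print(channel.latest(2))        # two most recent ticks
print(channel.replay(2, 4))     # historical playback over timestamps 2..4
```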
49.3 Distributed and Parallel Financial Simulation
In the previous sections, we address issues of incorporating IT technology for financial competitiveness and conclude that the core lies in the performance of the IT platform, provided that the competitors in the market have similar capacity and are
equally informed. Grid technology, as the leading IT development in high-performance computing, is introduced as the cutting-edge IT platform to meet our goal. Many companies have adopted similar grid technologies with success, as mentioned in Sect. 49.1. There is also increasing research interest, which results in the work discussed in Sect. 49.2.3. Better performance, however, cannot be achieved merely by using a single architecture, as observed in the cases of Sect. 49.2.3. The architecture obviously has to be chosen specifically for the analysis of interest. At the same time, the analysis procedures have to be tailored to the chosen architecture for performance fine-tuning. In this section, we introduce and discuss analysis procedures of financial simulation and how to tailor them to grid architectures through distribution and parallelism. The popular calculations of option pricing and of value at risk (VaR) in trading practice are used to serve this purpose. The calculation is based on Monte Carlo simulation, which is chosen not only because it is a well-received approach given the absence of straightforward closed-form solutions for many financial models but also because it is a numerical method intrinsically suited to mass distribution and mass parallelism. The success of Monte Carlo simulation rests on the quality of the random number generator, which is discussed in detail at the end of the section.
49.3.1 Financial Simulation
A wide variety of sophisticated financial models have been developed, ranging from analysis in time series, fractals, nonlinear dynamics, and agent-based modeling to applications in option pricing, portfolio management, and market risk measurement (Schmidt 2005), among which option pricing and the VaR calculation of market risk can be considered crucial and among the most practiced activities in market trading.
49.3.1.1 Option Pricing
An option is an agreement between two parties to buy or sell an asset at a certain time in the future for a certain price. There are two types of options: Call Option: A call option is a contract that gives its holder (i.e., the buyer) the right, without creating an obligation, to buy a prespecified underlying asset at a predetermined price. Usually this right is created for a specific time period, e.g., 6 months or more. If the option can be exercised only at its expiration (i.e., the underlying asset can be purchased only at the end of the life of the option), the option is referred to as a European-style Call Option (or European Call). If it can be exercised on any date before its maturity, the option is referred to as an American-style Call Option (or American Call). Put Option: A put option is a contract that gives its holder the right, without creating an obligation, to sell a prespecified underlying asset at a predetermined price. If the option can be exercised only at its expiration (i.e., the underlying asset can be sold only at the end of the life of the option), the option is referred to as
a European-style Put Option (or European Put). If it can be exercised on any date before its maturity, the option is referred to as an American-style Put Option (or American Put). To price options in computational finance, we use the following notation: K is the strike price; T is the time to maturity of the option; St is the stock price at time t; r is the risk-free interest rate; μ is the drift rate of the underlying asset (a measure of the average rate of growth of the asset price); σ is the volatility of the stock; and V denotes the option value. Here is an example to illustrate the concept of option pricing. Suppose an investor enters into a call option contract to buy a stock at price K after 3 months. After 3 months, the stock price is ST. If ST > K, one can exercise the option by buying the stock at price K and immediately selling it in the market to make a profit of ST − K. On the other hand, if ST ≤ K, one would not exercise the option to buy the stock. Hence, we see that a call option to buy the stock at time T at price K will get payoff (ST − K)+, where (ST − K)+ ≡ max(ST − K, 0) (Schmidt 2005; Hull 2003).
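As a quick numerical illustration of the payoff formula (with hypothetical values for the strike and the terminal price), a minimal Python sketch:

```python
def call_payoff(s_t, k):
    """European call payoff (S_T - K)^+ = max(S_T - K, 0)."""
    return max(s_t - k, 0.0)

def put_payoff(s_t, k):
    """European put payoff (K - S_T)^+ = max(K - S_T, 0)."""
    return max(k - s_t, 0.0)

# Hypothetical example: strike K = 100, terminal price S_T = 108.
print(call_payoff(108.0, 100.0))  # 8.0 -> exercise the call
print(put_payoff(108.0, 100.0))   # 0.0 -> let the put expire
```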
49.3.1.2 Market Risk Measurement Based on VaR
Market risks are the prospect of financial losses or gains due to unexpected changes in market prices and rates. Evaluating the exposure to such risks is nowadays of primary concern to risk managers in financial institutions. Until the late 1980s market risk was estimated through gap and duration analysis (interest rates), portfolio theory (securities), sensitivity analysis (derivatives), or scenario analysis. However, all these methods either could be applied only to very specific assets or relied on subjective reasoning. Since the early 1990s a commonly used market risk estimation methodology has been the value at risk (VaR). A VaR measure is the highest possible loss L incurred from holding the current portfolio over a certain period of time at a given confidence level (Dowd 2002):

\[ P(L > \mathrm{VaR}) \le 1 - c \qquad (49.1) \]

where c is the confidence level, typically 95 %, 97.5 %, or 99 %, and P is the cumulative distribution function. By convention, L = −ΔX(t), where ΔX(t) is the relative change (return) in portfolio value over the time horizon t. Hence, large values of L correspond to large losses (or large negative returns). The VaR figure has two important characteristics: (1) it provides a common consistent measure of risk across different positions and risk factors and (2) it takes into account the correlations or dependencies between different risk factors. Because of its intuitive appeal and simplicity, it is no surprise that in a few years value at risk became the standard risk measure used around the world. However, VaR has a few deficiencies, among them non-subadditivity – the sum of the VaRs of two portfolios can be smaller than the VaR of the combined portfolio. To cope with these shortcomings, Artzner et al. proposed an alternative measure that satisfies the assumptions of a coherent risk measure. The expected
shortfall (ES), also called expected tail loss (ETL) or conditional VaR, is the expected value of the losses in excess of VaR:

\[ \mathrm{ES} = E\left(L \mid L > \mathrm{VaR}\right) \qquad (49.2) \]
It is interesting to note that, although new to the finance industry, expected shortfall has been familiar to insurance practitioners for a long time. It is very similar to the mean excess function, which is used to characterize claim size distributions; see Cizek et al. (2011). The essence of the VaR and ES computations is the estimation of low quantiles in the portfolio return distribution. Hence, the performance of market risk measurement methods depends on the quality of the distributional assumptions on the underlying risk factors. Many of the concepts in theoretical and empirical finance developed over the past decades, including classical portfolio theory, the Black-Scholes-Merton option pricing model, and even the RiskMetrics variance-covariance approach to VaR, rest upon the assumption that asset returns follow a normal distribution. The assumption is not justified by real market data. Our interest is more on the calculation side. For interested readers we refer further to Weron (2004).
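To fix ideas, here is a minimal Python sketch (not from the chapter) that estimates VaR and ES at a 99 % confidence level from a sample of simulated portfolio losses; the normal-return assumption is only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical one-day relative portfolio changes; losses are L = -dX.
returns = rng.normal(loc=0.0005, scale=0.01, size=100_000)
losses = -returns

c = 0.99                                   # confidence level
var = np.quantile(losses, c)               # P(L > VaR) <= 1 - c
es = losses[losses > var].mean()           # ES = E(L | L > VaR)

print(f"VaR({c:.0%}) = {var:.4%}, ES = {es:.4%}")
```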
49.3.2 Monte Carlo Simulations
49.3.2.1 Monte Carlo and Quasi-Monte Carlo Methods
In general, Monte Carlo (MC) and quasi-Monte Carlo (QMC) methods are applied to estimate the integral of a function f(x) over the [0, 1]^d unit hypercube, where d is the dimension of the hypercube:

\[ I = \int_{[0,1]^d} f(x)\,dx \qquad (49.3) \]

In MC methods, I is estimated by evaluating f(x) at N independent points randomly chosen from a uniform random distribution over [0, 1]^d and then evaluating the average

\[ \hat{I} = \frac{1}{N}\sum_{i=1}^{N} f(x_i) \qquad (49.4) \]

From the law of large numbers, \(\hat{I} \to I\) as \(N \to \infty\). The standard deviation is

\[ \sqrt{\frac{1}{N-1}\sum_{i=1}^{N}\left(f(x_i) - \hat{I}\right)^{2}} \qquad (49.5) \]

Therefore, the error of MC methods is proportional to \(N^{-1/2}\).
QMC methods compute the above integral based on low-discrepancy (LD) sequences. The elements in an LD sequence are "uniformly" chosen from [0, 1]^d rather than "randomly." The discrepancy is a measure to evaluate the uniformity of points over [0, 1]^d. Let {q_n} be a sequence in [0, 1]^d; the discrepancy D*_N of q_n is defined as follows, using Niederreiter's notation (Niederreiter 1992):

\[ D_N^{*}(q_n) = \sup_{B \subseteq [0,1)^d} \left| \frac{A(B; q_n)}{N} - v_d(B) \right| \qquad (49.6) \]

where B is a subcube of [0, 1)^d containing the origin, A(B; q_n) is the number of points in q_n that fall into B, and v_d(B) is the d-dimensional Lebesgue measure of B. The elements of q_n are said to be uniformly distributed if the discrepancy D*_N → 0 as N → ∞. From the theory of uniform distribution sequences (Kuipers and Niederreiter 1974), the estimate of the integral using a uniformly distributed sequence {q_n} is \(\hat{I} = \frac{1}{N}\sum_{n=1}^{N} f(q_n)\), and as N → ∞, \(\hat{I} \to I\). The integration error bound is given by the Koksma-Hlawka inequality:

\[ \left| I - \frac{1}{N}\sum_{n=1}^{N} f(q_n) \right| \le V(f)\, D_N^{*}(q_n) \qquad (49.7) \]
where V(f) is the variation of the function in the sense of Hardy and Krause (see Kuipers and Niederreiter 1974), which is assumed to be finite. The inequality suggests a smaller error can be obtained by using sequences with smaller discrepancy. The discrepancy of many uniformly distributed sequences satisfies O((log N)d/N). These sequences are called low-discrepancy (LD) sequences (Chen et al. 2006). Inequality (49.7) shows that the estimates using a LD sequence satisfy the deterministic error bound O((log N)d/N).
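As a hedged illustration of the difference between MC and QMC estimates of Eq. 49.3 (not part of the original chapter), the sketch below integrates a simple test function over [0, 1]^d with pseudorandom points and with a Sobol low-discrepancy sequence, assuming SciPy (version 1.7 or later) is available for its quasi-Monte Carlo module.

```python
import numpy as np
from scipy.stats import qmc

d, m = 5, 14                       # dimension, 2**m sample points
rng = np.random.default_rng(0)

def f(x):
    """Separable test integrand whose exact integral over [0,1]^d equals 1."""
    return np.prod(12.0 * (x - 0.5) ** 2, axis=1)

# Plain Monte Carlo: iid uniform points, error ~ N^(-1/2).
x_mc = rng.random((2 ** m, d))
i_mc = f(x_mc).mean()

# Quasi-Monte Carlo: scrambled Sobol points, deterministic error bound ~ (log N)^d / N.
x_qmc = qmc.Sobol(d=d, scramble=True, seed=0).random_base2(m=m)
i_qmc = f(x_qmc).mean()

print(f"MC estimate  = {i_mc:.5f} (abs. error {abs(i_mc - 1):.2e})")
print(f"QMC estimate = {i_qmc:.5f} (abs. error {abs(i_qmc - 1):.2e})")
```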
49.3.2.2 Monte Carlo Simulations for Option Pricing
Under the risk-neutral measure, the price of a fairly valued European call option is the expectation of the payoff E[e^{−rT}(S_T − K)^+]. In order to compute the expectation, Black and Scholes (1973) modeled the stochastic process generating the price of a non-dividend-paying stock as geometric Brownian motion:

\[ dS_t = \mu S_t\,dt + \sigma S_t\,dW_t \qquad (49.8) \]

where W is a standard Wiener process, also known as Brownian motion. Under the risk-neutral measure, the drift μ is set to μ = r. To simulate the path followed by S, suppose the life of the option has been divided into n short intervals of length Δt (Δt = T/n); the updating of the stock price at t + Δt from t is (Hull 2003):

\[ S_{t+\Delta t} - S_t = r S_t\,\Delta t + \sigma S_t Z \sqrt{\Delta t} \qquad (49.9) \]
where Z is a standard random variable, i.e., Z ∼ N(0, 1). This enables the value of S_{Δt} to be calculated from the initial value S_0, the value at time 2Δt to be calculated from S_{Δt}, and so on. Hence, a complete path for S is constructed. In practice, in order to avoid discretization errors, it is usual to simulate ln S rather than S. From Itô's lemma, the process followed by ln S in Eq. 49.9 is (Bratley and Fox 1988)

\[ d\ln S = \left(r - \frac{\sigma^{2}}{2}\right)dt + \sigma\,dz \qquad (49.10) \]

so that

\[ \ln S_{t+\Delta t} - \ln S_t = \left(r - \frac{\sigma^{2}}{2}\right)\Delta t + \sigma Z \sqrt{\Delta t} \qquad (49.11) \]

or equivalently

\[ S_{t+\Delta t} = S_t \exp\left[\left(r - \frac{\sigma^{2}}{2}\right)\Delta t + \sigma Z \sqrt{\Delta t}\right] \qquad (49.12) \]
Substituting independent samples Z_1, ..., Z_n from the normal distribution into Eq. 49.12 yields independent samples S_T(i), i = 1, ..., n, of the stock price at expiry time T. Hence, the option value is given by

\[ V = \frac{1}{n}\sum_{i=1}^{n} V_i = \frac{1}{n}\sum_{i=1}^{n} e^{-rT}\max\left(S_T(i) - K,\, 0\right) \qquad (49.13) \]
The QMC simulations follow the same steps as the MC simulations, except that the pseudorandom numbers are replaced by LD sequences. The basic LD sequences known in the literature are Halton (1960), Sobol (1967), and Faure (1982). Niederreiter (1992) proposed a general principle for generating LD sequences. In finance, several examples have shown that the Sobol sequence is superior to the others. For example, Galanti and Jung (1997) observed that the Sobol sequence outperforms the Faure sequence, and the Faure marginally outperforms the Halton sequence. In this research, we use the Sobol sequence in our experiments. The generator used for generating the Sobol sequence comes from the modified Algorithm 659 of Joe and Kuo (2003).
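To connect Eqs. 49.12 and 49.13 with code, here is a minimal Python sketch (with hypothetical market parameters) that prices a European call by plain Monte Carlo and, for comparison, by quasi-Monte Carlo with a scrambled Sobol sequence mapped to standard normals through the inverse CDF; it is only an illustrative sketch, not the grid implementation discussed in the text.

```python
import numpy as np
from scipy.stats import norm, qmc

# Hypothetical market parameters.
s0, k, r, sigma, t = 100.0, 105.0, 0.03, 0.25, 1.0
m = 16                                   # 2**m simulated terminal prices

def call_price(z):
    """Eq. 49.12 terminal prices and Eq. 49.13 discounted mean payoff."""
    s_t = s0 * np.exp((r - 0.5 * sigma ** 2) * t + sigma * np.sqrt(t) * z)
    return np.exp(-r * t) * np.maximum(s_t - k, 0.0).mean()

# Plain Monte Carlo with pseudorandom standard normals.
z_mc = np.random.default_rng(0).standard_normal(2 ** m)

# Quasi-Monte Carlo: Sobol points in (0, 1) mapped to normals via the inverse CDF.
u = qmc.Sobol(d=1, scramble=True, seed=0).random_base2(m=m).ravel()
u = np.clip(u, 1e-12, 1.0 - 1e-12)       # guard against boundary values
z_qmc = norm.ppf(u)

print(f"MC  price: {call_price(z_mc):.4f}")
print(f"QMC price: {call_price(z_qmc):.4f}")
```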
49.3.2.3 Monte Carlo Bootstrap for VaR Monte Carlo simulation is applicable with virtually any model of changes in risk factors and any mechanism for determining a portfolio’s value in each market scenario. But revaluing a portfolio in each scenario can present a substantial computational burden, and this motivates research into ways of improving the efficiency of Monte Carlo methods for VaR.
The bootstrap (Efron 1981; Efron and Tibshirani 1986) is a simple and straightforward method for calculating approximated biases, standard deviations, confidence intervals, and so forth, in almost any nonparametric estimation problem. Method is a keyword here, since little is known about the bootstrap's theoretical basis, except that (a) it is closely related to the jackknife in statistical inference; (b) under reasonable conditions, it gives asymptotically correct results; and (c) for some simple problems which can be analyzed completely, for example, ordinary linear regression, the bootstrap automatically produces standard solutions. The bootstrap method is straightforward. Suppose we observe returns X_i = x_i, i = 1, 2, ..., n, where the X_i are independent and identically distributed (iid) according to some unknown probability distribution F. The X_i may be real valued, two-dimensional, or take values in a more complicated space. A given parameter θ(F), perhaps the mean, median, correlation, and so forth, is to be estimated, and we agree to use the estimate \(\hat{\theta} = \theta(\hat{F})\), where \(\hat{F}\) is the empirical distribution function putting mass 1/n at each observed value x_i. We wish to assign some measure of accuracy to \(\hat{\theta}\). Let σ(F) be some measure of accuracy that we would use if F were known, for example, \(\sigma(F) = \mathrm{SD}_F(\hat{\theta})\), the standard deviation of \(\hat{\theta}\) when \(X_1, X_2, \ldots, X_n \overset{iid}{\sim} F\). The bootstrap estimate of accuracy \(\hat{\sigma} = \sigma(\hat{F})\) is the nonparametric maximum likelihood estimate of σ(F). In order to calculate \(\hat{\sigma}\) it is usually necessary to employ numerical methods. (a) A bootstrap sample X*_1, X*_2, ..., X*_n is drawn from \(\hat{F}\), in which each X*_i independently takes value x_j with probability 1/n, j = 1, 2, ..., n. In other words, X*_1, X*_2, ..., X*_n is an independent sample of size n drawn with replacement from the set of observations {x_1, x_2, ..., x_n}. (b) This gives a bootstrap empirical distribution function \(\hat{F}^{*}\) of the n values X*_1, X*_2, ..., X*_n, and a corresponding bootstrap value \(\hat{\theta}^{*} = \theta(\hat{F}^{*})\). (c) Steps (a) and (b) are repeated, independently, a large number of times, say N, giving bootstrap values \(\hat{\theta}^{*1}, \hat{\theta}^{*2}, \ldots, \hat{\theta}^{*N}\). (d) The value of \(\hat{\sigma}\) is approximated, in the case where σ(F) is the standard deviation, by the sample standard deviation of the \(\hat{\theta}^{*}\) values, where

\[ \hat{m} = \frac{\sum_{j=1}^{N} \hat{\theta}^{*j}}{N} \qquad (49.14) \]

and

\[ \hat{\sigma}^{2} = \frac{\sum_{j=1}^{N} \left(\hat{\theta}^{*j} - \hat{m}\right)^{2}}{N - 1} \qquad (49.15) \]
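The following is a small Python sketch of steps (a)-(d) above (not from the chapter), using the bootstrap to approximate the standard deviation of an estimated 95 % VaR from a hypothetical sample of daily returns.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical observed daily returns x_1, ..., x_n (fat-tailed for illustration).
x = rng.standard_t(df=4, size=1000) * 0.01

def var_95(sample):
    """Parameter of interest theta(F): the 95% VaR of the return distribution."""
    return -np.quantile(sample, 0.05)

n, n_boot = len(x), 2000
theta_star = np.empty(n_boot)
for j in range(n_boot):
    # Step (a): resample n observations with replacement from the empirical distribution.
    resample = rng.choice(x, size=n, replace=True)
    # Step (b): bootstrap replication of the statistic.
    theta_star[j] = var_95(resample)

# Steps (c)-(d): repeat N times and take the sample standard deviation (Eqs. 49.14-49.15).
m_hat = theta_star.mean()
sigma_hat = theta_star.std(ddof=1)
print(f"VaR estimate: {var_95(x):.4f}, bootstrap s.e.: {sigma_hat:.4f}")
```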
49.3.3 Distribution and Parallelism Based on Random Number Generation
Financial variables, such as prices and returns, are random time-dependent variables. The Wiener process plays the central role in modeling them. As shown in
Eqs. 49.8 and 49.9 for approximating the underlying prices S_{t+Δt}, or the bootstrap samples of returns X*_i, the solution methods involve the basic market parameters (drift μ, volatility σ, and risk-free interest rate r), the current underlying price S or return X, the strike price K, and the Wiener process, which is related to the time to maturity Δt and the standard random variable Z, i.e., ΔW = Z√Δt. Monte Carlo methods simulate this nature of the Brownian motion directly. They follow the Wiener process and approximate the standard random variable Z by introducing pseudo iid random numbers into each Wiener process. When the simulation number is large enough, e.g., if n in Eq. 49.13 is large enough, the mean value will approach the exact solution. The large n also implies that performance problems are the key problems for Monte Carlo methods. On the other hand, the iid property of the random number Z points to a possible way to tackle the performance problem through mass distribution and/or parallelism. The solution method centers on the random number generation. The techniques of random number generation can be developed in a simple form through the approximation of a d-dimensional integral, e.g., Eq. 49.3. Mass distribution and parallelism require solutions for large dimensions. However, most modern techniques in random number generation have limitations. In this study, both traditional pseudorandom number generation and high-dimensional low-discrepancy random number generators are considered. Following Sect. 49.3.2.1, a better solution can be achieved by making use of Sobol sequences, which were proposed by Sobol (1967). A computer implementation in Fortran 77 was subsequently given by Bratley and Fox (1988) as Algorithm 659. Other implementations are available as C, Fortran 77, or Fortran 90 routines in the popular Numerical Recipes collection of software. However, as given, all these implementations have a fairly heavy restriction on the maximum value of d allowed. For Algorithm 659, Sobol sequences may be generated to approximate integrals in up to 40 dimensions, while the Numerical Recipes routines allow the generation of Sobol sequences to approximate integrals in up to six dimensions only. The FinDer software of Paskov and Traub (1995) provides an implementation of Sobol sequences up to 370 dimensions, but it is licensed software. As computers become more powerful, there is an expectation that it should be possible to approximate integrals in higher and higher dimensions. Integrals in hundreds of variables arise in applications such as mathematical finance (e.g., see Paskov and Traub (1995)). Also, as new methods become available for these integrals, one might wish to compare these new methods with Sobol sequences. Thus, it would be desirable to extend existing implementations such as Algorithm 659 so they may be used for higher-dimensional integrals. We remark that Sobol sequences are now considered to be examples of (t, d)-sequences in base 2. The general theory of these low-discrepancy (t, d)-sequences in base b is discussed in detail in Niederreiter (1992). The generation of Sobol sequences is clearly explained in Bratley and Fox (1988). We review the main points so as to show what extra data would be required to allow Algorithm 659 to generate Sobol sequences to approximate integrals in more than 40 dimensions. To generate the j th component of the
points in a Sobol sequence, we need to choose a primitive polynomial of some degree s_j in the field ℤ₂, that is, a polynomial of the form

\[ x^{s_j} + a_{1,j}\,x^{s_j-1} + \cdots + a_{s_j-1,j}\,x + 1 \qquad (49.16) \]

where the coefficients a_{1,j}, ..., a_{s_j−1,j} are either 0 or 1. We use these coefficients to define a sequence {m_{1,j}, m_{2,j}, ...} of positive integers by the recurrence relation

\[ m_{k,j} = 2a_{1,j} m_{k-1,j} \oplus 2^{2} a_{2,j} m_{k-2,j} \oplus \cdots \oplus 2^{s_j-1} a_{s_j-1,j} m_{k-s_j+1,j} \oplus 2^{s_j} m_{k-s_j,j} \oplus m_{k-s_j,j} \qquad (49.17) \]

for k ≥ s_j + 1, where ⊕ is the bit-by-bit exclusive-OR operator. The initial values m_{1,j}, m_{2,j}, ..., m_{s_j,j} can be chosen freely provided that each m_{k,j}, 1 ≤ k ≤ s_j, is odd and less than 2^k. The "direction numbers" {v_{1,j}, v_{2,j}, ...} are defined by

\[ v_{k,j} = \frac{m_{k,j}}{2^{k}} \qquad (49.18) \]

Then x_{i,j}, the j th component of the i th point in a Sobol sequence, is given by

\[ x_{i,j} = b_1 v_{1,j} \oplus b_2 v_{2,j} \oplus \cdots \qquad (49.19) \]
Where b_l is the l th bit from the right when i is written in binary, that is, (··· b_2 b_1)_2 is the binary representation of i. In practice, a more efficient Gray code implementation proposed by Antonov and Saleev (1979) is used; see Bratley and Fox (1988) for details. We then see that the implementation in Bratley and Fox (1988) may be used to generate Sobol sequences to approximate integrals in more than 40 dimensions by providing more data in the form of primitive polynomials and direction numbers (or equivalently, values of m_{1,j}, m_{2,j}, ..., m_{s_j,j}). When generating such Sobol sequences, we need to ensure that the primitive polynomials used to generate each component are different and that the initial values of the m_{k,j}'s are chosen differently for any two primitive polynomials of the same degree. The error bounds for Sobol sequences given in Sobol (1967) indicate we should use primitive polynomials of as low a degree as possible. We discuss how additional primitive polynomials may be obtained in the next section. After these primitive polynomials have been found, we need to decide upon the initial values of the m_{k,j} for 1 ≤ k ≤ s_j. As explained above, all we require is that they be odd and that m_{k,j} < 2^k.

\[ \left\{ \prod_{\mathrm{year1}}^{\mathrm{year5}} \left[ 1 + \frac{\sum_{i=1}^{N}\left(\prod_{j=1}^{T}\left(1+R_{i,j}\right)-1\right)}{N} \right] - 1 \right\} - \left\{ \prod_{\mathrm{year1}}^{\mathrm{year5}} \left[ 1 + \frac{\sum_{i=1}^{N}\left(\prod_{j=1}^{T}\left(1+E\left(R_{i,j}\right)\right)-1\right)}{N} \right] - 1 \right\} \qquad (50.4) \]
Note that T stands for 252 days as an event year, and year 1 to year 5 represent the first event year to the fifth event year.
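As a rough illustration of how Eq. 50.4, under the reading reconstructed above, compounds cross-sectional averages of annual buy-and-hold returns across the five event years, here is a hedged Python sketch with purely hypothetical simulated daily returns arranged as years x firms x days arrays.

```python
import numpy as np

rng = np.random.default_rng(1)
years, n_firms, t_days = 5, 200, 252

# Hypothetical daily returns: sample firms R[i, j] and benchmark expected returns E(R[i, j]).
r_sample = rng.normal(0.0006, 0.02, size=(years, n_firms, t_days))
r_bench = rng.normal(0.0004, 0.02, size=(years, n_firms, t_days))

def compounded_term(daily):
    """Inner part of Eq. 50.4: compound the cross-sectional average annual
    buy-and-hold return over the five event years, then subtract one."""
    annual_bhr = np.prod(1.0 + daily, axis=2) - 1.0   # firm-level BHR per event year
    avg_by_year = annual_bhr.mean(axis=1)             # cross-sectional average per year
    return np.prod(1.0 + avg_by_year) - 1.0           # compound across event years

abnormal = compounded_term(r_sample) - compounded_term(r_bench)
print(f"Five-year abnormal return (Eq. 50.4): {abnormal:.2%}")
```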
Distinct from CAR and BHAR, the quarterly earnings announcement return estimates the long-term stock performance through successive short-term quarterly announcement returns (La Porta et al. 1997; Denis and Sarin 2001; Chan et al. 2004). In other words, we can use the 2-day or 3-day abnormal returns around quarterly announcement dates, and the successive earnings surprises should be aligned with the long-run stock abnormal return. The benchmark selection determines the long-run abnormal return estimation because of its statistical properties (discussed in the next section), so the results may change when the benchmark settings are altered. This sensitivity to the benchmark problem leads to inconclusive long-run abnormal returns. Yet, the short-run return is not sensitive to the selection of benchmarks. For example, given 10 % and 20 % annual market index returns, the 3-day market index returns are expected to be only about 0.12 % and 0.24 %. When an event occurs, the 3-day announcement return could generally be above 1 %, which significantly exceeds any selected benchmark return. Therefore, if a corporate event is followed by profitability improvements that are not observed by investors, then there should be successive earnings surprises following the event date, and the short-term abnormal returns around the earnings announcement dates should be positive. To capture the long-run abnormal return, papers usually study 12-20 quarters (for 3-5 years) of quarterly earnings announcement abnormal returns. Generally, the quarterly earnings announcement abnormal returns capture about 25-40 % of the long-run stock return of a firm (Bernard and Thomas 1989; Sloan 1996).
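A minimal sketch of this idea follows (hypothetical inputs; the 3-day window and the market benchmark are assumptions for illustration): for each post-event quarterly announcement, cumulate the firm return minus the benchmark return over the announcement window and sum across announcements.

```python
import numpy as np

def announcement_abnormal_return(firm_ret, bench_ret, ann_days, window=1):
    """Sum of (firm - benchmark) returns over [-window, +window] days
    around each quarterly earnings announcement day index."""
    firm_ret, bench_ret = np.asarray(firm_ret), np.asarray(bench_ret)
    total = 0.0
    for d in ann_days:
        lo, hi = max(d - window, 0), min(d + window + 1, len(firm_ret))
        total += np.sum(firm_ret[lo:hi] - bench_ret[lo:hi])
    return total

# Hypothetical daily return series and four post-event announcement days.
rng = np.random.default_rng(7)
firm = rng.normal(0.001, 0.02, 500)
market = rng.normal(0.0005, 0.015, 500)
print(announcement_abnormal_return(firm, market, ann_days=[60, 120, 185, 250]))
```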
50.2.2 Benchmark Problem
Long-run stock returns involve some issues regarding calculation and testing when we select the benchmarks for the expected return of the firm. Namely, the conventional methodologies for the long-run stock return might be biased if the chosen matching procedure is inappropriate. First, some long-term return measures suffer from rebalancing and new-listing biases. Second, the long-run stock return is positively skewed, so the traditional t-test is inappropriate for the long-run return. Although I review the skewness-adjusted t-statistics and the empirical p-value in the next section, the matching firm method is another way to alleviate the skewness concern in long-run return studies. The first benchmark for the expected return is the CRSP equal-weighted index return, which includes all stocks in the CRSP database and computes the simple average of stock returns. However, this approach involves a rebalancing problem, which ignores the transaction costs from brokers' fees and taxes to the government. To maintain equal weights on the stocks, investors must sell profitable stocks and buy losing stocks. This rebalancing leads to huge transaction costs that are not considered in the CRSP equal-weighted index return, making the abnormal return underestimated. In fact, CAR per se also has this rebalancing problem because of its averaging in the cross section.
The second benchmark is the CRSP value-weighted index return, which also comes from the CRSP database. This index return uses firm value (equal to the price times shares outstanding) as the weighting scheme to compute the weighted average of stock returns. Most importantly, it is a rebalancing-free benchmark and accordingly has no transaction cost during the holding period (except at the beginning and the end). Therefore, recent papers tend to use the CRSP value-weighted index return instead of the CRSP equal-weighted index return as the benchmark return. The third benchmark is the reference portfolio return. Before constructing the reference portfolio, we need matching criteria. As important pricing factors, size and the book-to-market (BM) ratio are the two most important characteristics in determining the expected return (Fama and French 1992, 1993, 1996; Lakonishok et al. 1994; Daniel and Titman 1997). In particular, we have to determine the matching pool, which includes stocks that are unrelated to the sample firm's event. We form 50 size and book-to-market portfolios (ten size portfolios and five book-to-market portfolios in each size decile), where the size and book-to-market cutoff points are obtained from stocks listed on the NYSE. We are then able to compute either the equal-weighted or the value-weighted portfolio return as the expected return. The last one is the matching firm method. Because the long-run stock return is positively skewed, we may take the long-run return of the matching firm as the expected return. The skewed return of the sample firm and the skewed return of the matching firm offset each other and make the abnormal return symmetric (Barber and Lyon 1997). In addition, the matching firm method avoids the new-listing bias and the rebalancing problem. In general, the matching criteria for the matching firm are similar to those for the reference portfolio. Within each reference portfolio, a matching firm could be selected by minimizing the book-to-market difference between the sample firm and the matching firm (Ikenberry et al. 1995). Sometimes, papers use a few matching firms rather than a single matching firm as the benchmark to avoid the impact of a few outliers (e.g., Lee 1997) or use different matching variables (e.g., Ikenberry and Ramnath 2002; Eberhart et al. 2004). Generally, various matching methods under size and book-to-market controls do not largely change the results.
50.2.3 Statistical Inference
The most important statistical problem for the long-run abnormal return is skewness. The maximum possible loss on a long-term stock investment is 100 % (a return of −100 %), while the maximum potential gain approaches infinity. Thus, the distribution of the long-run stock return is positively skewed. If we test the long-run stock return against a standard normal distribution, then we tend to reject the null hypothesis (of no abnormal return) for negative returns and accept the null for positive returns. This misspecification leads to a type I error in the left tail of the distribution but causes a type II error in the right tail. To solve the skewness problem, Barber and Lyon (1997) suggest the matching firm method, because the abnormal stock return is the return difference between the sample firm and the matching firm, making the skewness of the matching firm and
sample firm offset each other. In addition, there are two ways to alleviate the testing problem under skewness: one is the skewness-adjusted t-statistic and the other is the empirical p-value, suggested by Ikenberry et al. (1995) and Lyon et al. (1999). Given abnormal returns AR_i, the skewness-adjusted t-statistic illustrated by Lyon et al. (1999) is tested as follows:

\[ t_{sa} = \sqrt{N}\left(S + \frac{1}{3}\hat{\gamma}S^{2} + \frac{1}{6N}\hat{\gamma}\right), \quad \text{where} \quad S = \frac{\overline{AR}}{\sigma(AR_i)}, \quad \text{and} \quad \hat{\gamma} = \frac{\sum_{i=1}^{N}\left(AR_i - \overline{AR}\right)^{3}}{N\,\sigma(AR_i)^{3}}. \qquad (50.5) \]
Note that S is the conventional t-statistic and \(\hat{\gamma}\) is the coefficient for the skewness adjustment. The second suggested statistical inference method is the empirical p-value. This approach uses a bootstrapping method to construct an empirical distribution with general long-term return features. We use the bootstrapping method to select pseudo-sample firms. Thus, we are able to compare sample firms and pseudo-sample firms as the basis of the empirical p-value. In fact, we may face higher-order moment conditions (beyond the third moment) in the long-term return estimation, and the skewness-adjusted t-statistic is not enough to capture the return characteristics. Also, the strong cross-sectional correlations among sample observations in BHARs can lead to poorly specified test statistics (Fama 1998; Lyon et al. 1999; Brav 2000). Under the empirical distribution, we can examine the statistical inference without a parametric distribution and are able to capture more unknown statistical features. As mentioned above, the empirical p-value is generated from the empirical distribution from bootstrapping, and the empirical distribution controls well for the skewness and time-dependent properties of the long-run stock returns (Ikenberry et al. 1995; Lyon et al. 1999; Chan et al. 2004). In addition, this empirical p-value also solves the statistical inference problem in RBHAR because we may have too few observations in the time series for computing the standard deviation.² To construct the empirical distribution, we need to find pseudo-sample firms that share similar firm characteristics with the sample firm but do not have the corporate events of interest. Next, we construct 25 size and book-to-market portfolios (five size portfolios and five book-to-market portfolios in each size quintile) from all
² For example, if we compute a 5-year RBHAR, then we have five averages of BHARs in five event years only.
nonevent firms. The nonevent firm selection criteria are similar to those for the matching firm selection. Then, we randomly sample one pseudo-sample firm out of the corresponding portfolio for each sample firm. For example, if an event firm falls in the second size quintile and the third book-to-market quintile, then we randomly choose a pseudo firm that also falls in the second size quintile and the third book-to-market quintile. Hence, we are able to obtain N pseudo firms for N sample firms. Based on these pseudo-sample firms, we calculate the long-run return by the CAR, BHAR, or RBHAR method. Finally, we repeat the sampling and long-run return estimation 1,000 times and obtain 1,000 averages of long-run returns for pseudo-sample firms. The empirical distribution is plotted according to the frequency distribution diagram of those 1,000 average returns of pseudo-sample firms. If the long-run return is larger than the (1 − α) percentile of the 1,000 average pseudo-sample firm returns, then we obtain the p-value as α for testing the positive average abnormal return. Similarly, if the long-run return is smaller than the α percentile of the 1,000 average pseudo-sample firm returns, then we obtain the p-value as α for testing the negative average abnormal return. Figures 50.2a-50.2d are empirical distributions for 1-year to 4-year long-run returns based on size/book-to-market controlled pseudo-sample firms of US repurchase firms during 1980-2008. I collect the repurchase data from the SDC database as the example for the empirical distribution construction. For those empirical distributions, it is obvious that the 4-year return has more outliers than the 1-year return. Moreover, the 4-year return figure has lower kurtosis and is more positively skewed. Obviously, the shape of long-run returns does not obey a normal distribution, and the empirical p-value is more relevant to the long-run abnormal return testing. I also show the specification (statistical size) of different long-run return methods in Table 50.1, which is obtained from Table 5 of Barber and Lyon (1997) and Table 3 of Lyon et al. (1999). They show the percentage of 1,000 random samplings of 200 firms rejecting the null hypothesis of no abnormal return in terms of CAR, BHAR, and RBHAR with different benchmark and testing methods. First, the CAR with a matching portfolio as the benchmark has a type I error in the left tail when measuring the abnormal return over 5 years. Second, the matching firm method yields good specification no matter whether we focus on size control, BM control, or a combination control for both size and BM ratio. Third, the skewness-adjusted t-statistic has a type I error, implying that skewness is not the only statistical feature that we should address. Fourth, the empirical p-value method performs well even when adopting the reference portfolio as the benchmark, at least at the 10 % significance level. Figure 50.3 shows the testing power of alternative tests using BHAR as the primary method; this figure is originally plotted as Fig. 1 of Lyon et al. (1999). The empirical distribution performs better in testing power than the classical t-test, whether the standard empirical distribution or the bootstrapped skewness-adjusted t-statistic is employed. In sum, in the event-time approach, the BHAR is suggested. Matching firm is a better matching method than other approaches. In statistical testing, the empirical p-value could be the best way due to its good statistical size control and testing power.
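To make the simulation procedure concrete, here is a hedged Python sketch of the empirical p-value (hypothetical inputs; it assumes each event firm has already been assigned a size/book-to-market pool of nonevent firms and that firm-level long-run returns are precomputed).

```python
import numpy as np

def empirical_p_value(event_returns, matched_pools, n_rep=1000, seed=0):
    """event_returns: long-run returns of the N sample firms.
    matched_pools: list of arrays; matched_pools[k] holds long-run returns of
    nonevent firms in the same size/BM portfolio as sample firm k.
    Returns the mean abnormal return and a one-sided empirical p-value."""
    rng = np.random.default_rng(seed)
    observed = np.mean(event_returns)
    pseudo_means = np.empty(n_rep)
    for r in range(n_rep):
        # Draw one pseudo-sample firm per event firm from its matched portfolio.
        draw = [rng.choice(pool) for pool in matched_pools]
        pseudo_means[r] = np.mean(draw)
    # p-value for a positive abnormal return: share of pseudo means >= observed mean.
    p_pos = np.mean(pseudo_means >= observed)
    return observed - pseudo_means.mean(), p_pos

# Hypothetical example with 200 event firms.
rng = np.random.default_rng(1)
event = rng.normal(0.15, 0.60, 200)                        # sample-firm 3-year returns
pools = [rng.normal(0.10, 0.60, 300) for _ in range(200)]  # matched nonevent returns
print(empirical_p_value(event, pools))
```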
(Histogram panels a-d: frequency distributions of the 1,000 average pseudo-sample-firm returns over 1-year to 4-year horizons; axis values omitted.)
Fig. 50.2 (a) One-year return distribution of size/BM controlled pseudo-sample firms. (b) Two-year return distribution of size/BM controlled pseudo-sample firms. (c) Three-year return distribution of size/BM controlled pseudo-sample firms. (d) Four-year return distribution of size/BM controlled pseudo-sample firms
Table 50.1 Specification (size) for alternative statistical tests

Panel A: Specification (size) for 60-month CARs with different benchmarks
(Theoretical cumulative density function values 0.5/99.5, 2.5/97.5, and 5.0/95.0, corresponding to two-tailed theoretical significance levels of 1 %, 5 %, and 10 %)

Description of return benchmark               0.5    99.5   2.5    97.5   5.0    95.0   Mean   Skew
Size deciles                                  0      2.4*   0.6    8.0*   1.2    14.7*  3.45   1.11
Book-to-market deciles                        0.1    0.7    1.9    4.4*   2.6    7.6*   1.47   1.24
Fifty size/book-to-market portfolios          0.2    1.3*   0.9    5.5*   2.2    10.0*  2.10   1.21
Equally weighted market index                 0.0    5.5*   0.2    17.3*  0.5    25.1*  6.27   1.11
Size-matched control firm                     0.6    0.3    2.1    2.2    5.2    4.3    0.59   0.14
Book-to-market-matched control firm           0.4    0.8    2.9    3.1    5.2    5.4    0.00   0.01
Size-/book-to-market-matched control firm     0.2    0.4    2.4    2.3    4.3    4.3    0.63   0.07
Fama-French three-factor model                0.5    0.3    2.1    2.3    4.9    5.1    0.94   1.76

Panel B: Specification (size) for BHAR with different benchmarks
(Theoretical cumulative density function values 0.5/99.5, 2.5/97.5, and 5/95, corresponding to two-tailed theoretical significance levels of 1 %, 5 %, and 10 %)

Statistic                                   Benchmark                                    0.5    99.5   2.5    97.5   5      95
t-Statistic                                 Rebalanced size/book-to-market portfolio     11.7*  0.0    23.7*  0.0    33.2*  0.2
t-Statistic                                 Buy-and-hold size/book-to-market portfolio   2.4*   0.0    6.1*   0.5    10.5*  1.6
Skewness-adjusted t-statistic               Buy-and-hold size/book-to-market portfolio   1.4*   0.4    4.4*   1.7    8.2*   4.8
t-Statistic                                 Size/book-to-market control firm             0.1    0.1    3.0    1.9    5.4    3.9
Bootstrapped skewness-adjusted t-statistic  Buy-and-hold size/book-to-market portfolio   0.6    1.2*   2.2    3.1    5.0    5.7
Empirical p-value                           Buy-and-hold size/book-to-market portfolio   0.2    1.5*   2.7    3.7*   4.9    6.3

This table is from Table 5 of Barber and Lyon (1997, p. 363) and Table 3 of Lyon et al. (1999, p. 179). The numbers presented represent the percentage of 1,000 random samples of 200 firms that reject the null hypothesis of a 5-year CAR (Panel A) or buy-and-hold abnormal return (Panel B) at the theoretical significance levels of 1 %, 5 %, or 10 % in favor of the alternative hypothesis of a significantly negative abnormal return (i.e., calculated p-value is less than 0.5 % at the 1 % significance level) or a significantly positive abnormal return (calculated p-value is greater than 99.5 % at the 1 % significance level). The alternative statistics and benchmarks are described in detail in the main text. * indicates the percentage is significantly different from the theoretical significance level at the 5 % (Panel A) or 1 % (Panel B) level, one-sided binomial test statistic
(Line chart: percentage of samples rejecting the null (vertical axis, 0-100 %) against the induced level of abnormal return (horizontal axis, −20 % to 20 %), with curves for the empirical p-value, the bootstrapped skewness-adjusted t-statistic, and the t-statistic with a control firm.)
Fig. 50.3 Power of alternative tests in random samples. This figure is from Fig. 1 of Lyon et al. (1999, p. 180). The percentage of 1,000 random samples of 200 firms rejecting the null hypothesis of no annual buy-and-hold abnormal return at various induced levels of abnormal return (horizontal axis) based on the control firm method, bootstrapped skewness-adjusted t-statistic, and empirical p-values
50.3 Long-Run Return Estimation in Calendar-Time Approach
A potential problem in the event-time return estimations is cross-sectional dependence. For example, the buy-and-hold return is computed over a long horizon; it is possible that many sample firms' return windows overlap with each other, creating strong cross-sectional correlations among long-horizon returns. This cross-sectional dependence is even more profound when we have repeated events by the same firm, such as repurchases, SEOs, mergers, and stock splits. In addition, buy-and-hold returns also enlarge the long-run abnormal return because of the inflated returns stemming from the compounding effect. Therefore, the long-run return results might disappear if we apply other methodologies to compute the long-run abnormal return (Mitchell and Stafford 2000). Fama (1998) also argues that the long-run stock return should be examined with a value-weighted factor model, since the buy-and-hold return usually uses an equal-weighted scheme that ignores transaction costs. Although Loughran and Ritter (2000) suggest that the value-weighted factor model is the least powerful test for long-run returns, the calendar-time method can always serve as a robustness check for our long-run return estimation.
To estimate the abnormal return in calendar time, we need to form a monthly portfolio for each calendar month. The portfolio return could be generated by an equal-weighted, value-weighted, or log-value-weighted method (Ikenberry et al. 2000). Based on these monthly returns in calendar time, we are able to carry out the mean monthly abnormal return or factor model analysis.
50.3.1 Return Estimations: Mean Monthly Return and Factor Models
The first method in the calendar-time approach is the mean monthly abnormal return (MMAR). We can start from a 5-year holding strategy on a corporate event. For any given calendar month (e.g., June 2002), we need to include a stock if it had the corporate event in the past 60 months (e.g., looking backward over June 1997 to May 2002). We then need to form a monthly portfolio for this specific calendar month, and the portfolio return could be computed under an equal-weighted, value-weighted, or log-value-weighted scheme. Next, we repeat the abovementioned step for all calendar months throughout the sample period, and then the mean monthly return is the time-series average of the monthly portfolio returns. Similar to the benchmark problem in the event-time approach, we need to select the CRSP market index return, a reference portfolio, or the matching firm as the benchmark for the MMAR. For any benchmark E(Ri), the MMAR is

\[ \mathrm{MMAR} = \frac{\sum_{t=1}^{T}\left(R_t^P - R_t^M\right)}{T}, \quad \text{where} \quad R_t^P = \frac{\sum_{i=1}^{N_t} w_{i,t}\,R_{i,t}}{N_t} \quad \text{and} \quad R_t^M = \frac{\sum_{i=1}^{N_t} w_{i,t}\,E\left(R_{i,t}\right)}{N_t}. \qquad (50.6) \]

T is the number of calendar months in this setting; R_t^P is the monthly portfolio return of the sample firms; R_t^M is the monthly portfolio return of the benchmarks; and N_t is the number of observations in each calendar month. The statistical inference can rely on either the classical t-statistic or the Newey and West (1987) estimation. As suggested by Fama (1998), Mitchell and Stafford (2000), and Schultz (2003), the factor model is a robust method for estimating the long-run stock abnormal return. The standard Fama and French three-factor and Carhart four-factor models could be described as

\[ R_t^P - r_{f,t} = \alpha + \beta\left(r_{m,t} - r_{f,t}\right) + s\,SMB_t + h\,HML_t + e_t, \qquad (50.7) \]

\[ R_t^P - r_{f,t} = \alpha + \beta\left(r_{m,t} - r_{f,t}\right) + s\,SMB_t + h\,HML_t + m\,MOMENTUM_t + e_t, \qquad (50.8) \]
where R_t^P is the sample firm portfolio return for each calendar month and could be obtained from the equal-weighted, value-weighted, or log-value-weighted average. This average return in calendar time is similar to what we compute in the MMAR. r_f is the risk-free rate, usually the short-term Treasury bill rate; r_m is usually computed as the CRSP value-weighted index return; SMB is the small-firm portfolio return minus the big-firm portfolio return; HML is the high book-to-market portfolio return minus the low book-to-market portfolio return; and MOMENTUM is the winner portfolio return minus the loser portfolio return, where winner and loser portfolios are identified by the past 1-year return. SMB, HML, and MOMENTUM are applied to control for the size, book-to-market, and momentum effects, respectively (Fama and French 1992, 1993; Jegadeesh and Titman 1993; Lakonishok et al. 1994; Carhart 1997). The abnormal return is the regression intercept and can be tested based on the t-values or the Newey and West (1987) estimator. Next, I introduce a modification of the factor model analysis: the zero-investment portfolio method. Daniel and Titman (1997) and Eberhart et al. (2004) study the long-run return by using the zero-investment portfolio approach to control for both risk and firm characteristic effects. To form the factor model under a zero-investment portfolio strategy, we have to buy sample stocks and short-sell matching stocks. Taking the Carhart (1997) four-factor model as the example, we have

\[ R_t^P - R_t^M = \left(\alpha^P - \alpha^M\right) + \left(\beta^P - \beta^M\right)\left(r_{m,t} - r_{f,t}\right) + \left(s^P - s^M\right)SMB_t + \left(h^P - h^M\right)HML_t + \left(m^P - m^M\right)MOMENTUM_t + e_t, \qquad (50.9) \]
and we use (α^P − α^M) as the abnormal return controlling for both risks and firm characteristics. It is also the hedging portfolio return controlled for the common risk factors. The matching firm selection criteria can follow the steps in the benchmark problem section. For other modifications of the factor model analysis, Eberhart et al. (2004) provide more examples in their Table 3.
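A minimal sketch of a calendar-time factor regression follows, using the statsmodels OLS API with Newey-West standard errors; the monthly portfolio and factor series are simulated placeholders, and in practice one would use the actual event-portfolio returns and the Fama-French/Carhart factors.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
t = 300  # calendar months

# Hypothetical monthly factor returns and event-portfolio excess returns.
factors = pd.DataFrame({
    "MKT_RF": rng.normal(0.006, 0.045, t),
    "SMB": rng.normal(0.002, 0.030, t),
    "HML": rng.normal(0.003, 0.030, t),
    "MOM": rng.normal(0.005, 0.040, t),
})
port_excess = 0.002 + factors @ [1.0, 0.3, 0.2, -0.1] + rng.normal(0, 0.02, t)

# Eq. 50.8: regress portfolio excess returns on the four factors;
# the intercept (alpha) is the monthly abnormal return.
model = sm.OLS(port_excess, sm.add_constant(factors))
result = model.fit(cov_type="HAC", cov_kwds={"maxlags": 6})  # Newey-West (1987) s.e.
print(result.params["const"], result.pvalues["const"])
```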
50.3.2 Conditional Market Model and Ibbotson's RATS
One major challenge to the standard market model is that the risk loadings are assumed to be unchanged. To estimate the factor loadings, we usually need a long time series, and fixed risk loadings over time are naturally assumed in the OLS analysis. Yet, the magnitude of a firm's risk can change; in particular, many corporate events change the risk of the firm (e.g., R&D increases could be followed by risk increases, and share repurchases could be followed by risk decreases). Accordingly, the conditional market model is needed to address time-varying risk loadings.
There are at least two ways to address time-varying risks in the regression model as a conditional market model. To simplify the problem, I use the CAPM as the first example. First, the systematic risk could change along with some firm characteristics and macroeconomic variables. Petkova and Zhang (2005) use the following regression analysis to estimate the abnormal return:

\[ R_t^P - r_{f,t} = \alpha + \left(b_0 + b_1 DIV_t + b_2 DEF_t + b_3 TERM_t + b_4 TB_t\right)\left(r_{m,t} - r_{f,t}\right) + e_t. \qquad (50.10) \]

Here (b_0 + b_1 DIV_t + b_2 DEF_t + b_3 TERM_t + b_4 TB_t) is the time-varying beta b_t that accommodates the time-varying risk loading. They assume that the risk changes with the dividend yield (DIV), the default spread (DEF), the term spread (TERM), and the short-term Treasury bill rate (TB). Again, the abnormal return is the intercept α. The second method is the rolling regression, as suggested by Petkova and Zhang (2005) and Eberhart et al. (2004). If we have a sample period starting in January 1990, then we can use the portfolio returns in the first 60 months (i.e., January 1990-December 1994) to carry out the Carhart (1997) four-factor regression. We substitute the estimated factor loadings from these 60 monthly returns into the equity premiums in the 61st month (i.e., January 1995) and then obtain the expected portfolio return for the 61st month. Thus, the abnormal return for the 61st month is the portfolio return of the sample firm minus the expected portfolio return. Next, we repeat the abovementioned steps for every month by rolling the return windows. Finally, we estimate the abnormal return as the average of the abnormal returns across time and use the time-series volatility to test the statistical significance. The final method relating to time-varying risk is the Ibbotson (1975) RATS, though it does not belong to the family of factor model analysis. The original setting of Ibbotson's RATS is designed for long-run return estimation, yet recent papers combine this method with the factor model analysis to measure the long-run abnormal return (e.g., Peyer and Vermaelen 2009). Based on the Carhart (1997) four-factor model, we regress the security excess return on the Carhart (1997) four factors for each month in event time. Given a 60-month holding strategy, we have to carry out this regression for the 1st month to the 60th month following the corporate event date. Then, the abnormal return for month t is the intercept of this four-factor regression:

\[ R_{i,t} - r_{f,t} = \alpha_t + \beta_t\left(r_{m,t} - r_{f,t}\right) + s_t\,SMB_t + h_t\,HML_t + m_t\,MOMENTUM_t + e_t. \qquad (50.11) \]

The regression analysis is similar to what I introduce in Eq. 50.8; however, the regression is estimated for every event month t. For every event month or event year, we can obtain the average abnormal return as the average of the intercepts (α_t), which are obtained from a model allowing time-varying risks.
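And a brief sketch of the rolling-regression variant (hypothetical data; a 60-month estimation window whose factor loadings are combined with the next month's factor realizations to form the expected return, as described above):

```python
import numpy as np

def rolling_abnormal_returns(port_ret, factors, rf, window=60):
    """Rolling four-factor regression: estimate loadings on a trailing `window`
    of months, then compute the next month's expected return from those loadings
    (risk-free rate plus loadings times factor realizations)."""
    abnormal = []
    for t in range(window, len(port_ret)):
        y = port_ret[t - window:t] - rf[t - window:t]
        x = np.column_stack([np.ones(window), factors[t - window:t]])
        coef = np.linalg.lstsq(x, y, rcond=None)[0]        # [alpha, b, s, h, m]
        expected = rf[t] + factors[t] @ coef[1:]           # expected return, loadings only
        abnormal.append(port_ret[t] - expected)
    return np.array(abnormal)

# Hypothetical monthly data: 180 months, four factors.
rng = np.random.default_rng(3)
n = 180
factors = rng.normal(0.004, 0.04, size=(n, 4))
rf = np.full(n, 0.002)
port = rf + 0.001 + factors @ np.array([1.0, 0.3, 0.2, -0.1]) + rng.normal(0, 0.02, n)

ar = rolling_abnormal_returns(port, factors, rf)
print(ar.mean(), ar.std(ddof=1) / np.sqrt(len(ar)))        # average AR and its s.e.
```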
50.4 Conclusion
The long-run return studies have been investigated for many corporate events and asset pricing topics in the past two decades. I introduce the long-run stock return methodologies and the statistical inference adopted in recent papers. Two categories of long-run return methods are illustrated: the event-time approach and the calendar-time approach. Under the event-time category, we have methods including the cumulative abnormal return, the buy-and-hold abnormal return, and abnormal returns around earnings announcements. Although the event-time approach can be implemented as an investment strategy in the real world, it also raises more benchmark and statistical inference problems. Under the calendar-time category, we have the mean monthly abnormal return, factor models, and Ibbotson's RATS. Generally, the calendar-time approach is more popular due to its robustness and variety. For any long-run return study, I suggest that combining several of these methodologies may be necessary.
References Agrawal, A., Jaffe, J., & Mandelker, G. (1992). The post-merge performance of acquiring firms: A re-examination of an anomaly. Journal of Finance, 47, 1605–1621. Barber, B., & Lyon, J. (1997). Detecting long-run abnormal stock returns: The empirical power and specification of test statistics. Journal of Financial Economics, 43, 341–372. Bernard, V., & Thomas, J. (1989). Post-earnings-announcement drift: Delayed price response or risk premium. Journal of Accounting Research, 27, 1–36. Brav, A. (2000). Inference in long-horizon event studies: A Bayesian approach with application to initial public offerings. Journal of Finance, 55, 1979–2016. Carhart, M. (1997). On persistence in mutual fund performance. Journal of Finance, 52, 57–82. Chan, K., Ikenberry, D., & Lee, I. (2004). Economic sources of gain in share repurchases. Journal of Financial and Quantitative Analysis, 39, 461–479. Chan, K., Ikenberry, D., Lee, I., & Wang, Y. (2010). Share repurchases as a potential tool to mislead investors. Journal of Corporate Finance, 16, 137–158. Daniel, K., & Titman, S. (1997). Evidence on the characteristics of cross sectional variation in stock returns. Journal of Finance, 52, 1–33. Denis, D., & Sarin, A. (2001). Is the market surprised by poor earnings realizations following seasoned equity offerings? Journal of Financial and Quantitative Analysis, 36(2), 169–193. Eberhart, A., Maxwell, W., & Siddique, A. (2004). An examination of long-term abnormal stock returns and operating performance following R&D increases. Journal of Finance, 59(2), 623–650. Fama, E., & French, K. (1992). The cross-section of expected returns. Journal of Finance, 47, 427–466. Fama, E., & French, K. (1993). Common risk factors in the returns on stocks and bonds. Journal of Financial Economics, 33, 3–56. Fama, E., & French, K. (1996). Multifactor explanations of asset pricing anomalies. Journal of Finance, 51, 55–84. Fama, E. F., (1998). Market efficient, long-term returns, and behavioral finance. Journal of Financial Economics, 49, 283–306. Ibbotson, R. (1975). Price performance of common stock new issues. Journal of Financial Economics, 2, 235–272.
Ikenberry, D., & Ramnath, S. (2002). Underreaction to self-selected news events: The case of stock splits. Review of Financial Studies, 15, 489–526.
Ikenberry, D., Lakonishok, J., & Vermaelen, T. (1995). Market underreaction to open market share repurchases. Journal of Financial Economics, 39, 181–208.
Ikenberry, D., Lakonishok, J., & Vermaelen, T. (2000). Stock repurchases in Canada: Performance and strategic trading. Journal of Finance, 55, 2373–2397.
Jegadeesh, N., & Titman, S. (1993). Returns to buying winners and selling losers: Implications for stock market efficiency. Journal of Finance, 48, 65–91.
Kothari, S. P., & Warner, J. (1997). Measuring long-horizon security price performance. Journal of Financial Economics, 43, 301–339.
La Porta, R., Lakonishok, J., Shleifer, A., & Vishny, R. (1997). Good news for value stocks: Further evidence on market efficiency. Journal of Finance, 52(2), 859–874.
Lakonishok, J., Shleifer, A., & Vishny, R. (1994). Contrarian investment, extrapolation, and risk. Journal of Finance, 49, 1541–1578.
Lee, I. (1997). Do firms knowingly sell overvalued equity? Journal of Finance, 52, 1439–1466.
Loughran, T., & Ritter, J. (2000). Uniformly least powerful tests of market efficiency. Journal of Financial Economics, 55, 361–389.
Lyon, J., Barber, B., & Tsai, C. L. (1999). Improved methods for tests of long-run abnormal stock returns. Journal of Finance, 54, 165–201.
Mitchell, M., & Stafford, E. (2000). Managerial decisions and long-term stock price performance. Journal of Business, 73, 287–329.
Newey, W., & West, K. (1987). A simple, positive semi-definite, heteroskedasticity and autocorrelation consistent covariance matrix. Econometrica, 55, 703–708.
Petkova, R., & Zhang, L. (2005). Is value riskier than growth? Journal of Financial Economics, 78, 187–202.
Peyer, U., & Vermaelen, T. (2009). The nature and persistence of buyback anomalies. Review of Financial Studies, 22, 1693–1745.
Ritter, J. (1991). The long-run performance of initial public offerings. Journal of Finance, 46, 1–27.
Schultz, P. (2003). Pseudo market timing and the long-run underperformance of IPOs. Journal of Finance, 58, 483–517.
Sloan, R. (1996). Do stock prices fully reflect information in accruals and cash flows about future earnings? Accounting Review, 71, 289–315.
51 Value-at-Risk Estimation via a Semi-parametric Approach: Evidence from the Stock Markets
Cheng-Few Lee and Jung-Bin Su
Contents
51.1 Introduction
51.2 Empirical Methodology
  51.2.1 Parametric Method
  51.2.2 Semi-parametric Method
51.3 Evaluation Methods of Model-Based VaR
  51.3.1 Log-Likelihood Ratio Test
  51.3.2 Binary Loss Function or Failure Rate
  51.3.3 Quadratic Loss Function
  51.3.4 The Unconditional Coverage Test
  51.3.5 Unexpected Loss
51.4 Empirical Results
  51.4.1 Data Preliminary Analysis
  51.4.2 Estimation Results for Alternate VaR Models
  51.4.3 The Results of VaR Performance Assessment
51.5 Conclusion
Appendix 1: The Left-Tailed Quantiles of the Standardized SGT
Appendix 2: The Procedure of Parametric VaR Approach
Appendix 3: The Procedure of Semi-parametric VaR Approach
References
C.-F. Lee
Department of Finance and Economics, Rutgers Business School, Rutgers, The State University of New Jersey, Piscataway, NJ, USA
Graduate Institute of Finance, National Chiao Tung University, Hsinchu, Taiwan
e-mail: [email protected]; [email protected]
J.-B. Su (*)
Department of Finance, China University of Science and Technology, Nankang, Taipei, Taiwan
e-mail: [email protected]; [email protected]
Abstract
This study utilizes the parametric approach (GARCH-based models) and the semi-parametric approach of Hull and White (Journal of Risk 1: 5–19, 1998) (HW-based models) to estimate the Value-at-Risk (VaR) and evaluates the accuracy of the VaR estimates for eight stock indices in European and Asian stock markets. The accuracy measures include the unconditional coverage test of Kupiec (Journal of Derivatives 3: 73–84, 1995) as well as two loss functions, the quadratic loss function and the unexpected loss. In the parametric approach, the parameters of the generalized autoregressive conditional heteroskedasticity (GARCH) model are estimated by the method of maximum likelihood, and the quantiles of an asymmetric distribution such as the skewed generalized student's t (SGT) are obtained by the composite trapezoid rule. The VaR is then evaluated within the framework proposed by Jorion (Value at Risk: the new benchmark for managing financial risk. New York: McGraw-Hill, 2000). Turning to the semi-parametric approach of Hull and White (Journal of Risk 1: 5–19, 1998), before performing the traditional historical simulation, the raw return series is scaled by a volatility ratio, where the volatility is estimated by the same procedure as in the parametric approach. Empirical results show that the choice of VaR approach has more influence on the VaR estimates than the choice of return distribution. Moreover, under the same return distributional setting, the HW-based models have better VaR forecasting performance than the GARCH-based models. Furthermore, irrespective of whether the GARCH-based or the HW-based model is employed, the SGT has the best VaR forecasting performance, followed by the student's t, while the normal has the worst VaR forecasting performance. In addition, all models tend to underestimate the real market risk in most cases, but the non-normal distributions (student's t and SGT) and the semi-parametric approach help to reverse this underestimation.
Keywords
Value-at-Risk • Semi-parametric approach • Parametric approach • Generalized autoregressive conditional heteroskedasticity • Skewed generalized student’s t • Composite trapezoid rule • Method of maximum likelihood • Unconditional coverage test • Loss function
51.1
Introduction
Over the last two decades, a number of global and national financial disasters have occurred due to failures in risk management procedures: for instance, the US Savings and Loan crisis of 1989–1991, the collapse of the Japanese asset price bubble in 1990, Black Wednesday of 1992–1993, the 1994 economic crisis in Mexico, the 1997 Asian financial crisis, the 1998 Russian financial crisis, the financial crisis of 2007–2010 followed by the late-2000s recession, and the 2010 European sovereign debt crisis. These crises caused many enterprises to be liquidated and many countries to face near depressions in their economies. Such painful experiences once again underline the importance of accurately measuring financial risks and implementing sound risk management
policies. Value-at-Risk (VaR) has become a widely used measure of the risk of loss on a specific portfolio of financial assets because it attempts to summarize the total risk with a single number. For example, if a portfolio of stocks has a 1-day 99 % VaR of US$1,000, there is a 1 % probability that the portfolio will fall in value by more than US$1,000 over a 1-day period. In other words, we are 99 % certain that we will not lose more than US$1,000 over the next day, where 1 day is the time horizon, 99 % is the confidence level, and US$1,000 is the VaR of the portfolio. VaR estimates are currently based on one of three main approaches: the historical simulation, the parametric method, and the Monte Carlo simulation.

The Monte Carlo simulation is a class of computational algorithms that rely on repeated random sampling to compute their results. Because this approach allows for an essentially unlimited number of possible scenarios, the user is exposed to considerable model risk in determining the likelihood of any given path. In addition, as more variables that could alter the return paths are introduced, model complexity and model risk increase in scale. Like historical simulation, however, this methodology removes any assumption of normality and thus, if modeled accurately, probably gives the most accurate measure of the portfolio's true VaR. Nevertheless, only a few studies, such as Vlaar (2000), have applied this approach to estimate the VaR.

The parametric method is also known as the variance/covariance approach. This method is popular because the only inputs needed for the calculation are the mean and standard deviation of the portfolio, which makes the calculations simple. The parametric method assumes that the returns of the portfolio are normally distributed and serially independent. In practice, this assumption of return normality has proven to be extremely risky; indeed, it was the biggest mistake LTCM made, leading it to gravely underestimate its portfolio risk. Another weakness of this method is its reliance on the stability of the standard deviation through time, as well as the stability of the variance/covariance matrix of the portfolio, whereas it is easy to show that correlations have changed over time, particularly in emerging markets and through contagion in times of financial crisis. Additionally, numerous studies have focused on the parametric approach with generalized autoregressive conditional heteroskedasticity (GARCH) family variance specifications (e.g., RiskMetrics, asymmetric power ARCH (APARCH), exponential GARCH (EGARCH), threshold GARCH (TGARCH), integrated GARCH (IGARCH), and fractionally integrated GARCH (FIGARCH)) to estimate the VaR (see Vlaar (2000), Giot and Laurent (2003a, b), Gencay et al. (2003), Cabedo and Moya (2003), Angelidis et al. (2004), Huang and Lin (2004), Hartz et al. (2006), So and Yu (2006), Sadeghi and Shavvalpour (2006), Bali and Theodossiou (2007), Bhattacharyya et al. (2008), Lee et al. (2008), Lu et al. (2009), Lee and Su (2011), and so on). Lately, in the empirical literature on the parametric VaR approach, several studies have utilized other types of volatility specifications beyond the GARCH family, such as the ARJI-GARCH model (hereafter ARJI) of Chan and Maheu (2002), which combines a GARCH specification of volatility with an autoregressive jump intensity (see Su and Hung (2011), Chang et al. (2011), and so on).
Moreover, other long-memory volatility specifications besides the FIGARCH mentioned above, such as the fractionally integrated APARCH (FIAPARCH) and the hyperbolic GARCH (HYGARCH), have also been used to estimate the VaR (see Aloui and Mabrouk (2010) and Degiannakis et al. (2012)). As to the distribution setting, some special distributions, like the Weibull distribution (see Gebizlioglu et al. (2011)), the asymmetric Laplace distribution (see Chen et al. (2012)), and the Pearson type-IV distribution (see Stavroyiannis et al. (2012)), have also been employed to estimate VaR.

The historical simulation assumes that the past will exactly replicate the future. The VaR calculation of this approach literally ranks all past historical returns from lowest to highest and reads off, at a predetermined confidence level, the worst historical return.1 Several studies, such as Vlaar (2000), Gencay et al. (2003), Cabedo and Moya (2003), and Lu et al. (2009), have applied this approach to estimate the VaR. Even though it is relatively easy to implement, this approach has a couple of shortcomings. First, it imposes the restriction that asset returns are independent and identically distributed (iid), which is not the case; from empirical evidence, it is known that asset returns are clearly not independent, as they exhibit volatility clustering.2 Therefore, it can be unrealistic to assume iid asset returns. The second restriction relates to time. Historical simulation applies equal weight to all returns of the whole period, which is inconsistent with the diminishing predictability of data that are further away from the present. These two shortcomings of historical simulation lead this paper to use the approach proposed by Hull and White (1998) (hereafter, the HW method) as a representative of the semi-parametric approach. This semi-parametric approach combines the abovementioned parametric approach of GARCH-based variance specification with the weighted historical simulation. The weighted historical simulation applies decreasing weights to returns that are further away from the present, which overcomes this inconsistency of historical simulation. Hence, this study utilizes the parametric approach (GARCH-N, GARCH-T, and GARCH-SGT models) and the semi-parametric approach of Hull and White (1998) (HW-N, HW-T, and HW-SGT models), totaling six models, to estimate the VaR for eight stock indices in European and Asian stock markets, and then uses three accuracy measures, namely one likelihood ratio test (the unconditional coverage test (LRuc) of Kupiec (1995)) and two loss functions (the average quadratic loss function (AQLF) of Lopez (1999) and the unexpected loss (UL)), to compare the forecasting ability of the aforementioned models in terms of VaR.
1. This means if you had 200 past returns and you wanted to know with 99 % confidence what's the worst you can do, you would go to the 2nd data point on your ranked series and know that 99 % of the time you will do no worse than this amount.
2. Large changes tend to be followed by large changes, of either sign, and small changes tend to be followed by small changes.
Our results show that the kind of VaR approach is more influential on the VaR estimates than the return distribution setting. Moreover, under the same return distributional setting, the HW-based models have better VaR forecasting performance than the GARCH-based models. Furthermore, irrespective of whether the GARCH-based or the HW-based model is employed, the skewed generalized student's t (SGT) has the best VaR forecasting performance, followed by the student's t, while the normal has the worst VaR forecasting performance. In addition, all models tend to underestimate the real market risk in most cases, but the non-normal distributions (student's t and SGT) and the semi-parametric approach help to reverse this underestimation.

The remainder of this paper is organized as follows. Section 51.2 describes the methodology of the two dissimilar VaR approaches (the parametric and semi-parametric approaches) and the VaR calculations based on them. Section 51.3 provides the criteria for evaluating model-based VaR, and Sect. 51.4 reports on and analyzes the empirical results of the out-of-sample VaR forecasting performance. The final section makes some concluding remarks.
51.2
Empirical Methodology
In this paper, two approaches to calculating VaR are introduced: the parametric method and the semi-parametric method. We use the GARCH(1,1) model with three conditional distributions, namely the normal, student's t, and SGT distributions, to estimate the conditional volatility of each stock index. For the parametric approach, the VaR is then evaluated within the framework of Jorion (2000); for the semi-parametric approach, the VaR is calculated with the volatility-weighting scheme proposed by Hull and White (1998) (hereafter, the HW method), which is a straightforward extension of traditional historical simulation.
51.2.1 Parametric Method
Many financial asset return series appear to exhibit autocorrelation and volatility clustering. Bollerslev et al. (1992) showed that the GARCH(1,1) specification works well in most applied situations. Furthermore, the unconditional distribution of these returns displays leptokurtosis and a moderate amount of skewness. This study therefore considers the GARCH(1,1) model with three conditional distributions, namely the normal, student's t, and SGT distributions, to estimate the corresponding volatility of the different stock indices, and it uses the GARCH model as the representative parametric VaR model.
51.2.1.1 GARCH Model with Normal Distribution
Let r_t = (\ln P_t - \ln P_{t-1}) \times 100, where P_t denotes the stock price and r_t denotes the continuously compounded daily return of the underlying asset at time t. The GARCH(1,1) model with SGT distribution (GARCH-SGT) can be expressed as follows:

r_t = \mu + \varepsilon_t, \qquad \varepsilon_t = e_t \sigma_t, \qquad e_t \sim \mathrm{IID}\ \mathrm{SGT}(0, 1, k, \lambda, n),   (51.1)
\sigma_t^2 = \omega + \alpha \varepsilon_{t-1}^2 + \beta \sigma_{t-1}^2,   (51.2)
where ε_t is the current error, and μ and σ_t² are the conditional mean and variance of the return, respectively. The variance parameters ω, α, and β are to be estimated and obey the constraints ω, α, β > 0 and α + β < 1. IID denotes that the standardized errors e_t are independent and identically distributed. When e_t is drawn from the standard normal distribution, the probability density function of e_t is

f(e_t) = \frac{1}{\sqrt{2\pi}} \exp\!\left(-\frac{e_t^2}{2}\right),   (51.3)
and the log-likelihood function of the GARCH-N model can thus be written as

L(\psi) = \ln f(r_t \mid \Omega_{t-1}; \psi) = -0.5\left[\ln 2\pi + \ln \sigma_t^2 + (r_t - \mu)^2/\sigma_t^2\right],   (51.4)
where ψ = [μ, ω, α, β] is the vector of parameters to be estimated and Ω_{t−1} denotes the information set of all observed returns up to time t − 1. Under the framework of the parametric techniques (Jorion 2000), the 1-day-ahead VaR based on the GARCH-N model can be calculated as

\mathrm{VaR}^{N}_{t+1|t} = \mu + F_c(e_t)\,\hat{\sigma}_{t+1|t},   (51.5)
where F_c(e_t) is the left-tailed quantile at c % of the standardized normal distribution, and σ̂_{t+1|t} is the one-step-ahead forecast of the standard deviation of returns conditional on all information up to time t.
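As an illustration of Eqs. 51.1 to 51.5, the following is a minimal sketch of Gaussian quasi-maximum likelihood estimation of the GARCH(1,1) model and the resulting 1-day-ahead VaR. It is not the authors' code: the function names, starting values, bounds, and initialization of the variance recursion are arbitrary choices, and the stationarity constraint α + β < 1 is not imposed explicitly.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def _garch_n_filter(params, r):
    """Run the GARCH(1,1) recursion of Eq. 51.2 and return (eps, sigma2)."""
    mu, omega, alpha, beta = params
    eps = r - mu
    sigma2 = np.empty_like(r)
    sigma2[0] = r.var()                          # simple initialization of the recursion
    for t in range(1, r.size):
        sigma2[t] = omega + alpha * eps[t - 1] ** 2 + beta * sigma2[t - 1]
    return eps, sigma2

def garch_n_neg_loglik(params, r):
    """Negative Gaussian log-likelihood, summing Eq. 51.4 over t."""
    eps, sigma2 = _garch_n_filter(params, r)
    ll = -0.5 * (np.log(2.0 * np.pi) + np.log(sigma2) + eps ** 2 / sigma2)
    return -ll.sum()

def garch_n_var(returns, coverage=0.01):
    """1-day-ahead parametric VaR of Eq. 51.5 for a long position."""
    r = np.asarray(returns, dtype=float)
    start = np.array([r.mean(), 0.05 * r.var(), 0.05, 0.90])   # illustrative starting values
    bounds = [(None, None), (1e-8, None), (1e-6, 0.999), (1e-6, 0.999)]
    res = minimize(garch_n_neg_loglik, start, args=(r,), method="L-BFGS-B", bounds=bounds)
    mu, omega, alpha, beta = res.x
    eps, sigma2 = _garch_n_filter(res.x, r)
    sigma2_next = omega + alpha * eps[-1] ** 2 + beta * sigma2[-1]   # forecast of sigma^2_{T+1|T}
    return mu + norm.ppf(coverage) * np.sqrt(sigma2_next)
```

For the student's t and SGT cases, only the innovation density in the log-likelihood and the quantile F_c change; the variance recursion and the VaR formula remain the same.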
51.2.1.2 GARCH Model with Student's t Distribution
Since the characteristics of many financial data are non-normal, the student's t distribution is most commonly employed to capture the fat-tailed properties of their empirical distributions. Moreover, Bollerslev (1986) argued that using the student's t distribution as the conditional distribution of the GARCH model is more satisfactory, since it exhibits thicker tails and larger kurtosis than the normal distribution. Under the same specifications of the mean and variance equations as the GARCH-N model, the probability density function of the standardized student's t distribution can be represented as follows:

f(e_t) = \frac{\Gamma(0.5(n+1))}{\Gamma(0.5 n)\sqrt{\pi (n-2)}}\left(1 + \frac{e_t^2}{n-2}\right)^{-\frac{n+1}{2}},   (51.6)
where Γ(·) is the gamma function and n is the shape parameter. Hence, the log-likelihood function of the GARCH-T model can be expressed as
L(\psi) = \ln f(r_t \mid \Omega_{t-1}; \psi) = \ln\!\left[\frac{\Gamma(0.5(n+1))}{\Gamma(0.5 n)\sqrt{\pi (n-2)}}\right] - \ln \sigma_t - \frac{n+1}{2}\ln\!\left[1 + \frac{1}{n-2}\left(\frac{r_t - \mu}{\sigma_t}\right)^2\right],   (51.7)
where ψ = [μ, ω, α, β, n] is the vector of parameters to be estimated. The 1-day-ahead VaR based on the GARCH-T model can be obtained as

\mathrm{VaR}^{T}_{t+1|t} = \mu + F_c(e_t; n)\,\hat{\sigma}_{t+1|t},   (51.8)
where F_c(e_t; n) denotes the left-tailed quantile at c % of the standardized student's t distribution with shape parameter n.
51.2.1.3 GARCH Model with Skewed Generalized Student's t Distribution
This study also employs the SGT distribution of Theodossiou (1998), which allows the return innovation to follow a flexible treatment of both skewness and excess kurtosis in the conditional distribution of returns. Under the same specifications of the mean and variance as the GARCH-N model, the probability density function of the standardized SGT distribution is derived by Lee and Su (2011) and can be represented as follows:

f(e_t) = C\left[1 + \frac{|e_t + \delta|^{k}}{\left(1 + \mathrm{sign}(e_t + \delta)\lambda\right)^{k}\theta^{k}}\right]^{-\frac{n+1}{k}},   (51.9)
where

\theta = S(\lambda)^{-1}\, B\!\left(\tfrac{1}{k}, \tfrac{n}{k}\right)^{\frac{1}{2}} B\!\left(\tfrac{3}{k}, \tfrac{n-2}{k}\right)^{-\frac{1}{2}}, \qquad S(\lambda) = \sqrt{1 + 3\lambda^{2} - 4A^{2}\lambda^{2}},

A = B\!\left(\tfrac{2}{k}, \tfrac{n-1}{k}\right) B\!\left(\tfrac{1}{k}, \tfrac{n}{k}\right)^{-\frac{1}{2}} B\!\left(\tfrac{3}{k}, \tfrac{n-2}{k}\right)^{-\frac{1}{2}}, \qquad \delta = \frac{2\lambda A}{S(\lambda)}, \qquad C = \frac{k}{2\theta}\, B\!\left(\tfrac{1}{k}, \tfrac{n}{k}\right)^{-1},

and where k, n, and λ are scaling parameters and C and θ are normalizing constants ensuring that f(·) is a proper p.d.f. The parameters k and n control the height and tails of the density, with constraints k > 0 and n > 2, respectively. The skewness parameter λ controls the rate of descent of the density around the mode of e_t, with −1 < λ < 1. In the case of positive (resp. negative) skewness, the density function skews toward the right (resp. left). sign(·) is the sign function, and B(·) is the beta function. The parameter n has the degrees-of-freedom interpretation in the case λ = 0 and k = 2. The log-likelihood function of the GARCH-SGT model can thus be written as

L(\psi) = \ln f(r_t \mid \Omega_{t-1}; \psi) = \ln C - \ln \sigma_t - \frac{n+1}{k}\ln\!\left[1 + \frac{\left|\frac{r_t - \mu}{\sigma_t} + \delta\right|^{k}}{\left(1 + \mathrm{sign}\!\left(\frac{r_t - \mu}{\sigma_t} + \delta\right)\lambda\right)^{k}\theta^{k}}\right],   (51.10)
where ψ = [μ, ω, α, β, k, λ, n] is the vector of parameters to be estimated, and Ω_{t−1} denotes the information set of all observed returns up to time t − 1. Under the framework of the parametric techniques (Jorion 2000), the 1-day-ahead VaR based on the GARCH-SGT model can be calculated as

\mathrm{VaR}^{SGT}_{t+1|t} = \mu + F_c(e_t; k, \lambda, n)\,\hat{\sigma}_{t+1|t},   (51.11)
where F_c(e_t; k, λ, n) denotes the left-tailed quantile at c % of the standardized SGT distribution with shape parameters k, λ, and n; it can be evaluated by a numerical integration method (the composite trapezoid rule).3 In particular, the SGT distribution reduces to the student's t distribution for λ = 0 and k = 2, and it reduces to the normal distribution for λ = 0, k = 2, and n → ∞.
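To make the numerical quantile step concrete, the fragment below is a minimal sketch of how a left-tailed quantile can be computed for any standardized density by combining the composite trapezoid rule with a bisection search. It is illustrative only: the truncation point, grid size, and function names are assumptions, and the standard normal density is used as a stand-in so the example runs without an SGT implementation.

```python
import numpy as np

def left_tail_quantile(pdf, c, lower=-40.0, tol=1e-6):
    """Find q such that the CDF implied by `pdf` equals c (e.g., c = 0.01).
    The CDF is approximated with the composite trapezoid rule on [lower, q],
    and q is located by bisection; `lower` truncates the left tail."""
    def cdf(q, n_grid=20001):
        x = np.linspace(lower, q, n_grid)
        return np.trapz(pdf(x), x)            # composite trapezoid rule
    lo, hi = lower, 0.0                       # for small c the quantile of a centered density is negative
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if cdf(mid) < c:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Usage with the standard normal density as a stand-in for the standardized SGT density:
normal_pdf = lambda x: np.exp(-0.5 * x ** 2) / np.sqrt(2.0 * np.pi)
q01 = left_tail_quantile(normal_pdf, 0.01)    # approximately -2.326
```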
51.2.2 Semi-parametric Method
In this paper, we use the approach proposed by Hull and White (1998) (hereafter, the HW method) as a representative of the semi-parametric approach. The HW method is a straightforward extension of traditional historical simulation. Instead of using the actual historical percentage changes in market variables to calculate VaR, we use historical changes that have been adjusted to reflect the ratio of the current daily volatility to the daily volatility at the time of the observation, where the variance of each market variable over the period covered by the historical data is monitored using a GARCH model. The methodology consists of the following three steps (a code sketch follows this list):
1. Use the raw return series {r_1, r_2, r_3, ..., r_T} to fit the GARCH(1,1) models with the alternative distributions described in Sect. 51.2.1, obtaining a series of daily volatility estimates {σ_1, σ_2, σ_3, ..., σ_T}, where T is the number of estimation samples.
2. Form the modified return series by multiplying each raw return by the ratio of the current daily volatility to the daily volatility at the time of the observation, σ_T/σ_i. That is, the modified return series is {r_1*, r_2*, r_3*, ..., r_T*}, where r_i* = r_i (σ_T/σ_i).
3. Sort the modified returns in ascending order to obtain the empirical distribution; the VaR is the percentile that corresponds to the specified confidence level.
The HW-GARCH-SGT model (simply called HW-SGT) applies the HW approach to the returns scaled with the volatility series of the GARCH-SGT model; the HW-N and HW-T models are defined analogously.
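The scaling and sorting in steps 2 and 3 reduce to a few lines of code. The sketch below assumes the conditional volatility series from step 1 is already available; the function name, and the choice to scale by σ_T rather than by a one-step-ahead volatility forecast, follow the description above and are not taken from the authors' programs.

```python
import numpy as np

def hull_white_var(returns, sigma, coverage=0.01):
    """Semi-parametric (HW) VaR for a long position: scale each historical return
    by the ratio of the current volatility to the volatility prevailing when the
    return was observed, then read the VaR off the empirical distribution.

    `returns` : raw return series r_1..r_T
    `sigma`   : conditional volatility estimates sigma_1..sigma_T from a fitted
                GARCH(1,1) model (any of the distributions in Sect. 51.2.1)
    """
    returns = np.asarray(returns, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    scaled = returns * (sigma[-1] / sigma)        # r_i* = r_i * (sigma_T / sigma_i)
    return np.quantile(scaled, coverage)          # left-tail percentile of the scaled returns
```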
51.3
Evaluation Methods of Model-Based VaR
Many financial institutions have been required to hold capital against their market risk exposure, while the market risk capital requirements are based on the VaR estimates
3. See Faires and Burden (2003) for more details.
generated by the financial institutions' own risk management models. The accuracy of these VaR estimates is therefore of concern to both financial institutions and their regulators, and model accuracy matters to all VaR model users. To compare the forecasting ability of the aforementioned models in terms of VaR, this study considers three accuracy measures: the unconditional coverage test of Kupiec (1995), which is quite standard in the literature, together with the quadratic loss function and the unexpected loss, which are introduced below and used to determine the accuracy of model-based VaR measurements.
51.3.1 Log-Likelihood Ratio Test
Before comparing the performance of the alternative VaR models, we can use the log-likelihood ratio test to examine, for any two nested models, which one better matches the actual data; this can be regarded as a preliminary analysis. The log-likelihood ratio test is a statistical test used to compare the fit of two models, one of which (the null model) is a special case of the other (the alternative model). The test is based on the likelihood ratio, which expresses how many times more likely the data are under one model than under the other. This log-likelihood ratio can then be used to compute a p-value, or compared to a critical value, to decide whether to reject the null model in favor of the alternative model. The log-likelihood ratio test LR_N (LR_T), used to test the null hypothesis that log-returns are normally (student's t) distributed against the alternative hypothesis, is given by

LR_N \ \text{or}\ LR_T = -2(LR_r - LR_u) \sim \chi^2(m),   (51.12)
where LR_r and LR_u are, respectively, the maximum values of the log-likelihood under the null hypothesis (the restricted model) and under the alternative hypothesis (the unrestricted model), and m is the number of restricted parameters in the restricted model. For example, LR_N for the GARCH-SGT model can be used to test the null hypothesis that log-returns are normally distributed against the alternative hypothesis that they are SGT distributed. The null hypothesis of normality is H_0: k = 2, λ = 0, and n → ∞, and the alternative hypothesis is H_1: k ∈ R+, n > 2, and |λ| < 1. Restated, LR_N = −2(LR_r − LR_u) ~ χ²(3), where LR_r and LR_u are, respectively, the maximum log-likelihood values of the restricted model (the GARCH-N model) and the unrestricted model (the GARCH-SGT model), and m is the number of restricted parameters (k = 2, λ = 0, and n → ∞), equal to 3 in this case. By the same reasoning, LR_N for the GARCH-T model follows the χ²(1) distribution with one degree of freedom, and LR_T for the GARCH-SGT model follows the χ²(2) distribution with two degrees of freedom.
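As a small worked illustration of Eq. 51.12, the following sketch computes the LR statistic and its chi-squared p-value for two nested models; the function name is hypothetical. If the LL values reported later in Table 51.2 for the ATX are read as log-likelihoods of −2,648.73 (GARCH-N) and −2,608.01 (GARCH-SGT), the statistic reproduces the reported LR_N of 81.44 with m = 3.

```python
from scipy.stats import chi2

def likelihood_ratio_test(ll_restricted, ll_unrestricted, n_restrictions):
    """LR statistic of Eq. 51.12 for nested models (e.g., GARCH-N nested in GARCH-SGT).
    `ll_restricted` and `ll_unrestricted` are maximized log-likelihood values."""
    lr = -2.0 * (ll_restricted - ll_unrestricted)
    p_value = chi2.sf(lr, df=n_restrictions)      # survival function = 1 - CDF
    return lr, p_value

# Illustrative call (values assumed from Table 51.2, ATX, GARCH-N vs. GARCH-SGT):
# lr, p = likelihood_ratio_test(-2648.73, -2608.01, 3)   # lr is approximately 81.44
```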
51.3.2 Binary Loss Function or Failure Rate
If the predicted VaR is not able to cover the realized loss, this is termed a violation. The binary loss function (BLF) is merely the reflection of the unconditional coverage test and gives a penalty of one to each violation of the VaR. The BLF for a long position can be defined as follows:

BL_{t+1} = \begin{cases} 1 & \text{if } r_{t+1} < \mathrm{VaR}_{t+1|t}, \\ 0 & \text{if } r_{t+1} \ge \mathrm{VaR}_{t+1|t}, \end{cases}   (51.13)

where BL_{t+1} represents the 1-day-ahead BLF for a long position. If a VaR model truly provides the level of coverage defined by its confidence level, then the average binary loss function (ABLF), or failure rate, over the full sample will equal c for the (1 − c)th percentile VaR.
51.3.3 Quadratic Loss Function
The quadratic loss function (QLF) of Lopez (1999) penalizes violations differently from the binary loss function and pays attention to the magnitude of the violation. The QLF for a long position can be expressed as

QL_{t+1} = \begin{cases} 1 + (r_{t+1} - \mathrm{VaR}_{t+1|t})^2 & \text{if } r_{t+1} < \mathrm{VaR}_{t+1|t}, \\ 0 & \text{if } r_{t+1} \ge \mathrm{VaR}_{t+1|t}, \end{cases}   (51.14)
where QL_{t+1} represents the 1-day-ahead QLF for a long position. The quadratic term in Eq. 51.14 ensures that large violations are penalized more than small violations, which provides a more powerful measure of model accuracy than the binary loss function.
51.3.4 The Unconditional Coverage Test
Kupiec (1995) proposes the unconditional coverage test, a likelihood ratio test of model accuracy that is identical to a test of the null hypothesis that the probability of failure for each trial (p̂) equals the specified model probability (p). The likelihood ratio test statistic is given by

LR_{uc} = -2\ln\!\left[\frac{p^{n_1}(1-p)^{n_0}}{\hat{p}^{\,n_1}(1-\hat{p})^{n_0}}\right] \sim \chi^2(1),   (51.15)

where p̂ = n_1/(n_0 + n_1) is the maximum likelihood estimate of p, n_1 denotes a Bernoulli random variable representing the total number of VaR violations, and n_0 + n_1
represents the full sample size. The LRuc test can be employed to test whether the sample point estimate is statistically consistent with the VaR model’s prescribed confidence level or not.
51.3.5 Unexpected Loss
The unexpected loss (UL) equals the average magnitude of the violations over the full sample. The magnitude of the violation for a long position is given by

L_{t+1} = \begin{cases} r_{t+1} - \mathrm{VaR}_{t+1|t} & \text{if } r_{t+1} < \mathrm{VaR}_{t+1|t}, \\ 0 & \text{if } r_{t+1} \ge \mathrm{VaR}_{t+1|t}, \end{cases}   (51.16)
where L_{t+1} is the 1-day-ahead magnitude of the violation for a long position.
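Because Sects. 51.3.2 through 51.3.5 all operate on the same pair of series (realized returns and 1-day-ahead VaR forecasts), they can be computed together. The sketch below is a minimal illustration under the long-position definitions of Eqs. 51.13 to 51.16; the function name and the dictionary keys are assumptions.

```python
import numpy as np
from scipy.stats import chi2

def backtest_var(returns, var_forecasts, p=0.01):
    """Failure rate, Kupiec (1995) LRuc, Lopez (1999) AQLF, and unexpected loss (UL)
    for aligned arrays of realized returns r_{t+1} and forecasts VaR_{t+1|t}."""
    r = np.asarray(returns, dtype=float)
    v = np.asarray(var_forecasts, dtype=float)
    hit = r < v                                       # binary loss function, Eq. 51.13
    n1 = hit.sum()
    n0 = hit.size - n1
    failure_rate = n1 / hit.size
    # Kupiec unconditional coverage test, Eq. 51.15
    p_hat = failure_rate
    log_lik_null = n1 * np.log(p) + n0 * np.log(1.0 - p)
    log_lik_alt = n1 * np.log(p_hat) + n0 * np.log(1.0 - p_hat) if 0 < p_hat < 1 else 0.0
    lr_uc = -2.0 * (log_lik_null - log_lik_alt)
    p_value = chi2.sf(lr_uc, df=1)
    # Lopez average quadratic loss, Eq. 51.14, and unexpected loss, Eq. 51.16
    aqlf = np.where(hit, 1.0 + (r - v) ** 2, 0.0).mean()
    ul = np.where(hit, r - v, 0.0).mean()
    return {"failure_rate": failure_rate, "LRuc": lr_uc, "p_value": p_value,
            "AQLF": aqlf, "UL": ul}
```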
51.4
Empirical Results
The study data comprise daily prices of the following eight stock indices: the Austria ATX (6/29/1999–8/10/2009), the Belgium Brussels (10/19/1999–8/10/2009), the France CAC40 (10/22/1999–8/10/2009), and the Switzerland Swiss (9/8/1999–8/10/2009) in Europe, and the India Bombay (7/8/1999–8/10/2009), the Malaysia KLSE (6/23/1999–8/10/2009), the South Korea KOSPI (6/21/1999–8/10/2009), and the Singapore STRAITS (8/24/1999–8/10/2009) in Asia, where the numbers in parentheses are the start and end dates of our sample. Daily closing spot prices for the study period, totaling 2,500 observations per index, were obtained from http://finance.yahoo.com. The stock returns are defined as the first difference of the logarithms of daily stock prices, multiplied by 100.
51.4.1 Data Preliminary Analysis
Table 51.1 summarizes the basic statistical characteristics of the return series for both the estimation and forecast periods. Notably, the average daily returns are all negative (resp. positive) for the forecast (resp. estimation) period and are very small compared with the standard deviations, indicating high volatility. Except for Brussels in the estimation period and the CAC40, Swiss, and Bombay indices in the forecast period, the return series exhibit negative skewness for both the estimation and forecast periods. The excess kurtosis significantly exceeds zero at the 1 % level in every case, indicating a leptokurtic characteristic. Furthermore, the J-B normality test statistics are all significant at the 1 % level, rejecting the hypothesis of normality and confirming that none of the return series is normally distributed. Moreover, the Ljung-Box Q2(20) statistics for the squared returns are all significant at the 1 %
Table 51.1 Descriptive statistics of daily returns

Panel A. Estimation period (2,000 observations)
Index     Mean      Std. dev.  Max.      Min.       Skewness   Kurtosis  J-B         Q2(20)
ATX       0.0675    0.9680     4.6719    -7.7676    -0.6673c   4.4547c   1,802.17c   547.23c
Brussels  0.0179    1.1577     9.3339    -5.6102    0.2607c    6.0567c   3,079.66c   1,479.84c
CAC40     0.0090    1.4012     7.0022    -7.6780    -0.0987a   2.9924c   749.48c     2,270.73c
Swiss     0.0086    1.1602     6.4872    -5.7803    -0.0530    4.5084c   1,694.78c   1,985.51c
Bombay    0.0648    1.5379     7.9310    -11.8091   -0.5632c   4.2350c   1,600.43c   707.26c
KLSE      0.0243    0.9842     5.8504    -6.3422    -0.3765c   6.2537c   3,306.35c   554.16c
KOSPI     0.0397    1.8705     7.6971    -12.8046   -0.4671c   3.6010c   1,153.39c   365.29c
STRAITS   0.0239    1.1282     4.9052    -9.0949    -0.5864c   4.8254c   2,055.03c   239.53c

Panel B. Forecast period (500 observations)
Index     Mean      Std. dev.  Max.      Min.       Skewness   Kurtosis  J-B         Q2(20)
ATX       -0.1352   2.6532     12.0210   -10.2526   -0.0360    2.4225c   122.37c     735.66c
Brussels  -0.1195   1.9792     9.2212    -8.3192    -0.0888    3.1545c   207.97c     581.42c
CAC40     -0.0907   2.1564     10.5945   -9.4715    0.2209b    4.4068c   408.65c     353.15c
Swiss     -0.0704   1.8221     10.7876   -8.1077    0.2427b    4.4101c   410.10c     502.59c
Bombay    -0.0101   2.6043     15.9899   -11.6044   0.2529b    3.4248c   249.70c     57.24c
KLSE      -0.0166   1.2040     4.2586    -9.9785    -1.1163c   9.8218c   2,113.64c   17.87c
KOSPI     -0.0327   2.1622     11.2843   -11.1720   -0.4177c   4.3977c   417.47c     343.50c
STRAITS   -0.0594   2.0317     7.5305    -9.2155    -0.1183    2.3827c   119.45c     219.52c
Notes: 1. a, b, and c denote significance at the 10 %, 5 %, and 1 % levels, respectively. 2. J-B statistics are based on Jarque and Bera (1987) and are asymptotically chi-squared distributed with 2 degrees of freedom. 3. Q2(20) statistics are asymptotically chi-squared distributed with 20 degrees of freedom
level and thus indicate that the return series exhibit linear dependence and strong ARCH effects. Therefore, the preliminary analysis of the data suggests the use of a GARCH model to capture the fat tails and time-varying volatility found in these stock index return series. Descriptive graphs (levels of spot prices and densities of the daily returns against the normal distribution) for each stock index are illustrated in Fig. 51.1a–h. As shown in Fig. 51.1, all stock indices experienced a severe slide in price levels and display volatile bear markets in the forecast period. Moreover, comparing the density graphs against the normal distribution shows that each return distribution exhibits non-normal characteristics. This provides evidence in favor of skewed, leptokurtic, and fat-tailed return distributions. These results are in line with those of Table 51.1.
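The summary statistics and diagnostic tests reported in Table 51.1 can be reproduced along the following lines. This is a rough sketch: the Jarque-Bera and Ljung-Box statistics are computed from their textbook formulas, and small-sample refinements used by particular software packages may differ.

```python
import numpy as np

def descriptive_stats(returns, lags=20):
    """Mean, standard deviation, skewness, excess kurtosis, the Jarque-Bera
    normality statistic, and a Ljung-Box Q(lags) statistic on squared returns
    (a simple check for ARCH effects), as in Table 51.1."""
    r = np.asarray(returns, dtype=float)
    n = r.size
    mu, sd = r.mean(), r.std(ddof=1)
    z = (r - mu) / sd
    skew = (z ** 3).mean()
    exc_kurt = (z ** 4).mean() - 3.0
    jb = n / 6.0 * (skew ** 2 + exc_kurt ** 2 / 4.0)        # ~ chi2(2) under normality
    r2 = r ** 2
    r2 = r2 - r2.mean()
    q = 0.0
    for k in range(1, lags + 1):
        rho_k = np.sum(r2[k:] * r2[:-k]) / np.sum(r2 ** 2)  # autocorrelation of squared returns
        q += rho_k ** 2 / (n - k)
    q *= n * (n + 2)                                        # Ljung-Box Q ~ chi2(lags)
    return {"mean": mu, "std": sd, "skewness": skew,
            "excess_kurtosis": exc_kurt, "JB": jb, f"Q2({lags})": q}
```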
51.4.2 Estimation Results for Alternate VaR Models
This section estimates the GARCH(1,1) model with the alternative distributions (normal, student's t, and SGT) used for the VaR analysis. For each data series, the three GARCH models are estimated with a sample of 2,000 daily returns, and the estimation period is then rolled forward by adding one new day and dropping the most distant day.
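The rolling design (a 2,000-day estimation window followed by 500 one-day-ahead forecasts) can be sketched as follows. It reuses the illustrative garch_n_var helper from the earlier GARCH-N sketch; in practice the same loop would be run for the student's t and SGT likelihoods and for the HW variants.

```python
import numpy as np

def rolling_var_forecasts(returns, window=2000, n_forecasts=500, coverage=0.01):
    """Rolling one-day-ahead VaR forecasts: re-estimate on the most recent `window`
    observations, forecast VaR for the next day, then roll the window forward.
    `garch_n_var` is the illustrative helper defined in the earlier sketch."""
    returns = np.asarray(returns, dtype=float)
    forecasts = []
    for i in range(n_forecasts):
        sample = returns[i:i + window]                  # drop the oldest day, add the newest
        forecasts.append(garch_n_var(sample, coverage))
    realized = returns[window:window + n_forecasts]     # returns the forecasts are compared with
    return np.asarray(forecasts), realized
```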
Fig. 51.1 The stock index level and daily return density (versus the normal) over the whole sample: (a) ATX, (b) Brussels, (c) CAC40, (d) Swiss, (e) Bombay, (f) KLSE, (g) KOSPI, (h) STRAITS stock indices
In this procedure, according to the theory of Sect. 51.2, the out-of-sample VaR is computed for the next 500 days. Table 51.2 lists the estimation results4 of the GARCH-N, GARCH-T, and GARCH-SGT models for the ATX, Brussels, CAC40, and Swiss stock indices in Europe and the Bombay, KLSE, STRAITS, and KOSPI stock indices in Asia during the first in-sample period. The variance coefficients ω, α, and β are all positive and almost all significant at the 1 % level. Furthermore, the sums of the parameters α and β for these three models are less than one, ensuring that the conditions for covariance stationarity hold. As to the fat-tail parameter of the student's t distribution, n ranges from 4.9906 (KLSE) to 14.9758 (CAC40) for the GARCH-T model. These shape parameters are all significant at the 1 % level and obey the constraint n > 2, implying that the distribution of returns has larger, thicker tails than the normal distribution. Turning to the shape parameters of the SGT distribution, the fat-tail parameter n ranges from 4.9846 (KLSE) to 21.4744 (KOSPI), and the fat-tail parameter k is between 1.5399 (KOSPI) and 2.3917 (Bombay). The skewness parameter λ ranges from −0.1560 (Bombay) to −0.0044 (KLSE). Moreover, these coefficients are almost all significant at the 1 % level, and the negative skewness parameters imply that the distribution of returns has a longer left tail. Therefore, neither fat tails nor skewness can be ignored in modeling these stock index returns. The Ljung-Box Q2(20) statistics for the squared standardized residuals are all insignificant at the 10 % level, indicating that serial correlation does not exist in the standardized residuals and confirming that the GARCH(1,1) specification in these models is sufficient to correct the serial correlation of these eight return series in the conditional variance equation. Moreover, as shown in Table 51.2, the LRN statistics for both the GARCH-T and GARCH-SGT models are all significant at the 1 % level, indicating that the null hypothesis of normality is rejected for every stock index. These results imply that both the student's t and SGT distributions approximate the empirical return series more closely than the normal distribution does. Furthermore, except for the ATX and KLSE stock indices, the LRT statistics of the GARCH-SGT model are all significant, implying that the SGT distribution approximates the empirical return series more closely than the student's t does. To sum up, the SGT distribution most closely approximates the empirical return series, followed by the student's t and normal distributions.
51.4.3 The Results of VaR Performance Assessment In this paper, we utilize the parametric approach (GARCH-N, GARCH-T, and GARCH-SGT models) and the semi-parametric approach (HW-N, HW-T, and HW-SGT models), totaling six models, to estimate the VaR.; thereafter, it was compared with the observed return, and both results were recorded. This section
4
The parameters are estimated by QMLE (quasi-maximum likelihood estimation; QMLE) and the BFGS optimization algorithm, using the econometric package of WinRATS 6.1.
Table 51.2 Estimation results for alternative models (estimation period) ATX Brussels Panel A. GARCH(1,1) with normal distribution 0.0760c(0.0177) m 0.1007c(0.0202) c o 0.0612 (0.0124) 0.0239c(0.0032) c a 0.1146 (0.0158) 0.1470c(0.0090) c b 0.8209 (0.0227) 0.8354c(0.0034) 2 Q (20) 16.147 14.731 LL 2648.73 2649.75 Panel B. GARCH(1,1) with student’s t distribution m 0.0998c(0.0178) 0.0775c(0.0162) c o 0.0623 (0.0160) 0.0179c(0.0049) c a 0.0986 (0.0188) 0.1319c(0.0177) c b 0.8324 (0.0292) 0.8560c(0.0180) c n 7.3393 (1.0609) 9.4946c(1.7035) 2 Q (20) 17.334 19.676 LL 2610.15 2626.31 LRN 77.16c 46.88c Panel C. GARCH(1,1) with SGT distribution m 0.0875c(0.0177) 0.0691c(0.0158) c o 0.0626 (0.0173) 0.0175c(0.0044) c a 0.0952 (0.0189) 0.1277c(0.0163) b 0.8343c(0.0323) 0.8590c(0.0161) n 7.8001c(2.4847) 6.9261c(1.7169) b l 0.0660 (0.0290) 0.1019c(0.0346) k 1.9710c(0.2290) 2.3745c(0.2782) 2 Q (20) 17.779 20.509 LL 2608.01 2621.51 LRN (LRT) 81.44c(4.28) 56.48c(9.6c) Bombay KLSE Panel A. GARCH(1,1) with normal distribution 0.0453c(0.0164) m 0.1427c(0.0262) c o 0.0906 (0.0201) 0.0077c(0.0029) c a 0.1438 (0.0167) 0.0998c(0.0174) c b 0.8189 (0.0206) 0.8989c(0.0165) 2 Q (20) 19.954 27.905 LL 3453.18 2490.05 Panel B. GARCH(1,1) with student’s t distribution m 0.1583c(0.0250) 0.0351b(0.0142) c o 0.0863 (0.0222) 0.0116b(0.0050) c a 0.1417 (0.0198) 0.1116c(0.0254) c b 0.8225 (0.0234) 0.8848c(0.0248) c n 8.3410 (1.3143) 4.9906c(0.5782)
CAC40
Swiss
0.0537b(0.0227) 0.0143c(0.0029) 0.0799c(0.0100) 0.9132c(0.0105) 22.333 3156.40
0.0488c(0.0187) 0.0243c(0.0057) 0.1184c(0.0136) 0.8632c(0.0150) 19.883 2748.86
0.0622c(0.0216) 0.0118c(0.0043) 0.0785c(0.0110) 0.9166c(0.0109) 14.9758c(4.0264) 21.719 3146.96 18.88c
0.0589c(0.0167) 0.0177c(0.0052) 0.1078c(0.0151) 0.8799c(0.0153) 9.6205c(1.6621) 18.712 2724.27 49.18c
0.0525b(0.0217) 0.0115c(0.0043) 0.0774c(0.0103) 0.9170c(0.0106) 13.3004b(6.3162) 0.1175c(0.0335) 2.1219c(0.2220) 21.803 3140.58 31.64c(12.76c) KOSPI
0.0479c(0.0175) 0.0172c(0.0048) 0.1086c(0.0148) 0.8787c(0.0150) 7.8987c(2.1709) 0.1175c(0.0323) 2.2601c(0.2519) 18.791 2717.94 61.84c(12.66c) STRAITS
0.1212c(0.0318) 0.0214c(0.0082) 0.0799c(0.0146) 0.9177c(0.0141) 11.214 3843.01
0.0623c(0.0196) 0.0143c(0.0042) 0.1031c(0.0134) 0.8938c(0.0123) 15.574 2895.36
0.1377c(0.0302) 0.0163b(0.0078) 0.0639c(0.0128) 0.9332c(0.0127) 7.2792c(1.1365)
0.0652c(0.0192) 0.0135c(0.0049) 0.0806c(0.0136) 0.9119c(0.0139) 6.7643c(0.9319) (continued)
Table 51.2 (continued) Bombay KLSE 19.980 24.477 Q2(20) LL 3420.18 2412.82 LRN 66.0c 154.46c Panel C. GARCH(1,1) with SGT distribution m 0.1266c(0.0261) 0.0341b(0.0147) c o 0.0836 (0.0201) 0.0116b(0.0049) c a 0.1350 (0.0196) 0.1117c(0.0242) c b 0.8282 (0.0228) 0.8847c(0.0240) c n 6.2282 (1.3602) 4.9846c(1.0922) c l 0.1560 (0.0303) 0.0044(0.0281) k 2.3917c(0.2801) 2.0016c(0.2450) 2 Q (20) 21.167 24.455 LL 3408.46 2412.81 LRN (LRT) 89.44c(23.44c) 154.48c(0.02)
KOSPI 11.304 3801.67 82.68c
STRAITS 16.554 2838.34 114.04c
0.1021c(0.0285) 0.0167b(0.0077) 0.0613c(0.0133) 0.9345c(0.0135) 21.4744(15.8310) 0.1006c(0.0266) 1.5399c(0.1513) 11.067 3791.66 102.7c(20.02c)
0.0516c(0.0185) 0.0132c(0.0045) 0.0785c(0.0135) 0.9138c(0.0136) 6.2641c(1.4297) 0.0745b(0.0296) 2.1194c(0.2256) 16.595 2835.52 119.68c(5.64a)
Notes: 1.a, b andc denote significantly at the 10 %, 5 %, and 1 % levels, respectively. 2. Numbers in parentheses are standard errors. 3. LL indicates the log-likelihood value. 4. The critical value of the LRN test statistics at the 1 % significance level is 6.635 for GARCH-T and 11.345 for GARCH-SGT model. 5. The critical value of the LRT test statistics at the 10 %, 5 %, and 1 % significance level is 4.605, 5.991, and 9.210 for GARCH-SGT model, respectively. 6. Q2(20) statistics are asymptotically chi-squared distributed with 20 degrees of freedom
then uses three accuracy measures, namely one likelihood ratio test (the unconditional coverage test (LRuc) of Kupiec (1995)) and two loss functions (the average quadratic loss function (AQLF) of Lopez (1999) and the unexpected loss (UL)), to compare the forecasting ability of the aforementioned models in terms of VaR. Figure 51.2 graphically illustrates the long VaR forecasts of the GARCH-N, GARCH-T, and GARCH-SGT models at the alternative levels (95 %, 99 %, and 99.5 %) for all stock indices. Tables 51.3, 51.4, and 51.5 provide the failure rates and the results of the three accuracy evaluation measures (LRuc, AQLF, and UL) for the aforementioned six models at the 95 %, 99 %, and 99.5 % confidence levels, respectively. As observed in Tables 51.3, 51.4, and 51.5, except for a few cases at the 99 % and 99.5 % confidence levels, all models tend to underestimate real market risk, because the empirical failure rate is higher than the theoretical failure rate in most cases. The exceptional cases are the GARCH-SGT model at the 99 % level (CAC40); the GARCH-T and GARCH-SGT models at the 99.5 % level (KLSE and STRAITS); the HW-N (KLSE), HW-T (KLSE), and HW-SGT (KLSE and STRAITS) models at the 99 % level; and the HW-N (STRAITS), HW-T (ATX and STRAITS), and HW-SGT (ATX, KLSE, and STRAITS) models at the 99.5 % level, where the stock indices in parentheses are the exceptional cases. Moreover, the empirical failure rates of these exceptional cases are lower than the theoretical failure rate, indicating that the non-normal distributions (student's t and SGT) and the semi-parametric approach help to reverse the tendency to underestimate real market risk, especially at the 99.5 % level.
Fig. 51.2 Long VaR forecasts at alternative levels for the normal, student's t, and SGT distributions: (a) ATX, (b) Brussels, (c) CAC40, (d) Swiss, (e) Bombay, (f) KLSE, (g) KOSPI, (h) STRAITS stock indices
As to back-testing: back-testing is a specific type of historical testing that determines how a strategy would have performed had it actually been employed during past periods and market conditions. In this paper, the unconditional coverage test (LRuc) proposed by Kupiec (1995) is employed to test whether the unconditional coverage rate is statistically consistent with the VaR model's prescribed confidence level, and it therefore serves as the back-test for measuring the accuracy of these six VaR models. The following illustrates how to read the acceptance results in Tables 51.3, 51.4, and 51.5. In Table 51.3, the VaR estimates based on the GARCH-N, GARCH-T, and GARCH-SGT models have, respectively, a total of 2 (KLSE and STRAITS), 2 (KLSE and STRAITS), and 5 (Brussels, Bombay, KLSE, STRAITS, and KOSPI) acceptances under the LRuc test when applied to all stock index returns at the 95 % confidence level, where the stock indices in parentheses are the acceptance cases. For the 99 % confidence level, Table 51.4 shows that the GARCH-N, GARCH-T, and GARCH-SGT models pass the LRuc test for a total of 1, 5, and 8 stock indices, respectively; for the 99.5 % confidence level, Table 51.5 shows that the GARCH-N, GARCH-T, and GARCH-SGT models pass the LRuc test for a total of 3, 7, and 7 stock indices, respectively. Hence, across all confidence levels, there are in total 6, 14, and 20 acceptances for the GARCH-N, GARCH-T, and GARCH-SGT models (the parametric approach), respectively. In contrast, for the 95 % confidence
Table 51.3 Out-of-sample long VaR performance at the 95 % confidence level GARCH
HW-GARCH
AQLF
UL
0.30360 0.3 2058 0.29877
–0.10151 –0.10865 –0.10212
0.0960(17.75) 0.0920(15.04) 0.0980(19.18)
0.30642 0.30342 0.32263
–0.10308 –0.10384 –0.10858
0.20246 0.19494 0.18178
–0.07099 –0.07020 –0.06602
0.0680(3.08*) 0.0620(1.41*) 0.0620(1.41*)
0.18374 0.16978 0.17067
–0.06673 –0.06268 –0.06262
0.21446 0.22598 0.21146
–0.06485 –0.06887 –0.06281
0.0700(3.76*) 0.0720(4.51) 0.0700(3.76*)
0.20928 0.19510 0.19390
–0.05844 –0.05617 –0.05600
0.18567 0.18729 0.17506
–0.06181 –0.06345 –0.05698
0.0620(1.41*) 0.0600(0.99*) 0.0560(0.36*)
0.15104 0.14222 0.13928
–0.05088 –0.04694 –0.04697
0.34428 0.35999 0.31488
–0.10214 –0.10617 –0.09366
0.0800(8.07) 0.0780(7.10) 0.0800(8.07)
0.33643 0.33182 0.33629
–0.09967 –0.09639 –0.09824
0.20084 0.21827 0.21375
–0.04595 –0.05152 –0.05007
0.0740(5.31) 0.0700(3.76*) 0.0640(1.90*)
0.23255 0.23146 0.22378
–0.05623 –0.05276 –0.05306
0.0580(0.64*) 0.0640(1.90*) 0.0620(1.41*)
0.21980 0.25978 0.24670
–0.06233 –0.07222 –0.06736
0.0640(1.90*) 0.0620(1.41*) 0.0560(0.36*)
0.24194 0.23921 0.23035
–0.06675 –0.06511 –0.06428
Panel H. KOSPI N 0.0740(5.31) T 0.0760(6.18) SGT 0.0672(2.82*)
0.27949 0.32360 0.27616
–0.08826 –0.09624 –0.08236
0.0600(0.99*) 0.0620(1.41*) 0.0620(1.41*)
0.24267 0.25234 0.24475
–0.07816 –0.07522 –0.07531
Failure rate (LRuc) Panel A. ATX N 0.0960(17.75) T 0.0960(17.75) SGT 0.0900(13.75)
Failure rate (LRuc)
AQLF
UL
Panel B. Brussels N T SGT
0.0780(7.10) 0.0720(4.51) 0.0660(2.45*)
Panel C. CAC40 N T SGT
0.0740(5.31) 0.0800(8.07) 0.0760(6.18)
Panel D. Swiss N T SGT
0.0760(6.18) 0.0740(5.31) 0.0740(5.31)
Panel E. Bombay N T SGT
0.0780(7.10) 0.0820(9.11) 0.0700(3.76*)
Panel F. KLSE N T SGT
0.0560(0.36*) 0.0600(0.99*) 0.0580(0.64*)
Panel G. STRAITS N T SGT
Notes: 1. *Indicates that the model passes the unconditional coverage test at the 5 % significance level and the critical value of the LRuc test statistics at the 5 % significance level is 3.84. 2. The red (resp. blue) font represents the lowest (resp. highest) AQLF and unexpected loss when the predictive accuracies of three different innovations with the same VaR method are compared. 3. The delete-line font represents the lowest AQLF and unexpected loss when the predictive accuracies of two different VaR methods with the same innovation are compared. 4. The model acronyms stand for the following methods: HW-GARCH non-parametric method proposed by Hull and White (1998), GARCH parametric method of GARCH model, N the standard normal distribution, T the standardized student’s t distribution, SGT the standardized SGT distribution proposed by Theodossiou (1998)
Table 51.4 Out-of-sample long VaR performance at the 99 % confidence level GARCH Failure rate (LRuc) AQLF
UL
HW-GARCH Failure rate (LRuc) AQLF
UL
Panel A. ATX N 0.0300(13.16) T 0.0200(3.91) SGT 0.0160(1.53*)
0.20577 0.19791 0.17702
–0.02704 –0.01986 –0.01725
0.0160(1.53*) 0.0160(1.53*) 0.0160(1.53*)
0.06164 0.04977 0.05279
–0.01770 –0.01551 –0.01626
Panel B. Brussels N 0.0260(8.97) 0.0180(2.61*) T SGT 0.0160(1.53*)
0.13491 0.11962 0.11033
–0.02538 –0.01947 –0.01711
0.0180(2.61*) 0.0160(1.53*) 0.0160(1.53*)
0.03685 0.03465 0.03536
–0.01726 –0.01559 –0.01596
Panel C. CAC40 N 0.0200(3.91) 0.0100(0.00*) T SGT 0.0060(0.94*)
0.14794 0.14325 0.13144
–0.01913 –0.01494 –0.01354
0.0160(1.53*) 0.0100(0.00*) 0.0140(0.71*)
0.07472 0.05628 0.05992
–0.01808 –0.01436 –0.01466
Panel D. Swiss N 0.0260(8.97) 0.0160(1.53*) T SGT 0.0140(0.71*)
0.12387 0.11943 0.10151
–0.01967 –0.01516 –0.01244
0.0180(2.61*) 0.0140(0.71*) 0.0140(0.71*)
0.03938 0.03528 0.03512
–0.01353 –0.01341 –0.01338
Panel E. Bombay N 0.0300(13.16)
0.23811
–0.03748
0.0120(0.18*)
0.04961
–0.01751
T
0.0220(5.41)
0.22706
–0.02822
0.0120(0.18*)
0.04539
–0.01586
SGT
0.0180(2.61*)
0.19232
–0.02152
0.0120(0.18*)
0.04504
–0.01617
Panel F. KLSE 0.0160(1.53*) N 0.0100(0.00*) T SGT 0.0100(0.00*)
0.16522 0.16891 0.16278
–0.01892 –0.01585 –0.01551
0.0080(0.21*) 0.0060(0.94*) 0.0060(0.94*)
0.07681 0.08114 0.07965
–0.01341 –0.01425 –0.01384
Panel G. STRAITS N 0.0240(7.11) 0.0100(0.00*) T *) 0.0100(0.00 SGT
0.16096 0.17606 0.16258
–0.01848 –0.01568 –0.01406
0.0120(0.18*) 0.0100(0.00*) 0.0080(0.21*)
0.09107 0.07477 0.07278
–0.01763 –0.01403 –0.01361
Panel H. KOSPI N 0.0220(5.41) T 0.0200(3.91) SGT 0.0163(1.68*)
0.18050 0.20379 0.15465
–0.02675 –0.02199 –0.01563
0.0200(3.91) 0.0180(2.61*) 0.0180(2.61*)
0.03799 0.04722 0.04487
–0.01639 –0.01942 –0.01941
Note: 1. *indicates that the model passes the unconditional coverage test at the 5 % significance level and the critical value of the LRuc test statistics at the 5 % significance level is 3.84. 2. The red (resp. blue) font represents the lowest (resp. highest) AQLF and unexpected loss when the predictive accuracies of three different innovations with the same VaR method are compared. 3. The delete-line font represents the lowest AQLF and unexpected loss when the predictive accuracies of two different VaR methods with the same innovation are compared. 4. The model acronyms stand for the following methods: HW-GARCH non-parametric method proposed by Hull and White (1998), GARCH parametric method of GARCH model, N the standard normal distribution, T the standardized student’s t distribution, SGT the standardized SGT distribution proposed by Theodossiou (1998)
Table 51.5 Out-of-sample long VaR performance at the 99.5 % confidence level
(Each row gives, for the GARCH model: failure rate (LRuc), AQLF, UL; then, for the HW-GARCH model: failure rate (LRuc), AQLF, UL)

Panel A. ATX
  N     0.0960(17.75)   0.30360   –0.10151  |  0.0960(17.75)   0.30642   –0.10308
  T     0.0960(17.75)   0.32058   –0.10865  |  0.0920(15.04)   0.30342   –0.10384
  SGT   0.0900(13.75)   0.29877   –0.10212  |  0.0980(19.18)   0.32263   –0.10858
Panel B. Brussels
  N     0.0780(7.10)    0.20246   –0.07099  |  0.0680(3.08*)   0.18374   –0.06673
  T     0.0720(4.51)    0.19494   –0.07020  |  0.0620(1.41*)   0.16978   –0.06268
  SGT   0.0660(2.45*)   0.18178   –0.06602  |  0.0620(1.41*)   0.17067   –0.06262
Panel C. CAC40
  N     0.0740(5.31)    0.21446   –0.06485  |  0.0700(3.76*)   0.20928   –0.05844
  T     0.0800(8.07)    0.22598   –0.06887  |  0.0720(4.51)    0.19510   –0.05617
  SGT   0.0760(6.18)    0.21146   –0.06281  |  0.0700(3.76*)   0.19390   –0.05600
Panel D. Swiss
  N     0.0760(6.18)    0.18567   –0.06181  |  0.0620(1.41*)   0.15104   –0.05088
  T     0.0740(5.31)    0.18729   –0.06345  |  0.0600(0.99*)   0.14222   –0.04694
  SGT   0.0740(5.31)    0.17506   –0.05698  |  0.0560(0.36*)   0.13928   –0.04697
Panel E. Bombay
  N     0.0780(7.10)    0.34428   –0.10214  |  0.0800(8.07)    0.33643   –0.09967
  T     0.0820(9.11)    0.35999   –0.10617  |  0.0780(7.10)    0.33182   –0.09639
  SGT   0.0700(3.76*)   0.31488   –0.09366  |  0.0800(8.07)    0.33629   –0.09824
Panel F. KLSE
  N     0.0560(0.36*)   0.20084   –0.04595  |  0.0740(5.31)    0.23255   –0.05623
  T     0.0600(0.99*)   0.21827   –0.05152  |  0.0700(3.76*)   0.23146   –0.05276
  SGT   0.0580(0.64*)   0.21375   –0.05007  |  0.0640(1.90*)   0.22378   –0.05306
Panel G. STRAITS
  N     0.0580(0.64*)   0.21980   –0.06233  |  0.0640(1.90*)   0.24194   –0.06675
  T     0.0640(1.90*)   0.25978   –0.07222  |  0.0620(1.41*)   0.23921   –0.06511
  SGT   0.0620(1.41*)   0.24670   –0.06736  |  0.0560(0.36*)   0.23035   –0.06428
Panel H. KOSPI
  N     0.0740(5.31)    0.27949   –0.08826  |  0.0600(0.99*)   0.24267   –0.07816
  T     0.0760(6.18)    0.32360   –0.09624  |  0.0620(1.41*)   0.25234   –0.07522
  SGT   0.0672(2.82*)   0.27616   –0.08236  |  0.0620(1.41*)   0.24475   –0.07531

Note: 1. * indicates that the model passes the unconditional coverage test at the 5 % significance level; the critical value of the LRuc test statistic at the 5 % significance level is 3.84. 2. The red (resp. blue) font represents the lowest (resp. highest) AQLF and unexpected loss when the predictive accuracies of the three different innovations with the same VaR method are compared. 3. The delete-line (strikethrough) font represents the lowest AQLF and unexpected loss when the predictive accuracies of the two different VaR methods with the same innovation are compared. 4. The model acronyms stand for the following methods: HW-GARCH, the non-parametric method proposed by Hull and White (1998); GARCH, the parametric GARCH method; N, the standard normal distribution; T, the standardized Student's t distribution; SGT, the standardized SGT distribution proposed by Theodossiou (1998)
level, Table 51.3 shows that the HW-N, HW-T, and HW-SGT models pass the LRuc test for 5, 5, and 6 stock indices, respectively. Moreover, at the 99 % confidence level, Table 51.4 shows that the HW-N, HW-T, and HW-SGT models pass the LRuc test for 7, 8, and 8 stock indices, respectively; at the 99.5 % confidence level, Table 51.5 shows that the HW-N, HW-T, and HW-SGT models pass the LRuc test for 7, 8, and 8 stock indices, respectively. Hence, across all confidence levels there are 19, 21, and 22 acceptances for the HW-N, HW-T, and HW-SGT models (the semi-parametric approach), respectively.

From these results, two important patterns emerge. First, under the same return distribution, the number of acceptances of the HW-based models is greater than or equal to that of the GARCH-based models, whether an individual level (95 %, 99 %, or 99.5 %) or all levels together are considered. For example, over all levels, the number of acceptances of the HW-N model (19) is greater than that of the GARCH-N model (6). These results reveal that the HW-based models (the semi-parametric approach) have better VaR forecasting performance than the GARCH-based models (the parametric approach). Second, the number of acceptances is greatest for the SGT distribution, followed by the Student's t and normal distributions, irrespective of whether the GARCH-based (parametric) or HW-based (semi-parametric) model is employed. For instance, over all levels, the number of acceptances of the GARCH-SGT model (20) is the greatest, followed by the GARCH-T model (14) and the GARCH-N model (6). These results indicate that the SGT has the best VaR forecasting performance, followed by the Student's t, while the normal has the worst.

Turning to the other two accuracy measures, the two loss functions, the average quadratic loss function (AQLF) and the unexpected loss (UL), reflect the magnitude of the violations that occur when the observed loss exceeds the VaR estimate; the smaller the AQLF and UL, the better the forecasting performance of a model. As observed in Tables 51.3, 51.4, and 51.5, two patterns similar to those of the back-testing emerge. First, under the same return distribution, the AQLF and UL generated by the HW-based models are smaller than those generated by the GARCH-based models at the 95 %, 99 %, and 99.5 % levels alike. These results confirm that the HW-based models (the semi-parametric approach) have better VaR forecasting performance than the GARCH-based models (the parametric approach), in line with the back-testing results. Second, among the GARCH-based models, for all confidence levels the GARCH-SGT model yields the lowest AQLF and UL for most of the stock indices. Moreover, for most of the stock indices the GARCH-N model produces the highest AQLF and UL at the 99 % and 99.5 % levels, while the GARCH-T model gives the highest AQLF and UL at the 95 % level. These results indicate that the GARCH-SGT model has the best out-of-sample VaR performance, while the GARCH-N model appears to have the worst.
On the contrary, among the HW-based models, for all confidence levels the HW-N model bears the highest AQLF and UL for most of the stock indices, while the HW-SGT model gives the lowest AQLF and UL for half of the stock indices, indicating that the HW-N model has the worst out-of-sample VaR performance and the HW-SGT model the best. Consequently, it seems reasonable to conclude that the SGT has the best VaR forecasting performance, followed by the Student's t, while the normal has the worst, which is consistent with the back-testing results. To sum up, according to the three accuracy measures, the HW-based models (the semi-parametric approach) have better VaR forecasting performance than the GARCH-based models (the parametric approach), and the SGT has the best VaR forecasting performance, followed by the Student's t, while the normal has the worst. In addition, the choice of VaR approach is more influential on the VaR estimate than the choice of return distribution.
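As a concrete check on the back-testing figures above, the short Python sketch below reproduces one LRuc entry of Table 51.4 using the standard Kupiec (1995) likelihood-ratio form (the chapter's Eq. 51.15 is not reproduced here, so this is offered as an illustration rather than the authors' code).

```python
# Worked example of the Kupiec (1995) unconditional coverage statistic (LRuc).
import numpy as np

def lr_uc(n_obs, n_viol, p0):
    """Likelihood-ratio statistic for H0: true failure probability equals p0."""
    p_hat = n_viol / n_obs
    log_lik_null = n_viol * np.log(p0) + (n_obs - n_viol) * np.log(1 - p0)
    log_lik_alt = n_viol * np.log(p_hat) + (n_obs - n_viol) * np.log(1 - p_hat)
    return 2.0 * (log_lik_alt - log_lik_null)

# GARCH-N on the ATX at the 99 % level: a failure rate of 0.0300 over 500
# out-of-sample days means 15 violations; the statistic exceeds the 5 %
# critical value 3.84, so the model is rejected, as reported in Table 51.4.
print(round(lr_uc(500, 15, 0.01), 2))   # 13.16
```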
51.5 Conclusion
This study utilizes the parametric approach (the GARCH-N, GARCH-T, and GARCH-SGT models) and the semi-parametric approach of Hull and White (1998) (the HW-N, HW-T, and HW-SGT models), six models in total, to estimate the VaR of eight stock indices in European and Asian stock markets. It then uses three accuracy measures, one likelihood ratio test (the unconditional coverage test (LRuc) of Kupiec (1995)) and two loss functions (the average quadratic loss function (AQLF) of Lopez (1999) and the unexpected loss (UL)), to compare the VaR forecasting ability of these models. The empirical findings can be summarized as follows. First, according to the results of the log-likelihood ratio test, the SGT distribution most closely approximates the empirical return series, followed by the Student's t and normal distributions. Second, in terms of the failure rate, all models tend to underestimate real market risk in most cases, but the non-normal distributions (Student's t and SGT) and the semi-parametric approach mitigate this underestimation, especially at the 99.5 % level. Third, the choice of VaR approach is more influential on the VaR estimate than the choice of return distribution. Moreover, under the same return distribution, the HW-based models (the semi-parametric approach) have better VaR forecasting performance than the GARCH-based models (the parametric approach). Finally, irrespective of whether the GARCH-based (parametric) or HW-based (semi-parametric) model is employed, the SGT has the best VaR forecasting performance, followed by the Student's t, while the normal has the worst.
Appendix 1: The Left-Tailed Quantiles of the Standardized SGT

The standardized SGT distribution was derived by Lee and Su (2011) and is expressed as follows:

$$ f(\varepsilon_t) = C\left[1 + \frac{|\varepsilon_t + \delta|^{\kappa}}{\left(1 + \operatorname{sign}(\varepsilon_t + \delta)\,\lambda\right)^{\kappa}\,\theta^{\kappa}}\right]^{-\frac{n+1}{\kappa}} \qquad (51.17) $$

where

$$ \theta = \frac{1}{S(\lambda)}\,B\!\left(\frac{1}{\kappa},\frac{n}{\kappa}\right)^{1/2} B\!\left(\frac{3}{\kappa},\frac{n-2}{\kappa}\right)^{-1/2}, \qquad S(\lambda)=\sqrt{1+3\lambda^{2}-4A^{2}\lambda^{2}}, $$

$$ A = B\!\left(\frac{2}{\kappa},\frac{n-1}{\kappa}\right) B\!\left(\frac{1}{\kappa},\frac{n}{\kappa}\right)^{-0.5} B\!\left(\frac{3}{\kappa},\frac{n-2}{\kappa}\right)^{-0.5}, \qquad \delta = \frac{2\lambda A}{S(\lambda)}, \qquad C = \frac{\kappa}{2\theta}\,B\!\left(\frac{1}{\kappa},\frac{n}{\kappa}\right)^{-1}, $$

and where κ, n, and λ are scaling parameters and C and θ are normalizing constants ensuring that f(·) is a proper p.d.f. The parameters κ and n control the height and tails of the density, with constraints κ > 0 and n > 2, respectively. The skewness parameter λ controls the rate of descent of the density around the mode of εt, with −1 < λ < 1. In the case of positive (resp. negative) skewness, the density function skews toward the right (resp. left). sign(·) is the sign function and B(·) is the beta function. The parameter n has the degrees-of-freedom interpretation in the case λ = 0 and κ = 2. In particular, the SGT distribution generates the Student's t distribution for λ = 0 and κ = 2, and the normal distribution for λ = 0, κ = 2, and n → ∞.

As observed from Table 51.2, among the shape parameters of the SGT distribution, the fat-tail parameter n ranges from 4.9846 (KLSE) to 21.4744 (KOSPI), and the fat-tail parameter κ is between 1.5399 (KOSPI) and 2.3917 (Bombay). The skewness parameter λ ranges from −0.1560 (Bombay) to 0.0044 (KLSE). Therefore, the left-tailed quantiles of the SGT distribution with various combinations of shape parameters (−0.15 ≤ λ ≤ 0.05; 1.0 ≤ κ ≤ 2.0; n = 10) at alternate levels are obtained by the composite trapezoid rule and are listed in Table 51.6. Moreover, Fig. 51.3 depicts the left-tailed quantile surface of the SGT (versus normal) distribution with various combinations of shape parameters (−0.25 ≤ λ ≤ 0.25; 0.8 ≤ κ ≤ 2.0; n = 10 and 20) at the 10 %, 5 %, 1 %, and 0.5 % levels. Notably, Fc(εt; κ = 2, λ = 0, n → ∞), where c = 0.1, 0.05, 0.01, and 0.005 in Fig. 51.3, represents the left-tailed quantile of the normal distribution at the 10 %, 5 %, 1 %, and 0.5 % levels, which is −1.28155, −1.64486, −2.32638, and −2.57613, respectively.
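For readers who want to reproduce such quantiles numerically, the sketch below (Python) is an independent reimplementation of Eq. 51.17 rather than the WinRATS code behind Table 51.6; it finds a left-tail quantile by integrating the density. For κ = 2, λ = 0, n = 10 the distribution reduces to a standardized Student's t, which gives a simple check.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq
from scipy.special import beta as B

def sgt_pdf(eps, k, lam, n):
    """Standardized (zero-mean, unit-variance) SGT density of Eq. 51.17."""
    A = B(2 / k, (n - 1) / k) * B(1 / k, n / k) ** -0.5 * B(3 / k, (n - 2) / k) ** -0.5
    S = np.sqrt(1 + 3 * lam ** 2 - 4 * A ** 2 * lam ** 2)
    theta = (1 / S) * B(1 / k, n / k) ** 0.5 * B(3 / k, (n - 2) / k) ** -0.5
    delta = 2 * lam * A / S
    C = k / (2 * theta * B(1 / k, n / k))
    z = eps + delta
    skew = 1 + np.sign(z) * lam
    return C * (1 + np.abs(z) ** k / (skew ** k * theta ** k)) ** (-(n + 1) / k)

def sgt_left_quantile(alpha, k, lam, n):
    """Left-tail quantile: solve F(q) = alpha, with F computed by numerical integration."""
    cdf = lambda x: quad(sgt_pdf, -np.inf, x, args=(k, lam, n))[0]
    return brentq(lambda x: cdf(x) - alpha, -15.0, 0.0)

# Check: for kappa = 2, lambda = 0, n = 10 the 5% quantile should match a
# standardized Student's t with 10 degrees of freedom (about -1.62).
print(sgt_left_quantile(0.05, k=2.0, lam=0.0, n=10))
```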
Appendix 2: The Procedure of Parametric VaR Approach

The parametric method is very popular because the only inputs needed for the calculation are the mean and standard deviation of the portfolio, which makes the computation simple.
Table 51.6 The left-tailed quantiles of SGT distribution with n = 10 and various combinations (κ, λ) at alternate levels

  κ       λ = −0.15          λ = −0.10          λ = −0.05          λ = 0.00           λ = 0.05
Panel A. 10 % level
  1.0   -1.6848(-1.1536)   -1.6443(-1.1359)   -1.6003(-1.1165)   -1.5532(-1.0958)   -1.5033(-1.0739)
  1.1   -1.7005(-1.1831)   -1.6615(-1.1659)   -1.6193(-1.1472)   -1.5742(-1.1273)   -1.5266(-1.1062)
  1.2   -1.7103(-1.2066)   -1.6729(-1.1901)   -1.6324(-1.1721)   -1.5893(-1.1530)   -1.5439(-1.1329)
  1.3   -1.7160(-1.2257)   -1.6800(-1.2098)   -1.6413(-1.1927)   -1.6001(-1.1744)   -1.5568(-1.1552)
  1.4   -1.7189(-1.2414)   -1.6843(-1.2261)   -1.6471(-1.2097)   -1.6078(-1.1923)   -1.5665(-1.1740)
  1.5   -1.7196(-1.2544)   -1.6863(-1.2398)   -1.6508(-1.2241)   -1.6132(-1.2075)   -1.5738(-1.1901)
  1.6   -1.7189(-1.2652)   -1.6869(-1.2512)   -1.6528(-1.2363)   -1.6168(-1.2204)   -1.5792(-1.2039)
  1.7   -1.7171(-1.2744)   -1.6864(-1.2610)   -1.6536(-1.2467)   -1.6191(-1.2316)   -1.5831(-1.2159)
  1.8   -1.7147(-1.2822)   -1.6850(-1.2694)   -1.6536(-1.2557)   -1.6205(-1.2413)   -1.5860(-1.2264)
  1.9   -1.7116(-1.2888)   -1.6831(-1.2766)   -1.6528(-1.2635)   -1.6211(-1.2498)   -1.5881(-1.2356)
  2.0   -1.7083(-1.2946)   -1.6807(-1.2828)   -1.6516(-1.2704)   -1.6211(-1.2573)   -1.5894(-1.2438)
Panel B. 5 % level
  1.0   -1.6848(-1.7252)   -1.6443(-1.6851)   -1.6003(-1.6418)   -1.5532(-1.5955)   -1.5033(-1.5468)
  1.1   -1.7005(-1.7349)   -1.6615(-1.6966)   -1.6193(-1.6553)   -1.5742(-1.6114)   -1.5266(-1.5652)
  1.2   -1.7103(-1.7397)   -1.6729(-1.7031)   -1.6324(-1.6639)   -1.5893(-1.6222)   -1.5439(-1.5785)
  1.3   -1.7160(-1.7413)   -1.6800(-1.7064)   -1.6413(-1.6690)   -1.6001(-1.6294)   -1.5568(-1.5880)
  1.4   -1.7189(-1.7406)   -1.6843(-1.7073)   -1.6471(-1.6716)   -1.6078(-1.6340)   -1.5665(-1.5947)
  1.5   -1.7196(-1.7385)   -1.6863(-1.7065)   -1.6508(-1.6726)   -1.6132(-1.6368)   -1.5738(-1.5995)
  1.6   -1.7189(-1.7353)   -1.6869(-1.7047)   -1.6528(-1.6723)   -1.6168(-1.6382)   -1.5792(-1.6027)
  1.7   -1.7171(-1.7318)   -1.6864(-1.7021)   -1.6536(-1.6711)   -1.6191(-1.6387)   -1.5831(-1.6048)
  1.8   -1.7147(-1.7270)   -1.6850(-1.6990)   -1.6536(-1.6694)   -1.6205(-1.6383)   -1.5860(-1.6061)
  1.9   -1.7116(-1.7224)   -1.6831(-1.6955)   -1.6528(-1.6671)   -1.6211(-1.6375)   -1.5881(-1.6067)
  2.0   -1.7083(-1.7177)   -1.6807(-1.6918)   -1.6516(-1.6646)   -1.6211(-1.6362)   -1.5894(-1.6068)
Panel C. 1 % level
  1.0   -3.1972(-3.1305)   -3.0964(-3.0358)   -2.9871(-2.9336)   -2.8702(-2.8247)   -2.7467(-2.7100)
  1.1   -3.1281(-3.0522)   -3.0323(-2.9630)   -2.9289(-2.8673)   -2.8188(-2.7657)   -2.7029(-2.6589)
  1.2   -3.0623(-2.9797)   -2.9711(-2.8956)   -2.8732(-2.8057)   -2.7693(-2.7106)   -2.6602(-2.6109)
  1.3   -3.0005(-2.9132)   -2.9136(-2.8336)   -2.8207(-2.7489)   -2.7224(-2.6596)   -2.6195(-2.5661)
  1.4   -2.9430(-2.8522)   -2.8600(-2.7768)   -2.7717(-2.6968)   -2.6785(-2.6126)   -2.5811(-2.5247)
  1.5   -2.8896(-2.7963)   -2.8103(-2.7247)   -2.7261(-2.6489)   -2.6376(-2.5693)   -2.5452(-2.4864)
  1.6   -2.8402(-2.7451)   -2.7643(-2.6769)   -2.6839(-2.6049)   -2.5995(-2.5295)   -2.5117(-2.4511)
  1.7   -2.7945(-2.6980)   -2.7216(-2.6329)   -2.6447(-2.5644)   -2.5641(-2.4927)   -2.4804(-2.4184)
  1.8   -2.7522(-2.6547)   -2.6821(-2.5924)   -2.6083(-2.5270)   -2.5312(-2.4588)   -2.4512(-2.3881)
  1.9   -2.7129(-2.6147)   -2.6454(-2.5550)   -2.5745(-2.4925)   -2.5006(-2.4274)   -2.4240(-2.3600)
  2.0   -2.6764(-2.5777)   -2.6113(-2.5204)   -2.5431(-2.4606)   -2.4720(-2.3983)   -2.3986(-2.3339)
Panel D. 0.5 % level
  1.0   -3.9229(-3.7705)   -3.7941(-3.6512)   -3.6542(-3.5224)   -3.5043(-3.3851)   -3.3459(-3.2405)
  1.1   -3.8012(-3.6396)   -3.6789(-3.5277)   -3.5470(-3.4077)   -3.4064(-3.2802)   -3.2583(-3.1464)
  1.2   -3.6897(-3.5224)   -3.5736(-3.4173)   -3.4490(-3.3050)   -3.3168(-3.1862)   -3.1780(-3.0618)
  1.3   -3.5882(-3.4176)   -3.4778(-3.3185)   -3.3599(-3.2131)   -3.2352(-3.1020)   -3.1047(-2.9859)
  1.4   -3.4961(-3.3235)   -3.3910(-3.2299)   -3.2791(-3.1307)   -3.1612(-3.0263)   -3.0380(-2.9175)
  1.5   -3.4124(-3.2389)   -3.3121(-3.1502)   -3.2057(-3.0564)   -3.0938(-2.9581)   -2.9773(-2.8557)
  1.6   -3.3363(-3.1625)   -3.2403(-3.0782)   -3.1389(-2.9893)   -3.0325(-2.8963)   -2.9219(-2.7997)
  1.7   -3.2669(-3.0932)   -3.1749(-3.0129)   -3.0779(-2.9284)   -2.9765(-2.8403)   -2.8712(-2.7488)
  1.8   -3.2034(-3.0301)   -3.1150(-2.9534)   -3.0221(-2.8730)   -2.9252(-2.7891)   -2.8248(-2.7023)
  1.9   -3.1452(-2.9724)   -3.0601(-2.8991)   -2.9710(-2.8223)   -2.8781(-2.7424)   -2.7820(-2.6597)
  2.0   -3.0916(-2.9196)   -3.0097(-2.8492)   -2.9239(-2.7757)   -2.8348(-2.6994)   -2.7426(-2.6206)

Note: 1. κ and λ denote the shape parameter and skewness parameter of the SGT distribution, respectively. 2. Numbers in parentheses are quantiles with n = 20. 3. Numbers in the table are obtained using the composite Simpson's rule with the WinRATS 6.1 package
[Figure 51.3 (surface plots, panels a to d) is not reproduced here. Each panel plots the left-tailed quantile surface (LHS critical value) of the SGT distribution against the shape parameters Kappa (κ) and lambda (λ) for n = 10 and n = 20, together with the corresponding normal quantile Fα(ε; κ = 2, λ = 0, n = ∞).]
Fig. 51.3 The left-tailed quantiles of SGT distribution with n = 10, 20, and various combinations (κ, λ). (a) 10 %, (b) 5 %, (c) 1 %, (d) 0.5 % confidence levels
Moreover, as the literature review above indicates, numerous studies have focused on the parametric approach of GARCH-family variance specifications to estimate the VaR. Furthermore, many financial asset return series appear to exhibit autocorrelation and volatility clustering, and the unconditional distribution of those returns displays leptokurtosis and a moderate amount of skewness. This study therefore considers the applicability of the GARCH(1,1) model with three conditional distributions (the normal, Student's t, and SGT distributions) to estimate the corresponding volatility of the different stock indices and then employs the framework of Jorion (2000) to evaluate the VaR of the parametric approach. We take the GARCH-SGT model as an example. The methodology of the parametric VaR approach is based on a rolling-window procedure with the window size fixed at 2,000 observations. More specifically, the procedure is conducted in the following manner:

Step 1: For each data series, using the econometric package WinRATS 6.1, the parameters are estimated from a sample of 2,000 daily returns by quasi-maximum likelihood estimation (QMLE) of the log-likelihood function in Eq. 51.10 with the BFGS optimization algorithm. Thus the vector of parameters c = [μ, ω, α, β, κ, λ, n] is estimated. The empirical results of the GARCH-SGT model are listed in Table 51.2 for all stock indices surveyed in this paper; the empirical results of the GARCH-N and GARCH-T models are obtained by the same approach.

Step 2: Based on the framework of the parametric techniques (Jorion 2000), the 1-day-ahead VaR of the GARCH-SGT model is calculated by Eq. 51.11. The one-step-ahead VaR forecasts are then compared with the observed returns, and the results are recorded for subsequent evaluation using statistical tests.

Step 3: The estimation period is then rolled forward by adding one new day and dropping the most distant day. By replicating Steps 1 and 2, the vector of parameters is re-estimated, and the 1-day-ahead VaR is calculated for each of the next 500 days.
Step 4: For the out-of-sample period (500 days), by comparing the one-step-ahead VaR forecasts with the observed returns, the 1-day-ahead BLF, QLF, and UL are calculated using Eqs. 51.13, 51.14, and 51.16, and the unconditional coverage test, LRuc, is evaluated using Eq. 51.15. Thereafter, for the GARCH-based models with the alternative distributions (GARCH-N, GARCH-T, and GARCH-SGT), the unconditional coverage test (LRuc) and the three accuracy measures (failure rate, AQLF, and UL) are obtained and reported in the left panel of Tables 51.3, 51.4, and 51.5 for the 95 %, 99 %, and 99.5 % levels.
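A compact sketch of Steps 1 through 4 is given below. It is only illustrative: it uses the Python `arch` package with an AR(1)-GARCH(1,1)-t model as a stand-in for the WinRATS GARCH-SGT estimation described above, and the parameter name "nu" and the forecasting calls are specific to that package.

```python
# Rolling-window parametric VaR sketch (window = 2,000, 500 out-of-sample days).
import numpy as np
from scipy.stats import t as student_t
from arch import arch_model

def rolling_parametric_var(returns, alpha=0.01, window=2000, n_oos=500):
    """returns: array of daily returns in percent; yields n_oos 1-day-ahead VaR forecasts."""
    var_path = []
    for i in range(n_oos):
        sample = returns[i:i + window]                      # Step 3: roll the window forward
        res = arch_model(sample, mean="AR", lags=1, vol="GARCH",
                         p=1, q=1, dist="t").fit(disp="off")   # Step 1: (Q)ML estimation
        fcast = res.forecast(horizon=1)
        mu = fcast.mean.iloc[-1, 0]
        sigma = np.sqrt(fcast.variance.iloc[-1, 0])
        nu = res.params["nu"]                               # "nu" is arch's name for the DoF
        q = student_t.ppf(alpha, nu) * np.sqrt((nu - 2.0) / nu)  # left quantile, unit variance
        var_path.append(mu + sigma * q)                     # Step 2: 1-day-ahead VaR
    return np.array(var_path)

# Step 4 then compares each VaR forecast with the realized return on that day and
# feeds the violation series into the failure rate, LRuc, AQLF, and UL measures.
```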
Appendix 3: The Procedure of Semi-parametric VaR Approach

In this paper, we use the approach proposed by Hull and White (1998) as a representative of the semi-parametric approach. This method couples a volatility weighting scheme with traditional historical simulation and can therefore be regarded as a straightforward extension of historical simulation. The weighting scheme works as follows: instead of using the actual historical percentage changes in market variables to calculate VaR, we use historical changes that have been adjusted to reflect the ratio of the current daily volatility to the daily volatility at the time of the observation, and we assume that the variance of each market variable over the period covered by the historical data is monitored using a GARCH-based model. We take the HW-SGT model as an example. The methodology is explained in the following five steps:

Step 1: For each data series, using the econometric package WinRATS 6.1, the parameters are estimated from a sample of 2,000 daily returns by quasi-maximum likelihood estimation (QMLE) of the log-likelihood function in Eq. 51.10 with the BFGS optimization algorithm. Thus the vector of parameters c = [μ, ω, α, β, κ, λ, n] is estimated. This step is the same as the first step of the parametric approach. Consequently, a series of daily volatility estimates, {σ1, σ2, σ3, …, σT}, is obtained, where T is the number of estimation samples and equals 2,000 in this study.

Step 2: The modified return series is obtained by multiplying the raw return series by the ratio of the current daily volatility to the daily volatility at the time of the observation, σT/σi. That is, the modified return series is {r1*, r2*, r3*, …, rT*}, where ri* = ri(σT/σi).

Step 3: Re-sort this modified return series in ascending order to obtain the empirical distribution. The VaR is then the percentile that corresponds to the specified confidence level. The one-step-ahead VaR forecasts are compared with the observed returns, and the results are recorded for subsequent evaluation using statistical tests.

Step 4: The estimation period is then rolled forward by adding one new day and dropping the most distant day. By replicating Steps 1–3, the vector of parameters is re-estimated, and the 1-day-ahead VaR is calculated for each of the next 500 days. This step is the same as the third step of the parametric approach.
Step 5: For the out-of-sample period (500 days), by comparing the one-step-ahead VaR forecasts with the observed returns, the 1-day-ahead BLF, QLF, and UL are calculated using Eqs. 51.13, 51.14, and 51.16, and the unconditional coverage test, LRuc, is evaluated using Eq. 51.15. Thereafter, for the HW-based models with the alternative distributions (HW-N, HW-T, and HW-SGT), the unconditional coverage test (LRuc) and the three accuracy measures (failure rate, AQLF, and UL) are obtained and reported in the right panel of Tables 51.3, 51.4, and 51.5 for the 95 %, 99 %, and 99.5 % levels.
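The volatility-weighting step at the core of this procedure is short enough to state directly; the sketch below (Python, illustrative) implements Steps 2 and 3 given the fitted GARCH volatilities from Step 1.

```python
# Hull-White (1998) volatility-weighted historical simulation for one estimation window.
import numpy as np

def hw_var(returns_window, sigma_window, alpha=0.01):
    """returns_window, sigma_window: the last T raw returns and their fitted GARCH
    volatilities (T = 2,000 in the chapter); alpha = 1 - confidence level."""
    r_star = returns_window * (sigma_window[-1] / sigma_window)  # r_i* = r_i * sigma_T / sigma_i
    return np.quantile(np.sort(r_star), alpha)  # left-tail percentile = long-position VaR
```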
References

Aloui, C., & Mabrouk, S. (2010). Value-at-risk estimations of energy commodities via long-memory, asymmetry and fat-tailed GARCH models. Energy Policy, 38, 2326–2339.
Angelidis, T., Benos, A., & Degiannakis, S. (2004). The use of GARCH models in VaR estimation. Statistical Methodology, 1, 105–128.
Bali, T. G., & Theodossiou, P. (2007). A conditional-SGT-VaR approach with alternative GARCH models. Annals of Operations Research, 151, 241–267.
Bhattacharyya, M., Chaudhary, A., & Yadav, G. (2008). Conditional VaR estimation using Pearson's type IV distribution. European Journal of Operational Research, 191, 386–397.
Bollerslev, T. (1986). Generalized autoregressive conditional heteroskedasticity. Journal of Econometrics, 31, 307–327.
Bollerslev, T., Chou, R. Y., & Kroner, K. F. (1992). ARCH modeling in finance: A review of the theory and empirical evidence. Journal of Econometrics, 52, 5–59.
Cabedo, J. D., & Moya, I. (2003). Estimating oil price 'Value at Risk' using the historical simulation approach. Energy Economics, 25, 239–253.
Chan, W. H., & Maheu, J. M. (2002). Conditional jump dynamics in stock market returns. Journal of Business & Economic Statistics, 20, 377–389.
Chang, T. H., Su, H. M., & Chiu, C. L. (2011). Value-at-risk estimation with the optimal dynamic biofuel portfolio. Energy Economics, 33, 264–272.
Chen, Q., Gerlach, R., & Lu, Z. (2012). Bayesian value-at-risk and expected shortfall forecasting via the asymmetric Laplace distribution. Computational Statistics and Data Analysis, 56, 3498–3516.
Degiannakis, S., Floros, C., & Dent, P. (2012). Forecasting value-at-risk and expected shortfall using fractionally integrated models of conditional volatility: International evidence. International Review of Financial Analysis, 27, 21–33.
Faires, J. D., & Burden, R. (2003). Numerical methods (3rd ed.). Pacific Grove: Thomson Learning.
Gebizlioglu, O. L., Şenoğlu, B., & Kantar, Y. M. (2011). Comparison of certain value-at-risk estimation methods for the two-parameter Weibull loss distribution. Journal of Computational and Applied Mathematics, 235, 3304–3314.
Gençay, R., Selçuk, F., & Ulugülyaǧci, A. (2003). High volatility, thick tails and extreme value theory in value-at-risk estimation. Insurance: Mathematics and Economics, 33, 337–356.
Giot, P., & Laurent, S. (2003a). Value-at-risk for long and short trading positions. Journal of Applied Econometrics, 18, 641–664.
Giot, P., & Laurent, S. (2003b). Market risk in commodity markets: A VaR approach. Energy Economics, 25, 435–457.
Hartz, C., Mittnik, S., & Paolella, M. (2006). Accurate value-at-risk forecasting based on the normal-GARCH model. Computational Statistics & Data Analysis, 51, 2295–2312.
Huang, Y. C., & Lin, B. J. (2004). Value-at-risk analysis for Taiwan stock index futures: Fat tails and conditional asymmetries in return innovations. Review of Quantitative Finance and Accounting, 22, 79–95.
Hull, J., & White, A. (1998). Incorporating volatility updating into the historical simulation method for value-at-risk. Journal of Risk, 1, 5–19.
Jarque, C. M., & Bera, A. K. (1987). A test for normality of observations and regression residuals. International Statistical Review, 55, 163–172.
Jorion, P. (2000). Value at risk: The new benchmark for managing financial risk. New York: McGraw-Hill.
Kupiec, P. (1995). Techniques for verifying the accuracy of risk management models. The Journal of Derivatives, 3, 73–84.
Lee, C. F., & Su, J. B. (2011). Alternative statistical distributions for estimating value-at-risk: Theory and evidence. Review of Quantitative Finance and Accounting, 39, 309–331.
Lee, M. C., Su, J. B., & Liu, H. C. (2008). Value-at-risk in US stock indices with skewed generalized error distribution. Applied Financial Economics Letters, 4, 425–431.
Lopez, J. A. (1999). Methods for evaluating value-at-risk estimates. Federal Reserve Bank of San Francisco Economic Review, 2, 3–17.
Lu, C., Wu, S. C., & Ho, L. C. (2009). Applying VaR to REITs: A comparison of alternative methods. Review of Financial Economics, 18, 97–102.
Sadeghi, M., & Shavvalpour, S. (2006). Energy risk management and value at risk modeling. Energy Policy, 34, 3367–3373.
So, M. K. P., & Yu, P. L. H. (2006). Empirical analysis of GARCH models in value at risk estimation. International Financial Markets, Institutions & Money, 16, 180–197.
Stavroyiannis, S., Makris, I., Nikolaidis, V., & Zarangas, L. (2012). Econometric modeling and value-at-risk using the Pearson type-IV distribution. International Review of Financial Analysis, 22, 10–17.
Su, J. B., & Hung, J. C. (2011). Empirical analysis of jump dynamics, heavy-tails and skewness on value-at-risk estimation. Economic Modelling, 28, 1117–1130.
Theodossiou, P. (1998). Financial data and the skewed generalized t distribution. Management Science, 44, 1650–1661.
Vlaar, P. J. G. (2000). Value at risk models for Dutch bond portfolios. Journal of Banking & Finance, 24, 1131–1154.
52 Modeling Multiple Asset Returns by a Time-Varying t Copula Model
Long Kang
Contents
52.1 Introduction .......................................................... 1432
52.2 Literature Review .................................................... 1434
52.3 The Model ............................................................ 1436
  52.3.1 Copula ......................................................... 1436
  52.3.2 Modeling Marginal Distributions ................................ 1437
  52.3.3 Modeling Dependence Structure .................................. 1438
  52.3.4 Estimation ..................................................... 1440
52.4 Data ................................................................. 1441
52.5 Empirical Results .................................................... 1442
  52.5.1 Marginal Distributions ......................................... 1442
  52.5.2 Copulas ........................................................ 1444
  52.5.3 Time-Varying Dependence ........................................ 1444
52.6 Conclusion ........................................................... 1448
References ................................................................ 1448
Abstract
We illustrate a framework to model joint distributions of multiple asset returns using a time-varying Student's t copula model. We model marginal distributions of individual asset returns by a variant of GARCH models and then use a Student's t copula to connect all the margins. To build a time-varying structure for the correlation matrix of the t copula, we employ a dynamic conditional correlation (DCC) specification. We illustrate the two-stage estimation procedure for the model and apply the model to 45 major US stock returns selected from nine
L. Kang Department of Finance, Antai College of Economics and Management, Shanghai Jiao Tong University, Shanghai, China The Options Clearing Corporation and Center for Applied Economics and Policy Research, Indiana University, Bloomington, IN, USA e-mail: [email protected]; [email protected] C.-F. Lee, J. Lee (eds.), Handbook of Financial Econometrics and Statistics, DOI 10.1007/978-1-4614-7750-1_52, # Springer Science+Business Media New York 2015
sectors. As it is quite challenging to find a copula function with a parameter structure flexible enough to account for the different dependence features among all pairs of random variables, our time-varying t copula model tends to be a good working tool for modeling multiple asset returns for risk management and asset allocation purposes. Our model can capture time-varying conditional correlation and some degree of tail dependence, while its limitations are that it features symmetric dependence and cannot generate high tail dependence when used to model a large number of asset returns.
Keywords
Student’s t copula • GARCH models • Asset returns • US stocks • Maximum likelihood • Two-stage estimation • Tail dependence • Exceedance correlation • Dynamic conditional correlation • Asymmetric dependence
52.1 Introduction
There have been a large number of applications of copula theory in financial modeling. The popularity of copula mainly results from its capability of decomposing joint distributions of random variables into marginal distributions of individual variables and the copula which links the margins. Then the task of finding a proper joint distribution becomes to find a copula form which features a proper dependence structure given that marginal distributions of individual variables are properly specified. Among many copula functions, Student’s t copula is a good choice, though not perfect, for modeling multivariate financial data as an alternative to a normal copula, especially for a very large number of assets. The t copula models are very useful tools to describe joint distributions of multiple assets for risk management and asset allocation purposes. In this chapter, we illustrate how to model the joint distribution of multiple asset returns under a Copula-GARCH framework. In particular, we show how we can build and estimate a time-varying t copula model for a large number of asset returns and how well the time-varying t copula accounts for some dependence features of real data. There are still two challenging issues when applying copula theory to multiple time series. The first is how to choose a copula that best describes the data. Different copulas feature different dependence structure between random variables. Some copulas may fit one particular aspect of the data very well but do not have a very good overall fit, while others may have the opposite performance. What criteria to use when we choose from copula candidates is a major question remaining to be fully addressed. Secondly, how to build a multivariate copula which is sufficiently flexible to simultaneously account for the dependence structure for each pair of random variables in joint distributions is still quite challenging. We hope to shed some light on those issues by working through our time-varying t copula model. Under a Copula-GARCH framework, we first model each asset return with a variant of GARCH specification. Based on different properties of asset returns, we choose a proper GARCH specification to formulate conditional distributions of
each return. Then, we choose a proper copula function to link marginal distributions of each return to form the joint distribution. As in marginal distributions of each return, the copula parameters can also be specified as being dependent on previous observations to make the copula structure time varying for a better fit of data. In this chapter, we have an AR(1) process for the conditional mean and a GJR-GARCH (1,1) specification for the conditional volatility for each return. We employ a Student’s t copula with a time-varying correlation matrix (by a DCC specification) to link marginal distributions. Usually the specified multivariate model contains a huge number of parameters, and the estimation by maximum likelihood estimator (MLE) can be quite challenging. Therefore, we pursue a two-stage procedure, where all the GARCH models for each return are estimated individually first and copula parameters are estimated in the second stage with estimated cumulative distribution functions from the first stage. We apply our model to modeling log returns of 45 major US stocks selected from nine sectors with a time span ranging from January 3, 2000 to November 29, 2011. Our estimation results show that AR(1) and GJR-GARCH(1,1) can reasonably well capture empirical properties of individual returns. The stock returns possess fat tails and leverage effects. We plot the estimated conditional volatility on selected stocks and volatility spikes which happened during the “Internet Bubbles” in the early 2000s and the financial crisis in 2008. We estimate a DCC specification for the timevarying t copula and also a normal copula for comparison purposes. The parameter estimates for time-varying t copula are statistically significant, which indicates a significant time-varying property of the dependence structure. The time-varying t copula yields significantly higher log-likelihood than normal copula. This improvement of data fitness results from flexibility of t copula (relative to normal copula) and its time-varying correlation structure. We plot the time-varying correlation parameter for selected pairs of stocks under the time-varying t copula model. The correlation parameters fluctuate around certain averages, and they spike during the 2008 crisis for some pairs. For 45 asset returns, the estimated degree-of-freedom (DoF) parameter of the t copula is around 25. Together with the estimated correlation matrix of the t copula, this DoF leads to quite low values of tail dependence coefficients (TDCs). This may indicate the limitation of t copulas in capturing possibly large tail dependence behavior for some asset pairs when being used to model a large number of asset returns. Nevertheless, the time-varying Student’s t copula model has a relatively flexible parameter structure to account for the dependence among multiple asset returns and is a very effective tool to model the dynamics of a large number of asset returns in practice. This chapter is organized as follows. Section 52.2 gives a short literature review on recent applications of copulas to modeling financial time series. Section 52.3 introduces our copula model where we introduce copula theory, Copula-GARCH framework, and estimation procedures. In particular, we elaborate on how to construct and estimate a time-varying t copula model. Section 52.4 documents the data source and descriptive statistics for the data set we use. Section 52.5 reports estimation results and Sect. 52.6 concludes.
52.2 Literature Review
Copula-GARCH models were previously proposed by Jondeau and Rockinger (2002) and Patton (2004, 2006a).1 To measure time-varying conditional dependence between time series, the former authors use copula functions with timevarying parameters as functions of predetermined variables and model marginal distributions with an autoregressive version of Hansen’s (1994) GARCH-type model with time-varying skewness and kurtosis. They show for many market indices, dependency increases after large movements and for some cases it increases after extreme downturns. Patton (2006a) applies the Copula-GARCH model to modeling the conditional dependence between exchange rates. He finds that mark-dollar and yen-dollar exchange rates are more correlated during depreciation against dollar than during appreciation periods. By a similar approach, Patton (2004) models the asymmetric dependence between “large cap” and “small cap” indices and examines the economic and statistical significance of the asymmetries for asset allocations in an out-of-sample setting. As in above literature, copulas are mostly used in capturing asymmetric dependence and tail dependence between times series. Among copula candidates, Gumbel’s copula features higher dependence (correlation) at upper side with positive upper tail dependence, and rotated Gumbel’s copula features higher dependence (correlation) at lower side with positive lower tail dependence. Hu (2006) studies the dependence structure between a number of pairs of major market indices by a mixed copula approach. Her copula is constructed by a weighted sum of three copulas–normal, Gumbel’s, and rotated Gumbel’s copulas. Jondeau and Rockinger (2006) model the bivariate dependence between major stock indices by a Student’s t copula where the parameters are assumed to be modeled by a two-state Markov process. The task of flexibly modeling dependence structure becomes more challenging for n-dimensional distributions. Tsafack and Garcia (2011) build up a complex multivariate copula to model four international assets (two international equities and two bonds). In his model, he assumes that the copula form has a regime-switching setup where in one regime he uses an n-dimensional normal copula and in the other he uses a mixed copula of which each copula component features the dependence structure of two pairs of variables. Savu and Trede (2010) develop a hierarchical Archimedean copula which renders more flexible parameters to characterize dependency between each pair of variables. In their model, each pair of closely related random variables is modeled by a copula of a particular Archimedean class, and then these pairs are nested by copulas as well. The nice property of Archimedean family easily leads to the validity of the
1
Alternative approaches are also developed, such as in Ang and Bekaert (2002), Goeij and Marquering (2004), and Lee and Long (2009), to address non-normal joint distributions of asset returns.
joint distribution constructed by this hierarchical structure. (Trivedi and Zimmer 2006) apply trivariate hierarchical Archimedean copulas to model sample selection and treatment effects with applications to the family health-care demand. Statistical goodness-of-fit tests can provide some guidance for selecting copula models. Chen et al. (2004) propose two simple goodness-of-fit tests for multivariate copula models, both of which are based on multivariate probability integral transform and kernel density estimation. One test is consistent but requires the estimation of the multivariate density function and hence is suitable for a small number of random variables, while the other may not be consistent but requires only kernel estimation of a univariate density function and hence is suitable for a large number of assets. Berg and Bakken (2006) propose a consistent goodness-of-fit test for copulas based on the probability integral transform, and they incorporate in their test a weighting functionality which can increase influence of some specific areas of copulas. Due to their parameter structure, the estimation of Copula-GARCH models also suffers from “the curse of dimensionality”.2 The exact maximum likelihood estimator (MLE) works in theory.3 In practice, however, as the number of time series being modeled increases, the numerical optimization problem in MLE will become formidable. Joe and Xu (1996) propose a two-stage procedure, where in the first stage only parameters in marginal distributions are estimated by MLE and then the copula parameters are estimated by MLE in the second stage. This two-stage method is called inference for the margins (IFM) method. Joe (1997) shows that under regular conditions the IFM estimator is consistent and has the property of asymptotic normality and Patton (2006b) also shows similar estimator properties for the two-stage method. Instead of estimating parametric marginal distributions in the IFM method, we can estimate the margins by using empirical distributions, which can avoid the problem of mis-specifying marginal distributions. This method is called canonical maximum likelihood (CML) method by Cherubini et al. (2004). Hu (2006) uses this method and she names it as a semi-parametric method. Based on Genest et al. (1995), she shows that CML estimator is consistent and has asymptotical normality. Moreover, copula models can also be estimated under a nonparametric framework. Deheuvels (1981) introduces the notion of empirical copula and shows that the empirical copula converges uniformly to the underlying true copula. Finally, Xu (2004) shows how the copula models can be estimated with a Bayesian approach. The author shows how a Bayesian approach can be used to account for estimation uncertainty in portfolio optimization based on a Copula-GARCH model, and she proposes to use a Bayesian MCMC algorithm to jointly estimate the copula models.
2
For a detailed survey on the estimation of Copula-GARCH model, see Chap. 5 of Cherubini et al. (2004). 3 See Hamilton (1994) and Greene (2003) for more details on maximum likelihood estimation.
52.3 The Model
52.3.1 Copula

We introduce our Copula-GARCH model framework by first introducing the concept of copula. A copula is a multivariate distribution function with uniform marginal distributions as its arguments, and its functional form links all the margins to form a joint distribution of multiple random variables.4 Copula theory is mainly based on the work of Sklar (1959), and we state Sklar's theorem for continuous marginal distributions as follows.

Theorem 52.1 Let F1(x1), . . ., Fn(xn) be given marginal distribution functions, continuous in x1, . . ., xn, respectively. Let H be the joint distribution of (x1, . . ., xn). Then there exists a unique copula C such that

$$ H(x_1,\ldots,x_n) = C\big(F_1(x_1),\ldots,F_n(x_n)\big), \qquad \forall (x_1,\ldots,x_n)\in\mathbb{R}^{n}. \qquad (52.1) $$

Conversely, if we let F1(x1), . . ., Fn(xn) be continuous marginal distribution functions and C be a copula, then the function H defined by Eq. 52.1 is a joint distribution function with marginal distributions F1(x1), . . ., Fn(xn).

The above theorem allows us to decompose a multivariate distribution function into marginal distributions of each random variable and the copula form linking the margins. Conversely, it also implies that to construct a multivariate distribution, we can first find a proper marginal distribution for each random variable and then obtain a proper copula form to link the margins. Depending on which dependence measure is used, the copula function mainly, though not exclusively, governs the dependence structure between individual variables. Hence, after specifying marginal distributions of each variable, the task of building a multivariate distribution solely becomes to choose a proper copula form which best describes the dependence structure between variables. Differentiating Eq. 52.1 with respect to (x1, . . ., xn) leads to the joint density function of the random variables in terms of the copula density. It is given as

$$ h(x_1,\ldots,x_n) = c\big(F_1(x_1),\ldots,F_n(x_n)\big)\prod_{i=1}^{n} f_i(x_i), \qquad \forall (x_1,\ldots,x_n)\in\mathbb{R}^{n}, \qquad (52.2) $$
where c(F1(x1), . . ., Fn(xn)) is the copula density and fi(xi) is the density function for variable i. Equation 52.2 implies that the log-likelihood of the joint density can be decomposed into components which only involve each marginal density and a component which involves copula parameters. It provides a convenient structure for a two-stage estimation, which will be illustrated in details in the following sections. 4
See Nelsen (1998) and Joe (1997) for a formal treatment of copula theory, and Bouye et al. (2000), Cherubini et al. (2004), and Embrechts et al. (2002) for applications of copula theory in finance.
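As a quick numerical illustration of Eq. 52.2 (not part of the model estimated in this chapter), the following Python check verifies that, for a bivariate normal with standard normal margins, the joint density equals the Gaussian copula density times the product of the marginal densities.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

rho = 0.6
cov = np.array([[1.0, rho], [rho, 1.0]])
x = np.array([0.3, -1.1])

# Left-hand side of Eq. 52.2: the joint density h(x1, x2).
h = multivariate_normal(mean=[0, 0], cov=cov).pdf(x)

# Right-hand side: copula density c(F1(x1), F2(x2)) times the marginal densities.
u = norm.cdf(x)                                   # F_i(x_i)
z = norm.ppf(u)                                   # for standard normal margins, z == x
c = multivariate_normal(mean=[0, 0], cov=cov).pdf(z) / np.prod(norm.pdf(z))
print(np.isclose(h, c * np.prod(norm.pdf(x))))    # True
```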
To better fit the data, we usually assume the moments of the distributions of the random variables are time varying and depend on past variables. Therefore, the distribution of the random variables at time t becomes a conditional one, and the above copula theory needs to be extended to a conditional case. It is given as follows.5

Theorem 52.2 Let Ωt−1 be the information set up to time t, and let F1(x1,t|Ωt−1), . . ., Fn(xn,t|Ωt−1) be continuous marginal distribution functions conditional on Ωt−1. Let H be the joint distribution of (x1, . . ., xn) conditional on Ωt−1. Then there exists a unique copula C such that

$$ H(x_1,\ldots,x_n \mid \Omega_{t-1}) = C\big(F_1(x_1\mid\Omega_{t-1}),\ldots,F_n(x_n\mid\Omega_{t-1}) \mid \Omega_{t-1}\big), \qquad \forall (x_1,\ldots,x_n)\in\mathbb{R}^{n}. \qquad (52.3) $$

Conversely, if we let F1(x1,t|Ωt−1), . . ., Fn(xn,t|Ωt−1) be continuous conditional marginal distribution functions and C be a copula, then the function H defined by Eq. 52.3 is a conditional joint distribution function with conditional marginal distributions F1(x1,t|Ωt−1), . . ., Fn(xn,t|Ωt−1).

It is worth noting that, for the above theorem to hold, the information set Ωt−1 has to be the same for the copula and all the marginal distributions. If different information sets are used, the conditional copula form on the right side of Eq. 52.3 may not be a valid distribution. Generally, the same information set used may not be relevant for each marginal distribution and the copula; for example, the marginal distributions or the copula may be conditional on only a subset of the universally used information set. At the very beginning of the estimation of the conditional distributions, however, we should use the same information set, based on which we can test for insignificant explanatory variables so as to restrict attention to a relevant subset for each marginal distribution or the copula.

5 See Patton (2004).
52.3.2 Modeling Marginal Distributions

Before building a copula model, we need to find a proper specification for the marginal distributions of individual asset returns, as mis-specified marginal distributions automatically lead to a mis-specified joint distribution. Let xi,t be the return of asset i at time t; its conditional mean and variance are modeled as follows:

$$ x_{i,t} = a_{0,i} + a_{1,i}\,x_{i,t-1} + e_{i,t}, \qquad (52.4) $$
$$ e_{i,t} = \sqrt{h_{i,t}}\;\eta_{i,t}, \qquad (52.5) $$
$$ h_{i,t} = b_{0,i} + b_{1,i}\,h_{i,t-1} + b_{2,i}\,e_{i,t-1}^{2} + b_{3,i}\,e_{i,t-1}^{2}\,\mathbf{1}(e_{i,t-1}<0). \qquad (52.6) $$
As shown in Eqs. 52.4, 52.5 and 52.6, we model the conditional mean as an AR(1) process and the conditional variance as a GJR(1,1) specification.6 The parameter restrictions are b0,i > 0, b1,i ≥ 0, b2,i ≥ 0, b2,i + b3,i ≥ 0, and b1,i + b2,i + (1/2)b3,i < 1. 1(ei,t−1 < 0) is an indicator function, which equals one when ei,t−1 < 0 and zero otherwise. We believe that our model specification can capture the features of the individual stock returns reasonably well. It is worth noting that Eqs. 52.4, 52.5 and 52.6 can include more exogenous variables to better describe the data, and alternative GARCH specifications can be used to describe the time-varying conditional volatility. We assume ηi,t is i.i.d. across time and follows a Student's t distribution with DoF vi. Alternatively, to model the conditional higher moments of the series, we can follow Hansen (1994) and Jondeau and Rockinger (2003), who assume a skewed t distribution for the innovation terms of GARCH specifications and find that the skewed t distribution fits financial time series better than the normal distribution. Accordingly, we can assume ηi,t ~ Skewed t(ηi,t | vi,t, λi,t) with zero mean and unit variance, where vi,t is the DoF parameter and λi,t is the skewness parameter; the two parameters are time varying and depend on lagged values of explanatory variables in a nonlinear form. For illustration purposes, however, we will only use the Student's t distribution for ηi,t in this chapter.
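As an illustration of how such a margin can be estimated in practice, the sketch below uses the Python `arch` package as a stand-in for whatever software is used (o=1 adds the GJR leverage term, dist="t" the Student's t innovation; the parameter name "nu" and the `std_resid` attribute are package-specific). It also computes the probability integral transforms ui,t needed as copula arguments in the second stage.

```python
# Fit an AR(1)-GJR-GARCH(1,1)-t margin for one asset and return its PIT values.
import numpy as np
from scipy.stats import t as student_t
from arch import arch_model

def fit_margin(returns_pct):
    """returns_pct: pandas Series of one asset's daily log returns in percent."""
    am = arch_model(returns_pct, mean="AR", lags=1,
                    vol="GARCH", p=1, o=1, q=1, dist="t")
    res = am.fit(disp="off")
    nu = res.params["nu"]
    std_resid = res.std_resid.dropna()          # unit-variance standardized residuals
    # scipy's t has variance nu/(nu-2), so rescale before applying its cdf
    u = student_t.cdf(std_resid * np.sqrt(nu / (nu - 2.0)), df=nu)
    return res, u
```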
52.3.3 Modeling Dependence Structure Normal copula and Student’s t copula are two copula functions from elliptical families, which are frequently used in modeling joint distributions of random variables. In this chapter, we also estimate a normal copula model for comparison purposes. Let F1 denote the inverse of the standard normal distribution F and F∑,n be n-dimensional normal distribution with correlation matrix ∑. Hence, the n-dimensional normal copula is Cðu; SÞ ¼ FS, N F1 ðu1 Þ, . . . , F1 ðun Þ ,
(52.7)
and its density form is fS, n F1 ðu1 Þ, . . . , F1 ðun Þ , cðu; SÞ ¼ n Y f F1 ðui Þ
(52.8)
i¼1
where f and f∑,n are the probability density functions (pdfs) of F and F∑,n, respectively. It can be shown via Sklar’s theorem that normal copula generates standard joint normal distribution if and only if the margins are standard normal.
6
See Glosten et al. (1993).
On the other hand, let Tv^{-1} be the inverse of the standard Student's t distribution Tv with DoF parameter7 v > 2, and let T_{R,v} be the n-dimensional Student's t distribution with correlation matrix R and DoF parameter v. Then the n-dimensional Student's t copula is

$$ C(u;R,v) = T_{R,v}\big(T_v^{-1}(u_1),\ldots,T_v^{-1}(u_n)\big), \qquad (52.9) $$

and its density function is

$$ c(u_1,\ldots,u_n) = \frac{t_{R,v}\big(T_v^{-1}(u_1),\ldots,T_v^{-1}(u_n)\big)}{\prod_{i=1}^{n} t_v\big(T_v^{-1}(u_i)\big)}, $$

where tv and t_{R,v} are the pdfs of Tv and T_{R,v}, respectively.

Borrowing from the dynamic conditional correlation (DCC) structure of multivariate GARCH models, we can specify a time-varying parameter structure in the t copula as follows.8 For a t copula, the time-varying correlation matrix is governed by

$$ Q_t = (1 - a - b)\,S + a\,B_{t-1}B_{t-1}' + b\,Q_{t-1}, \qquad (52.10) $$

where S is the unconditional covariance matrix of Bt = (Tv^{-1}(u1,t), . . ., Tv^{-1}(un,t))' and a and b are nonnegative and satisfy the condition a + b < 1. We assign Q0 = S, and the dynamics of Qt are given by Eq. 52.10. Let qi,j,t be the (i, j) element of the matrix Qt; the (i, j) element of the conditional correlation matrix Rt can then be calculated as

$$ r_{i,j,t} = \frac{q_{i,j,t}}{\sqrt{q_{i,i,t}\,q_{j,j,t}}}. \qquad (52.11) $$

Moreover, the specification of Eq. 52.10 guarantees that the conditional correlation matrix Rt is positive definite.

Proposition 52.1 In Eqs. 52.10 and 52.11, if
(a) a ≥ 0 and b ≥ 0,
(b) a + b < 1,
(c) all eigenvalues of S are strictly positive,
then the correlation matrix Rt is positive definite.
7 In contrast to the previous standardized Student's t distribution, the standard Student's t distribution here has variance v/(v − 2).
8 Please see Engle and Sheppard (2001) and Engle (2002) for details on the multivariate DCC-GARCH models.
Proof First, (a) and (b) guarantee that the system for BtBt' is stationary and that S exists. With Q0 = S, (c) guarantees that Q0 is positive definite. With (a) to (c), Qt is the sum of a positive definite matrix, a positive semi-definite matrix, and a positive definite matrix, each with nonnegative coefficients, and hence is positive definite for all t. Based on the corresponding proposition in Engle and Sheppard (2001), Rt is then positive definite.
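The recursion of Eqs. 52.10 and 52.11 is straightforward to implement once the Bt series is available; the following Python sketch (illustrative, with a and b treated as given) returns the implied path of conditional correlation matrices.

```python
# DCC-style recursion for the t copula correlation matrix (Eqs. 52.10-52.11).
import numpy as np

def dcc_correlations(B, a, b):
    """B: T x n matrix of T_v^{-1}(u_{i,t}) values; a, b: copula DCC parameters."""
    T, n = B.shape
    S = np.cov(B, rowvar=False)     # unconditional covariance of B_t (S in Eq. 52.10)
    Q = S.copy()                    # Q_0 = S
    R_path = []
    for t in range(T):
        d = 1.0 / np.sqrt(np.diag(Q))
        R_path.append(Q * np.outer(d, d))                         # Eq. 52.11
        Q = (1 - a - b) * S + a * np.outer(B[t], B[t]) + b * Q    # Eq. 52.10
    return R_path
```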
52.3.4 Estimation

We illustrate the estimation procedure by writing out the log-likelihood of the observations. Let Θ = {θ, γ1, . . ., γn} be the set of parameters in the joint distribution, where θ is the set of parameters in the copula and γi is the set of parameters in the marginal distribution of asset i. Then the conditional cumulative distribution function (cdf) of the n asset returns at time t is given as

$$ F(x_{1,t},\ldots,x_{n,t}\mid X_{t-1},\Theta) = C(u_{1,t},\ldots,u_{n,t}\mid X_{t-1},\theta), \qquad (52.12) $$

where Xt−1 is a vector of previous observations, C(·|Xt−1, θ) is the conditional copula, and ui,t = Fi(xi,t|Xt−1, γi) is the conditional cdf of the margins. Differentiating both sides with respect to x1,t, . . ., xn,t leads to the density function

$$ f(x_{1,t},\ldots,x_{n,t}\mid X_{t-1},\Theta) = c(u_{1,t},\ldots,u_{n,t}\mid X_{t-1},\theta)\prod_{i=1}^{n} f_i(x_{i,t}\mid X_{t-1},\gamma_i), \qquad (52.13) $$

where c(·|Xt−1, θ) is the density of the conditional copula and fi(xi,t|Xt−1, γi) is the conditional density of the margins. Accordingly, the log-likelihood of the sample is given by

$$ L(\Theta) = \sum_{t=1}^{T}\log f(x_{1,t},\ldots,x_{n,t}\mid X_{t-1},\Theta). \qquad (52.14) $$

With Eq. 52.13, the log-likelihood can be written as

$$ L(\theta,\gamma_1,\ldots,\gamma_n) = \sum_{t=1}^{T}\log c(u_{1,t},\ldots,u_{n,t}\mid X_{t-1},\theta) + \sum_{t=1}^{T}\sum_{i=1}^{n}\log f_i(x_{i,t}\mid X_{t-1},\gamma_i). \qquad (52.15) $$

From Eq. 52.15, we observe that the copula and the marginal distributions enter additively separately. Therefore, we can estimate the model by a two-stage MLE procedure. In the first stage, the marginal distribution parameters for each asset are estimated by MLE; then, with the estimated cdf of each asset, we estimate the copula parameters by MLE. Based on Joe (1997) and Patton (2006b), this two-stage estimator is consistent and asymptotically normal.
With our model specifications, we first estimate the univariate GJR-GARCH(1,1) with an AR(1) conditional mean and Student's t distribution by MLE. In the second stage, we need to estimate the parameters of the constant normal copula and the time-varying Student's t copula. Let xt = (Φ^{-1}(u1,t), . . ., Φ^{-1}(un,t))'; we can analytically derive the correlation matrix estimator Σ̂ which maximizes the log-likelihood of the normal copula density as

$$ \hat{\Sigma} = \frac{1}{T}\sum_{t=1}^{T} x_t x_t'. \qquad (52.16) $$

As there is no analytical solution for the MLE of the Student's t copula, the numerical maximization problem is quite challenging. Following Chen et al. (2004), however, with Bt = (Tv^{-1}(u1,t), . . ., Tv^{-1}(un,t))', we can calculate the sample covariance matrix of Bt as Ŝ, which is a function of the DoF parameter v. By setting Q0 = Ŝ, we can express Qt and Rt for all t in terms of a, b, and v using Eq. 52.10. Then we can estimate a, b, and v by maximizing the log-likelihood of the t copula density. In the following sections, we apply our estimation procedure to the joint distribution of 45 selected major US stock returns.
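The sketch below illustrates the second stage in code (Python, illustrative): the closed-form normal copula estimator of Eq. 52.16 and, as a simplified stand-in for the time-varying (DCC) t copula, the log-likelihood of a constant t copula whose correlation matrix is taken from the sample correlation of Bt for a given v.

```python
import numpy as np
from scipy.stats import norm, t as student_t
from scipy.optimize import minimize
from scipy.special import gammaln

def normal_copula_corr(U):
    """U: T x n matrix of first-stage PIT values u_{i,t}; returns Eq. 52.16."""
    X = norm.ppf(U)
    return X.T @ X / len(X)

def t_copula_negloglik(params, U):
    """Negative log-likelihood of a constant t copula with DoF nu = params[0];
    the correlation matrix is the sample correlation of B_t implied by nu."""
    nu = params[0]
    B = student_t.ppf(U, df=nu)
    R = np.corrcoef(B, rowvar=False)
    Rinv, (sign, logdet) = np.linalg.inv(R), np.linalg.slogdet(R)
    n = U.shape[1]
    ll = 0.0
    for b in B:
        core = 1.0 + b @ Rinv @ b / nu
        ll += (-0.5 * logdet - (nu + n) / 2 * np.log(core)
               + (nu + 1) / 2 * np.log(1.0 + b ** 2 / nu).sum())
    # constant terms of the multivariate and univariate t densities
    ll += len(B) * (gammaln((nu + n) / 2) + (n - 1) * gammaln(nu / 2)
                    - n * gammaln((nu + 1) / 2))
    return -ll

# Example usage (U from the first stage):
# nu_hat = minimize(t_copula_negloglik, x0=[10.0], args=(U,), bounds=[(2.1, 100.0)]).x[0]
```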
52.4 Data
We apply our model to modeling log returns of 45 major US stocks from nine sectors: Consumer Discretionary, Consumer Staples, Energy, Financials, Health Care, Industrials, Technology, Materials, and Utilities. Table 52.1 shows stock symbols and company names of the selected 45 companies. We select five major companies from each sector to form the stock group. The time span ranges from January 3, 2000 to November 29, 2011 with 2990 observations. We download data from yahoo finance (http://finance.yahoo.com/). The log returns are calculated from daily close stock prices adjusted for dividends and splits. To save space, we only plot and calculate descriptive statistics of nine stocks with each from one sector. Figure 52.1 plots the log returns of those nine selected stocks, and there are two periods of volatility clusterings due to “Internet Bubbles” in the early 2000s and the financial crisis in 2008, respectively. We observe that during the financial crisis in 2008, major banks, such as Citigroup, incurred huge negative and positive daily returns. Table 52.2 shows the calculated mean, standard deviation, skewness, and kurtosis for the nine stocks. The average returns for the nine stocks are close to zero. Major banks, represented by Citigroup, have significantly higher volatility. Most of the stocks are slightly positively skewed, and only two have slight negative skewness. All the stocks have kurtosis greater than three indicating fat tails, and again major banks have significantly fatter tails. All the descriptive statistics indicate that the data property of individual returns needs to be captured by a variant of GARCH specification.
Table 52.1 Symbols and names of the 45 selected stocks from nine sectors

Consumer Discretionary: MCD: McDonald's; DIS: Walt Disney Co.; TGT: Target; LOW: Lowe's; HD: Home Depot
Consumer Staples: WMT: Wal-Mart Stores Inc.; PG: Procter & Gamble Co.; KO: Coca-Cola Co.; WAG: Walgreen Co.; MO: Altria Group Inc.
Energy: XOM: Exxon Mobil Corp.; CVX: Chevron Corp.; COP: ConocoPhillips; DVN: Devon Energy Corp.; SLB: Schlumberger Limited
Financials: C: Citigroup Inc.; BAC: Bank of America Corp.; JPM: JPMorgan Chase & Co.; USB: U.S. Bancorp; WFC: Wells Fargo & Co.
Health Care: JNJ: Johnson & Johnson; PFE: Pfizer Inc.; ABT: Abbott Laboratories; MRK: Merck & Co. Inc.; AMGN: Amgen Inc.
Industrials: GE: General Electric Co.; UNP: Union Pacific Corp.; UTX: United Technologies Corp.; MMM: 3M Co.; BA: Boeing Co.
Technology: T: AT&T Inc.; MSFT: Microsoft Corp.; IBM: International Business Machines Corp.; CSCO: Cisco Systems Inc.; HPQ: Hewlett-Packard Co.
Materials: NEM: Newmont Mining Corp.; DD: E.I. DuPont de Nemours & Co.; DOW: Dow Chemical Co.; FCX: Freeport-McMoRan Copper & Gold Inc.; PX: Praxair Inc.
Utilities: EXC: Exelon Corp.; FE: FirstEnergy Corp.; PPL: PPL Corporation; D: Dominion Resources, Inc.; DUK: Duke Energy Corp.
52.5 Empirical Results
52.5.1 Marginal Distributions

We briefly report estimation results for the marginal distributions of the 45 stock returns. For convenience, we only show the estimates and standard errors (in brackets) for the nine selected stocks, one from each sector, in Table 52.3. A star indicates statistical significance at the 5 % level. Consistent with our observations in Table 52.2, all nine stocks have low values of the DoF parameter, indicating fat tails.
Fig. 52.1 The log returns of nine of the 45 selected stocks, one from each sector (MCD, WMT, XOM, C, JNJ, GE, T, NEM, EXC), plotted against date (in years)
Table 52.2 Descriptive statistics (mean, standard deviation, skewness, and kurtosis) for nine of the 45 selected stocks, one from each sector

Stock symbol   Mean       Std. dev.   Skewness   Kurtosis   # obs.
MCD            3.73E-04   0.017       0.21       8.25       2,990
WMT            7.21E-06   0.017       0.13       7.72       2,990
XOM            3.15E-04   0.017       0.02       12.52      2,990
C              8.05E-04   0.037       0.48       35.55      2,990
JNJ            1.96E-04   0.013       0.53       17.83      2,990
GE             2.89E-04   0.022       0.04       9.99       2,990
T              1.23E-05   0.019       0.12       8.68       2,990
NEM            3.65E-04   0.027       0.34       8.22       2,990
EXC            4.48E-04   0.018       0.05       10.58      2,990
The parameter $b_{3,i}$ is statistically significant at the 5 % level for eight of the nine stocks, indicating significant leverage effects for stock returns. The parameters in the conditional mean are statistically significant for some stocks and not for others. In Fig. 52.2, we plot the estimated conditional volatility for the stocks MCD, WMT, XOM, and C. Consistent with Fig. 52.1, we observe that MCD and WMT have significantly high volatility in both the early 2000s and 2008, while XOM and C have their volatility hikes mainly in 2008, with C, representing Citigroup, having the highest conditional volatility during the 2008 crisis.
52.5.2 Copulas

We report estimation results for the time-varying t copula parameters in Table 52.4. All three parameters $a$, $b$, and $v$ are statistically significant. The estimate of $a$ is close to zero and the estimate of $b$ is close to one. The estimate of $v$ is about 25. As our estimation is carried out on the joint distribution of 45 stock returns, the estimate of $v$ sheds some light on how much the Student's t copula can capture tail dependence when used to fit a relatively large number of variables. We also report the log-likelihoods of the time-varying Student's t copula and of the normal copula in Table 52.4. As the correlation matrix of the normal copula is estimated by the sample correlation, we do not report it here. We find that the time-varying t copula has a significantly higher log-likelihood than the normal copula, which results from its more flexible parameter structure and its time-varying dependence parameters.
52.5.3 Time-Varying Dependence

Our time-varying t copula features a time-varying dependence structure among all the variables. The DoF parameter, together with the correlation parameters, governs the tail dependence behavior of the multiple variables.
Table 52.3 The GARCH estimation results of individual stock returns for nine of our 45 selected stocks, one from each sector. Values in brackets are standard errors. A star indicates statistical significance at the 5 % level. (a0,i and a1,i are the conditional mean parameters, b0,i–b3,i are the conditional variance parameters, and vi is the degree of freedom.)

Stock   a0,i                   a1,i             b0,i                   b1,i             b2,i             b3,i             vi
MCD     6.69E-04* (2.25E-04)   0.030 (0.018)    1.69E-06* (5.21E-07)   0.947* (0.007)   0.024* (0.008)   0.046* (0.012)   6.187* (0.67)
XOM     6.50E-04* (2.41E-04)   0.076* (0.019)   5.97E-06* (1.28E-06)   0.901* (0.013)   0.025* (0.012)   0.092* (0.018)   9.496* (1.59)
WMT     1.77E-05 (2.11E-04)    0.037* (0.018)   8.22E-07* (3.50E-07)   0.952* (0.007)   0.026* (0.008)   0.038* (0.013)   7.002* (0.92)
C       1.03E-06 (2.42E-04)    0.003 (0.018)    1.96E-06* (5.53E-07)   0.908* (0.009)   0.045* (0.011)   0.094* (0.017)   6.505* (0.75)
JNJ     1.17E-04 (1.59E-04)    0.004 (0.018)    1.87E-06* (4.40E-07)   0.905* (0.011)   0.025* (0.011)   0.127* (0.021)   5.796* (0.62)
GE      1.93E-06 (2.32E-04)    0.015 (0.018)    1.26E-06* (3.93E-07)   0.948* (0.007)   0.017* (0.007)   0.067* (0.012)   6.622* (0.66)
T       2.73E-04 (2.29E-04)    0.009 (0.018)    1.37E-06* (4.47E-07)   0.943* (0.007)   0.027* (0.008)   0.051* (0.012)   8.403* (1.20)
NEM     3.31E-04 (3.87E-04)    0.056* (0.018)   3.11E-06* (1.31E-06)   0.959* (0.007)   0.037* (0.009)   0.001 (0.011)    7.831* (1.09)
EXC     7.10E-04* (2.36E-04)   0.019 (0.019)    3.91E-06* (9.82E-07)   0.897* (0.012)   0.066* (0.014)   0.046* (0.018)   9.247* (1.28)
Fig. 52.2 The estimated time-varying conditional volatility for four selected stocks (MCD, WMT, XOM, and C)

Table 52.4 The estimates and standard errors for the time-varying Student's t copula. Values in brackets are standard errors. A star indicates statistical significance at the 5 % level. We also report the log-likelihood of the copula component for the time-varying t copula and the normal copula

Parameter estimates                    Time-varying t copula   Normal copula
a                                      0.0031* (0.0002)
b                                      0.984* (0.0016)
v                                      25.81* (1.03)
Log-likelihood of copula component     40,445.44               38,096.36
We plot the estimated conditional correlation parameters of the t copula for four selected pairs of stock returns in Fig. 52.3. For those four pairs, the conditional correlation parameter fluctuates around certain positive averages. Two pairs, MCD-WMT and NEM-EXC, experienced apparent correlation spikes during the 2008 financial crisis. Moreover, Fig. 52.4 shows the estimated TDCs for the four pairs. We find that, with the DoF around 25, the TDCs for those pairs of stock returns are very low, though some pairs do exhibit TDC spikes during the 2008 crisis. The low values of the TDCs indicate possible limitations of the t copula in accounting for tail dependence when it is used to model a large number of variables.
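To make the tail dependence discussion concrete, the snippet below evaluates the standard tail dependence coefficient of a bivariate t copula, λ = 2 t_{v+1}(−√((v+1)(1−ρ)/(1+ρ))); the values of ρ and v passed in are merely illustrative, chosen near the estimates reported above.

```python
import numpy as np
from scipy import stats

def t_copula_tdc(rho, v):
    """Tail dependence coefficient implied by a bivariate t copula."""
    return 2.0 * stats.t.cdf(-np.sqrt((v + 1.0) * (1.0 - rho) / (1.0 + rho)), df=v + 1.0)

print(t_copula_tdc(rho=0.35, v=25.81))   # a small value, in line with Fig. 52.4
```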
Fig. 52.3 The estimated time-varying correlation parameters in the t copula for four selected pairs of stock returns (XOM-C, MCD-WMT, C-JNJ, and NEM-EXC)

Fig. 52.4 The estimated time-varying tail dependence coefficients (TDC) for the four selected pairs of stock returns
52.6 Conclusion
We illustrate an effective approach (Copula-GARCH models) for modeling the dynamics of a large number of multiple asset returns by constructing a time-varying Student's t copula model. Under a general Copula-GARCH framework, we specify a proper GARCH model for the individual asset returns and use a copula to link the margins to build the joint distribution of returns. We apply our time-varying Student's t copula model to 45 major US stock returns, where each stock return is modeled by an AR(1) and GJR-GARCH(1,1) specification and a Student's t copula with a DCC dependence structure is used to link all the returns. We illustrate how the model can be effectively estimated by a two-stage MLE procedure, and our estimation results show that the time-varying t copula model fits the data significantly better than the normal copula model. As it is quite challenging to find a copula function with a sufficiently flexible parameter structure to account for different dependence features among all pairs of random variables, our time-varying t copula model tends to be a good working tool for modeling multiple asset returns for risk management and asset allocation purposes. Our model can capture time-varying conditional correlation and some degree of tail dependence, while its limitations are its symmetric dependence structure and its inability to generate high tail dependence when used to model a large number of asset returns. Nevertheless, we hope that this chapter provides researchers and financial practitioners with a good introduction to Copula-GARCH models and a detailed illustration of constructing joint distributions of multiple asset returns using a time-varying Student's t copula model.
References Ang, A., & Bekaert, G. (2002). International asset allocation with regime shifts. Review of Financial Studies, 15(4), 1137–1187. Berg, D., & Bakken, H. (2006). A copula goodness-of-fit test based on the probability integral transform. Working Paper at http://www.danielberg.no/publications/Btest.pdf Bouye´, E., Valdo D., Ashkan N., Gae¨l R., & Thierry, R. (2000). Copulas for finance – A reading guide and some applications. Available at SSRN http://ssrn.com/abstract¼1032533. March. Chen, X., Yanqin F., & Andrew, J. P. (2004). Simple tests for models of dependence between multiple financial time series, with applications to U.S. Equity returns and exchange rates (Discussion Paper 483). London School of Economics, Revised July 2004. Cherubini, U., Luciano, E., & Vecchiato, W. (2004). Copula methods in finance (Wiley series in financial engineering). Chichester: Wiley. De Goeij, P., & Marquering, W. (2004). Modeling the conditional covariance between stock and bond returns: A multivariate GARCH approach. Journal of Financial Econometrics, 2(4), 531–564. Deheuvels, P. (1981). A Nonparametric test for independence. Paris: Universite de Paris, Institut de Statistique. Engle, R. (2002). Dynamic conditional correlation: A simple class of multivariate generalized autoregressive conditional heteroskedasticity models. Journal of Business & Economic Statistics, 20(3), 339–350.
Engle, R. F., & Sheppard, K. (2001). Theoretical and empirical properties of dynamic conditional correlation multivariate GARCH (NBER Working Paper No. 8554). Cambridge, MA: National Bureau of Economic Research. Genest, C., Ghoudi, K., & Rivest, L. (1995). A semiparametric estimation procedure of dependence parameters in multivariate families of distributions. Biometrika, 82, 543–552. Glosten, L. R., Ravi, J., & Runkle, D. E. (1993). On the relation between the expected value and the volatility of the nominal excess return on stocks. Journal of Finance, 48(5), 1779–1801. Greene, W. H. (2003). Econometric analysis. Upper Saddle River: Prentice Hall. Hamilton, J. D. (1994). Time series analysis. Princeton: Princeton University Press. Hansen, B. E. (1994). Autoregressive conditional density estimation. International Economic Review, 35(3), 705–730. Hu, L. (2006). Dependence patterns across financial markets: A mixed copula approach. Applied Financial Economics, 16, 717–729. Joe, H. (1997). Multivariate models and dependence concepts. London: Chapman and Hall. Joe, H., & Xu, J. J. (1996). The estimation method of inference functions for margins for multivariate models (Technical Report 166). Vancouver: Department of Statistics, University of British Columbia. Jondeau, E., & Rockinger, M. (2002). Conditional dependency of financial series: the copulaGARCH model (FAME Research Paper 69). Geneva: International Center for Financial Asset Management and Engineering. Jondeau, E., & Rockinger, M. (2003). Conditional volatility, skewness, and kurtosis: Existence, persistence, and comovements. Journal of Economic Dynamics and Control, 27(10), 1699–1737. Jondeau, E., & Rockinger, M. (2006). The copula-GARCH model of conditional dependencies: An international stock market application. Journal of International Money and Finance, 25, 827–853. Lee, T.-H., & Long, X. (2009). Copula-based multivariate GARCH model with uncorrelated dependent errors. Journal of Econometrics, 150, 207–218. Nelson, R. B. (1998). An introduction to copula. New York: Springer. Patton, A. J. (2004). On the out-of-sample importance of skewness and asymmetric dependence for asset allocation. Journal of Financial Econometrics, 2(1), 130–168. Patton, A. J. (2006a). Estimation of multivariate models for time series of possibly different lengths. Journal of Applied Econometrics, 21(2), 147–173. Patton, A. J. (2006b). Modelling asymmetric exchange rate dependence. International Economic Review, 47(2), 527–556. Savu, C., & Trede, M. (2010). Hierarchies of Archimedean copulas. Quantitative Finance, 10, 295–304. Sklar, A. (1959). Fonctions de repartition an dimensions et leurs marges. Paris: Publications de l’lnstitut de statistique de l’Universite de Paris. Trivedi, P. K., & Zimmer, D. M. (2006). Using trivariate copulas to model sample selection and treatment effects: Application to family health care demand. Journal of Business and Economic Statistics, 24(1), 63. Tsafack, G., & Garcia, R. (2011). Dependence structure and extreme comovements in international equity and bond markets. Journal of Banking and Finance, 35, 1954–1970. Xu, Y. (2004). Incorporating estimation risk in copula-based portfolio optimization. Miami: Department of Economics, University of Miami.
53 Internet Bubble Examination with Mean-Variance Ratio
Zhidong D. Bai, Yongchang C. Hui, and Wing-Keung Wong
Contents
53.1 Introduction
53.2 Data
53.3 Methodology
53.4 Illustration
53.5 Concluding Remarks
References
Abstract
To evaluate the performance of the prospects X and Y, financial professionals are interested in testing the equality of their Sharpe ratios (SRs), the ratios of the excess expected returns to their standard deviations. Bai et al. (Statistics and Probability Letters 81, 1078–1085, 2011d) have developed the mean-variance-ratio (MVR) statistic to test the equality of their MVRs, the ratios of the excess expected returns to their variances. They have also provided theoretical reasoning for the use of the MVR and proved that their proposed statistic is uniformly most powerful unbiased.
Z.D. Bai (*) KLAS MOE & School of Mathematics and Statistics, Northeast Normal University, Changchun, China Department of Statistics and Applied Probability, National University of Singapore, Singapore, Singapore e-mail: [email protected] Y.C. Hui School of Mathematics and Statistics, Xi’an Jiaotong University, Xi’an, China e-mail: [email protected] W.-K. Wong Department of Economics, Hong Kong Baptist University, Kowloon, Hong Kong e-mail: [email protected] C.-F. Lee, J. Lee (eds.), Handbook of Financial Econometrics and Statistics, DOI 10.1007/978-1-4614-7750-1_53, # Springer Science+Business Media New York 2015
Rejecting the null hypothesis implies that X will have either smaller variance or larger excess mean return or both, leading to the conclusion that X is the better investment. In this paper, we illustrate the superiority of the MVR test over the traditional SR test by applying both tests to analyze the performance of the S&P 500 index and the NASDAQ 100 index after the bursting of the Internet bubble in the 2000s. Our findings show that while the traditional SR test concludes that the two indices being analyzed are indistinguishable in their performance, the MVR test statistic shows that the NASDAQ 100 index underperformed the S&P 500 index, which is the real situation after the bursting of the Internet bubble in the 2000s. This shows the superiority of the MVR test statistic in revealing short-term performance and, in turn, enables investors to make better decisions in their investments.

Keywords
Mean-variance ratio • Sharpe ratio • Hypothesis testing • Uniformly most powerful unbiased test • Internet bubble • Fund management
53.1 Introduction
Internet stocks obtained huge gains in the late 1990s, followed by huge losses from early 2000. In just 2 years, from 1998 to early March 2000, prices of Internet stocks rose sixfold and outperformed the S&P 500 by 482 %. Technology stocks generally showed a similar trend: the NASDAQ 100 index quadrupled in value over the same period and outperformed the S&P 500 index by 268 %. On the other hand, the NASDAQ 100 index dropped by 64.28 % in value during the Internet bubble crash and underperformed the S&P 500 index by 173.87 %. The spectacular rise and fall of Internet stocks in the late 1990s has stimulated research into the causes of the Internet stock bubble. Theories have been developed to explain the Internet bubble. For example, Baker and Stein (2004) develop a model of market sentiment with irrationally overconfident investors and short-sale constraints. Ofek and Richardson (2003) provide circumstantial evidence that Internet stocks attract mostly retail investors, who are more prone to be overconfident about their ability to predict future stock prices than institutional investors. Perkins and Perkins (1999) suggest that during the Internet boom, investors were confidently betting on the continued rise of Internet stocks because they knew that high demand and limited equity float imply substantial upside returns. Moreover, Ofek and Richardson (2003) provide indirect evidence that Internet stock prices were supported by a combination of factors such as limited float, short-sale constraints, and aggressive trend chasing by retail investors, whereas Statman (2002) shows that this asymmetric payoff must have made Internet stocks appear to be an extremely attractive gamble for risk seekers. On the other hand, Fong et al. (2008) use stochastic dominance methodology (Fong et al. 2005; Broll et al. 2006; Chan et al. 2012; Lean et al. 2012) to identify dominant types of risk preferences in the Internet bull and bear markets. They conclude that investor risk
preferences (Wong and Li 1999; Wong and Chan 2008) have changed over this cycle, and the change is related to utility theory (Wong 2007; Sriboonchitta et al. 2009) and behavioral finance (Lam et al. 2010, 2012). In this paper, we apply both the mean-variance ratio (MVR) test and the Sharpe ratio (SR) test to examine the performance of the NASDAQ 100 index and the S&P 500 index during the bursting of the Internet bubble in the 2000s. The tests rely on the theory of mean-variance (MV) portfolio optimization (Markowitz 1952; Bai et al. 2009a, b). The Markowitz efficient frontier also provides the basis for many important advances in financial economics, including the Sharpe-Lintner capital asset pricing model (CAPM, Sharpe 1964; Lintner 1965) and the well-known optimal one-fund theorem (Tobin 1958). Originally motivated by the MV analysis, the optimal one-fund theorem, and the CAPM model, the Sharpe ratio, the ratio of the excess expected return to its volatility or standard deviation, is one of the most commonly used statistics in the MV framework. The SR is now widely used in many different areas in finance and economics, from the evaluation of portfolio performance to market efficiency tests (see, e.g., Ofek and Richardson 2003). Jobson and Korkie (1981) develop a SR statistic to test for the equality of two SRs. The test statistic has been modified and improved by Cadsby (1986) and Memmel (2003). Lo (2002) carries out a more thorough study of the statistical properties of the SR estimator. Using standard econometric methods with several different sets of assumptions imposed on the statistical behavior of the return series, Lo derives the asymptotic statistical distribution of the SR estimator and shows that confidence intervals, standard errors, and hypothesis tests can be computed for the estimated SRs in much the same way as regression coefficients such as portfolio alphas and betas are computed. The SR test statistic developed by Jobson and Korkie (1981) and others provides a formal statistical comparison of performance among portfolios. One deficiency of the SR statistic is that it has only an asymptotic distribution. Hence, the SR test has well-established statistical properties only for large samples, but not for small samples. Nevertheless, the performance of assets is often compared using small samples, especially when markets undergo substantial changes resulting from changes in short-term factors and momentum. Under these circumstances, it is more meaningful to use limited data to predict the assets' future performance. In addition, it is not meaningful to measure SRs over extended periods when the means and standard deviations of the underlying assets are found empirically to be nonstationary and/or to possess structural breaks. For small samples, the main difficulty in developing the SR test is that it is impossible to obtain a uniformly most powerful unbiased (UMPU) test to check for the equality of SRs. To circumvent this problem, Bai et al. (2011d) propose to use an alternative statistic, the MVR test, to compare the performance of assets. They also discuss the evaluation of the performance of assets for small samples by providing a theoretical framework and then invoking both one-sided and two-sided UMPU MVR tests. Moreover, Bai et al. (2012) further extend the MVR statistics to compare the performance of prospects after the effect of background risk has been mitigated.
Applying the traditional SR test, we fail to reject the possibility of having any significant difference between the performance of the S&P 500 index and the NASDAQ 100 index during the bursting of the Internet bubble in the 2000s. This finding implies that the two indices being analyzed could be indistinguishable in their performance during the period under study. However, we conjecture that this conclusion is most likely inaccurate owing to the lack of sensitivity of the SR test in analyzing small samples. Thus, we propose to use the MVR test in the analysis. As expected, the MVR test shows that the MVR of the weekly return on the S&P 500 index is different from that on the NASDAQ 100 index. We conclude that the NASDAQ 100 index underperformed the S&P 500 index during the period under study. The proposed MVR test can discern the performance of the two indices and hence is more informative for investors deciding on their investments than tests using the SR statistics. The rest of the paper is organized as follows: Sect. 53.2 discusses the data, while Sect. 53.3 provides the theoretical framework and discusses the theory for both one-sided and two-sided MVR tests. In Sect. 53.4, we demonstrate the superiority of the MVR tests over the traditional SR tests by applying both tests to analyze the performance of the S&P 500 index and the NASDAQ 100 index during the bursting of the Internet bubble in the 2000s. This is followed by Sect. 53.5, which summarizes our conclusions and shares our insights.
53.2 Data
The data used in this study consist of weekly returns on two stock indices: the S&P 500 and the NASDAQ 100 index. We use the S&P 500 index to represent non-technology or "old economy" firms. Our proxy for the Internet and technology sectors is the NASDAQ 100 index. Firms represented in the NASDAQ 100 include those in the computer hardware and software, telecommunications, and biotechnology sectors. The NASDAQ 100 index is value weighted. Our sample period is from January 1, 2000 to December 31, 2002, to study the effect of the crash of the Internet bubble. Before 2000, there was a clear upward trend in technology stock prices, and this period spans a period of intense IPO and secondary market activity for Internet stocks. Schultz and Zaman (2001) report that 321 Internet firms went public between January 1999 and March 2000, accounting for 76 % of all new Internet issues since the first wave of Internet IPOs began in 1996. Ofek and Richardson (2003) find that the extraordinarily high valuations of Internet stocks between early 1998 and February 2000 were accompanied by very high trading volume and liquidity. The unusually high volatility of technology stocks is only partially explained by the rise in the overall market volatility. Our interest centers on the bear market from January 1, 2000 to December 31, 2002. All data for this study are from Datastream.
53.3 Methodology
Let $X_i$ and $Y_i$ ($i = 1, 2, \ldots, n$) be independent excess returns drawn from the corresponding normal distributions $N(\mu, \sigma^2)$ and $N(\eta, \tau^2)$ with joint density $p(x, y)$ such that
$$p(x, y) = k \exp\left\{ \frac{\mu}{\sigma^2}\sum x_i - \frac{1}{2\sigma^2}\sum x_i^2 + \frac{\eta}{\tau^2}\sum y_i - \frac{1}{2\tau^2}\sum y_i^2 \right\}, \quad (53.1)$$
where
$$k = (2\pi\sigma^2)^{-n/2}\,(2\pi\tau^2)^{-n/2} \exp\left(-\frac{n\mu^2}{2\sigma^2}\right) \exp\left(-\frac{n\eta^2}{2\tau^2}\right).$$
To evaluate the performance of the prospects X and Y, financial professionals are interested in testing the hypotheses
$$H_0: \frac{\mu}{\sigma} \le \frac{\eta}{\tau} \quad \text{versus} \quad H_1: \frac{\mu}{\sigma} > \frac{\eta}{\tau} \quad (53.2)$$
to compare the performance of their corresponding SRs, $\mu/\sigma$ and $\eta/\tau$, the ratios of the excess expected returns to their standard deviations. If the hypothesis $H_0$ is rejected, it implies that X is the better investment prospect with the larger SR, because X has either a larger excess mean return or a smaller standard deviation or both. Jobson and Korkie (1981) and Memmel (2003) develop test statistics to test the hypotheses in Eq. 53.2 for large samples, but their tests are not appropriate for small samples, as the distribution of their test statistics is valid only asymptotically. However, it is especially relevant in investment decisions to test the hypotheses in Eq. 53.2 for small samples to provide useful investment information to investors. Furthermore, as it is impossible to obtain any UMPU test statistic to test the inequality of the SRs in Eq. 53.2 for small samples, Bai et al. (2011d) propose to use the following hypotheses to test for the inequality of the MVRs:
$$H_{01}: \frac{\mu}{\sigma^2} \le \frac{\eta}{\tau^2} \quad \text{versus} \quad H_{11}: \frac{\mu}{\sigma^2} > \frac{\eta}{\tau^2}. \quad (53.3)$$
In addition, they develop the UMPU test statistic to test the above hypotheses. Rejecting the hypothesis $H_{01}$ implies that X will have either smaller variance or larger excess mean return or both, leading to the conclusion that X is the better investment. As investors sometimes conduct the two-sided test to compare the MVRs, the following hypotheses are included in our study:
$$H_{02}: \frac{\mu}{\sigma^2} = \frac{\eta}{\tau^2} \quad \text{versus} \quad H_{12}: \frac{\mu}{\sigma^2} \ne \frac{\eta}{\tau^2}. \quad (53.4)$$
One possible objection to the MVR test is that the SR test is scale invariant, whereas the MVR test is not. To support the MVR test as an acceptable alternative test statistic, Bai et al. (2011d) give the theoretical justification for the use of the MVR test statistic in the following remark:

Remark 53.1 One may think that the MVR can be less favorable than the SR as the
former is not scale invariant while the latter is. However, in some financial processes, the mean change over a short period of time is proportional to the corresponding change in variance. For example, many financial processes can be characterized by the following diffusion process for stock prices,
$$dY_t = \mu^P(Y_t)\,dt + \sigma(Y_t)\,dW_t^P,$$
where $\mu^P$ is an N-dimensional function, $\sigma$ is an $N \times N$ matrix, and $W_t^P$ is an N-dimensional standard Brownian motion under the objective probability measure P. Under this model, the conditional mean of the increment $dY_t$ given $Y_t$ is $\mu^P(Y_t)dt$ and the covariance matrix is $\sigma(Y_t)\sigma^T(Y_t)dt$. When $N = 1$, the SR will be close to 0 while the MVR will be independent of $dt$. Thus, when the time period $dt$ is small, the MVR will be advantageous over the SR.

To further support the use of the MVR, Bai et al. (2011d) document the MVR in the context of Markowitz MV optimization theory as follows: suppose that there are $p$ assets $S = (s_1, \ldots, s_p)^T$ whose returns are denoted by $r = (r_1, \ldots, r_p)^T$ with mean $\mu = (\mu_1, \ldots, \mu_p)^T$ and covariance matrix $\Sigma = (\sigma_{ij})$. In addition, we suppose that investors will invest capital $C$ in the $p$ securities $S$ and solve for their optimal investment plans $c = (c_1, \ldots, c_p)^T$ to allocate their investable wealth across the $p$ securities so as to maximize the return subject to a given level of risk. This maximization can be formulated as the following optimization problem:
$$\max R = c^T \mu, \quad \text{subject to} \quad c^T \Sigma c \le \sigma_0^2, \quad (53.5)$$
where $\sigma_0^2$ is a given risk level. We call $R$ satisfying Eq. 53.5 the optimal return and $c$ its corresponding allocation plan. One can easily extend the separation theorem and the mutual fund theorem to obtain the solution of Eq. 53.5¹ from the following lemma:
¹ We note that Bai et al. (2009a, b, 2011c) have also used the same framework as in Eq. 53.5.
Lemma 53.1 For the optimization setting displayed in Eq. 53.5, the optimal return,
R, and its corresponding investment plan, c, are obtained as follows:
$$R = \sigma_0 \sqrt{\mu^T \Sigma^{-1} \mu} \quad \text{and} \quad c = \frac{\sigma_0}{\sqrt{\mu^T \Sigma^{-1} \mu}}\, \Sigma^{-1}\mu. \quad (53.6)$$
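A quick numerical illustration of Lemma 53.1 follows; the mean vector, covariance matrix, and risk level used here are hypothetical.

```python
import numpy as np

mu = np.array([0.08, 0.05, 0.03])                 # hypothetical expected returns
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.02, 0.00],
                  [0.00, 0.00, 0.01]])            # hypothetical covariance matrix
sigma0 = 0.10                                     # given risk level

q = mu @ np.linalg.solve(Sigma, mu)               # mu' Sigma^{-1} mu
R = sigma0 * np.sqrt(q)                           # optimal return, Eq. 53.6
c = sigma0 / np.sqrt(q) * np.linalg.solve(Sigma, mu)   # optimal allocation, Eq. 53.6
print(R, c)
```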
From Lemma 53.1, the investment plan $c$ is proportional to the MVR when $\Sigma$ is a diagonal matrix. Hence, when an asset is concluded to be superior in performance using the MVR test, its corresponding weight can then be computed based on the corresponding MVR test value. Thus, another advantage of the MVR test over the SR test is that it not only allows investors to compare the performance of different assets but also provides investors with information on the asset weights: the MVR test enables investors to compute the corresponding allocation for the assets. On the other hand, as the SR is not proportional to the weight of the corresponding asset, an asset having the highest SR does not imply that one should put the highest weight on this asset, in contrast with the MVR. In this sense, the test proposed by Bai et al. (2011d) is superior to the SR test. Bai et al. (2011d) have also developed both a one-sided UMPU test and a two-sided UMPU test of equality of the MVRs for comparing the performance of different prospects, with hypotheses stated in Eqs. 53.3 and 53.4, respectively. We first state the one-sided UMPU test for the MVRs as follows:

Theorem 53.1 Let $X_i$ and $Y_i$ ($i = 1, 2, \ldots, n$) be independent random variables with
joint distribution function defined in Eq. 53.1. For the hypotheses setup in Eq. 53.3, there exists a UMPU level-$\alpha$ test with the critical function $\phi(u, t)$ such that
$$\phi(u, t) = \begin{cases} 1, & \text{when } u \ge C_0(t) \\ 0, & \text{when } u < C_0(t) \end{cases} \quad (53.7)$$
where $C_0$ is determined by
$$\int_{C_0}^{\infty} f_{n,t}(u)\, du = K_1,$$
with
$$f_{n,t}(u) = \left(t_2 - \frac{1}{n}u^2\right)^{\frac{n-1}{2}-1}\left(t_3 - \frac{1}{n}(t_1 - u)^2\right)^{\frac{n-1}{2}-1}, \qquad K_1 = \alpha \int_{\Omega} f_{n,t}(u)\, du, \quad (53.8)$$
in which
$$U = \sum_{i=1}^{n} X_i, \quad T_1 = \sum_{i=1}^{n} X_i + \sum_{i=1}^{n} Y_i, \quad T_2 = \sum_{i=1}^{n} X_i^2, \quad T_3 = \sum_{i=1}^{n} Y_i^2, \quad T = (T_1, T_2, T_3),$$
with $\Omega = \{u \mid \max(-\sqrt{n t_2},\, t_1 - \sqrt{n t_3}) \le u \le \min(\sqrt{n t_2},\, t_1 + \sqrt{n t_3})\}$ the support of the joint density function of $(U, T)$. We call the statistic $U$ in Theorem 53.1 the one-sided MVR test statistic, or simply the MVR test statistic for the hypotheses setup in Eq. 53.3 if no confusion arises. In addition, Bai et al. (2011d) have introduced the two-sided UMPU test statistic, as stated in the following theorem, to test for the equality of the MVRs listed in Eq. 53.4:

Theorem 53.2 Let $X_i$ and $Y_i$ ($i = 1, 2, \ldots, n$) be independent random variables with
joint distribution function defined in Eq. 53.1. Then, for the hypotheses setup in Eq. 53.4, there exists a UMPU level-$\alpha$ test with critical function
$$\phi(u, t) = \begin{cases} 1, & \text{when } u \le C_1(t) \text{ or } u \ge C_2(t) \\ 0, & \text{when } C_1(t) < u < C_2(t) \end{cases} \quad (53.9)$$
in which $C_1$ and $C_2$ satisfy
$$\int_{C_1}^{C_2} f_{n,t}(u)\, du = K_2, \qquad \int_{C_1}^{C_2} u\, f_{n,t}(u)\, du = K_3, \quad (53.10)$$
where
$$K_2 = (1 - \alpha) \int_{\Omega} f_{n,t}(u)\, du, \qquad K_3 = (1 - \alpha) \int_{\Omega} u\, f_{n,t}(u)\, du.$$
The terms $f_{n,t}(u)$, $T_i$ ($i = 1, 2, 3$), and $T$ are defined in Theorem 53.1. We call the statistic $U$ in Theorem 53.2 the two-sided MVR test statistic, or simply the MVR test statistic for the hypotheses setup in Eq. 53.4 if no confusion arises. To obtain the critical values $C_1$ and $C_2$ for the test, readers may refer to Bai et al. (2011d, 2012).
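As a rough numerical sketch of how the one-sided critical value C_0 of Theorem 53.1 can be obtained, the code below uses the density f_{n,t} and the constant K_1 in the form given in Eq. 53.8 above (itself a reconstruction) and solves the defining integral equation by quadrature and root finding; the input series are simulated placeholders, not the data used in the chapter.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def mvr_one_sided(x, y, alpha=0.05):
    """One-sided MVR test sketch: returns (U, C0, reject H01)."""
    n = len(x)
    u_obs, t1 = x.sum(), x.sum() + y.sum()
    t2, t3 = (x ** 2).sum(), (y ** 2).sum()
    lo = max(-np.sqrt(n * t2), t1 - np.sqrt(n * t3))
    hi = min(np.sqrt(n * t2), t1 + np.sqrt(n * t3))
    e = (n - 1) / 2.0 - 1.0
    def f(u):
        return (max(t2 - u ** 2 / n, 0.0) ** e) * (max(t3 - (t1 - u) ** 2 / n, 0.0) ** e)
    K1 = alpha * quad(f, lo, hi)[0]
    C0 = brentq(lambda c: quad(f, c, hi)[0] - K1, lo, hi)   # solve int_{C0}^{hi} f = K1
    return u_obs, C0, u_obs >= C0

rng = np.random.default_rng(2)
print(mvr_one_sided(rng.normal(0.002, 0.02, 12), rng.normal(0.001, 0.03, 12)))
```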
53.4 Illustration
In this section, we demonstrate the superiority of the MVR tests over the traditional SR tests by illustrating the applicability of the MVR tests in examining the Internet bubble between January 2000 and December 2002. For simplicity, we only
demonstrate the two-sided UMPU test.² The data for this study consist of weekly returns on two stock indices: the S&P 500 and the NASDAQ 100 index. The sample period covers January 2000 to December 2002, in which the data from the first week of November 2000 to the last week of January 2001 (3 months) are used to compute the MVR in January 2001, the data from the first week of December 2000 to the last week of February 2001 are used to compute the MVR in February 2001, and so on. However, if the period used to compute the SRs is too short, the result would not be meaningful, as discussed in the previous sections. Thus, we utilize a longer period, from the first week of February 2000 to the last week of January 2001 (12 months), to compute the SR in January 2001, from the first week of March 2000 to the last week of February 2001 to compute the SR in February 2001, and so on. Let X, with mean $\mu_X$ and variance $\sigma_X^2$, be the weekly return on the S&P 500, and let Y, with mean $\mu_Y$ and variance $\sigma_Y^2$, be the weekly return on the NASDAQ 100 index. We test the following hypotheses:
$$H_0: \frac{\mu_X}{\sigma_X^2} = \frac{\mu_Y}{\sigma_Y^2} \quad \text{versus} \quad H_1: \frac{\mu_X}{\sigma_X^2} \ne \frac{\mu_Y}{\sigma_Y^2}. \quad (53.11)$$
To test the hypotheses in Eq. 53.11, we first compute the values of the test function $U$ for the MVR statistic shown in Eq. 53.9, then compute the critical values $C_1$ and $C_2$ at the 5 % test level for the pair of indices, and display the values in Table 53.1. For comparison, we also compute the corresponding SR statistic developed by Jobson and Korkie (1981) and Memmel (2003), namely
$$z = \frac{\hat{\mu}_X \hat{\sigma}_Y - \hat{\mu}_Y \hat{\sigma}_X}{\sqrt{\hat{\theta}}}, \quad (53.12)$$
which follows the standard normal distribution asymptotically, with
$$\theta = \frac{1}{T}\left(2\sigma_X^2\sigma_Y^2 - 2\sigma_X\sigma_Y\sigma_{X,Y} + \frac{1}{2}\mu_X^2\sigma_Y^2 + \frac{1}{2}\mu_Y^2\sigma_X^2 - \frac{\mu_X\mu_Y}{2\sigma_X\sigma_Y}\sigma_{X,Y}^2\right),$$
to test for the equality of the SRs of the funds by setting the following hypotheses:
$$H_0: \frac{\mu_X}{\sigma_X} = \frac{\mu_Y}{\sigma_Y} \quad \text{versus} \quad H_1: \frac{\mu_X}{\sigma_X} \ne \frac{\mu_Y}{\sigma_Y}. \quad (53.13)$$
² The results of the one-sided test, which draw a similar conclusion, are available on request.
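For completeness, a small sketch of the z statistic in Eq. 53.12 computed from sample moments is given below; the two return series are simulated placeholders rather than the index data used in the chapter.

```python
import numpy as np

def sharpe_ratio_z(x, y):
    """Jobson-Korkie/Memmel z statistic for the equality of two Sharpe ratios."""
    T = len(x)
    mx, my = x.mean(), y.mean()
    sx, sy = x.std(), y.std()
    sxy = np.cov(x, y, ddof=0)[0, 1]
    theta = (2 * sx**2 * sy**2 - 2 * sx * sy * sxy
             + 0.5 * mx**2 * sy**2 + 0.5 * my**2 * sx**2
             - (mx * my) / (2 * sx * sy) * sxy**2) / T
    return (mx * sy - my * sx) / np.sqrt(theta)

rng = np.random.default_rng(3)
print(sharpe_ratio_z(rng.normal(0.001, 0.02, 52), rng.normal(0.0005, 0.03, 52)))
```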
Table 53.1 The results of the mean-variance ratio test and Sharpe ratio test for NASDAQ and S&P 500, from January 2001 to December 2002

Date (month/year)   MVR test U   C1       C2       SR test Z
01/2001             0.0556       0.1812   0.1267   1.0906
02/2001             0.0636       0.1843   0.1216   1.8765
03/2001             0.1291       0.2291   0.0643   1.1787
04/2001             0.0633       0.2465   0.1633   0.9590
05/2001             0.0212       0.1937   0.2049   0.8313
06/2001             0.0537       0.1478   0.1983   0.8075
07/2001             0.0421       0.1399   0.1132   0.6422
08/2001             0.1062       0.1815   0.0886   0.6816
09/2001             0.1623*      0.1665   0.2728   1.0125
10/2001             0.1106       0.3507   0.1742   0.5931
11/2001             0.0051       0.2386   0.2825   0.1898
12/2001             0.1190       0.0165   0.2041   0.1573
01/2002             0.0316       0.0744   0.1389   0.0157
02/2002             0.0067       0.1389   0.1013   0.0512
03/2002             0.0216       0.1349   0.0853   0.1219
04/2002             0.0444       0.1739   0.0848   0.1885
05/2002             0.0588       0.1766   0.1094   0.0446
06/2002             0.1477       0.2246   0.0267   0.3408
07/2002             0.2167*      0.0101   0.0578   0.0984
08/2002             0.1526*      0.0452   0.1242   0.1024
09/2002             0.2121*      0.0218   0.0551   0.6304
10/2002             0.0416       0.1249   0.2344   0.0361
11/2002             0.0218       0.1056   0.2150   0.0008
12/2002             0.1265       0.0015   0.2417   0.3908

Note: The MVR test statistic U is defined in Eq. 53.9 and its critical values C1 and C2 are defined in Eq. 53.10, respectively. The SR test statistic Z is defined in Eq. 53.12. The test level is α = 0.05, and "*" means significant at the 5 % level. Here, the sample size of the MVR test is 3 months, while the sample size of the SR test is 12 months. Recall that z0.025 ≈ 1.96.
Instead of the short window used to compute the values of our proposed statistic, we use overlapping 12-month data to compute the SR statistic. The results are also reported in Table 53.1. The limitation of applying the SR test is that it usually concludes indistinguishable performance between the indices, which may not be the situation in reality. In this respect, finding a statistic that can evaluate the difference between indices over short periods is essential. The situation in reality is that Internet stocks registered large gains in the late 1990s, followed by large losses from 2000. As we mentioned before, the NASDAQ 100 index comprises 100 of the largest domestic and international technology firms, including those in the computer hardware and software, telecommunications, and biotechnology sectors, while the S&P
Fig. 53.1 Weekly indices of NASDAQ and S&P 500 from January 3, 2000 to December 31, 2003
500 index represents non-technology or "old economy" firms. After the bursting of the Internet bubble in the 2000s, as shown in Fig. 53.1, the NASDAQ 100 declined much more and underperformed the S&P 500. From Table 53.1, we find that the MVR test statistic does not disappoint us in that it does pick up significant differences in performance between the S&P 500 and the NASDAQ 100 index in September 2001, July 2002, August 2002, and September 2002, whereas the SR test does not conclude any distinguishable performance between the indices. Furthermore, from Table 53.1, we observe that $\hat{\mu}_X > \hat{\mu}_Y$ in September 2001, July 2002, August 2002, and September 2002. This implies that the MVR test statistic can detect the real situation that the NASDAQ 100 index underperformed the S&P 500 index, but the traditional SR test cannot detect any difference. Thus, we conclude that investors could profit from the Internet bubble if they apply the MVR test.
53.5 Concluding Remarks
In this paper, we employ the MVR test statistics developed by Bai et al. (2011d) to examine the performance of the S&P 500 index and the NASDAQ 100 index during the Internet bubble period from January 2000 to December 2002. We illustrate the superiority of the MVR test over the traditional SR test by applying both tests to analyze the performance of the S&P 500 index and the NASDAQ 100 index after the bursting of the Internet bubble in the 2000s. Our findings show that while the traditional SR test concludes that the two indices being analyzed are indistinguishable in their performance, the MVR test statistic shows that the NASDAQ 100 index underperformed the S&P 500 index, which is the real situation
after the bursting of the Internet bubble in the 2000s. This shows the superiority of the MVR test statistic in revealing short-term performance and, in turn, enables investors to make better decisions about their investments. There are two basic approaches to the problem of portfolio selection under uncertainty. One approach is based on the concept of utility theory (Gasbarro et al. 2007; Wong et al. 2006, 2008). Several stochastic dominance (SD) test statistics have been developed; see, for example, Bai et al. (2011a) and the references therein for more information. This approach offers a mathematically rigorous treatment of portfolio selection, but it is not popular among investors since investors would have to specify their utility functions and choose a distributional assumption for the returns before making their investment decisions. The other approach is the mean-risk (MR) analysis that has been discussed in this paper. In this approach, the portfolio choice is made with respect to two measures – the expected portfolio mean return and the portfolio risk. A portfolio is preferred if it has a higher expected return and smaller risk. These are convenient computational recipes and they provide geometric interpretations for the trade-off between the two measures. A disadvantage of the latter approach is that it is derived by assuming the von Neumann-Morgenstern quadratic utility function and that returns are normally distributed (Hanoch and Levy 1969). Thus, it cannot capture the richness of the former approach. Among the MR analyses, the most popular measure is the SR introduced by Sharpe (1966). As the SR requires the strong assumption that the returns of the assets being analyzed are iid, various measures for MR analysis have been developed to improve on the SR, including the Sortino ratio (Sortino and van der Meer 1991), the conditional SR (Agarwal and Naik 2004), the modified SR (Gregoriou and Gueyie 2003), value at risk (Ma and Wong 2010), expected shortfall (Chen 2008), and the mixed Sharpe ratio (Wong et al. 2012). However, most empirical studies, see, for example, Eling and Schuhmacher (2007), find that the conclusions drawn by using these ratios are basically the same as those drawn by the SR. Nonetheless, Leung and Wong (2008) have developed a multiple SR statistic and find that the results drawn from the multiple Sharpe ratio statistic can differ from its counterpart pair-wise SR statistic comparison, indicating that there are some relationships among the assets that have not been revealed using the pair-wise SR statistics. The MVR test could be the right candidate to reveal these relationships. One may claim that the limitation of the MVR test statistic is that it can only draw conclusions for investors with quadratic utility functions and for normally distributed assets. Wong (2006), Wong and Ma (2008), and others have shown that the conclusion drawn from the MR comparison is equivalent to the comparison of expected utility maximization for any risk-averse investor, not necessarily with only a quadratic utility function, and for assets with any distribution, not necessarily the normal distribution, if the assets being examined belong to the same location-scale family. In addition, one can also apply the results from Li and Wong (1999) and Egozcue and Wong (2010) to generalize the result so that it will be valid for any risk-averse investor and for portfolios with any distribution if the portfolios being examined belong to the same convex combinations of (same or different)
location-scale families. The location-scale family can be very large, containing normal distributions as well as t-distributions, gamma distributions, etc. The stock returns could be expressed as convex combinations of normal distributions, t-distributions, and other location-scale families; see, for example, Wong and Bian (2000) and the references therein for more information. Thus, the conclusions drawn from the MVR test statistics are valid for most stationary data, including most, if not all, of the returns of different portfolios. Last, we note that to improve the effectiveness of applying the MVR test in evaluating financial asset performance, one may incorporate other techniques/approaches/models, for example, fundamental analysis (Wong and Chan 2004), technical analysis (Wong et al. 2001, 2003), behavioral finance (Matsumura et al. 1990), prospect theory (Broll et al. 2010; Egozcue et al. 2011), and advanced econometrics (Wong and Miller 1990; Bai et al. 2010, 2011b), to measure the performance of different financial assets and assist investors in making wiser decisions.

Acknowledgment We would like to thank the editor C.-F. Lee for his substantive comments that have significantly improved this manuscript. The third author would also like to thank Professors Robert B. Miller and Howard E. Thompson for their continuous guidance and encouragement. The research is partially supported by grants from North East Normal University, National University of Singapore, Hong Kong Baptist University and the Research Grants Council of Hong Kong. The first author thanks the financial support from NSF China grant 11171057, Program for Changjiang Scholars and Innovative Research Team in University, and the Fundamental Research Funds for the Central Universities and NUS grant R-155-000-141-112.
References Agarwal, V., & Naik, N. Y. (2004). Risk and portfolios decisions involving hedge funds. Review of Financial Studies, 17, 63–98. Bai, Z. D., Liu, H. X., & Wong, W. K. (2009a). Enhancement of the applicability of Markowitz’s portfolio optimization by utilizing random matrix theory. Mathematical Finance, 19, 639–667. Bai, Z. D., Liu, H. X., & Wong, W. K. (2009b). On the markowitz mean-variance analysis of selffinancing portfolios. Risk and Decision Analysis, 1, 35–42. Bai, Z. D., Wong, W. K., & Zhang, B. Z. (2010). Multivariate linear and non-linear causality tests. Mathematics and Computers in Simulation, 81, 5–17. Bai, Z. D., Li, H., Liu, H. X., & Wong, W. K. (2011a). Test statistics for prospect and markowitz stochastic dominances with applications. Econometrics Journal, 14, 278–303. Bai, Z. D., Li, H., Wong, W. K., & Zhang, B. Z. (2011b). Multivariate causality tests with simulation and application. Statistics and Probability Letters, 81, 1063–1071. Bai, Z. D., Liu, H. X., & Wong, W. K. (2011c). Asymptotic properties of eigenmatrices of a large sample covariance matrix. Annals of Applied Probability, 21, 1994–2015. Bai, Z. D., Wang, K. Y., & Wong, W. K. (2011d). Mean-variance ratio test, a complement to coefficient of variation test and Sharpe ratio test. Statistics and Probability Letters, 81, 1078–1085. Bai, Z. D., Hui, Y. C., Wong, W. K., & Zitikis, R. (2012). Evaluating prospect performance: Making a case for a non-asymptotic UMPU test. Journal of Financial Econometrics, 10(4), 703–732. Baker, M., & Stein, J. C. (2004). Market liquidity as a sentiment indicator. Journal of Financial Markets, 7, 271–300.
Broll, U., Wahl, J. E., & Wong, W. K. (2006). Elasticity of risk aversion and international trade. Economics Letters, 91, 126–130. Broll, U., Egozcue, M., Wong, W. K., & Zitikis, R. (2010). Prospect theory, indifference curves, and hedging risks. Applied Mathematics Research Express, 2010, 142–153. Cadsby, C. B. (1986). Performance hypothesis testing with the Sharpe and Treynor measures: A comment. Journal of Finance, 41, 1175–1176. Chan, C. Y., de Peretti, C., Qiao, Z., & Wong, W. K. (2012). Empirical test of the efficiency of UK covered warrants market: Stochastic dominance and likelihood ratio test approach. Journal of Empirical Finance, 19, 162–174. Chen, S. X. (2008). Nonparametric estimation of expected shortfall. Journal of Financial Econometrics, 6, 87–107. Egozcue, M., & Wong, W. K. (2010). Gains from diversification on convex combinations: A majorization and stochastic dominance approach. European Journal of Operational Research, 200, 893–900. Egozcue, M., Fuentes Garcı´a, L., Wong, W. K., & Zitikis, R. (2011). Do investors like to diversify? A study of Markowitz preferences. European Journal of Operational Research, 215, 188–193. Eling, M., & Schuhmacher, F. (2007). Does the choice of performance measure influence the evaluation of hedge funds? Journal of Banking and Finance, 31, 2632–2647. Fong, W. M., Wong, W. K., & Lean, H. H. (2005). International momentum strategies: A stochastic dominance approach. Journal of Financial Markets, 8, 89–109. Fong, W. M., Lean, H. H., & Wong, W. K. (2008). Stochastic dominance and behavior towards risk: The market for internet stocks. Journal of Economic Behavior and Organization, 68, 194–208. Gasbarro, D., Wong, W. K., & Zumwalt, J. K. (2007). Stochastic dominance analysis of iShares. European Journal of Finance, 13, 89–101. Gregoriou, G. N., & Gueyie, J. P. (2003). Risk-adjusted performance of funds of hedge funds using a modified Sharpe ratio. Journal of Wealth Management, 6, 77–83. Hanoch, G., & Levy, H. (1969). The efficiency analysis of choices involving risk. Review of Economic Studies, 36, 335–346. Jobson, J. D., & Korkie, B. (1981). Performance hypothesis testing with the Sharpe and Treynor measures. Journal of Finance, 36, 889–908. Lam, K., Liu, T., & Wong, W. K. (2010). A pseudo-Bayesian model in financial decision making with implications to market volatility, under- and overreaction. European Journal of Operational Research, 203, 166–175. Lam, K., Liu, T., & Wong, W. K. (2012). A new pseudo Bayesian model with implications to financial anomalies and investors’ behaviors. Journal of Behavioral Finance, 13(2), 93–107. Lean, H. H., Phoon, K. F., & Wong, W. K. (2012). Stochastic dominance analysis of CTA funds. Review of Quantitative Finance and Accounting, doi:10.1007/s11156-012-0284-1. Leung, P. L., & Wong, W. K. (2008). On testing the equality of the multiple Sharpe ratios, with application on the evaluation of iShares. Journal of Risk, 10, 1–16. Li, C. K., & Wong, W. K. (1999). Extension of stochastic dominance theory to random variables. RAIRO Recherche Ope´rationnelle, 33, 509–524. Lintner, J. (1965). The valuation of risky assets and the selection of risky investment in stock portfolios and capital budgets. Review of Economics and Statistics, 47, 13–37. Lo, A. (2002). The statistics of Sharpe ratios. Financial Analysis Journal, 58, 36–52. Ma, C., & Wong, W. K. (2010). Stochastic dominance and risk measure: A decision-theoretic foundation for VaR and C-VaR. European Journal of Operational Research, 207, 927–935. Markowitz, H. M. 
(1952). Portfolio selection. Journal of Finance, 7, 77–91. Matsumura, E. M., Tsui, K. W., & Wong, W. K. (1990). An extended multinomial-Dirichlet model for error bounds for dollar-unit sampling. Contemporary Accounting Research, 6, 485–500. Memmel, C. (2003). Performance hypothesis testing with the Sharpe ratio. Finance Letters, 1, 21–23.
Ofek, E., & Richardson, M. (2003). A survey of market efficiency in the internet sector. Journal of Finance, 58, 1113–1138. Perkins, A. B., & Perkins, M. C. (1999). The internet bubble: Inside the overvalued world of hightech stocks. New York: Harper Business. Schultz, P., & Zaman, M. (2001). Do the individuals closest to internet firms believe they are overvalued? Journal of Financial Economics, 59, 347–381. Sharpe, W. F. (1964). Capital asset prices: A theory of market equilibrium under conditions of risk. Journal of Finance, 19, 425–442. Sharpe, W. F. (1966). Mutual funds performance. Journal of Business, 39, 119–138. Sortino, F. A., & van der Meer, R. (1991). Downside risk. Journal of Portfolio Management, 17, 27–31. Sriboonchitta, S., Wong, W. K., Dhompongsa, D., & Nguyen, H. T. (2009). Stochastic dominance and applications to finance, risk and economics. Boca Raton: Chapman and Hall/CRC. Statman, M. (2002). Lottery players/stock traders. Journal of Financial Planning, 14–21. Tobin, J. (1958). Liquidity preference as behavior towards risk. Review of Economic Studies, 25, 65–86. Wong, W. K. (2006). Stochastic dominance theory for location-scale family. Journal of Applied Mathematics and Decision Sciences, 2006, 1–10. Wong, W. K. (2007). Stochastic dominance and mean-variance measures of profit and loss for business planning and investment. European Journal of Operational Research, 182, 829–843. Wong, W. K., & Bian, G. (2000). Robust Bayesian inference in asset pricing estimation. Journal of Applied Mathematics & Decision Sciences, 4, 65–82. Wong, W. K., & Chan, R. (2004). The estimation of the cost of capital and its reliability. Quantitative Finance, 4, 365–372. Wong, W. K., & Chan, R. (2008). Markowitz and prospect stochastic dominances. Annals of Finance, 4, 105–129. Wong, W. K., & Li, C. K. (1999). A note on convex stochastic dominance theory. Economics Letters, 62, 293–300. Wong, W. K., & Ma, C. (2008). Preferences over location-scale family. Economic Theory, 37, 119–146. Wong, W. K., & Miller, R. B. (1990). Analysis of ARIMA-noise models with repeated time series. Journal of Business and Economic Statistics, 8, 243–250. Wong, W. K., Chew, B. K., & Sikorski, D. (2001). Can P/E ratio and bond yield be used to beat stock markets? Multinational Finance Journal, 5, 59–86. Wong, W. K., Manzur, M., & Chew, B. K. (2003). How rewarding is technical analysis? Evidence from Singapore stock market. Applied Financial Economics, 13, 543–551. Wong, W. K., Thompson, H. E., Wei, S., & Chow, Y. F. (2006). Do winners perform better than losers? A stochastic dominance approach. Advances in Quantitative Analysis of Finance and Accounting, 4, 219–254. Wong, W. K., Phoon, K. F., & Lean, H. H. (2008). Stochastic dominance analysis of Asian hedge funds. Pacific-Basin Finance Journal, 16, 204–223. Wong, W. K., Wright, J. A., Yam, S. C. P., & Yung, S. P. (2012). A mixed Sharpe ratio. Risk and Decision Analysis, 3(1–2), 37–65.
54 Quantile Regression in Risk Calibration
Shih-Kang Chao, Wolfgang Karl Härdle, and Weining Wang
Contents
54.1 Introduction
54.2 Methodology
  54.2.1 Constructing Partial Linear Model (PLM) for CoVaR
  54.2.2 Backtesting
  54.2.3 Risk Contribution Measure
54.3 Results
  54.3.1 CoVaR Estimation
  54.3.2 Backtesting
  54.3.3 Global Risk Contribution
54.4 Conclusion
Appendix 1: Local Linear Quantile Regression (LLQR)
Appendix 2: Confidence Band for Nonparametric Quantile Estimator
Appendix 3: PLM Model Estimation
References
Abstract
Financial risk control has always been challenging and becomes now an even harder problem as joint extreme events occur more frequently. For decision makers and government regulators, it is therefore important to obtain accurate € The financial support from the Deutsche Forschungsgemeinschaft via SFB 649 “Okonomisches Risiko,” Humboldt-Universita¨t zu Berlin is gratefully acknowledged S.-K. Chao (*) • W. Wang Ladislaus von Bortkiewicz Chair of Statistics, C.A.S.E. – Center for Applied Statistics and Economics, Humboldt–Universita¨t zu Berlin, Berlin, Berlin, Germany e-mail: [email protected]; [email protected] W.K. Ha¨rdle Ladislaus von Bortkiewicz Chair of Statistics, C.A.S.E. – Center for Applied Statistics and Economics, Humboldt–Universita¨t zu Berlin, Berlin, Berlin, Germany Lee Kong Chian School of Business, Singapore Management University, Singapore, Singapore e-mail: [email protected] C.-F. Lee, J. Lee (eds.), Handbook of Financial Econometrics and Statistics, DOI 10.1007/978-1-4614-7750-1_54, # Springer Science+Business Media New York 2015
1467
1468
S.-K. Chao et al.
information on the interdependency of risk factors. Given a stressful situation for one market participant, one likes to measure how this stress affects other factors. The CoVaR (Conditional VaR) framework has been developed for this purpose. The basic technical elements of CoVaR estimation are two levels of quantile regression: one on market risk factors; another on individual risk factor. Tests on the functional form of the two-level quantile regression reject the linearity. A flexible semiparametric modeling framework for CoVaR is proposed. A partial linear model (PLM) is analyzed. In applying the technology to stock data covering the crisis period, the PLM outperforms in the crisis time, with the justification of the backtesting procedures. Moreover, using the data on global stock markets indices, the analysis on marginal contribution of risk (MCR) defined as the local first order derivative of the quantile curve sheds some light on the source of the global market risk. Keywords
CoVaR • Value-at-Risk • Quantile regression • Locally linear quantile regression • Partial linear model • Semiparametric model
54.1
Introduction
Sufficiently accurate risk measures are needed not only in crisis times. In the last two decades, the world has gone through several financial turmoils, and the financial market is getting riskier and the scale of loss soars. Beside marginal extremes that can shock even a well-diversified portfolio, the focus of intensified research in the recent years has been on understanding the interdependence of risk factors and their conditional structure. The most popular risk measure is the Value-at-Risk (VaR), which is defined as the t-quantile of the return distribution at time t + d conditioned on the information set F t: VaRttþd ¼ inf fx 2 ℝ : PðXtþd xjF t Þ tg: def
(54.1)
Here Xt denotes the asset return and t is taking values such as 0.05, 0.01 or 0.001 to reflect negative extreme risk. Extracting information in economic variables to predict VaR brings quantile regression into play here, since VaR is the quantile of the conditional asset return distribution. Engle and Manganelli (2004) propose the nonlinear Conditional Autoregressive Value-at-Risk (CaViaR) model, which uses (lag) VaR and lag returns. Chernozhukov and Umantsev (2001) propose linear and quadratic time series models for VaR prediction. Kuan et al. (2009) propose the Conditional AutoRegressive Expectile (CARE) model, and argue that expectiles are more sensitive to the scale of losses. These studies and many others apply quantile regression in a prespecified, often linear functional form. In a more nonparametric context, Cai and Wang (2008) estimate the conditioned cdf by a double kernel local linear estimator and find the
54
Quantile Regression in Risk Calibration
1469
quantile by inverting the cdf. Schaumburg (2011) uses the same technique together with extreme value theory for VaR prediction. Taylor (2008) proposes Exponentially Weighted Quantile Regression (EWQR) for estimating VaR time series. The aforementioned studies focus mainly on the VaR estimation for single assets and do not directly take into account the escalated spillover effect in crisis periods. This risk of joint tail events of asset returns has been identified and studied. Further, Brunnermeier and Pedersen (2008) show that the negative feedback effect of a “loss spiral” and a “margin spiral” leads to the joint depreciation of assets prices. It is therefore important to develop risk measures which can quantify the contagion effects of negative extreme event. Acharya et al. (2010) propose the concept of marginal expected shortfall (MES), which measures the contribution of individual assets to the portfolio expected shortfall. Via an equilibrium argument, the MES is shown to be a predictor to a financial institution’s risk contribution. Brownlees and Engle (2012) demonstrate that the MES can be written as a function of volatility, correlation, and expectation conditional on tail events. Huang et al. (2012) propose the distress insurance premium (DIP), a measure similar to MES but computed under the risk-neutral probability. This measure can therefore be viewed as the market insurance premium against the event that the portfolio loss exceeds a low level. Adams et al. (2012) construct financial indices on return of insurance companies, commercial banks, investment banks, and hedge funds, and use a linear model for the VaRs of the four financial indices to forecast the state-dependent sensitivity VaR (SDSVaR). The risk measures proposed above have some shortcomings though: The computation of DIP is demanding since this involves the simulation of rare events. MES suffers from the scarcity of data because it conditions on a rare event. In Adrian and Brunnermeier (2011) (henceforth AB), the CoVaR concept of conditional VaR is proposed, which controls the effect of the negative extreme event of some systemically risky financial institutions. Formally, let C(Xi,t) be some event of a asset i return Xi,t at time t and take Xj,t as another asset return (e.g., the market index). The CoVaRtj|i,t is defined as the t-quantile of the conditional probability distribution: n o P Xj, t CoVaRtjji, t C Xi, t , Mt ¼ t, (54.2) defined inoSect. 54.2.1. The standard where Mt is a vector of market variables n CoVaR approach is to set C Xi, t ¼ Xi, t ¼ VaRtXi, t . In AB, Xj,t is the weekly return which is constructed from a vast data set comprised of all publicly traded commercial banks, broker dealers, insurance companies, and real estate companies in the USA. Further, AB propose DCoVaR (measure of marinal risk contribution) as the difference between CoVaRtjji1, t and CoVaRtjji2, t, where t1 ¼ 0.5 is associated with the normal state and t2 ¼ 0.05 is associated with the financial distress state. The formulation of this conditional risk measure has several advantages. First, the cloning property: After dividing a systemically risky firm into several clones, the value of CoVaR conditioned on the entire firm does not differ from the one conditioned on one of the clones. Second, the conservativeness. The CoVaR value is more
1470
S.-K. Chao et al.
conservative than VaR, because it conditions on an extreme event. Third, CoVaR is endogenously generated and adapted to the varying environment of the market. The recipe of AB for CoVaR construction is as follows: In the first step, one predicts the VaR of an individual asset Xi, t through a linear model on market variables: Xi, t ¼ ai þ gΤi Mt1 þ ei, t ,
(54.3)
where gΤi means the transpose of gi and Mt is a vector of the state variables (see Sect. 54.2.1). This model is estimated with quantile regression of Koenker and Bassett (1978) to get the coefficients ð^a i ; ^g i Þ with F1 ei, t ðtjMt1 Þ ¼ 0. The VaR of asset i is predicted by d i, t ¼ ^a i þ ^g Τ Mt1 : VaR i
(54.4)
In the second step, one models the asset j return as a linear function of asset return i and market variables Mt: Xj, t ¼ ajji þ bjji Xi, t þ gΤjji Mt1 þ ej, t ,
(54.5) Again one employs quantile regression and obtains coefficients ^a jji ; b^jji ; ^g jji . The CoVaR is finally calculated as:
d i, t þ ^g Τ Mt1 : d AB ¼ ^a jji þ b^jji VaR CoVaR jji jji, t
(54.6)
In Eq. 54.5, the variable Xi,t influences the return Xj,t in a linear fashion. However, the linear parametric model may not be flexible enough to capture the tail dependence between i and j. The linearity of the conditioned quantile curves of Xj on Xi is challenged by the confidence bands of the nonparametric quantile curves, as shown in Fig. 54.1. The left tail quantile from linear parametric quantile regression (red) lies well outside the confidence band (gray dashed curve) of Hardle and Song (2010). This motivates empirically that a linear model is not flexible enough for the CoVaR question at hand. Nonparametric models can be used to account for the nonlinear structure of the conditional quantile, but the challenge for using such models is the curse of dimensionality, as the quantile regression in CoVaR modeling often involves many variables. Thus, we resort to semiparametric partial linear model (PLM) which preserves some flexibility of the nonparametric model while suffers little from the curse of dimensionality. As an illustration, the VaR/CoVaR of Goldman Sachs (GS) returns are shown, given the returns of Citigroup (C) and S&P500 (SP). S&P500 index return is used as a proxy for the market portfolio return. Choosing market variables is crucial for the VaR/CoVaR estimation. For the variables representing market states, we follow the most popular choices such as VIX, short-term liquidity spread, etc. In particular, the variable we use for real estate companies is the Dow Jones U.S. real estate index. The daily data date from August 4, 2006 to August 4, 2011.
54
Quantile Regression in Risk Calibration
0.0
1471
0.0
−0.5
−0.5
0.0
0.5
0.0
0.5
Fig. 54.1 Goldman Sachs (GS) and Citigroup (C) weekly returns 0.05(left) and 0.1(right) quantile functions. The y-axis is GS daily returns and the x-axis is the C daily returns. The blue curve are the locally linear quantile regression curves (see Appendix 1). The locally linear quantile regression bandwidth are 0.1026 and 0.0942. The red lines are the linear parametric quantile regression line. The antique white dashed curves are the asymptotic confidence band (see Appendix 2) with significance level 0.05. The sample size N ¼ 546
To see if the estimated VaRs/CoVaRs are accurate, we utilize the backtesting procedures described in Berkowitz et al. (2011). We compare three (Co)VaR estimating methods in this study: VaR computed by linear quantile regression on market variables; CoVaR; PLM CoVaR proposed here. The VaR is one-sided interval prediction, the violations (the asset return exceeds estimated VaR/CoVaR) should happen unpredictably if the VaR algorithm is accurate. In other words, the null hypothesis is that the series of violations of VaR is a martingale difference, given all the past information. Furthermore, if the time series is autocorrelated, we can reject the null hypothesis of martingale difference right away; therefore, autocorrelation tests can be utilized in this context. The Ljung-Box test is not the most appropriate approach here since it has a too strong null hypothesis (i.i.d. sequence). Thus, we additionally apply the Lobato test. The CaViaR test, which is inspired by the CaViaR model, is proposed and shown to have the best overall performance by Berkowitz et al. (2011) among other alternative tests with an exclusive desk-level data set. To illustrate the VaR/CoVaR performances in the crisis time, we separately apply the CaViaR test to the violations of the whole sample period and to the financial crisis period. The results show that during the financial crisis period from mid-2008 to mid-2009, the PLM CoVaR of GS given C performs better than that constructed from the technique of AB and the PLM CoVaR given SP. In particular, these results suggest that with appropriate modeling techniques (accounting for nonlinearity), the CoVaR of GS calculated from conditioning on C reflects some structurally risk which is not reflected from conditioning on market returns such as SP during financial crisis. In contrast to DCoVaR, we use a mathematically more intuitive way to analyze the marginal effect by taking the first order derivative of the quantile function.
1472
S.-K. Chao et al.
We call it “marginal contribution of risk” (MCR). Bae et al. (2003) and many others have pointed out the phenomenon of financial contagion across national borders. This motivates us to consider the stock indices of a few developed markets and explore their risk contribution to the global stock market. MCR results show that when the global market condition varies, the source of global market risk can be different. To be more specific, when the global market return is bad, the risk contribution from the USA is the largest. On the other hand, during financially stable periods, Hong Kong and Japan are more significant risk contributors than the USA to the global market. This study is organized as follows: Sect. 54.2 introduces the construction and the estimation of the PLM model of CoVaR. The backtesting methods and our risk contribution measure are also introduced in this section. Section 54.3 presents the Goldman Sachs CoVaR time series and the backtesting procedure results. Section 54.4 presents the conclusion and possible further studies. Appendices describe the detailed estimation and statistical inference procedures used in this study.
54.2
Methodology
Quantile regression is a well-established technique to estimate the conditional quantile function. Koenker and Bassett (1978) focus on the linear functional form. An extension of linear quantile regression is the PLM quantile regression. A partial linear model for the dynamics of assets return quantile is constructed in this section. The construction is justified by a linearity test based on a conservative uniform confidence band proposed in Hardle and Song (2010). For more details on semiparametric modeling and PLM, we refer to Ha¨rdle et al. (2004) and Ha¨rdle et al. (2000). The backtesting procedure is done via the CaViaR test. Finally, the methodology of MCR is introduced, which is an intuitive marginal risk contribution measure. We will apply the method to a data set of global market indices in developed countries.
54.2.1 Constructing Partial Linear Model (PLM) for CoVaR Recall how the CoVaR is constructed: d t ¼ ^a i þ ^g i Mt1 , VaRi, AB d d i, t þ ^gΤ Mt1 : CoVaR a jji þ b^jji VaR jji, t ¼ ^ jji where ð^ a i ; ^g i Þ and ^a jji ; b^jji ; ^g jji are estimated from a linear model using standard linear quantile regression. We have motivated the need for more general functional forms for the quantile curve. We therefore relax the model to a non- or semiparametric model. The market
54
Quantile Regression in Risk Calibration
1473
variable Mt is multidimensional, and the data frequency here is daily. The following key variables are entering our analysis: 1. VIX: Measuring the model-free implied volatility of the market. This index is known as the “fear gauge” of investors. The historical data can be found in the Chicago Board Options Exchange’s website. 2. Short-term liquidity spread: Measuring short-term liquidity risk by the difference between the 3-month treasury repo rate and the 3-month treasury bill rate. The repo data is from the Bloomberg database and the treasury bill rate data is from the Federal Reserve Board H.15. 3. The daily change in the 3-month treasury bill rate: AB find that the changes have better explanatory power than the levels for the negative tail behavior of asset returns. 4. The change in the slope of the yield curve: The slope is defined by the difference of the 10-year treasury rate and the 3-month treasury bill rate. 5. The change in the credit spread between 10 years BAA-rated bonds and the 10 years treasury rate. 6. The daily Dow Jones U.S. Real Estate index returns: The index reflects the information of lease rates, vacancies, property development, and transactions of real estates in the USA. 7. The daily S&P500 index returns: The approximate of the theoretical market portfolio returns. The variables 3, 4, 5 are from the Federal Reserve Board H.15 and the data of 6 and 7 are from Yahoo Finance. First we conduct a statistical check of the linearity between GS return and the market variables using the confidence band as constructed in Appendix 2. As shown in Fig. 54.2, except for some ignorable outsiders, the linear quantile regression line lies in the LLQR asymptotic confidence band. On the other hand, there is nonlinearity between two individual assets Xi and Xj. To illustrate this, we regress Xj on Mt, and then take the residuals and regress them on Xi. Again the Xj,t is GS daily return and Xi is C daily return. The result is shown in Fig. 54.3. The linear QR line (red) lies well outside the LLQR confidence band (magenta) when the C return is negative. The linear quantile regression line is fairly flat. The risk of using a linear model is obvious in this figure: The linear regression can “average out” the humped relation of the underlying structure (blue), and therefore imply a model risk in estimation. Based on the results of the linearity tests above, we construct a PLM model: Xi, t ¼ ai þ gΤi Mt1 þ ei, t ,
(54.7)
Τ Xj, t ¼ ^a jji þb^jji Mt1 þ ljji Xi, t þ ej, t ,
(54.8)
where Xi,t, Xj,t are asset returns of i, j firms. Mt is a vector of market variables at time t as introduced before. If i ¼ S&P500, Mt is set to consist of the first 6 market variables only. Notice the variable Xi,t enter the Eq. 54.8 nonlinearly.
1474
S.-K. Chao et al.
0.2 0.0
0.0
−0.4
−0.3 0.1
0.3
0.5
−1.5
0.7
VIX
−1.0
−0.5
0.0
0.5
Liquidity Spread
0.2
0.2
0.0 0.0 −0.3
−0.2 −0.5
0.0 0.5 Change in yields of 3 mon. TB
0.00
0.2
0.2
0.0
0.0
−0.3
−0.3 −0.001
0.01 0.02 0.03 Slope of yield curve
0.04
−0.05
0.001 0.003 Credit Spread
0.00 0.05 0.10 S&P500 Index Returns
0.2 0.0 −0.2 −0.2
−0.1
0.0
0.1
0.2
DJUSRE Index Returns
Fig. 54.2 The scatter plots of GS daily returns to the seven market variables with the LLQR curves. The bandwidths are selected by the method described in Appendix 1. The LLQR bandwidths are 0.1101, 0.1668, 0.2449, 0.0053, 0.0088, 0.0295 and 0.0569. The data period is from August 4, 2006, to August 4, 2011. N ¼ 1260. t ¼ 0.05
Applying the algorithm of Koenker and Bassett (1978) to Eq. n54.7 and the o process described in Appendix 3 to Eq. 54.8, we get f^a i ; ^g i g and ^a jji , b^i , ^l ðÞ 1 with F1 ei, t ðtjMt1 Þ ¼ 0 for Eq. 54.7 and Fei, t tjM t1 , X i, t ¼ 0 for Eq. 54.8. Finally, we estimate the PLM CoVaRjji,t by
54
Quantile Regression in Risk Calibration
1475
0.2 0.0 −0.2 −0.4 −0.6 −0.8 −0.15
−0.10
−0.05
0.00
0.05
0.10
Fig. 54.3 The nonparametric part l^GSjC ðÞ of the PLM estimation. The y-axis is the GS daily returns. The x-axis is the C daily returns. The blue curve is the LLQR quantile curve. The red line is the linear parametric quantile line. The magenta dashed curves are the asymptotic confidence band with significance level 0.05. The data is from June 25, 2008, to December 23, 2009. 378 observations. Bandwidth ¼ 0.1255. t ¼ 0.05
d i, t ¼ ^a i þ ^g Τi Mt1 , VaR
(54.9)
PLM ^Τ d d i, t : a^jji þ bej Mt1 þ ^l jji VaR CoVaR jji, t ¼ e
(54.10)
54.2.2 Backtesting The goal of the backtesting procedure is to check if the VaR/CoVaR is accurate enough so that managerial decisions can be made based on them. The VaR forecast is a (one-sided) interval forecast. If the VaR algorithm is correct, then the violations should be unpredictable, after using all the past information. Formally, if we define the violation time series as It ¼
1, 0,
dt ; if Xt < VaR t otherwise:
dt can be replaced by CoVaR d t in the case of CoVaR. It should form where VaR t t a sequence of martingale difference. There is a large literature on martingale difference tests. We adopt Ljung-Box test, Lobato test, and the CaViaR test. The Ljung-Box test and Lobato test aim to check whether the time series is autocorrelated. If the time series is autocorrelated, then we reject of course the hypothesis that the time series is a martingale difference. ^ k be the estimated autocorrelation of lag k of the sequence Particularly, let r of violation {It} and n be the length of the time series. The Ljung-Box test statistics is
1476
S.-K. Chao et al.
LBðmÞ ¼ nðn þ 2Þ
m X ^ 2k r L ! wðmÞ, nk k¼1
(54.11)
as n ! 1. This test is too strong though in the sense that the asymptotic distribution is derived based on the i.i.d. assumption. A modified Box-Pierce test is proposed by Lobato et al. (2001), who also consider the test of no autocorrelation, but their test is more robust to the correlation of higher (greater than the first) moments. (Autocorrelation in higher moments does not contradict with the martingale difference hypothesis.) The test statistics is given by Lð m Þ ¼ n as n ! 1, where ^v kk ¼
1 n
m X ^ 2k L r ! wðmÞ, ^v k¼1 kk
Xnk
2 ðyi yÞ2 yiþk y : n Xn o2 2 1 ð y y Þ N i¼1 i i¼1
The CaViaR test, proposed by Berkowitz et al. (2011), is based on the idea that if the sequence of violation is a martingale difference, there ought to be no correlation between any function of the past variables and the current violation. One way to test this uncorrelatedness is through a linear model. The model is I t ¼ a þ b1 I t1 þ b2 VaRt þ ut , where VaRt can be replaced by CoVaRt in the case of conditional VaR. The residual ut follows a Logistic distribution since It is binary. We get the estimates of the Τ coefficients b^1 ; b^2 . Therefore, the null hypothesis is b^1 ¼ b^2 ¼ 0 . This hypothesis can be tested by Wald’s test. We set m ¼ 1 or 5 for the Ljung-Box and Lobato tests. For the CaViaR test, two data periods are considered separately. The first is the overall data from August 4, 2006, to August 4, 2011. The second is the data from August 4, 2008, to August 4, 2009, the period when the financial market reached its bottom. By separately testing the two periods, we can gain more insights into the PLM model.
54.2.3 Risk Contribution Measure The risk contribution of one firm to the market is one of the top concerns among central bankers. The regulator can restrict the risky behaviors of the financial institution with high-risk contribution to the market, and reduce the institution’s incentive to take more risk. AB propose the idea of DCoVaR, which is defined by
54
Quantile Regression in Risk Calibration
DCoVaRtjji, t ¼ CoVaRtjji, t CoVaR0:5 jji, t :
1477
(54.12)
t where CoVaRjji,t is defined as in the introduction. j, i represent the financial system and an individual asset. t ¼ 0.5 corresponds to the normal state of the individual asset i. This is essentially a sensitivity measure quantifying the effect to the financial system from the occurrence of a tail event of asset Xi. In this study, we adopt a mathematically intuitive way to measure the marginal effect by searching the first order derivative of the quantile function. Because the spillover effect from stock market to stock market has already got much attention, it is important to investigate the risk contribution of a local market to the global stock market. The estimation is conducted as follows: First, one estimates the following model nonparametrically:
Xj, t ¼ f 0:05 ðX t Þ þ ej , j
(54.13)
The quantile function fj 0.05() is estimated with local linear quantile regression with t ¼ 0.05, described with more details in Appendix 1. Xj is the weekly return of the stock index of an individual country and X is the weekly return of the global stock market. 0:05 Second, with f^j ðÞ, we compute the “marginal contribution of risk” (MCR) of institution j by ^0:05 ðxÞ @ f j 1 MCRtj ¼ , (54.14) @x x ¼ F^x ðtk Þ 1
where F^ ðtk Þ is a consistent estimator of the tk quantile of the global market return, and it can be estimated by regressing Xt on the time trend. We put k ¼ 1, 2 with t1 ¼ 0.5 and t2 ¼ 0.05. The quantity Eq. 54.14 is similar to the MES proposed by Acharya et al. (2010) in the sense that the conditioned event belongs to the information set of the market return, but we reformulate it in the VaR framework instead of the expected shortfall framework. There are some properties of the MCR to be described further. First, tk determines the condition of the global stock market. This allows us to explore the risk contribution from the index j to the global market, given different global market status. Second, the higher the value of MCR, the more risk factor j imposes on the market in terms of risk. Third, since the function fj 0.05() is estimated by LLQR, the quantile curve is locally linear, and therefore, the local first order derivative is straightforward to compute. We choose indices j ¼ S&P500, NIKKEI225, FTSE100, DAX30, CAC40, Hang Seng as the approximate of the market returns of each developed country or market. The global market is approximated by the MSCI World (developed countries) market index. The data is weekly from April 11, 2004, to April 11, 2011, and t ¼ 0.05.
1478
S.-K. Chao et al.
54.3
Results
54.3.1 CoVaR Estimation The estimation results of VaR/CoVaR are shown in this section. We compute three types of VaR/CoVaR of GS, with a moving window size of 126 business days and t ¼ 0.05. First, the VaR of GS is estimated using linear quantile regression: d GS, t ¼ ^a GS þ^g ΤGS Mt1 , VaR
(54.15)
Mt 2 ℝ is introduced in Sect. 54.2.1. Second, the CoVaR of GS given C returns is estimated: 7
d C, t ¼ ^a C þ^g ΤC Mt1 ; VaR
(54.16)
d AB d C, t þ ^gΤ Mt1 : a GSjC þ b^GSjC VaR CoVaR GSjC, t ¼ ^ GSjC
(54.17)
If the SP replaces C, the estimates are generated from e d SP, t ¼ ^a SP þ ^gΤ M VaR SP t1 ; d AB d SP, t þ ^gΤ M e a GSjSP þ b^GSjSP VaR CoVaR GSjSP, t ¼ ^ GSjSP t1 ,
(54.18) (54.19)
e t 2 ℝ6 is the vector of market variables without the market portfolio return. where M Third, the PLM CoVaR is generated: d C, t ¼ ^a C þ ^gΤ Mt1 ; VaR C
(54.20)
^Τ d PLM ¼ e d C, t : a^GSjC þ beGSjC Mt1 þ ^l GSjC VaR CoVaR GSjC, t
(54.21)
If SP replaces C: e d SP, t ¼ ^a SP þ ^gΤ M VaR SP t1 ; ^Τ d PLM ¼ e e t1 þ ^ d SP, t : a^GSjSP þ beGSjSP M l GSjSP VaR CoVaR GSjSP, t
(54.22) (54.23)
The coefficients in Eqs. 54.15–54.20, and 54.22 are estimated from the linear quantile regression and those in Eqs. 54.21 and 54.23 are estimated from the method described in Appendix 3. d GS, t sequence. The VaR forecasts (red) seem to form Figure 54.4 shows the VaR a lower cover of the GS returns (blue). This suggests that the market variables Mt
54
Quantile Regression in Risk Calibration
d GS, t . The Fig. 54.4 The VaR d GS, t and red line is the VaR blue stars are daily returns of GS. The dark green curve is the median smoother of the d GS, t curve with h ¼ 2.75. VaR t ¼ 0.05. The window size is 252 days
1479
0.2
0.0
−0.2 2007
2008
2009
2010
2011
0.2
0.0
−0.2 2007
2008
2009
2010
2011
Fig. 54.5 The CoVaR of GS given the VaR of C. The gray dots mark the daily returns of GS. The d PLM . The dark blue curve is the median LLQR smoother of light green dashed curve is the CoVaR GSjC, t d AB . The purple the light green dashed curve with h ¼ 3.19. The cyan dashed curve is the CoVaR GSjC, t curve is the median LLQR smoother of the cyan dashed curve with h ¼ 3.90. The red curve is the d GS, t . t ¼ 0.05. The moving window size is 126 days VaR
have some predictive power for the left tail quantile of the GS return distribution. d AB d PLM Figure 54.5 shows the sequences CoVaR GSjSP, t (cyan) and CoVaRGSjC, t (light green). As the time series of the estimates is too volatile, we smooth it further by the median LLQR. The two estimates are similar as the market state is stable, but during the period of financial instability (from mid-2008 to mid-2009), the two estimates have different behavior. The performances of these estimates are evaluated by backtesting procedure in Sect. 54.3.2. Table 54.1 shows the summary statistics of the VaR/CoVaR estimates. The first d GS, t , VaR d C, t , and VaR d SP, t . The three rows show the summary statistics of VaR d GS, t has lower mean and higher standard deviation than the other two. ParticVaR ularly during 2008–2009, the standard deviation of the GS VaR is twice as much as d C, t and VaR d SP, t the other two. The mean and standard deviation of the VaR d PLM , are rather similar. The last four rows show the summary statistics of CoVaR GSjC, t
1480
S.-K. Chao et al.
Table 54.1 VaR/CoVaR summary statistics. The overall period is from August 4, 2006, to August 4, 2011. The crisis period is from August 4, 2008, to August 4, 2009. The numbers in the table are scaled up by 102 mean-overall 3.66
sd-overall 3.08
mean-crisis 7.43
sd-crisis 4.76
2.63
1.67
4.62
2.25
d SP, t VaR d PLM CoVaR
2.09
1.57
3.88
2.24
4.26
3.84
8.79
5.97
d AB CoVaR GSjC, t
4.60
4.30
10.36
6.32
d PLM CoVaR GSjSP, t
3.86
3.30
8.20
4.69
d AB CoVaR GSjSP, t
5.81
4.56
12.65
5.56
d GS, t VaR d C, t VaR
GSjC, t
d PLM , and CoVaR d AB d AB , CoVaR CoVaR GSjC, t GSjSP, t GSjSP, t . This shows that the CoVaR obtaining from the AB model has smaller mean and greater standard deviation than the CoVaR obtaining from PLM model. Figure 54.6 shows the bandwidth sequence of the nonparametric part of the PLM estimation. The bandwidth varies with time. Before mid-2007, the bandwidth sequence is stably jumping around 0.2. After that the sequence becomes very volatile. This may have something to do with the rising systemic risk.
54.3.2 Backtesting For the evaluation of the CoVaR models, we resort to the backtesting procedure described in Sect. 54.2.2. In order to perform the backtesting procedure, the sequences {It} (defined in Sect. 54.2.2) have to be computed for all VaR/CoVaR estimates. Figure 54.7 shows the timings of the violations d PLM , CoVaR d AB d {t : It ¼ 1} of CoVaR GSjC, t GSjC, t and VaR GS, t . This figure shows the total d GS, t has more number of violations of PLM CoVaR and CoVaR is similar, while VaR d violations than the both. The VaR GS, t has a few clusters of violations in both d GS, t to financial stable and unstable periods. This may result from the failure VaR PLM d adapt for the negative shocks. The violations of CoVaRGSjC, t are more evenly d AB distributed. The violations of CoVaR GSjC, t have large clusters during financially stable period, while the violation during financial crisis period is meager. This d AB tend to overreact, as it is slack during the stable contrast suggests that CoVaR GSjC, t period but is too tight during the unstable period. d PLM , Figure 54.8 shows the timings of the violations {t : It ¼ 1} of CoVaR GSjSP, t AB d PLM d d CoVaR GSjSP, t , and VaR GS, t . The overall number of violations of CoVaRGSjSP, t is d PLM d GS, t , and it has many clusters. CoVaR more than that of VaR GSjSP, t behaves PLM d differently from CoVaRGSjC, t. The SP may not be more informative than C, though
54
Quantile Regression in Risk Calibration
1481
0.4 0.1 2007
2008
2009
2010
2011
PLM ^ Fig. 54.6 LLQR bandwidth in the daily estimation of CoVaR GSjC, t . The average bandwidth is 0.24
Fig. 54.7 The timings of violations {t : It ¼ 1}. The top circles are the violations of d PLM , totally the CoVaR GSjC, t 95 violations. The middle squares are the violations of d AB , totally CoVaR GSjC, t 98 violations. The bottom stars are the violations of d GS, t , totally VaR 109 violations. Overall data N ¼ 1260
2007
2008
2009
2010
2011
d AB the efficient market hypothesis suggests so. The violation of CoVaR GSjSP, t is fewer than the other two measures, and the clustering is not significant. The backtesting procedure is performed separately for each sequence of {It}. The null hypothesis is that each sequence of {It} forms a series of martingale difference. Six different tests are applied for each {It}: Ljung-Box tests with lags 1 and 5, Lobato test with lags 1 and 5, and finally the CaViaR test with two data periods: overall and crisis period. d GS, t is The result is shown in Table 54.2. First, in Panel 1 of Table 54.2, the VaR rejected by the LB(5) test and the two CaViaR tests. This shows that a linear quantile regression on the seven market variables may not give accurate estimates, d GS, t does not form a martingale sequence. in the sense that the violation {It} of VaR d AB d PLM . In Panel 2, the low p-values of Next we turn to the CoVaR and CoVaR GSjSP, t GSjSP, t the two CaViaR tests show that both the AB model and PLM model conditioned on SP are rejected, though the p-value of the AB model almost reaches the 5 % d PLM significant level. In particular, the CoVaR GSjSP, t is rejected by the L(5) and LB (5) tests. Both the parametric and semiparametric models fail with this choice of variable. This suggests that the market return does not provide enough information in risk measurement. We therefore need more informative variables. Panel 3 of Table 54.2 illustrates this by using C daily returns, which may contain information not revealed in the
1482
S.-K. Chao et al.
Fig. 54.8 The timings of violations {t : It ¼ 1}. The top circles are the violations of d PLM , totally CoVaR GSjSP, t 123 violations. The middle squares are the violations of d AB CoVaR GSjSP, t , totally 39 violations. The bottom stars are the violations of d GS, t , totally VaR 109 violations. Overall data N ¼ 1,260 2007
2008
2009
2010
2011
Table 54.2 Goldman Sachs VaR/CoVaR backtesting p-values. The overall period is from August 4, 2006, to August 4, 2011. The crisis period is from August 4, 2008, to August 4, 2009. LB(1) and LB(5) are the Ljung-Box tests of lags 1 and 5. L(1) and L(5) are the Lobato tests of lags 1 and 5. CaViaR-overall and CaViaR-crisis are two CaViaR tests described in Sect. 2.2 applied on the two data periods Measure Panel 1 d GS, t VaR
LB(1)
LB(5)
L(1)
L(5)
CaViaR-overall
CaViaR-crisis
0.3449
0.0253*
0.3931
0.1310
1.265 106***
0.0024**
Panel 2 d AB CoVaR
0.0869
0.2059
0.2684
0.6586
8.716 107***
0.0424*
d PLM CoVaR GSjSP, t
0.0518
0.0006***
0.0999
0.0117*
2.2 1016***
0.0019**
Panel 3 d AB CoVaR
0.0489*
0.2143
0.1201
0.4335
3.378 109***
0.0001***
d PLM CoVaR GSjC, t
0.8109
0.0251*
0.8162
0.2306
2.946 109***
0.0535
GSjSP, t
GSjC, t
* **
,
and
***
denote significance at the 5 %, 1 % and 0.1 % levels
d AB market and improve the performance of the estimates. The CoVaR GSjC, t is rejected by the two CaViaR tests and the LB(1) test with 0.1 % and 5 % significant level. d PLM is not rejected by the CaViaR-crisis test. This implies that However, CoVaR GSjC, t the nonparametric part in the PLM model captures the nonlinear effect of C returns to GS returns, which can lead to better risk-measuring performance.
54.3.3 Global Risk Contribution In this section, we present the MCR (defined in Sect. 54.2.3), which measures the marginal risk contribution of risk factors. We choose t1 ¼ 0.5, associated to the
54
Quantile Regression in Risk Calibration
Fig. 54.9 The M C Rtj 1 , t ¼ 0.5. j:CAC, FTSE, DAX, Hang Seng, S&P500 and NIKKEI225. The global market return is approximated by MSCI World
1483
0.6 0.4 0.2 0.0 −0.2 2005
2006
2007
2008
2009
2010
2011
2008
2009
2010
2011
1.5
1.0
0.5
M CRtj 2 ,
Fig. 54.10 The t ¼ 0.05. j:CAC, FTSE, DAX, Hang Seng, S&P500 and NIKKEI225. The global market return is approximated by MSCI World
0.0
2005
2006
2007
normal (median) state and t2 ¼ 0.05, associated to an negative extreme state. Figure 54.9 shows the M C Rtj 1 from local markets j to the global market. When the MSCI World is at its normal state, the Hang Seng index in normal times contributes the most risk to the MSCI World at all times. The NIKKEI225 places second; the contribution from S&P500 varies most with the time; the risk contribution from DAX30 is nearly zero. The contributions from CAC40 and FTSE100 are negative. Assuming that the MSCI World is at its bad state (t2 ¼ 0.05), the M C Rtj 2 differs from M C Rtj 1 , see Fig. 54.10. One sees that the S&P500 imposes more pressure on the world economy than the other countries, especially during the financial crisis of 2008 and 2009. The contribution from Hang Seng is no longer of the same significance. The three European markets are relatively stable. This analysis suggests that the risk contribution from individual stock market varies a lot with the state of global economy.
1484
54.4
S.-K. Chao et al.
Conclusion
In this study, we construct a PLM model for the CoVaR, and we compare it to the AB model by backtesting. Results show that PLM CoVaR is preferable, especially during a crisis period. The study of the MCR reveals the fact that the risk from each country can vary with the state of global economy. As an illustration, we only study the Goldman Sachs conditional VaR with Citigroup and S&P500 as conditioned risk sources. In practice, we need to choose variables. In Hautsch et al. (2011), the Least Absolute Shrinkage and Selection Operator (LASSO) techniques are used to determine the most relevant systemic risk sources from a pool of financial institutions. A VAR (Vector Autoregression) model may be also suitable for capturing the asset dynamics, but the estimation may be more involved. We may include other firm-specific variables such as corporate bond yields as these variables can bear other information which is not included in the stock returns or stock indices.
Appendix 1: Local Linear Quantile Regression (LLQR) Let {(Xi, Yi)}in ¼ 1 ℝ2 be independently and identically distributed (i.i.d.) bivariate random variables. Denote by FY|x(u) the conditional cumulative distribution function (cdf) and l(x) ¼ F1 Y|x (t) the conditional quantile curve to level t, given observations {(xi, yi)}ni ¼ 1, one may write this as yi ¼ lðxi Þ þ ei , 1 (t) ¼ 0. A locally linear kernel quantile estimator (LLQR) is estimated as with Fe|x ^ l ðx 0 Þ ¼ ^ a 0 from: n x x X i 0 ^a 0 ; ^b 0 ¼ argmin K (54.24) h fa0 , b0 g i¼1
rt fyi a0 b0 ðxi x0 Þg,
(54.25)
where h is the bandwidth, K(·) is a kernel, and rt(·) is the check function given by rt ðuÞ ¼ t 1fu þ 12flogd þ loglogng, > p1=2 < ð2d lognÞ þ ð2d logn dn ¼ if c1 ðK > 0; > > : ð2d lognÞ1=2 þ ð2d logn1=2 logfc2 ðKÞgg, otherwise: 2p
Then " P
( ð2d lognÞ1=2 sup r ðxÞ x2J
! expf2expðzÞg,
jln ðxÞ lðxÞj lðK Þ1=2
) dn
# 1 to form their expectations. Thus, f(X) can be written as f(X, l), where f(X, l ¼ 1) represents the realistic cash flow distribution, while f(X, l > 1) 5
Palmon and Venezia (2012) explore the effect of managerial overconfidence on the firm’s stockholders and show that overconfidence may improve welfare. However, that study does not investigate the optimal strike price of managerial incentive options. 6 In our model we assume symmetry of information between the manager and the firm regarding the distribution of cash flows of the firm except for the different view of the effect of the manager’s effort on cash flows.
55
Strike Prices of Options for Overconfident Executives
1495
represents the cash flow distribution as viewed by overconfident managers. For notation brevity, we suppress the l in f(X, l). By the known properties of the 2 distribution, the mean and variance of X equal e½mðYÞþ0:5s and hlognormal2 i 2 e½2mðYÞþs es 1 , respectively. Thus, it follows from Eq. 55.2 that a person with a l overconfidence measure believes that the mean of the cash flows X is 2 emðYÞþ0:5s ¼ m0 þ 500lY and that their coefficient of variation is approximately s.7 Since managers and stockholders differ in their perception of the distributions of cash flows, one must be careful in their use. In what follows we refer to the distribution of cash flows as seen by stockholders as the realistic distribution, and will make a special note whenever the manager’s overconfident beliefs are used. Except for her overconfidence, the manager is assumed to be rational and to choose her extra effort so as to maximize the expected value of the following utility function which exhibits constant relative risk aversion (CRRA) with respect to compensation: UðI; YÞ ¼
1 1 1g NYb þ I 1g 1g
(55.3)
In Eq. 55.3, I denotes the manager’s monetary income, g denotes the constant relative risk aversion measure, N is a scaling constant representing the importance of effort relative to monetary income in the manager’s preferences, and the positive parameter b is related to the convexity of the disutility of effort. Since stockholders cannot observe the manager’s extra effort, they propose compensation schemes that depend on the observed cash flows, but not on Y. Stockholders, which we assume to be risk neutral, strive to make the compensation performance sensitive in order to better align the manager’s incentives with their own. Stockholders offer the manager a compensation package that includes two components: a fixed wage (W) that she will receive regardless of her extra effort and of the resulting cash flows and options with a strike price (K) for a fraction (s) of the equity of the firm. We assume that stockholders offer the contract that maximizes the value of their equity. The following timeline of decisions is assumed. At the beginning of the period, the firm chooses the parameters of the compensation contract (K, W, and s) and offers this contract to the manager. Observing the contract parameters, and taking into account the effects of her endeavors on firm cash flows and hence on her compensation, the manager determines the extra-effort level Y that maximizes her expected utility. At the end of the period, X is revealed, and the firm distributes the cash flows to the manager and to the stockholders and then dissolves. The priority of payments is as follows. The firm first pays the wages or only part of them if the cash flows do not suffice. If the cash flows exceed the wage, W, but not (K + W), then the h 2 i More precisely the square of the coefficient of variation is es 1 which can be approximated by s2 since for any small z, ez1 is close to z.
7
1496
O. Palmon and I. Venezia
managers just receive their fixed wage. The managers are paid the value of the options s(X K W), in addition to W if X exceeds K + W. The manager therefore receives the cash flows I(X) defined by 8 9 0 and a0 l0 > 0, belongs to the linear exponential (or Pearson) family with a closed form cumulative distribution. a0 and l0 are fixed parameters of the model. The constant elasticity of variance or CEV model is specified as follows: dXðtÞ ¼ a0 XðtÞdt þ s0 XðtÞb0 =2 dW ðtÞ, where W(t) is a standard Brownian motion and a0, s0, and b0 are fixed constants. Of note is that the interpretation of this model depends on b0, i.e., in the case of stock prices, if b0 ¼ 2, then the price process X(t) follows a lognormal diffusion; if b0 < 2, then the model captures exactly the leverage effect as price and volatility are inversely correlated. Amongst other authors, Beckers (1980) used this CEV model for stocks, Marsh and Rosenfeld (1983) apply a CEV parametrization to interest rates, and Emanuel and Macbeth (1982) utilize this setup for option pricing. The generalized constant elasticity of variance model is defined as follows: dXðtÞ ¼
a0 XðtÞð1b0 Þ þ l0 XðtÞ dt þ s0 XðtÞb0 =2 dW ðtÞ,
where the notation follows the CEV case. l0 is another parameter of the model. This process nests log diffusion when b0 ¼ 2 and nests square root diffusion when b0 ¼ 1. Brennan and Schwartz (1979) and Courtadon (1982) analyze the model: dXðtÞ ¼ ða0 þ b0 XðtÞÞdt þ s0 XðtÞ2 dW ðtÞ, where a0, b0, s0 are fixed constants and W(t) is a standard Brownian motion.
1516
D. Duong and N.R. Swanson
Duffie and Kan (1996) study the specification: dXðtÞ ¼ ða0 þ XðtÞÞdt þ
pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi b0 þ g0 XðtÞdW ðtÞ,
where W(t) is a standard Brownian motion and a0, b0, and g0 are fixed parameters. Aїt-Sahalia (1996) looks at a general case with general drift and CEV diffusion: dXðtÞ ¼ a0 þ b0 XðtÞ þ g0 XðtÞ2 þ 0 =XðtÞ dt þ s0 XðtÞb0 =2 dW ðtÞ: In the above expression, a0, b0, g0, 0, s0, and b0 are fixed constants and W(t) is again a standard Brownian motion.
56.2.1.2 Diffusion Models with Jumps For term structure modeling in empirical finance, the most widely studied class of models is the family of affine processes, including diffusion processes that incorporate jumps. Affine Jumps Diffusion Model: X(t) is defined to follow an affine jump diffusion if dXðtÞ ¼ k0 ða0 XðtÞÞdt þ O0
pffiffiffiffiffiffiffiffiffi DðtÞdW ðtÞ þ dJ ðtÞ,
where X(t) is an N-dimensional vector of variables of interest and is a cadlag process, W(t) is an N-dimensional independent standard Brownian motion, k0 and O0 are square N N matrices, a0 is a fixed long-run mean, and D(t) is a diagonal matrix with ith diagonal element given by dii ðtÞ ¼ y0i þ d00i XðtÞ: In the above expressions, y0i and d0i 0 are constants. The jump intensity is assumed to be a positive, affine function of X(t), and the jump size distribution is assumed to be determined by its conditional characteristic function. The attractive feature of this class of affine jump diffusions is that, as shown in Duffie et al. (2000), it has an exponential affine structure that can be derived in closed form, i.e., FðXðtÞÞ ¼ exp aðtÞ þ bðtÞ0 XðtÞ , where the functions a(t) and b(t) can be derived from Riccati equations.6 Given a known characteristic function, one can use either GMM to estimate the
6
For details, see Singleton (2006), p. 102.
56
Density and Conditional Distribution-Based Specification Analysis
1517
parameters of this jump diffusion, or one can used quasi-maximum likelihood (QML), once the first two moments are obtained. In the univariate case without jumps, as a special case, this corresponds to the above general CIR model with jumps.
56.2.1.3 Multifactor and Stochastic Volatility Model Multifactor models have been widely used in the literature, particularly in option pricing, term structure, and asset pricing. One general setup has (X(t), V(t))0 ¼ (X(t), V1(t), . . ., Vd(t))0 where only the first element, the diffusion process Xt, is observed while V(t) ¼ (V1(t), . . ., Vd(t))dx1 0 is latent. In addition, X(t) can be dependent on V(t). For instance, in empirical finance, the most well-known class of the multifactor models is the stochastic volatility model expressed as dXðtÞ dV ðtÞ
! ¼
b1 ðXðtÞ, y0 Þ b2 ðV ðtÞ, y0 Þ þ
! dt þ
s12 ðV ðtÞ, y0 Þ s22 ðV ðtÞ, y0 Þ
s11 ðV ðtÞ, y0 Þ 0
!
! dW 1 ðtÞ (56:4)
dW 2 ðtÞ,
where W1(t)11 and W2(t)11 are independent standard Brownian motions and V(t) is latent volatility process. b1(·) is a function of X(t) and b2(·), s11(·), s22(·), and s22(·) are general functions of V(t), such that system of Eq. 56.4 is well defined. Popular specifications are the square root model of Heston (1993), the GARCH diffusion model of Nelson (1990), lognormal model of Hull and White (1987), and the eigenfunction models of Meddahi (2001). Note that in this stochastic volatility case, the dimension of volatility is d ¼ 1. More general setup can involve d driving Brownian motions in V(t) equation. As an example, Andersen and Lund (1997) study the generalized CIR model with stochastic volatility, specifically dXðtÞ ¼ kx0 ðx0 XðtÞÞdt þ
pffiffiffiffiffiffiffiffiffi V ðtÞdW 1 ðtÞ,
dXðtÞ ¼ kv0 ðv0 V ðtÞÞdt þ sv0
pffiffiffiffiffiffiffiffiffi V ðtÞdW 2 ðtÞ,
where X(t) and V(t) are price and volatility processes, respectively, kx0, kv0 > 0 to ensure stationarity, x0 is the long-run mean of (log) price process, and v0 and sv0 are constants. W1(t) and W2(t) are scalar Brownian motions. However, W1(t) and W2(t) are correlated such that dW1(t)dW2(t) ¼ rdt where the correlation r is some constant r 2 [1, 1]. Finally, note that volatility is a square root diffusion process, which requires that kv0 v0 > s2v0 .
1518
D. Duong and N.R. Swanson
Stochastic Volatility Model with Jumps (SVJ): A standard specification is pffiffiffiffiffiffiffiffiffi dXðtÞ ¼ kx0 x0 XðtÞÞdt þ V ðtÞdW 1 ðtÞ þ J u dqu J d dqd , dV ðtÞ ¼ kv0 ðv0 V ðtÞÞdt þ sv0
pffiffiffiffiffiffiffiffiffi V ðtÞdW 2 ðtÞ,
where qu and qd are Poisson processes with jump intensity parameters lu and ld, respectively, and are independent of the Brownian motions W1(t) and W2(t). In particular, lu is the probability of a jump-up, Pr(dqu (t) ¼ 1) ¼ lu, and ld is the probability of a jump-down, Pr(dqd(t) ¼ 1) ¼ ld. Ju and Jd are jump-up and jump-down sizes and have exponential distributions: f ðJ u Þ ¼ B1u exp JBuu and f ðJ d Þ ¼ B1d exp JBdd , where Bu, Bd > 0 are the jump magnitudes, which are
the means of the jumps, Ju and Jd. Three-Factor Model (CHEN): The three-factor model combines various features of the above models, by considering a version of the oft examined three-factor model due to Chan et al. (1992), which is discussed in detail in Dai and Singleton (2000). In particular, pffiffiffiffiffiffiffiffiffi dXðtÞ ¼ kx0 ðyðtÞ XðtÞÞdt þ pVffiffiffiffiffiffiffiffi ðtÞdW ffi 1 ðtÞ, ð t Þ dW 2 ðtÞ, dV ðtÞ ¼ kv0 ðv V ðtÞÞdt þ sv0 V pffiffiffiffiffiffiffiffi dyðtÞ ¼ ky0 yðtÞ yðtÞ dt þ sy0 yðtÞdW 3 ðtÞ,
(56:5)
where W1(t), W2(t) W3(t) are independent Brownian motions and V and y are the stochastic volatility and stochastic mean of X(t), respectively. kx0, kv0, ky0, v0 , y0 , sv0, sy0 are constants. As discussed above, non-negativity for V(t) and y(t) requires that 2kv0 v0 > s2v0 and 2ky0 y0 > s2y0 . Three-Factor Jump Diffusion Model (CHENJ): Andersen et al. (2004) extend the three-factor Chen (1996) model by incorporating jumps in the short rate process, hence improving the ability of the model to capture the effect of outliers and to address the finding by Piazzesi (2004, 2005) that violent discontinuous movements in underlying measures may arise from monetary policy regime changes. The model is defined as follows: pffiffiffiffiffiffiffiffiffi dXðtÞ ¼ kx0 ðyðtÞ XðtÞÞdt þ V ðtÞdW 1 ðtÞ þ J u dqu J d dqd , pffiffiffiffiffiffiffiffiffi dV ðtÞ ¼ kv0 v0 V ðtÞÞdt þ sv0 V ðtÞdW 2 ðtÞ, pffiffiffiffiffiffiffiffi dyðtÞ ¼ ky0 y0 yðtÞÞdt þ sy0 yðtÞdW 3 ðtÞ
(56:6)
where all parameters are similar as in Eq. 56.5; W1(t), W2(t), and W3(t) are independent Brownian motions; and qu and qd are Poisson processes with jump intensities lu0 and ld0, respectively, and are independent of the Brownian motions Wr(t), Wv(t), and Wy(t). In particular, lu0 is the
56
Density and Conditional Distribution-Based Specification Analysis
1519
probability of a jump-up, Pr(dqu (t) ¼ 1) ¼ lu0, and ld0 is the probability of a jumpdown, Pr(dqd(t) ¼ 1) ¼ ld0. Ju and Jd are and jump-down sizes and jump-up have exponential distributions f ðJ u Þ ¼ B1u0 exp BJu0u and f ðJ d Þ ¼ B1d0 exp BJd0d , where
Bu0, Bd0 > 0 are the jump magnitudes, which are the means of the jumps Ju and Jd.
56.2.2 Overview on Specification Tests and Model Selection The focus in this chapter is specification testing and model selection. The “tools” used in this literature have been long established. Several key classical contributions include the Kolmogorov-Smirnov test (see, e.g., Kolmogorov (1933) and Smirnov (1939)), various results on empirical processes (see, e.g., Andrews (1993) and the discussion in Chap. 19 of van der Vaart (1998) on the contributions of Glivenko, Cantelli, Doob, Donsker, and others), the probability integral transform (see, e.g., Rosenblatt (1952)), and the Kullback–Leibler information criterion (see, e.g., White (1982) and Vuong (1989)). For illustration, the empirical distribution mentioned above is crucial in our discussion of predictive densities because it is useful in estimation, testing, and model evaluation. Let Yt is a variable of interest with distribution F and parameter y0. The theory of empirical distributions provides a result that T 1 X pffiffiffi ð1fY t ug Fðujy0 ÞÞ T t¼1
satisfies a central limit theorem (with a parametric rate) if T is large (i.e., asymptotically). In the above expression, 1{Yt u} is the indicator function which takes value 1 if Yt u and 0 otherwise. In the case where there is parameter estimation error, we can use more general results in Chap. 19 of van der Vaart (1998). Define ð T 1X PT ð f Þ ¼ f ðY i Þ and Pð f Þ ¼ fdP, T i¼1 where P is a probability measure associated with F. Here, Pn( f ) converges to P( f ) almost surely for all the measurable functions f for which P( f ) is defined. Suppose one wants to test the null hypothesis that P belongs to a certain family fPy0 : y0 2 Yg , where y0 is unknown; it is natural to use a measure of the discrepancy between Pn and Py^ for a reasonable estimator y^t of y0. In particular, 1 if y^t converges to y0 at a root-T rate, pffiffiffi PT Pyt has been shown to satisfy ^ T 7 a central limit theorem. With regard to dynamic misspecification and parameter estimation error, the approach discussed for the class of tests in this chapter allows for the construction 7
See Theorem 19.23 in van der Vaart (1998) for details.
1520
D. Duong and N.R. Swanson
of statistics that admit for dynamic misspecification under both hypotheses. This differs from other classes of tests such as the framework used by Diebold et al. (1998), Hong (2001), and Bai (2003) in which correction dynamic specification under the null hypothesis is assumed. In particular, DGT use the probability ð Yt integral transform to show that Ft ðY t jℑt1 ; y0 Þ ¼ f t ðyjℑt1 ; y0 Þdy is identically 1
and independently distributed as a uniform random variable on [0; 1], where Ft(·) and ft(·) are a parametric distribution and density with underlying parameter y0, Yt is again our random variable of interest, and ℑt is the information set containing all “relevant” past information. thus suggest using the difference between the They empirical distribution of Ft Y t ℑt1 ; y^t : and the 45 line as a measure of “goodness
of fit,” where y^t is some estimator of y0. This approach has been shown to be very useful for financial risk management (see, e.g., Diebold et al. (1999)), as well as for macroeconomic forecasting (see, e.g., Diebold et al. (1998) and Clements and Smith (2000, 2002)). Similarly, Bai (2003) develops aKolmogorov-type test of ^ Ft(Yt|ℑt1, y0) on the basis of the discrepancy between Ft Y t ℑt1 ; y t : and the CDF
of a uniform on [0; 1]. As the test involves estimator y^t , the limiting distribution reflects the contribution of parameter estimation error and is not nuisance parameter-free. To overcome this problem, Bai (2003) proposes a novel approach based on a martingalization argument to construct a modified Kolmogorov test which has a nuisance parameter-free limiting distribution. This test has power against violations of uniformity but not against violations of independence. Hong (2001) proposes another related interesting test, based on the generalized spectrum, which has power against both uniformity and independence violations, for the case in which the contribution of parameter estimation error vanishes asymptotically. If the null is rejected, Hong (2001) also proposes a test for uniformity robust to nonindependence, which is based on the comparison between a kernel density estimator and the uniform density. Two features differentiate the tests surveyed in this chapter from the tests outlined in the other papers mentioned above. First, the tests discussed here assume strict stationarity. Second, they allow for dynamic misspecification under the null hypothesis. The second feature allows us to obtain asymptotically valid critical values even when the conditioning information set does not contain all of the relevant past history. More precisely, assume that we are interested in testing for correct specification, given a particular information set which may or may not contain all of the relevant past information. This is important when a Kolmogorov test is constructed, as one is generally faced with the problem of defining ℑt1. If enough history is not included, then there may be dynamic misspecification. Additionally, finding out how much information (e.g., how many lags) to include may involve pre-testing, hence leading to a form of sequential test bias. By allowing for dynamic misspecification, such pre-testing is not required. Also note that critical values derived under correct specification given ℑt1 are not in general valid in the case of correct specification given a subset of ℑt1. Consider the following example. Assume that we are interested in testing whether
56
Density and Conditional Distribution-Based Specification Analysis
1521
the conditional distribution of Yt|Yt1 follows normal distribution N(a1Yt1, s1). Suppose also that in actual fact the “relevant” information set has ℑt1 including both Yt1 and Yt2, so that the true conditional model is Ytjℑt 1 ¼ YtjYt1, Yt2 ¼ N(a1Yt1 + a2Yt2, s2). In this case, correct specification holds with respect to the information contained in Xt1; but there is dynamic misspecification with respect to Yt1 and Yt2. Even without taking account of parameter estimation error, the critical values obtained assuming correct dynamic specification are invalid, thus leading to invalid inference. Stated differently, tests that are designed to have power against both uniformity and independence violations (i.e., tests that assume correct dynamic specification under the null) will reject an inference which is incorrect, at least in the sense that the “normality” assumption is not false. In summary, if one is interested in the particular problem of testing for correct specification for a given information set, then the approach of tests in this chapter is appropriate.
56.3
Consistent Distribution-Based Specification Tests and Predictive Density-Type Model Selection for Diffusion Processes
56.3.1 One-Factor Models In this section, we outline the setup for the general class of one-factor jump diffusion specifications. All analyses carry through to the more complicated case of multifactor stochastic volatility models which we will elaborate upon in the next subsection. In the presentation of the tests, we follow a view that all candidate models, either single or multiple dimensional ones, are approximations of reality and can thus be misspecified. The issue of correct specification (or misspecification) of a single model and the model selection test for choosing amongst multiple competing models allow for this feature. To begin, fix the time interval [0, T] and consider a given single one-factor candidate model the same as Eq. 56.1, with the true parameters y0, l0, m0 to be replaced by its pseudo true analogs y{, l, m, respectively, and 0 t T: Xðt Þ ¼
ðt 0
{
ð
b Xðs Þ, y ds lt yfðyÞdy þ Y
ðt 0
Jt X s Xðs Þ, y{ dW ðsÞ þ yj , j¼1
or dXðtÞ ¼ b XðtÞ, y{ lm dt: ð þ s XðtÞ, y{ dW ðtÞ þ ypðdy; dtÞ,
(56:7)
Y
where variables are defined the same as in Eqs. 56.1 and 56.2. Note that as the above model is the one-factor version of Eqs. 56.1 and 56.2 where the dimension of X(t) is 1 1, W(t) is a one-dimensional standard Brownian motion and jump size, and yj
1522
D. Duong and N.R. Swanson
is one-dimensional variable for all j. Also note both Jt and yj are assumed to be independent of the driving Brownian motion. If the single model is correctly specified, then b(X(t), y{) ¼ b0(X(t), y0), s(X(t), y{) ¼ s0(X(t), y0), l ¼ l0, m ¼ m0, and f ¼ f0 where b0(X(t), y0), s0(X(t), y0), l0, m0, f0 are unknown and belong to the true specification. Now consider a different case (not a single model) where m candidate models are involved. For model k with 1 k m, denote its corresponding specification to be (bk(X(t), y{k ), sk(X(t), y{k ), lk, mk, fk). Two scenarios immediate arise. Firstly, if the model k is correctly specified, then bk(X(t), y{k ) ¼ b0(X(t), y0), sk(X(t), y{k ) ¼ s0(X(t), y0), lk ¼ l0, mk ¼ m0, and fk ¼ f0 which are similar to the case of a single model. In the second scenario, all the models are likely to be misspecified and modelers are faced with the choice of selecting the “best” one. This type of problem is well fitted into the class of accuracy assessment tests initiated earlier by Diebold and Mariano (1995) or White (2000). The tests discussed hereafter are Kolmogorov-type tests based on the construction of cumulative distribution functions (CDFs). In a few cases, the CDF is known in closed form. For instance, for the simplified version of the CIR model as in Eq. 56.3, X(t) belongs to the linear exponential (or Pearson) family with the gamma CDF of the form8 ð u 2ð1a=lÞ1 l exp x= l2 dx 2 (56:8) Fðu; a; lÞ ¼ 0 Gð2ð1 a=lÞÞ Ð1 where G(x) ¼ 0 txexp(t)dt, and a, l are constants. Furthermore, if we look at the pure diffusion process without jumps dXðtÞ ¼ b XðtÞ, y{ dt þ s XðtÞ, y{ dW ðtÞ,
(56:9)
where b(·) and s ¼ s(·) are drift and volatility functions, it is known that the stationary density, say f(x, y{), associated with the invariant probability measure can be expressed explicitly as9 ðx { ! { c y{ 2b u; y f x; y ¼ { exp du , 2 s x; y s2 u; y{ { { where Ð u c(y{) is a constant ensuring that f integrates to one. The CDF, say F(u,y ) ¼ f(x,y ) dx, can then be obtained using available numerical integration procedures. However, in most cases, it is impossible to derive the CDFs in closed form. To obtain a CDF in such cases, a more general approach is to use simulation. Instead of
8
See Wong (1964) for details. See Karlin and Taylor (1981) for details.
9
56
Density and Conditional Distribution-Based Specification Analysis
1523
estimating the CDF directly, simulation techniques estimate the CDF indirectly utilizing its generated sample paths and the theory of empirical distributions. The specification of a specific diffusion process will dictate the sample paths and thereby corresponding test outcomes. Note that in the historical context, many early papers in this literature are probability density based. For example, in a seminal paper, Ait-Sahalia (1996) compares the marginal densities implied by hypothesized null models with nonparametric estimates thereof. Following the same framework of correct specification tests, Corradi and Swanson (2005) and Bhardwaj et al. (2008), however, do not look at densities. Instead, they compare the cumulative distribution (marginal or joint) implied by a hypothesized null model with the corresponding empirical distribution. While Corradi and Swanson (2005) focus on the known unconditional distribution, Bhardwaj et al. (2008) look at the conditional simulated distributions. Corradi and Swanson (2011) make extensions to multiple models in the context of out-of-sample accuracy assessment tests. This approach is somewhat novel to this continuous time model testing literature. Now suppose we observe a discrete sample path X1, X2, . . ., XT (also referred as skeletons).10 The corresponding hypotheses can be set up as follows: Hypothesis 1 Unconditional Distribution Specification Test of a Single Model
H0 : F(u,y{) ¼ F0(u,y0), for all u, a.s. HA : Pr(F(u,y{) F0(u,y0) 6¼ 0) > 0, for some u 2 U, with nonzero Lebesgue measure. where F0(u, y0) is the true cumulativedistribution implied by the above density, { y{ i.e., F0(u,y0) ¼ Pr(Xt u). F u; y ¼ Pr Xt u is the cumulative distribution of {
the proposed model. Xyt is a skeleton implied by model (56.7). Hypothesis 2 Conditional Distribution Specification Test of a Single Model
H0 : Ft(u|Xt, y{) ¼ F0,t(u|Xt, y0), for all u, a.s. HA : Pr(Ft(u|Xt, y{) F0,t(u|Xt, y0) 6¼ 0) > 0, for some u 2 U, with nonzero Lebesgue measure. { { where Ft ujXt ; y{ ¼ Pr Xytþt u Xyt ¼ Xt is t-step ahead conditional distributions and t ¼ 1, . . . , T t. F0,t(u|Xt, y0) is t-step ahead true conditional distributions.
Hypothesis 3 Predictive Density Test for Choosing Amongst Multiple Competing Models The null hypothesis is that no model can outperform model 1 which is the benchmark model.11 10
As mentioned earlier, we follow Corradi and Swanson (2005) by using notation X(·) when defining continuous time processes and Xt for a skeleton. 11 See White (2000) for a discussion of a discrete time series analog to this case, whereby point rather than density-based loss is considered; Corradi and Swanson (2007b) for an extension of White (2000) that allows for parameter estimation error; and Corradi and Swanson (2006) for an extension of Corradi and Swanson (2007b) that allows for the comparison of conditional distributions and densities in a discrete time series context.
1524
D. Duong and N.R. Swanson
H0: ! max EX
F
k¼2, ..., m
y
{
X1,1 tþt ðXt Þ
ð u2 Þ F
y
{
X 1 ðXt Þ !2 1, tþt
ð u1 Þ
ðF0 ðu2 jXt Þ F0 ðu1 jXt ÞÞ ! EX
F
y
{
Xk,k tþt ðXt Þ
ðu2 Þ F
y
{
X k ðXt Þ !2k, tþt
ðF0 ðu2 jXt Þ F0 ðu1 jXt ÞÞ
ð u1 Þ
:
HA: negation of H00 where
F
y
{
Xk,k tþt ðXt Þ
ð uÞ ¼
Ftk
{ y{ yk { t k u Xt ; yk ¼ Py{ Xk, tþt u Xt ¼ Xt , k
which is the conditional distribution of Xt+t, given Xt, and evaluated at u under the y{
probability law generated by model k. Xk,k tþt ðXt Þ with 1 t T t is the skeleton implied by model k, parameter y{k , and initial value Xt. Analogously, define Ft0 ðujXt ; y0 Þ ¼ Pty0 ðXtþt ujXt Þ to be the “true” conditional distribution. Note that the three hypotheses expressed above apply exactly the same to the case of multifactor diffusions. Now, before moving to the statistics description section, we briefly explain the intuitions in facilitating construction of the tests. In the first case (Hypothesis 1), Corradi and Swanson (2005) construct a Kolmogorov-type test based on comparison of the empirical distribution and the unconditional CDF implied by the specification of the drift, variance, and jumps. Specifically, one can look at the scaled difference between { F u; y{ ¼ Pr Xyt u ¼
ðu
f x; y{ dx
and estimator of the true F0(u|Xt, y0), the empirical distribution of Xt defined as T 1X 1fXt ug, T t¼1
where 1{Yt u} is indicator function which takes value 1 if Yt u and 0 otherwise.
56
Density and Conditional Distribution-Based Specification Analysis
1525
Similarly for the second case of conditional distribution (Hypothesis 2), the test statistic VT can be a measure of the distance between the t-step ahead conditional { { distribution of Xytþt , given Xyt ¼ Xt , as { { Ft u Xt ; y{ ¼ Pr Xytþt u Xyt ¼ Xt , to an estimator of the true F0,t(u|Xt,y0), the conditional empirical distribution of Xt+t conditional on the initial value Xt defined as T t 1 X 1fXtþt ug, T t t¼1
In the third case (Hypothesis 3), model accuracy is measured in terms of a distributional analog of mean square error. As is commonplace in the out-of-sample evaluation literature, the sample of T observations is divided into two subsamples, such that T ¼ R + P, where only the last P observations are used for predictive evaluation. A t-step ahead prediction error under model k is 1{u1 Xt+t u2} (Ftk(u2|Xt, y{k ) Ftk(u1|Xt, y{k )) where 2 k m and similarly for model 1 by replacing index k with index 1. Suppose we can simulate P t paths of t-step ahead skeleton12 using Xt as starting values where t ¼ R, . . ., R + P t, from which we can construct a sample of P t prediction errors. Then, these prediction errors can be used to construct a test statistic for model comparison. In particular, model 1 is defined to be more accurate than model k if 0
!2 1 { { t u2 X t ; y1 F1 u1 X t ; y1 A F@ Ft1 ðu2 jXt ; y0 Þ Ft1 ðu1 jXt ; y0 Þ 0 !2 1 { { t t t t Fk u 2 X t ; y 1 Fk u 1 X t ; y k A, < E@ Ft0 ðu2 jXt ; y0 Þ Ft0 ðu1 jXt ; y0 Þ Ft1
where E(·) is an expectation operator and E(1{u1 Xt+t u2}jXt) ¼ Ft0(u2jXt,y0) Ft0(u1jXt,y0). Concretely, model k is worse than model 1 if on average t-step ahead prediction errors under model k is larger than that of model 1. Finally, it is important to point out some main features characterized by all the three test statistics. Processes X(t) hereafter are required to satisfy the regular conditions, i.e., assumptions A1–A8 in Corradi and Swanson (2011). Regarding { model estimation (in Sect. 56.3.3), y{ and yk are unobserved and need to be estimated. While Corradi and Swanson (2005) and Bhardwaj et al. (2008) utilize (recursive) simulated generalized method of moments (SGMM), Corradi and Swanson (2011) make extension to (recursive) nonparametric simulated
12
See Sect. 56.3.3.1 for model simulation details.
1526
D. Duong and N.R. Swanson
quasi-maximum likelihood (NPSQML). For the unknown distribution and conditional distribution, it will be pointed out in Sect. 56.3.3.2 that F(u,y{), Ft(u|Xt, y{), and F y{ ðuÞ can be replaced by their simulated counterparts using the (recursive) Xk,k tþt ðXt Þ SGMM and NPSQML parameter estimators. In addition, test statistics converge to functional of Gaussian processes with covariance kernels that reflect time dependence of the data and the contribution of parameter estimation error (PEE). Limiting distributions are not nuisance parameter-free, and critical values thereby cannot be tabulated by the standard approach. All the tests discussed in this chapter rely on the bootstrap procedures for obtaining the asymptotically valid critical values, which we will describe in Sect. 56.3.4.
56.3.1.1 Unconditional Distribution Tests For one-factor diffusions, we outline the construction of unconditional test statistics in the context where CDF is known in closed form. In order to test the Hypothesis 1, consider the following statistic: ð V 2T , N, h
¼ V 2T , N, h ðuÞpðuÞ, U
where T 1 X V T , N, h ¼ pffiffiffi 1fXt ug F u; y^T , N , h : T t¼1 ð pðuÞdu ¼ 1, 1fXt ug is
In the above expression, U is a compact interval and U
again the indicator function which returns value 1 if Xt u and 0 otherwise. Further, as defined in Sect. 56.3.3, y^T , N, h hereafter is a simulated estimator where T is sample size and h is the discretization interval used in simulation. In addition, with the abuse of notation, N is a generic notation throughout this chapter, i.e., N ¼ L, the length of each simulation path for (recursive) SGMM, and N ¼ M, the number of random draws (simulated paths) for (recursive) NPQML estimator.13 Also note in our notation that as the above test is in sample specification test, the estimator and the statistics are constructed using the entire sample, i.e., y^T , N, h . It has been shown in Corradi and Swanson (2005) that under regular conditions and if the estimator is estimated by SGMM, the above statistics converges to a functional of Gaussian process.14 In particular, pick the choice T, N ! 1, h ! 0, T/N ! 0, and Th2 ! 0. 13 M is often chosen to coincide with S, the number of simulated paths used when simulating distributions. 14 For details and the proof, see Theorem 1 in Corradi and Swanson (2005).
56
Density and Conditional Distribution-Based Specification Analysis
Under the null,
1527
ð V 2T , N, h !
Z 2 ðuÞpðuÞ, U
where Z is a Gaussian process with covariance kernel. Hence, the limiting distri2 is a functional of a Gaussian process with a covariance kernel that bution of VT,N,h reflects both PEE and the time series nature of the data. As y^T , N, h is root-T consistent, PEE does not disappear in the asymptotic covariance kernel. Under HA, there exists an e > 0 such that 1 2 V lim Pr > e ¼ 1: T!1 T T, N, h For the asymptotic critical value tabulation, we use the bootstrap procedure. In order to establish validity of the block bootstrap under SGMM with the presence of PEE, the simulated sample size should be chosen to grow at a faster rate than the historical sample, i.e., T/N ! 0. Thus, we can follow the steps in appropriate bootstrap procedure in Sect. 56.3.4. For instance, if the SGMM estimator is used, the bootstrap statistic is ð 2 V T , N, h ¼ V 2 T , N , h ðuÞpðuÞdu, U
where T
1 X ffiffiffi p V 2 ¼ 1 Xt u 1fXt ug T, N, h T t¼1 ^ ^ F u; y T , N, h F u; y T , N, h : In the above expression, y^T , N, h is the bootstrap analog of y^T , N, h and is estimated * * by the bootstrap sample X1, . . ., XT (see Sect. 56.3.4). With appropriate conditions, 2* Corradi and Swanson (2005) show that under the null, VT,N,h has a well-defined 2 limiting distribution which coincides with that of VT,N,h. We then can straightforwardly derive the bootstrap critical value by following Steps 1–5 in Sect. 56.3.4. In particular, in Step 5, the idea is to perform B bootstrap replications (B large) and compute the percentiles of the empirical distribution of the B bootstrap statistics. Reject H0 if V2T,N,h is greater than the (1a)th percentile of this empirical distribution. Otherwise, do not reject H0.
56.3.1.2 Conditional Distribution Tests Hypothesis 2 tests correct specification of the conditional distribution, implied by a proposed diffusion model. In practice, the difficulty arises from the fact that the functional form of neither t-step ahead conditional distributions Ft(u|Xt, y{) nor F0,t(u|Xt, y0) is unknown in most cases. Therefore, Bhardwaj et al. (2008) develop
1528
D. Duong and N.R. Swanson
bootstrap specification test on the basis of simulated distribution using the SGMM estimator.15 With the important inputs leading to the test such as simulated estimator, distribution simulation, and bootstrap procedures to be presented in the next section,16 the test statistic is defined as ZT ¼ where
sup uv2UV
jZT ðu; vÞj,
0 X n ^ o1 1 S y T, N, h T t 1 X 1 X u s , tþt @S A, s¼1 ZT ðu; vÞ ¼ pffiffiffiffiffiffiffiffiffiffiffi T t t¼1 ð1fX ugÞ tþt
with U and V compact sets on the real line. y^T , N, h is the simulated estimator using entire sample X1, . . . , XT, and S is the number of simulated replications used in the estimation of conditional distributions as described in Sect. 56.3.3. If SGMM estimator is used (similar to unconditional distribution case and the same as in Bhardwaj et al. (2008)), then N ¼ L, where L is the simulation length used in parameter estimation. The above statistic is a simulation-based version of the conditional Kolmogorov test of Andrews (1997), which compare the joint empirical distribution: T t 1 X pffiffiffiffiffiffiffiffiffiffiffi 1fXtþt ug1fXt vg, T t t¼1
with its semi-empirical/semi-parametric analog given by the product of T t 1 X pffiffiffiffiffiffiffiffiffiffiffi F0, t ðujXt ; y0 Þ1fXt vg: T t t¼1
Intuitively, if the null is not rejected, the metric distance between the two should asymptotically disappear. In the simulation context with parameter estimation error, the asymptotic limit of ZT however is a nontrivial one. Bhardwaj et al. (2008) show that with the proper choice of T, N, S, h, i.e., T, N, S, T2/S ! 1 and h, T/N, T/S, Nh, h2T ! 0, then d
ZT !
sup
jZ ðu; vÞj,
uv2UV
where Z(u, v) is a Gaussian process with a covariance kernel that characterizes (1) long-run variance we would have if we knew F0,t(u|Xt, y0)), (2) the contribution 15
In this chapter, we assume that X(·) satisfies the regularity conditions stated in Corradi and Swanson (2011), i.e., assumptions A1–A8. Those conditions also reflect requirements A1–A2 in Bhardwaj et al. (2008). Note that the SGMM estimator used in Bhardwaj et al. (2008) satisfies the root-N consistency condition that Corradi and Swanson (2011) impose on their parameter estimator (see Assumption 4). 16 See Sects. 56.3.3 and 56.3.4 for further details.
56
Density and Conditional Distribution-Based Specification Analysis
1529
of parameter estimation error, and (3) The correlation between the first two. Furthermore, under HA, there exists some e > 0 such that 1 lim Pr pffiffiffi ZT > e ¼ 1: P!1 T As T/S ! 0, the contribution of simulation error is asymptotically negligible. The limiting distribution is not nuisance parameter-free and hence critical values cannot be tabulated directly from it. The appropriate bootstrap statistic in this context is ZT ¼ sup ZT ðu; vÞ , uv2UV
where Tt
1 X Z T ¼ ðu; vÞ ¼ pffiffiffiffiffiffiffiffiffiffiffi 1 Xt v T t t¼1 !
S n o 1X y^T , N, h 1 Xs, tþt u 1 Xtþt u S s¼1 T t 1 X pffiffiffiffiffiffiffiffiffiffiffi 1fXt vg T t t¼1 !
S y^ 1X T, N, h 1 Xs, tþt u 1fXtþt ug : S s¼1 In the above expression, y^ T , N, h is the bootstrap parameter estimated using the y^
, N, h , s ¼ 1, . . . , S and t ¼ 1, . . . , T t is for t ¼ 1, . . . , T t. Xs,Ttþt the simulated date under y^T, N, h and X*t ; t ¼ 1, . . ., T t is a resampled series constructed using standard block bootstrap methods as described in Sect. 56.3.4. Note that in the original paper, Bhardwaj et al. (2008) propose bootstrap SGMM estimator for conditional distribution of diffusion processes. Corradi and Swanson (2011) extend the test to the case of simulated recursive NPSQML estimator. Regarding the generation of the empirical distribution of Z*T (asthmatically the same as ZT), follow Steps 1–5 in the bootstrap procedure in Sect. 56.3.4. This yields B bootstrap replications (B large) of Z*T. One can then compare ZT with the percentiles of the empirical distribution of Z*T, and reject H0 if ZT is greater than the (1 a)th percentile. Otherwise, do not reject H0. Tests carried out in this manner are correctly asymptotically sized and have unit asymptotic power. resampled data
X*t
56.3.1.3 Predictive Density Tests for Multiple Competing Models In many circumstances, one might want to compare one (benchmark) model (model 1) against multiple competing models (models k, 2 k m). In this case, recall in the null in Hypothesis 3 that no model can outperform the benchmark model. In testing the null, we first choose a particular interval, i.e., (u1, u2) 2 U U,
1530
D. Duong and N.R. Swanson
where U is a compact set so that the objective is evaluation of predictive densities for a given range of values. In addition, in the recursive setting (not full sample is used to estimate parameters), if we use the recursive NPSQML estimator, say y^1:t, N , h and y^k:t, N, h , for models 1 and k, respectively, then the test statistic is defined as DMax k, P, S ðu1 ; u2 Þ ¼ max Dk, P, S ðu1 ; u2 Þ, k¼2, ..., m where T t 1 X Dk, P, S ðu1 ; u2 Þ ¼ pffiffiffi P t¼R
"
S n o 1X y^ N, h 1 u1 X1,1,i,t,tþt ð X t Þ u2 S i¼1 #2
1fu1 Xtþt u2 g "
S n o 1X y^ N, h 1 u1 Xk,k,i,t,tþt ðXt Þ u2 S i¼1 #2 1 fu1 Xtþt u2 g A:
All notations are consistent with previous sections where S is the number of simulated replications used in the estimation of conditional distributions. y^ N, h y^ N, h X1,1,i,t,tþt ðXt Þ and Xk,k,i,t,tþt , i ¼ 1, . . ., S, t ¼ 1, . . ., T t, are the ith simulated path ^ ^ under y 1, t, N, h and y k, t, N, h . If models 1 and k are nonnested for at least one, k ¼ 2, . . ., m. Under regular conditions and if P, R, S, h are chosen such as P, R, N ! 1 and h, P/N, h2P ! 0, P/R ! p where p is finite then max Dk, P, N ðu1 ; u2 Þ mk ðu1 ; u2 Þ ! max Zk ðu1 ; u2 Þ: k¼2, ..., m k¼2, ..., m where, with an abuse of notation, mk(u1, u2) ¼ m1(u1, u2) mk(u1, u2), and 00
! 12 1
ð u 2 Þ F y{ ð u1 Þ C C B B F y{ mj ðu1 ; u2 Þ ¼ E@@ A A, Xj,j tþt ðXt Þ Xj,j tþt ðXt Þ ðF0 ðu2 jXt Þ F0 ðu1 jXt ÞÞ for j ¼ 1,. . ., m, and where (Z1(u1, u2), . . ., Zm(u1, u2)) is an m-dimensional Gaussian random variable the covariance kernels that involves error in parameter estimation. Bootstrap statistics are thereforerequired to reflect this parameterestimation errorissue.17 In the implementation, we can obtain the asymptotic critical value using a recursive version of the block bootstrap. The idea is that when forming block 17
See Corradi and Swanson (2011) for further discussion.
56
Density and Conditional Distribution-Based Specification Analysis
1531
bootstrap samples in the recursive setting, observations at the beginning of the sample are used more frequently than observations at the end of the sample. We can replicate Steps 1–5 in bootstrap procedure in Sect. 56.3.4. It should be stressed that the resampling in the Step 1 is the recursive one. Specifically, begin by resampling b blocks of length l from the full sample, with lb ¼ T. For any given t, it is necessary to jointly resample Xt, Xt+1, . . ., Xt+t. More precisely, let Zt,t ¼ (Xt, Xt+1, . . ., Xt + t), t ¼ 1, . . .,T t. Now, resample b overlapping blocks of t,t t, length l from Z . This yields Z ¼ X ; X ; . . . ; X , t ¼ 1,. . ., T t. Use these t
tþ1
tþt
data to construct bootstrap estimator y^k, t, N, h. Recall that N is chosen in Corradi and Swanson (2011) as the number of simulated series used to estimate the parameters (N ¼ M ¼ S) and such as N/R, N/P ! 1. Under this condition, simulation error vanishes and there is no need to resample the simulated series. Corradi and Swanson (2011) show that T 1 X pffiffiffi y^k, t, N, h y^k, t, N, h Þ P t¼R
has the same limiting distribution as T 1 X pffiffiffi y^k, t, N, h y{k Þ, P t¼R
conditional on all samples except a set with probability measure approaching zero. Given this, the appropriate bootstrap statistic is ( " Tt S n o 1 X 1X y^ N, h Dk, P, S ðu1 ; u2 Þ ¼ pffiffiffi 1 u1 X1,1,i,t,tþt X t u2 S i¼1 P t¼R #2
1 u1 Xtþt u2 " T S n o 1X 1X y^ N , h 1 u1 X1,1,i,t,tþt Xj u2 T j¼1 S i¼1 #2
1 u1 Xjþt u2
"
S n o 1X y^ N , h 1 u1 Xk,k,i,t,tþt Xt u2 S i¼1 #2
1 u1 Xtþt u2
" S S n o 1X 1X y^ N, h 1 u1 Xk,k,i,t,tþt X j u2 S i¼1 S i¼1 # 2 119 =
AA : 1 u1 Xjþt u2 ;
1532
D. Duong and N.R. Swanson
As the bootstrap statistic is calculated from the last P resampled observations, it is necessary to have each bootstrap term recentered around the (full) sample mean. This is true even in the case there is no need to mimic PEE, i.e., the choice of P, R is such that P/R ! 0. In such a case, above statistic can be formed using y^k, t, N, h rather than y^k, t, N, h . For any bootstrap replication, repeat B times (B large) bootstrap replications * which yield B bootstrap statistics Dk,P,S . Reject H0 if Dk,P,S is greater than the (1 a)th percentile of the bootstrap empirical distribution. For numerical implementation, it is of importance to note that in thecase where P/R !0, P, T, R ! 1, ^ ^ ^ ^ there is no need to re-estimate y 1, t, N, h y k, t, N , h . Namely, y 1, t, N, h y k, t, N, h can be used in all bootstrap experiments. Of course, the above framework can also be applied using entire simulated distributions rather than predictive densities, by simply estimating parameters once, using the entire sample, as opposed to using recursive estimation techniques, say, as when forming predictions and associated predictive densities.
56.3.2 Multifactor Models Now, let us turn our attention to multifactor diffusion models of the form (X(t), V(t))0 ¼ (X(t), V1(t), . . ., Vd(t))0 , where only the first element, the diffusion process Xt, is observed while V(t) ¼ (V1(t), . . ., Vd(t))0 is latent. The most popular class of the multifactor models is stochastic volatility model expressed as below:
dXðtÞ ¼ dV ðtÞ
! b1 XðtÞ,y{ dt þ b2 V ðtÞ,y{
! s11 V ðtÞ,y{ dW 1 ðtÞ þ 0
! s12 V ðtÞ,y{ dW 2 ðtÞ, s22 V ðtÞ,y{
(56:10) where W1(t)11 and W2(t)11 are independent Brownian motions.18 For instance, many term structure models require the multifactor specification of the above form (see Dai and Singleton (2000)). In a more complicated case, the drift function can also be specified to be a stochastic process which poses even more challenges to testing. As mentioned earlier, the hypotheses (Hypothesis 1, 2, 3) and the test
18 Note that the dimension of X(·) can be higher and we can add jumps to the above specification such that it satisfies the regularity conditions outlined in the one-factor case. In addition, Corradi and Swanson (2005) provide a detailed discussion of approximation schemes in the context of stochastic volatility models.
56
Density and Conditional Distribution-Based Specification Analysis
1533
construction strategy for multifactor models are the same as for one-factor model. All theory essentially applies immediately to multifactor cases. In implementation, the key difference is in the simulated approximation scheme facilitating parameter and CDF estimation. X(t) cannot simply be expressed as a function of d + 1 driving Ðt Brownian motions but instead involves a function of (Wjt, 0 WjsdWis), i, j ¼ 1, .. . , d + 1 (see, e.g., Pardoux and Talay (1985, pp. 30–32) and Corradi and Swanson (2005)). For illustration, we hereafter focus on the analysis of a stochastic volatility model (56.10) where drift and diffusion coefficients can be written as b¼ s¼
! b1 XðtÞ, y{ b2 V ðtÞ, y{ ! s11 V ðtÞ, y{ s12 V ðtÞ, y{ 0 s22 V ðtÞ, y{
We also examine a three-factor model (i.e., the Chen model as in Eq. 56.5) and a three-factor model with jumps (i.e., CHENJ as in Eq. 56.6). By presenting twoand three-factor models as an extension of our above discussion, we make it clear that specification tests of multiple factor diffusions with d 3 can be easily constructed in similar manner. In distribution estimation, the important challenge for multifactor models lies in the missing variable issue. In particular, for simulation of Xt, one needs initial values of the latent processes V1, . . . , Vd, which are unobserved. To overcome this problem, it suffices to simulate the process using different random initial values for the volatility process; then construct the simulated distribution using those initial values and average them out. This allows one to integrate out the effect of a particular choice of volatility initial value. For clarity of exposition, we sketch out a simulation strategy for a general model of d latent variables in Sect. 56.3.3. This generalizes the simulation scheme of three-factor models in Cai and Swanson (2011). As a final remark before moving to the statistic presentation, note that the class of multifactor diffusion processes considered in this chapter is required to match the regular conditions as in previous section (assumption from A1 to A8 in Corradi and Swanson (2011) with A4 being replaced by A40 ).
56.3.2.1 Unconditional Distribution Tests Following the above discussion on test construction, we specialize to the case of two-factor stochastic volatility models. Extension to general multidimensional and multifactor models follows similarly. As the CDF is rarely known in closed form for stochastic volatility models, we rely on simulation technique. With the simulation scheme, estimators, simulated distribution, and bootstrap procedures to be presented in the next sections (see Sects. 56.3.3 and 56.3.4), the test statistics for Hypothesis 1 turns out to be
1534
D. Duong and N.R. Swanson T S 1 X 1X y^ SV T , S, h ¼ pffiffiffi 1fX t ug 1 Xt,Th, N, L, h u : S t¼1 T t¼1
In the above expression, recall that S is the number of simulation paths used in distribution simulation; y^T , N, L, h is a simulated estimator (see Sect. 56.3.3). N is a generic notation throughout this chapter, i.e., N ¼ L, the length of each simulation path for SGMM, and N ¼ M, the number of random draws (simulated paths) for NPQML estimator. h is the discretization interval used in simulation. Note that y^T , N, L, h is chosen in Corradi and Swanson (2005) to be SGMM estimator using full sample and therefore N ¼ L ¼ S.19 To put it simply, one can write y^T , S, h ¼ y^T , N, L, h . Under the null, choose T, S to satisfy T, S ! 1, Sh ! 0, T/S ! 0, then ð SV 2T , S, h !
SV 2 ðuÞpðuÞ, U
where Z is a Gaussian process with covariance kernel that reflects both PEE and the time-dependent nature of the data. The relevant bootstrap statistic is T
1 X ffiffiffi p SV 2 ¼ 1 Xt u 1fXt ug T , S, h T t¼1 T S ^ 1 X 1X y^ y 1 Xt,Th, N, L, h u 1 Xt,Th, N, L, h u , pffiffiffi T t¼1 S t¼1
where y^T, S, h is the bootstrap analog of y^T , S, h . Repeat Steps 1–5 in the bootstrap procedure in Sect. 56.3.4 to obtain critical values which are the percentiles of the empirical distribution of Z*T. Compare SVT,S,h with the percentiles of the empirical distribution of the bootstrap statistic and reject H0 if SVT,S,h is greater than the (1a)th percentile thereof. Otherwise, do not reject H0.
56.3.2.2 Conditional Distribution Tests To test Hypothesis 2 for the multifactor models, first we present the test statistic for the case of the stochastic volatility model (Xt, Vt) in Eq. 56.10 (i.e., for two-factor diffusion), and then we discuss testing with the three-factor model (Xt, V1t , V2t ) as in Eq. 56.5. Other multiple factor models can be tested analogously.
As seen in assumption A40 in Corradi and Swanson (2011) and Sect. 56.3.3 of this chapter, ^ y T , N, L, h can be other estimators such as the NPSQML estimator. Importantly, y^T , N, L, h satisfies condition A40 in Corradi and Swanson (2011).
19
56
Density and Conditional Distribution-Based Specification Analysis
1535
Note that for illustration, we again assume use of the SGMM estimator y^T , N, L, h , as in the original work of Bhardwaj et al. (2008) (namely, y^T , N, L, h is the simulated estimator described in Sect. 56.3.3). Specifically, N is chosen as the length of sample path L used in parameter estimation. The associated test statistic is SZT ¼
sup uv2UV
jSZ T ðu, vÞj
T t 1 X SZT ðu, vÞ ¼ pffiffiffiffiffiffiffiffiffiffiffi 1fX t v g T t t¼1 ! N X S n o 1 X y^T , N, L, h 1 Xj, i, tþt u 1fXtþt ug , NS j¼1 i¼1
y^
, L, h is t-step ahead simulated skeleton obtained by simulation procedure where Xj,Ti,,Ntþt for multifactor model in Sect. 56.3.3.1. In a similar manner, the bootstrap statistic analogous to SZT is SZ T ¼
SZ ðu; vÞ ,
sup uv2UV
T
T t X
1 SZ T ðu; vÞ ¼ pffiffiffiffiffiffiffiffiffiffiffi 1 Xt v T t t¼1 !
N X S ^
1 X y T , N, L, h 1 Xj, i, tþt u 1 Xtþt u NS j¼1 i¼1 T t 1 X pffiffiffiffiffiffiffiffiffiffiffi 1fX t v g T t t¼1 ! N X S n o 1 X y^T , N, L, h 1 Xj, i, tþt u 1fXtþt ug NS j¼1 i¼1
where y^T , N, L, h is the bootstrap estimator described in Sect. 56.3.4. For the threefactor model, the test statistic is defined as
MZT ¼
sup uv2UV
jMZ T ðu; vÞj,
T t 1 X MZT ðu; vÞ ¼ pffiffiffiffiffiffiffiffiffiffiffi 1fX t v g T t t¼1 ! L X L X S n o 1 X y^T , N , L, h 1 Xs, tþt u 1fXtþt ug , L2 S j¼1 k¼1 i¼1
and bootstrap statistics is
1536
D. Duong and N.R. Swanson T t 1 X MZ T ðu; vÞ ¼ pffiffiffiffiffiffiffiffiffiffiffi 1fX t v g T t t¼1 !
L X L X S ^
1 X y t, N , L , h 1 Xs, tþt u 1 Xtþt u 2 L S j¼1 k¼1 i¼1 T t 1 X pffiffiffiffiffiffiffiffiffiffiffi 1fX t v g T t t¼1 ! L X L X S n o 1 X y^t, N, L, h 1 Xs, tþt u 1fXtþt ug 2 L S j¼1 k¼1 i¼1
1, y^ 2, y^ y^ , N , L, h y^ , N, L, h ¼ Xs,Ttþt Xt ; V j T, N, L, h ; V k T, N, L, h Xs,Ttþt y^ N , L, h 1, y^ 2, y^ Xs,t,tþt Xt ; V j t, N, L, h ; V k t, N, L, h .
where
and
y^ N, L, h ¼ Xs,t,tþt
The first-order asymptotic validity of inference carried out using bootstrap statistics formed as outlined above follows immediately from Bhardwaj et al. (2008). For testing decision, one compares the test statistics SZT,S,h and * MZT,S,h with the percentiles or the empirical distributions of SZ*T and MZT,S,h , respectively. Then, reject H0 if the actual statistic is greater than the (1 a)th percentile of the empirical distribution of the bootstrap statistic, as in Sect. 56.3.4. Otherwise, do not reject H0.
56.3.2.3 Predictive Density Tests for Multiple Competing Models For illustration, we present the test for the stochastic volatility model (two-factor model). Again, note that extension to other multifactor models follows immediately. In particular, all steps in the construction of the test in the one-factor model case carry through immediately to the stochastic volatility case with the statistic defined as DV P, L, S ¼ max DV k, P, L, S ðu1 ; u2 Þ, k¼2, ..., m where T t 1 X DV k, P, L, S ðu1 ; u2 Þ ¼ pffiffiffi P t¼R
!2 L X S n o 1 X y^1, t, N , L, h y^1, t, N , L, h 1 u1 X1, tþt, i, j Xt ; V 1, j u2 1fu1 Xtþt u2 g SL j¼1 i¼1 0 !2 1 L X S n o X 1 y^k, t, N , L, h y^k, t, N, L, h @ 1 u1 Xk, tþt, i, j Xt ; V k, j u2 1fu1 Xtþt u2 g A: SL j¼1 i¼1
56
Density and Conditional Distribution-Based Specification Analysis
1537
Critical values for these tests can be obtained using a recursive version of the block bootstrap. The corresponding bootstrap test statistic is DV P, L, S ¼ max DV k, P, L, S ðu1 ; u2 Þ k¼2, ..., m where T t 1 X DV k, P, L, S ðu1 ;u2 Þ ¼ pffiffiffi P t¼R 80" #2
L X S < X
1 y^1, t, N , L, h y^1, t, N , L, h @ u2 1 u1 Xtþt u2 1 u1 X1, tþt, i, j Xt ;V 1, j : SL j¼1 i¼1
" #2 1 T L X S n o ^ ^ 1X 1X y , t , N , L, h y 1, t, N , L, h 1 u1 < X1,1tþt u2 1fu1 Xlþt u2 g A , i, j Xl ;V 1, j T l¼1 SL j¼1 i¼1 0" #2
L X S X ^
1 y^k, t, N , L, h y , t , N , L , h k u2 1 u1 Xtþt u2 1 u1 Xk, tþt, i, j Xt ;V k, j @ SL j¼1 i¼1 " # 2 19 = T L X S n o X X ^ ^ 1 1 y , t, N , L, h y k, t, N , L, h A 1 u1 Xk,ktþt X ;V X u u 1 u f g 2 1 lþt 2 , i, j l k , j ; T l¼1 SL j¼1 i¼1
Of note is that we follow Cai and Swanson (2011) by adopting the recursive NPSQML estimator y^1, t, N, L, h and y^k, t, N, L, h for model 1 and k, respectively, as introduced in Sect. 56.3.3.4 with the choice N ¼ M ¼ S. y^1, t, N , L, h and y^k, t, N, L, h are bootstrap analogs of y^1, t, N, L, h and y^k, t, N, L, h , respectively (see Sect. 56.3.4). In addition, we do not need to resample the volatility process, although volatility is simulated under both y^k, t, N, L, h and y^k, t, N, L, h , k ¼ 1, . . ., m. Repeat Steps 1–5 in the bootstrap procedure in Sect. 56.3.4 to obtain critical values. Compare DVP, L, S with the percentiles of the empirical distribution of * DVP, L, S, and reject H0 if DVP, L, S is greater than the (1 a)th percentile. Otherwise, do not reject H0. Again, in implementation, there is no need to re-estimate y^k, t, N, L, h for each bootstrap replication if P/R ! 0, P, T, R ! 1, as parameter estimation error vanishes asymptotically in this case.
56.3.3 Model Simulation and Estimation 56.3.3.1 Simulating Data Approximation schemes are used to obtain simulated distributions and simulated parameter estimators, which are needed in order to construct the test statistics outlined in previous sections. We therefore devote the first part of this section to
1538
D. Duong and N.R. Swanson
a discussion of the Milstein approximation schemes that have been used in Corradi and Swanson (2005), Bhardwaj et al. (2008), and Corradi and Swanson (2011). Let L be the length of each simulation path and h be the discretization interval, L ¼ Qh, and y be a generic parameter in simulation expression. We consider three cases: The pure diffusion process as in Eq. 56.9: 1 Xyqh Xyðq1Þh ¼ b Xyðq1Þh ; y h þ s Xðyq1Þh ; y ϵ qh s Xyðq1Þh ; y 0 s Xyðq1Þh ; y h 2 1 y 0 y 2 þ s Xðq1Þh ; y s Xðq1Þh ; y ϵ qh , 2
where
iid W qh W ðq1Þh ¼ ϵ qh N ð0; hÞ,
q ¼ 1, . . . , Q, with ϵ qh iid N ð0; hÞ, and where s0 is the derivative of s( ) with respect to its first argument. Hereafter, Xyqh denotes the values of the diffusion at time qh, simulated under generic y, and with a discrete interval equal to h, and so is a y fine-grain analog of Xt,h . The pure jump diffusion process without stochastic volatility as in Eq. 56.1: 0 1 Xðyqþ1Þh Xyqh ¼ b Xyqh ; y h þ s Xyqh ; y ϵ ðqþ1Þh s Xyqh ; y s Xyqh ; y h 2 0 1 þ s Xyqh ; y s Xyqh ; y ϵ ð2qþ1Þh lmy h 2 J X
þ yj 1 qh U j ðq þ 1Þh : j¼1
(56:11) The only difference between this approximation and that used for the pure diffusion is the jump part. Note that the last term on the right-hand side (RHS) of Eq. 56.11 is nonzero whenever we have one (or more) jump realization(s) in the interval [(q 1)h, qh]. Moreover, as neither the intensity nor the jump size is state dependent, the jump component can be simulated without any discretization error, as follows. Begin by making a draw from a Poisson distribution with intensity parameter l^t , say J . This gives a realization for the number of jumps over the simulation time span. Then draw J uniform random variables over [0, L], and sort them in ascending order so that U 1 U 2 . . . U J . These provide realizations for the J independent draws from f, say y1 , . . . , yJ . SV models without jumps as in Eq. 56.4 (using a generalized Milstein scheme):
56
Density and Conditional Distribution-Based Specification Analysis
1539
Xðyqþ1Þh ¼ Xyqh þ e b 1 Xyqh ; y h þ s11 V yqh ; y ϵ 1, ðqþ1Þh þ s12 V yqh ; y ϵ 2, ðqþ1Þh y @s11 V yqh ; y 1 y @s12, k V qh ; y 2 ϵ 2, ðqþ1Þh þ s22 V yqh ; y þ s22 V qh ; y @V @V 2ð ðqþ1Þh ð s dW 1, t dW 2:s , qh
qh
(56:12) V ðyqþ1Þh ¼ V yqh þ e b 2 V yqh ; y h þ s22 V yqh ; y ϵ 2, ðqþ1Þh (56:13) y @s V ; y 22 qh 1 y 2 ϵ 2, ðqþ1Þh , þ s22 V qh ; y @V 2 where h 1/2ϵi,qh N(0,1), i ¼ 1, 2, E ϵ 1, qh ϵ 2, q0 h ¼ 0 for all q 6¼ q0 , and
e b ðV; yÞ ¼
e b 1 ðV; yÞ e b 2 ðV; yÞ
0
1 1 @s12 ðV; yÞ s ð V; y Þ ð V; y Þ b 22 B 1 C 2 @V : ¼@ 1 @s22 ðV; yÞ A b2 ðV; yÞ s22 ðV; yÞ 2 @V
The last terms on the RHS of Eq. 56.12 involve stochastic integrals and cannot be explicitly computed. However, they can be approximated up to an error of order o(h) by (see, e.g., Eq. 3.7, pp. 347 in Kloeden and Platen (1999)) ð ðqþ1Þh ð s
pffiffiffiffiffi 1 dW 1, t dW 2, s h x1 x2 þ rp m1, p x2 m2, p x1 2 qh qh p pffiffiffi pffiffiffi h X1 B1, r 2x2 þ 2, r B2, r 2x1 þ 1, r , þ 2p r r¼1
1 where for j ¼ 1, 2, xj, mj, p, Bj, r, j, r are i.i.d. N(0, 1) random variables, rp ¼ 12 Xp 1 2p1 2 , and p is such that as h ! 0, p ! 1. r¼1 r 2
Stochastic Volatility with Jumps Simulation of sample paths of diffusion processes with stochastic volatility and jumps follows straightforwardly from the previous two cases. Whenever both intensity and jump size are not state dependent, a jump component can be simulated and added to either X(t) or the V(t) in the same manner as above. Extension to general multidimensional and multifactor models both with and without jumps also follows directly.
1540
D. Duong and N.R. Swanson
56.3.3.2 Simulating Distributions In this section, we sketch out methods used to construct t-step ahead simulated conditional distributions using simulated data. In applications, simulation techniques are needed when the functional form conditional distribution is unknown. We first illustrate the technique for one-factor models and then discuss multifactor models. One-factor Models Consider the one-factor model as in Eq. 56.7. To estimate the simulated CDFs: Step 1: Obtain y^T , N, h (using the entire sample) or y^t, N, h (recursive estimator) where y^T , N, h and y^t, N, h are estimators as discussed in Sects. 56.3.3.3 and 56.3.3.4. Step 2: Under y^T , N , h or y^t, N, h ,20 simulate S paths of length t, all having the same starting value, Xt. In particular, for each path i ¼ 1, . . ., S of length t, generate y^ , N, h ðXt Þ according to a Milstein schemes detailed in previous section, with Xi,Ttþt iid
y ¼ y^T , N, h or y^t, N, h. The errors used in simulation are eqh e N ð0; hÞ, and Qh ¼ t. eqh is assumed to be independent across simulations, so that E(ei,qhej,qh) ¼ 0, for all i 6¼ j and E(ei,qhei,qh) ¼ h, for any i, j. In addition, as the simulated diffusion is ergodic, the effect of the starting value approaches zero at an exponential rate, as t ! 1. Step 3: If y^T , N, h y^t, N, h is used, an estimate for the distribution, at time t + t, ^ ^ conditional on Xt, with estimator y T , N, h y t, N, h , is defined as S n ^ o 1X y , N, h ^ ^ F t u X t ; y T , N , h ¼ 1 Xi,Ttþt ðX t Þ u : S i¼1 Bhardwaj et al. (2008) show that if the model is correctly specified, then o XS n y^T, N, h 1 X ð X Þ u provides a consistent of the conditional distribution t iþtþt i¼1
1 S
y{ u|Xy{ Ft(u|Xt, y{) ¼ Pr(Xt+t t ¼ Xt). Specifically, assume that T, N, S ! 1. Then, for the case of SGMM estimator, if h ! 0, T/N ! 0, and h2T ! 0, T2/S ! 1, the following result holds for any Xt, t 1, uniformly in u
pr F^t u Xt ; y^T , N, h Ft ujXt ; y{ ! 0,
Note that N ¼ L for the SGMM estimator while N ¼ M ¼ S for NSQML estimator.
20
56
Density and Conditional Distribution-Based Specification Analysis
1541
In addition, if the model is correctly specified (i.e., if m(,) ¼ m0(,) and s(,) ¼ s0(,)), then pr F^t u Xt ; y^T , N, h F0, t ðujXt , y0 Þ! 0: Step 4: Repeat Steps 1–3 for t ¼ 1, . . ., T t. This yields T t conditional distributions that are t-steps ahead which will be used in the construction of the specification tests. The CDF simulation in the case selection test of multiple models with recursive estimator is similar. For model k, let y^k, t, N, h be the recursive estimator of “pseudo y^ N, h ðXt Þ is true” y{k computed using all observations up to varying time t. Then, Xk,k,i,t,tþt ^ generated according to a Milstein schemes as in Sect. 56.3.3.1, with y ¼ y k, t, N, h and the initial value Xt, Qh ¼ t. The corresponding empirical distribution of the y^ N , h XðtÞ can then be constructed. Under some regularity simulated series Xk,k,i,t,tþt conditions, S n o pr 1X y^ N , h 1 u1 Xk,k,i,t,tþt ð X t Þ u 2 ! F y{ ð u 2 Þ F y{ ðu1 Þ, t ¼ R, . . . , T t, S i¼1 Xk,k tþt ðXt Þ Xk,k tþt ðXt Þ
where F
y
{ k
Xk, tþt ðXt Þ
y{
k ðuÞ is the marginal distribution of Xtþt ðXt Þ implied by k model (i.e.,
by the model used to simulate the series), conditional on the (simulation) starting { value Xt. Furthermore, the marginal distribution of Xytþt ðXt Þ is the distribution of Xt+t conditional on the values observed at time t. Thus, FXy{ ðX Þ ðuÞ ¼ FTk u Xt ; y{k . kþtþt
t
y^ N , h of Xk,k,i,t,tþt ðXt Þ,
Of important note is that in the simulation i ¼ 1, . . ., S, for each t, t ¼ R, . . ., T t, we must use the same set of randomly drawn errors and similarly the same draws for numbers of jumps, jump times, and jump sizes. Thus, we only allow for the starting value to change. In particular, for each i ¼ 1, . . ., S, y^ , N, h y^ , N, h ðX Þ . This yields an S P matrix we generate Xk,k,i,RRþt ðXR Þ, . . . , Xk,k,i,Tt Tt t of simulated values, where P ¼ T R t + 1 refers to the length of the out-of y^ , N, h (at time R + j + t) can be seen as t periods sample period. Xk,k,i,Rþj Rþjþt XRþj ahead value “predicted” by model k using all available information up to time R + jR+j, j ¼ 1, . . ., P (the initial value XR+j and y^k, Rþj, N, h estimated using X1, . . ., XR+j). The key feature of this setup is that it enables us to compare y^ , N, h “predicted” t periods ahead values (i.e., Xk,k,i,Rþj Rþjþt X Rþj ) with actual values that are t periods ahead (i.e., XR+j+t), for j ¼ 1, . . ., P. In this manner, simulation-based tests under ex-ante predictive density comparison framework can be constructed.
1542
D. Duong and N.R. Swanson
Multifactor Model Consider the multifactor model with a skeleton (Xt, V1t , . . .,Vdt )0 (e.g., stochastic mean, stochastic volatility models, stochastic volatility of volatility) where only the first element Xt is observed. For simulation of the CDF, the difficulty arises as we do not know the initial values of latent variables (V1t , . . .,Vdt )0 at each point in time. We generalize the simulation plan of Bhardwaj et al. (2008) and Cai and Swanson (2011) to the case of d dimensions. Specifically, to overcome the initial value difficulty, a natural strategy is to simulate a long path of length L for each latent variable V1t , . . ., Vdt and use them to construct Xt+t and the corresponding simulated CDF of Xt+t; and finally, we average out the volatility values. Note that there are Ld combinations of the initial values V1t , . . ., Vdt . For illustration, consider the case of stochastic volatility (d ¼ 1) and the Chen three-factor model as in Eq. 56.5 (d ¼ 2), using recursive estimators. For the case of stochastic volatility (d ¼ 1), i.e., (Xt, Vt)0 , the steps are as follows: Step 1: Estimate y^t, N, L, h using recursive SGMM or NSQML estimation methods. y^ Step 2: Using the scheme in Eq. 56.13 with y ¼ y^t, N, L, h, generate the path V qht, N, L, h y^ for q ¼ 1/h, . . ., Qh with Qh ¼ L and hence obtain V t, N, L, h j ¼ 1, . . . L. j
Step 3: Using schemes in Eqs. 56.12, 56.13, simulate L S paths of length t, setting the initial value for the observable state variable to be Xt. For the initial values of y^ unobserved volatility, use V j,t,qhN, L, h j ¼ 1,. . ., L as retrieved in Step 2. Also, Ð Ð ( sqhdW1,t)dW2,s) keep the simulated random innovations (i.e., e1,qh,. e1,qh, (q+1)h qh to be constant across ^each j and t. Hence, for each replication i, using y , N, h y^ , L, h initial values Xt and V j,Tqh , we obtain Xj,t,i,Ntþt ðXt Þ which is a t-step ahead simulated value. Step 4: Now the estimator of Ft(u|Xt,y{) is defined as L X S n ^ o 1 X y ,h F^t u Xt ; y^t, N, L, h ¼ 1 Xj,t,i,Ntþt ðX t Þ u : LS j¼1 i¼1
Note that by averaging over the initial value of the volatility process, we have o 1 X S n y t, N , h integrated out its effect. In other words, is an 1 X ð X Þ u t j, i, tþt i¼1 S y^ estimate of Ft u Xt ; V j,t,hN, h , y{ . Step 5: Repeat Steps 1–4 for t ¼ 1, . . ., T t. This yields T t conditional distributions that are t-steps ahead which will be used in the construction of the specification tests. For three-factor model (d ¼ 2), i.e., (Xt, V1t , V2t ), consider model (56.5), where Wt ¼ (W1t , W2t , W3t ) are mutually independent standard Brownian motions. Step 1: Estimate y^t, N, L, h using SGMM or NSQML estimation methods.
56
Density and Conditional Distribution-Based Specification Analysis
1543 ^
1, y Step 2: Given the estimated parameter y^t, N, L, h , generate the paths V qh t, N, L, h and 2, y^ 1, y^ V ph t, N, L, h for q, p ¼ 1/h, . . ., Qh with Qh ¼ L, and hence, obtain V j T, N, L, h , 2, y^ j V k T, N, L, h , k ¼ 1, . . ., L. 1, y^ Step 3: Given the observable X and the L L simulated latent paths (V t, N, L, h and t j 2, y^t, N, L, h y^t, N, L, h Vk j, k ¼ 1, . . ., L) as the start values, we simulate t-step ahead Xtþt 1, y^ 2, y^ Xt ; V j t, N, L, h ; V k t, N, L, h . Since the start values for the two latent variables are L L length, so for each Xt we have N2 path. Now to integrate out
the initial effect of latent variables, form the estimate of conditional distribution as L X L n ^ o 1X 1, y^ 2, y^ y N , L, h F^t, s u Xt y^ ¼ 2 1 Xs,t,tþt Xt ; V j t, N, L, h ; V k t, N, L, h u , L j¼1 k¼1
where s denotes the sth simulation. y^ N, L, h S times, that is, repeat Step 3 S times, i.e., s ¼ 1, . . ., S. Step 4: Simulate Xs,t,tþt The estimate of Ft(u|Xt,y{) is S 1X F^t u Xt y^ ¼ F^t, s u Xt ; y^T , N, h : S i¼1
Step 5: Repeat Steps 1–4 for t ¼ 1, . . ., T t. This yields T t conditional distributions that are t-steps ahead which will be used in the construction of the specification tests. As a final remark, for the case of multiple competing models, we can proceed similarly. In addition, in the next two subsections, we present the exactly identified simulated (recursive) general method of moments and recursive nonparametric simulated quasi-maximum likelihood estimators that can be used in simulating distributions as well as constructing test statistics described in Sect. 56.3.2. The bootstrap analogs of those estimators will be discussed in Sect. 56.3.4.
56.3.3.3 Estimation: (Recursive) Simulated Generalized Method of Moments (SGMM) Estimators Suppose that we observe a discrete sample (skeleton) of T observations, say (X1, X2, . . ., XT)0 , from the underlying diffusion in Eq. 56.7. The (recursive) SGMM estimator y^t, L, h with 1 t T is specified as
1544
D. Duong and N.R. Swanson
! t L 0 X 1X 1 y^t, L, h ¼ arg min g Xj g Xyj, h t j¼1 L j¼1 y2Y ! t L X 1X 1 1 y Wt g Xj g X j, h t j¼1 L j¼1
(56:14)
¼ arg min Gt, L, h ðyÞ0 W t Gt, L, h ðyÞ, y2Y
where g is a vector of p moment conditions, Y ℜp (so that we have as many y moment conditions as parameters), and Xj,h ¼ Xy[Qjh/L], with L ¼ Qh is the simulated y path under generic parameter y and with discrete interval h. Xj,h is simulated using the Milstein schemes. Note that in the above expression, in the context of the specification test, y^t, L, h is estimated using the whole sample, i.e., t ¼ T. In the out-of-sample context, the recursive SGMM estimator y^t, L, h is estimated recursively using the using sample from 1 up to t. Typically, the p moment conditions are based on the difference between sample moments of historical and simulated data or between sample moments and model implied moments, whenever the latter are known in closed form. Finally, Wt is the heteroskedasticity and autocorrelation (HAC) robust covariance matrix estimator, defined as
W 1 t
lt tlt t X 1X 1X ¼ wn g Xj g Xj t n¼l j¼nþ1þlt t j¼1
!
g Xjn
t
!0 t 1X g Xj , t j¼1 (56:15)
where wv ¼ 1 v/(lT + 1). Further, the pseudo true value, y{, is defined to be y{ ¼ arg min G1 ðyÞ0 W 0 G1 ðyÞ, y2Y
where G1 ðyÞ0 W 0 G1 ðyÞ ¼ p
lim
L, T!1, h!0
GT , L, h ðyÞ0 W T GT , L, h ðyÞ,
and where y{ ¼ y0 if the model is correctly specified. In the above setup, the exactly identified case is considered rather than the overidentified SGMM. This choice guarantees that G1(y{) ¼ 0 even under misspecification, in the sense that the model differs from the underlying DGP. As pointed out by Hall and Inoue (2003), the root-N consistency does not hold for overidentified SGMM estimators of misspecified models. In addition,
56
Density and Conditional Distribution-Based Specification Analysis
1545
0 ∇y G1 y{ W { G1 y{ ¼ 0: However, in the case for which the number of parameters and the number of moment conditions are the same, ∇yG1(y{)0 W{ is invertible, and so the first-order conditions also imply that G1(y{) ¼ 0. Also note that other available estimation methods using moments include the efficient method of moments (EMM) estimator as proposed by Gallant and Tauchen (1996, 1997), which calculates moment functions by simulating the expected value of the score implied by an auxiliary model. In their setup, parameters are then computed by minimizing a chi-square criterion function.
56.3.3.4 Estimation: Recursive Nonparametric Simulated Quasi-maximum Likelihood Estimators In this section, we outline a recursive version of the NPSQML estimator of Fermanian and Salanie´ (2004), proposed by Corradi and Swanson (2011). The bootstrap counterpart of the recursive NPSQML estimator will be presented in the next section. One-Factor Models Hereafter, let f(Xt|Xt1, y{) be the conditional density associated with the above jump diffusion. If f is known in closed form, we can just estimate y{ recursively, using standard QML as21 t 1X y^t ¼ arg max ln f Xj Xj1 ; y , t ¼ R, . . . R þ P 1: y2Y t j¼2
(56:16)
Note that, similarly to the case of SGMM, the pseudo true value y{ is optimal in the sense: y{ ¼ arg max Eðlnf ðXt jXt1 ; yÞÞ: y2Y
(56:17)
For the case f is not known in closed form, we can follow Kristensen and Shin (2008) and Cai and Swanson (2011) to construct the simulated analog f^of f and then use it to estimate y{. f^ is estimated as function of the simulated sample paths y Xt,i (Xt 1), for t ¼ 2, . . ., T 1, i ¼ 1, . . ., M. First, generate T 1 paths of length one for each simulation replication, using Xt1 with t ¼ 1, . . ., T as starting values. y Hence, at time t and simulation replication i, we obtain skeletons Xt,i (Xt 1), for t ¼ 2, . . ., T 1; i ¼ 1,. . .M where M is the number of simulation paths (number of y y random draws or Xt,j (Xt1) and Xt,l (Xt1) are i.i.d.) for each simulation replication.
Note that as model k is, in general, misspecified, ∑ T1 t¼1 fk(Xt|Xt1,yk) is a quasi-likelihood and fk(Xt|Xt 1,y{k ) is not necessarily a martingale difference sequence.
21
1546
D. Duong and N.R. Swanson
M is fixed across all initial values. Then the recursive NPSQML estimator is defined as follows: t 1X y^t, M, h ¼ arg max lnf^M, h ðXi jXi1 ; yÞtM f^M, h ðXi jXi1 ; yÞ , t R, y2Y t i¼2
where ! M X Xyt, i, h ðXt1 Þ Xt 1 K : f^M, h ðXt jXt1 ; yÞ ¼ MxM i¼1 xM Note that with abuse of notation, we define y^t, L, h for SGMM and y^t, M, h for NPSQML estimators where L and M have different interpretations (L is the length of each simulation path and M is number of random draws). The function tM f^M, h ðXt jXt1 ; yÞ is a trimming function. It has some characteristics such as positive and increasing, tM f^M, h ðXt ; Xt1 ; yÞ ¼ 0, if f^M, h ðXt ; Xt1 ; yÞ < xdM and tM f^M, h ðXt ; Xt1 ; yÞ ¼ 1 if f^M, h ðXt ; Xt1 ; yÞ > 2xdM , for some d > 0.22 Note that when the log density is close to zero, the derivative tends to infinity and thus even very tiny simulation errors can have a large impact on the likelihood. The introduction of the trimming parameter into the optimization function ensures the impact of this case to be minimal asymptotically. Multifactor Models Since volatility is not observable, we cannot proceed as in the single-factor case when estimating the SV model using NPSQML estimator. Instead, let Vyj be generated according to Eq. 56.13, setting qh ¼ j, and j ¼ 1, . . ., L. The idea is to simulate L different starting values for unobservable volatility, construct the simulated likelihood functions accordingly, and then average them out. For each simulation replication at time t, we simulate L different values of Xt(Xt1, Vyj ) by generating L paths of length one, using fixed observable Xt1 and unobservable Vyj , j ¼ 1, . . . , L as starting values. Repeat this procedure for any t ¼ 1, . . . , T 1 and for any set j, j ¼ 1, . . . , L of random errors e1,t + (q + 1)h,j and e2,t+(q+1)h,j, q ¼ 1, . . . , 1/h. Note that it is important to use the same set of random errors e1,t+(q+1)h,j and e2, t+(q+1)h,j across different initial values for volatility. Denote the simulated value
22
Fermanian and Salanie (2004) suggest using the following trimming function:
tN ðxÞ ¼ for aN x 2aN.
4ðx aN Þ3 3ðx aN Þ4 , a4N a3N
56
Density and Conditional Distribution-Based Specification Analysis
1547
at time t and simulation replication i, under generic parameter y, using Xt1, Vyj as starting values as Xyt,i,h(Xt1, Vyj ). Then 0 1 y y L M X X ; V X X X t1 t t j , i , h 1 1 A, K@ f^M, L, h ðXt jXt1 ; yÞ ¼ L j¼1 MxM i¼1 xM and note that by averaging over the initial values for the unobservable volatility, its effect is integrated out. Finally, define23 t 1X y^t, M, L, h ¼ arg min ln f^M, L, h ðXs jXs1 ; yÞtM f^M, L, h ðXs jXs1 ; yÞ , t R: t y2Y s¼2
Note that in this case, Xt is no longer Markov (i.e., Xt and Vt are jointly Markovian, but Xt is not). Therefore, even in the case of true data generating process, the joint likelihood cannot be expressed as the product of the conditional and marginal distributions. Thus, y^t, M, L, h is necessarily a QML estimator. Furthermore, note that ∇yf(Xt|Xt1,y{) is no longer a martingale difference sequence; therefore, we need to use HAC robust covariance matrix estimators, regardless of whether the model is the “correct” model or not.
56.3.4 Bootstrap Critical Value Procedures The test statistics presented in Sects. 56.3.1 and 56.3.2 are implemented using critical values constructed via the bootstrap. As mentioned earlier, motivation for using the bootstrap is clear. The covariance kernel of the statistic limiting distributions contains both parameter estimation error and the data-related time-dependence components. Asymptotic critical value cannot thus be tabulated in a usual way. Several methods have been proposed to tackle this issue. One is the block bootstrap procedures which we discuss. Others have been mentioned above. With regard to the validity of the bootstrap, note that, in the case of dependent observations without PEE, we can tabulate valid critical value using a simple empirical version of the K€unsch (1989) block bootstrap. Now, the difficulty in our context lies in accounting for parameter estimation error. Goncalves and White (2002) establish the first-order validity of the block bootstrap for QMLE (or m-estimator) for dependent and heterogeneous data. This is an important result
23 For a discussion of the asymptotic properties of \(\hat\theta_{t,M,L,h}\), as well as of the regularity conditions, see Corradi and Swanson (2011).
for the class of SGMM and NPSQML estimators surveyed in this chapter, and it allows Corradi and Swanson (2011) and elsewhere to develop asymptotically valid versions of the bootstrap that can be applied under generic model misspecification, as assumed throughout this chapter. For the SGMM estimator, as shown in Corradi and Swanson (2005), the block bootstrap is first-order valid in the exact identification case and when T/S → 0. In this case, SGMM is asymptotically equivalent to GMM and, consequently, there is no need to bootstrap the simulated series. In addition, in the exact identification case, GMM estimators can be treated in the same way as QMLE estimators. For the NPSQML estimator, Corradi and Swanson (2011) point out that it is asymptotically equivalent to the QML estimator; thus, we do not need to resample the simulated observations, as the contribution of simulation error is negligible. Also note that critical values for these tests can be obtained using a recursive version of the block bootstrap. When forming block bootstrap samples in the recursive case, observations at the beginning of the sample are used more frequently than observations at the end of the sample. This introduces a location bias into the usual block bootstrap, as under standard resampling with replacement all blocks from the original sample have the same probability of being selected. Moreover, the bias term varies across samples and can be either positive or negative, depending on the specific sample. A first-order valid bootstrap procedure for non-simulation-based m-estimators constructed using a recursive estimation scheme is outlined in Corradi and Swanson (2007a). Here we extend the results of Corradi and Swanson (2007a) by establishing asymptotic results for cases in which simulation-based estimators are bootstrapped in a recursive setting. The details of the bootstrap procedure for critical value tabulation can be outlined in five steps as follows:
Step 1: Let T = bl, where b denotes the number of blocks and l denotes the length of each block. We first draw a discrete uniform random variable, I_1, that can take values 0, 1, ..., T − l with probability 1/(T − l + 1). The first block is given by \(X_{I_1+1}, \ldots, X_{I_1+l}\). We then draw another discrete uniform random variable, say I_2, and a second block of length l is formed, say \(X_{I_2+1}, \ldots, X_{I_2+l}\). We continue in the same manner until the last discrete uniform, say I_b, is drawn, so that the last block is \(X_{I_b+1}, \ldots, X_{I_b+l}\). Call \(X^*_t\) the resampled series, and note that \(X^*_1, X^*_2, \ldots, X^*_T\) corresponds to \(X_{I_1+1}, X_{I_1+2}, \ldots, X_{I_b+l}\). Thus, conditional on the sample, the only random element is the beginning of each block. In particular, \(X^*_1, \ldots, X^*_l, X^*_{l+1}, \ldots, X^*_{2l}, \ldots, X^*_{T-l+1}, \ldots, X^*_T\), conditional on the sample, can be treated as b i.i.d. blocks of discrete uniform random variables. It can be shown that, except on a set of probability measure approaching zero,
\[
E^{*}\!\left[\frac{1}{T}\sum_{t=1}^{T}X^{*}_{t}\right] \;=\; \frac{1}{T}\sum_{t=1}^{T}X_{t} \;+\; O_{P^{*}}(l/T), \qquad (56.18)
\]
\[
\mathrm{Var}^{*}\!\left(\frac{1}{\sqrt{T}}\sum_{t=1}^{T}X^{*}_{t}\right) \;=\; \frac{1}{T}\sum_{t=l}^{T-l}\;\sum_{i=-l}^{l}\left(X_{t}-\frac{1}{T}\sum_{s=1}^{T}X_{s}\right)\left(X_{t+i}-\frac{1}{T}\sum_{s=1}^{T}X_{s}\right) \;+\; O_{P^{*}}\big(l^{2}/T\big), \qquad (56.19)
\]
where E* and Var* denote the expectation and variance operators with respect to P* (the probability law governing the resampled series or, equivalently, the probability law governing the i.i.d. uniform random variables, conditional on the sample), and where O_{P*}(l/T) (O_{P*}(l²/T)) denotes a term converging in probability P* to zero as l/T → 0 (l²/T → 0). In the case of recursive estimators, the bootstrap proceeds similarly, as follows. Begin by resampling b blocks of length l from the full sample, with lb = T. For any given t, it is necessary to jointly resample X_t, X_{t+1}, ..., X_{t+τ}. More precisely, let Z_{t,τ} = (X_t, X_{t+1}, ..., X_{t+τ}), t = 1, ..., T − τ. Now, resample b overlapping blocks of length l from Z_{t,τ}. This yields Z*_{t,τ} = (X*_t, X*_{t+1}, ..., X*_{t+τ}), t = 1, ..., T − τ.
Step 2: Re-estimate \(\hat\theta^{*}_{t,N,h}\) (\(\hat\theta^{*}_{t,N,L,h}\)) using the bootstrap sample Z*_{t,τ} = (X*_t, X*_{t+1}, ..., X*_{t+τ}), t = 1, ..., T − τ (or the full sample X*_1, X*_2, ..., X*_T). Recall that if we use the entire sample for estimation, as in the specification tests of Corradi and Swanson (2005) and Bhardwaj et al. (2008), then \(\hat\theta_{t,N,h}\) is denoted \(\hat\theta_{T,N,h}\). The bootstrap estimators for SGMM and NPSQML are presented below.
Bootstrap (Recursive) SGMM Estimators
If the full sample is used in the specification test, as in Corradi and Swanson (2005) and Bhardwaj et al. (2008), the bootstrap estimator is constructed straightforwardly as
\[
\hat\theta^{*}_{T,N,h} \;=\; \arg\min_{\theta\in\Theta}\left(\frac{1}{T}\sum_{j=1}^{T}g(X^{*}_{j})-\frac{1}{L}\sum_{i=1}^{L}g\big(X^{\theta}_{i,h}\big)\right)^{\prime} W_{T}^{-1}\left(\frac{1}{T}\sum_{j=1}^{T}g(X^{*}_{j})-\frac{1}{L}\sum_{i=1}^{L}g\big(X^{\theta}_{i,h}\big)\right),
\]
where \(W_T^{-1}\) and g(·) are defined in Eq. 56.15 and L is the length of each simulation path. Note that it is convenient not to resample the simulated series, as the simulation error vanishes asymptotically; in the implementation, we therefore do not mimic its contribution to the covariance kernel. In the case of predictive density-type model selection, where recursive estimators are needed, define the bootstrap analog as
\[
\begin{aligned}
\hat\theta^{*}_{t,L,h} \;=\; \arg\min_{\theta\in\Theta}\;&\Bigg[\frac{1}{t}\sum_{j=1}^{t}\bigg(g(X^{*}_{j})-\frac{1}{T}\sum_{j'=1}^{T}g(X_{j'})-\Big(\frac{1}{L}\sum_{i=1}^{L}g\big(X^{\theta}_{i,h}\big)-\frac{1}{L}\sum_{i=1}^{L}g\big(X^{\hat\theta_{t,L,h}}_{i,h}\big)\Big)\bigg)\Bigg]^{\prime}\,\Omega_{t}^{-1}\\
&\times\Bigg[\frac{1}{t}\sum_{j=1}^{t}\bigg(g(X^{*}_{j})-\frac{1}{T}\sum_{j'=1}^{T}g(X_{j'})-\Big(\frac{1}{L}\sum_{i=1}^{L}g\big(X^{\theta}_{i,h}\big)-\frac{1}{L}\sum_{i=1}^{L}g\big(X^{\hat\theta_{t,L,h}}_{i,h}\big)\Big)\bigg)\Bigg]\\
=\;\arg\min_{\theta\in\Theta}\;&G^{*}_{t,L,h}(\theta)^{\prime}\,\Omega_{t}^{-1}\,G^{*}_{t,L,h}(\theta),
\end{aligned}
\]
where
\[
\Omega_{t} \;=\; \frac{1}{t}\sum_{n=-l_{t}}^{l_{t}} w_{n,t}\sum_{j=n+1+l_{t}}^{t-l_{t}}\Bigg[g(X_{j})-\frac{1}{T}\sum_{j'=1}^{T}g(X_{j'})\Bigg]\Bigg[g(X_{j-n})-\frac{1}{T}\sum_{j'=1}^{T}g(X_{j'})\Bigg]^{\prime}.
\]
Note that each bootstrap term is recentered around the (full) sample mean. The intuition behind this particular recentering in the bootstrap recursive SGMM estimator is that it ensures that the mean of the bootstrap moment conditions, evaluated at \(\hat\theta_{t,L,h}\), is zero up to a negligible term. Specifically, we have
\[
E^{*}\Bigg[\frac{1}{t}\sum_{j=1}^{t}\bigg(g(X^{*}_{j})-\frac{1}{T}\sum_{j'=1}^{T}g(X_{j'})-\Big(\frac{1}{L}\sum_{i=1}^{L}g\big(X^{\theta}_{i,h}\big)-\frac{1}{L}\sum_{i=1}^{L}g\big(X^{\hat\theta_{t,L,h}}_{i,h}\big)\Big)\bigg)\Bigg]_{\theta=\hat\theta_{t,L,h}}
\;=\; E^{*}\big[g(X^{*}_{j})\big]-\frac{1}{T}\sum_{j'=1}^{T}g(X_{j'}) \;=\; O(l/T), \quad\text{with } l = o\big(T^{1/2}\big),
\]
where the O(l/T) term is due to the end block effect (see Corradi and Swanson (2007b) for further discussion).
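As an illustration of the recentering idea, the sketch below (Python with NumPy; all function and variable names are hypothetical, introduced only for this example) computes bootstrap moment conditions recentered around the full-sample mean, with the simulated moments recentered at the estimated parameter, mirroring the expression for \(G^{*}_{t,L,h}(\theta)\) above. It is a sketch of the construction, not code from the original papers.

```python
import numpy as np

def bootstrap_recentered_moments(X_boot, X_full, X_sim_theta, X_sim_thetahat, g):
    """Recentred bootstrap moment vector for a simulated GMM estimator.

    X_boot         : resampled (bootstrap) observations, length t
    X_full         : original sample, length T
    X_sim_theta    : simulated series evaluated at a candidate theta, length L
    X_sim_thetahat : simulated series evaluated at the estimate theta_hat, length L
    g              : moment function mapping an array of observations to an
                     (n_obs x n_moments) array
    Returns the recentred sample moment vector G*_{t,L,h}(theta).
    """
    sample_part = g(X_boot).mean(axis=0) - g(X_full).mean(axis=0)                   # g(X*_j) recentred at the full-sample mean
    simulated_part = g(X_sim_theta).mean(axis=0) - g(X_sim_thetahat).mean(axis=0)   # simulated moments recentred at theta_hat
    return sample_part - simulated_part

def bootstrap_criterion(G_star, Omega_inv):
    """Quadratic-form objective G*' Omega^{-1} G* minimized by the bootstrap estimator."""
    return float(G_star @ Omega_inv @ G_star)
```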
Bootstrap Recursive NPSQML Estimators
Let Z*_{t,τ} = (X*_t, X*_{t+1}, ..., X*_{t+τ}), t = 1, ..., T − τ. For each simulation replication, generate T − 1 paths of length one, using X*_1, ..., X*_{T−1} as starting values, thus obtaining \(X^{\theta}_{t,i,h}(X^{*}_{t-1})\) for t = 2, ..., T − 1 and i = 1, ..., M. Further, let
\[
\hat f_{M,h}(X^{*}_{t}\mid X^{*}_{t-1};\theta)\;=\;\frac{1}{M\,\xi_M}\sum_{i=1}^{M}K\!\left(\frac{X^{\theta}_{t,i,h}(X^{*}_{t-1})-X^{*}_{t}}{\xi_M}\right).
\]
Now, for t = R, ..., R + P − 1, define
\[
\begin{aligned}
\hat\theta^{*}_{t,M,h} \;=\; \arg\max_{\theta\in\Theta}\;\frac{1}{t}\sum_{l=2}^{t}\Bigg(
&\ln\!\big(\hat f_{M,h}(X^{*}_{l}\mid X^{*}_{l-1};\theta)\big)\,\tau_M\!\big(\hat f_{M,h}(X^{*}_{l}\mid X^{*}_{l-1};\theta)\big)\\
&-\;\theta^{\prime}\,\frac{1}{T}\sum_{l'=2}^{T}\Bigg(\frac{\nabla_{\theta}\hat f_{M,h}(X_{l'}\mid X_{l'-1};\theta)\big|_{\theta=\hat\theta_{t,M,h}}}{\hat f_{M,h}(X_{l'}\mid X_{l'-1};\hat\theta_{t,M,h})}\,\tau_M\!\big(\hat f_{M,h}(X_{l'}\mid X_{l'-1};\hat\theta_{t,M,h})\big)\\
&\qquad\quad+\;\tau^{\prime}_M\!\big(\hat f_{M,h}(X_{l'}\mid X_{l'-1};\hat\theta_{t,M,h})\big)\,\nabla_{\theta}\hat f_{M,h}(X_{l'}\mid X_{l'-1};\theta)\big|_{\theta=\hat\theta_{t,M,h}}\,\ln\!\big(\hat f_{M,h}(X_{l'}\mid X_{l'-1};\hat\theta_{t,M,h})\big)\Bigg)\Bigg),
\end{aligned}
\]
where \(\tau^{\prime}_M(\cdot)\) denotes the derivative of \(\tau_M(\cdot)\) with respect to its argument. Note that each term in the simulated likelihood is recentered around the (full) sample mean of the score, evaluated at \(\hat\theta_{t,M,h}\). This ensures that the bootstrap score has mean zero, conditional on the sample. The recentering term requires the computation of \(\nabla_{\theta}\hat f_{M,h}(X_{l'}\mid X_{l'-1};\hat\theta_{t,M,h})\), which is not known in closed form. Nevertheless, it can be computed numerically, by simply taking the numerical derivative of the simulated likelihood.
Bootstrap Estimators for Multifactor Model
The SGMM and bootstrap SGMM estimators in the multifactor case are analogous to those in the one-factor case; the difference is that the simulation schemes (56.12) and (56.13) are used instead of Eq. 56.11. For the recursive NPSQML estimator, to construct the bootstrap counterpart \(\hat\theta^{*}_{t,M,L,h}\) of \(\hat\theta_{t,M,L,h}\), note that since M/T → ∞ and L/T → ∞, the contribution of simulation error is asymptotically negligible. Hence, there is no need to resample the simulated observations or the simulated initial values for volatility. Define
\[
\hat f_{M,L,h}(X^{*}_{t}\mid X^{*}_{t-1};\theta)\;=\;\frac{1}{L}\sum_{j=1}^{L}\frac{1}{M\,\xi_M}\sum_{i=1}^{M}K\!\left(\frac{X^{\theta}_{t,i,h}(X^{*}_{t-1},V^{\theta}_{j})-X^{*}_{t}}{\xi_M}\right).
\]
Now, for t = R, ..., R + P − 1, define
\[
\begin{aligned}
\hat\theta^{*}_{t,M,L,h} \;=\; \arg\max_{\theta\in\Theta}\;\frac{1}{t}\sum_{l=2}^{t}\Bigg(
&\ln\!\big(\hat f_{M,L,h}(X^{*}_{l}\mid X^{*}_{l-1};\theta)\big)\,\tau_M\!\big(\hat f_{M,L,h}(X^{*}_{l}\mid X^{*}_{l-1};\theta)\big)\\
&-\;\theta^{\prime}\,\frac{1}{T}\sum_{l'=2}^{T}\Bigg(\frac{\nabla_{\theta}\hat f_{M,L,h}(X_{l'}\mid X_{l'-1};\theta)\big|_{\theta=\hat\theta_{t,M,L,h}}}{\hat f_{M,L,h}(X_{l'}\mid X_{l'-1};\hat\theta_{t,M,L,h})}\,\tau_M\!\big(\hat f_{M,L,h}(X_{l'}\mid X_{l'-1};\hat\theta_{t,M,L,h})\big)\\
&\qquad\quad+\;\tau^{\prime}_M\!\big(\hat f_{M,L,h}(X_{l'}\mid X_{l'-1};\hat\theta_{t,M,L,h})\big)\,\nabla_{\theta}\hat f_{M,L,h}(X_{l'}\mid X_{l'-1};\theta)\big|_{\theta=\hat\theta_{t,M,L,h}}\,\ln\!\big(\hat f_{M,L,h}(X_{l'}\mid X_{l'-1};\hat\theta_{t,M,L,h})\big)\Bigg)\Bigg),
\end{aligned}
\]
where \(\tau^{\prime}_M(\cdot)\) denotes the derivative of \(\tau_M(\cdot)\) with respect to its argument. Of note is that each bootstrap term is recentered around the (full) sample mean. This is necessary because the bootstrap statistic is constructed using the last P resampled observations, which in turn have been resampled from the full sample. In particular, this recentering is necessary regardless of the ratio P/R. In addition, in the case P/R → 0, so that there is no need to mimic parameter estimation error, the bootstrap statistics can be constructed using \(\hat\theta_{t,M,L,h}\) instead of \(\hat\theta^{*}_{t,M,L,h}\).
Step 3: Using the same set of random variables used in the construction of the actual statistics, construct \(X^{*,\hat\theta^{*}_{t,N,h}}_{i,t,t+\tau}\) (or \(X^{*,\hat\theta^{*}_{t,N,L,h}}_{k,i,t+\tau}\)), i = 1, ..., S and t = 1, ..., T − τ. Note that we do not need to resample the simulated series (as L/T → ∞, the simulation error is asymptotically negligible). Instead, we simulate the series using the bootstrap estimators and using the bootstrapped values as starting values.
Step 4: The corresponding bootstrap statistics \(V^{2*}_{T,N,h}\) (or \(Z^{*}_{T,N,h}\), \(D^{*}_{k,P,S}\), \(SV^{2*}_{T,N,h}\), \(SZ^{*}_{T,N,h}\), \(SD^{*}_{k,P,S}\), depending on the type of test), which are built on \(\hat\theta^{*}_{t,N,h}\) (\(\hat\theta^{*}_{t,N,L,h}\)), are then constructed accordingly. For the numerical implementation, again, it is important to note that when we choose P/R → 0 as P, T,
R → ∞, there is no need to re-estimate \(\hat\theta^{*}_{t,N,h}\) (\(\hat\theta^{*}_{t,N,L,h}\)); \(\hat\theta_{t,N,h}\) (\(\hat\theta_{t,N,L,h}\)) can be used in all the bootstrap replications.
Step 5: Repeat bootstrap Steps 1–4 B times, and generate the empirical distribution of the B bootstrap statistics.
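A minimal sketch of Steps 1 and 5 is given below (Python with NumPy). Steps 2–4 (re-estimation on the resampled data, re-simulation, and construction of the bootstrap statistic) are collapsed into a user-supplied placeholder `compute_statistic`; the block length and the number of replications shown are illustrative choices, not prescriptions from the original papers.

```python
import numpy as np

def overlapping_block_bootstrap(X, block_length, rng):
    """Draw one block-bootstrap resample of (approximately) the same length as X (Step 1).

    Blocks of length l start at indices drawn uniformly from {0, ..., T - l},
    and b = T // l blocks are concatenated; T is taken to be a multiple of l
    purely to keep the sketch short."""
    T = len(X)
    l = block_length
    b = T // l
    starts = rng.integers(0, T - l + 1, size=b)      # I_1, ..., I_b
    return np.concatenate([X[s:s + l] for s in starts])

def bootstrap_distribution(X, compute_statistic, block_length=20, B=100, seed=0):
    """B bootstrap replications of a statistic; `compute_statistic` stands in for
    the full re-estimation/re-simulation/statistic construction of Steps 2-4."""
    rng = np.random.default_rng(seed)
    stats = []
    for _ in range(B):
        X_star = overlapping_block_bootstrap(X, block_length, rng)
        stats.append(compute_statistic(X_star))
    return np.array(stats)                           # empirical bootstrap distribution (Step 5)
```

Percentiles of the returned array then serve as bootstrap critical values for the corresponding test statistic.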
56.4 Summary of Empirical Applications of the Tests
In this section, we briefly review some empirical applications of the methods discussed above. We start with the unconditional distribution test, as in Corradi and Swanson (2005), and then give a specific empirical example using the conditional distribution test from Bhardwaj et al. (2008). Finally, we briefly discuss the conditional distribution specification test applied to multiple competing models. The list of the diffusion models considered is provided in Table 56.1. Note that specification testing of the first model – a simplified version of the CIR model (we refer to this model as Wong) – is carried out using the unconditional distribution test. With the cumulative distribution function known in closed form as in Eq. 56.8, the test statistic can be calculated straightforwardly. It is also convenient to use GMM estimation in this case, as the first two moments are known in closed form, i.e., a l and a/2(a b), respectively. Corradi and Swanson (2005) examine Hypothesis 1 using simulated data. Their Monte Carlo experiments suggest that the test is useful, even for samples as small as 400 observations. Hypothesis 2 is tested in Bhardwaj et al. (2008) and Cai and Swanson (2011). For illustration, we focus on the results in Bhardwaj et al. (2008), where the CIR, SV, and SVJ models are empirically tested using the 1-month Eurodollar deposit rate (as a proxy for the short rate) for the sample period January 6, 1971–September 30, 2005, which yields 1,813 weekly observations. Note that one might apply these tests to other datasets, including the monthly federal funds rate, the weekly 3-month T-bill rate, the weekly US dollar swap rate, the monthly yield on zero-coupon bonds with different maturities, and the 6-month LIBOR. Some of these variables have been examined elsewhere, for example, in Ait-Sahalia (1999), Andersen et al. (2004), Dai and Singleton (2000), Diebold and Li (2006), and Piazzesi (2001). The statistic needed to apply the test discussed in Sect. 56.3.1.2 is
\[
Z_T \;=\; \sup_{v\in V}\,|Z_T(v)|,
\]
Table 56.1 Specification test hypotheses of continuous time spot rate processes^a

Model | Specification | Reference and data | Hypothesis
Wong | dr(t) = (a − λ r(t)) dt + √(a r(t)) dW_r(t) | Corradi and Swanson (2005); simulated data | H1
CIR | dr(t) = κ_r(r̄ − r(t)) dt + σ_r √(r(t)) dW_r(t) | Bhardwaj et al. (2008), Eurodollar rate (1971–2005); Cai and Swanson (2011), Eurodollar rate (1971–2008) | H2; H2, H3
CEV | dr(t) = κ_r(r̄ − r(t)) dt + σ_r r(t)^ρ dW_r(t) | Cai and Swanson (2011), Eurodollar rate (1971–2008) | H2, H3
SV | dr(t) = κ_r(r̄ − r(t)) dt + √(V(t)) dW_r(t), dV(t) = κ_v(v̄ − V(t)) dt + σ_v √(V(t)) dW_v(t) | Bhardwaj et al. (2008); Cai and Swanson (2011) | H2; H2, H3
SVJ | dr(t) = κ_r(r̄ − r(t)) dt + √(V(t)) dW_r(t) + J_u dq_u − J_d dq_d, dV(t) = κ_v(v̄ − V(t)) dt + σ_v √(V(t)) dW_v(t) | Bhardwaj et al. (2008); Cai and Swanson (2011) | H2; H2, H3
CHEN | dr(t) = κ_r(θ(t) − r(t)) dt + √(V(t)) dW_r(t), dV(t) = κ_v(v̄ − V(t)) dt + σ_v √(V(t)) dW_v(t), dθ(t) = κ_θ(θ̄ − θ(t)) dt + σ_θ √(θ(t)) dW_θ(t) | Cai and Swanson (2011), Eurodollar rate (1971–2008) | H2, H3
CHENJ | dr(t) = κ_r(θ(t) − r(t)) dt + √(V(t)) dW_r(t) + J_u dq_u − J_d dq_d, dV(t) = κ_v(v̄ − V(t)) dt + σ_v √(V(t)) dW_v(t), dθ(t) = κ_θ(θ̄ − θ(t)) dt + σ_θ √(θ(t)) dW_θ(t) | Cai and Swanson (2011), Eurodollar rate (1971–2008) | H2, H3

^a Note that the third column, "Reference and data," provides the referenced papers and the data used in the empirical applications. In the fourth column, H1, H2, and H3 denote Hypothesis 1, Hypothesis 2, and Hypothesis 3, respectively. The hypotheses are listed corresponding to the references in the third column. For example, for the CIR model, H2 corresponds to Bhardwaj et al. (2008) and H2 and H3 correspond to Cai and Swanson (2011)
where
\[
Z_T(v) \;=\; \frac{1}{\sqrt{T-\tau}}\sum_{t=1}^{T-\tau} 1\{X_t \le v\}\left(\frac{1}{S}\sum_{s=1}^{S} 1\{u \le X^{\hat\theta_{T,N,h}}_{s,t+\tau} \le \bar{u}\} \;-\; 1\{u \le X_{t+\tau} \le \bar{u}\}\right),
\]
and its bootstrap analog is
\[
Z^{*}_T \;=\; \sup_{v\in V}\,|Z^{*}_T(v)|,
\]
where
\[
\begin{aligned}
Z^{*}_T(v) \;=\;& \frac{1}{\sqrt{T-\tau}}\sum_{t=1}^{T-\tau} 1\{X^{*}_t \le v\}\left(\frac{1}{S}\sum_{s=1}^{S} 1\{u \le X^{*,\hat\theta^{*}_{T,N,h}}_{s,t+\tau} \le \bar{u}\} - 1\{u \le X^{*}_{t+\tau} \le \bar{u}\}\right)\\
&-\;\frac{1}{\sqrt{T-\tau}}\sum_{t=1}^{T-\tau} 1\{X_t \le v\}\left(\frac{1}{S}\sum_{s=1}^{S} 1\{u \le X^{\hat\theta_{T,N,h}}_{s,t+\tau} \le \bar{u}\} - 1\{u \le X_{t+\tau} \le \bar{u}\}\right).
\end{aligned}
\]
For the case of stochastic volatility models, we similarly have
\[
SZ_T \;=\; \sup_{v\in V}\,|SZ_T(v)|,
\]
where
\[
SZ_T(v) \;=\; \frac{1}{\sqrt{T-\tau}}\sum_{t=1}^{T-\tau} 1\{X_t \le v\}\left(\frac{1}{LS}\sum_{j=1}^{L}\sum_{s=1}^{S} 1\{u \le X^{\hat\theta_{T,N,h}}_{j,s,t+\tau} \le \bar{u}\} - 1\{u \le X_{t+\tau} \le \bar{u}\}\right),
\]
and its bootstrap analog is
\[
SZ^{*}_T \;=\; \sup_{v\in V}\,|SZ^{*}_T(v)|,
\]
where
\[
\begin{aligned}
SZ^{*}_T(v) \;=\;& \frac{1}{\sqrt{T-\tau}}\sum_{t=1}^{T-\tau} 1\{X^{*}_t \le v\}\left(\frac{1}{LS}\sum_{j=1}^{L}\sum_{s=1}^{S} 1\{u \le X^{*,\hat\theta^{*}_{T,N,h}}_{j,s,t+\tau} \le \bar{u}\} - 1\{u \le X^{*}_{t+\tau} \le \bar{u}\}\right)\\
&-\;\frac{1}{\sqrt{T-\tau}}\sum_{t=1}^{T-\tau} 1\{X_t \le v\}\left(\frac{1}{LS}\sum_{j=1}^{L}\sum_{s=1}^{S} 1\{u \le X^{\hat\theta_{T,N,h}}_{j,s,t+\tau} \le \bar{u}\} - 1\{u \le X_{t+\tau} \le \bar{u}\}\right).
\end{aligned}
\]
Bhardwaj et al. (2008) carry out these tests using τ-step ahead confidence intervals. They set τ = {1, 2, 4, 12}, which corresponds to 1-week, 2-week, 1-month, and one-quarter ahead intervals, and set (u, ū) = (X̄ − 0.5s_X, X̄ + 0.5s_X) and (X̄ − s_X, X̄ + s_X), yielding 46.3 % and 72.4 % coverage, respectively, where X̄ and s_X are the mean and standard deviation of an initial sample of data. In addition, S = {10T, 20T} and l = {5, 10, 20, 50}. For illustrative purposes, we report one case from Bhardwaj et al. (2008). The test is implemented by setting S = 10T and l = 25 for the calculation of both Z_T and SZ_T. In Table 56.2, single, double, and triple starred entries represent rejection using 20 %, 10 %, and 5 % size tests, respectively. Not surprisingly, the findings are consistent with other papers in the specification test literature, such as Aït-Sahalia (1996) and Bandi (2002). Namely, the CIR model is rejected using 5 % size tests in almost all cases. When considering the SV and SVJ models, smaller confidence intervals appear to lead to more model rejections. Moreover, results are somewhat mixed when evaluating the SVJ model, with a slightly higher frequency of rejection than in the case of the SV model. Finally, turning to Hypothesis 3, Cai and Swanson (2011) use an extended version of the above dataset, i.e., the 1-month Eurodollar deposit rate from January 1971 to April 2008 (1,996 weekly observations). Specifically, they examine whether the Chen model is the "best" model amongst multiple alternative models, including those outlined in Table 56.1. The answer is "yes." In this example, the test was implemented using D_{k,P,N}(u_1, u_2), as described in Sects. 56.3.1 and 56.3.2, where P = T/2, predictions are constructed using recursively estimated models, and the simulation sample length used to address latent variable initial values is set at L = 10T. The choice of the other inputs to the test, such as τ and the interval (u, ū), is the same as in Bhardwaj et al. (2008). The number of simulation replications S, the block length l, and the number of bootstrap replications are S = 10T, l = 20, and B = 100. Cai and Swanson (2011) also compare the Chen model with the so-called smooth transition autoregression (STAR) model, defined as follows:
\[
r_t \;=\; (\theta_1 + \beta_1 r_{t-1})\,G(\gamma; z_t; c) \;+\; (\theta_1 + \beta_2 r_{t-1})\,\big(1 - G(\gamma; z_t; c)\big) \;+\; u_t,
\]
where u_t is a disturbance term; θ_1, β_1, γ, β_2, and c are constants; G(·) is the logistic CDF, i.e., G(γ; z_t; c) = 1/(1 + exp(−γ(z_t − c))); and the number of lags p is selected via the Schwarz information criterion. Test statistics and predictive density-type "mean square forecast error" (MSFE) values are again calculated as in Sects. 56.3.1 and 56.3.2.24 Their results indicate that, at a 90 % level of confidence, one cannot reject the null hypothesis that the Chen model generates predictive densities at least as accurate as those of the STAR model, regardless of forecast horizon and confidence interval width. Moreover, in almost all cases, the Chen model has a lower MSFE, and the magnitude of the MSFE differential between the Chen model and the STAR model rises as the forecast horizon increases. This confirms their in-sample findings that the Chen model also wins when carrying out in-sample tests.
24 See Table 6 in Cai and Swanson (2011) for complete details.
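As an illustration of the STAR benchmark defined above, the sketch below (Python with NumPy) simulates a two-regime logistic STAR process. The choice of the lagged rate as the transition variable z_t, the use of two separate intercepts, and all parameter names are illustrative assumptions for this sketch rather than the exact specification estimated by Cai and Swanson (2011).

```python
import numpy as np

def logistic_transition(z, gamma, c):
    """Logistic transition function G(gamma; z, c) = 1 / (1 + exp(-gamma * (z - c)))."""
    return 1.0 / (1.0 + np.exp(-gamma * (z - c)))

def simulate_star(r0, theta1, beta1, theta2, beta2, gamma, c, sigma_u, T, seed=0):
    """Simulate a two-regime logistic STAR process for the short rate, with z_t = r_{t-1}."""
    rng = np.random.default_rng(seed)
    r = np.empty(T)
    r[0] = r0
    for t in range(1, T):
        G = logistic_transition(r[t - 1], gamma, c)
        r[t] = (theta1 + beta1 * r[t - 1]) * G \
               + (theta2 + beta2 * r[t - 1]) * (1.0 - G) \
               + sigma_u * rng.standard_normal()
    return r
```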
Table 56.2 Empirical illustration of specification testing – CIR, SV, SVJ models^a (l = 25)

τ | (u, ū) | CIR Z_T | 5 % CV | 10 % CV | SV SZ_T | 5 % CV | 10 % CV | SVJ SZ_T | 5 % CV | 10 % CV
1 | X̄ ± 0.5s_X | 0.5274*** | 0.2906 | 0.3545 | 0.9841*** | 0.8729 | 0.9031 | 1.1319 | 1.8468 | 2.1957
1 | X̄ ± s_X | 0.4289*** | 0.2658 | 0.3178 | 0.6870 | 0.6954 | 0.7254 | 1.2272* | 1.1203 | 1.3031
2 | X̄ ± 0.5s_X | 0.6824*** | 0.4291 | 0.4911 | 0.4113 | 1.3751 | 1.4900 | 0.9615* | 0.8146 | 1.1334
2 | X̄ ± s_X | 0.4897* | 0.4264 | 0.5182 | 0.3682 | 1.1933 | 1.2243 | 1.2571 | 1.3316 | 1.4096
4 | X̄ ± 0.5s_X | 0.8662** | 0.7111 | 0.8491 | 1.2840 | 2.3297 | 2.6109 | 1.5012* | 1.1188 | 1.6856
4 | X̄ ± s_X | 0.8539* | 0.7512 | 0.9389 | 1.0472 | 2.2549 | 2.2745 | 0.9901* | 0.9793 | 1.0507
12 | X̄ ± 0.5s_X | 1.1631* | 1.0087 | 1.3009 | 1.7687 | 4.9298 | 5.2832 | 2.4237* | 2.0818 | 3.0640
12 | X̄ ± s_X | 1.0429 | 1.4767 | 2.0222 | 1.7017 | 5.2601 | 5.6522 | 1.4522 | 1.7400 | 2.1684

^a Tabulated entries are test statistics and 5 %, 10 %, and 20 % level critical values. Test intervals are given in the second column of the table, for τ = 1, 2, 4, 12. All tests are carried out using historical 1-month Eurodollar deposit rate data for the period January 1971–September 2005, measured at a weekly frequency. *, **, and *** denote rejection at the 20 %, 10 %, and 5 % levels, respectively. Additionally, X̄ and s_X are the mean and standard deviation of the historical data. See above for complete details
56.5 Conclusion
This chapter reviews a class of specification and model selection-type tests developed by Corradi and Swanson (2005), Bhardwaj et al. (2008), and Corradi and Swanson (2011) for continuous time models. We begin by outlining the setup used to specify the types of diffusion models considered in this chapter. Thereafter, diffusion models in finance are discussed, and testing procedures are outlined. Related testing procedures are also discussed, both in contexts where models are assumed to be correctly specified under the null hypothesis and in contexts where they may be generically misspecified under both the null and alternative test hypotheses. In addition to discussing tests of correct specification and tests for selecting amongst alternative competing models, using both in-sample methods and comparisons of predictive accuracy, methodology is outlined allowing for parameter estimation, model and data simulation, and bootstrap critical value construction. Several extensions are left to future research. First, it remains to construct specification tests that do not integrate out the effects of latent factors. Additionally, it remains to examine the finite sample properties of the estimators and bootstrap methods discussed in this chapter.

Acknowledgments The authors thank the editor, Cheng-Few Lee, for many useful suggestions given during the writing of this chapter. Duong and Swanson would like to thank the Research Council at Rutgers University for research support.
References
Aït-Sahalia, Y. (1999). Transition densities for interest rate and other nonlinear diffusions. Journal of Finance, LIV, 1361–1395. Aït-Sahalia, Y. (1996). Testing continuous time models of the spot interest rate. Review of Financial Studies, 9, 385–426. Aït-Sahalia, Y. (2002). Maximum likelihood estimation of discretely sampled diffusions: A closed form approximation approach. Econometrica, 70, 223–262. Aït-Sahalia, Y. (2007). Estimating continuous-time models using discretely sampled data. In R. Blundell, P. Torsten, & W. K. Newey (Eds.), Econometric Society World Congress invited lecture, in Advances in economics and econometrics, theory and applications, Ninth World Congress, Econometric Society Monographs. Cambridge University Press. Aït-Sahalia, Y., Fan, J., & Peng, H. (2009). Nonparametric transition-based tests for diffusions. Journal of the American Statistical Association, 104, 1102–1116. Altissimo, F., & Mele, A. (2009). Simulated nonparametric estimation of dynamic models with application in finance. Review of Economic Studies, 76, 413–450. Andersen, T. G., & Lund, J. (1997). Estimating continuous time stochastic volatility models of short term interest rates. Journal of Econometrics, 72, 343–377. Andersen, T. G., Benzoni, L., & Lund, J. (2004). Stochastic volatility, mean drift, and jumps in the short-term interest rate (Working paper). Northwestern University. Andrews, D. W. K. (1993). An introduction to econometric applications of empirical process theory for dependent random variables. Econometric Reviews, 12, 183–216. Andrews, D. W. K. (1997). A conditional Kolmogorov test. Econometrica, 65, 1097–1128. Bai, J. (2003). Testing parametric conditional distributions of dynamic models. Review of Economics and Statistics, 85, 531–549.
Bandi, F. (2002). Short-term interest rate dynamics: A spatial approach. Journal of Financial Economics, 65, 73–110. Beckers, S. (1980). The constant elasticity of variance model and US implications for option pricing. Journal of Finance, 35, 661–673. Bhardwaj, G., Corradi, V., & Swanson, N. R. (2008). A simulation based specification test for diffusion processes. Journal of Business and Economic Statistics, 26, 176–193. Black, F., & Scholes, M. (1973). The pricing of options and corporate liabilities. Journal of Political Economy, 81, 637–654. Bontemps, C., & Meddahi, N. (2005). Testing normality: A GMM approach. Journal of Econometrics, 124, 149–186. Brennan, M. J., & Schwartz, E. S. (1979). A continuous-time approach to the pricing bonds. Journal of Banking and Finance, 3, 133–155. Cai, L., & Swanson, N. R. (2011). An empirical assessment of spot rate model stability. Journal of Empirical Finance, 18, 743–764. Chan, K. C., Karolyi, G. A., Longstaff, F. A., & Sanders, A. B. (1992). An empirical comparison of alternative models of the short-term interest rate. Journal of Finance, 47, 1209–1227. Chen, L. (1996). Stochastic mean and stochastic volatility – A three-factor model of term structure of interest rates and its application to the pricing of interest rate derivatives. Oxford, UK: Blackwell. Clements, M. P., & Smith, J. (2000). Evaluating the forecast densities of linear and nonlinear models: Applications to output growth and unemployment. Journal of Forecasting, 19, 255–276. Clements, M. P., & Smith, J. (2002). Evaluating multivariate forecast densities: A comparison of two approaches. International Journal of Forecasting, 18, 397–407. Corradi, V., & Swanson, N. R. (2005). A bootstrap specification test for diffusion processes. Journal of Econometrics, 124, 117–148. Corradi, V., & Swanson, N. R. (2006). Predictive density and conditional confidence interval accuracy tests. Journal of Econometrics, 135, 187–228. Corradi, V., & Swanson, N. R. (2007a). Nonparametric bootstrap procedures for predictive inference based on recursive estimation schemes. International Economic Review February, 2007(48), 67–109. Corradi, V., & Swanson, N. R. (2007b). Nonparametric bootstrap procedures for predictive inference based on recursive estimation schemes. International Economic Review, 48, 67–109. Corradi, V., & Swanson, N. R. (2011). Predictive density construction and accuracy testing with multiple possibly misspecified diffusion models. Journal of Econometrics, 161, 304–324. Corradi, V. & Swanson, N. R. (2002). A consistent test for nonlinear out of sample predictive accuracy. Journal of Econometrics, 110, 353–381. Courtadon, G. (1982). The pricing of options on default-free bonds. Journal of Financial and Quantitative Analysis, 17, 75–100. Cox, J. C., Ingersoll, J. E., & Ross, S. A. (1985). A theory of the term structure of interest rates. Economerica, 53, 385–407. Dai, Q., & Singleton, K. J. (2000). Specification analysis of affine term structure models. Journal of Finance, 55, 1943–1978. Diebold, F. X., & Li, C. (2006). Forecasting the term structure of government bond yields. Journal of Econometrics, 130, 337–364. Diebold, F. X., & Mariano, R. S. (1995). Comparing predictive accuracy. Journal of Business and Economic Statistics, 13, 253–263. Diebold, F. X., Gunther, T., & Tay, A. S. (1998a). Evaluating density forecasts with applications to finance and management. International Economic Review, 39, 863–883. Diebold, F. X., Tay, A. S., & Wallis, K. D. (1998b). 
Evaluating density forecasts of inflation: The survey of professional forecasters. In R. F. Engle & H. White (Eds.), Festschrift in honor of C.W.J. Granger. Oxford: Oxford University Press.
Diebold, F. X., Hahn, J., & Tay, A. S. (1999). Multivariate density forecast evaluation and calibration in financial risk management: High frequency returns on foreign exchange. Review of Economics and Statistics, 81, 661–673. Duan, J. C. (2003). A specification test for time series models by a normality transformation (Working paper). University of Toronto. Duffie, D., & Kan, R. (1996). A yield-factor model of interest rates. Mathematical Finance, 6, 379–406. Duffie, D., & Singleton, K. (1993). Simulated moment estimation of Markov models of asset prices. Econometrica, 61, 929–952. Duffie, D., Pan, J., & Singleton, K. (2000). Transform analysis and asset pricing for affine jump-diffusions. Econometrica, 68, 1343–1376. Duong, D., & Swanson, N. R. (2010). Empirical evidence on jumps and large fluctuations in individual stocks (Working paper). Rutgers University. Duong, D., & Swanson, N. R. (2011). Volatility in discrete and continuous time models: A survey with new evidence on large and small jumps. Missing Data Methods: Advances in Econometrics, 27B, 179–233. Emanuel, D., & Mcbeth, J. (1982). Further results on the constant elasticity of variance call option pricing model. Journal of Financial Quantitative Analysis, 17, 533–554. Fermanian, J. D., & Salanie´, B. (2004). A non-parametric simulated maximum likelihood estimation method. Econometric Theory, 20, 701–734. Gallant, A. R., & Tauchen, G. (1996). Which moments to match. Econometric Theory, 12, 657–681. Gallant, A. R., & Tauchen, G. (1997). Estimation of continuous time models for stock returns and interest rates. Macroeconomic Dynamics, 1, 135–168. Goncalves, S., & White, H. (2002). The bootstrap of the mean for dependent heterogeneous arrays. Econometric Theory, 18, 1367–1384. Hall, A. R., & Inoue, A. (2003). The large sample behavior of the generalized method of moments estimator in misspecified models. Journal of Econometrics, 114, 361–394. Hansen, B. E. (1996). Inference when a nuisance parameter is not identified under the null hypothesis. Econometrica, 64, 413–430. Heston, S. L. (1993). A closed form solution for option with stochastic volatility with applications to bond and currency options. Review of Financial Studies, 6, 327–344. Hong, Y. (2001). Evaluation of out-of-sample probability density forecasts with applications to S&P 500 stock prices (Working paper). Cornell University. Hong, Y. M., & Li, H. T. (2005). Nonparametric specification testing for continuous time models with applications to term structure interest rates. Review of Financial Studies, 18, 37–84. Hong, Y. M., Li, H. T., & Zhao, F. (2007). Can the random walk model be beaten in out-of-sample density forecasts? Evidence from intraday foreign exchange rates. Journal of Econometrics, 141, 736–776. Hull, J., & White, A. (1987). The pricing of options on assets with stochastic volatility. Journal of Finance, 42, 281–300. Inoue. (2001). Testing for distributional change in time series. Econometric Theory, 17, 156–187. Jiang, G. J. (1998). Nonparametric modelling of US term structure of interest rates and implications on the prices of derivative securities. Journal of Financial and Quantitative Analysis, 33, 465–497. Karlin, S., & Taylor, H. M. (1981). A second course in stochastic processes. San Diego: Academic. Kloeden, P. E., & Platen, E. (1999). Numerical solution of stochastic differential equations. New York: Springer. Kolmogorov, A. N. (1933). Sulla Determinazione Empirica di una Legge di Distribuzione. Giornale dell’Istitutodegli Attuari, 4, 83–91. 
Kristensen, D., & Shin, Y. (2008). Estimation of dynamic models with nonparametric simulated maximum likelihood (Creates Research paper 2008-58). University of Aarhus and Columbia University.
Künsch, H. R. (1989). The jackknife and the bootstrap for general stationary observations. Annals of Statistics, 17, 1217–1241. Marsh, T., & Rosenfeld, E. (1983). Stochastic processes for interest rates and equilibrium bond prices. Journal of Finance, 38, 635–646. Meddahi, N. (2001). An Eigenfunction approach for volatility modeling (Working paper). University of Montreal. Merton, R. C. (1973). Theory of rational option pricing. The Bell Journal of Economics and Management Science, 4, 141–183. Nelson, D. B. (1990). ARCH models as diffusion approximations. Journal of Econometrics, 45, 7–38. Pardoux, E., & Talay, D. (1985). Discretization and simulation of stochastic differential equations. Acta Applicandae Mathematicae, 3, 23–47. Piazzesi, M. (2001). Macroeconomic jump effects and the yield curve (Working paper). UCLA. Piazzesi, M. (2004). Affine term structure models. Manuscript prepared for the Handbook of Financial Econometrics. University of California at Los Angeles. Piazzesi, M. (2005). Bond yields and the federal reserve. Journal of Political Economy, 113, 311–344. Politis, D. N., Romano, J. P., & Wolf, M. (1999). Subsampling. New York: Springer. Pritsker, M. (1998). Nonparametric density estimators and tests of continuous time interest rate models. Review of Financial Studies, 11, 449–487. Rosenblatt, M. (1952). Remarks on a multivariate transformation. Annals of Mathematical Statistics, 23, 470–472. Singleton, K. J. (2006). Empirical dynamic asset pricing – Model specification and econometric assessment. Princeton: Princeton University Press. Smirnov, N. (1939). On the estimation of the discrepancy between empirical curves of distribution for two independent samples. Bulletin Mathématique de l'Université de Moscou, 2, fasc. 2. Sullivan, R., Timmermann, A., & White, H. (1999). Data-snooping, technical trading rule performance, and the bootstrap. The Journal of Finance, 54(5), 1647–1691. Sullivan, R., Timmermann, A., & White, H. (2001). Dangers of data-driven inference: The case of calendar effects in stock returns. Journal of Econometrics, 105, 249–286. Thompson, S. B. (2008). Identifying term structure volatility from the LIBOR-swap curve. Review of Financial Studies, 21, 819–854. van der Vaart, A. (1998). Asymptotic statistics. Cambridge, MA: Cambridge University Press. Vasicek, O. A. (1977). An equilibrium characterization of the term structure. Journal of Financial Economics, 5, 177–188. Vuong, Q. (1989). Likelihood ratio tests for model selection and non-nested hypotheses. Econometrica, 57, 307–333. White, H. (1982). Maximum likelihood estimation of misspecified models. Econometrica, 50, 1–25. White, H. (2000). A reality check for data snooping. Econometrica, 68, 1097–1126. Wong, E. (1964). The construction of a class of stationary Markov processes. In R. Bellman (Ed.), Sixteenth symposium in applied mathematics: Stochastic processes in mathematical physics and engineering. Providence: American Mathematical Society.
Assessing the Performance of Estimators Dealing with Measurement Errors
57
Heitor Almeida, Murillo Campello, and Antonio F. Galvao
Contents
57.1 Introduction ..... 1564
57.2 Dealing with Mismeasurement: Alternative Estimators ..... 1570
  57.2.1 The Erickson–Whited Estimator ..... 1570
  57.2.2 An OLS-IV Framework ..... 1574
  57.2.3 GMM Estimator ..... 1578
57.3 Monte Carlo Analysis ..... 1581
  57.3.1 Monte Carlo Design ..... 1582
  57.3.2 The EW Identification Test ..... 1585
  57.3.3 Bias and Efficiency of the EW, OLS-IV, and AB-GMM Estimators ..... 1587
  57.3.4 Heteroscedasticity ..... 1593
  57.3.5 Identification of the EW Estimator in Panel Data ..... 1594
  57.3.6 Revisiting the OLS-IV Assumptions ..... 1598
  57.3.7 Distributional Properties of the EW and OLS-IV Estimators ..... 1600
57.4 Empirical Application ..... 1602
  57.4.1 Theoretical Expectations ..... 1603
  57.4.2 Data Description ..... 1604
  57.4.3 Testing for the Presence of Fixed Effects and Heteroscedasticity ..... 1604
  57.4.4 Implementing the EW Identification Test ..... 1605
This article is based on Almeida, H., M. Campello, and A. Galvao, 2010, Measurement Errors in Investment Equations, Review of Financial Studies, 23, pages 3279–3382.
H. Almeida (*), University of Illinois at Urbana-Champaign, Champaign, IL, USA, e-mail: [email protected]
M. Campello, Cornell University, Ithaca, NY, USA, e-mail: [email protected]
A.F. Galvao, University of Iowa, Iowa City, IA, USA, e-mail: [email protected]
  57.4.5 Estimation Results ..... 1606
  57.4.6 Robustness of the Empirical OLS-IV Estimator ..... 1611
57.5 Concluding Remarks ..... 1613
References ..... 1615
Abstract
We describe different procedures to deal with measurement error in linear models and assess their performance in finite samples using Monte Carlo simulations and data on corporate investment. We consider the standard instrumental variable approach proposed by Griliches and Hausman (Journal of Econometrics 31:93–118, 1986) as extended by Biorn (Econometric Reviews 19:391–424, 2000) [OLS-IV], the Arellano and Bond (Review of Economic Studies 58:277–297, 1991) instrumental variable estimator, and the higher-order moment estimator proposed by Erickson and Whited (Journal of Political Economy 108:1027–1057, 2000, Econometric Theory 18:776–799, 2002). Our analysis focuses on characterizing the conditions under which each of these estimators produce unbiased and efficient estimates in a standard “errors-invariables” setting. In the presence of fixed effects, under heteroscedasticity, or in the absence of a very high degree of skewness in the data, the EW estimator is inefficient and returns biased estimates for mismeasured and perfectly measured regressors. In contrast to the EW estimator, IV-type estimators (OLS-IV and AB-GMM) easily handle individual effects, heteroscedastic errors, and different degrees of data skewness. The IV approach, however, requires assumptions about the autocorrelation structure of the mismeasured regressor and the measurement error. We illustrate the application of the different estimators using empirical investment models. Our results show that the EW estimator produces inconsistent results when applied to real-world investment data, while the IV estimators tend to return results that are consistent with theoretical priors. Keywords
Investment equations • Measurement error • Monte Carlo simulations • Instrumental variables • GMM • Bias • Fixed effects • Heteroscedasticity • Skewness • High-order moments
57.1 Introduction
OLS estimators are the workhorse of empirical research in many fields in applied economics. Researchers see a number of advantages in these estimators. Most notably, they are easy to implement and the results they generate are easy to replicate. Another appealing feature of OLS estimators is that they easily accommodate the inclusion of individual (e.g., firm and time) idiosyncratic effects. Despite their popularity, however, OLS estimators are weak in dealing with the problem of errors in variables. When the independent (right-hand side) variables of an empirical model are mismeasured, coefficients estimated via standard OLS are
inconsistent (attenuation bias). This poses a problem since, in practice, it is hard to think of any empirical proxies in applied research whose measurement is not a concern. In this chapter, we describe three estimators that deal with the problem of mismeasurement, namely, the standard instrumental variable approach extended by Biorn (2000) [OLS-IV], the Arellano and Bond (1991) instrumental variable estimator [AB-GMM], and the higher-order moment estimator proposed by Erickson and Whited (2000, 2002) [henceforth, EW]. We also assess the performance of these estimators in finite samples using Monte Carlo simulations and illustrate their application using data on corporate investment. While we provide a formal presentation in the next section, it is useful to discuss the intuition behind the estimation approaches we analyze. All approaches share the attractive property that they do not require the researcher to look for instruments outside the model being considered.1 They differ, however, on how identification is achieved. Both instrumental variable approaches rely on assumptions about the serial correlation of the latent variable and the innovations of the model (the model’s disturbances and the measurement error). There are two conditions that must hold to ensure identification. First, the true value of the mismeasured regressor must have some degree of autocorrelation. In this case, lags of the mismeasured regressor are natural candidates for the instrumental set since they contain information about the current value of the mismeasured regressor.2 This condition is akin to the standard requirement that the instrument be correlated with the variable of interest. The other necessary condition is associated with the exclusion restriction that is standard in IV methods and relates to the degree of serial correlation of the innovations. A standard assumption guaranteeing identification is that the measurement-error process is independently and identically distributed. This condition ensures that past values of the observed variables are uncorrelated with the current value of the measurement error, validating the use of lags of observed variables as instruments. Under certain conditions, one can also allow for autocorrelation in the measurement error. Examples in which identification works are when autocorrelation is constant over time or when it evolves according to a moving average process.3 The first assumption ensures identification because it means that while lagged values of the measurement error are correlated with its current value, any past shocks to the measurement-error process do not persist over time. The moving average assumption allows for shocks to persist over time, but it imposes restrictions on the instrumental set. In particular, as we show below, it precludes the 1
Naturally, if extraneous instruments are available, they can help solve the identification problem. See Rauh (2006) for the use of discontinuities in pension contributions as a source of variation in cash flows in an investment model. Bond and Cummins (2000) use information contained in financial analysts’ forecasts to instrument for investment demand. 2 Lags of the well-measured variable may also be included in the instrument set if they are believed to also contain information about the mismeasured one. 3 See, among others, Biorn (2000), Wansbeek (2001), and Xiao et al. (2008).
use of shorter lags of observed variables in the instrument set, as the information contained in short lags may be correlated with the current value of the measurement error. The EW estimator is based on high-order moments of residuals obtained by “partialling out” perfectly measured regressors from the dependent, observed mismeasured, and latent variables, as well as high-order moments of the innovations of the model. The key idea is to create a set of auxiliary equations as a function of these moments and cross-moments. Implementation then requires a high degree of skewness in the distribution of the partialled out latent variable. Our analysis also shows that the presence of individual fixed effects and heteroscedasticity also impacts the performance of the EW estimator, particularly so if they are both present in the data process. We perform a series of Monte Carlo simulations to compare the performance of the EW and IV estimators in finite samples. Emulating the types of environments commonly found by empirical researchers, we set up a panel data model with individual fixed effects and potential heteroscedasticity in the errors. Monte Carlo experiments enable us to study those estimators in a “controlled environment,” where we can investigate the role played by each element (or assumption) of an estimator in evaluating its performance. Our simulations compare the EW and IV (OLS-IV and AB-GMM) estimators in terms of bias and root mean squared error (RMSE), a standard measure of efficiency. We consider several distributional assumptions to generate observations and errors. Experimenting with multiple distributions is important because researchers often find a variety of distributions in real-world applications and because one ultimately does not observe the distribution of the mismeasurement term. Since the EW estimator is built around the notion of skewness of the relevant distributions, we experiment with three skewed distributions (lognormal, chi-square, and F-distribution), using the standard normal (non-skewed) as a benchmark. The simulations also allow for significant correlation between mismeasured and wellmeasured regressors (as in Erickson and Whited 2000, 2002), so that the attenuation bias of the mismeasured regressor affects the coefficient of the well-measured regressor. Our simulation results can be summarized as follows. First, we examine the identification test proposed by Erickson and Whited (2002). This is a test that the data contain a sufficiently high degree of skewness to allow for the identification of their model. We study the power of the EW identification test by generating data that do not satisfy its null hypothesis of non-skewness. In this case, even for the most skewed distribution (lognormal), the test rejects the null hypothesis only 47 % of the time – this is far less than desirable, given that the null is false. The power of the test becomes even weaker after we treat the data for the presence of fixed effects in the true model (“within transformation”). In this case, the rejection rate under the lognormal distribution drops to 43 %. The test’s power declines even further when we consider alternative skewed distributions (chi-square and F-distributions). The upshot of this first set of experiments is that the EW model too often rejects data that are
generated to fit its identifying assumptions. These findings may help explain some of the difficulties previous researchers have reported when attempting to implement the EW estimator. We then study the bias and efficiency of the EW and IV estimators. Given that the true model contains fixed effects, it is appropriate to apply the within transformation to the data. However, because most empirical implementations of the EW estimator have used data in “level form” (i.e., not treated for the presence of fixed effects),4 we also experiment with cases in which we do not apply the within transformation. EW propose three estimators that differ according to the number of moments used: GMM3, GMM4, and GMM5. We consider all of them in our experiments. In a first round of simulations, we impose error homoscedasticity. When we implement the EW estimator with the data in level form, we find that the coefficients returned are significantly biased even when the data have a high degree of skewness (i.e., under the lognormal case, which is EW’s preferred case). Indeed, for the mismeasured regressors the EW biases are in excess of 100 % of their true value. As should be expected, the performance of the EW estimator improves once the within transformation is used. In the case of the lognormal distribution, the EW estimator bias is relatively small. In addition, deviations from the lognormal assumption tend to generate significant biases for the EW estimator. In a second round of simulations, we allow for heteroscedasticity in the data. We focus our attention on simulations that use data that are generated using a lognormal distribution after applying the within transformation (the best case scenario for the EW estimator). Heteroscedasticity introduces heterogeneity to the model and consequently to the distribution of the partialled out dependent variable, compromising identification in the EW framework. The simulations show that the EW estimator is biased and inefficient for both the mismeasured and well-measured regressors. In fact, biases emerge even for very small amounts of heteroscedasticity, where we find biases of approximately 40 % for the mismeasured regressor. Paradoxically, biases “switch signs” depending on the degree of heteroscedasticity that is allowed for in the model. For instance, for small amounts of heteroscedasticity, the bias of the mismeasured regressor is negative (i.e., the coefficient is biased downwards). However, the bias turns positive for a higher degree of heteroscedasticity. Since heteroscedasticity is a naturally occurring phenomenon in corporate data, our simulations imply that empirical researchers might face serious drawbacks when using the EW estimator. Our simulations also show that, in contrast to the EW estimator, the bias in the IV estimates is small and insensitive to the degree of skewness and heteroscedasticity in the data. Focusing on the OLS-IV estimator, we consider the case of time-invariant correlation in the error structure and use the second lag of the
4
Examples are Whited (2001, 2006), Hennessy (2004), and Colak and Whited (2007).
observed mismeasured variable as an instrument for its current (differenced) value.5 We also allow the true value of the mismeasured regressor to have a moderate degree of autocorrelation. Our results suggest that the OLS-IV estimator renders fairly unbiased estimates. In general, that estimator is also distinctly more efficient than the EW estimator. We also examine the OLS-IV estimator’s sensitivity to the autocorrelation structures of the mismeasured regressor and the measurement error. First, we consider variations in the degree of autocorrelation in the process for the true value of the mismeasured regressor. Our simulations show that the IV bias is largely insensitive to variations in the autoregressive (AR) coefficient (except for extreme values of the AR coefficient). Second, we replace the assumption of timeinvariant autocorrelation in the measurement error with a moving average (MA) structure. Our simulations show that the OLS-IV bias remains small if one uses long enough lags of the observable variables as instruments. In addition, provided that the instrument set contains suitably long lags, the results are robust to variations in the degree of correlation in the MA process. As we discuss below, understanding these features (and limitations) of the IV approach is important given that the researcher will be unable to pin down the process followed by the measurement-error process. To illustrate the performance of these alternative estimators on real data, in the final part of our analysis, we estimate empirical investment models under the EW and IV frameworks. Concerns about measurement errors have been emphasized in the context of the empirical investment model introduced by Fazzari et al. (1988), where a firm’s investment is regressed on a proxy for investment demand (Tobin’s q) and cash flows. Theory suggests that the correct proxy for a firm’s investment demand is captured by marginal q, but this quantity is unobservable and researchers use instead its measurable proxy, average q. Since the two variables are not the same, a measurement problem naturally arises (Hayashi 1982; Poterba 1988). Following Fazzari et al. (1988), investment-cash flow sensitivities became a standard metric in the literature that examines the impact of financing imperfections on corporate investment (Stein 2003). These empirical sensitivities are also used for drawing inferences about efficiency in internal capital markets (Lamont 1997; Shin and Stulz 1998), the effect of agency on corporate spending (Hadlock 1998; Bertrand and Mullainathan 2005), the role of business groups in capital allocation (Hoshi et al. 1991), and the effect of managerial characteristics on corporate policies (Bertrand and Schoar 2003; Malmendier and Tate 2005). Theory does not pin down exact values for the expected coefficients on q and cash flow in an investment model. However, two conditions would seem reasonable in practice. First, given that the estimator is addressing measurement error in q that may be “picked up” by cash flow (joint effects of attenuation bias and regressor
5
The results for the Arellano–Bond GMM estimator are similar to those of the OLS-IV estimator. To save space and because the OLS-IV estimator is easier to implement, we focus on this estimator.
57
Assessing the Performance of Estimators Dealing with Measurement Errors
1569
covariance), we should expect the q coefficient to go up and the cash flow coefficient to go down, when compared to standard (likely biased) OLS estimates. Second, we would expect the q and cash flow coefficients to be nonnegative after addressing the problem of mismeasurement. If the original q-theory of investment holds and the estimator does a good job of addressing mismeasurement, then the cash flow coefficient would be zero. Alternatively, the cash flow coefficient could be positive because of financing frictions.6 Using data from Compustat from 1970 to 2005, we estimate investment equations in which investment is regressed on proxies for q and cash flow. Before doing so, we perform standard tests to check for the presence of individual fixed effects and heteroscedasticity in the data. In addition, we perform the EW identification test to check whether the data contain a sufficiently high degree of skewness. Our results are as follows. First, our tests reject the hypotheses that the data do not contain firm-fixed effects and that errors are homoscedastic. Second, the EW identification tests indicate that the data fail to display sufficiently high skewness. These initial tests suggest that the EW estimator is not suitable for standard investment equation applications. In fact, we find that, when applied to the data, the EW estimator returns coefficients for q and cash flow that are highly unstable across different years. Moreover, following the EW procedure for panel models (which comprises combining yearly cross-sectional coefficients into single estimates), we obtain estimates for q and cash flow that do not satisfy the conditions discussed above. In particular, EW estimators do not reduce the cash flow coefficient relative to that obtained by standard OLS, while the q coefficient is never statistically significant. In addition, those estimates are not robust with respect to the number of moments used: EW’s GMM3, GMM4, and GMM5 models procedure results that are inconsistent with one another. These results suggest that the presence of heteroscedasticity and fixed effects in real-world investment data hampers identification when using the EW estimator. In contrast to EW, the OLS-IV procedure yields estimates that are fairly sensible. The q coefficient goes up by a factor of 3–5, depending on the set of instruments used. At the same time, the cash flow coefficient goes down by about two-thirds of the standard OLS value. Similar conclusions apply to the AB-GMM estimator. We also examine the robustness of the OLS-IV to variations in the set of instruments used in the estimation, including sets that feature only longer lags of the variables in the model. The OLS-IV coefficients remain fairly stable after such changes. These results suggest that real-world investment data likely satisfies the assumptions that are required for identification of IV estimators. The remainder of the chapter is structured as follows. We start the next section discussing in detail the EW estimator, clarifying the assumptions that are needed for its implementation. Subsequently, we show how alternative IV models deal with 6 See Hubbard (1998) and Stein (2003) for comprehensive reviews. We note that the presence of financing frictions does not necessarily imply that the cash flow coefficient should be positive. See Chirinko (1993) and Gomes (2001) for arguments suggesting that financing frictions are not sufficient to generate positive cash flow coefficients.
1570
H. Almeida et al.
the errors-in-variables problem. In Sect. 57.3, we use Monte Carlo simulations to examine the performance of alternative estimators in small samples and when we relax the assumptions that are required for identification. In Sect. 57.4, we take our investigation to actual data, estimating investment regressions under the EW and IV frameworks. Section 57.5 concludes the chapter.
57.2
Dealing with Mismeasurement: Alternative Estimators
57.2.1 The Erickson–Whited Estimator In this section, we discuss the estimator proposed in companion papers by Erickson and Whited (2000, 2002). Those authors present a two-step generalized method of moments (GMM) estimator that exploits information contained in the high-order moments of residuals obtained from perfectly measured regressors (similar to Cragg 1997). We follow EW and present the estimator using notation of crosssection estimation. Let (yi, zi, xi), i ¼ 1,. . .,n, be a sequence of observable vectors, where xi (xi1,. . .,xiJ) and zi (1, zi1,. . ., ziL). Let (ui, wi, ei) be a sequence of unobservable vectors, where wi (wi1,. . .,wiJ) and ei (ei1,. . .,eiJ). Consider the following model: yi ¼ zi a þ wi b þ ui
(57.1)
where yi is the dependent variable, zi is a perfectly measured regressor, wi is a mismeasured regressor, ui is the innovation of the model, and a (a0, ai, . . .,aL)0 and b (b1, . . .,bJ)0 . The measurement error is assumed to be additive such that x i ¼ wi þ e i
(57.2)
where xi is the observed variable and ei is the measurement error. The observed variables are yi, zi, and xi; and by substituting Eq. 57.2 in Eq. 57.1, we have yi ¼ zi a þ xi b þ v i , where vi ¼ ui eib. In the new regression, the observable variable xi is correlated with the innovation term vi, causing the coefficient of interest, b, to be biased. To compute the EW estimator, it is necessary to first partial out the effect of the well-measured variable, zi, in Eqs. 57.1 and 57.2 and rewrite the resulting expressions in terms of residual populations: yi zi my ¼ i b þ ui
(57.3)
xi zi mx ¼ i þ ei ,
(57.4)
57
Assessing the Performance of Estimators Dealing with Measurement Errors
1571
where (my, mx, mw) ¼ [E(z0izi)]1E[z0i(yi, xi, wi)] and i wi zimw. For the details of this derivation, see Erickson and Whited (2002, p. 779). One can then consider a two-step estimation approach, where the first step is to substitute least squares Xn 0 1 Xn 0 estimates m^y ; m^x zz z ðy ; x Þ into Eqs. 57.3 and 57.4 to i¼1 i i i¼1 i i i obtain a lower dimensional errors-in-variables model. The second step consists of estimating b using GMM using high-order sample moments of yi zi m^y and xi zi m^x . Estimates of a are then recovered via my ¼ a + mxb. Thus, the estimators are based on equations giving the moments of yi zimy and xi zimx as functions of b and the moments of (ui, ei, i). To give a concrete example of how the EW estimator works, we explore the case of J ¼ 1. The more general case is discussed below. By substituting
m^y ; m^x
n X
!1 z0i zi
i¼1
n X
zi 0 ðyi ; xi Þ
i¼1
into Eqs. 57.1 and 57.2, one can estimate b, E(u2i ), E(e2i ), and E(2i ) via GMM. Estimates of the lth element of a are obtained by substituting the estimate of b and the lth elements of m^y and m^x into al ¼ myl mxl b,
l 6¼ 0:
There are three second-order moment equations: h 2 i ¼ b2 E 2i þ E u2i E yi zi m y
(57.5)
E yi zi my ðxi zi mx Þ ¼ bE 2i
(57.6)
h i E ðxi zi mx Þ2 ¼ E 2i þ E e21 :
(57.7)
The left-hand side quantities are consistently estimable, but there are only three equations with which to estimate four unknown parameters on the right-hand side. The third-order product moment equations, however, consist of two equations in two unknowns: h i 2 E yi zi my ðxi zi mx Þ ¼ b2 E 3i , (57.8) h i E yi zi my ðxi zi mx Þ2 ¼ bE 3i :
(57.9)
It is possible to solve these two equations for b. Crucially, a solution exists if the identifying assumptions b 6¼ 0 and E(3i ) 6¼ 0 are true, and one can test the contrary
1572
H. Almeida et al.
hypothesis (i.e., b 6¼ 0 and/or E(3i ) ¼ 0) by testing whether their sample counterparts are significatively different from zero. Given b, Eqs. 57.5, 57.6, 57.7, and 57.9 can be solved for the remaining righthand side quantities. One obtains an overidentified equation system by combining Eqs. 57.5, 57.6, 57.7, 57.8, and 57.9 with the fourth-order product moment equations, which introduce only one new quantity, E(4i ): h i 3 E yi zi my ðxi zi mx Þ ¼ b3 E 4i þ 3E 2i E u2i ,
(57.10)
h i 2 E yi zi my ðxi zi mx Þ2 ¼ b2 E 4i þ E 2i E u2i þ E u2i E 2i þE e2i ,
(57.11)
h i E yi zi my ðxi zi mx Þ3 ¼ b E 4i þ 3E 2i E e2i :
(57.12)
The resulting eight-equation system Eqs. 57.5, 57.6, 57.7, 57.8, 57.9, 57.10, 57.11, and 57.12 contains the six unknowns (b, E(u2i ), E(e2i ), E(2i ), E(3i ), E(4i )). It is possible to estimate this vector by numerically minimizing a quadratic form that minimizes asymptotic variance. The conditions imposed by EW imply restrictions on the residual moments of the observable variables. Such restrictions can be tested using the corresponding sample moments. EW also propose a test for residual moments that is based on several assumptions.7 These assumptions imply testable restrictions on the residuals from the population regression of the dependent and proxy variables on the perfectly measured regressors. Accordingly, one can develop Wald-type partially adjusted statistics and asymptotic null distributions for the test. Empirically, one can use the Wald test statistic and critical values from a chi-square distribution to test whether the last moments are equal to zero. This is an identification test, and if in a particular application one cannot reject the null hypothesis, then the model is unidentified and the EW estimator may not be used. We study the finite sample performance of this test and its sensitivity to different data-generating processes in the next section. It is possible to derive more general forms of the EW estimator. In particular, the EW estimators are based on the equations for the moments of yi zimy and xi zimx as functions of b and the moments ui, ei, and i. To derive these equations, write Eq. 57.3 as yi zimy ¼ ∑ jJ ¼ 1ijbj + ui, where J is the number of well-measured regressors and the jth equation in Eq. 57.4 as xij zimxj ¼ ij + eij, where mxj is the jth column of and mx and (ij, eij) is the jth row of (0ij, e0ij). Next write 7
First, the measurement errors, the equation error, and all regressors have finite moments of sufficiently high order. Second, the regression error and the measurement error must be independent of each other and of all regressors. Third, the residuals from the population regression of the unobservable regressors on the perfectly measured regressors must have a nonnormal distribution.
57
Assessing the Performance of Estimators Dealing with Measurement Errors
h
E yi zi my
r0 YJ j¼1
ðxi zi mx Þ
rj
i
" ¼E
J X
!r0 i b þ ui
YJ
j¼1
j¼1
1573
# ði þ ei Þ
rj
,
(57.13) where (r0, r1,. . ., rJ) are nonnegative integers. After expanding the right-hand side of Eq. 57.13, using the multinomial theorem, it is possible to write the above moment condition as E½gi ðmÞ ¼ cðyÞ, r where m ¼ vec(my, mx), gi(m) is a vector of distinct elements of the form yi zi my 0 YJ ðx zi mx Þrj , cðyÞ contains the corresponding expanded version of the rightj¼1 i
hand side of Eq. 57.13, and y is a vector containing the elements of b and the moments of (ui, ei, i). The GMM estimator y is defined as 0 ^ ðgi ðm^Þ cðtÞÞ, y^ ¼ arg min ðgi ðm^Þ cðtÞÞ W
t2Y
Xn ^ is a positive definite matrix. Assuming a g ðsÞ for all s, and W where g- i ðsÞ t¼1 i 8 number of regularity conditions, the estimator is consistent and asymptotically normal. It is important to notice that the estimator proposed by Erickson and Whited (2002) was originally designed for cross-sectional data. To accommodate a panellike structure, Erickson and Whited (2000) propose transforming the data before the estimation using the within transformation or differencing. To mimic a panel structure, the authors propose the idea of combining different cross-sectional GMM estimates using a minimum distance estimator (MDE). The MDE estimator is derived by minimizing the distance between the auxiliary parameter vectors under the following restrictions: f b; y^ ¼ Hb y^ ¼ 0, where the R · K K matrix H imposes (R 1) · K restrictions on y. The R K 1 vector y^ contains the R stacked auxiliary parameter vectors, and b is the parameter of interest. Moreover, H is defined by an R · K K – dimensional stacked identity matrix.
More specifically, these conditions are as follows: (zi, wi, ui, ei) is an independent and identically distributed sequence; ui and the elements of zi, wi, and ei, have finite moments of every order; (ui, ei) is independent of (zi, wi), and the individual elements in (ui, ei) are independent of each other; E(ui) ¼ 0 and E(ei) ¼ 0; E[(zi, wi)0 (zi, wi)] is positive definite; every element of b is nonzero; and the distribution of satisfies E[(ic)3] 6¼ 0 for every vector of constants c ¼ (c1, ,cJ) having at least one nonzero element.
8
1574
H. Almeida et al.
The MDE is given by the minimization of 0 h i1 DðbÞ ¼ f b; y^ V^ y^ f b; y^ ,
(57.14)
^ is the common estimated variance–covariance matrix of the auxiliary where V^ ½y parameter vectors. In order to implement the MDE, it is necessary to determine the covariances between the cross-sections being pooled. EW propose to estimate the covariance by using the covariance between the estimators’ respective influence functions.9 The procedure requires that each cross-section have the same sample size, that is, the panel needs to be balanced. Thus, minimization of D in Eq. 57.14 leads to b^ ¼
h i1 1 0 h i1 0 ^ H V^ y^ H H V^ y^ y,
with variance–covariance matrix: h i 0 h i1 1 V^ b^ ¼ H V^ y^ H : H is a vector in which R is the number of GMM estimates available (for each time period) and K ¼ 1, y^ is a vector containing allh the i EW estimates for each period, and b is the MDE of interest. In addition, V y^ is a matrix carrying the estimated variance–covariance matrices of the GMM parameter vectors.
57.2.2 An OLS-IV Framework In this section, we revisit the work of Griliches and Hausman (1986) and Biorn (2000) to discuss a class of OLS-IV estimators that can help address the errors-invariables problem. Consider the following single-equation model: yit ¼ gi þ wit b þ uit,
i ¼ 1, . . . N,
t ¼ 1, . . . , T,
(57.15)
where uit is independently and identically distributed, with mean zero and variance s2u, and Cov(wit, uis) ¼ Cov(gi, uis) ¼ 0 for any t and s, but Cov(gi, wit) 6¼ 0, y is an observable scalar, w is a 1 K vector, and b is K 1 vector. Suppose we do not observe wit itself, but rather the error-ridden measure:
9
See Erickson and Whited (2002) Lemma 1 for the definition of their proposed influence function.
57
Assessing the Performance of Estimators Dealing with Measurement Errors
xit ¼ wit þ eit ,
1575
(57.16)
where Cov(wit, eit) ¼ Cov(gi, eit) ¼ Cov(uis, eit) ¼ 0, Var(eit) ¼ s2e , Cov(eit, eit 1) ¼ ges2e , and e is a 1 K vector. If we have a panel data with T > 3, by substituting Eq. 57.16 in Eq. 57.15, we can take first differences of the data to eliminate the individual effects gi and obtain yit yit1 ¼ ðxit xit1 Þb þ ½ðuit eit bÞ ðuit1 eit1 bÞ
(57.17)
Because of the correlation between the mismeasured variable and the innovations, the coefficient of interest is known to be biased. Griliches and Hausman (1986) propose an instrumental variable approach to reduce the bias. If the measurement error eit is i.i.d. across i and t, and x is serially correlated, then, for example, xit2, xit3, or (xit2 xit3) are valid as instruments for (xit xit1). The resulting instrumental variable estimator is consistent even though T is finite and N might tend to infinity. As emphasized by Erickson and Whited (2000), for some applications, the assumption of i.i.d. measurement error can be seen as too strong. Nonetheless, it is possible to relax this assumption to allow for autocorrelation in the measurement errors. While other alternatives are available, here we follow the approach suggested by Biorn (2000).10 Biorn (2000) relaxes the i.i.d. condition for innovations in the mismeasured equation and proposes alternative assumptions under which consistent IV estimators of the coefficient of the mismeasured regressor exists. Under those assumptions, as we will show, one can use the lags of the variables already included in the model as instruments. A notable point is that the consistency of these estimators is robust to potential correlation between individual heterogeneity and the latent regressor. Formally, consider the model described in Eqs. 57.15 and 57.16 and assume that (wit, uit, eit, gi) are independent across individuals. For the necessary orthogonality assumptions, we refer the reader to Biorn (2000), since these are quite standard. More interesting are the assumptions about the measurement errors and disturbances. The standard Griliche–Hausman’s assumptions are (A1) E(e0iteiy) ¼ 0KK, t 6¼ y, (A2) E(uituiy) ¼ 0, t 6¼ y which impose non-autocorrelation on innovations. It is possible to relax these assumptions in different ways. For example, we can replace (A1) and (A2) with (B1) E(e0iteiy) ¼ 0KK, jt yj > t, (B2) E(uituiy) ¼ 0, jt yj > t,
10
A more recent paper by Xiao et al. (2008) also shows how to relax the classical Griliches– Hausman assumptions for measurement error models.
1576
H. Almeida et al.
This set of assumptions is weaker since (B1) and (B2) allow for a vector moving average (MA) structure up to order t (1) for the innovations. Alternatively, one can use the following assumptions: (C1) E(e0itety) is invariant to t, y, t 6¼ y. (C2) E(uituiy) is invariant to t, y, t 6¼ y. Assumptions (C1) and (C2) allow for a different type of autocorrelation, more specifically they allow for any amount of autocorrelation that is time invariant. Assumptions (C1) and (C2) will be satisfied if the measurement errors and the disturbances have individual components, say eit ¼ e1i + e2it, uit ¼ u1i + u2it, where e1i, e2it, u1i, u2it i.i.d. Homoscedasticity of eit and/or uit across i and t need not be assumed; the model accommodates various forms of heteroscedasticity. Biorn also considers assumptions related to the distribution of the latent regressor vector wit: (D1) E(wit) is invariant to t. (D2) E(giwit) is invariant to t. Assumptions (D1) and (D2) hold when wit is stationary. Note that wit and i need not be uncorrelated. To ensure identification of the slope coefficient vector when panel data are available, it is necessary to impose restrictions on the second-order moments of the variables (wit, uit, eit, gi). For simplicity, Biorn assumes that this distribution is the same across individuals and that the moments are finite. More specifically, wn 0 ee uu 2 C(wit, wit) ¼ ∑ ww ty , E(wit i) ¼ ∑ t , E(eiteit) ¼ ∑ty, E(uituit) ¼ sty , E( i ) ¼ s , where C denotes the covariance matrix operator. Then, it is possible to derive the second-order moments of the observable variables and show that they only depend on these matrices and the coefficient b.11 In this framework, there is no need to use assumptions based on higher-order moments. Biorn proposes several strategies to estimate the slope parameter of interest. Under the OLS-IV framework, he proposes estimation procedures of two kinds: • OLS-IV A: The equation is transformed to differences to remove individual heterogeneity and is estimated by OLS-IV. Admissible instruments for this case are the level values of the regressors and/or regressands for other periods. • OLS-IV B: The equation is kept in level form and is estimated by OLS-IV. Admissible instruments for this case are differenced values of the regressors and/or regressands for other periods. Using moment conditions from the OLS-IV framework, one can define the estimators just described. In particular, using the mean counterpart and the moment conditions, one can formally define the OLS-IV A and OLS-IV B estimators.
ee ww w Formally, one can show that C(xit, xiy) ¼ ∑ ww ty + ∑ ty, E(xit, yiy) ¼ ∑ ty b + ∑ t , and E(yit, yiy) xn xn 0 uu H0 b + ∑ b + b (∑ ) + s + s . ¼ b0 ∑ ww ty t y ty
11
57
Assessing the Performance of Estimators Dealing with Measurement Errors
1577
In particular, the estimator for OLS-IV A can be defined as " #1 " # N N X X 0 0 b^ ¼ x ðDxity Þ x ðDy Þ , xpðtyÞ
ip
ity
ip
i¼1
i¼1
where (t, y, p) are indices. Let the dimension of b be defined by K. If K ¼ 1, it is possible to define the following estimator for a given (t, y, p): " #1 " # N N X X b^ ¼ y ðDxity Þ y ðDy Þ : ypðtyÞ
ip
ip
i¼1
ity
i¼1
If K > 1, the latter estimator is infeasible, but it is possible to modify the former estimator by replacing one element in x0ip by yip. The estimator for OLS-IV B (equation in level and instruments in difference) can be defined as " #1 " # N N X X 0 0 b^ ¼ Dxipq xit Dxipq y : xðpqÞt
it
i¼1
i¼1
As in the previous case, if the dimension of b, K is equal to 1, it is possible to define the following estimator for (t, p, q): " #1 " # N N X X ^ ¼ Dy xit Dy y : b yðpqÞt
ipq
i¼1
ipq
it
i¼1
If K > 1, the latter estimator is infeasible, but it is possible to modify the former estimator by replacing one element in Dxip by Dyip. For some applications, it might be useful to impose weaker conditions on the autocorrelation of measurement errors and disturbances. In this case, it is necessary to restrict slightly further the conditions on the instrumental variables. More formally, if one replaces assumptions (A1) and (A2), or (C1) and (C2), by the weaker assumptions (B1) and (B2), then it is necessary to ensure that the IV set has a lag of at least t 2 and/or lead of at least t + 1 periods of the regressor in order to “clear” the t period memory of the MA process. Consistency of these estimators is discussed in Biorn (2000).12 To sum up, there are two simple ways to relax the standard assumption of i.i.d. measurement errors. Under the assumption of time-invariant autocorrelation, the set of instruments can contain the same variables used under Griliches–Hausman. In particular, if jt pj, jy pj > t, then (B1) and rank (E[w0ip(Dwity)]) ¼ K for some p 6¼ t 6¼ 0 ensure consistency of OLS–IV B, b^xpðtyÞ , and (B2) and the same rank condition ensure consistency of b^xyðtyÞ . In the same way, if jp tj, q t > t, (B1), (D1), (D2), and rank (E[(Dwipq)0 wit)]) ¼ K for some p 6¼ q 6¼ t ensure consistency of OLS-IV B, b^xðpqÞt , and (B2), (D1), (D2), and the same rank condition ensure consistency of b^yðpqÞt .
12
1578
H. Almeida et al.
For example, if one uses the OLS-IV A estimator (equation in differences and instruments in levels), then twice-lagged levels of the observable variables can be used as instruments. Under a moving average structure for the innovations in the measurement error [assumptions (B1) and (B2)], identification requires the researcher to use longer lags of the observable variables as instruments. For example, if the innovations follow an MA(1) structure, then consistency of the OLS-IV A estimator requires the use of instruments that are lagged three periods and longer. Finally, identification requires the latent regressor to have some degree of autocorrelation (since lagged values are used as instruments). Our Monte Carlo simulations will illustrate the importance of these assumptions and will evaluate the performance of the OLS-IV estimator under different sets of assumptions about the structure of the errors.
57.2.3 GMM Estimator Within the broader instrumental variable approach, we also consider an entire class of GMM estimators that deal with mismeasurement. These GMM estimators are close to the OLS-IV estimator discussed above but may attain appreciable gains in efficiency by combining numerous orthogonality conditions [see Biorn (2000) for a detailed discussion]. GMM estimators that use all the available lags at each period as instruments for equations in first differences were proposed by Holtz-Eakin et al. (1988) and Arellano and Bond (1991). We provide a brief discussion in turn. In the context of a standard investment model, Blundell et al. (1992) use GMM allowing for correlated firm-specific effects, as well as endogeneity (mismeasurement) of q. The authors use an instrumental variable approach on a first-differenced model in which the instruments are weighted optimally so as to form the GMM estimator. In particular, they use qit2 and twice-lagged investments as instruments for the first-differenced equation for firm i in period t. The Blundell, Bond, Devereux, and Schiantarelli estimators can be seen as an application of the GMM instrumental approach proposed by Arellano and Bond (1991), which was originally applied to a dynamic panel. A GMM estimator for the errors-in-variables model of Eq. 57.17 based on IV moment conditions takes the form b^ ¼
h
0 i1 0 0 0 1 Dx Z V 1 Z Dx Dx Z V Z Dy , N N
where Dx is the stacked vector of observations on the first difference of the mismeasured variable and Dy is the stacked vector of observations on the first difference of the dependent variable. As in Blundell et al. (1992), the instrument matrix Z has the following form13:
13
In models with exogenous explanatory variables, Zi may consist of sub-matrices with the block diagonal (exploiting all or part of the moment restrictions), concatenated to straightforward one-column instruments.
57
Assessing the Performance of Estimators Dealing with Measurement Errors
0
x1 0 B 0 x1 Zi ¼ B @⋮ ⋮ 0 0
0 x2 ⋮ 0
0 0 ⋮ ⋮ x1
1579
1 0 0 C C: ⋮ ⋮ A xT2
According to standard GMM theory, an optimal choice of the inverse weight matrix VN is a consistent estimate of the covariance matrix of the orthogonality conditions E(Z0iDviDv0iZi), where Dvi are the first-differenced residuals of each XN 0 Z0 DD Zi , individual. Accordingly, a one-step GMM estimator uses V^ ¼ i1
i
where D is the first-difference matrix operator. A two-step GMM estimator uses XN e¼ ZD^ v i D^ v 0Z i where D^ v i are one-step GMM residuals. a robust choice V i¼1
i
Biorn (2000) proposes estimation of linear, static regression equations from panel data models with measurement errors in the regressors, showing that if the latent regressor is autocorrelated or nonstationary, several consistent OLS-IV and GMM estimators exist, provided some structure is imposed on the disturbances and measurement errors. He considers alternative GMM estimations that combine all essential orthogonality conditions. The procedures are very similar to the one described just above under non-autocorrelation in the disturbances. In particular, the required assumptions when allowing autocorrelation in the errors are very similar to those discussed in the previous section. For instance, when one allows for an MA(t) structure in the measurement error, for instance, one must ensure that the variables in the IV matrix have a lead or lag of at least t + 1 periods to the regressor. We briefly discuss the GMM estimators proposed by Biorn (2000). First, consider estimation using the equation in differences and instrumental variables in levels. After taking differences of the model, there are (T 1) + (T + 1) equations that can be stacked for individual i as 3 2 3 3 2 Dyi21 Dxi21 Dei21 7 6 7 7 6 6 Dyi32 Dxi32 Dei32 7 6 7 7 6 6 7 6 7 7 6 6 ⋮ ⋮ ⋮ 7 6 7 7 6 6 6 Dyi , T, T 1 7 6 Dxi T, T 1 7 6 Dei , T, T 1 7 7¼6 7b þ 6 7, 6 7 6 7 7 6 6 Dyi31 Dxi31 Dei31 7 6 7 7 6 6 7 7 7 6 6 6 Dyi42 Dxi42 Dei42 7 6 7 7 6 6 5 5 5 4 4 4 ⋮ ⋮ ⋮ Dyi , T, T 2 Dxi , T, T 2 Dei , T, T 2 2
or compactly Dyi ¼ DXi b þ D2i : The IV matrix is the ((2T 3) KT(T 2)) diagonal matrix with the instruments in the diagonal defined by Z. Let
1580
H. Almeida et al.
h i0 h i0 0 0 0 0 Dy ¼ ðDy1 Þ ; . . . ; ðDyN Þ , D2 ¼ ðD21 Þ ; . . . ; ðD2N Þ h i0 0 0 DX ¼ ðDX1 Þ ; . . . ; ðDXN Þ ,
0 Z ¼ Z01 ; . . . ; Z0N :
The GMM estimator that minimizes [N1(D2)0 Ζ](N2V)1[N1Z0 (D2)] for V ¼ Z0 Z can be written as 2" #31 #" #1 " X X X 0 ðDXi Þ Zi Z 0i Z i Z 0i ðDXi Þ 5 b^Dx ¼ 4 i
i
i
2" #3 #" #1 " X X X 0 4 ðDXi Þ Zi Z0 Zi Z0 ðDyi Þ 5: i
i
i
i
i
If D2 has a non-scalar covariance matrix, a more efficient GMM estimator, e b Dx, can be obtained setting V ¼ VZ(D2) ¼ E[Z0 (D2)(D2)0 Z] and estimating V^ zðD2Þ by V^ ZðD2Þ 1 X 0 c c 0 ¼ Z D2 D2 Z, N i N c i ¼ Dy ðDXi Þb^ . This procedure assumes that (A1) and (A2) are where D2 Dx i satisfied. However, as Biorn (2000) argues, one can replace them by (B1) or (B2) and then ensure that the variables in the IV matrix have a lead or lag of at least t + 1 periods to the regressor, to “get clear of” the t period memory of the MA(t) process. The procedure described below is also based on the same set of assumptions and can be extended similarly.14 The procedure for estimation using equation in levels and IVs in difference is similar. Consider the T stacked level equations for individual i: 2 3 2 3 2 3 2 3 yi1 c 2i1 xi1 4 ⋮ 5 ¼ 4 ⋮ 5 þ 4 ⋮ 5b þ 4 ⋮ 5, c xiT 2iT yiT or more compactly, yi ¼ eT c þ Xi b þ 2, where eΤ denotes a (T 1) vector of ones. Let the (T T(T 2)K) diagonal matrix of instrument be denoted by DZi. This matrix has the instruments in difference in the main diagonal. In addition,
14
See Propositions 1* and 2* in Biorn (2000) for a formal treatment of the conditions.
57
Assessing the Performance of Estimators Dealing with Measurement Errors
1581
define: 0 0 y ¼ y01 ; . . . ; y0N , 2 ¼ 201 ; . . . ; 20N h i0 0 0 0 X ¼ X01 ; . . . ; X0N , DZ ¼ ðDZ 1 Þ ; . . . ; ðDZN Þ : The GMM estimator that minimizes [N120 (DZi)0 ]|(N2VD)1[N1(DZ)0 2] for VD ¼ (DZ)0 (DZ) is 2" b^Lx ¼ 4
X
X0i ðDZi Þ
#" X
i
#1 " ðDZ i Þ0 DZi
i
X
#31 ðDZi Þ0 Xi 5
i
2" #" #1 " #3 X X X 0 4 Xi ðDZi Þ ðDZ i Þ0 DZi ðDZi Þ0 yi 5: i
i
i
If 2 has a non-scalar covariance matrix, a more efficient GMM estimator, e b Lx , can be obtained setting VD ¼ V(DZ)2 ¼ E[(DZ)220 (DZ)] and estimating V^ ðDZÞ2 by 0 V^ ðDZÞ2 1 X 0 ¼ ðDZÞ 2^ 2^ ðDZÞ, N i N
where 2^ ¼ yi Xi b^Lx . Finally, let us briefly contrast the OLS-IV and AB-GMM estimators. The advantages of GMM over IV are clear: if heteroscedasticity is present, the GMM estimator is more efficient than the IV estimator, while if heteroscedasticity is not present, the GMM estimator is no worse asymptotically than the IV. Implementing the GMM estimator, however, usually comes with a high price. The main problem, as Hayashi (2000, p. 215) points out, concerns the estimation of the optimal weighting matrix that is at the core of the GMM approach. This matrix is a function of fourth moments, and obtaining reasonable estimates of fourth moments requires very large sample sizes. Problems also arise when the number of moment conditions is high, that is, when there are “too many instruments.” This latter problem affects squarely the implementation of the AB-GMM, since it relies on large numbers of lags (especially in long panels). The upshot is that the efficient GMM estimator can have poor small sample properties [see Baum et al. (2003) for a discussion]. These problems are well documented and remedies have been proposed by, among others, Altonji and Segal (1996) and Doran and Schmidt (2006).
57.3
Monte Carlo Analysis
We use Monte Carlo simulations to assess the finite sample performance of the EW and IV estimators discussed in Sect. 57.2. Monte Carlo simulations are an ideal experimental tool because they enable us to study those two estimators in a controlled setting, where we can assess and compare the importance of elements
1582
H. Almeida et al.
that are key to estimation performance. Our simulations use several distributions to generate observations. This is important because researchers will often find a variety of distributions in real-world applications and because one ultimately does not see the distribution of the mismeasurement term. Our Monte Carlos compare the EW, OLS-IV, and AB-GMM estimators presented in Sect. 57.2 in terms of bias and RMSE.15 We also investigate the properties of the EW identification test, focusing on the empirical size and power of this test.
57.3.1 Monte Carlo Design A critical feature of panel data models is the observation of multiple data points from the same individuals over time. It is natural to consider that repeat samples are particularly useful in that individual idiosyncrasies are likely to contain information that might influence the error structure of the data-generating process. We consider a simple data-generating process to study the finite sample performance of the EW and OLS-IV estimators. The response variable yit is generated according to the following model: yit ¼ gi þ bwit þ z0it að1 þ rwit Þuit ,
(57.18)
where gi captures the individual-specific intercepts, b is a scalar coefficient associated with the mismeasured variable wit, a ¼ (a1,a2,a3)0 is 3 1 vector of coefficients associated with the 3 1 vector of perfectly measured variables zit ¼ (zit1, zit2, zit3), uit is the error in the model, and r modulates the amount of heteroscedasticity in the model. When r ¼ 0, the innovations are homoscedastic. When r > 0, there is heteroscedasticity associated with the variable wit, and this correlation is stronger as the coefficient gets larger. The model in Eq. 57.18 is flexible enough to allow us to consider two different variables as wit: (1) the individual-specific intercept gi and (2) the well-measured regressor zit. We consider a standard additive measurement error xit ¼ wit þ vit ,
(57.19)
where wit follows an AR(1) process:
The mean squared error (MSE) of an estimator y^ incorporates a component measuring the variability of the estimator (precision) and another measuring its bias (accuracy). An estimator with good MSE properties has small combined variance and bias. The MSE of y^ can be defined as h i2 Var y^ þ Bias y^ . The root mean squared error (RMSE) is simply the square root of the 15
^ For an MSE. This is an easily interpretable statistic, since it has the same unit as the estimator y. approximately unbiased estimator, the RMSE is just the square root of the variance, that is, the standard error.
57
Assessing the Performance of Estimators Dealing with Measurement Errors
ð1 fLÞwit ¼ 2it :
1583
(57.20)
In all simulations, we set w*i, 50 ¼ 0 and generate wit for t ¼ 49, 48, . . .,T, such that we drop the first 50 observations. This ensures that the results are not unduly influenced by the initial values of the wit process. Following Biorn (2000), we relax the assumption of i.i.d. measurement error. Our benchmark simulations will use the assumption of time-invariant autocorrelation [(C1) and (C2)]. In particular, we assume that uit ¼ u1i + u2it and vit ¼ v1i + v2it. We draw all the innovations (u1i,u2it,v1i,v2it,) from a lognormal distribution; that is, we exponentiate two normal distributions and standardize the resulting variables to have unit variances and zero means (this follows the approach used by EW). In Sect. 57.3.6, we analyze the alternative case in which the innovations follow an MA structure. The perfectly measured regressor is generated according to zit ¼ mi þ 2it :
(57.21)
And the fixed effects, mi and gi, are generated as mi ¼ e1i
T 1 X gi ¼ e2i þ pffiffiffi W it , T t¼1
(57.22)
where Wit is the sum of the explanatory variables. Our method of generating mi and gi ensures that the usual random effects estimators are inconsistent because of the correlation that exists between the individual effects and the error term or the explanatory variables. The variables (e1i, e2i) are fixed as standard normal distributions.16 We employ four different schemes to generate the disturbances (2it, eit). Under Scheme 1, we generate them under a normal distribution, N(0,s2u). Under Scheme 2, we generate them from a lognormal distribution, LN(0,s2u). Under Scheme 3, we use a chi-square with 5 degrees of freedom, w25. Under Scheme 4, we generate the innovations from a Fm,n-distribution with m ¼ 10 and n ¼ 40. The latter three distributions are right-skewed so as to capture the key distributional assumptions behind the EW estimator. We use the normal (non-skewed) distribution as a benchmark. Naturally, in practice, one cannot determine how skewed – if at all – is the distribution of the partially out latent variable. One of our goals is to check how this assumption affects the properties of the estimators we consider. Figure 57.1 provides a visual illustration of the distributions we employ. By inspection, at least, the three skewed distributions we study appear to be plausible candidates for the distribution governing mismeasurement, assuming EW’s prior that measurement error must be markedly rightly skewed. 16
Robustness checks show that the choice of a standard normal does not influence our results.
1584
H. Almeida et al. Normal Density
Chi−Square Density
0.4
0.15
0.3 0.2
y
y
0.10
0.05
0.1
0.00
0.0 −3
−2
−1
0 x
1
2
0
3
F Density
10 x
15
20
Lognormal Density 0.6
0.8 0.6
0.4 y
y
5
0.4
0.2 0.2 0.0
0.0 0
5
10 x
15
20
0
5
10 x
15
20
Fig. 57.1
As in Erickson and Whited (2002), our simulations allow for cross-sectional correlation among the variables in the model. We do so because this correlation may aggravate the consequences of mismeasurement of one regressor on the estimated slope coefficients of the well-measured regressors. Notably, this source of correlation is emphasized by EW in their argument that the inferences of Fazzari et al. (1988) are flawed in part due to the correlation between q and cash flows. To introduce this correlation in our application, for each period in the panel, we generate (wi, zi1, zi2, zi3) using the correspondent error distribution and then multiply the resulting vector by [var(wi, zi1, zi2, zi3)]1/2 with diagonal elements equal to 1 and off-diagonal elements equal to 0.5. In the simulations, we experiment with T ¼ 10 and N ¼ 1,000. We set the number of replications to 5,000 and consider the following values for the remaining parameters: ðb; a1 ; a2 ; a3 Þ ¼ ð1, 1, 1, 1Þ f ¼ 0:6, s2u ¼ s2e1 ¼ s2e2 ¼ 1, where the set of slope coefficients b, ai is set similarly to EW.
57
Assessing the Performance of Estimators Dealing with Measurement Errors
1585
Notice that the parameter f controls the amount of autocorrelation of the latent regressor. As explained above, this autocorrelation is an important requirement for the identification of the IV estimator. While we set f ¼ 0.6 in the following experiments, we also conduct simulations in which we check the robustness of the results with respect to variations in f between 0 and 1 (see Sect. 57.3.6).
57.3.2 The EW Identification Test We study the EW identification test in a simple panel data setup. In the panel context, it is important to consider individual fixed effects. If the data contain fixed effects, according to Erickson and Whited (2000), a possible strategy is to transform the data first and then apply their high-order GMM estimator. Accordingly, throughout this section, our estimations consider data presented in two forms: “level” and “within.” The first refers to data in their original format, without the use of any transformation; estimations in level form ignore the presence of fixed effects.17 The second applies the within transformation to the data – eliminating fixed effects – before the model estimation. We first compute the empirical size and power of the test. Note that the null hypothesis is that the model is incorrectly specified, such that b ¼ 0 and/or E[3i ] ¼ 0. The empirical size is defined as the number of rejections of the null hypothesis when the null is true – ideally, this should hover around 5 %. In our case, the empirical size is given when we draw the innovations (2it, eit) from a non-skewed distribution, which is the normal distribution since it generates E[3i ] ¼ 0. The empirical power is the number of rejections when the null hypothesis is false – ideally, this should happen with very high probability. In the present case, the empirical power is given when we use skewed distributions: lognormal, chi-square, and F-distribution. Our purpose is to investigate the validity of the skewness assumption once we are setting b 6¼ 0. Erickson and Whited (2002) also restrict every element of b to be nonzero. We conduct a Monte Carlo experiment to quantify the second part of this assumption. It is important to note that we can compute the E[(i)3] since, in our controlled experiment, we generate wi and therefore observe it. Since the EW test is originally designed for cross-sectional data, the first difficulty the researcher faces when implementing a panel test is aggregation. Following EW, our test is computed for each year separately. We report the average of empirical rejections over the years.18 To illustrate the size and power of the test for the panel data case, we set the time series dimension of the
17
To our knowledge, all but one of the empirical applications of the EW model use the data in level form. In other words, firm-fixed effects are ignored outright in panel setting estimations of parameters influencing firm behavior. 18 The results using the median are similar.
1586
H. Almeida et al.
Table 57.1 The performance of the EW identification test Distribution Normal
Null is True
Lognormal
False
w23
False
F10, 40
False
Data form Level Within Level Within Level Within Level Within
Frequency of rejection 0.05 0.05 0.47 0.43 0.14 0.28 0.17 0.28
This table shows the performance of the EW identification test for different distributional assumptions displayed in column 1. The tests are computed for the data in levels and after applying a within transformation. Column 4 shows the frequencies at which the null hypothesis that the model is not identified is rejected at the 5 % level of significance
panel to T ¼ 10. Our tests are performed over 5,000 samples of cross-sectional size equal to 1,000. We use a simple homoscedastic model with r ¼ 0, with the other model parameters given as above. Table 57.1 reports the empirical size and power of the statistic proposed by EW for testing the null hypothesis H0 : E(y˙2i x˙i) ¼ E(y˙ix˙2i ) ¼ 0. This hypothesis is equivalent to testing H0 : b ¼ 0 and/or E(3i ) ¼ 0. Table 57.1 reports the frequencies at which the statistic of test is rejected at the 5 % level of significance for, respectively, the normal, lognormal, chi-square, and F-distributions of the datagenerating process. Recall that when the null hypothesis is true, we have the size of the test, and when the null is false, we have the power of the test. The results reported in Table 57.1 imply an average size of approximately 5 % for the test. In particular, the first two rows in the table show the results in the case of a normal distribution for the residuals (implying that we are operating under the null hypothesis). For both the level and within cases, the empirical sizes match the target significance level of 5 %. When we move to the case of skewed distributions (lognormal, chi-square, and F), the null hypothesis is not satisfied by design, and the number of rejections delivers the empirical power of the test. In the case when the data is presented in levels and innovations are drawn from a lognormal distribution (see row 2), the test rejects about 47 % of the time the null hypothesis of no skewness. Using within data, the test rejects the null hypothesis 43 % of the time. Not only are these frequencies low, but comparing these results, one can see that the within transformation slightly reduces the power of the test. The results associated with the identification test are more disappointing when we consider other skewed distributions. For example, for the F-distribution, we obtain only 17 % of rejections of the null hypothesis in the level case and only 28 % for the within case. Similarly, poor statistical properties for the model identification test are observed in the chi-square case.
57
Assessing the Performance of Estimators Dealing with Measurement Errors
1587
57.3.3 Bias and Efficiency of the EW, OLS-IV, and AB-GMM Estimators In this section we present simulation results that assess the finite sample performance of the estimators discussed in Sect. 57.2. The simulations compare the estimators in terms of bias and efficiency under several distributional assumptions. In the next subsection, we consider the cross-sectional setting, focusing on the properties of the EW estimator. Subsequently, we examine the panel case in detail, comparing the performance of the EW, OLS-IV, and AB-GMM estimators in terms of bias and efficiency.
57.3.3.1 The Cross-Sectional case We generate data using a simple model as in Eqs. 57.18, and 57.19 with T ¼ 1, such that there are no fixed effects, no autocorrelation(f ¼ 0), and no heteroscedasticity (r ¼ 0). The other parameters are (b, a1, a2, a3) ¼ (1, 1, 1, 1). Table 57.2 shows the results for bias and RMSE for four different distributions: lognormal, chi-square, F-distribution, and standard normal. For each distribution we estimate the model using three different EW estimators: EW-GMM3, EW-GMM4, and EW-GMM5. These estimators are based on the respective third, fourth, and fifth moment conditions. By combining the estimation of 4 parameters, under 4 different distributions, for all 3 EW estimators – a total of 48 estimates – we aim at establishing robust conclusions about the bias and efficiency of the EW approach. Panel A of Table 57.2 presents the results for bias and RMSE when we use the lognormal distribution to generate innovations (2i ei) that produce wi and zi. Under this particular scenario, point estimates are approximately unbiased, and the small RMSEs indicate that coefficients are relatively efficiently estimated. Panels B and C of Table 57.2 present the results for the chi-square and F-distribution, respectively. The experiments show that coefficient estimates produced by the EW approach are generally very biased. For example, Panel B shows that the b coefficient returned for the EW-GMM4 and EW-GMM5 estimators is biased downwards by approximately 35 %. Panel C shows that for EW-GMM3, the b coefficient is biased upwards about 35 %. Paradoxically, for EW-GMM4 and EW-GMM5, the coefficients are biased downwards by approximately 25 %. The coefficients returned for the perfectly measured regressors are also noticeably biased. And they, too, switch bias signs in several cases. Panels B and C show that the EW RMSEs are very high. Notably, the RMSE for EW-GMM4 under the chi-square distribution is 12.23, and under F-distribution, it is 90.91. These RMSE results highlight the lack of efficiency of the EW estimator. Finally, Panel D presents the results for the normal distribution case, which has zero skewness. In this case, the EW estimates are severely biased and the RMSEs are extremely high. The estimated coefficient for the mismeasured variable using EW-GMM3 has a bias of 1.91 (about three times larger than its true value) and an RMSE of 2305. These results reveal that the EW estimators only have acceptable performance in the case of very strong skewness (lognormal distribution). They relate to the last section in highlighting the poor identification of the EW framework, even in the
1588
H. Almeida et al.
Table 57.2 The EW estimator: cross-sectional data b Panel A. Lognormal distribution EW-GMM3 Bias 0.0203 RMSE 0.1746 EW-GMM4 Bias 0.0130 RMSE 0.2975 EW-GMM5 Bias 0.0048 RMSE 0.0968 Panel B. Chi-square distribution EW-GMM3 Bias 0.0101 RMSE 61.9083 EW-GMM4 Bias 0.3498 RMSE 12.2386 EW-GMM5 Bias 0.3469 RMSE 7.2121 Panel C. F-distribution EW-GMM3 Bias 0.3663 RMSE 190.9102 EW-GMM4 Bias 0.2426 RMSE 90.9125 EW-GMM5 Bias 0.2476 RMSE 210.4784 Panel D. Normal distribution EW-GMM3 Bias 1.9179 RMSE 2305.0309 EW-GMM4 Bias 1.0743 RMSE 425.5931 EW-GMM5 Bias 3.1066 RMSE 239.0734
a1
a2
a3
0.0054 0.0705 0.0034 0.1056 0.0013 0.0572
0.0050 0.0704 0.0033 0.1047 0.0013 0.0571
0.0056 0.0706 0.0040 0.1083 0.0019 0.0571
0.0092 16.9275 0.0938 3.2536 0.0854 1.8329
0.0101 16.1725 0.0884 2.9732 0.0929 1.8720
0.0060 14.7948 0.0831 3.1077 0.0767 1.6577
0.1058 53.5677 0.0580 24.9612 0.0709 53.5152
0.0938 52.4094 0.0649 24.6827 0.0643 55.8090
0.0868 43.3217 0.0616 21.1106 0.0632 52.4596
0.6397 596.1859 0.3012 111.8306 1.0649 60.3093
0.5073 608.2098 0.2543 116.2705 0.9050 65.5883
0.3512 542.2125 0.2640 101.4492 0.5483 58.3686
This table shows the bias and the RMSE associated with the estimation of the model in Eqs. 57.17, 57.18, 57.19, 57.20, and 57.21 using the EW estimator in simulated cross-sectional data. b is the coefficient on the mismeasured regressor, and a1 to a3 are the coefficients on the perfectly measured regressors. The table shows the results associated with GMM3, GMM4, and GMM5 for all the alternative distributions. These estimators are based on the respective third, fourth, and fifth moment conditions
most basic cross-sectional setup. Crucially, for the other skewed distributions we study, the EW estimator is significantly biased for both the mismeasured and the well-measured variables. In addition, the RMSEs are quite high, indicating low efficiency.
57.3.3.2 The Panel Case We argue that a major drawback of the EW estimator is its limited ability to handle individual heterogeneity – fixed effects and error heteroscedasticity – in panel data. This section compares the impact of individual heterogeneity on the EW, OLS-IV,
57
Assessing the Performance of Estimators Dealing with Measurement Errors
1589
and AB-GMM estimators in a panel setting. In the first round of experiments, we assume error homoscedasticity by setting the parameter r in Eq. 57.18 equal to zero. We shall later allow for changes in this parameter. Although the EW estimations are performed on a period-by-period basis, one generally wants a single coefficient for each of the variables in an empirical model. To combine the various (time-specific) estimates, EW suggest the minimum distance estimator (MDE) described below. Accordingly, the results presented in this section are for the MDE that combines the estimates obtained from each of the ten time periods considered. For example, EW-GMM3 is the MDE that combines the ten different cross-sectional EW-GMM3 estimates in our panel. The OLS-IV is computed after differencing the model and using the second lag of the observed mismeasured variable x, xt2, as an instrument for Dxt. The AB-GMM estimates (Arellano and Bond 1991) use all the orthogonality conditions, with all available lags of x’s as instrumental variables. We also concatenate the well-measured variables z’s in the instruments’ matrix. The AB-GMM estimator is also computed after differencing Eq. 57.18. To highlight the gains of these various estimators vis-a`-vis the standard (biased) OLS estimator, we also report the results of simulations for OLS models using equation in first difference without instruments. We first estimate the model using data in level form. While the true model contains fixed effects (and thus it is appropriate to use the within transformation), it is interesting to see what happens in this case since most applications of the EW estimator use data in level form, and as shown previously, the EW identification test performs slightly better using data in this form. The results are presented in Table 57.3. The table makes it clear that the EW method delivers remarkably biased results when ignoring the presence of fixed effects. Panel A of Table 57.3 reports the results for the model estimated with the data under strong skewness (lognormal). In this case, the coefficients for the mismeasured regressor are very biased, with biases well in excess of 100 % of the true coefficient for the EW-GMM3, EW-GMM4, and EW-GMM5 estimators. The biases for the well-measured regressors are also very strong, all exceeding 200 % of the estimates’ true value. Panels B and C report results for models under chi-square and F-distributions, respectively. The EW method continues to deliver very biased results for all of the estimates considered. For example, the EW-GMM3 estimates that are returned for the mismeasured regressors are biased downwardly by about 100 % of their true values – those regressors are deemed irrelevant when they are not. Estimates for the well-measured regressors are positively biased by approximately 200 % – they are inflated by a factor of 3. The RMSEs reported in Panels A, B, and C show that the EW methodology produces very inefficient estimates even when one assumes pronounced skewness in the data. Finally, Panel D reports the results for the normal distribution. For the non-skewed data case, the EW framework can produce estimates for the mismeasured regressor that are downwardly biased by about 90 % of their true parameter values for all models. At the same time, that estimator induces an upward bias of larger than 200 % for the well-measured regressors.
1590
H. Almeida et al.
Table 57.3 The EW estimator: panel data in levels Panel A. Lognormal distribution EW-GMM3 Bias RMSE EW-GMM4 Bias RMSE EW-GMM5 Bias RMSE Panel B. Chi-square distribution EW-GMM3 Bias RMSE EW-GMM4 Bias RMSE EW-GMM5 Bias RMSE Panel C. F-distribution EW-GMM3 Bias RMSE EW-GMM4 Bias RMSE EW-GMM5 Bias RMSE Panel D. Normal distribution EW-GMM3 Bias RMSE EW-GMM4 Bias RMSE EW-GMM5 Bias RMSE
b
a1
a2
a3
1.6450 1.9144 1.5329 1.9726 1.3274 1.6139
2.5148 2.5606 2.5845 2.6353 2.5468 2.5944
2.5247 2.5711 2.5920 2.6443 2.5568 2.6062
2.5172 2.5640 2.5826 2.6354 2.5490 2.5994
1.0051 1.1609 0.9836 1.0540 0.9560 1.0536
2.2796 2.2887 2.2754 2.2817 2.2661 2.2728
2.2753 2.2841 2.2714 2.2776 2.2613 2.2679
2.2778 2.2866 2.2736 2.2797 2.2653 2.2719
0.9926 1.1610 0.9633 1.0365 0.9184 2.0598
2.2794 2.2890 2.2735 2.2801 2.2670 2.2742
2.2808 2.2904 2.2768 2.2836 2.2687 2.2761
2.2777 2.2870 2.2720 2.2785 2.2654 2.2725
0.8144 0.9779 0.9078 0.9863 0.8773 0.9846
2.2292 2.2363 2.2392 2.2442 2.2262 2.2316
2.228 2.2354 2.2363 2.2413 2.2225 2.2279
2.2262 2.2332 2.2351 2.2400 2.2217 2.2269
This table shows the bias and the RMSE associated with the estimation of the model in Eqs. 57.17, 57.18, 57.19, 57.20, and 57.21 using the EW estimator in simulated panel data. The table reports results from data in levels (i.e., without applying the within transformation). b is the coefficient on the mismeasured regressor, and a1, a2, a3 are the coefficients on the perfectly measured regressors. The table shows the results for the EW estimator associated with EW-GMM3, EW-GMM4, and EW-GMM5 for all the alternative distributions. These estimators are based on the respective third, fourth, and fifth moment conditions
Table 57.4 reports results for the case in which we apply the within transformation to the data. Here, we introduce the OLS, OLS-IV, and AB-GMM estimators. We first present the results associated with the set up that is most favorable for the EW estimations, which is the lognormal case in Panel A. The EW estimates for the lognormal case are relatively unbiased for the well-measured regressors (between 4 % and 7 % deviation from true parameter values). The same applies for the mismeasured regressors. Regarding the OLS-IV, Panel A shows that coefficient estimates are unbiased in all models considered. AB-GMM estimates are also approximately
57
Assessing the Performance of Estimators Dealing with Measurement Errors
1591
Table 57.4 OLS, OLS-IV, AB-GMM, and EW estimators: panel data after within transformation Panel A. Lognormal distribution OLS Bias RMSE OLS-IV Bias RMSE AB-GMM Bias RMSE EW-GMM3 Bias RMSE EW-GMM4 Bias RMSE EW-GMM5 Bias RMSE Panel B. Chi-square distribution OLS Bias RMSE OLS-IV Bias RMSE AB-GMM Bias RMSE EW-GMM3 Bias RMSE EW-GMM4 Bias RMSE EW-GMM5 Bias RMSE Panel C. F-distribution OLS Bias RMSE OLS-IV Bias RMSE AB-GMM Bias RMSE EW-GMM3 Bias RMSE EW-GMM3 Bias RMSE EW-GMM3 Bias RMSE
b
a1
a2
a3
0.7126 0.7131 0.0065 0.1179 0.0248 0.0983 0.0459 0.0901 0.0553 0.1405 0.0749 0.1823
0.1553 0.1565 0.0019 0.0358 0.0080 0.0344 0.0185 0.0336 0.0182 0.0320 0.0161 0.0303
0.1558 0.1570 0.0014 0.0357 0.0085 0.0344 0.0184 0.0335 0.0182 0.0321 0.0161 0.0297
0.1556 0.1568 0.0015 0.0355 0.0081 0.0340 0.0183 0.0335 0.0183 0.0319 0.0161 0.0297
0.7126 0.7132 0.0064 0.1149 0.0231 0.0976 0.3811 0.4421 0.3887 0.4834 0.4126 0.5093
0.1555 0.1565 0.0011 0.0348 0.0083 0.0339 0.0982 0.1133 0.0788 0.0927 0.0799 0.0926
0.1553 0.1563 0.0017 0.0348 0.0077 0.0338 0.0987 0.1136 0.0786 0.0923 0.0795 0.0921
0.1556 0.1567 0.001 0.0348 0.0081 0.0342 0.0982 0.1133 0.0783 0.0919 0.0798 0.0923
0.7123 0.7127 0.0066 0.1212 0.0232 0.0984 0.3537 0.4239 0.3906 0.4891 0.4188 0.5098
0.1554 0.1565 0.0013 0.0359 0.0079 0.0343 0.0928 0.1094 0.0802 0.0939 0.0818 0.0939
0.1549 0.1559 0.0023 0.0362 0.0072 0.0342 0.0916 0.1086 0.0790 0.0930 0.0808 0.0932
0.1555 0.1566 0.001 0.0361 0.0085 0.0344 0.0917 0.1095 0.0791 0.0932 0.0813 0.0935 (continued)
1592
H. Almeida et al.
Table 57.4 (continued) Panel D. Normal distribution OLS Bias RMSE OLS-IV Bias RMSE AB-GMM Bias RMSE EW-GMM3 Bias RMSE EW-GMM4 Bias RMSE EW-GMM5 Bias RMSE
b
a1
a2
a3
0.7119 0.7122 0.0060 0.1181 0.0252 0.0983 0.7370 0.7798 0.8638 0.8847 0.8161 0.8506
0.1553 0.1563 0.0011 0.0353 0.0086 0.0344 0.1903 0.2020 0.2141 0.2184 0.1959 0.2021
0.1554 0.1564 0.0012 0.0355 0.0085 0.0339 0.1904 0.2024 0.2137 0.218 0.1955 0.2018
0.1551 0.1562 0.0014 0.0358 0.0084 0.0343 0.1895 0.2017 0.2137 0.2182 0.1955 0.2017
This table shows the bias and the RMSE associated with the estimation of the model in Eqs. 57.17, 57.18, 57.19, 57.20, 57.21 using the OLS, OLS-IV, AB-GMM, and EW estimators in simulated panel data. The table reports results from the estimators on the data after applying the within transformation. b is the coefficient on the mismeasured regressor, and a1, a2, a3 are the coefficients on the perfectly measured regressors. The table shows the results for the EW estimator associated with EW-GMM3, EW-GMM4, and EW-GMM5 for all the alternative distributions. These estimators are based on the respective third, fourth, and fifth moment conditions
unbiased, while standard OLS estimates are very biased. In terms of efficiency, the RMSEs of the EW-GMM3 are somewhat smaller than those of the OLS-IV and AB-GMM for the well-measured and mismeasured regressors. However, for the mismeasured regressor, both OLS-IV and AB-GMM have smaller RMSEs than EW-GMM4 and EW-GMM5. Panel B of Table 57.4 presents the results for the chi-square distribution. One can see that the EW yields markedly biased estimates in this case. The bias in the mismeasured regressor is approximately 38 % (downwards), and the coefficients for the well-measured variable are also biased (upwards). In contrast, the OLS-IV and AB-GMM estimates for both well-measured and mismeasured regressors are approximately unbiased. In terms of efficiency, as expected, the AB-GMM presents slightly smaller RMSEs than the OLS-IV estimator. These IV estimators’ RMSEs are much smaller than those associated with the EW estimators. Panels C and D of Table 57.4 show the results for the F and standard normal distributions, respectively. The results for the F-distribution in Panel C are essentially similar to those in Panel B: the instrumental variable estimators are approximately unbiased while the EW estimators are very biased. Finally, Panel D shows that deviations from a strongly skewed distribution are very costly in terms of bias for the EW estimator, since the bias for the mismeasured regressor is larger than 70 %, while for the well measured, it is around 20 %. A comparison of RMSEs shows that the IV estimators are more efficient in both the F and normal cases. In all, our simulations show that standard IV methods almost universally dominate the EW estimator in terms of bias and efficiency.
57
Assessing the Performance of Estimators Dealing with Measurement Errors
1593
We reiterate that the bias and RMSE of the IV estimators in Table 57.4 are all relatively invariant to the distributional assumptions, while the EW estimators are all very sensitive to those assumptions. In short, this happens because the EW relies on the high-order moment conditions as opposed to the OLS and IV estimators.
57.3.4 Heteroscedasticity One way in which individual heterogeneity may manifest itself in the data is via error heteroscedasticity. Up to this point, we have disregarded the case in which the data has a heteroscedastic error structure. However, most empirical applications in corporate finance entail the use of data for which heteroscedasticity might be relevant. It is important that we examine how the EW and the IV estimators are affected by heteroscedasticity.19 The presence of heteroscedasticity introduces heterogeneity in the model and consequently in the distribution of the partialled out dependent variable. This compromises identification in the EW framework. Since the EW estimator is based on equations giving the moments of (yi zimy) and (yi zimw) as functions of b and moments of (ui, ei, i), the heteroscedasticity associated with the fixed effects (ai) or with the perfectly measured regressor (zit) distorts the required moment conditions associated with (yi zimy), yielding biased estimates. These inaccurate estimates enter the minimum distance estimator equation and consequently produce incorrect weights for each estimate along the time dimension. As our simulations of this section demonstrate, this leads to biased MDE estimates, where the bias is a function of the amount of heteroscedasticity. We examine the biases imputed by heteroscedasticity by way of graphical analysis. The graphs we present below are useful in that they synthesize the outputs of numerous tables and provide a fuller visualization of the contrasts we draw between the EW and OLS-IV estimators. The graphs depict the sensitivity of those two estimators with respect to heteroscedasticity as we perturb the coefficient p in Eq. 57.18. In our simulations, we alternatively set wit ¼ gi or wit ¼ zit. In the first case, heteroscedasticity is associated with the individual effects. In the second, heteroscedasticity is associated with the well-measured regressor. Each of our figures describes the biases associated with the mismeasured and the well-measured regressors for each of the OLS-IV, EW-GMM3, EW-GMM4, and EW-GMM5 estimators.20 In order to narrow our discussion, we only present results for the highly skewed distribution case (lognormal distribution) and for data that is treated for fixed effects using the within transformation. As Sect. 57.3.3 shows, this is the only case in which the EW estimator returns relatively unbiased estimators for the parameters of interest. In all the other cases (data in levels and for data generated by
19 We focus on the OLS-IV estimator hereinafter for the purpose of comparison with the EW estimator.
20 Since estimation biases have the same features across all well-measured regressors of a model, we restrict attention to the first well-measured regressor of each of the estimated models.
chi-square, F, and normal distributions), the estimates are strongly biased even under the assumption of homoscedasticity.21 Figure 57.2 presents the simulation results under the assumption that w_it = g_i as we vary the amount of heteroscedasticity by changing the parameter ρ.22 The results for the mismeasured coefficient show that biases in the EW estimators are generally small for ρ equal to zero (this is the result reported in Sect. 57.3.3). However, as this coefficient increases, the bias quickly becomes large. For example, for ρ = 0.45, the biases in the coefficient of the mismeasured variable are, respectively, 11 %, 20 %, and 43 % for the EW-GMM3, EW-GMM4, and EW-GMM5 estimators. Notably, those biases, which are initially negative, turn positive for moderate values of ρ. As heteroscedasticity increases, some of the biases diverge to positive infinity. The variance of the biases of the EW estimators is also large. The results regarding the well-measured variables under the EW estimators are analogous to those for the mismeasured one. Biases are substantial even for small amounts of heteroscedasticity, they switch signs for some level of heteroscedasticity, and their variances are large. In sharp contrast, the same simulation exercises show that the OLS-IV estimates are approximately unbiased even under heteroscedasticity. While the EW estimator may potentially allow for some forms of heteroscedasticity, it is clear that it is not well equipped to deal with this problem in more general settings.
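As a concrete illustration of the kind of data-generating process described in this subsection, the following sketch builds a skewed panel with firm effects, an AR(1) latent regressor, classical measurement error, and a disturbance whose variance scales with the firm effect. The exact specification of Eq. 57.18 is not reproduced in this excerpt, so the multiplicative scaling exp(ρ·g_i), the lognormal building blocks, and all parameter values are illustrative assumptions rather than the chapter's design.

```python
import numpy as np

def simulate_het_panel(n=1000, t=6, beta=1.0, rho=0.5, phi=0.6, seed=0):
    """Skewed panel with fixed effects and error variance that rises with the firm effect."""
    rng = np.random.default_rng(seed)
    g = rng.lognormal(size=n)                       # firm effects g_i (skewed)
    w = np.empty((n, t))                            # latent regressor, AR(1) with skewed shocks
    w[:, 0] = g + rng.lognormal(size=n)
    for s in range(1, t):
        w[:, s] = phi * w[:, s - 1] + rng.lognormal(size=n)
    scale = np.exp(rho * (g - g.mean()))[:, None]   # heteroscedasticity driven by g_i; rho = 0 is homoscedastic
    u = scale * rng.standard_normal((n, t))         # disturbance
    e = rng.lognormal(size=(n, t)) - np.exp(0.5)    # measurement error, centered lognormal
    y = g[:, None] + beta * w + u                   # outcome with fixed effect g_i
    x = w + e                                       # observed mismeasured regressor
    return y, x, w, g

y, x, w, g = simulate_het_panel()
```

Setting rho = 0 in this sketch recovers a homoscedastic version of the design, so one can trace how any estimator's bias moves as rho increases, in the spirit of Fig. 57.2.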
57.3.5 Identification of the EW Estimator in Panel Data Our Monte Carlo experiments show that the EW estimator has a poor handle on individual fixed effects and that biases arise for deviations from the assumption of strict lognormality. Biases in the EW framework are further magnified if one allows for heteroscedasticity in the data (even under lognormality). The biases arising from the EW framework are hard to measure and sign, ultimately implying that it can be very difficult to replicate the results one obtains under that framework. To better understand these results, we now discuss in more mathematical detail the identification of the EW estimator in the panel data case, both for the model in levels and for the model after the within transformation. Extending the EW estimator to panel data seems to be a nontrivial task. EW propose to break the problem up by time period, estimate a cross-sectional model for each t, and then combine the estimates using a minimum distance estimator. In what follows we show that this procedure might affect the identification condition. Consider the following model:

y_it = a_i + b w_it + u_it,   i = 1, ..., N; t = 1, ..., T,   (57.23)

21 Our simulation results (available upon request) suggest that introducing heteroscedasticity makes the performance of the EW estimator even worse in these cases.
22 The results for w_it = z_it are quite similar to those we get from setting w_it = g_i. We report only one set of graphs to save space.
Fig. 57.2 (a) Mismeasured variable: bias as a function of ρ. (b) Perfectly measured variable: bias as a function of ρ
where u_it is independent and identically distributed, with mean zero and variance σ_u^2. Assume that Var(w_it) = σ_w^2. The independent variable and the unobserved effects are exogenous, that is, Cov(w_it, u_is) = Cov(a_i, u_it) = 0 for any t and s. However, Cov(a_i, w_it) ≠ 0. Now, assume that we do not observe the true variable w_it, but rather a mismeasured variable; that is, we observe the following variable with an error:

x_it = w_it + e_it,   i = 1, ..., N; t = 1, ..., T,   (57.24)
where Cov(w_it, e_is) = Cov(a_i, e_is) = Cov(u_it, e_is) = 0, Var(e_it) = σ_e^2, and Cov(e_it, e_i,t-1) = γ σ_e^2. In addition, assume here that there is no variable z_it (a = 0 in Eq. 57.18) to simplify the argument.
57.3.5.1 Model in Level As mentioned before, EW propose to fix a particular time period and estimate the model using the cross-sectional data. Without loss of generality, fix t = 1. Thus, Eqs. 57.23 and 57.24 become

y_i1 = a_i + b w_i1 + u_i1,   i = 1, ..., N,   (57.25)

and

x_i1 = w_i1 + e_i1,   i = 1, ..., N.   (57.26)

However, the unobserved individual-specific intercepts, a_i, are still present in Eq. 57.25 and, in addition, Cov(a_i, w_i1) ≠ 0. Therefore, one can see that it is impossible to estimate b consistently, since the a_i's are unobserved. This argument extends easily to every t = 1, ..., T. Thus, the estimator for each fixed t is inconsistent, and consequently the minimum distance estimator is inconsistent by construction. Therefore, we conclude that the EW minimum distance estimator produces inconsistent estimates for panel data models with fixed effects.
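A small numerical check of this point, under illustrative assumptions: a Geary-type third-moment estimator stands in for the EW cross-sectional estimator, and the fixed effect is built as a linear function of w_i so that Cov(a_i, w_i) ≠ 0. Because a_i is not removed, the cross-sectional estimator recovers a slope contaminated by the fixed effect rather than b.

```python
import numpy as np

rng = np.random.default_rng(1)
n, b = 200_000, 1.0
w = rng.lognormal(size=n)                 # latent regressor, skewed
a = 0.5 * w + rng.standard_normal(n)      # fixed effect with Cov(a_i, w_i) != 0
u = rng.standard_normal(n)
e = rng.standard_normal(n)
y = a + b * w + u                         # Eq. 57.25 for one cross section
x = w + e                                 # Eq. 57.26

yt, xt = y - y.mean(), x - x.mean()
b_hat = np.mean(yt**2 * xt) / np.mean(yt * xt**2)   # Geary-type third-moment estimator
print(b_hat)   # settles near 1.5 here, not b = 1, because a_i is absorbed into the slope
```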
57.3.5.2 Model After Within Transformation Given the inconsistency of the model in levels presented in the last section, one strategy is to first transform the data to eliminate the fixed effects. One suggestion is to apply the within transformation to the data before estimation. In order to analyze the model after the transformation, let us assume that T = 2 for simplicity. Using the within transformation in Eqs. 57.23 and 57.24, we obtain

y_it - ȳ_i = b(w_it - w̄_i) + (u_it - ū_i),

and

x_it - x̄_i = (w_it - w̄_i) + (e_it - ē_i),

where ȳ_i = (1/2)(y_i1 + y_i2), w̄_i = (1/2)(w_i1 + w_i2), and so on. Now, again, EW propose to use a particular time period and estimate the model using the cross-sectional data. Let us use t = 1 for ease of exposition. The model can be written as

y_i1 - ȳ_i = b(w_i1 - w̄_i) + (u_i1 - ū_i)
and

x_i1 - x̄_i = (w_i1 - w̄_i) + (e_i1 - ē_i).

Now, substituting the definition of the deviations and rearranging, we have

y_i1 - (1/2)(y_i1 + y_i2) = b[w_i1 - (1/2)(w_i1 + w_i2)] + [u_i1 - (1/2)(u_i1 + u_i2)],
y_i1 + y_i2 = b(w_i1 + w_i2) + (u_i1 + u_i2),

and

x_i1 - (1/2)(x_i1 + x_i2) = [w_i1 - (1/2)(w_i1 + w_i2)] + [e_i1 - (1/2)(e_i1 + e_i2)],
x_i1 + x_i2 = (w_i1 + w_i2) + (e_i1 + e_i2).

Finally, our model can be described as

y_i1 + y_i2 = b(w_i1 + w_i2) + (u_i1 + u_i2),

and

x_i1 + x_i2 = (w_i1 + w_i2) + (e_i1 + e_i2).

Let us now define Y_i = y_i1 + y_i2, X_i = x_i1 + x_i2, U_i = u_i1 + u_i2, v_i = w_i1 + w_i2, and E_i = e_i1 + e_i2. The model can then be rewritten as

Y_i = b v_i + U_i

and

X_i = v_i + E_i.

Notice that the requirements for identification are now on the high-order moments of (v, U, E). However, note that v_i = w_i1 + w_i2 is a combination of two random variables and, as is well known from the econometrics literature, the convolution of random variables is in general a nontrivial object. One example of why the identification condition may worsen considerably is the following. Consider a model where w_i1 and w_i2 are independent chi-square random variables with 2 degrees of freedom. The skewness of a chi-square with k degrees of freedom is √(8/k), and the sum of two independent chi-squares with k degrees of freedom is a chi-square with 2k degrees of freedom.
Therefore, the skewness of v_i = w_i1 + w_i2 drops from 2, for the model using a single w_it, to √(8/4) ≈ 1.41 for the model using the combination of w_i1 and w_i2. From this simple analysis, one can conclude that the identification conditions required for the EW estimator can deteriorate considerably when the within transformation is used to eliminate the fixed effects in panel data. Thus, the conditions required to achieve unbiased estimates with EW are very strong.
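The skewness numbers quoted above are easy to verify. A minimal check with scipy, using the same k = 2 degrees of freedom as in the example (the sample size and seed are arbitrary):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
k = 2
print(stats.chi2(k).stats(moments="s"))       # theoretical skewness sqrt(8/2) = 2
print(stats.chi2(2 * k).stats(moments="s"))   # skewness of the sum, sqrt(8/4) ~ 1.41

w1 = rng.chisquare(k, size=1_000_000)
w2 = rng.chisquare(k, size=1_000_000)
print(stats.skew(w1 + w2))                    # simulated skewness of w_i1 + w_i2, close to 1.41
```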
57.3.6 Revisiting the OLS-IV Assumptions Our Monte Carlo simulations show that the OLS-IV estimator is consistent even when one allows for autocorrelation in the measurement-error structure. We have assumed, however, some structure on the processes governing the innovations. In this section, we examine the sensitivity of the OLS-IV results with respect to our assumptions about measurement-error correlation and about the amount of autocorrelation in the latent regressor. These assumptions can affect the quality of the instruments and therefore should be examined in some detail. We first examine conditions regarding the correlation of the measurement errors and disturbances. The assumption of time-invariant autocorrelation for measurement errors and disturbances implies that past shocks to measurement errors do not affect the current level of the measurement error. One way to relax this assumption is to allow the measurement-error process to have a moving average structure. This structure satisfies Biorn's assumptions (B1) and (B2). In this case, Proposition 1 in Biorn (2000) shows that for an MA(τ) measurement error, the instruments should be dated no later than t - τ - 2. Intuitively, the set of instruments must be "older" than the memory of the measurement-error process. For example, if the measurement error is MA(1), then one must use third- and longer-lagged instruments to identify the model. To analyze this case, we conduct Monte Carlo simulations in which we replace the time-invariant assumption for the innovations u_it and v_it with an MA(1) structure for the measurement-error process. The degree of correlation in the MA process is set to θ = 0.4. Thus, the innovations in Eqs. 57.18 and 57.19 have the following structure:

u_it = u1_it - θ u1_i,t-1 and v_it = v1_it - θ v1_i,t-1,

with |θ| < 1, where u1_it and v1_it are i.i.d. lognormal. The other parameters in the simulation remain the same. The results are presented in Table 57.5. Using MA(1) innovations and the third lag of the mismeasured variable as an instrument (either on its own or in combination with the fourth lag), the bias of the OLS-IV estimator is very small (approximately 2–3 %). The bias increases somewhat when we use only the fourth lag. While the fourth lag is an admissible instrument in this case, using longer lags decreases the implied autocorrelation in the latent regressor [which follows an
AR(1) process by Eq. 57.20]. This effect somewhat decreases the quality of the instruments. Notice also that when we do not eliminate short lags from the instrument set, identification fails. For example, the bias is 60 % when we use the second lag as an instrument. These results thus underscore the importance of using long enough lags in this MA case. Table 57.5 also reports results based on an MA(2) structure. The results are qualitatively identical to those shown in the MA(1) case. Once again, the important condition for identification is to use long enough lags (no less than four lags in this case).23 The second condition underlying the use of the OLS-IV is that the latent regressor is not time invariant. Accordingly, the degree of autocorrelation in the process for the latent regressor is an important element of the identification strategy. We assess the sensitivity of the OLS-IV results to this condition by varying the degree of autocorrelation through the autoregressive coefficient in the AR(1) process for the latent regressor. In these simulations, we use a time-invariant autocorrelation condition for the measurement error, but the results are very similar for the MA case. Figure 57.3 shows the results for the bias in the coefficients of interest for the well-measured and mismeasured variables, using the second lag of the mismeasured variable as an instrument. The results show that the OLS-IV estimator performs well for a large range of the autoregressive coefficient. However, as expected, when the φ coefficient is very close to zero or one, we have evidence of a weak instrument problem. For example, when φ = 1, Δw_it is uncorrelated with any variable dated at time t - 2 or earlier. These simulations show that, provided one uses adequately lagged instruments, the exact amount of autocorrelation in the latent variable is not a critical aspect of the estimation. The simulations of this section show how the performance of the OLS-IV estimator is affected by changes in the assumptions concerning measurement errors and latent regressors. In practical applications, it is important to verify whether the results obtained with OLS-IV estimators are robust to the elimination of short lags from the instrumental set. This robustness check is particularly important given that the researcher will be unable to pin down the process followed by the measurement error. Our empirical application below incorporates this suggestion. In addition, identification relies on some degree of autocorrelation in the process for the latent regressor. While this condition cannot be directly verified, we can perform standard tests of instrument adequacy that rely on "first-stage" test statistics calculated from the processes for the observable variables in the model. Another important assumption underlying the OLS-IV is non-autocorrelation in both u_it and v_it. For example, these innovations cannot follow an autoregressive process.
23 We note that if the instrument set uses suitably long lags, then the OLS-IV results are robust to variations in the degree of correlation in the MA process. In unreported simulations under MA(1), we show that the OLS-IV bias is nearly invariant to the parameter θ.
Table 57.5 Moving average structures for the measurement-error process

Instrument                      MA(1)           MA(2)
X_it-2                          0.593 (0.60)    0.368 (0.38)
X_it-3                          0.028 (0.30)    0.707 (0.71)
X_it-3, X_it-4                  0.025 (0.63)    0.077 (1.01)
X_it-3, X_it-4, X_it-5          0.011 (0.30)    0.759 (0.76)
X_it-4                          0.107 (1.62)    0.144 (2.01)
X_it-4, X_it-5                  0.113 (0.58)    0.140 (0.59)
X_it-5                          0.076 (0.31)    0.758 (0.76)

This table shows the bias in the well-measured coefficient for the OLS-IV using a moving average structure for the measurement-error process. Numbers in parentheses are the RMSE
When this is the case, the IV strategy of using lags of the mismeasured variable as instruments is invalid (see Biorn 2000).
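The following sketch illustrates the lag rule discussed in this subsection on simulated data. It is a stripped-down version of the design above (no z_it, only the measurement error follows an MA(1), and the differenced equation is estimated by simple just-identified IV), so the parameter values and the exact DGP are illustrative assumptions; the point is only that an instrument dated t - 2 fails under MA(1) measurement error while one dated t - 3 does not.

```python
import numpy as np

rng = np.random.default_rng(2)
n, t, beta, phi, theta = 5000, 8, 1.0, 0.6, 0.4

a = rng.lognormal(size=(n, 1))                     # firm effects (removed by differencing)
w = np.zeros((n, t))                               # latent regressor, AR(1)
w[:, 0] = rng.lognormal(size=n)
for s in range(1, t):
    w[:, s] = phi * w[:, s - 1] + rng.lognormal(size=n)
eps = rng.standard_normal((n, t + 1))
e = eps[:, 1:] - theta * eps[:, :-1]               # MA(1) measurement error
u = rng.standard_normal((n, t))
y = a + beta * w + u
x = w + e

dy = y[:, 4:] - y[:, 3:-1]                         # first differences, periods 4..7
dx = x[:, 4:] - x[:, 3:-1]

def iv_slope(z, d_x, d_y):
    """Just-identified IV slope with an intercept: Cov(z, dy) / Cov(z, dx)."""
    z, d_x, d_y = z.ravel(), d_x.ravel(), d_y.ravel()
    z = z - z.mean()
    return np.sum(z * (d_y - d_y.mean())) / np.sum(z * (d_x - d_x.mean()))

print(iv_slope(x[:, 2:-2], dx, dy))   # instrument dated t-2: biased under MA(1)
print(iv_slope(x[:, 1:-3], dx, dy))   # instrument dated t-3: close to beta = 1
```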
57.3.7 Distributional Properties of the EW and OLS-IV Estimators A natural question is whether our simulation results are rooted in a lack of accuracy of the asymptotic approximation of the EW method. Inference in models with mismeasured regressors is based on asymptotic approximations; hence, estimators with poor approximations might lead to incorrect inference procedures. For instance, we might select the wrong critical values for a test under poor asymptotic approximations and make inaccurate statements under such circumstances. In this section, we use the panel data simulation procedure of Sect. 57.3.3 to study and compare the accuracy of the asymptotic approximations of the EW and IV methods. To save space, we restrict our attention to the mismeasured regressor coefficient for the EW-GMM5 and OLS-IV cases. We present results where we draw the data from the lognormal, chi-square, and F-distributions. The EW-GMM5 estimator is computed after the within transformation, and the OLS-IV uses second lags as instruments. One should expect both the IV and EW estimators to have asymptotically normal representations, such that when we normalize the estimator by subtracting the true parameter and dividing by the standard deviation, the resulting quantity behaves asymptotically as a normal distribution. Accordingly, we compute the empirical density and distribution functions of the normalized sample estimators and their normal approximations. These functions are plotted in Fig. 57.4. The true normal density
Fig. 57.3 Bias as a function of φ for the mismeasured variable (top panel) and the perfectly measured variable (bottom panel)
and distribution functions (drawn in red) serve as benchmarks. The graphs in Fig. 57.4 depict the accuracy of the approximation. We calculate the density of the estimators using a simple Gaussian Kernel estimator and also estimate the empirical cumulative distribution function.24 Consider the lognormal distribution (first panel). In that case, the OLS-IV (black line) displays a very precise approximation to the normal curve in terms of both density and distribution. The result for the OLS-IV is robust across all of the distributions considered (lognormal, chi-square, and F). These results are in sharp contrast to those associated with the EW-GMM5 estimator. This estimator presents a poor asymptotic approximation for all distributions examined. For the lognormal case, the density is not quite centered at zero, and its shape does not fit the normal distribution. For the chi-square and F-distributions, Fig. 57.4 shows that the shapes of the density and distribution functions are very unlike the normal case, with the center of the distribution located far away from zero. These results imply that inference procedures using the EW estimator might be asymptotically invalid in simple panel data with fixed effects, even when the relevant distributions present high skewness. 24 The empirical cumulative distribution function Fn is a step function with jumps i/n at observation values, where i is the number of tied observations at that value.
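A sketch of how such a comparison can be produced, assuming one already has an array of replication-level estimates from the Monte Carlo; the array name, the artificial draws in the usage line, and the normalization by the cross-replication standard deviation are placeholders rather than the chapter's exact procedure.

```python
import numpy as np
from scipy import stats

def normal_approximation_check(estimates, beta_true):
    """Normalize Monte Carlo estimates and return their kernel density and ECDF on a grid,
    together with the standard normal benchmarks."""
    z = (estimates - beta_true) / estimates.std(ddof=1)     # normalized estimator
    grid = np.linspace(-4, 4, 201)
    kde = stats.gaussian_kde(z)                              # Gaussian kernel density estimate
    ecdf = np.searchsorted(np.sort(z), grid, side="right") / z.size
    return grid, kde(grid), stats.norm.pdf(grid), ecdf, stats.norm.cdf(grid)

# Example with artificial draws standing in for replication-level estimates:
rng = np.random.default_rng(3)
grid, dens, dens_norm, ecdf, cdf_norm = normal_approximation_check(rng.normal(1.0, 0.05, 1000), 1.0)
```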
Fig. 57.4
57.4 Empirical Application
We apply the EW and OLS-IV estimators to the Fazzari et al. (1988) investment equation. This is the most well-known model in the corporate investment literature, and we use this application as a way to illustrate our Monte Carlo-based results. In the Fazzari, Hubbard, and Petersen model, a firm's investment spending is regressed on a proxy for investment demand (Tobin's q) and the firm's cash flow. Theory suggests that the correct proxy for the firm's investment demand is marginal q, but this quantity is unobservable and researchers use instead its measurable proxy, average q. Because average q measures marginal q imperfectly, a measurement problem naturally arises. Erickson and Whited (2002) use the Fazzari, Hubbard, and Petersen model to motivate the adoption of their estimator in applied work in panel data. A review of the corporate investment literature shows that virtually all empirical work in the area considers panel data models with firm-fixed effects (Kaplan and Zingales 1997; Rauh 2006; Almeida and Campello 2007). From an estimation point of view, there are distinct advantages in exploiting repeated observations from individuals to identify the model (Blundell et al. 1992). In an investment model setting, exploiting firm effects contributes to estimation precision and allows for
model consistency in the presence of unobserved idiosyncrasies that may be simultaneously correlated with investment and q. The baseline model in this literature has the form I it =K it ¼ i þ bq it þ aCFit =K it þ uit ,
(57.27)
where I denotes investment, K capital stock, q* is marginal q, CF cash flow, is the firm-specific effect, and u is the innovation term. As mentioned earlier, if q* is measured with error, OLS estimates of b will be biased downwards. In addition, given that q and cash flow are likely to be positively correlated, the coefficient a is likely to be biased upwards in OLS estimations. In expectation, these biases should be reduced by the use of estimators like the ones discussed in the previous section.
57.4.1 Theoretical Expectations In order to better evaluate the performance of the two alternative estimators, we develop some hypotheses about the effects of measurement-error correction on the estimated coefficients b and a from Eq. 57.27. Theory does not pin down the exact values that these coefficients should take. Nevertheless, one could argue that the two following conditions should be reasonable. First, an estimator that addresses measurement error in q in a standard investment equation should return a higher estimate for b and a lower estimate for a when compared with standard OLS estimates. Recall that measurement error causes an attenuation bias on the estimate for the coefficient b. In addition, since q and cash flow are likely to be positively correlated, measurement error should cause an upward bias on the empirical estimate returned under the standard OLS estimation. Accordingly, if one denotes the OLS and the measurement-error consistent estimates, respectively, by (bOLS, aOLS) and (bMEC, aMEC), one should expect: Condition 1. bOLS < bMEC and aOLS > aMEC. Second, one would expect the coefficients
for q and the cash flow to be nonnegative after treating the data for measurement error. The q-theory of investment predicts a positive correlation between investment and q (e.g., Hayashi 1982). If the theory holds and the estimator does a good job of adjusting for measurement error, then the cash flow coefficient should be zero (“neoclassical view”). However, the cash flow coefficient could be positive either because of the presence of financing frictions (as posited by Fazzari et al. 1988)25 or
25 However, financial constraints are not sufficient to generate a strictly positive cash flow coefficient because the effect of financial constraints is capitalized in stock prices and may thus be captured by variations in q (Chirinko 1993; Gomes 2001).
due to the fact that cash flow picks up variation in investment opportunities even after we apply a correction for mismeasurement in q. Accordingly, one should observe: Condition 2. bMEC ≥ 0 and aMEC ≥ 0. Notice that these conditions are fairly weak.
If a particular measurement-error consistent estimator does not deliver these basic results, one should have reasons to question the usefulness of that estimator in applied work.
57.4.2 Data Description Our data collection process follows that of Almeida and Campello (2007). We consider a sample of manufacturing firms over the 1970–2005 period with data available from Compustat. Following those authors, we eliminate firm years displaying asset or sales growth exceeding 100 %, or for which the stock of fixed capital (the denominator of the investment and cash flow variables) is less than $5 million (in 1976 dollars). Our raw sample consists of 31,278 observations from 3,084 individual firms. Summary statistics for investment, q, and cash flow are presented in Table 57.6. These statistics are similar to those reported by Almeida and Campello, among other papers. To save space we omit the discussion of these descriptive statistics.
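A sketch of these screens in pandas, with placeholder column names and a user-supplied deflator mapping years to a 1976-based price index (capital_stock assumed to be in millions of dollars); the exact Compustat variable definitions follow Almeida and Campello (2007) and are not reproduced here.

```python
import pandas as pd

def apply_sample_filters(df: pd.DataFrame, deflator_1976: dict) -> pd.DataFrame:
    """Drop firm-years with asset or sales growth above 100 % or real capital below $5M (1976 dollars)."""
    keep_growth = (df["asset_growth"] <= 1.0) & (df["sales_growth"] <= 1.0)
    real_capital = df["capital_stock"] / df["year"].map(deflator_1976)   # deflate to 1976 dollars
    return df[keep_growth & (real_capital >= 5.0)]
```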
57.4.3 Testing for the Presence of Fixed Effects and Heteroscedasticity Before estimating our investment models, we conduct a series of tests for the presence of firm-fixed effects and heteroscedasticity in our data. As a general rule, these phenomena might arise naturally in panel data applications and should not be ignored. Importantly, whether they appear in the data can have concrete implications for the results generated by different estimators. We first perform a couple of tests for the presence of firm-fixed effects. We allow for individual firm intercepts in Eq. 57.27 and test the null hypothesis that the coefficients associated with those firm effects are jointly equal to zero (Baltagi 2005). Table 57.7 shows that the F-statistic for this test is 4.4 (the associated p-value is 0.000). Next, we contrast the random effects OLS and the fixed effects OLS estimators to test again for the presence of fixed effects. The Hausman test statistic reported in Table 57.7 rejects the null hypothesis that the random effects model is appropriate with a test statistic of 8.2 (p-value of 0.017). In sum, standard tests strongly reject the hypothesis that fixed effects can be ignored. We test for homoscedasticity using two different panel data-based methods. First, we compute the residuals from the least squares dummy variable estimator and regress the squared residuals on a function of the independent variables [see Frees (2004) for additional details]. We use two different combinations of regressors – (qit, CFit) and (qit, qit², CFit, CFit²) – and both of them robustly reject the null hypothesis of homoscedasticity. We report the results for the
Table 57.6 Descriptive statistics

Variable     Obs.      Mean     Std. dev.   Median    Skewness
Investment   22,556    0.2004   0.1311      0.17423   2.6871
q            22,556    1.4081   0.9331      1.1453    4.5378
Cash flow    22,556    0.3179   0.3252      0.27845   2.2411

This table shows the basic descriptive statistics for q, cash flow, and investment. The data are taken from the annual Compustat industrial files over the 1970–2005 period. See text for details

Table 57.7 Diagnosis tests

Test                                Test statistic   p-value
Pooling test                        4.397            0.0000
Random effects vs. fixed effects    8.17             0.0169
Homoscedasticity 1                  55.19            0.0000
Homoscedasticity 2                  7,396.21         0.0000

This table reports results for specification tests. The Hausman test for fixed effects models considers fixed effects models against the simple pooled OLS and the random effects model. A homoscedasticity test for the innovations is also reported. The data are taken from the annual Compustat industrial files over the 1970–2005 period. See text for details
first combination in Table 57.7, which yields a test statistic of 55.2 (p-value of 0.000). Our second approach for testing the null of homoscedasticity is the standard random effects Breusch-Pagan test. Table 57.7 shows that the Breusch-Pagan test yields a statistic of 7,396.2 (p-value of 0.000). Our tests hence show that the data strongly reject the hypothesis of error homoscedasticity.
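A sketch of the two residual-based checks described in this subsection, using hypothetical column names (firm_id, inv, q, cf) for the panel. The LSDV step and the auxiliary regression follow the logic in the text; the exact Frees (2004) statistic may differ in its scaling, and the Breusch-Pagan variant reported in Table 57.7 may be the random-effects version rather than the plain heteroscedasticity test shown here.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan

def heteroscedasticity_checks(df):
    """LSDV residual-based test and a Breusch-Pagan test for a panel with columns firm_id, inv, q, cf."""
    dummies = pd.get_dummies(df["firm_id"], prefix="f", drop_first=True).astype(float)
    x_lsdv = sm.add_constant(pd.concat([df[["q", "cf"]], dummies], axis=1))
    resid = sm.OLS(df["inv"], x_lsdv).fit().resid          # LSDV residuals

    z = sm.add_constant(df[["q", "cf"]])
    aux = sm.OLS(resid**2, z).fit()                        # squared residuals on (q, CF)
    lm, lm_p, f, f_p = het_breuschpagan(resid, z)          # Breusch-Pagan LM statistic
    return aux.fvalue, aux.f_pvalue, lm, lm_p
```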
57.4.4 Implementing the EW Identification Test Our preliminary tests show that one should control for fixed effects when estimating investment models using real data. In the context of the EW estimator, it is thus appropriate to apply the within transformation before the estimation. However, in this section, we also present results for the data in level form to illustrate the point made in Sect. 57.3.2 that applying the within transformation compromises identification in the EW context. Prior papers adopting the EW estimator have ignored (or simply dismissed) the importance of fixed effects (e.g., Whited 2001, 2006). We present the results for EW's identification test in Table 57.8. Using the data in level form, we reject the hypothesis of no identification in 12 out of 33 years (or 36 % of the years). For data that is transformed to accommodate fixed effects (within transformation), we find that in only 7 of the 33 years between 1973 and 2005 (or 21 %) can one reject the null hypothesis that the model is not identified at the usual 5 % level of significance. These results suggest that the power of the test is low and decreases further after applying the within transformation to the data. These results are consistent with Almeida and Campello's (2007) use of the EW estimator. Working with a 15-year Compustat panel, those authors report that they could only find a maximum of 3 years of data passing the EW identification test.
The results in Table 57.8 reinforce the notion that it is quite difficult to operationalize the EW estimator in real-world applications, particularly in situations in which the within transformation is appropriate due to the presence of fixed effects. We recognize that the EW identification test rejects the model for most of the data at hand. However, recall from Sect. 57.3.2 that the test itself is likely to be misleading (“over-rejecting” the data). In the next section, we take the EW estimator to the data (a standard Compustat sample extract) to illustrate the issues applied researchers face when using that estimator, contrasting it to an easy-to-implement alternative.
57.4.5 Estimation Results We estimate Eq. 57.27 using the EW, OLS-IV, and AB-GMM estimators. For comparison purposes, we also estimate the investment equation using standard OLS and OLS with fixed effects (OLS-FE). The estimates for the standard OLS are likely to be biased, providing a benchmark to evaluate the performance of the other estimators. As discussed in Sect. 57.4.1, we expect estimators that improve upon the problem of mismeasurement to deliver results that satisfy Conditions 1 and 2 above. As is standard in the empirical literature, we use an unbalanced panel in our estimations. Erickson and Whited (2000) propose a minimum distance estimator (MDE) to aggregate the cross-sectional estimates obtained for each sample year, but their proposed MDE is designed for balanced panel data. Following Riddick and Whited (2009), we use a Fama–MacBeth procedure to aggregate the yearly EW estimations.26 To implement our OLS-IV estimators, we first take differences of the model in Eq. 57.27. We then employ the estimator denoted by OLS-IV A from Sect. 57.2.2, using lagged levels of q and cash flow as instruments for (differenced) q_it. Our Monte Carlos suggest that identification in this context may require the use of longer lags of the model variables. Accordingly, we experiment with specifications that use progressively longer lags of q and cash flow to verify the robustness of our results. Table 57.9 reports our findings. The OLS and OLS-FE estimates, reported in columns (1) and (2), respectively, disregard the presence of measurement error in q. The EW-GMM3, EW-GMM4, and EW-GMM5 estimates are reported in columns (3), (4), and (5). For the OLS-IV estimates reported in column (6), we use q_t-2 as an instrument.27 The AB-GMM estimator, reported in column (7), uses lags of q as instruments. Given our data structure, this implies using a total of 465 instruments. We account for firm-fixed effects by transforming the data.
26 Fama–MacBeth estimates are computed as simple averages of the yearly estimates, with Fama–MacBeth standard errors. An alternative approach could use the Hall–Horowitz bootstrap. For completeness, we present the actual yearly EW estimates in the appendix.
27 In the next section, we examine the robustness of the results with respect to variation in the instrument set.
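A sketch of this OLS-IV A implementation on a generic firm-year panel. The column names (firm_id, year, inv, q, cf) are placeholders, the second stage is shown with plain OLS so its standard errors are not the proper 2SLS ones, and a production version would use a dedicated IV routine with firm-clustered errors as in the tables that follow.

```python
import pandas as pd
import statsmodels.api as sm

def ols_iv_a(df):
    """First-difference the investment equation and instrument differenced q with its second lag in levels."""
    g = df.sort_values(["firm_id", "year"]).groupby("firm_id")
    d = pd.DataFrame({
        "d_inv": g["inv"].diff(),
        "d_q": g["q"].diff(),
        "d_cf": g["cf"].diff(),
        "q_l2": g["q"].shift(2),          # instrument: q dated t-2 (level)
    }).dropna()
    # first stage: differenced q on the instrument and the exogenous regressor
    fs = sm.OLS(d["d_q"], sm.add_constant(d[["q_l2", "d_cf"]])).fit()
    d["d_q_hat"] = fs.fittedvalues
    # second stage: illustration only; standard errors here are not 2SLS-corrected
    ss = sm.OLS(d["d_inv"], sm.add_constant(d[["d_q_hat", "d_cf"]])).fit()
    return fs, ss
```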
1984
1983
1982
1981
1980
1979
1978
1977
1976
1975
1974
1973
t-statistic p-value t-statistic p-value t-statistic p-value t-statistic p-value t-statistic p-value t-statistic p-value t-statistic p-value t-statistic p-value t-statistic p-value t-statistic p-value t-statistic p-value t-statistic p-value
1.961 0.375 5.052 0.08 1.335 0.513 7.161 0.028 1.968 0.374 9.884 0.007 9.065 0.011 9.769 0.008 10.174 0.006 3.304 0.192 5.724 0.057 15.645 0 1
0
0
1
1
1
1
0
1
0
0
1984
1983
1982
1981
1980
1979
1978
1977
1976
1975
1974
1973
t-statistic p-value t-statistic p-value t-statistic p-value t-statistic p-value t-statistic p-value t-statistic p-value t-statistic p-value t-statistic p-value t-statistic p-value t-statistic p-value t-statistic p-value t-statistic p-value
1.349 0.509 7.334 0.026 1.316 0.518 5.146 0.076 1.566 0.457 2.946 0.229 1.042 0.594 7.031 0.03 7.164 0.028 2.991 0.224 9.924 0.007 6.907 0.032 1
1
0
1
1
0
0
0
0
0
1
(continued)
#Rejections null 0
Level
Within transformation
Table 57.8 The EW identification test using real data #Rejections null 0
1995
1994
1993
1992
1991
1990
1989
1988
1987
1986
1985
Level
t-statistic p-value t-statistic p-value t-statistic p-value t-statistic p-value t-statistic p-value t-statistic p-value t-statistic p-value t-statistic p-value t-statistic p-value t-statistic p-value t-statistic p-value
Table 57.8 (continued)
16.084 0 4.827 0.089 19.432 0 5.152 0.076 0.295 0.863 0.923 0.63 3.281 0.194 2.31 0.315 1.517 0.468 2.873 0.238 0.969 0.616 0
0
0
0
0
0
0
0
1
0
#Rejections null 1
1995
1994
1993
1992
1991
1990
1989
1988
1987
1986
1985
t-statistic p-value t-statistic p-value t-statistic p-value t-statistic p-value t-statistic p-value t-statistic p-value t-statistic p-value t-statistic p-value t-statistic p-value t-statistic p-value t-statistic p-value
Within transformation 1.089 0.58 5.256 0.072 13.604 0.001 1.846 0.397 0.687 0.709 1.3 0.522 3.17 0.205 2.573 0.276 1.514 0.469 4.197 0.123 1.682 0.431
0
0
0
0
0
0
0
0
1
0
#Rejections null 0
t-statistic p-value t-statistic p-value t-statistic p-value t-statistic p-value t-statistic p-value t-statistic p-value t-statistic p-value t-statistic p-value t-statistic p-value t-statistic p-value
17.845 0 0.14 0.933 0.623 0.732 0.354 0.838 13.44 0.001 3.159 0.206 13.616 0.001 12.904 0.002 5.212 0.074 2.365 0.306 Sum % of years 12 0.3636
0
0
1
1
0
1
0
0
0
1
2005
2004
2003
2002
2001
2000
1999
1998
1997
1996
t-statistic p-value t-statistic p-value t-statistic p-value t-statistic p-value t-statistic p-value t-statistic p-value t-statistic p-value t-statistic p-value t-statistic p-value t-statistic p-value
4.711 0.095 1.535 0.464 5.426 0.066 2.148 0.342 13.502 0.001 3.309 0.191 0.693 0.707 4.006 0.135 2.801 0.246 4.127 0.127 Sum % of years 7 0.2121
0
0
0
0
0
1
0
0
0
0
This table shows the test statistic and its p-value for the EW identification test, which tests the null hypothesis that the model is not identified. The tests are performed on a yearly basis. In the last columns, we collect the number of years in which the null hypothesis is rejected (sum) and compute the percentage of years in which the null is rejected. The data are taken from the annual Compustat industrial files over the 1970–2005 period. See text for details
2005
2004
2003
2002
2001
2000
1999
1998
1997
1996
Table 57.9 EW, GMM, and OLS-IV coefficients, real-world data

Variables                     OLS         OLS-FE      EW-GMM3     EW-GMM4     EW-GMM5     OLS-IV      AB-GMM
q                             0.0174***   0.0253***   0.0679      0.3031      0.0230      0.0627***   0.0453***
                              (0.002)     (0.003)     (0.045)     (0.302)     (0.079)     (0.007)     (0.006)
Cash flow                     0.1310***   0.1210***   0.1299***   0.3841*     0.1554***   0.0434***   0.0460***
                              (0.011)     (0.017)     (0.031)     (0.201)     (0.052)     (0.007)     (0.016)
Observations                  22,556      22,556      22,556      22,556      22,556      17,348      19,748
F-stat p-value (first step)   –           –           –           –           –           0.000       –

This table shows the coefficients and standard deviations that we obtain when we use the OLS, EW, and the GMM estimators in Eq. 57.22. The table also displays the standard OLS-FE coefficients (after applying the differencing transformation to treat the fixed effects) in column (2) and OLS-IV in the last column. Robust standard errors in parentheses for OLS and GMM and clustered in firms for OLS-FE and OLS-IV. Each EW coefficient is an average of the yearly coefficients reported in Table 57.11 and the standard error for these coefficients is a Fama–MacBeth standard error. The table shows the EW coefficients for the data after applying the within transformation. The data are taken from the annual Compustat industrial files over the 1970–2005 period. See text for details. *, **, and *** represent statistical significance at the 10 %, 5 %, and 1 % levels, respectively
When using OLS and OLS-FE, we obtain the standard result in the literature that both q and cash flow attract positive coefficients [see columns (1) and (2)]. In the OLS-FE specification, for example, we obtain a q coefficient of 0.025 and a cash flow coefficient of 0.121. Columns (3), (4), and (5) show that the EW estimator does not deliver robust inferences about the correlations between investment, cash flow, and q. The q coefficient estimate varies significantly with the set of moment conditions used, even flipping signs. In addition, none of the q coefficients is statistically significant. The cash flow coefficient is highly inflated under EW, and in the case of the EW-GMM4 estimator, it is more than three times larger than the (supposedly biased) OLS coefficient. These results are inconsistent with Conditions 1 and 2 above. These findings agree with the Monte Carlo simulations of Sect. 57.3.3, which also point to a very poor performance of the EW estimator in cases in which fixed effects and heteroscedasticity are present. By comparison, the OLS-IV delivers results that are consistent with Conditions 1 and 2. In particular, the q coefficient increases from 0.025 to 0.063, while the cash flow coefficient drops from 0.131 to 0.043. These results suggest that the proposed OLS-IV estimator does a fairly reasonable job at addressing the measurement-error problem. This conclusion is consistent with the Monte Carlo simulations reported above, which show that the OLS-IV procedure is robust to the presence of fixed effects and heteroscedasticity in simulated data. The AB-GMM results also generally satisfy Conditions 1 and 2. Notice, however, that the observed changes in the q and cash flow coefficients (“corrections” relative to the simple, biased OLS estimator) are less significant than those obtained under the OLS-IV estimation.
57.4.6 Robustness of the Empirical OLS-IV Estimator It is worth demonstrating that the OLS-IV we consider is robust to variations in the set of instruments that is used for identification. While the OLS-IV delivered results that are consistent with our priors, note that we examined a just-identified model, for which tests of instrument quality are not available. As we have discussed previously, OLS-IV estimators should be used with care in this setting, since the underlying structure of the error in the latent variable is unknown. In particular, the Monte Carlo simulations suggest that it is important to show that the results remain when we use longer lags to identify the model. We present the results from our robustness checks in Table 57.10. We start by adding one more lag of q (i.e., q_t-3) to the instrumental set. The associated estimates are in the first column of Table 57.10. One can observe that the slope coefficient associated with q increases even more with the new instrument (up to 0.090), while that of the cash flow variable declines further (down to 0.038). One problem with this estimation, however, is the associated J-statistic. If we consider a 5 % hurdle rule, the J-statistic of 4.92 implies that, with this particular instrumental set, we reject the null hypothesis that the identification restrictions are met (p-value of 3 %). As we have discussed, this could be expected if, for example, the measurement-error process has an MA structure.
Table 57.10 OLS-IV coefficients, robustness tests

Variables                     (1)         (2)         (3)         (4)         (5)         (6)         (7)
q                             0.0901***   0.0652***   0.0394***   0.0559***   0.0906***   0.0660***   0.0718***
                              (0.014)     (0.014)     (0.015)     (0.012)     (0.033)     (0.024)     (0.026)
Cash flow                     0.0383***   0.0455***   0.0434***   0.0449***   0.0421***   0.0450***   0.0444***
                              (0.007)     (0.011)     (0.011)     (0.010)     (0.008)     (0.012)     (0.012)
Observations                  15,264      11,890      12,000      13,448      12,000      10,524      10,524
F-stat p-value (first step)   0.000       0.000       0.000       0.000       0.001       0.000       0.000
J-stat                        4.918       3.119       0.122       0.00698     0.497       8.955       5.271
J-stat p-value                0.0266      0.210       0.727       0.933       0.481       0.111       0.261

This table shows the results of varying the set of instruments that are used when applying the OLS-IV estimator to Eq. 57.22. In the first column, we use the second and third lags of q as instruments for current (differenced) q, as in Table 57.6. In column (2), we use the third, fourth, and fifth lags of q as instruments. In column (3), we use the fourth and fifth lags of q and the first lag of cash flow as instruments. In column (4), we use the third lag of q and the fourth lag of cash flow as instruments. In column (5), we use the fourth and fifth lags of cash flow as instruments. In column (6), we use {q_t-4, q_t-5, q_t-6, CF_t-3, CF_t-4, CF_t-5} as instruments. Finally, in column (7), we use {q_t-5, q_t-6, CF_t-3, CF_t-4, CF_t-5} as instruments. The estimations correct the errors for heteroscedasticity and firm clustering. The data are taken from the annual Compustat industrial files over the 1970–2005 period. See text for details. *, **, and *** represent statistical significance at the 10 %, 5 %, and 1 % levels, respectively
This suggests that the researcher should look for longer lagging schemes, lags that "erase" the MA memory of the error structure. Our next set of estimations uses longer lag structures for our proposed instruments and even an instrumental set with only lags of cash flow, the exogenous regressor in the model. We use combinations of longer lags of q (such as the fourth and fifth lags) and longer lags of cash flow (fourth and fifth lags). This set of tests yields estimates that more clearly meet standard tests for instrument validity.28 Specifically, the J-statistics now indicate that we do not reject the hypothesis that the exclusion restrictions are met. The results reported in columns (2) through (7) of Table 57.10 also remain consistent with Conditions 1 and 2. In particular, the q coefficient varies from approximately 0.040 to 0.091, while the cash flow coefficient varies roughly from 0.044 to 0.046. These results are consistent with our simulations, which suggest that these longer lag structures should deliver relatively consistent, stable estimates of the coefficients for q and cash flow in standard investment regressions.
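For completeness, here is a sketch of the homoscedastic Sargan version of the overidentification statistic that plays the role of the "J-stat" in Table 57.10. The chapter's statistics are produced by the estimation routine itself and are robust to heteroscedasticity and firm clustering, so this simplified version will not reproduce the reported numbers exactly.

```python
import numpy as np
from scipy import stats

def sargan_j(y, X, Z):
    """Sargan/J statistic: n * R^2 from regressing 2SLS residuals on the instrument set Z.
    y is an n-vector, X the regressors (incl. constant), Z the instruments (incl. constant
    and any exogenous columns of X)."""
    Xhat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]        # first-stage fitted values
    b = np.linalg.lstsq(Xhat, y, rcond=None)[0]            # 2SLS coefficients
    e = y - X @ b                                          # 2SLS residuals (original X)
    e_fit = Z @ np.linalg.lstsq(Z, e, rcond=None)[0]       # project residuals on instruments
    r2 = 1.0 - np.sum((e - e_fit) ** 2) / np.sum((e - e.mean()) ** 2)
    j = len(y) * r2
    dof = Z.shape[1] - X.shape[1]                          # number of overidentifying restrictions
    return j, stats.chi2.sf(j, dof)
```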
57.5 Concluding Remarks
OLS estimators have been used as a reference in empirical work in financial economics. Despite their popularity, those estimators perform poorly when dealing with the problem of errors in variables. This is a serious problem since in most empirical applications, one might raise concerns about issues such as data quality and measurement errors. This chapter uses Monte Carlo simulations and real data to assess the performance of different estimators that deal with measurement error, including EW's higher-order moment estimator and alternative instrumental variable-type approaches. We show that in the presence of individual fixed effects, under heteroscedasticity, or in the absence of a high degree of skewness in the data, the EW estimator returns biased coefficients for both mismeasured and perfectly measured regressors. The IV estimator requires assumptions about the autocorrelation structure of the measurement error, which we characterize and discuss in the chapter. We also estimate empirical investment models using the two methods. Because real-world investment data contain firm-fixed effects and heteroscedasticity, the EW estimator delivers coefficients that are unstable across different specifications and not economically meaningful. In contrast, a simple OLS-IV estimator yields results that conform to theoretical expectations. We conclude that real-world investment data is likely to satisfy the assumptions that are required for identification of OLS-IV but that the presence of heteroscedasticity and fixed effects causes the EW estimator to return biased coefficients.
28 All of the F-statistics associated with the first-stage regressions have p-values that are close to zero. These statistics (reported in Table 57.10) suggest that we do not incur a weak instrument problem when we use longer lags in our instrumental set.
Table 57.11 EW coefficients for real data (within transformation) q coefficient Year 1973 1974 1975 1976 1977 1978 1979 1980 1981 1982 1983 1984 1985 1986 1987 1988 1989 1990 1991 1992 1993
GMM3 GMM4 GMM5 0.029 0.000 0.000 (0.075) (0.073) (4.254) 0.050 0.029 0.019 (0.037) (0.012) (0.016) 0.225 0.001 0.000 (0.475) (0.149) (0.125) 0.137 0.001 0.000 (0.094) (0.273) (0.042) 0.082 0.243 0.000 (0.263) (0.109) (0.108) 0.263 0.514 0.281 (0.282) (0.927) (0.146) 0.020 0.001 0.001 (0.161) (0.048) (0.031) 0.349 0.116 0.183 (0.294) (0.071) (0.055) 0.334 0.185 0.324 (0.165) (0.045) (0.128) 0.109 0.383 0.238 (0.155) (0.316) (0.126) 0.081 0.001 0.001 (0.037) (0.041) (0.059) 0.230 0.210 0.185 (0.083) (0.050) (0.043) 0.198 0.349 0.230 (0.483) (0.137) (0.024) 0.672 0.244 0.593 (0.447) (0.089) (0.162) 0.102 0.104 0.115 (0.039) (0.020) (0.003) 0.129 0.179 0.148 (0.051) (0.029) (0.014) 0.365 0.015 0.111 (1.797) (0.082) (0.196) 0.437 0.419 0.529 (0.404) (0.137) (0.024) 0.384 0.260 0.240 (0.225) (0.105) (0.038) 0.105 0.102 0.040 (0.016) (0.008) (0.016) 0.274 0.322 0.452 (0.394) (0.352) (0.273)
Cash flow coefficient GMM3 GMM4 GMM5 0.347 0.265 0.264 (0.207) (0.207) (11.968) 0.168 0.199 0.214 (0.073) (0.043) (0.043) 0.161 0.292 0.292 (0.281) (0.095) (0.094) 0.156 0.276 0.276 (0.090) (0.251) (0.048) 0.203 0.091 0.261 (0.179) (0.090) (0.083) 0.122 (0.067) 0.108 (0.224) (0.689) (0.125) 0.249 0.266 0.266 (0.155) (0.056) (0.044) 0.021 0.219 0.163 (0.273) (0.074) (0.067) 0.145 0.061 0.131 (0.248) (0.093) (0.191) 0.125 0.206 0.031 (0.195) (0.398) (0.174) 0.132 0.184 0.184 (0.033) (0.034) (0.040) 0.125 0.138 0.154 (0.067) (0.052) (0.048) 0.050 (0.018) 0.035 (0.212) (0.086) (0.032) 0.179 0.070 0.133 (0.303) (0.079) (0.128) 0.078 0.078 0.077 (0.021) (0.021) (0.020) 0.030 0.027 0.029 (0.011) (0.007) (0.007) 0.285 0.162 0.196 (0.642) (0.063) (0.078) 0.395 0.386 0.440 (0.214) (0.094) (0.093) 0.098 0.007 0.023 (0.199) (0.099) (0.055) 0.086 0.088 0.148 (0.034) (0.033) (0.037) 0.076 0.118 0.232 (0.360) (0.297) (0.276) (continued)
Table 57.11 (continued) q coefficient Year 1994
GMM3 GMM4 0.110 4.436 (0.136) (86.246) 1995 0.574 8.847 (1.862) (145.827) 1996 0.220 0.167 (0.068) (0.022) 1997 0.089 0.177 (0.082) (0.042) 1998 0.620 0.245 (1.634) (0.187) 1999 0.031 0.003 (0.059) (0.028) 2000 0.071 0.126 (0.024) (0.030) 2001 0.050 0.077 (0.021) (0.020) 2002 0.047 0.048 (0.128) (0.016) 2003 0.131 0.066 (0.043) (0.025) 2004 0.005 0.030 (0.066) (0.018) 2005 0.049 0.029 (0.025) (0.009) Fama–MacBeth standard error 0.0679 0.3031 0.0455 0.3018
Cash flow coefficient GMM5 GMM3 GMM4 GMM5 0.047 0.255 3.550 0.207 (0.011) (0.108) (65.488) (0.045) 2.266 0.537 5.898 1.633 (5.565) (1.275) (94.154) (3.749) 0.196 0.101 0.106 0.103 (0.013) (0.036) (0.033) (0.030) 0.158 0.059 0.020 0.028 (0.021) (0.042) (0.041) (0.034) 0.119 0.688 0.355 0.242 (0.027) (1.446) (0.169) (0.037) 0.000 0.160 0.126 0.123 (0.055) (0.074) (0.038) (0.068) 0.118 0.032 0.029 0.021 (0.020) (0.043) (0.057) (0.051) 0.055 0.034 0.020 0.031 (0.013) (0.016) (0.016) (0.012) 0.048 0.030 0.030 0.030 (0.014) (0.033) (0.013) (0.012) 0.157 0.013 0.025 0.027 (0.010) (0.031) (0.026) (0.014) 0.030 0.092 0.079 0.079 (0.009) (0.045) (0.034) (0.034) 0.026 0.078 0.095 0.098 (0.011) (0.040) (0.032) (0.034) 0.0232 0.1299 0.3841 0.1554 0.0787 0.0310 0.2027 0.05222
This table shows the coefficients and standard deviations that we obtain when we use the EW estimator in Eq. 57.22, estimated year by year. The table also shows the results for the EW estimator associated with GMM3, GMM4, and GMM5. The table also shows the EW coefficients for the data that is treated for fixed effects via the within transformation. The data are taken from the annual Compustat industrial files over the 1970–2005 period. See text for details
References Agca, S., & Mozumdar, A. (2007). Investment-cash flow sensitivity: Myth or reality? Working paper, George Washington University. Almeida, H., & Campello, M. (2007). Financial constraints, asset tangibility and corporate investment. Review of Financial Studies, 20, 1429–1460. Altonji, J., & Segal, L. (1996). Small-sample bias in GMM estimation of covariance structures. Journal of Business & Economic Statistics, 14, 353–366. Arellano, M., & Bond, S. (1991). Some tests of specification for panel data: Monte Carlo evidence and an application to employment equations. Review of Economic Studies, 58, 277–297.
Bakke, T., & Whited, T. (2009). What gives? A study of firms’ reactions to cash shortfalls. Working paper, University of Rochester. Baltagi, B. (2005). Econometric analysis of panel data (3rd ed.). Chichester: Wiley. Baum, C., Schaffer, M., & Stillman, S. (2003). Instrumental variables and GMM: Estimation and testing. Stata Journal, 3, 1–31. Bertrand, M., & Mullainathan, S. (2005). Bidding for oil and gas leases in the Gulf of Mexico: A test of the free cash flow model?. mimeo, University of Chicago and MIT. Bertrand, M., & Schoar, A. (2003). Managing with style: The effect of managers on firm policies. Quarterly Journal of Economics, 118, 1169–1208. Bertrand, M., Duflo, E., & Mullainathan, S. (2004). How much should we trust differences-indifferences estimates? Quarterly Journal of Economics, 119, 249–275. Biorn, E. (2000). Panel data with measurement errors: Instrumental variables and GMM procedures combining levels and differences. Econometric Reviews, 19, 391–424. Blanchard, O., Lopez-de-Silanes, F., & Shleifer, A. (1994). What do firms do with cash windfalls? Journal of Financial Economics, 36, 337–360. Blundell, R., Bond, S., Devereux, M., & Schiantarelli, F. (1992). Investment and Tobin’s Q: Evidence from company panel data. Journal of Econometrics, 51, 233–257. Bond, S., & Cummins, J. (2000). The stock market and investment in the new economy. Brookings Papers on Economic Activity, 13, 61–124. Chirinko, R. (1993). Business fixed investment spending: Modeling strategies, empirical results, and policy implications. Journal of Economic Literature, 31, 1875–1911. Colak, G., & Whited, T. (2007). Spin-offs, divestitures, and conglomerate investment. Review of Financial Studies, 20, 557–595. Cragg, J. (1997). Using higher moments to estimate the simple errors-in-variables model. RAND Journal of Economics, 28, 71–91. Doran, H., & Schmidt, P. (2006). GMM estimators with improved finite sample properties using principal components of the weighting matrix, with an application to the dynamic panel data model. Journal of Econometrics, 133, 387–409. Erickson, T., & Whited, T. (2000). Measurement error and the relationship between investment and Q. Journal of Political Economy, 108, 1027–1057. Erickson, T., & Whited, T. (2002). Two-step GMM estimation of the errors-in-variables model using high-order moments. Econometric Theory, 18, 776–799. Fazzari, S., Hubbard, R. G., & Petersen, B. (1988). Financing constraints and corporate investment. Brooking Papers on Economic Activity, 1, 141–195. Frees, E. (2004). Longitudinal and panel data analysis and applications in the social sciences. New York: Cambridge University Press. Gan, J. (2007). Financial constraints and corporate investment: Evidence from an exogenous shock to collateral. Journal of Financial Economics, 85, 709–734. Gomes, J. (2001). Financing investment. American Economic Review, 91, 1263–1285. Griliches, Z., & Hausman, J. A. (1986). Errors in variables in panel data. Journal of Econometrics, 31, 93–118. Hadlock, C. (1998). Ownership, liquidity, and investment. RAND Journal of Economics, 29, 487–508. Hayashi, F. (1982). Tobin’s marginal q and average q: A neoclassical interpretation. Econometrica, 50, 213–224. Hayashi, F. (2000). Econometrics. Princeton: Princeton University Press. Hennessy, C. (2004). Tobin’s Q, debt overhang, and investment. Journal of Finance, 59, 1717–1742. Holtz-Eakin, D., Newey, W., & Rosen, H. S. (1988). Estimating vector autoregressions with panel data. Econometrica, 56, 1371–1395. 
Hoshi, T., Kashyap, A., & Scharfstein, D. (1991). Corporate structure, liquidity, and investment: Evidence from Japanese industrial groups. Quarterly Journal of Economics, 106, 33–60. Hubbard, R. G. (1998). Capital market imperfections and investment. Journal of Economic Literature, 36, 193–227.
Kaplan, S., & Zingales, L. (1997). Do financing constraints explain why investment is correlated with cash flow? Quarterly Journal of Economics, 112, 169–215. Lamont, O. (1997). Cash flow and investment: Evidence from internal capital markets. Journal of Finance, 52, 83–110. Lyandres, E. (2007). External financing costs, investment timing, and investment-cash flow sensitivity. Journal of Corporate Finance, 13, 959–980. Malmendier, U., & Tate, G. (2005). CEO overconfidence and corporate investment. Journal of Finance, 60, 2661–2700. Petersen, M. (2009). Estimating standard errors in finance panel data sets: Comparing approaches. Review of Financial Studies, 22, 435–480. Poterba, J. (1988). Financing constraints and corporate investment: Comment. Brookings Papers on Economic Activity, 1, 200–204. Rauh, J. (2006). Investment and financing constraints: Evidence from the funding of corporate pension plans. Journal of Finance, 61, 33–71. Riddick, L., & Whited, T. (2009). The corporate propensity to save. Journal of Finance, 64, 1729–1766. Shin, H., & Stulz, R. (1998). Are internal capital markets efficient? Quarterly Journal of Economics, 113, 531–552. Stein, J. (2003). Agency information and corporate investment. In G. Constantinides, M. Harris, & R. Stulz (Eds.), Handbook of the economics of finance. Amsterdam: Elsevier/North-Holland. Wansbeek, T. J. (2001). GMM estimation in panel data models with measurement error. Journal of Econometrics, 104, 259–268. Whited, T. (2001). Is it inefficient investment that causes the diversification discount? Journal of Finance, 56, 1667–1691. Whited, T. (2006). External finance constraints and the intertemporal pattern of intermittent investment. Journal of Financial Economics, 81, 467–502. Xiao, Z., Shao, J., & Palta, M. (2008). A unified theory for GMM estimation in panel data models with measurement error. Working paper, University of Wisconsin Madison.
Realized Distributions of Dynamic Conditional Correlation and Volatility Thresholds in the Crude Oil, Gold, and Dollar/Pound Currency Markets
58
Tung-Li Shih, Hai-Chin Yu, Der-Tzon Hsieh, and Chia-Ju Lee
Contents
58.1 Introduction .......................................................... 1620
58.2 Literature Review ..................................................... 1622
58.3 Data .................................................................. 1623
58.4 Dynamic Conditional Correlation ....................................... 1625
     58.4.1 Dynamic Conditional Correlation Between Gold, Oil, and the Dollar/Pound ........ 1625
     58.4.2 Empirical Results of Dynamic Conditional Correlation ........... 1628
     58.4.3 Volatility Threshold Dynamic Conditional Correlation ........... 1636
     58.4.4 Does Investors' Behavior Change over Subperiods? ............... 1639
58.5 Conclusions and Implications .......................................... 1643
References ................................................................. 1644
Abstract
This chapter proposes a modeling framework for the study of co-movements in price changes among crude oil, gold, and dollar/pound currencies that are conditional on volatility regimes. Methodologically, we extend the dynamic conditional correlation (DCC) multivariate GARCH model to examine the volatility and correlation dynamics depending on the variances of price returns involving a threshold structure. The results indicate that the periods of market turbulence are associated with an increase in co-movements in commodity (gold and oil) prices. By contrast, high market volatility is associated with a decrease in co-movements between gold and the dollar/pound or oil and the dollar/pound. The results imply that gold may act as a safe haven against major currencies when investors face market turmoil. By looking at different subperiods based on the estimated thresholds, we find that the investors' behavior changes in different subperiods. Our model presents a useful tool for market participants to engage in better portfolio allocation and risk management.
Keywords
Dynamic conditional correlation • Volatility threshold • Realized distribution • Currency market • Gold • Oil
T.-L. Shih Department of Hospitality Management, Ming Dao University, Changhua Peetow, Taiwan e-mail: [email protected] H.-C. Yu (*) Department of International Business, Chung Yuan University, Chungli, Taiwan e-mail: [email protected]; [email protected] D.-T. Hsieh Department of Economics, National Taiwan University, Taipei, Taiwan e-mail: [email protected] C.-J. Lee College of Business, Chung Yuan University, Chungli, Taiwan e-mail: [email protected] C.-F. Lee, J. Lee (eds.), Handbook of Financial Econometrics and Statistics, DOI 10.1007/978-1-4614-7750-1_58, # Springer Science+Business Media New York 2015
58.1 Introduction
Commodity markets in recent years have experienced dramatic growth in trading volume as well as widespread price volatility. With few exceptions, most of the commodities have experienced an impressive bull run and have generally outperformed traditional investments. For example, the prices of commodities such as crude oil have risen dramatically, and the crude oil price almost reached a new high of US$200 per barrel in 2011. In the meantime, the price of gold hit a new high of US$1,700 in 2011. These price surprises have influenced not only the commodity markets but also the currency markets and the international parity of foreign exchange. By the fall of 2007, the increasing speculation in commodity markets was associated with the devaluation of the US dollar. Among these commodities, gold appears to have exhibited a more stable price trend than crude oil. From the beginning of the financial crisis in 1997 up until 2011, the price of gold has risen by almost 42 %. For many years, gold has been viewed as a safe haven from market turbulence. However, very few empirical studies have examined the role of gold as a safe-haven asset and even fewer have examined gold’s safe-haven role with respect to major currency exchange rates, especially those of the two major currencies – the US dollar and the British pound. The reason why we choose exchange rates as a comparative baseline is that, for commodities that are traded continuously in organized markets, a change in a major currency exchange rate will result in an instant adjustment in the prices of commodities in at least one currency and perhaps in both currencies if both countries are “large.” For instance, when the dollar depreciates against the pound, the dollar prices of commodities tend to rise (and pound prices fall) even though the fundamentals of the markets remain unchanged.
This widely expanded and complex volatility in commodity prices increases the importance of modeling real volatility and correlation, because a good estimate helps facilitate portfolio optimization, risk management, and hedging activities. Although some of the literature assumes volatility and correlation to be constant in the past years, it is widely recognized that they indeed vary over time. This recognition has spurred a vibrant body of work regarding the dynamic properties of market volatility. To date, very little is known about the volatility dynamics between the commodity and currency markets, for instance, in the case of gold and its possible correlations with oil and major currencies. This chapter intends to address this gap. The main purpose of this study is to examine the dynamic relationships among gold, oil, and the dollar/pound to further understand the hedging ability of gold relative to another commodity or currency. That is to say, if gold acts as a financial safe haven against the dollar (or oil), it allows for systematic feedback between changes in the price of gold, oil, and the dollar/pound exchange rate. Specifically, this chapter asks, does gold act as a safe haven against the dollar/pound, as a hedge, or as neither? Are gold and oil highly correlated with each other? Movements in the price of gold, oil, and the dollar/pound are analyzed using a model of dynamic conditional correlation covering 20 years of daily data. Studies related to this issue are few. Capie et al. (2005) point out that gold acts as an effective hedge against the US dollar by estimating elasticity relative to changes in the exchange rate. However, their approach involves the use of a single-equation model in which the exchange rate is assumed to be unaffected by the time of the dependent variable, the price of gold. Our chapter improves their work by employing a dynamic model of conditional correlations in which all variables are treated symmetrically. Besides, although Baur and Lucey (2010) find evidence in support of gold providing a haven from losses incurred in the bond and stock markets, they neglect the interactions with the currency market and, like Capie et al. (2005), do not consider feedback in their model of returns. Nikos (2006) uses correlation analysis to estimate the correlation of returns between gold and the dollar and shows that the correlation between the dollar and gold is 0.19 and 0.51 for two different periods. These findings imply that gold is a contemporaneous safe haven in extreme currency market conditions. Steinitz (2006) utilizes the same method to estimate the correlations of weekly returns between gold and Brent oil for two periods of 1 year and 5 years, respectively, and shows that the correlations between gold and Brent oil are 0.310 and 0.117, respectively. Boyer and Fillion (2007) report on the financial determination of Canadian oil and gas stock returns and conclude that a weakening of the Canadian dollar against the US dollar has a negative impact on stock returns. If correlations and volatilities vary over time, the hedge ratio should be adjusted to account for the new information. Other work, such as Baur and McDermott (2010), similarly neglects feedback in its regression model. Further studies
investigate the concept of a safe-haven asset without reference to gold. For example, Ranaldo and Soderlind (2009) and Kaul and Sapp (2006) examine safe-haven currencies, while Upper (2000) examines German government bonds as safe-haven instruments. Andersen et al. (2007) show that exchange rate volatility outstrips bond volatility in the US, British, and German markets. Thus, currency risk is worth exploring and hedging. As a general rule, commodities are priced in US dollars. Since the US currency weakened as the bull run in commodity prices appeared, the question arises as to what extent the increases in commodity prices have been a product of the depreciation of the US dollar. Furthermore, it would be interesting to examine how the ability to provide a hedge against the dollar varies across different commodities. It also needs to be asked which investment instruments are more suitable for diversification purposes to protect against changes in the US currency. This chapter investigates the following issues. First, how do the time-varying correlations and associated distributions appear in the crude oil, gold, and dollar/pound markets? Second, what is the shape of each separate distribution at various volatility levels among the crude oil, gold, and dollar/pound markets? Third, by employing the volatility threshold DCC model put forward by Kasch and Caporin (2012), is the high volatility (exceeding a specified threshold) of the assets associated with an increasing degree of correlation? We find that the volatility thresholds of oil and gold correspond to two major events – the First Gulf War in 1990 and the 911 event in 2001. We also find that the increase in commodity (crude oil and gold) prices was a reflection of the falling US dollar, especially after the 911 event. The evidence shows that the DCC between crude oil and gold was 0.1168, while those for the gold/dollar/pound and oil/dollar/pound markets were 0.2826 and 0.0369, respectively, with the latter being significantly higher than in the other subperiods. The remainder of this chapter is organized as follows. Section 58.2 provides a review of the literature. Section 58.3 describes the data and summary statistics for crude oil, gold, and the dollar/pound exchange rate. Section 58.4 presents the dynamic conditional correlation model, reports the volatility threshold results, and provides the results for the subperiods separated by the estimated thresholds. Finally, Sect. 58.5 discusses the results and concludes.
58.2 Literature Review
Engle et al. (1994) investigate how the returns and volatilities of stock indices between Tokyo and New York are correlated and find that, except for a lagged return spillover from New York to Tokyo after the crash, there was no significant lagged spillover in returns or in volatilities. Ng (2000) examines the size and the impact of volatility spillover from Japan and the USA to six Pacific Basin equity markets. Using four different specifications of correlation by constructing volatility spillover models, he distinguishes the volatility between local idiosyncratic shock, regional shock from Japan, and global shock from the USA and finds significant
spillover effects from regional to Pacific Basin economies. Andersen et al. (2001a) find strong evidence that volatilities and correlations move together in a manner broadly consistent with the latent factor structure. Andersen et al. (2001b) found that volatility movements are highly correlated across the deutsche mark and yen against the US dollar. Furthermore, the correlation between the two exchange rates increases with volatility. Engle (2002) finds that the breakdown of the correlations between the deutsche mark and the pound and lira in August 1992 is very apparent. In addition, after the euro is launched, the estimated currency correlation essentially moves to 1. Recently, Doong et al. (2005) examined the dynamic relationship and pricing between stocks and exchange rates for six Asian emerging markets. They found that the currency depreciation is accompanied by a fall in stock prices. The conditional variance-covariance process of changes in stock prices and exchange rates is time varying. Lanza et al. (2006) estimate the dynamic conditional correlations in the daily returns for West Texas Intermediate (WTI) oil forward and futures prices from January 3, 1985 to January 16, 2004, and find that the dynamic conditional correlations vary dramatically. Chiang et al. (2009) investigate the probability distribution properties, autocorrelations, dynamic conditional correlations, and scaling analysis of Dow-Jones and NASDAQ intraday returns from August 1, 1997 to December 31, 2003. They find the correlations to be positive and to mostly fluctuate in the range of 0.6–0.8. Furthermore, the variance of the correlation coefficients has been declining and appears to be stable during the post-2001 period. Pérez-Rodríguez (2006) applies a multivariate DCC-GARCH technique to examine the structure of the short-run dynamics of volatility returns on the euro, yen, and British pound against the US dollar over the period from 1999 to 2004 and finds strong dynamic relationships between currencies. Tastan (2006) applies multivariate GARCH to capture the time-varying variance-covariance matrix for stock market returns (Dow-Jones Industrial Average Index and S&P500 Index) and changes in exchange rates (euro/dollar exchange rates). He also plots news impact surfaces for variances, covariances, and correlation coefficients to sort out the effects of shocks. Chiang et al. (2007a) apply a dynamic conditional correlation model to nine Asian daily stock-return series from 1990 to 2003 and find evidence of a contagion effect and herding behavior. Chiang et al. (2007b) examine A-share and B-share market segmentation conditions by employing a dynamic multivariate GARCH model and show that stock returns in both A- and B-shares are positively correlated with the daily change in trading volume or abnormal volume.
58.3 Data
Our data consist of the daily prices of crude oil and gold, and the US dollar/British pound exchange rate, and are obtained from the AREMOS database over the period from January 1, 1986 to December 31, 2007 for a total of 5,165 observations. The West Texas Intermediate crude oil price is chosen to represent the oil spot market, and the price of 99.5 % fine gold, the London afternoon fixing,
[Figure: two panels, "The Price Movement of Crude Oil and Gold Market" and "The Price Movement of Foreign Exchange Market (USD/Pound)"]
Fig. 58.1 The price movement for the sampled markets from January 1, 1986 to December 31, 2007, for a total of 5,405 observations
is chosen to represent the gold spot market. The daily dollar/pound exchange rate, which represents the major currencies, is selected to estimate the volatility of the FX market. In our sample period, the crude oil price was testing the $100 per barrel threshold by November 2007. Meanwhile, the price of gold was relatively stable varying between $415 and $440 per ounce from January to September 2005. However, in the fourth quarter of 2005, the gold price jumped dramatically and hit $500 per ounce. In April 2006, the gold price broke through the $640 level. 2007 was a strong year, with the price steadily rising from $640 on January 2 with a closing London fixed price of over $836 on December 31, 2007. Since then, prices have continued to increase to reach new record highs of over $1,700 in 2011. Figure 58.1 displays the price movements for oil, gold, and the dollar/pound over the sample period. As shown in Fig. 58.1, gold traded between a low of $252 (August 1999) and a high of $836 (December 31, 2007) per ounce at the fixing, while oil traded between a low of $10 (in late 1998, in the wake of the Asian
Table 58.1 Summary statistics of the daily returns among crude oil, gold, and dollar/pound^a (January 1, 1986 to December 31, 2007)

                  Crude oil                Gold                     Dollar/pound
Mean              0.024 (0.536)            0.017 (0.159)            0.006 (0.4769)
Max               0.437                    0.070                    0.0379
Min               0.404                    0.063                    0.0329
Standard dev.     0.029                    0.009                    0.006
Skewness^b        0.012 (0.729)            0.031 (0.354)            0.164** (0.0000)
Kurtosis^b        37.915** (0.0000)        5.968** (0.0000)         2.439** (0.0000)
Jarque-Bera^c     323,692.99** (0.0000)    8,019.20** (0.0000)      1,363.59** (0.0000)

^a The table summarizes the daily returns of estimates for the West Texas Intermediate crude oil, gold, and dollar/pound markets. The sample covers the period from January 1, 1986 through December 31, 2007 for a total of 5,405 observations
^b The three markets are far away from the skewness and kurtosis of 0 and 3, respectively, implying that the three markets are not normally distributed
^c Jarque-Bera is the Jarque-Bera test statistic, distributed as χ² with 2 degrees of freedom
**Denotes significance at the 0.05 level
Financial Crisis and the United Nations’ oil-for-food program) and a high of $99.3 (November 2007) per barrel. These large variations in the price of both gold and oil indicate that the DCC and realized distribution are better approaches for detecting the trading pattern of investors. Table 58.1 reports the statistics of daily returns for crude oil, gold, and the dollar/ pound exchange rate. The daily returns are calculated as the first differences of the natural log of the prices times 100. The results show that the crude oil has the highest return, followed by gold and the dollar/pound.
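As a minimal illustration of this return construction and of the statistics reported in Table 58.1, the following Python sketch computes the daily log returns (times 100) and their summary statistics. The file name and column labels are hypothetical placeholders, not the chapter's actual data source.

```python
import numpy as np
import pandas as pd
from scipy import stats

# Hypothetical input: daily closing prices with columns "oil" (WTI spot),
# "gold" (London fixing), and "fx" (dollar/pound rate), indexed by date.
prices = pd.read_csv("prices.csv", index_col=0, parse_dates=True)

# Daily returns: first differences of the natural log of the prices, times 100.
returns = 100 * np.log(prices).diff().dropna()

# Summary statistics in the spirit of Table 58.1.
summary = pd.DataFrame({
    "mean": returns.mean(),
    "max": returns.max(),
    "min": returns.min(),
    "std": returns.std(),
    "skewness": returns.skew(),
    "excess kurtosis": returns.kurt(),  # 0 under normality
    "Jarque-Bera": pd.Series({c: stats.jarque_bera(returns[c])[0]
                              for c in returns.columns}),
})
print(summary.round(4))
```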
58.4 Dynamic Conditional Correlation
58.4.1 Dynamic Conditional Correlation Between Gold, Oil, and the Dollar/Pound

It is generally recognized that financial markets are highly integrated in terms of price movements, since soaring prices in one market can spill over to another market instantly. One simple method to explore the relationship between two markets is to calculate the correlation coefficient. We then specify a multivariate model that computes the dynamic conditional correlation (DCC) and is capable of capturing ongoing market elements and shocks. The DCC model is specified as Eq. 58.1.
Table 58.2 The correlation among crude oil, gold, and FX of dollar/pound (January 1, 1986 to December 31, 2007)

        Oil        Gold       FX
Oil     1
Gold    0.7488     1
FX      −0.5592    −0.6260    1
$$\rho_{ij,t} = \frac{E_{t-1}\!\left[r_{i,t}\, r_{j,t}\right]}{\sqrt{E_{t-1}\!\left[r^{2}_{i,t}\right]\, E_{t-1}\!\left[r^{2}_{j,t}\right]}} \qquad (58.1)$$
where the conditional correlation $\rho_{ij,t}$ is based on information known in the previous period ($E_{t-1}$ denotes the conditional expectation) and $i, j$ represent the three markets 1, 2, and 3. Based on the laws of probability, all correlations defined in this way must lie within the interval $[-1, 1]$. This is different from the constant correlation we have usually used and assumed throughout a given period. To clarify the relationship between the conditional correlations and conditional variances, it is convenient to express the returns as the conditional standard deviation times the standardized disturbance, as suggested by Engle (2002) in Eq. 58.2 below:

$$h_{i,t} = E_{t-1}\!\left[r^{2}_{i,t}\right], \qquad r_{i,t} = \sqrt{h_{i,t}}\; e_{i,t}, \qquad i = 1, 2, 3 \qquad (58.2)$$
Since the correlation coefficients among crude oil, gold, and dollar/pound FX markets provide useful measures of the long-term relationship between each pair of markets, Table 58.2 presents a simple correlation matrix in which the calculation is based on the constant coefficient given by Eq. 58.1. Some preliminary information is obtained below. First, the crude oil and gold are highly correlated with a coefficient of 0.7488, a result that is in line with Steinitz (2006). Secondly, both gold and crude oil are highly negatively related to the dollar/pound with coefficients of 0.6260 and 0.5592, respectively, which is consistent with the report of Nikos (2006). As the autoregressive conditional heteroskedasticity (ARCH) model has become the most useful model in investigating the conditional volatility since Engle (1982), we then follow this model in our analysis. The ARCH model adopts the effect of past residuals that helps explain the phenomenon of volatility clustering. Bollerslev (1986) proposed the generalized autoregressive conditional heteroskedasticity (GARCH) model, which has created a new field in the research on volatility and is widely used in financial and economic time series. Some of his research attempts to discuss the effects of more than one variable simultaneously. For instance, Bollerslev (1990) proposed the constant conditional correlation (CCC) model which makes a strong assumption, namely, that the correlation among the variables remains constant in order to simplify the estimation. Engle (2002) later proposed
a dynamic conditional correlation (DCC) model, which allows the correlation to be time varying and, by involving fewer complicated calculations, is capable of dealing with numerous variables. In this chapter, we follow the Engle (2002) approach, which has clear computational advantages over multivariate GARCH models in that the number of parameters to be estimated remains constant and loosens the assumptions of the multivariate conditional correlations, in order to develop the dynamic conditional correlation (DCC) model. The DCC model can be viewed as a generalization of the Bollerslev (1990) constant conditional correlation (CCC) estimator. It differs only in that it allows the correlation to be time varying, which parameterizes the conditional correlations directly. The estimation takes place in two steps, in that a series of univariate GARCH estimates are first obtained followed by the correlation coefficients. The characteristics of the DCC model are that the multivariate conditional correlations are dynamic and not constant and confirm that the real conditional correlations of financial assets in general and the time-varying covariance matrices can be estimated. This model involves a less complicated calculation without losing too much generality and is able to deal with numerous variables. Following Engle (2002) and Chiang et al. (2009), the mean equation is assumed to be represented by Eq. 58.1, where the multivariate conditional variance is given by

$$H_{\Delta t,t} = D_{\Delta t,t}\, V_{\Delta t,t}\, D_{\Delta t,t}, \qquad (58.3)$$

where $\Delta t$ is a time interval, which can be a day, an hour, or 1 min. Here $\Delta t$ is a daily interval. $V_{\Delta t,t}$ is a symmetric conditional correlation matrix of $e_t$, and $D_{\Delta t,t}$ is a $(2 \times 2)$ matrix with the conditional variances $h_{\Delta t,ii,t}$ for two returns (where $i$ = gold, oil, or the dollar/pound exchange rate) on the diagonal. That is, $D_{\Delta t,t} = \operatorname{diag}\!\left[\sqrt{\sigma^{2}_{\Delta t,ii,t}}\right]_{(2,2)}$. Equation 58.3 suggests that the dynamic properties of the covariance matrix $H_{\Delta t,t}$ are determined by $D_{\Delta t,t}$ and $V_{\Delta t,t}$ for a given $\Delta t$, a time interval that can be 1 min, 1 day, or 1 week and so on. The DCC model proposed by Engle (2002) involves a two-stage estimation of the conditional covariance matrix $H_t$ in Eq. 58.3. In the first stage, univariate volatility models are fitted for each of the returns, and estimates of $\sigma^{2}_{\Delta t,ii,t}$ $(i = 1, 2, \text{and } 3)$ are obtained by using Eq. 58.4. In the second stage, return residuals are transformed by their estimated standard deviations from the first stage. That is, $\eta_{\Delta t,i,t} = e_{\Delta t,i,t} / \sqrt{\sigma^{2}_{\Delta t,ii,t}}$, where $\eta_{\Delta t,i,t}$ is used to estimate the parameters of the conditional correlation. The evolution of the correlation in the DCC model is given by Eq. 58.5:

$$\sigma^{2}_{\Delta t,ii,t} = c_{\Delta t,i} + a_{\Delta t,i}\, e^{2}_{\Delta t,i,t-1} + b_{\Delta t,i}\, \sigma^{2}_{\Delta t,ii,t-1}, \qquad i = 1, 2 \qquad (58.4)$$

$$Q_{\Delta t,t} = \left(1 - a_{\Delta t,i} - b_{\Delta t,i}\right)\bar{Q}_{\Delta t} + a_{\Delta t,i}\, \eta_{\Delta t,i,t-1}\, \eta'_{\Delta t,i,t-1} + b_{\Delta t,i}\, Q_{\Delta t,t-1}, \qquad (58.5)$$
where $Q_{\Delta t,t} = (q_{\Delta t,ij,t})$ is the $2 \times 2$ time-varying covariance matrix of $\eta_{\Delta t,i,t}$, $\bar{Q}_{\Delta t} = E\!\left[\eta_{\Delta t,i,t}\, \eta'_{\Delta t,i,t}\right]$ is the $2 \times 2$ unconditional variance matrix of $\eta_{\Delta t,i,t}$, and $a_{\Delta t,i}$ and $b_{\Delta t,i}$ are non-negative scalar parameters satisfying $(a_{\Delta t,i} + b_{\Delta t,i}) < 1$. Since $Q_{\Delta t,t}$ does not generally have ones on its diagonal, we scale it to obtain a proper correlation matrix $V_{\Delta t,t}$. Thus,

$$V_{\Delta t,t} = \operatorname{diag}\!\left(Q_{\Delta t,t}\right)^{-1/2} Q_{\Delta t,t}\, \operatorname{diag}\!\left(Q_{\Delta t,t}\right)^{-1/2}, \qquad (58.6)$$

where $\operatorname{diag}\!\left(Q_{\Delta t,t}\right)^{-1/2} = \operatorname{diag}\!\left(1/\sqrt{q_{\Delta t,11,t}},\ 1/\sqrt{q_{\Delta t,22,t}}\right)$. Here $V_{\Delta t,t}$ in Eq. 58.6 is a correlation matrix with ones on the diagonal and off-diagonal elements of less than one in absolute value terms, as long as $Q_{\Delta t,t}$ is positive definite. A typical element of $V_{\Delta t,t}$ takes the form:

$$\rho_{\Delta t,12,t} = q_{\Delta t,12,t}\big/\sqrt{q_{\Delta t,11,t}\, q_{\Delta t,22,t}} \qquad (58.7)$$

The dynamic correlation coefficient, $\rho_{\Delta t,12,t}$, can be obtained by using the element of $Q_{\Delta t,t}$ in Eq. 58.5, which is given by Eq. 58.8 below:

$$q_{\Delta t,ij,t} = \left(1 - a_{\Delta t,i} - b_{\Delta t,i}\right)\bar{\rho}_{\Delta t,ij} + a_{\Delta t,i}\, \eta_{\Delta t,i,t-1}\, \eta_{\Delta t,j,t-1} + b_{\Delta t,i}\, q_{\Delta t,ij,t-1}, \qquad (58.8)$$

The mean reversion requires that $(a_{\Delta t,i} + b_{\Delta t,i}) < 1$. In general terms, the essence of this concept is the assumption that both an asset's high and low prices are temporary and that the asset's price will tend to move toward the average price over time. Besides, the estimates of the dynamic correlation coefficients, $\rho_{ij,t}$, between each pair of the three markets have been specified as in Eq. 58.1.
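To make the two-stage procedure concrete, the sketch below implements a minimal version of Eqs. 58.4–58.7 in Python for one pair of markets: a univariate GARCH(1,1) variance recursion, standardization of the residuals, and the DCC recursion for the correlation. The parameter values and the simulated returns are assumptions chosen only for illustration; in the chapter both stages are estimated by quasi-maximum likelihood on the actual oil, gold, and dollar/pound series.

```python
import numpy as np

def garch11_variance(eps, c, a, b):
    """Conditional variances from the GARCH(1,1) recursion of Eq. 58.4.
    eps: demeaned returns; c, a, b: parameters (fixed here for illustration,
    estimated by quasi-maximum likelihood in the chapter)."""
    h = np.empty_like(eps)
    h[0] = eps.var()  # initialize at the sample variance
    for t in range(1, len(eps)):
        h[t] = c + a * eps[t - 1] ** 2 + b * h[t - 1]
    return h

def dcc_correlation(z_i, z_j, a_dcc, b_dcc):
    """Dynamic conditional correlation rho_t for one pair of standardized
    residuals, following the Q_t recursion of Eqs. 58.5-58.7."""
    q_bar = np.cov(z_i, z_j)  # unconditional covariance matrix of (z_i, z_j)
    Q = q_bar.copy()
    rho = np.empty(len(z_i))
    for t in range(len(z_i)):
        if t > 0:
            z = np.array([z_i[t - 1], z_j[t - 1]])
            Q = (1 - a_dcc - b_dcc) * q_bar + a_dcc * np.outer(z, z) + b_dcc * Q
        rho[t] = Q[0, 1] / np.sqrt(Q[0, 0] * Q[1, 1])  # Eq. 58.7
    return rho

# Illustrative use with simulated returns and assumed parameter values;
# a_dcc + b_dcc < 1 guarantees the mean reversion discussed above.
rng = np.random.default_rng(0)
r_oil, r_gold = rng.standard_normal(2500), rng.standard_normal(2500)
e_oil, e_gold = r_oil - r_oil.mean(), r_gold - r_gold.mean()
h_oil = garch11_variance(e_oil, c=0.01, a=0.14, b=0.83)
h_gold = garch11_variance(e_gold, c=0.01, a=0.07, b=0.93)
rho_t = dcc_correlation(e_oil / np.sqrt(h_oil), e_gold / np.sqrt(h_gold),
                        a_dcc=0.02, b_dcc=0.965)
```

A fitted series of this kind is what the realized DCC distributions in the tables and figures that follow summarize.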
58.4.2 Empirical Results of Dynamic Conditional Correlation

In this section, we present the estimation results of the models outlined above. The estimation results are presented in Table 58.1, which provides the dynamic correlations of returns across crude oil, gold, and the dollar/pound foreign exchange rate with each other. The estimated a and b for the three markets are listed in Tables 58.3 and 58.4. The likelihood ratio does not support the rejection of the null hypothesis of the scalar dynamic conditional correlation. It can be seen that the sum of the estimated coefficients in the variance equations (a + b) is close to 1 for all of the cases, implying that the volatility appears to be highly persistent. As for the Ljung-Box Q-statistic of the serial correlation of the residuals, the results show that the serial correlations in the error series are regarded as adequate. Calvert et al. (2006) observed that through the dynamic conditional correlation distribution, we can more fully understand the real impacts in international markets.
Table 58.3 DCC estimates: three markets^a (January 1, 1986 to December 31, 2007)

       a                     b
DCC    0.0202** (52.3020)    0.9651** (1,050.395)

^a The t-statistic is given in parentheses. **Denotes significance at the 0.05 level
Table 58.4 Estimation results from the DCC-GARCH model^a

        Mean equation          Variance equation
        Constant               a                     b                      Persistence   Ljung-Box Q-statistic
Oil     0.0004** (1.6549)      0.1399** (21.6431)    0.8306** (143.5303)    0.9705        185.7203**
Gold    1.54E-06** (0.1658)    0.0664** (15.0320)    0.9272** (202.223)     0.9936        61.8887**
FX      0.0001** (1.6045)      0.0417** (9.2628)     0.9479** (168.722)     0.9896        32.6628

^a The persistence level of the variance is calculated as the summation of the coefficients in the variance equations (a + b). The z-statistic is given in parentheses. The Ljung-Box Q-statistic tests the serial correlation of the residuals
**Denotes significance at the 0.05 level
It can also help with portfolio selection and risk management. In Fig. 58.2, which reports the results of the dynamic conditional correlation, the estimated correlation coefficients are time varying, reflecting some sort of portfolio shift between each two items. The correlation between crude oil and gold was estimated using the DCC integrated method, and the results, shown in Fig. 58.2a, are quite interesting. The correlations are found to be generally positive around 0.2 except for mid-1990 which turns out to be highly correlated with a correlation of around 0.6605. The possible interpretation for the high correlation is due to the Iraqi invasion of Kuwait and the Gulf War. The crude oil price jumped from about $15 to over $33 per barrel during that time, so that investors channeled their money into the gold market because of their fear of inflation. This fact accords with the “flight to quality” concept, which represents the action of investors moving their capital away from riskier or more volatile assets to the ones considered to be safer and less volatile. The correlation between gold and the dollar/pound exchange rate is shown in Fig. 58.2b for the integrated DCC in the last 20 years. Whereas for most of the period the correlations were between 0.1 and 0.3, there were two notable drops, where the stock market crashed in October 1987 and in late 2002, and we also find two peaks, one in the middle of 1990 and the other in late 1998 where the gold price dropped to $252 per ounce. Fig. 58.2c shows the correlation between crude oil and
Fig. 58.2 The time series of dynamic conditional correlations (DCC) among each pair of three markets: (a) daily DCC between crude oil and gold, (b) daily DCC between gold and dollar/pound, and (c) daily DCC between crude oil and dollar/pound
the dollar/pound that was estimated using the DCC integrated method. Except in the mid-1990s when they are highly correlated with a coefficient of 0.32, the correlation between crude oil and the dollar/pound is generally negative (with a coefficient of 0.08 at the beginning of 1986 and a coefficient of 0.16 at the beginning of 2003, respectively). Key issues relevant in financial economic applications include, for example, whether and how volatility and correlation move together. It is widely recognized among both finance academics and practitioners that they vary importantly over time (Andersen et al. 2001a, b; Engle 2002; Kasch and Caporin 2012). Such questions are difficult to answer using conventional volatility models, and so we wish to use the dynamic conditional correlation model to explain the phenomenon. From Fig. 58.3, the bivariate scatter plots of volatilities and correlations, it is hard to tell if there is a strong positive association between each of the two markets sampled. The correlation between two financial assets will affect the diversification of the portfolio. If two financial assets are highly negatively correlated, the effect of diversification will be significant, meaning that the portfolio can balance the returns. This is the so-called idea of not putting all one’s eggs in the same basket. According to the empirical data, the dynamic conditional correlation for the overall gold and dollar/pound (at 0.1986) or the overall oil and dollar/pound (at 0.0116) can moderately diversify investment risks and can thereby increase the rate of return. Investors can add commodities and their related derivatives to portfolios, in an effort to diversify away from traditional investments and assets. These results are in line with Capie et al. (2005) who found a negative relationship between the gold price and the sterling/dollar and yen/dollar foreign exchange rates and Nikos (2006) who found that the correlation between the dollar and gold is significantly negative. The conclusion we can draw from the results is that gold is by far the most relevant commodity in hedging against the US dollar. Capie et al. (2005) observed that gold has served as a hedge against fluctuations in the foreign exchange value of the dollar. Secondly, gold has become particularly relevant during times of US dollar weakness. In addition to that, the dynamic conditional correlation for the overall crude oil and gold markets is 0.0889, and a similar correlation was documented for the Brent crude oil and gold markets by Steinitz (2006). To characterize the distributions of dynamic conditional correlation among the sampled markets, the summary statistics of the probability distributions for DCC are shown in Table 58.5, and the associated distributions of DCC for the sampled markets are shown in Fig. 58.4. We can find that the average DCC between the crude oil and gold markets is 0.0889 with a standard deviation of 0.0916. The distribution of the daily DCC between the crude oil and gold markets reflects a slightly right-skewed (at 1.2021) and leptokurtic distribution (at 4.6799), implying that a positive DCC occurs more often than a negative DCC between the crude oil and gold markets. Furthermore, the average DCC between the gold and
Fig. 58.3 Bivariate scatter plots of volatilities and correlations. The scatter plots of the daily DCC between (a) crude oil and gold and (b) crude oil and dollar/pound against the crude oil volatility; the daily DCC between (c) gold and crude oil and (d) gold and dollar/pound against the gold volatility; and the daily DCC between (e) dollar/pound and crude oil and (f) dollar/pound and gold against the dollar/pound volatility have been shown
Table 58.5 Summary statistics of probability distributions of DCC for each pair of oil, gold, and FX

Panel A: The DCC distributions for the sampled markets from January 1, 1986 to December 31, 2007
                            Mean      Standard dev.   Max       Min       Skewness   Kurtosis
DCC between oil and gold    0.0889    0.0916          0.6605    0.1744    1.2021     4.6799
DCC between gold and FX     0.1986    0.1197          0.2019    0.5596    0.1529     0.3304
DCC between FX and oil      0.0116    0.0799          0.3349    0.3076    0.0201     0.8678

Panel B: The DCC distributions for the sampled markets from January 1, 1986 through July 31, 1990
                            Mean      Standard dev.   Max       Min       Skewness   Kurtosis
DCC between oil and gold    0.0910    0.0669          0.3003    0.0783    0.3752     0.1478
DCC between gold and FX     0.2567    0.1052          0.0246    0.5506    0.2751     0.0666
DCC between FX and oil      0.0030    0.0734          0.2308    0.3076    0.5582     1.9547

Panel C: The DCC distributions for the sampled markets from August 1, 1990 through August 31, 2001
                            Mean      Standard dev.   Max       Min       Skewness   Kurtosis
DCC between oil and gold    0.0720    0.1040          0.6605    0.1744    1.8367     6.5149
DCC between gold and FX     0.1258    0.0857          0.2019    0.3222    0.7307     0.8036
DCC between FX and oil      0.0004    0.0772          0.3349    0.2469    0.1648     1.3984

Panel D: The DCC distributions for the sampled markets from September 1, 2001 through December 31, 2007
                            Mean      Standard dev.   Max       Min       Skewness   Kurtosis
DCC between oil and gold    0.1168    0.0758          0.3270    0.0772    0.0521     0.4706
DCC between gold and FX     0.2826    0.1003          0.0688    0.5596    0.2534     0.4195
DCC between FX and oil      0.0369    0.0832          0.2413    0.2459    0.1549     0.0225
dollar/pound markets is 0.1986 with a standard deviation of 0.1197. The distribution of the daily DCC between the gold and dollar/British pound markets reflects a slightly left-skewed (at 0.1529) and platykurtic distribution (at 0.3304), implying that a negative DCC occurs more often than a positive DCC between gold and the dollar/pound. Moreover, the average DCC between the dollar/pound and crude oil is 0.0116 with a standard deviation of 0.0799. The distribution of daily DCCs between the dollar/pound and crude oil markets reflects a slightly left-skewed (at 0.0201) and platykurtic distribution (at 0.8678), implying that negative DCCs occur more often than positive DCCs between the dollar/pound and crude oil markets. According to the empirical results, we rank the sequence of the volatility in order to analyze the volatility effect in the correlation. We show the panel of volatility and dynamic conditional correlation in Tables 58.6, 58.7, and 58.8. We can clearly realize from the sample mean, from the daily DCC between the crude oil and gold markets, that higher volatility can accompany the larger DCC. However, the higher volatility can also accompany the smaller DCC between the gold and dollar/pound markets and the crude oil and dollar/pound markets. This is because the correlations between these markets are negative.
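A sketch of this decile-based summary, assuming the fitted conditional volatility of one market and the fitted DCC of one pair are available as aligned daily pandas Series (variable and function names are illustrative), is given below.

```python
import pandas as pd

def dcc_by_volatility_decile(vol: pd.Series, dcc: pd.Series) -> pd.DataFrame:
    """Group days into volatility deciles (0-10 %, ..., 90-100 %) and summarize
    the DCC within each group, in the spirit of Tables 58.6-58.8."""
    labels = [f"{10 * k}-{10 * (k + 1)} %" for k in range(10)]
    deciles = pd.qcut(vol, 10, labels=labels)
    grouped = dcc.groupby(deciles)
    return pd.DataFrame({
        "mean": grouped.mean(),
        "std": grouped.std(),
        "skewness": grouped.skew(),
        "kurtosis": grouped.apply(pd.Series.kurt),
    })

# Example: dcc_by_volatility_decile(oil_volatility, dcc_oil_gold) reproduces
# the layout of Table 58.6, Panel A, for the user's own fitted series.
```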
T.-L. Shih et al.
7
7
6
6 Probability Density
Probability Density
1634
5 4 3 2 1 0
5 4 3 2 1
−0.6 −0.4
−0.2
0 0
0.2
0.4
0.6
7
7
6
6 Probability Density
Probability Density
The DCC between oil and gold market
5 4 3 2 1 0
−0.6 −0.4 −0.2 0 0.2 0.4 0.6 The DCC between gold and FX market
WTI and GOLD GOLD and FX WTI and FX
5 4 3 2 1
−0.6 −0.4
−0.2
0
0.2
0.4
0.6
The DCC between oil and FX market
0 −0.6 −0.4 −0.2
0
0.2
0.4
0.6
0.8
DCC
Fig. 58.4 The distributions of dynamic conditional correlations among gold, oil, and FX from 1986 to 2007
To further quantify this volatility effect in correlation, we classify the volatility into two categories, low volatility days and high volatility days,1 and according to the results, we rank the sequence of the volatility. The group of low volatility days means that the volatility is less than the 10th percentile value and the group of high volatility days means that the volatility is greater than the 90th percentile value. The results are shown in Fig. 58.5a–c that reports the DCC distributions for low volatility days and high volatility days. It is found that some special characteristics of DCC exist among the oil, gold, and FX markets. First, distributions for low volatility days are obviously different from those for high volatility days. Those for low volatility days approximate leptokurtic distributions, whereas those for high volatility days approximate
1 Following Andersen et al. (2001b), the authors classify the days into two groups: low volatility days and high volatility days. The empirical results show that the distribution of correlations shifts rightward when volatility increases.
Table 58.6 DCC distributions of crude oil and gold markets

Panel A: DCC between crude oil and gold against the crude oil market volatility
Volatility   Mean     Standard dev.   Skewness   Kurtosis
0–10 %       0.0779   0.0379          0.6205     1.8582
10–20 %      0.0785   0.0589          0.7672     0.6458
20–30 %      0.0887   0.0680          0.3950     0.1434
30–40 %      0.0924   0.0754          0.6638     0.9140
40–50 %      0.0904   0.0769          0.4299     0.8993
50–60 %      0.0900   0.0825          0.1764     0.5715
60–70 %      0.0839   0.0820          0.3908     0.7808
70–80 %      0.0735   0.0872          0.4688     1.4680
80–90 %      0.0908   0.1273          1.2185     2.3300
90–100 %     0.1212   0.1530          0.8691     1.2595

Panel B: DCC between crude oil and gold against the gold market volatility
Volatility   Mean     Standard dev.   Skewness   Kurtosis
0–10 %       0.0537   0.0516          0.1667     0.6070
10–20 %      0.0817   0.0567          0.0538     1.4446
20–30 %      0.0790   0.0704          0.4538     1.2475
30–40 %      0.0908   0.0734          0.7867     1.8984
40–50 %      0.0842   0.0879          0.7319     2.4974
50–60 %      0.0723   0.0918          0.2724     1.5955
60–70 %      0.0838   0.0899          0.6821     1.9649
70–80 %      0.0868   0.0880          0.9889     3.9028
80–90 %      0.1002   0.0996          0.9919     3.0296
90–100 %     0.1579   0.1386          0.9773     1.9451
platykurtic distributions. Secondly, the average DCCs of the high volatility days are greater than the average DCCs of the low volatility days for the DCCs between crude oil and gold, implying that the correlation between gold and oil increases with volatility. Furthermore, the distribution of DCCs shifts rightward when volatility increases. Similar results are found for equity returns as reported by Solnik et al. (1996) and in realized exchange rate returns by Andersen et al. (2001b). Thirdly, the average DCCs for the high volatility days are smaller than the average DCCs for lower volatility days across gold and the dollar/pound, and oil and the dollar/pound. This implies that the correlation between gold (oil) and foreign exchange rates decreases with volatility, and the distribution of DCCs shifts leftward when volatility increases. Finally, the standard deviations of the distributions for high volatility days are obviously greater than the standard deviations of the distributions for low volatility days.
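The classification itself is straightforward to reproduce; a minimal sketch, assuming the same fitted volatility and DCC series as above, is:

```python
import pandas as pd

def split_low_high(vol: pd.Series, dcc: pd.Series, lower=0.10, upper=0.90):
    """Low volatility days fall below the 10th percentile of volatility and
    high volatility days above the 90th percentile (the cutoffs used in the
    chapter); return the DCC observations in each group."""
    lo, hi = vol.quantile(lower), vol.quantile(upper)
    return dcc[vol < lo], dcc[vol > hi]

def describe(group: pd.Series) -> dict:
    """Moments used to compare the two realized DCC distributions."""
    return {"mean": group.mean(), "std": group.std(),
            "skew": group.skew(), "kurt": group.kurt()}

# A rightward shift of the high-volatility distribution relative to the
# low-volatility one (e.g., for the oil-gold pair) indicates that the
# correlation rises with volatility, as in Fig. 58.5a.
# low, high = split_low_high(oil_volatility, dcc_oil_gold)
# print(describe(low)); print(describe(high))
```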
Table 58.7 DCC distributions of gold and dollar/pound markets

Panel A: DCC between gold and FX against the gold market volatility
Volatility   Mean     Standard dev.   Skewness   Kurtosis
0–10 %       0.1507   0.0452          0.2642     0.2864
10–20 %      0.1468   0.0811          0.1898     0.1166
20–30 %      0.2048   0.1069          0.0397     0.2823
30–40 %      0.2209   0.1039          0.3412     0.0578
40–50 %      0.2105   0.1184          0.2681     0.3950
50–60 %      0.2070   0.1131          0.0924     0.2283
60–70 %      0.1984   0.1239          0.0848     0.2003
70–80 %      0.2188   0.1411          0.2455     0.1963
80–90 %      0.2231   0.1425          0.2051     0.0326
90–100 %     0.2064   0.1498          0.3059     0.0513

Panel B: DCC between gold and FX against the FX market volatility
Volatility   Mean     Standard dev.   Skewness   Kurtosis
0–10 %       0.1419   0.0621          0.1926     0.8518
10–20 %      0.1618   0.0861          0.1325     0.1298
20–30 %      0.1839   0.0943          0.0866     0.0156
30–40 %      0.1968   0.1036          0.5927     0.7147
40–50 %      0.2012   0.1163          0.4529     1.0610
50–60 %      0.2314   0.1150          0.3684     0.9286
60–70 %      0.2189   0.1239          0.2181     0.1255
70–80 %      0.2201   0.1217          0.1057     0.1704
80–90 %      0.2170   0.1510          0.2916     0.4897
90–100 %     0.2170   0.1558          0.1548     0.6496
58.4.3 Volatility Threshold Dynamic Conditional Correlation

To further check if different subperiods have various patterns, we utilize the volatility threshold model addressed by Kasch and Caporin (2012)² to examine whether increasing volatility (exceeding a specified threshold) is associated with an increasing correlation. The volatility threshold DCC model is specified as Eq. 58.9:

$$q_{ij,t} = \left(1 - a_2 - b_2\right)\bar{q}_{ij} - \gamma_i \gamma_j \bar{v}_{ij} + a_2\, e_{i,t-1}\, e_{j,t-1} + b_2\, q_{ij,t-1} + \gamma_i \gamma_j\, v_{ij,t} \qquad (58.9)$$
2 Kasch and Caporin (2012) extended the multivariate GARCH dynamic conditional correlation of Engle to analyze the relationship between the volatilities and correlations. The empirical results indicated that high volatility levels significantly affect the correlations of the developed markets, while high volatility does not seem to have a direct impact on the correlations of the transition blue chip indices with the rest of the markets. It is easy to see that the volatility and correlation move together.
Table 58.8 DCC distributions of crude oil and dollar/pound markets

Panel A: DCC between crude oil and FX against the FX market volatility
Volatility   Mean     Standard dev.   Skewness   Kurtosis
0–10 %       0.0147   0.0530          0.2458     0.3703
10–20 %      0.0142   0.0651          0.4224     0.1734
20–30 %      0.0140   0.0695          0.0757     0.2648
30–40 %      0.0028   0.0804          0.0465     0.2710
40–50 %      0.0108   0.0832          0.0264     0.1784
50–60 %      0.0206   0.0720          0.1618     0.0328
60–70 %      0.0081   0.0789          0.5080     1.0521
70–80 %      0.0081   0.0773          0.4689     1.6993
80–90 %      0.0161   0.1042          0.1100     0.5576
90–100 %     0.0068   0.1019          0.7085     0.2311

Panel B: DCC between crude oil and FX against the crude oil market volatility
Volatility   Mean     Standard dev.   Skewness   Kurtosis
0–10 %       0.0001   0.0453          0.5049     0.7991
10–20 %      0.0045   0.0609          0.1145     0.1341
20–30 %      0.0016   0.0675          0.2397     0.2455
30–40 %      0.0087   0.0715          0.2668     0.1219
40–50 %      0.0164   0.0736          0.2498     0.1421
50–60 %      0.0228   0.0743          0.2262     0.0489
60–70 %      0.0174   0.0732          0.0187     0.2162
70–80 %      0.0167   0.0811          0.0200     0.0117
80–90 %      0.0028   0.0934          0.0397     0.0698
90–100 %     0.0310   0.1234          0.4996     0.1905
where $v_t$ is a matrix of dummy variables defined as

$$v_{ij,t} = \begin{cases} 1 & \text{if } h_{i,t} > f_{h_i}(k) \text{ or } h_{j,t} > f_{h_j}(k) \\ 0 & \text{otherwise} \end{cases} \qquad (58.10)$$

where $f_{h_i}(k)$ is the $k$th fractile of the volatility series $h_{i,t}$. When thresholds are found, the whole period will be divided into various subperiods based on these thresholds. This separation helps detect any changes in investor behavior after crucial events. It is then known whether a time horizon is a key factor influencing the patterns of return and volatility. Table 58.9 presents the estimation results of the volatility threshold DCC models. The estimation was based on various volatility threshold levels at 50 %, 75 %, 90 %, and 95 %. The results in Table 58.9 show that the correlation between the oil and gold prices is significantly affected by the volatility of oil at the 50 %, 75 %, and 90 %
Fig. 58.5 (a) The distributions of DCC between crude oil and gold on low and high volatility days. (b) The distributions of DCC between gold and dollar/pound on low and high volatility days. (c) The distributions of DCC between oil and dollar/pound on low and high volatility days
(with the exception of 95 %) thresholds. Interestingly, these estimated thresholds were quite consistent with the real events of the First Gulf War in 1990 and the 911 attack in 2001. We then separate the period into three subperiods to further examine whether the investors’ behaviors change after the events.
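A minimal sketch of this threshold mechanism, assuming standardized residuals and fitted conditional variances for one pair of markets, is given below. The parameter names follow Table 58.9 (a2, b2, and the product γ_i·γ_j), the values shown in the usage comment echo the 90 % column of Panel A, and the intercept treatment follows the reconstruction of Eq. 58.9 above; in practice all parameters come from quasi-maximum likelihood estimation.

```python
import numpy as np

def threshold_dummy(h_i, h_j, k=0.90):
    """Eq. 58.10: v_{ij,t} = 1 when either market's conditional variance
    exceeds its k-th quantile threshold, and 0 otherwise."""
    thr_i, thr_j = np.quantile(h_i, k), np.quantile(h_j, k)
    return ((h_i > thr_i) | (h_j > thr_j)).astype(float)

def vt_dcc_q(z_i, z_j, v, a2, b2, g_ij):
    """Volatility threshold DCC recursion (Eq. 58.9) for the off-diagonal
    element q_{ij,t}; g_ij stands for the product gamma_i * gamma_j."""
    q_bar = np.mean(z_i * z_j)  # unconditional covariance of the standardized residuals
    v_bar = v.mean()
    q = np.empty(len(z_i))
    q[0] = q_bar
    for t in range(1, len(z_i)):
        q[t] = ((1 - a2 - b2) * q_bar - g_ij * v_bar
                + a2 * z_i[t - 1] * z_j[t - 1] + b2 * q[t - 1] + g_ij * v[t])
    return q  # rescaling by q_{ii,t} and q_{jj,t} (as in Eq. 58.7) yields the correlation

# Illustrative use with a 90 % threshold:
# v = threshold_dummy(h_oil, h_gold, k=0.90)
# q_oil_gold = vt_dcc_q(z_oil, z_gold, v, a2=0.0187, b2=0.9362, g_ij=0.0206)
```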
Table 58.9 The volatility threshold dynamic conditional correlation^a

Panel A: Crude oil and gold
               50 %               75 %               90 %               95 %
a2             0.0171 (1.956)     0.0146 (1.870)     0.0187 (2.072)     0.0229 (2.424)
b2             0.9546 (36.377)    0.9548 (41.162)    0.9362 (27.21)     0.9233 (23.705)
γWTI·γGOLD     0.0075 (2.045)     0.0116 (2.300)     0.0206 (2.376)     0.0253 (1.636)

Panel B: Gold and dollar/pound
               50 %               75 %               90 %               95 %
a2             0.0161 (3.016)     0.0150 (3.052)     0.0150 (2.938)     0.0149 (2.894)
b2             0.9784 (108.063)   0.9808 (124.94)    0.9804 (120.37)    0.9803 (119.94)
γGOLD·γFX      0.00032 (0.190)    0.0012 (0.805)     0.0023 (0.932)     0.0046 (1.079)

Panel C: Crude oil and dollar/pound
               50 %               75 %               90 %               95 %
a2             0.0131 (2.294)     0.0121 (1.825)     0.0109 (1.633)     0.0126 (2.013)
b2             0.9519 (33.260)    0.9580 (28.286)    0.9639 (33.633)    0.9518 (28.095)
γWTI·γFX       0.00006 (0.016)    0.0015 (0.418)     0.0034 (0.785)     0.0099 (0.991)

^a This table presents the quasi-maximum likelihood estimates of the volatility threshold dynamic conditional correlation. The t-statistics are given in parentheses
58.4.4 Does Investors' Behavior Change over Subperiods?

The three subperiods based on our estimated thresholds are before the First Gulf War (January 1, 1986 to July 31, 1990), after the First Gulf War up to the 911 attack (August 1, 1990 to August 31, 2001), and after the 911 attack (September 1, 2001 to December 31, 2007). We then examine the dynamic co-movement in each pair of markets in the various subperiods (Rigobon and Sack 2005; Guidi et al. 2007³). Our sampled period covers economic hardship and soaring energy prices. Soaring energy prices make gold an attractive hedging asset against inflation in that a positive correlation with oil is expected over time, especially after the 911 event. The evidence in Table 58.10 shows that oil and gold are highly

3 Guidi et al. (2007) examined the impact of relevant US decisions on oil spot price movements from January 1986 to December 2005. They identified the following conflict periods: the Iran-Iraq conflict, January 1985 until July 1988; Iraq's invasion of Kuwait, August 1990 until February 1991; and the US-led forces' invasion of Iraq, March 2003 until December 2005.
Table 58.10 Simple correlation matrix of oil, gold, and dollar/pound markets among the three subperiods

Panel A: The correlation coefficients between crude oil, gold, and dollar/pound markets from January 1, 1986 through July 31, 1990
        Oil       Gold      FX
Oil     1
Gold    0.1648    1
FX      0.1517    0.5228    1

Panel B: The correlation coefficients between crude oil, gold, and dollar/pound markets from August 1, 1990 through August 31, 2001
        Oil       Gold      FX
Oil     1
Gold    0.2465    1
FX      0.0628    0.2096    1

Panel C: The correlation coefficients between crude oil, gold, and dollar/pound markets from September 1, 2001 through December 31, 2007
        Oil       Gold      FX
Oil     1
Gold    0.9264    1
FX      0.8124    0.8142    1
correlated with a high coefficient of 0.9264 between September 1, 2001 and December 31, 2007 (after the 911 event). In the meantime, the correlation between gold and the dollar/pound is 0.8142, and between oil and the dollar/pound is 0.8124, showing that fears of a depreciation in the US dollar pushed commodity prices up significantly. The results of Panels B, C, and D in Table 58.5 along with Figs. 58.6 and 58.7 show that the DCCs between oil and gold are all positive in the three subperiods (with coefficients of 0.0910, 0.0720, and 0.1168, respectively). Obviously, higher oil prices spark inflationary concerns and make gold a value reserve for wealth. By contrast, the correlation between oil and the dollar/pound has been negative, and a DCC of −0.0369 is shown after the 911 attack. In this period, the oil price increased from a low of US$19 per barrel in late January 2002 to US$96 per barrel in late December 2007. Meanwhile, the dollar conversely tumbled 29 % from US$0.7 to US$0.5 per pound. Historical data also show that the prices of oil and gold have been rising over time. The increase in the price of gold was a reflection of the falling US dollar. The evidence confirms that the DCCs between gold and the dollar/pound in the three subperiods are all negative (with average DCCs of −0.2567, −0.1258, and −0.2826 in the first, second, and third periods, respectively). Generally speaking, during the subperiods of market crises, increasingly high correlations in commodity prices were observed, with oil and gold moving in the same direction and the dollar/pound moving in opposite directions.
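A small sketch of this subperiod split, assuming a DataFrame of daily returns indexed by date with illustrative column names, is:

```python
import pandas as pd

# Subperiod boundaries used in the chapter (before the First Gulf War,
# between the Gulf War and the 911 attack, and after the 911 attack).
SUBPERIODS = {
    "1986/01/01-1990/07/31": ("1986-01-01", "1990-07-31"),
    "1990/08/01-2001/08/31": ("1990-08-01", "2001-08-31"),
    "2001/09/01-2007/12/31": ("2001-09-01", "2007-12-31"),
}

def subperiod_correlations(returns: pd.DataFrame) -> dict:
    """Unconditional correlation matrix of the oil, gold, and FX returns
    within each subperiod, in the spirit of Table 58.10."""
    return {label: returns.loc[start:end].corr()
            for label, (start, end) in SUBPERIODS.items()}

# Usage: subperiod_correlations(returns) with a DataFrame whose columns are,
# e.g., "oil", "gold", and "fx" (hypothetical names).
```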
Fig. 58.6 The distributions of dynamic conditional correlations among each pair of the three markets over three subperiods: (a) January 1, 1986 to July 31, 1990, (b) August 1, 1990 to August 31, 2001, and (c) September 1, 2001 to December 31, 2007
Fig. 58.7 The distributions of dynamic conditional correlations for (a) crude oil and gold, (b) gold and dollar/pound, (c) crude oil and dollar/pound over three subperiods
58.5 Conclusions and Implications
Using the dynamic conditional correlation model, we have estimated the cross correlation and volatility among crude oil, gold, and dollar/pound currencies from 1986 to 2007. After exploring the time-varying correlations and realized distributions, several regularities have been found that help illustrate the characteristics of the crude oil, gold, and currency markets. First, the correlation coefficients between each pair of the three assets are found to be time varying instead of constant. As such, besides considering the mean and standard deviation of the underlying assets, investors need to follow the co-movement in the relevant assets in order to make better portfolio hedging and risk management decisions across these assets. The results of the dynamic correlation coefficients between the gold and dollar/pound show that gold is by far the most relevant commodity in terms of serving as a hedge against the US dollar. These results are in line with the reports suggested by Nikos (2006) and Capie et al. (2005). Our findings are helpful in terms of arriving at a more optimal allocation of assets based on their multivariate returns and associated risks. Besides, the distributions of low volatility days are found to approximate leptokurtic distributions in the gold, oil, and dollar/pound markets, whereas the high volatility days approximate platykurtic distributions. Furthermore, the DCC between oil and gold increases with volatility, indicating that the distribution of DCC shifts rightward when volatility increases. By contrast, the DCCs between gold and the dollar/pound and crude oil and the dollar/pound decrease with volatility. Our findings in terms of oil and gold are consistent with the reports of Solnik et al. (1996) and Andersen et al. (2001b), who use different approaches. Moreover, by estimating the volatility threshold dynamic conditional correlation model addressed by Kasch and Caporin (2012), we find that high volatility values (exceeding some specified thresholds) are associated with an increase in correlation values in various subperiods. Remarkably, investors' behaviors are seen to have changed in different subperiods. During periods of market turmoil, such as the First Gulf War in 1990 and the 911 terror attack in 2001, an increase in the correlation between the prices of oil and gold, as well as a decrease in the correlation between the oil (gold) and dollar/pound currencies, is observed. These behaviors make gold an attractive asset against major currencies for value-preserving purposes. For market participants with a long-term hedging perspective, our results provide useful information on asset allocation across commodity and currency markets during market turmoil. Acknowledgments This chapter was presented at the Seventh International Business Research Conference in Sydney, Australia, and the Taiwan Finance Association Annual Meeting in Taichung, Taiwan. The authors are grateful for the helpful and suggestive comments from Ken Johnson, C. L. Chiu, Ming-Chi Lee, and other participants. The authors also appreciate the financial grants for attending the conference from the National Science Council.
References
Andersen, T. G., Bollerslev, T., Diebold, F. X., & Ebens, H. (2001a). The distributions of realized stock return volatility. Journal of Financial Economics, 61, 43–76.
Andersen, T. G., Bollerslev, T., Diebold, F. X., & Labys, P. (2001b). The distribution of realized exchange rate volatility. Journal of the American Statistical Association, 96(453), 42–55.
Andersen, T., Bollerslev, T., Diebold, F., & Vega, C. (2007). Real time price discovery in global stock, bond and foreign exchange markets. Journal of International Economics, 73, 251–277.
Baur, D. G., & Lucey, B. M. (2010). Is gold a hedge or a safe haven? An analysis of stocks, bonds and gold. Financial Review, 45(2), 217–229.
Baur, D. G., & McDermott, T. K. (2010). Is gold a safe haven? International evidence. Journal of Banking and Finance, 34(8), 1886–1898.
Bollerslev, T. (1986). Generalized autoregressive conditional heteroscedasticity. Journal of Econometrics, 31, 307–327.
Bollerslev, T. (1990). Modeling the coherence in short-run nominal exchange rates: A multivariate generalized ARCH. Review of Economics and Statistics, 72(3), 498–505.
Boyer, M. M., & Fillion, D. (2007). Common and fundamental factors in stock returns of Canadian oil and gas companies. Energy Economics, 29, 428–453.
Calvert, L. E., Fischer, A. J., & Thompson, S. B. (2006). Volatility comovement: A multifrequency approach. Journal of Econometrics, 131(1–2), 179–215.
Capie, F., Mills, T. C., & Wood, G. (2005). Gold as a hedge against the dollar. Journal of International Financial Markets, Institutions and Money, 15(4), 343–352.
Chiang, T. C., Jeon, B. N., & Li, H. M. (2007a). Dynamic correlation analysis of financial contagion: Evidence from Asian markets. Journal of International Money and Finance, 26(7), 1206–1228.
Chiang, T. C., Tan, L., & Li, H. M. (2007b). Empirical analysis of dynamic correlations of stock returns: Evidence from Chinese A-share and B-share markets. Quantitative Finance, 7(6), 651–667.
Chiang, T. C., Yu, H. C., & Wu, M. C. (2009). Statistical properties, dynamic conditional correlation, scaling analysis of high-frequency intraday stock returns: Evidence from Dow-Jones and NASDAQ indices. Physica A, 388(8), 1555–1570.
Doong, S. C., Yang, S. Y., & Wang, A. T. (2005). The dynamic relationship and pricing of stocks and exchange rates: Empirical evidence from Asian emerging markets. Journal of the American Academy of Business, 7(1), 118–123.
Engle, R. (1982). Autoregressive conditional heteroskedasticity with estimates of the variance of U.K. inflation. Econometrica, 50, 987–1008.
Engle, R. (2002). Dynamic conditional correlation: A simple class of multivariate GARCH models. Journal of Business & Economic Statistics, 20(3), 339–350.
Engle, R., Ito, T., & Lin, W. L. (1994). Do bulls and bears move across borders? International transmission of stock returns and volatility. Review of Financial Studies, 7, 507–538.
Guidi, M., Russell, A., & Tarbert, H. (2007). The efficiency of international oil markets in incorporating US announcements during conflict and non-conflict periods. Petroleum Accounting and Financial Management Journal, 26(2), 67–86.
Kasch, M., & Caporin, M. (2012). Volatility threshold dynamic conditional correlations: An international analysis. SSRN: http://ssrn.com/abstract=968233
Kaul, A., & Sapp, S. (2006). Y2K fears and safe haven trading of the US dollar. Journal of International Money and Finance, 25, 760–779.
Lanza, A., Manera, M., & McAleer, M. (2006). Modeling dynamic conditional correlations in WTI oil forward and futures returns. Finance Research Letters, 3, 114–132.
Ng, A. (2000). Volatility spillover effects from Japan and the US to the Pacific-Basin. Journal of International Money and Finance, 19(2), 207–233.
Nikos, K. (2006). Commodity prices and the influence of the US dollar. World Gold Council.
Pérez-Rodríguez, J. (2006). The Euro and other major currencies floating against the U.S. dollar. Atlantic Economic Journal, 34, 367–384.
Ranaldo, A., & Soderlind, P. (2009). Safe haven currencies. Review of Finance, 14(3), 385–407.
Rigobon, R., & Sack, B. (2005). The effect of war on US financial markets. Journal of Banking and Finance, 29(7), 1769–1789.
Solnik, B., Boucrelle, C., & Le Fur, Y. (1996). International market correlation and volatility. Financial Analysts Journal, 52(5), 17–34.
Steinitz, J. (2006). Gold: Volatility, correlation and teacups. World Gold Council.
Tastan, H. (2006). Estimating time-varying conditional correlations between stock and foreign exchange markets. Physica A, 360, 445–458.
Upper, C. (2000). How safe was the "safe-haven"? Financial market liquidity during the 1998 turbulences. Discussion paper 1/00, Deutsche Bundesbank.
Pre-IT Policy, Post-IT Policy, and the Real Sphere in Turkey
59
Ahmed Hachicha and Cheng-Few Lee
Contents
59.1 Introduction .......................................................................... 1648
59.2 Empirical Methodology ........................................................ 1649
  59.2.1 SVEC Model and Forecasting Technique ........................ 1649
  59.2.2 Matrix B Definition ....................................................... 1650
  59.2.3 The Confidence Interval ................................................ 1650
  59.2.4 Forecasting with SVEC Processes .................................. 1651
59.3 Data Definition ..................................................................... 1652
59.4 Empirical Analysis ................................................................ 1655
  59.4.1 Pre-IT (Before IT Adoption) .......................................... 1657
  59.4.2 Post-IT (After IT Adoption) ........................................... 1657
59.5 Interest Rate Shock and Real Sphere ..................................... 1659
  59.5.1 Pre-IT (Before IT Adoption) .......................................... 1661
  59.5.2 Post-IT (After IT Adoption) ........................................... 1664
59.6 Conclusion ........................................................................... 1665
Appendix 1: Joint Forecast Error Covariance Matrix, Pre-IT, and Post-IT Forecasting Results ..... 1665
Appendix 2: Pre-IT and Post-IT Cointegration Tests ..................... 1666
Appendix 3: Pre-IT Macro Variables Responses ............................ 1666
Appendix 4: Post-IT Macro Variables Responses ........................... 1666
Appendix 5: Impulse Responses and Confidence Intervals ............. 1666
References ................................................................................... 1666
A. Hachicha
Department of Economic Development, Faculty of Economics and Management of Sfax, University of Sfax, Sfax, Tunisia
e-mail: [email protected]
C.-F. Lee (*)
Department of Finance and Economics, Rutgers Business School, Rutgers, The State University of New Jersey, Piscataway, NJ, USA
Graduate Institute of Finance, National Chiao Tung University, Hsinchu, Taiwan
e-mail: [email protected]; [email protected]
C.-F. Lee, J. Lee (eds.), Handbook of Financial Econometrics and Statistics, DOI 10.1007/978-1-4614-7750-1_59, © Springer Science+Business Media New York 2015
Abstract
We estimate two SVECM (structural vector error correction) models for the Turkish economy, imposing short-run and long-run restrictions, in order to examine the behavior of the real sphere under the pre-IT policy (before inflation-targeting adoption) and the post-IT policy (after inflation-targeting adoption). The responses reveal that an expansionary interest policy shock leads to a decrease in the price level, a fall in output, an appreciation of the exchange rate, and an improvement in share prices in the very short run for most of the pre-IT period. The Central Bank of the Republic of Turkey (CBT) has stabilized output fluctuations in the short run while maintaining a medium-run inflation target since January 2006. One of the most important results of this study is that the impact of a monetary policy shock on the real sphere is insignificant during the post-IT policy.
Keywords
SVECM models • Turkish economy • Pre-IT policy • Post-IT policy • Real sphere
59.1 Introduction
The effectiveness of inflation targeting on the real sphere has recently been the subject of a vast and ever-growing literature. Although inflation targeting is well defined in theory, there is still a gap in identifying the repercussions of the inflation-targeting framework on macroeconomic variables, and this remains one of the most interesting subjects to watch nowadays. This chapter offers a new contribution to the empirical literature: we present a method that allows us to uncover the relation between inflation targeting and the real sphere in Turkey. Encouraged by the successful experience of some neighboring countries with the adoption of an inflation-targeting regime, Turkey adopted such a monetary policy to overcome one of the deepest crises of its economy in 2001, especially when the Central Bank of the Republic of Turkey was obliged to let its currency float. In this study, we analyze the following research questions: "How can the real sphere react to a pre-inflation-targeting regime?" "Does inflation targeting enhance output growth?" "Are we going to find similar results regarding the effectiveness of the pre-IT policy and the IT policy?" To deal with this objective, we investigate a structural vector error correction model (SVECM) analysis with long-run and short-run restrictions for the IT policy and the pre-IT policy and draw conclusions by examining the responses of the macroeconomic data to a monetary policy shock in each period. Monetary transmission mechanisms based on a structural vector error correction model were studied by King et al. (1991), Ehrmann (1998), Lütkepohl et al. (1998), Ramaswamy and Sloek (1998), Cecchetti (1995), Debondt (2000), Clements et al. (2001),
Kakes and Sturm (2002), Nadenichek (2006), Hachicha and Chaabane (2007), Ivrendi and Guloglu (2010), Lucke (2010), and Bhuiyan (2012). The structure of this chapter is as follows: Sect. 59.2 provides the empirical methodology and outlines the SVECM technique. Section 59.3 defines the data. Section 59.4 presents the estimation of the long-run and short-run matrices of the SVECM technique before and after IT adoption. Section 59.5 provides a comparative analysis of the empirical results, studying the impact of an interest rate shock on output, the inflation rate, share prices, and exchange rates under the pre-IT and post-IT policies. Section 59.6 makes some concluding remarks.
59.2 Empirical Methodology
To address these research questions, we resort to a structural vector error correction model (SVECM) technique with contemporaneous and long-run restrictions developed by Breitung et al. (2004). The main advantage of adopting a structural vector error correction model instead of a structural vector autoregressive model is that it gives us the opportunity to use cointegration restrictions, which impose constraints on the long-run effects of the permanent shocks (Lütkepohl 2005). In what follows, we explore the SVEC model and the forecasting technique.
59.2.1 SVEC Model and Forecasting Technique

\Delta y_t = \alpha \beta^{*\prime} (y_{t-1}^{\prime}, D_{t-1}^{1\prime})^{\prime} + \Gamma_1 \Delta y_{t-1} + \ldots + \Gamma_p \Delta y_{t-p} + C D_t + u_t    (59.1)

where y_t = (y_{1t}, …, y_{Kt})′ is a vector of K endogenous variables, D^1_{t−1} contains all deterministic terms included in the cointegration relations, and D_t contains all remaining deterministic variables (constant, seasonal dummies). The residual vector u_t is assumed to be a K-dimensional, unobservable, zero-mean white noise process with positive definite covariance matrix E(u_t u_t′) = Σ_u. The parameter matrices α and β have dimensions (K × r) and have rank r. They specify the long-run part of the model, with β containing the cointegrating relations and α representing the loading coefficients. The column dimension of β* is also r, and its row dimension, which corresponds to the dimension of (y_{t−1}′, D^{1′}_{t−1})′, will be denoted by K*; hence, β* is a (K* × r) matrix. The cointegrating rank has to be in the range 1 ≤ r ≤ K − 1. The SVEC (structural vector error correction) model identifies the shocks to be traced in an impulse response analysis by imposing restrictions on the matrix of long-run effects of the shocks and on the matrix B of contemporaneous effects of the shocks.
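Purely as an illustrative sketch (not the code used by the authors, who work in JMulTi), the reduced-form counterpart of Eq. 59.1 can be estimated with the Johansen procedure and a VECM along the following lines, assuming the statsmodels interface in Python and a T × 5 data array y of the five monthly series in logs; the array name and option choices are illustrative only:

```python
import numpy as np
from statsmodels.tsa.vector_ar.vecm import coint_johansen, VECM

# y: T x 5 array of MMR_log, ExRate_log, Share_prices_log, IP_log, CPI_log (illustrative name)
def estimate_reduced_form(y, lags=2, rank=2):
    # Johansen trace test with a linear trend, mirroring the cointegration tests of Sect. 59.4.
    jo = coint_johansen(y, det_order=1, k_ar_diff=lags)
    print("trace statistics:", jo.lr1)      # compare with the critical values in jo.cvt
    # Reduced-form VECM with the selected cointegration rank r.
    fit = VECM(y, k_ar_diff=lags, coint_rank=rank, deterministic="co").fit()
    return fit.alpha, fit.beta, fit.gamma   # loadings, cointegration vectors, short-run matrices
```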
59.2.2 Matrix B Definition

According to the theorem in Johansen (1995), the VEC model has the following moving average representation:

y_t = \Xi \sum_{i=1}^{t} u_i + \Xi^{*}(L) u_t + y_0^{*}    (59.2)

where y_t = (y_{1t}, …, y_{Kt})′ is the vector of K observable series and y_0* contains the initial values. The long-run effects of the shocks are represented by the first term in Eq. 59.2, Ξ Σ_{i=1}^{t} u_i, which captures the common stochastic trends from time 1 until time t. The matrix B is defined such that u_t = B e_t, and, given the reduced form, the matrix of long-run effects of the u_t residuals is

\Xi = \beta_{\perp} \left[ \alpha_{\perp}^{\prime} \left( I_K - \sum_{i=1}^{p-1} \Gamma_i \right) \beta_{\perp} \right]^{-1} \alpha_{\perp}^{\prime}    (59.3)

Hence the long-run effects of the structural shocks e_t are given by ΞB. Since rk(Ξ) = K − r, it follows that ΞB has rank K − r. Thus the matrix ΞB can have at most r columns of zeros; on that account, there can be at most r shocks with transitory effects, and at least (K − r) shocks have permanent effects. Due to the reduced rank of the matrix, each column of zeros stands for only (K − r) independent restrictions. (K − r)(K − r − 1)/2 additional restrictions are needed to exactly identify the permanent shocks, and r(r − 1)/2 additional contemporaneous restrictions identify the transitory shocks.
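As a purely numerical illustration of Eq. 59.3 (a minimal sketch, assuming the reduced-form estimates α, β, and Γ_i are already available; it is not the authors' code), the long-run impact matrix Ξ and the long-run effects ΞB can be computed as follows:

```python
import numpy as np

def long_run_matrix(alpha, beta, gammas):
    """Xi of Eq. (59.3): beta_perp [alpha_perp' (I_K - sum_i Gamma_i) beta_perp]^{-1} alpha_perp'."""
    K, r = alpha.shape
    # Orthogonal complements: the last K - r left singular vectors span the null space of alpha'/beta'.
    alpha_perp = np.linalg.svd(alpha)[0][:, r:]
    beta_perp = np.linalg.svd(beta)[0][:, r:]
    middle = alpha_perp.T @ (np.eye(K) - sum(gammas)) @ beta_perp
    return beta_perp @ np.linalg.inv(middle) @ alpha_perp.T

# Long-run effects of the structural shocks e_t (with u_t = B e_t): Xi @ B.
# This product has rank K - r, so at most r of its columns can be restricted to zero.
```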
59.2.3 The Confidence Interval

The impulse responses are computed from the estimated VAR coefficients, and the Hall percentile interval is chosen to build confidence intervals (CI) that reflect the estimation uncertainty:

CI = \left[ \hat{\phi} - t_{(1-\gamma/2)}^{*}, \; \hat{\phi} - t_{(\gamma/2)}^{*} \right]    (59.4)

According to Hall (1992), t*_{(γ/2)} and t*_{(1−γ/2)} are the γ/2 and the (1 − γ/2) quantiles, respectively, of the bootstrap distribution of (\hat{\phi}^{*} - \hat{\phi}).
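A minimal sketch of Eq. 59.4, assuming that the point estimate and an array of bootstrap replicates of the impulse response coefficient are already available (illustrative only):

```python
import numpy as np

def hall_percentile_ci(phi_hat, phi_boot, gamma=0.05):
    """Hall (1992) percentile interval of Eq. (59.4): the gamma/2 and (1 - gamma/2)
    quantiles of the bootstrap distribution of (phi* - phi_hat) are subtracted from phi_hat."""
    t_star = np.asarray(phi_boot) - phi_hat
    return (phi_hat - np.quantile(t_star, 1 - gamma / 2),
            phi_hat - np.quantile(t_star, gamma / 2))
```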
Table 59.1 Forecasting 1st undifferenced series (pre-IT period)
Reference: Lütkepohl (1993), IMTSA, 2nd ed., ch. 5.2.6, ch. 10.5
CI coverage: 0.95; forecast horizon: 1 period; standard confidence intervals; y(N) in levels used in the forecast (levels at 2005 M12 = 0.0000 for MMR_log, ExRate_log, Share_prices_log, IP_log, CPI_log)

Variable           Time      Forecast   Lower CI   Upper CI   +/-
ExRate_log         2006 M1   0.3098     0.1963     0.4234     0.1135
Share_prices_log   2006 M1   4.8843     4.6852     5.0834     0.1991
IP_log             2006 M1   4.7326     4.5883     4.8769     0.1443
CPI_log            2006 M1   4.6779     4.6504     4.7055     0.0275
59.2.4 Forecasting with SVEC Processes

According to Pfaff (2008), forecasting is based on the h-step predictor at time T:

y_{T+h|T} = A_1 y_{T+h-1|T} + \ldots + A_p y_{T+h-p|T} + B_0 x_{T+h} + \ldots + B_q x_{T+h-q} + C D_{T+h}    (59.5)

The forecasts of an empirical VECM(p) process are computed recursively for h ≥ 1 according to

y_{T+h|T} = A_1 y_{T+h-1|T} + \ldots + A_p y_{T+h-p|T} + B_0 x_{T+h} + \ldots + B_q x_{T+1-q} + C D_{T+h}    (59.6)

The forecast errors are

y_{T+h} - y_{T+h|T} = u_{T+h} + \Phi_1 u_{T+h-1} + \ldots + \Phi_{h-1} u_{T+1}    (59.7)

with Φ_0 = I_K, and the Φ_s can be computed recursively according to

\Phi_s = \sum_{j=1}^{s} \Phi_{s-j} A_j, \quad s = 1, 2, \ldots    (59.8)

where, according to Lütkepohl (1991), Φ_0 = I_K and A_j = 0 for j > p.
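The recursions in Eqs. 59.7 and 59.8 translate directly into code; the following sketch is illustrative only and assumes that the level-VAR matrices A_1, …, A_p implied by the estimated VECM and the residual covariance Σ_u are given:

```python
import numpy as np

def ma_coefficients(A, h):
    """Phi matrices of Eq. (59.8): Phi_0 = I_K, Phi_s = sum_{j=1}^{s} Phi_{s-j} A_j, with A_j = 0 for j > p."""
    K, p = A[0].shape[0], len(A)
    phis = [np.eye(K)]
    for s in range(1, h + 1):
        phis.append(sum(phis[s - j] @ A[j - 1] for j in range(1, min(s, p) + 1)))
    return phis

def forecast_error_cov(A, sigma_u, h):
    """Covariance of the h-step forecast error in Eq. (59.7): sum_{s=0}^{h-1} Phi_s Sigma_u Phi_s'."""
    phis = ma_coefficients(A, h - 1)
    return sum(phi @ sigma_u @ phi.T for phi in phis)
```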
Table 59.2 Forecasting 1st undifferenced series (post-IT period)
Reference: Lütkepohl (1993), IMTSA, 2nd ed., ch. 5.2.6, ch. 10.5
CI coverage: 0.95; forecast horizon: 1 period; standard confidence intervals; y(N) in levels used in the forecast (levels at 2005 M12 = 0.0000 for MMR_log, ExRate_log, Share_prices_log, IP_log, CPI_log)

Variable             Time       Forecast   Lower CI   Upper CI   +/-
MMR_log              2011 M12   1.6238     1.2610     1.9866     0.3628
ExRate_log           2011 M12   0.6154     0.5210     0.7099     0.0944
Share_prices_log     2011 M12   5.2127     5.0617     5.3637     0.1510
IP_log               2011 M12   4.8102     4.6942     4.9262     0.1160
Consumerprices_log   2011 M12   5.1506     5.1336     5.1676     0.0170
Appendix 1 explores the forecast error covariance matrix, presents the forecasting estimates for the series (Tables 59.1 and 59.2), and plots the corresponding figures (Figs. 59.1 and 59.2).
59.3 Data Definition
The dataset consists of monthly observations from January 2000 up to November 2011, divided into two periods around the adoption of the inflation-targeting framework in Turkey in January 2006. The first period, named "pre-IT," treats the case before the adoption of the inflation-targeting policy and starts in 2000 M1. The second period, called "post-IT," takes into consideration the adoption of the inflation-targeting policy and starts in 2006 M1. This division highlights the importance of the topic treated in our paper and does not affect our estimation results. We deliberately choose the monthly frequency to maximize the number of observations and obtain a robust estimation for each period. The empirical models are estimated separately for each period. The interest rate (R) is measured by the log of the money market rate (MMR); the price level (P) is measured by the log of consumer prices (2005 = 100); the real output (Y) is measured by the log of the industrial production index (2005 = 100);
the real exchange rate (REXR) is measured by the log of the exchange rate (EXR). Due to the lack of adequate data for Tobin's Q, the share price index (share prices) is taken as a proxy for the wealth channel, and it is measured by the log of share prices. The main sources of data are the IMF's International Financial Statistics and the Central Bank of the Republic of Turkey.

Fig. 59.1 Time series forecasts (2000 M1–2005 M12): 1st undifferenced forecasts with 95 % CI for ExRate_log, Share_prices_log, IP_log, and CPI_log

Fig. 59.2 Time series forecasts (2006 M1–2011 M11): forecasts with 95 % CI for MMR_log, ExRate_log, Share_prices_log, IP_log, and Consumerprices_log
59.4 Empirical Analysis
We start our empirical analysis by investigating the univariate time series properties of the variables. Second, we use the AIC information criterion to determine the lag length of the VECM process. It suggests a lag length of p = 11 when the maximum lag length is pmax = 11; this lag length is also confirmed by the same information criterion for pmax = 12 (Lütkepohl 1991). Then, we test the number of cointegration relations for each system, before and after IT adoption, separately. We use Johansen's (1988, 1995) approach to test for the existence of a cointegrating relationship among the variables. The maximum eigenvalue (λmax) and trace tests for each model suggest one or two cointegration relations among the five variables. In Appendix 2, we show the results of the Johansen cointegration test for the two systems. Specifically, in Tables 59.3 and 59.4 we report the cointegration results of the estimated parameters for the whole SVECM systems. Once the structural shocks are identified, the VECM is transformed into a VMA (moving average) model, which makes it possible to compute the dynamics of the various endogenous variables following a structural shock, on the basis of the Johansen procedure, which imposes no restrictions on beta.

Table 59.3 Cointegration test (pre-IT)
Sample range: [2000 M3, 2005 M12], T = 70
Johansen trace test for: MMR_log, ExRate_log, Share_prices_log, IP_log, Consumerprices_log
Included lags (levels): 2; dimension of the process: 5; trend and intercept included; response surface computed

r0   LR       p-val    90 %    95 %    99 %
0    133.83   0.0000   84.27   88.55   96.97
1    86.49    0.0001   60.00   63.66   70.91
2    50.90    0.0055   39.73   42.77   48.87
3    24.33    0.0754   23.32   25.73   30.67
4    10.55    0.1051   10.68   12.45   16.22

Optimal endogenous lags from information criteria
Sample range: [2000 M11, 2005 M12], T = 62 (searched up to 10 lags of levels)
Akaike info criterion: 10; final prediction error: 10; Hannan-Quinn criterion: 10; Schwarz criterion: 1
Table 59.4 Cointegration test (post-IT)
Sample range: [2006 M3, 2011 M11], T = 69
Johansen trace test for: MMR_log, ExRate_log, Share_prices_log, IP_log, Consumerprices_log

r0   LR      p-val    90 %    95 %    99 %
0    79.82   0.0284   72.74   76.81   84.84
1    46.76   0.1928   50.50   53.94   60.81
2    27.58   0.2642   32.25   35.07   40.78
3    15.03   0.2293   17.98   20.16   24.69
4    4.83    0.3131   7.60    9.14    12.53

Optimal endogenous lags from information criteria
Sample range: [2006 M11, 2011 M11], T = 61 (searched up to 10 lags of levels)
Akaike info criterion: 10; final prediction error: 10; Hannan-Quinn criterion: 10; Schwarz criterion: 1
Then, we introduce restrictions in the long-run and short-run matrices. In addition, we know from the common trends literature that in a five-dimensional system with the two cointegration relations determined previously, only three shocks can have permanent effects. We impose over-identifying restrictions on the cointegrating vectors using the ML method proposed by Johansen (1995). The identified cointegration relations can be used to set up a full VECM, where no further restrictions are imposed, to form an estimate of Σu. Moreover, long-run and contemporaneous identifying restrictions derived from theory are used to form estimates for matrix B or A. We know from the previous results that we need K(K − 1)/2 = 5(5 − 1)/2 = 10 additional linearly independent restrictions coming from economic theory to exactly identify the structural shocks.
B0 B A ¼ Cð1ÞB ¼ B B @
0 0
0
While 0
B B B¼B B @
0
0 0
0 0
1 0 0C C 0C C 0A 0
1 C C C C A
Then, we estimate the standard VECM with the identifying restrictions explained above for the first, non-inflation-targeting period and re-estimate a second model for the second, inflation-targeting period. The zeros represent the restricted elements, and the asterisks denote unrestricted elements. The bootstrap estimation allows us to determine the unknown values of the short-run and long-run matrices for the two periods.
59.4.1 Pre-IT (Before IT Adoption) Matrix A ¼ Cð1ÞB 0 0:134 B 0 B ¼B 0:0644 B @ 0:0362 0:0139
0:0125 0:1012 0 0:004 0:0609 0:0622 0:0602 0:0289 0 0:0112
0:0668 0:0649 0:0068 0 0:0064
1 0:00587 0:0427 C C 0:0068 C C 0:0325 A 0:0325
While 0
0:3601 B 0:0089 B Matrix B ¼ B B 0:0415 @ 0:0156 0:0018
0 0 0:0005 0 0:0596 0:0725 0:0613 0:0382 0:0007 0:0057
0 0:0622 0 0:0061 0:0028
1 0 0C C 0C C 0A 0
59.4.2 Post-IT (After IT Adoption) 0
0:1815 B 0 B Matrix A ¼ Cð1ÞB ¼ B 0:0134 B @ 0:0012 0:0011
0:029 0 0:0784 0:0211 0
0:0428 0:003 0:02134 0:0012 0:0007
0:0113 0:0479 0:0011 0 0:001
1 0:0123 0:0324 C C 0:0214 C C 0:0213 A 0:0012
While 0
0:1851 B 0:0003 B Matrix B ¼ B B 0:0138 @ 0:0035 0:001
0 0 0:002 0 0:0756 0:0041 0:0168 0:0571 0:0003 0:0011
0 0:0471 0 0:0148 0:0011
1 0 0C C 0C C 0A 0
To investigate the impulse response analysis, we compute impulse responses from the full SVECM, and we try to benefit from the large number of series
Table 59.5 Selected impulse responses: “impulse variable > response variable” related to Appendix 3. Selected confidence interval (CI): (a) 95 % Hall percentile CI (B ¼ 100 h ¼ 20) Time point estimate MMR_log >ExRate_log Point 0.0089 estimate CI a) [0.0228, 0.0195] 1 Point 0.0035 estimate CI a) [0.0082, 0.0153] 2 Point 0.0014 estimate CI a) [0.0030, 0.0107] 3 Point 0.0005 estimate CI a) [0.0011, 0.0071] 4 Point 0.0002 estimate CI a) [0.0004, 0.0046] 5 Point 0.0001 estimate CI a) [0.0002, 0.0030] 6 Point 0.0000 estimate CI a) [0.0001, 0.0019] 7 Point 0.0000 estimate CI a) [0.0000, 0.0012] 8 Point 0.0000 estimate CI a) [0.0000, 0.0008] 9 Point 0.0000 estimate CI a) [0.0000, 0.0005] 10 Point 0.0000 estimate CI a) [0.0000, 0.0003] 11 Point 0.0000 estimate CI a) [0.0000, 0.0002] 12 Point 0.0000 estimate CI a) [0.0000, 0.0001] 13 Point 0.0000 estimate CI a) [0.0000, 0.0001]
MMR_log MMR_log >Share_prices_log >IP_log 0.0415 0.0156
MMR_log >CPI_log 0.0018
[0.0722,0.0147] 0.0555
[0.0325, 0.0063] [0.0025, 0.0065] 0.0281 0.0092
[0.0911,0.0184] 0.0609
[0.0511, 0.0019] 0.0330
[0.0027, 0.0153] 0.0120
[0.1013,0.0169] 0.0631
[0.0603, 0.0026] 0.0349
[0.0053, 0.0199] 0.0132
[0.1054,0.0156] 0.0639
[0.0640, 0.0041] 0.0357
[0.0060, 0.0216] 0.0136
[0.1070,0.0134] 0.0642
[0.0654, 0.0053] 0.0360
[0.0055, 0.0223] 0.0138
[0.1077,0.0132] 0.0643
[0.0660, 0.0061] 0.0361
[0.0049, 0.0226] 0.0138
[0.1079,0.0131] 0.0644
[0.0662, 0.0065] 0.0361
[0.0047, 0.0227] 0.0139
[0.1080,0.0130] 0.0644
[0.0663, 0.0068] 0.0361
[0.0046, 0.0228] 0.0139
[0.1081,0.0130] 0.0644
[0.0663, 0.0069] 0.0362
[0.0046, 0.0228] 0.0139
[0.1081,0.0129] 0.0644
[0.0663, 0.0070] 0.0362
[0.0045, 0.0228] 0.0139
[0.1081,0.0129] 0.0644
[0.0663, 0.0071] 0.0362
[0.0045, 0.0228] 0.0139
[0.1081,0.0129] 0.0644
[0.0663, 0.0071] 0.0362
[0.0045, 0.0228] 0.0139
[0.1081,0.0129] 0.0644
[0.0663, 0.0071] 0.0362
[0.0045, 0.0228] 0.0139
[0.1081,0.0129]
[0.0663, 0.0071]
[0.0045, 0.0228] (continued)
59
Pre-IT Policy, Post-IT Policy, and the Real Sphere in Turkey
1659
Table 59.5 (continued) Time point estimate MMR_log >ExRate_log 14 Point 0.0000 estimate CI a) [0.0000, 0.0001] 15 Point 0.0000 estimate CI a) [0.0000, 0.0000] 16 Point 0.0000 estimate CI a) [0.0000, 0.0000] 17 Point 0.0000 estimate CI a) [0.0000, 0.0000] 18 Point 0.0000 estimate CI a) [0.0000, 0.0000] 19 Point 0.0000 estimate CI a) [0.0000, 0.0000] 20 Point 0.0000 estimate CI a) [0.0000, 0.0000]
MMR_log MMR_log >Share_prices_log >IP_log 0.0644 0.0362
MMR_log >CPI_log 0.0139
[0.1081,0.0129] 0.0644
[0.0663, 0.0071] 0.0362
[0.0045, 0.0228] 0.0139
[0.1081,0.0129] 0.0644
[0.0663, 0.0071] 0.0362
[0.0045, 0.0228] 0.0139
[0.1081,0.0129] 0.0644
[0.0663, 0.0071] 0.0362
[0.0045, 0.0228] 0.0139
[0.1081,0.0129] 0.0644
[0.0663, 0.0071] 0.0362
[0.0045, 0.0228] 0.0139
[0.1081,0.0129] 0.0644
[0.0663, 0.0071] 0.0362
[0.0045, 0.0228] 0.0139
[0.1081,0.0129] 0.0644
[0.0663, 0.0071] 0.0362
[0.0045, 0.0228] 0.0139
[0.1081,0.0129]
[0.0663, 0.0071]
[0.0045, 0.0228]
taken in this chapter, which cover a long period, by estimating the series twice: the first estimation treats the case before the adoption of inflation targeting by the Turkish monetary policy authorities, and the second allows us to examine the real sphere during the adoption of the inflation-targeting policy.
59.5 Interest Rate Shock and Real Sphere
Appendixes 3 and 4 show the responses of the real exchange rate, share prices, industrial production, and consumer prices to a monetary policy shock during the pre-IT and post-IT policies, respectively. The confidence bounds were bootstrapped, since this gives more accurate confidence coverage than the asymptotic ones. Results are reported in Tables 59.5 and 59.6 in Appendix 5. It is worth noting that, for each figure presented in Appendixes 3 and 4, the horizontal axis shows the number of periods after the monetary policy shock has been initialized, while the vertical axis measures the response of the relevant variable.
Table 59.6 Selected impulse responses: “impulse variable > response variable” related to Appendix 4 Selected confidence interval (CI): a) 95 % Hall percentile CI (B¼100 h¼20) MMR_log MMR_log >ExRate_log >Share_prices_log Point estimate 0.0003 0.0138 CI a) [0.0037, 0.0029] [0.0327,0.0009] 1 Point estimate CI a)
0.0001 0.0135 [0.0016, 0.0013] [0.0334,0.0004]
2 Point estimate CI a)
0.0000 0.0135 [0.0007, 0.0006] [0.0343,0.0001]
3 Point estimate CI a)
0.0000 0.0134 [0.0003, 0.0003] [0.0347,0.0000]
4 Point estimate CI a)
0.0000 0.0134 [0.0001, 0.0001] [0.0349, 0.0000]
5 Point estimate CI a)
0.0000 0.0134 [0.0001, 0.0001] [0.0349, 0.0000]
6 Point estimate CI a)
0.0000 0.0134 [0.0000, 0.0000] [0.0350, 0.0000]
7 Point estimate CI a)
0.0000 0.0134 [0.0000, 0.0000] [0.0350, 0.0000]
8 Point estimate CI a)
0.0000 0.0134 [0.0000, 0.0000] [0.0350, 0.0001]
9 Point estimate CI a)
0.0000 0.0134 [0.0000, 0.0000] [0.0350, 0.0001]
10 Point estimate 0.0000 0.0134 CI a) [0.0000, 0.0000] [0.0350, 0.0001] 11 Point estimate 0.0000 0.0134 CI a) [0.0000, 0.0000] [0.0350, 0.0001] 12 Point estimate 0.0000 0.0134 CI a) [0.0000, 0.0000] [0.0350, 0.0001]
MMR_log >IP_log 0.0035 [0.0236, 0.0140] 0.0006 [0.0081, 0.0039] 0.0005 [0.0032, 0.0034] 0.0009 [0.0031, 0.0047] 0.0011 [0.0034, 0.0053] 0.0012 [0.0038, 0.0056] 0.0012 [0.0039, 0.0057] 0.0012 [0.0040, 0.0058] 0.0012 [0.0040, 0.0058] 0.0012 [0.0040, 0.0058] 0.0012 [0.0040, 0.0058] 0.0012 [0.0040, 0.0058] 0.0012 [0.0040, 0.0058]
MMR_log >CPI_log 0.0010 [0.0005, 0.0027] 0.0010 [0.0006, 0.0028] 0.0011 [0.0006, 0.0028] 0.0011 [0.0006, 0.0028] 0.0011 [0.0007, 0.0028] 0.0011 [0.0007, 0.0028] 0.0011 [0.0007, 0.0028] 0.0011 [0.0007, 0.0028] 0.0011 [0.0007, 0.0028] 0.0011 [0.0007, 0.0028] 0.0011 [0.0007, 0.0028] 0.0011 [0.0007, 0.0028] 0.0011 [0.0007, 0.0028] (continued)
59
Pre-IT Policy, Post-IT Policy, and the Real Sphere in Turkey
1661
Table 59.6 (continued) Selected confidence interval (CI): a) 95 % Hall percentile CI (B¼100 h¼20) MMR_log MMR_log >ExRate_log >Share_prices_log 13 Point estimate 0.0000 0.0134 CI a) [0.0000, 0.0000] [0.0350, 0.0001] 14 Point estimate 0.0000 0.0134 CI a) [0.0000, 0.0000] [0.0350, 0.0001] 15 Point estimate 0.0000 0.0134 CI a) [0.0000, 0.0000] [0.0350, 0.0001] 16 Point estimate 0.0000 0.0134 CI a) [0.0000, 0.0000] [0.0350, 0.0001] 17 Point estimate 0.0000 0.0134 CI a) [0.0000, 0.0000] [0.0350, 0.0001] 18 Point estimate 0.0000 0.0134 CI a) [0.0000, 0.0000] [0.0350, 0.0001] 19 Point estimate 0.0000 0.0134 CI a) [0.0000, 0.0000] [0.0350, 0.0001] 20 Point estimate 0.0000 0.0134 CI a) [0.0000, 0.0000] [0.0350, 0.0001]
MMR_log >IP_log 0.0012 [0.0040, 0.0058] 0.0012 [0.0040, 0.0058] 0.0012 [0.0040, 0.0058] 0.0012 [0.0040, 0.0058] 0.0012 [0.0040, 0.0058] 0.0012 [0.0040, 0.0058] 0.0012 [0.0040, 0.0058] 0.0012 [0.0040, 0.0058]
MMR_log >CPI_log 0.0011 [0.0007, 0.0028] 0.0011 [0.0007, 0.0028] 0.0011 [0.0007, 0.0028] 0.0011 [0.0007, 0.0028] 0.0011 [0.0007, 0.0028] 0.0011 [0.0007, 0.0028] 0.0011 [0.0007, 0.0028] 0.0011 [0.0007, 0.0028]
The simulation analysis covers 20 periods. The solid lines in each graph denote the impulse responses. The dotted lines are approximate 95 % error bands (i.e., 95 % confidence intervals) derived from a bootstrap routine with 100 replications. The bootstrap confidence bands are computed by the percentile method advanced by Hall (1992) and Lütkepohl et al. (2001).
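The bootstrap routine described above can be sketched generically as follows; fit_svec_irf is a hypothetical user-supplied function (it stands in for the SVEC estimation and impulse response computation, which are not reproduced here) that returns the point estimates, the residuals, and a simulator for artificial samples:

```python
import numpy as np

def bootstrap_irf_bands(y, fit_svec_irf, n_boot=100, gamma=0.05):
    """Percentile bootstrap bands for impulse responses in the spirit of
    Hall (1992) and Luetkepohl et al. (2001); fit_svec_irf is a hypothetical helper."""
    irf_hat, resid, simulate = fit_svec_irf(y)
    draws = []
    for _ in range(n_boot):
        idx = np.random.randint(0, len(resid), size=len(resid))  # resample residuals with replacement
        irf_star, _, _ = fit_svec_irf(simulate(resid[idx]))      # re-estimate on the artificial sample
        draws.append(irf_star)
    t_star = np.asarray(draws) - irf_hat
    lower = irf_hat - np.quantile(t_star, 1 - gamma / 2, axis=0)
    upper = irf_hat - np.quantile(t_star, gamma / 2, axis=0)
    return lower, upper
```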
59.5.1 Pre-IT (Before IT Adoption)

Figure 59.3, presented in Appendix 3, reveals the responses of our series after an unexpected increase in the short-term interest rate, which normally leads to an exchange rate appreciation (Mishkin 2001). This cannot be seen in the response of the real exchange rate, whose response is nonsignificant in both the short and the long run.
Table 59.7 Reinhart and Rogoff (2010): default, restructuring, banking crises, growth collapses, and IMF programs, Turkey, 1800–2010 (calculations since independence, 1923, reported)

External default/restructuring (duration in years): 1876–1881 (6), 1915–1928 (14), 1931–1932 (2), 1940–1943 (4), 1959 (1), 1965 (1), 1978–1979 (2), 1982 (1), 2000–2001 "near" (2); number of episodes: 8
Domestic default/restructuring: none; number of episodes: 0
Banking crises (first year): 1931, 1982, 1991, 2000; number of episodes: 4
Hyperinflation dates: n.a.; number of episodes: 0
Share of years in external default: 19.5; share of years in inflation crisis: 35.6
Five worst output collapses, year (decline)a: 1927 (9.1), 1932 (6.0), 1994 (5.5), 2001 (5.7), 2009 (5.6)
Memorandum item on IMF programs, 1952–2009: total number 18; dates of programs: 1961–1970, 1978–1980, 1983–1984, 1994, 1999, 2002
a Excludes World Wars I and II; the 2009 figure is a summary of private forecasts
Fig. 59.3 Responses of the main macro variables to a monetary policy shock before IT: SVEC impulse responses of ExRate_log, Share_prices_log, IP_log, and CPI_log to an MMR_log shock over 20 periods, with the zero line and 95 % Efron percentile CI (B = 100, h = 20) (JMulTi output)
Despite the attempt by the Turkish authorities to implement a new stabilization and structural adjustment program, the banking sector was in turbulence in May 2001. According to Table 59.7, this result can be explained by the predominance of banking crises, especially those of 1991 and 2000, which were accompanied by a considerable output collapse in 2001 (Rogoff and Reinhart 2010).
59.5.2 Post-IT (After IT Adoption)

Figure 59.4, presented in Appendix 4, reveals the responses of our series after an unexpected increase in the short-term interest rate.
Fig. 59.4 Responses of the main macro variables to a monetary policy shock after IT adoption: SVEC impulse responses of ExRate_log, Share_prices_log, IP_log, and Consumerprices_log to an MMR_log shock over 20 periods, with the zero line and 95 % Efron percentile CI (B = 100, h = 20) (JMulTi output)
Seeking to reduce the size of the money supply, a contractionary monetary policy shock normally depresses the exchange rate in the very short run, which in turn reduces share prices; it also pushes up consumer prices, which in turn reduces output. The other important result obtained from this study is that a contractionary policy has a nonsignificant effect on the price level only in the non-inflation-targeting period. This may be interpreted as a particular case of monetary policy effectiveness in the inflation-targeting period, but it does not mean that monetary policy in the inflation-targeting period is more effective than monetary policy in the non-inflation-targeting period. This result contradicts the findings of Akyurek and Kutan (2008), who argue that Turkey's performance after adopting the inflation-targeting regime is better than under its pre-inflation-targeting regime, at least in terms of containing inflation. However, the findings reveal that monetary policy affects the price level, share prices, the real exchange rate, and the output level in the very short run before the inflation-targeting policy. These results are similar to those of Fair (2007) and Ball and Sheridan (2005), who show no evidence that inflation targeting improves a country's performance. To conclude, our results are in line with the IMF World Economic Outlook (2005), which offers evidence of either "no increase or a decrease in the volatility of output" due to the inflation-targeting policy.
59.6 Conclusion
In this chapter, we based our analysis on two SVEC models with long-run and short-run restrictions to detect the impact and dynamic effects of a contractionary monetary policy shock on output, the inflation rate, share prices, and the exchange rate. The overall responses of the macroeconomic variables in our SVEC models are consistent with the most common theoretical expectations discussed in this chapter: an expansionary interest policy shock leads to a decrease in the price level, a fall in output, an appreciation of the exchange rate, and an improvement in share prices in the very short run for most of the non-inflation-targeting period. For this pre-IT period, we did not find any evidence of the empirical anomalies advanced in the empirical literature, i.e., the price puzzle and the exchange rate puzzle; our approach in this chapter overcomes such empirical anomalies only for the pre-IT policy, whereas the exchange rate puzzle and the price puzzle were observed for the whole inflation-targeting period. One of the most important results of this study is that the impact of a monetary policy shock on the real sphere is negative and generally statistically significant only in the pre-IT period.
Appendix 1: Joint Forecast Error Covariance Matrix, Pre-IT, and Post-IT Forecasting Results

The forecast errors have zero mean and, hence, the forecasts are unbiased. The joint forecast error covariance matrix for all forecasts up to horizon h is
\mathrm{Cov}\begin{pmatrix} y_{T+1} - y_{T+1|T} \\ \vdots \\ y_{T+h} - y_{T+h|T} \end{pmatrix} = \begin{pmatrix} I & 0 & \cdots & 0 \\ \Phi_1 & I & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ \Phi_{h-1} & \Phi_{h-2} & \cdots & I \end{pmatrix} (I_h \otimes \Sigma_u) \begin{pmatrix} I & 0 & \cdots & 0 \\ \Phi_1 & I & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ \Phi_{h-1} & \Phi_{h-2} & \cdots & I \end{pmatrix}^{\prime}    (59.9)

where y_{T+j|T} = y_{T+j} for j ≤ 0. Assuming normally distributed disturbances, these results can be used for setting up forecast intervals for any linear combination of these forecasts. See Tables 59.1 and 59.2 and Figs. 59.1 and 59.2.
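Continuing the earlier sketch, the joint covariance matrix of Eq. 59.9 can be assembled from the Φ_s matrices and Σ_u as follows (illustrative only, assuming the list phis from the earlier sketch):

```python
import numpy as np

def joint_forecast_error_cov(phis, sigma_u, h):
    """Block matrix of Eq. (59.9): the lower block-triangular matrix of Phi's
    times (I_h kron Sigma_u) times its transpose."""
    K = sigma_u.shape[0]
    big_phi = np.zeros((h * K, h * K))
    for i in range(h):
        for j in range(i + 1):
            big_phi[i * K:(i + 1) * K, j * K:(j + 1) * K] = phis[i - j]
    return big_phi @ np.kron(np.eye(h), sigma_u) @ big_phi.T
```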
Appendix 2: Pre-IT and Post-IT Cointegration Tests See Tables 59.3 and 59.4.
Appendix 3: Pre-IT Macro Variables Responses See Fig. 59.3.
Appendix 4: Post-IT Macro Variables Responses See Fig. 59.4.
Appendix 5: Impulse Responses and Confidence Intervals See Tables 59.5 and 59.6.
References Akyurek, C., & Kutan, M. A. (2008). Inflation targeting, policy rates and exchange rate volatility: Evidence from Turkey. Comparative Economic Studies, 50, 460–493. Ball, L., & Sheridan, N. (2005). Does inflation targeting matter? In B. Bernanke & M. Woodford (Eds.), The inflation-targeting debate. Chicago: University of Chicago Press. Bhuiyan, R. (2012). Inflationary expectations and monetary policy: Evidence from Bangladesh. Empirical Economics, Online First™, 26 April 2012.
Breitung, J., Brüggemann, R., & Lütkepohl, H. (2004). Structural vector autoregressive modelling and impulse responses. In H. Lütkepohl & M. Krätzig (Eds.), Applied time series econometrics. Cambridge: Cambridge University Press.
Cecchetti, S. (1995). Distinguishing theories of the monetary transmission mechanism. Federal Reserve Bank of St. Louis Review, 77(May–June), 83–97.
Clements, B. J., Kontolemis, G. Z., & Levy, J. V. (2001). Monetary policy under EMU: Differences in the transmission mechanisms? IMF Working paper, 01/102.
DeBondt, G. (2000). Credit channels and consumption in Europe: Empirical evidence. BIS Working paper, 69.
Ehrmann, M. (1998). Will EMU generate asymmetry? Comparing monetary policy transmissions across European countries. European University Institute, Working paper ECO 98/28.
Fair, R. C. (2007). Evaluating inflation targeting using a macroeconometric model. Economics: The Open-Access, Open-Assessment E-Journal, 1.
Hachicha, A., & Chaabane, A. (2007). Monetary policy transmission mechanism in Tunisia. Euro-Mediterranean Economics and Finance Review, 1, 104–126.
Hall, P. (1992). The bootstrap and Edgeworth expansion. New York: Springer.
IMF. (2005). Does inflation targeting work in emerging markets? In World economic outlook (Chapter 4, pp. 161–178).
Ivrendi, M., & Guloglu, B. (2010). Monetary shocks, exchange rates and trade balances: Evidence from inflation targeting countries. Economic Modelling, 27(5), 1144–1155.
Johansen, S. (1988). Statistical analysis of cointegrating vectors. Journal of Economic Dynamics and Control, 12, 231–254.
Johansen, S. (1995). Likelihood-based inference in cointegrated vector autoregressive models. Oxford: Oxford University Press.
Kakes, J. I., & Sturm, J. E. (2002). Monetary policy and bank lending: Evidence from German banking groups. Journal of Banking and Finance, 26, 2077–2092.
King, R., Plosser, C., Stock, J., & Watson, M. (1991). Stochastic trends and economic fluctuations. The American Economic Review, 81(4), 819–840.
Lucke, B. (2010). Identification and overidentification in SVECMs. Economics Letters, 108(3), 318–321.
Lütkepohl, H. (1993). Introduction to multiple time series analysis. Berlin: Springer.
Lütkepohl, H. (2005). Structural vector autoregressive analysis for cointegrated variables. EUI Working paper ECO.
Lütkepohl, H., Benkwitz, A., & Wolters, J. (1998). A money demand system for German M3. Empirical Economics, 23, 382–384.
Lütkepohl, H., Benkwitz, A., & Wolters, J. (2001). Comparison of bootstrap confidence intervals for impulse responses of German monetary systems. Macroeconomic Dynamics, 5, 81–100.
Mishkin, F. S. (2001). The transmission mechanism and the role of asset prices in monetary policy. NBER Working paper 8617, December.
Nadenichek, J. (2006). The J-curve effect: An examination using a structural vector error correction model. International Journal of Applied Economics, 3(2), 34–47.
Pfaff, B. (2008). VAR, SVAR and SVEC models: Implementation within R package vars. Journal of Statistical Software, 27(4). http://www.jstatsoft.org/
Ramaswamy, R., & Sloek, T. (1998). The real effects of monetary policy in the European Union: What are the differences? IMF Staff Papers, 45, 347–395.
Rogoff, K., & Reinhart, C. M. (2010). Growth in a time of debt. NBER Working paper 15795.
Determination of Capital Structure: A LISREL Model Approach
60
Cheng-Few Lee and Tzu Tai
Contents
60.1 Introduction .......................................................................... 1670
60.2 Determinants of Capital Structure and Data ......................... 1672
  60.2.1 Determinants of Capital Structure ................................. 1672
  60.2.2 Data .............................................................................. 1674
60.3 Methodology and LISREL System ........................................ 1675
  60.3.1 SEM Approach .............................................................. 1675
  60.3.2 Illustration of SEM Approach in LISREL System .......... 1676
60.4 Empirical Results and Analysis ............................................. 1677
60.5 Conclusion ........................................................................... 1681
Appendix: Codes of Structure Equation Modeling (SEM) in LISREL System ..... 1682
References ................................................................................... 1683
Abstract
Most previous studies investigate theoretical variables which affect the capital structure of a firm; however, these latent variables are unobservable and are generally estimated by accounting items with measurement errors. The use of these observed accounting variables as theoretical explanatory latent variables will cause error-in-variable problems during the analysis of the factors of capital structure.
C.-F. Lee (*)
Department of Finance and Economics, Rutgers Business School, Rutgers, The State University of New Jersey, Piscataway, NJ, USA
Graduate Institute of Finance, National Chiao Tung University, Hsinchu, Taiwan
e-mail: [email protected]; [email protected]
T. Tai
Department of Finance and Economics, Rutgers, The State University of New Jersey, Piscataway, NJ, USA
e-mail: [email protected]
C.-F. Lee, J. Lee (eds.), Handbook of Financial Econometrics and Statistics, DOI 10.1007/978-1-4614-7750-1_60, © Springer Science+Business Media New York 2015
Since Titman and Wessels (Journal of Finance 43, 1–19, 1988) first utilized the LISREL system to analyze the determinants of capital structure choice based on a structural equation modeling (SEM) framework, Chang et al. (The Quarterly Review of Economics and Finance 49, 197–213, 2009) and Yang et al. (The Quarterly Review of Economics and Finance 50, 222–233, 2010) have extended the empirical work on capital structure research and obtained more convincing results by using the multiple indicators and multiple causes (MIMIC) model and structural equation modeling (SEM) with a confirmatory factor analysis (CFA) approach, respectively. In this chapter, we employ structural equation modeling (SEM) in the LISREL system to solve the measurement error problems in the analysis of the determinants of capital structure and to find the important factors consistent with capital structure theory, using data from 2002 to 2010. The purpose of this chapter is to investigate whether the influences of accounting factors on capital structure change and whether the important factors are consistent with the previous literature.
Keywords
Capital structure • Structural equation modeling (SEM) • Multiple indicators and multiple causes (MIMIC) model • LISREL system • Simultaneous equations • Latent variable • Determinants of capital structure • Error in variable problem
60.1 Introduction
In previous research on capital structure, many models are derived based on theoretical variables; however, these variables are often unobservable in the real world. Therefore, many studies use accounting items from the financial statements as proxies for the theoretically derived variables. In regression analysis, estimating parameters with accounting items as proxies for unobservable theoretical attributes causes some problems. First, there are measurement errors between the observable proxies and the latent variables. According to the previous theoretical literature in corporate finance, a theoretical variable can be proxied by either one or several observed variables, but there is no clear rule for allocating unique weights to the observable variables so as to form a perfect proxy for a latent variable. Second, because the attributes of capital structure choice are unobservable, researchers can choose different accounting items to measure the same attribute in accordance with the various capital structure theories and their own economic interpretations. In both cases, the use of these observed variables as theoretical explanatory latent variables will cause error-in-variable problems. Jöreskog (1977), Jöreskog and Sörbom (1981, 1989), and Jöreskog and Goldberger (1975) first develop structural equation modeling (hereafter called SEM) to analyze the relationship between the observed variables as the indicators and the latent variables as the attributes of capital structure choice.
Since Titman and Wessels (1988) (hereafter called TW) first utilized the LISREL system to analyze the determinants of capital structure choice based on a structural equation modeling (SEM) framework, Chang et al. (2009) and Yang et al. (2010) have extended the empirical work on capital structure choice and obtained more convincing results. These papers employ structural equation modeling (SEM) in the LISREL system to solve the measurement error problems in the analysis of the determinants of capital structure and to find the important factors consistent with capital structure theories. Although TW initially apply SEM to analyze the factors of capital structure choice, their results are largely insignificant and explain capital structure theories poorly. Maddala and Nimalendran (1996) point out the problematic model specification as the reason for TW's poor findings and propose a multiple indicators and multiple causes (hereafter called MIMIC) model to improve the results. Chang et al. (2009) reproduce TW's research on the determinants of capital structure choice but use the MIMIC model to compare the results with TW's; they state that the results show significant effects on capital structure in a simultaneous cause-effect framework rather than in the SEM framework. Later, Yang et al. (2010) incorporate stock returns into the research on capital structure choice and utilize structural equation modeling (SEM) with a confirmatory factor analysis (CFA) approach to solve the simultaneous equations with latent determinants of capital structure. They assert that a firm's capital structure and its stock return are correlated and should be determined simultaneously. Their results are largely the same as TW's findings; moreover, they also find that stock returns are a main factor of capital structure choice. In this chapter, we compare the results on the determinants of capital structure for the period 2002–2010 with the results of the previous literature by using the LISREL system. The purpose of this chapter is to investigate whether the influences of accounting factors on capital structure differ from TW's results and whether the important factors are consistent with the theories in the previous literature. During the financial crisis, the influences of accounting factors on a firm's capital structure may differ because of the extreme decline of the equity market in the economic recession. Also, the method of reducing measurement error via averages of 3-year data may be invalid because the samples will have different time series patterns after a significant event such as the recent financial crisis. Therefore, this chapter examines whether the parameters used in the TW paper are still significant during the financial crisis. This chapter is organized as follows. In Sect. 60.2, we review the accounting items used as proxies for latent variables in the TW paper and describe the sample data used in the LISREL system. Then, in Sect. 60.3, we introduce SEM to investigate the determinants of capital structure choice and illustrate the SEM approach for the TW work in the LISREL program. The results of the empirical work and the analysis of the comparison with TW's findings are shown in Sect. 60.4. Finally, Sect. 60.5 presents the conclusions of this study.
Table 60.1 Attributes and indicators

Attribute — Indicators
Asset structure — Intangible assets/total assets (INT_TA); inventory plus gross plant and equipment/total assets (IGP_TA)
Non-debt tax shields — Investment tax credits/total assets (ITC_TA); depreciation/total assets (D_TA); non-debt tax shields/total assets (NDT_TA)
Growth — Capital expenditures/total assets (CE_TA); the growth of total assets (GTA); research and development/sales (RD_S)
Uniqueness — Research and development/sales (RD_S); selling expense/sales (SE_S)
Industry classification — SIC code (IDUM)
Size — Natural logarithm of sales (LnS)
Volatility — Standard deviation of the percentage change in operating income (SIGOI)
Profitability — Operating income/sales (OI_S); operating income/total assets (OI_TA)
Capital structure (dependent variables) — Long-term debt/market value of equity (LT_MVE); short-term debt/market value of equity (ST_MVE); convertible debt/market value of equity (C_MVE); long-term debt/book value of equity (LT_BVE); short-term debt/book value of equity (ST_BVE); convertible debt/book value of equity (C_BVE)

60.2 Determinants of Capital Structure and Data
Before we utilize the SEM approach to analyze the determinants of capital structure, the observable indicators are first briefly described in this section, and then the data used in this chapter are introduced.
60.2.1 Determinants of Capital Structure

TW provide eight characteristics that determine the capital structure: asset structure, non-debt tax shields, growth, uniqueness, industry classification, size, volatility, and profitability. These attributes are unobservable; therefore, some useful and observable accounting items are classified into these eight characteristics in accordance with the previous literature on capital structure. The attributes as latent variables, their indicators as independent variables, and the indicators of capital structure as dependent variables are shown in Table 60.1. The parentheses in the indicators give the notations used in the LISREL system. Moreover, TW adopt the long-term debt, the short-term debt, and the convertible debt over either market
value of equity or book value of equity as the indicators of capital structure, as shown at the bottom of Table 60.1. Based on the trade-off theory and agency theory, firms with larger tangible and collateral assets may have lower bankruptcy, information asymmetry, and agency costs. Myers and Majluf (1984) indicate that companies with larger collateral assets attempt to issue more secured debt to reduce the cost arising from information asymmetry. Moreover, Jensen and Meckling (1976) and Myers (1977) state that there are agency costs related to the underinvestment problem in the leveraged firm. Therefore, collateral assets are positively correlated with debt ratios. Following the TW paper, the ratio of intangible assets to total assets (INT_TA) and the ratio of inventory plus gross plant and equipment to total assets (IGP_TA) are viewed as the indicators of the asset structure attribute. DeAngelo and Masulis (1980) extend Miller's (1977) model to analyze the effect of non-debt tax shields in increasing the cost of debt for firms. Bowen et al. (1982) find their empirical work on the influence of non-debt tax shields on capital structure consistent with DeAngelo and Masulis's (1980) optimal debt model. Following Fama and French (2002) and the TW paper, the indicators of non-debt tax shields are investment tax credits over total assets (ITC_TA), depreciation over total assets (D_TA), and non-debt tax shields over total assets (NDT_TA), where NDT is defined as in the TW paper with a corporate tax rate of 34 %. Following the TW paper, we use capital expenditures over total assets (CE_TA), the growth of total assets (GTA), and research and development over sales (RD_S) as the indicators of the growth attribute. TW argue for a negative relationship between growth opportunities and debt because growth opportunities only add to a firm's value but cannot be collateralized or generate taxable income. Furthermore, the indicators of uniqueness include research and development over sales (RD_S) and selling expense over sales (SE_S). Titman (1984) indicates that uniqueness is negatively correlated with debt because the customers, suppliers, and workers of firms with high-level uniqueness suffer relatively high costs of finding alternative products, buyers, and jobs when such firms liquidate. The SIC code (IDUM) serves as the proxy for the industry classification attribute, following Titman's (1984) and TW's suggestion that firms manufacturing machines and equipment have high liquidation costs and thus are more likely to issue less debt. The indicator of the size attribute is measured by the natural logarithm of sales (LnS). The financing cost of firms may relate to firm size, since small firms have a higher cost of nonbank debt financing (see Bevan and Danbolt 2002); therefore, size is supposed to be positively associated with the debt level. Besides, the volatility attribute is estimated by the standard deviation of the percentage change in operating income (SIGOI). A large variance in earnings means a higher possibility of financial distress; therefore, to avoid bankruptcy, firms with larger volatility of earnings will have less debt. Finally, the pecking order theory developed in Myers (1977) indicates that firms prefer to use internal finance rather than external finance when raising capital. Profitable firms are likely to have less debt, and
Table 60.2 The Compustat codes of observable data

Accounting item — Code
Total assets — AT
Intangible assets — INTAN
Inventory — INVT
Gross plant and equipment — PPEGT
Investment tax credits — ITCB
Depreciation — DPACT
Income tax — TXT
Operating income — EBIT
Interest payment — XINT
Capital expenditures — CAPX
Net income — NI
R&D expense — RDIP
Sales — SALE
Selling expense — XSGA
SIC code — SIC
Short-term debt — DLC
Long-term debt — DLTT
Convertible debt — DCVT
Book value of equity — SEQ
Market value of equity — MKVALT
profitability is hence negatively related to the debt level. Following the TW paper, the indicators of profitability are operating income over sales (OI_S) and operating income over total assets (OI_TA).
60.2.2 Data

The sample period, from 2002 to 2010, has the same length as the period used in the TW paper. The sources of data are Compustat and CRSP in WRDS. The Compustat codes of the accounting items used to calculate the observed variables are shown in Table 60.2. The treatment of the data is the same as in TW: firms with incomplete records on the variables and with negative values of total assets or operating income are deleted from the sample. After combining all data from Compustat and CRSP, one indicator of non-debt tax shields is also excluded because almost all sample values are zero or insignificant. For the indicator of industry classification, we use a dummy variable equal to one for firms with SIC codes between 3400 and 4000 and equal to zero otherwise. To address the problem of measurement errors, TW suggest that the sampling period be divided into three subperiods; in each subperiod, the variables are averages of 3-year data, so as to smooth random year-to-year fluctuations in the variables. The dependent variables and the independent variables used to measure uniqueness, non-debt tax shields, asset structure, and industry classification are measured during the period 2005–2007. The indicators of size and profitability are measured during the period 2002–2004. Two independent variables, capital expenditures over total assets (CE_TA) and the growth of total assets (GTA), are measured during 2008–2010. Finally, the standard deviation of the percentage change in operating income (SIGOI) is estimated over the whole sample period 2002–2010 in order to obtain the same measure as in the TW paper.
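As an illustration only (not the authors' code), the sample construction just described could be sketched as follows, assuming an annual Compustat extract with the item codes of Table 60.2 as lower-case column names and the usual gvkey/fyear firm-year identifiers (both assumptions about the input layout):

```python
import numpy as np
import pandas as pd

def build_size_profitability(df):
    """Sketch of the 2002-2004 sub-period averages for the size and profitability
    indicators; column names follow Table 60.2 and are assumed to be lower case."""
    df = df.dropna(subset=["at", "ebit", "sale", "sic"])
    df = df[(df["at"] >= 0) & (df["ebit"] >= 0)]                       # drop negative totals / operating income
    df["idum"] = ((df["sic"] >= 3400) & (df["sic"] < 4000)).astype(int)  # industry classification dummy
    df["lns"] = np.log(df["sale"])                                      # size indicator LnS
    df["oi_ta"] = df["ebit"] / df["at"]                                 # profitability indicator OI_TA
    sub = df[df["fyear"].between(2002, 2004)]                           # TW-style 3-year averaging window
    return sub.groupby("gvkey")[["lns", "oi_ta", "idum"]].mean()
```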
60.3 Methodology and LISREL System
In this section, we first introduce the SEM approach and present an example of path diagram to show the structure of structural model and measurement model in SEM framework. Then, the specified structure in SEM approach is given in accordance with the constraints in TW paper and the code is illustrated in Appendix.
60.3.1 SEM Approach

The SEM incorporates three equations as follows:

Structural model: η = βη + Γξ + ζ   (60.1)

Measurement model for y: y = Λy η + ε   (60.2)

Measurement model for x: x = Λx ξ + δ   (60.3)
where x is the matrix of observed independent variables (the indicators of the attributes), y is the matrix of observed dependent variables (the indicators of capital structure), ξ is the vector of latent variables representing the attributes, and η is the latent variable that links the determinants of capital structure (a linear function of the attributes) to capital structure (y). Figure 60.1 shows an example of the path diagram of the SEM approach, where the observed independent variables x = (x1, x2, x3)′ are drawn in rectangles, the observed dependent variables y = (y1, y2)′ in hexagons, and the latent variables η = (η1, η2)′ and ξ = (ξ1, ξ2)′ in ovals; the corresponding sets of disturbances are ζ = (ζ1, ζ2)′, ε = (ε1, ε2)′, and δ = (δ1, δ2, δ3)′. The structural model can be specified as the system of equations that combines (60.1) and (60.2), which yields the structural model of the TW paper:

y = Γξ + ε   (60.4)
In this chapter, the accounting items are viewed as the observable independent variables (x) that are the causes of the attributes, which enter as latent variables (ξ), and the debt–equity ratios, which represent the indicators of capital structure, are the observable dependent variables (y). The fitting function for the maximum likelihood estimation of the SEM is

F = log|Σ| + tr(SΣ⁻¹) − log|S| − (p + q)   (60.5)
Fig. 60.1 Path diagram of the SEM approach. In this path diagram, the SEM formulas (60.1), (60.2), and (60.3) are specified as follows:

β = [0 β1; 0 0],  Γ = [γ1 0; 0 γ2],  Λy = [λy1 λy2; 0 λy3],  Λx = [λ1 0; 0 λ2; λ3 λ4]

where λy1, λy2, λy3, λ1, λ2, λ3, and λ4 denote unknown factor loadings; β1, γ1, and γ2 denote unknown regression weights; ε1, ε2, δ1, δ2, and δ3 denote measurement errors; and ζ1 and ζ2 denote the structural error terms
where S is the observed covariance matrix, Σ is the model-implied covariance matrix, p is the number of independent variables (x), and q is the number of dependent variables (y).
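For illustration only, a small NumPy sketch (not part of the chapter) of how the fitting function (60.5) can be evaluated for given observed and model-implied covariance matrices:

import numpy as np

def ml_fit_function(S: np.ndarray, Sigma: np.ndarray) -> float:
    """Maximum likelihood fitting function F of Eq. (60.5).

    S is the observed covariance matrix of the p + q variables and
    Sigma is the covariance matrix implied by the model parameters.
    """
    k = S.shape[0]  # p + q observed variables
    _, logdet_sigma = np.linalg.slogdet(Sigma)
    _, logdet_s = np.linalg.slogdet(S)
    trace_term = np.trace(S @ np.linalg.inv(Sigma))
    return logdet_sigma + trace_term - logdet_s - k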
60.3.2 Illustration of SEM Approach in LISREL System

In general, SEM consists of two parts, the measurement model and the structural model. The measurement model describes the presumed relations between the latent variables, viewed as the attributes, and the observable variables, viewed as their indicators. For example, in the TW paper, capital expenditures over total assets (CE_TA) and research and development over sales (RD_S) are indicators of the growth attribute (Growth). In the measurement model, each indicator is assumed to have a measurement error associated with it. The structural model, on the other hand, represents the relationship between the unobserved variables and the outcome. For instance, the relationship between the attributes and capital structure is represented by the structural model, whereas the relationship between capital structure and its indicators, the debt–equity ratios, is modeled by the measurement model. TW also impose specific settings: the measurement errors of two variables, the standard deviation of the percentage change in operating income (SIGOI) and the SIC code (IDUM), are set to zero; the measurement errors are uncorrelated with each other, with the latent variables, and with the errors in the structural equations; and the attributes of volatility and industry classification are set equal to their respective indicators. Based on the eight attributes as latent variables, the thirteen indicators for the determinants of capital structure choice, and the six indicators of capital structure in the TW paper, the SEM measurement
model formula (60.3) and structural model formula (60.4) are specified as follows:

x = (GTA, CE_TA, RD_S, SE_S, D_TA, NDT_TA, INT_TA, IGP_TA, LnS, OI_TA, OI_S, SIGOI, IDUM)′

ξ = (Growth, Uniqueness, Non-Debt Tax Shields, Asset Structure, Size, Profitability, Volatility, Industry Dummy)′

Λ is the 13 × 8 factor-loading matrix in which GTA and CE_TA load on Growth, RD_S loads on both Growth and Uniqueness, SE_S loads on Uniqueness, D_TA and NDT_TA load on Non-Debt Tax Shields, INT_TA and IGP_TA load on Asset Structure, LnS loads on Size, OI_TA and OI_S load on Profitability, each with a free loading, while SIGOI and IDUM load on Volatility and Industry Dummy, respectively, with loadings fixed to one; all remaining entries are zero. δ = (δ1, . . ., δ13)′ is the vector of measurement errors, with the errors of SIGOI and IDUM fixed to zero. For the structural part,

y = (LT_MVE, ST_MVE, C_MVE, LT_BVE, ST_BVE, C_BVE)′,  Γ = [Γi,j], i = 1, . . ., 6, j = 1, . . ., 8,  ε = (ε1, . . ., ε6)′
where the variables x, y, and ξ are defined as in Table 60.1. The SEM code for the LISREL system is given in the Appendix.
60.4 Empirical Results and Analysis
In our empirical research, the estimates of the parameters of the measurement model are presented in Tables 60.3 and 60.4. The regression coefficients of
Table 60.3 Measurement model: factor loadings of indicators for independent variables (x)

Variable (x)   Attribute (latent variable)   Factor loading (t-statistic)   Error variance (sδ²)
GTA            Growth                        1.42 (4.25)                    0.22
CE_TA          Growth                        0.71 (1.84)                    1.09
RD_S           Growth                        0.0087 (0.46)
RD_S           Uniqueness                    0.064 (3.16)                   0.0017
SE_S           Uniqueness                    0.33 (0.55)                    14.91
D_TA           Non-debt tax shields          0.02 (0.47)                    0.029
NDT_TA         Non-debt tax shields          0.094 (0.76)                   0.033
INT_TA         Asset structure               0.047 (5.15)                   0.00063
IGP_TA         Asset structure               0.0061 (0.93)                  0.005
LnS            Size                          0.0079 (0.45)                  0.00035
OI_TA          Profitability                 0.041 (2.96)                   0.018
OI_S           Profitability                 0.12 (1.9)                     0.027
SIGOI          Volatility                    1.000                          0.000
IDUM           Industry dummy                1.000                          0.000

The bold numbers are significant at the 5 % level; t-statistics are in parentheses
Table 60.4 The estimated covariance matrix of attributes

                   (1)           (2)           (3)           (4)           (5)           (6)            (7)           (8)
(1) Growth         1
(2) Uniqueness     0.06 (0.35)   1
(3) NDTS           0.2 (2.01)    1.42 (2.75)   1
(4) Asset str.     0.1 (0.74)    0.02 (0.15)   0.12 (1.78)   1
(5) Size           0.17 (3.36)   0.11 (2.07)   0.54 (5.47)   0.15 (6.24)   1
(6) Profitability  0.08 (1.52)   0.19 (4.37)   0.39 (3.11)   0.65 (5.99)   0.42 (6.69)   1
(7) Volatility     0.03 (1.24)   0.01 (0.62)   0.04 (1.74)   0.06 (4.42)   0 (0.02)      0.11 (4.79)    0.04 (10.37)
(8) Industry dummy 0.02 (1.19)   0.02 (0.97)   0.07 (2.28)   0.06 (4.05)   0.03 (0.30)   0.12 (5.46)    0.04 (7.04)   0.04 (7.88)

NDTS = non-debt tax shields. The bold numbers are significant at the 5 % level; t-statistics are in parentheses
Table 60.5 Estimates of structural coefficients

Debt measure   Growth    Uniqueness   NDTS     Asset str.   Size     Profitability   Volatility   Industry dummy
LT_MVE         0.53      2.32         0.46     0.3          0.3      0.19            0.3          1.19
ST_MVE         0.055     0.0065       0.0034   0.071        0.071    0.026           0.071        0.028
C_MVE          0.094     0.022        0.0073   0.023        0.023    0.011           0.023        0.039
LT_BVE         0.00016   0.034        0.0017   0.37         0.026    0.0092          0.026        0.026
ST_BVE         0.12      0.16         0.17     0.086        0.086    0.0085          0.086        0.015
C_BVE          0.14      0.0047       0.044    0.13         0.13     0.05            0.13         0.045

NDTS = non-debt tax shields. The bold numbers are significant at the 5 % level
matrix Λ in (60.3) are illustrated in Table 60.3, and most coefficients of the independent variables, except operating income over sales (OI_S) and intangible assets over total assets (INT_TA), have the same sign as in the TW paper. However, only five of them are significant. Table 60.4 shows the relationships between the attributes (the latent variables). Compared with the TW results, only four relations are significantly the same: the relation between profitability and uniqueness, the relation between non-debt tax shields and size, the relation between profitability and non-debt tax shields, and the relation between asset structure and the industry dummy. Even though the results for the attributes' relations differ from the TW paper, the estimated correlations between attributes in TW are reported without t-statistics, so we cannot conclude which results represent the correct and convincing relationships between the latent variables. Moreover, there are only two significant estimates of structural coefficients (uniqueness and asset structure) in Table 60.5. The results inconsistent with the TW paper may stem from several reasons. First, insignificant and incorrectly measured latent variables may distort the outcome of the structural model. In addition, too many latent variables and the lack of indicators with unique weights corresponding to their attributes may also cause the weak results (Maddala and Nimalendran 1996). Another conjecture is that sampling fluctuation during the financial crisis may aggravate the problem of measurement error. Although the estimates of the structural coefficients appear very weak, the evidence of a negative relationship between the debt ratio and uniqueness is consistent with the statement of Titman (1984) that high liquidation costs are imposed on the customers, workers, and suppliers of firms with highly unique products. In addition, the evidence that the asset structure attribute is negatively related to debt ratios corresponds to the supposition of Grossman and Hart (1982), who indicate that, in order to avoid the threat of bankruptcy and close monitoring, managers in firms with higher debt are less likely to consume excessive perquisites.
60.5 Conclusion
This chapter utilizes the structural equation modeling (SEM) approach to estimate the impact of unobservable attributes on capital structure. We use a sample period, 2002–2010, of the same length as that in the TW paper to investigate whether the influences of accounting factors on capital structure are consistent with TW's results and whether the important factors are in line with the previous literature. In the SEM framework, the debt ratios serve as indicators of capital structure choice and represent the dependent variables, while observable accounting data from the financial statements are used to calculate the indicators of the attributes that form the latent variables. Compared with the results of TW, our empirical work still cannot support the influence of most attributes on the capital structure decision. The main reason for the weak findings in both our chapter and the TW paper is the large number of latent variables, as indicated in Maddala and Nimalendran (1996). Another possible reason is sampling fluctuation during the financial crisis, which may cause serious measurement
error in the SEM approach. However, our empirical results show a significantly negative relationship between the debt ratio and uniqueness, which is consistent with Titman (1984) and with the TW results. Finally, in contrast to the unclear and insignificant relationship between the asset structure attribute and debt ratios in the TW paper, our finding of a significant negative relationship supports the argument of Grossman and Hart (1982) that firms with fewer collateral assets may choose higher debt levels to limit managers' consumption of perquisites.
Appendix: Codes of Structure Equation Modeling (SEM) in LISREL System

SEM Model-Titman and Wessels Paper
Observed Variables: LT_MVE ST_MVE C_MVE LT_BVE ST_BVE C_BVE GTA CE_TA RD_S SE_S D_TA NDT_TA INT_TA IGP_TA LnS OI_TA OI_S SIGOI IDUM
Covariance Matrix from File TW0904.COV
Asymptotic Covariance Matrix from File TW0904.ACM
Sample Size: 125
Latent Variables: Growth Uniqueness Non_Debt_Tax_Shields Asset_Structure Size Profitability Volatility Industry_Dummy
Relationships:
LT_MVE = Growth Uniqueness Non_Debt_Tax_Shields Asset_Structure Size Profitability Volatility Industry_Dummy
ST_MVE = Growth Uniqueness Non_Debt_Tax_Shields Asset_Structure Size Profitability Volatility Industry_Dummy
C_MVE = Growth Uniqueness Non_Debt_Tax_Shields Asset_Structure Size Profitability Volatility Industry_Dummy
LT_BVE = Growth Uniqueness Non_Debt_Tax_Shields Asset_Structure Size Profitability Volatility Industry_Dummy
ST_BVE = Growth Uniqueness Non_Debt_Tax_Shields Asset_Structure Size Profitability Volatility Industry_Dummy
C_BVE = Growth Uniqueness Non_Debt_Tax_Shields Asset_Structure Size Profitability Volatility Industry_Dummy
Growth = GTA CE_TA RD_S
Uniqueness = RD_S SE_S
Non_Debt_Tax_Shields = D_TA NDT_TA
Asset_Structure = INT_TA IGP_TA
Size = LnS
Profitability = OI_TA OI_S
Volatility = 1.0*SIGOI
Industry_Dummy = 1.0*IDUM
Set the Error Variance of SIGOI to 0.0
Set the Error Variance of IDUM to 0.0
LISREL Output: PS = SY,FR TD = DI,FR ND = 3 SL = 0.05 SC SE SS TV AL EF RS MI
Path Diagram
End of Problem
References
Bevan, A. A., & Danbolt, J. (2002). Capital structure and its determinants in the UK – A decompositional analysis. Applied Financial Economics, 12, 159–170.
Bowen, R. M., Daley, L. A., & Huber, C. C., Jr. (1982). Evidence on the existence and determinants of inter-industry differences in leverage. Financial Management, 4, 10–20.
Chang, C., Lee, A., & Lee, C. F. (2009). Determinants of capital structure choice: A structural equation modeling approach. The Quarterly Review of Economics and Finance, 49, 197–213.
DeAngelo, H., & Masulis, R. (1980). Optimal capital structure under corporate and personal taxation. Journal of Financial Economics, 8, 3–27.
Fama, E. F., & French, K. R. (2002). Testing tradeoff and pecking order predictions about dividends and debt. Review of Financial Studies, 15, 1–33.
Grossman, S., & Hart, O. (1982). Corporate financial structure and managerial incentives. In The economics of information and uncertainty. Chicago: University of Chicago Press.
Jensen, M. C., & Meckling, W. H. (1976). Theory of the firm: Managerial behavior, agency costs and ownership structure. Journal of Financial Economics, 3, 305–360.
Joreskog, K. G. (1977). Structural equation models in the social sciences: Specification estimation and testing. In P. R. Krishnaiah (Ed.).
Joreskog, K. G., & Goldberger, A. S. (1975). Estimation of a model with multiple indicators and multiple causes of a single latent variable. Journal of the American Statistical Association, 70, 631–639.
Joreskog, K. G., & Sorbom, D. (1981). LISREL V, analysis of linear structural relationships by the method of maximum likelihood. Chicago: National Educational Resources.
Joreskog, K. G., & Sorbom, D. (1989). LISREL 7: A guide to the program and applications (2nd ed.). Chicago: SPSS.
Maddala, G. S., & Nimalendran, M. (1996). Error-in-variables problems in financial models. In Handbook of statistics 14 (pp. 507–528). Elsevier Science Publishers B. V.
Miller, M. (1977). Debt and taxes. Journal of Finance, 32, 261–275.
Myers, S. C. (1977). Determinants of corporate borrowing. Journal of Financial Economics, 5, 146–175.
Myers, S. C., & Majluf, N. (1984). Corporate financing and investment decision when firms have information investors do not have. Journal of Financial Economics, 13, 187–221.
Titman, S. (1984). The effect of capital structure on a firm's liquidation decision. Journal of Financial Economics, 13, 137–151.
Titman, S., & Wessels, R. (1988). The determinants of capital structure choice. Journal of Finance, 43, 1–19.
Yang, C. C., Lee, C. F., Gu, Y. X., & Lee, Y. W. (2010). Co-determination of capital structure and stock returns – A LISREL approach: An empirical test of Taiwan stock markets. The Quarterly Review of Economics and Finance, 50, 222–233.
Evidence on Earning Management by Integrated Oil and Gas Companies
61
Raafat R. Roubi, Hemantha Herath, and John S. Jahera Jr.
Contents
61.1 Introduction ............................................................... 1686
61.2 Literature Review ......................................................... 1687
61.3 Empirical Methodology and Data ............................................ 1688
  61.3.1 Integrated Companies ................................................. 1688
  61.3.2 Tools of Earnings Management and Associated Costs .................... 1689
  61.3.3 Measures of Earnings Management ...................................... 1689
  61.3.4 Data ................................................................. 1690
  61.3.5 Data Items ........................................................... 1690
61.4 Empirical Findings ........................................................ 1690
  61.4.1 Does Management Employ Total Accruals to Manage Earnings? ............ 1690
  61.4.2 Does Management Employ Special Items to Manage Earnings? ............. 1691
61.5 Conclusion ................................................................ 1692
Appendix: Methodology ........................................................... 1692
  Detecting Earnings Management ................................................ 1692
References ...................................................................... 1694
Abstract
The objective of this chapter is to demonstrate specific test methodology for the detection of earnings management in the oil and gas industry. The study utilizes several parametric and nonparametric statistical methods to test for such earnings management. The oil and gas industry was chosen given earlier evidence that such firms manage earnings in order to soften the public reaction to the significant price swings that occur in oil and gas prices. In this chapter, our
R.R. Roubi (*) • H. Herath Department of Accounting, Faculty of Business, Brock University, St. Catharines, ON, Canada e-mail: [email protected]; [email protected] J.S. Jahera Jr. Department of Finance, College of Business, Auburn University, Auburn, AL, USA e-mail: [email protected]; [email protected] C.-F. Lee, J. Lee (eds.), Handbook of Financial Econometrics and Statistics, DOI 10.1007/978-1-4614-7750-1_61, # Springer Science+Business Media New York 2015
focus is on total accruals as the primary means of earnings management. The prevailing view is that total accruals account for a greater amount of earnings management and should be more readily detected. The model considered is the Jones model (Journal of Accounting Research 29, 193–228, 1991), which projects the expected level of discretionary accruals. By comparing actual versus projected accruals, we are able to compute the total unexpected accruals. Next, we correlate unexpected total accruals with several difficult-to-manipulate indicators that reflect a company's level of activities. The significant positive correlations between unexpected total accruals and these variables indicate that oil and gas firms do not manage income before extraordinary items and discontinued operations. A second test focuses on the possible use of special items to reduce reported net income by comparing mean levels of several special items pre-2008 and in 2008. The test results indicate a significant difference between the 2008 means and the pre-2008 period. Keywords
Earnings management • Jones model (1991) • Discretionary accruals • Income from operations • Nonrecurring items • Special items • Research and development expense • Write-downs • Political cost • Impression management • Oil and gas industry
61.1 Introduction
Repeated oil crises have often resulted in above-normal profits for integrated oil and gas companies. Such above-normal profits typically attract attention in the news media and consequently in the political environment. In this circumstance, the political environment in North America typically produces a short-lived threat of higher taxes and/or regulations. In the latest crude oil price hike in 2008, when prices reached $140 per barrel, oil and gas companies reported unusually high profits. This was true even though, by the end of 2008, prices were about 50 % of their peak. Reporting high income numbers represents a short-term threat that focuses public attention on these companies for a period of time until consumers adapt to new and higher prices (i.e., $2.89 per gallon in 2010 is well above $1.90 per gallon in 2006–2007, but significantly lower than nearly $5.00 per gallon in 2008–2009). The agency model predicts that, faced with higher taxes and stricter regulations, management tends to use accounting accruals and/or special items to decrease reported income. Because all or some of these accruals may reverse in future years, management has to time its response to such crises in order to influence public opinion. Thus, the purpose of earnings management by the management of these companies is to buy time until public attitudes adjust to the new pricing and profitability levels. Earnings management is by no means a tool that can be used over a long period (i.e., multiple years), given the nature of financial accounting accruals (i.e., firms cannot decrease sales by delaying revenue
recognition over several years or keep reporting higher depreciation expense year after year). It is, however, an important issue for both regulators and the accounting profession, particularly auditors.
61.2 Literature Review
The earnings management literature can be broadly divided into two categories: (i) accrual management and (ii) operating decisions (real activity manipulation or economic earnings management). Accruals management relies primarily on the accounting choices available to management under generally accepted accounting practices (GAAP) that obscure true economic performance (Dechow and Skinner 2000). On the contrary, real earnings management takes place when managers intentionally change the timing or structuring of an operation, investment, or financing transaction to influence the output of an accounting system (Gunny 2010). Accruals management does not have any direct cash flow consequences, as it is the manipulation of accounting numbers by using different accounting procedures and/or revising some specific accounting items to obtain the desired reported earnings. In real earnings management, managers tend to take actions that affect cash flows and eventually earnings (Gupta et al. 2010). Overproduction by firms building up inventories when demand is falling allows management to report increased earnings because GAAP mandates the use of absorption or full costing for reporting purposes. Much has been written in the area of earnings management. Earnings management may arise in several contextual settings where it can be identified that managers' incentive to manage earnings is large (Healy and Wahlen 1999; Marquardt and Weidman 2004a, among others). The empirical research has investigated many different incentives and settings conducive to earnings management. Some of the settings investigated in Marquardt and Weidman (2004a) include the following: (1) equity offerings, where the motivation to manage earnings around equity offerings is viewed as increasing the stock price to benefit the firm; (2) management buyouts, which have the opposite goal of reducing the stock price; and (3) firms attempting to avoid earnings decreases. In general, as indicated above, earnings management can be either income increasing or income decreasing. The incentives as per Healy and Wahlen (1999) include the following: (1) Capital market expectations and valuation. Here the argument made is that the widespread use of accounting information by investors and financial analysts to help value stock can create an incentive for managers to manipulate earnings. (2) Contracts written in terms of accounting numbers. Compensation contracts which are based on accounting numbers are used to align the incentives of management with those of external stakeholders. Watts and Zimmerman (1979) argue that contracts create incentives for earnings management, because it is costly for compensation committees and creditors to "undo" earnings management. (3) Antitrust or other government regulation. The idea is that accounting discretion is used to manage industry-specific regulatory constraints.
In a more recent study, Ang (2011) examines the effect of the Sarbanes-Oxley Act (SOX), specifically Section 404, on earnings management among large firms as defined by the Fortune 500. Ang's findings are that firms tend to filter the information that investors receive as well as disguise the true earnings of the firm. Overall, Ang concludes that earnings management activity was diminished with the enactment of Sarbanes-Oxley. In another recent study, Ota (2011) examines earnings forecasts in a global setting, more specifically for Japanese firms. This study considers earnings management and its determinants, with the conclusion that issues related to distress, growth, size, as well as prior forecasting errors are all related to earnings management in Japan. A further part of this study concludes that analysts are indeed aware of earnings management practices and take such practices into consideration when preparing their own independent forecasts of earnings. Habib and Hansen (2008) provide an updated literature review regarding earnings management. Their work essentially extends and updates the earlier work of Healy and Wahlen. In an earlier work, Burgstahler and Dichev (1997) consider the distribution of earnings changes and find a lower frequency of decreases in earnings but higher frequencies of increases. The implication, of course, is that management seeks to avoid such decreases in reported earnings. Earnings management has potential costs, which may be high or low depending on the situation and which can be grouped into (1) the costs of detected earnings management and (2) the costs of undetected earnings management (Marquardt and Wiedman 2004b). If a firm's use of earnings management becomes publicly known through the release of SEC enforcement actions, earnings restatements, shareholder litigation, a qualified audit opinion, or negative business coverage in the press, it is termed detected earnings management. In undetected earnings management, there is no obvious public announcement of its occurrence. In terms of the research methodologies employed in earnings management studies, work by McNichols (2000) critically reviews several approaches from an empirical standpoint. In summary, McNichols considers three approaches typically seen in such analyses. Her work contends that further research should consider alternative specifications as opposed to the more traditional aggregate accrual models.
61.3 Empirical Methodology and Data
61.3.1 Integrated Companies

Integrated oil and gas companies are in the public spotlight whenever there is a spike in consumer prices. Other companies within the industry (e.g., refineries, transportation including pipelines, equipment, and service companies) may or may not report higher profits but are not subject to the same degree of media scrutiny and criticism (i.e., price gouging). In addition, large integrated companies have the resources needed to manage their earnings. Smaller oil and gas producers, with
limited resources and limited operations, may find it more difficult to be involved in earnings management. At the time this study was conducted, we considered 2008 to be a target year for earnings management. Our contention is based on the fact that crude oil prices reached a record $140 per barrel in 2008.
61.3.2 Tools of Earnings Management and Associated Costs

Tools of earnings management may include total accruals (A/R, inventory, accounts payable, and depreciation). Firms may also use special items, including write-downs, restructuring charges, and discontinued operations. Companies involved in earnings management are always subject to scrutiny by regulatory agencies (i.e., the SEC), external auditors, and financial analysts, as well as media attention. Management of such firms has to judge each situation to determine whether it is wise to get involved in this type of activity. According to Marquardt and Wiedman (2004b), the use of special items carries a small cost if discovered, while the management of revenue (A/R) carries a very high cost if discovered. In the current study, we do not presume that management is biased for or against any tool. The argument we use is that a significant reduction of an extremely high profitability figure may prove appropriate. Also, the nature of the industry and the diversity of these companies' businesses (i.e., search, exploration, development, production, transportation, and marketing of oil and gas products) allow a very wide range of possibilities for the management of these companies to influence reported income. For instance, assessing goodwill for impairment provides an excellent opportunity to drive reported profits down because of the conservative nature of impairment losses. Likewise, accelerating the process of expensing research and development costs is another conservative accounting tool that would not raise auditors' concerns but could still affect reported profit. Also, the use of special items such as an upward revision of the provision for site restoration and environmental protection is a very strong tool in management's hands to drive down reported profits.
61.3.3 Measures of Earnings Management

A complete discussion of the empirical methodology is included in the Appendix. In general, the detection of earnings management is based on a variety of measures including (for more details, please see Ronen and Yaari 2008) serial correlation of income streams, the standard deviation of earnings relative to the standard deviation of cash flows, and the correlation between discretionary accruals and the change in operating cash flows. In this chapter, our focus is on total accruals as the primary means of earnings management. The prevailing view is that total accruals account for a greater amount of earnings management and should be more readily detected. The model considered is the Jones model (1991), which projects the expected level of discretionary accruals.
61.3.4 Data

The sample is drawn from the North America COMPUSTAT – Fundamentals Annual through the Wharton Research Database and consists of international integrated oil and gas companies that carry a GSUBIND code 10102010. The initial sample consists of 1,428 annual observations covering the period 1950–2010. Companies included in the final sample of 836 annual observations met the following conditions: (1) information is available for the 2008 event year, (2) there is a minimum of 6 consecutive annual observations per company, and (3) all data needed for analysis are available. After excluding out-of-range unusable observations, the final sample consists of 28 integrated oil and gas companies from Argentina, Austria, Brazil, Canada, China, France, Italy, Kazakhstan, the Netherlands, Norway, Russia, South Africa, Spain, the UK, and the USA.
61.3.5 Data Items

The following data items (COMPUSTAT – Fundamentals Annual) were collected for all firms with the GSUBIND code 10102010 and used in the empirical part of this study. The following mnemonics are collected:

AT      Total assets
GDWLIP  Goodwill impairment (pretax)
IB      Income before extraordinary items
OANCF   Cash flow from operations – net
PPEGT   Property, plant, and equipment – gross
SALE    Sales – net
SPIOP   Special items – pretax
XIDO    Extraordinary items and discontinued operations
XRD     Research and development expense
WDP     Write-downs – pretax
61.4 Empirical Findings
61.4.1 Does Management Employ Total Accruals to Manage Earnings?

The presence of earnings management can be detected by observing a negative correlation between unexpected total accruals (UTACC2008) and the change in operating cash flows for the same year (ΔOANCF2008) (see Leuz et al. 2003). Table 61.1 provides parametric and nonparametric correlation coefficients for UTACC2008 and several variables, including ΔOANCF2008. The statistics are presented for all companies, North American companies, and non-North American companies. The results presented indicate no evidence of
Table 61.1 Correlation coefficients for UTACC2008

Sample                     ΔOANCF2008   OANCF2007   OANCF2008   SALES2008
All observations (28)
  Pearson corr.            0.020        0.120       0.097       0.102
  Kendall's tau            0.058        0.058       0.101       0.122
  Spearman's Rho           0.096        0.102       0.123       0.193
North American (14)
  Pearson corr.            0.427*       0.245       0.284       0.295
  Kendall's tau            0.275*       0.231       0.319*      0.341*
  Spearman's Rho           0.473**      0.314       0.455**     0.504**
Non-North American (14)
  Pearson corr.            0.075        0.159       0.096       0.098
  Kendall's tau            0.011        0.011       0.033       0.033
  Spearman's Rho           0.051        0.064       0.055       0.064

*, **, *** significant at the 10 %, 5 %, and 1 % level, respectively
Table 61.2 T-test results – special items (2008 versus pre-2008)

Item     Obs. 2008   Obs. pre-2008   Mean 2008   Mean pre-2008   T-value   One-tail significance
SPI      24          660             1,492.30    48.96           5.419     0.000
WDP      4           32              1,862.47    410.39          1.994     0.027
GDWLIP   5           7               5,507.40    328.10          1.250     0.120
XRD      13          415             535.33      200.20          5.075     0.000
earnings management. In fact, the significant positive correlation coefficients between UTACC2008 and ΔOANCF2008 suggest that North American integrated oil and gas companies actually do not use total accruals to manage earnings.
61.4.2 Does Management Employ Special Items to Manage Earnings?

Using special items to manage earnings (1) is less costly, since, unlike the manipulation of revenues and/or operating expenses, which are closely monitored by auditors, special items attract less scrutiny, and (2) has no reversal problem, since the timing of write-downs and/or spending is under management's control. The results presented in Table 61.2 indicate that special items (SPI), which include goodwill impairment (GDWLIP), in-process research and development (RDIP), restructuring costs (RCP), and other write-downs before taxes, were significantly higher in 2008 ($1,492.30 million) versus an average of $48.96 million for the pre-2008 fiscal years. The t-test value of 5.419 is significant at the 0.000 level. The results in Table 61.2 also provide t-test statistics on some individual components of SPI. For example, the 2008 mean write-down before taxes of $1,862.47 million is significantly higher than the pre-2008 mean write-down of $410.39 million. The t-test statistic here shows a t-value of 1.994, which is significant at the
0.027 level. The component GDWLIP does not reveal a significant increase in the recognition of goodwill impairment, although these companies reported, on average, impairment losses of $5,507.40 million in 2008 compared to mean losses of only $328.10 million in the pre-2008 period. One should note that the individual means do not add up to the total SPI because of the different numbers of observations used in computing the individual components' means. Other components of SPI are missing either because they are not available in the database or because they do not display significant t-values. Research and development expenditures represent another discretionary item that management is able to control (see Shehata 1991). As the results reported in Table 61.2 reveal, average spending on research and development (XRD) is approximately $535.33 million in 2008 compared to a pre-2008 average of $200.20 million. The t-test statistic indicates a t-value of 5.075, significant at the 0.000 level. Given the results reported in Table 61.2, one may conclude that the management of integrated oil and gas companies does indeed employ special items as a tool to manage earnings.
61.5 Conclusion
Earnings management, which is an extensively researched topic in accounting, can be either income increasing or income decreasing. While there are several motivations for managing earnings, and the associated costs depend on whether earnings management is detected through public announcements or not, there are implications for incentive design and standard setting in a firm. In this chapter we explore income-decreasing earnings management by integrated oil and gas companies. Our findings provide evidence that integrated oil and gas companies do indeed use special items to manage earnings downwards. While there could be several reasons, one main reason is impression management. Although we selected earnings management as our topic, our primary objective in this chapter is to demonstrate the important application of standard statistical analysis to the issue of detecting earnings management practices.
Appendix: Methodology

Detecting Earnings Management

The paper uses the Jones model (1991) as the basis for projecting the expected level of discretionary accruals. The steps are as follows:
Step 1: Define total accruals (TACC) as income (NI) net of extraordinary items and discontinued operations (EOI), less cash flows from operations (OCF). Compute TACC_it = NI_it − EOI_it − OCF_it for each year (t) for firm (i).
Step 2: Divide the available data into two periods, namely, the estimation period t = 1, . . ., T_i and the prediction period p = 1, . . ., P.
Step 3: Use ordinary least squares to obtain the coefficient estimates a_i, b_i, and c_i of α_i, β_i, and γ_i, respectively. The linear regression expectation model (Jones 1991) for total accruals, after controlling for changes in the economic circumstances of the firm, is

TACC_it / TA_it−1 = α_i (1 / TA_it−1) + β_i (ΔREV_it / TA_it−1) + γ_i (PPE_it / TA_it−1) + e_it      (61.1)

where
TACC_it   is the total accruals in year (t) for firm (i)
ΔREV_it   is the revenues in year (t) less revenues in year (t − 1) for firm (i)
PPE_it    is the gross property, plant, and equipment in year (t) for firm (i)
TA_it−1   is the total assets in year (t − 1) for firm (i)
e_it      is the error term in year (t) for firm (i)
i = 1, . . ., N   is the firm index (N = XX)
t = 1, . . ., T_i is the year index for the years included in the estimation period for firm (i), where T_i ranges between 6 and 22 years.
The gross property, plant, and equipment and the change in revenue are included to control for changes in nondiscretionary accruals due to changing conditions. All variables in the accruals expectation model are scaled by lagged assets to reduce heteroscedasticity. The lagged assets are assumed to be positively associated with the variance of the disturbance term. Model (61.1) was used to calculate the coefficients a_i, b_i, and c_i for the estimation period ending in 2007. This procedure is conducted on a company-by-company basis and produced 28 individual models that fit the TACC history of each of the sample companies. Each of these models is used in step 4 below to estimate TACC for the event year 2008 for each of the 28 firms.
Step 4: Compute the discretionary accruals for the event year (2008) for firm (i) as follows. The coefficients obtained from running model (61.1) are used to predict TACC_2008 for each of the 28 sample firms. This procedure produces an estimated ETACC_2008, which is compared to the actual ATACC_2008. The comparison yields the unexpected total accruals (UTACC_2008) for the event year:

UTACC_2008 = ETACC_2008 − ATACC_2008      (61.2)

where
UTACC_2008   is the unexpected total accruals for the event year 2008
ETACC_2008   is the estimated expected total accruals for the event year 2008
ATACC_2008   is the actual total accruals for the event year 2008.
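As an illustration of Steps 3–4, a minimal Python sketch (hypothetical column names; statsmodels assumed) fits the Jones model for one firm on its estimation period and returns the unexpected total accruals for the event year.

import pandas as pd
import statsmodels.api as sm

def jones_utacc(firm: pd.DataFrame, event_year: int = 2008) -> float:
    """Fit the Jones (1991) model on the pre-event years and return UTACC for the event year.

    `firm` is assumed to hold one firm's annual series with columns
    year, TACC, REV, PPE, and lagged total assets TA_LAG (hypothetical names).
    """
    est = firm[firm["year"] < event_year]
    X = pd.DataFrame({
        "inv_ta":  1.0 / est["TA_LAG"],                 # 1 / TA_{t-1}
        "drev_ta": est["REV"].diff() / est["TA_LAG"],   # change in revenue / TA_{t-1}
        "ppe_ta":  est["PPE"] / est["TA_LAG"],          # gross PPE / TA_{t-1}
    }).dropna()
    y = (est["TACC"] / est["TA_LAG"]).loc[X.index]
    fit = sm.OLS(y, X).fit()                            # no intercept: a_i sits on 1 / TA_{t-1}

    ev = firm[firm["year"] == event_year].iloc[0]
    prev_rev = firm.loc[firm["year"] == event_year - 1, "REV"].iloc[0]
    x_ev = [1.0 / ev["TA_LAG"],
            (ev["REV"] - prev_rev) / ev["TA_LAG"],
            ev["PPE"] / ev["TA_LAG"]]
    etacc = float(fit.predict([x_ev])[0]) * ev["TA_LAG"]   # expected TACC, rescaled to levels
    atacc = ev["TACC"]
    return etacc - atacc                                   # UTACC = ETACC - ATACC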
Step 5: Test for earnings management. This study ran several parametric and nonparametric statistics to test for earnings management as follows:
1. Correlations: We measured correlation coefficients (Pearson, Kendall's tau, and Spearman's Rho) to test for a significant correlation between UTACC2008 and ΔOANCF (the change in operating cash flows). A significant negative correlation between the two variables would be a good sign of earnings management, i.e., a higher change in operating cash flows negatively correlates with higher UTACC2008 (Leuz et al. 2003).
2. T-test: We compared mean UTACC2008 for North American companies against all other international companies included in the sample.
3. Regression analysis: We ran the following regression model:

UTACC_2008 = β Country + ξ      (61.3)

where
UTACC_2008   is the unexpected total accruals for 2008
Country      is a partition variable coded 1 for North American companies, 0 otherwise
ξ            is an error term.
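A short sketch of the Step 5 tests follows, assuming a firm-level DataFrame with the hypothetical columns UTACC2008, dOANCF2008, and COUNTRY (1 for North American firms, 0 otherwise).

import pandas as pd
from scipy import stats

def earnings_management_tests(df: pd.DataFrame) -> dict:
    """Correlations between UTACC2008 and the cash-flow change, plus a t-test across regions."""
    pearson  = stats.pearsonr(df["UTACC2008"], df["dOANCF2008"])
    kendall  = stats.kendalltau(df["UTACC2008"], df["dOANCF2008"])
    spearman = stats.spearmanr(df["UTACC2008"], df["dOANCF2008"])

    na     = df.loc[df["COUNTRY"] == 1, "UTACC2008"]   # North American firms
    non_na = df.loc[df["COUNTRY"] == 0, "UTACC2008"]
    ttest  = stats.ttest_ind(na, non_na, equal_var=False)
    return {"pearson": pearson, "kendall": kendall, "spearman": spearman, "ttest": ttest}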
Step 6: Testing for the use of special items to manage earnings. This study assumes that integrated oil and gas companies will find it easier and less costly to manage earnings using special items such as SPI, WDP, GDWLIP, and XRD. A t-test is used to compare the 2008 fiscal-year mean of each of these items with its pre-2008 mean. Unlike total accruals, which require a fitting period and an event period, the expected level of special items is always zero (Marquardt and Wiedman 2004a). Consequently, the unexpected amount of any of the above items is

USPI_2008 = SPI − 0,  UWDP_2008 = WDP − 0,  UGDWLIP_2008 = GDWLIP − 0,  UXRD_2008 = XRD − 0

where
USPI_2008      is unexpected special items for fiscal year 2008
UWDP_2008      is unexpected write-downs for fiscal year 2008
UGDWLIP_2008   is unexpected goodwill impairment loss for fiscal year 2008
UXRD_2008      is unexpected research and development for fiscal year 2008.
References
Ang, R. (2011). Sarbanes-Oxley effectiveness on the earnings management. Available at SSRN: http://ssrn.com/abstract=1791766
Burgstahler, D., & Dichev, I. (1997). Earnings management to avoid earnings decreases and losses. Journal of Accounting and Economics, 24, 99–126.
Dechow, P. M., & Skinner, D. J. (2000). Earnings management: Reconciling the views of accounting academics, practitioners, and regulators. Accounting Horizons, 14, 235–250.
Gunny, K. (2010). The relation between earnings management using real activities manipulation and future performance: Evidence from meeting earnings benchmarks. Contemporary Accounting Research, 27, 855–888.
Gupta, M., Pevzner, M., & Seethamraju, C. (2010). The implications of absorption cost accounting and production decisions for future firm performance and valuation. Contemporary Accounting Research, 27, 889–922.
Habib, A., & Hansen, J. (2008). Target shooting: Review of earnings management around earnings benchmarks. Journal of Accounting Literature, 27, 25–70.
Healy, P., & Wahlen, J. (1999). A review of the earnings management literature and its implications for standard setting. Accounting Horizons, 13, 365–383.
Jones, J. (1991). Earnings management during import relief investigations. Journal of Accounting Research, 29, 193–228.
Leuz, C., Nanda, D., & Wysocki, P. (2003). Earnings management and investor protection: An international comparison. Journal of Financial Economics, 69, 505–527.
Marquardt, C., & Wiedman, C. (2004a). How are earnings managed? An examination of specific accruals. Contemporary Accounting Research, 21, 461–491.
Marquardt, C., & Wiedman, C. (2004b). The effect of earnings management on the value relevance of accounting information. Journal of Business, Finance and Accounting, 31, 297–332.
McNichols, M. (2000). Research design issues in earnings management studies. Journal of Accounting and Public Policy, 19, 313–345.
Ota, K. (2011). Analysts' awareness of systematic bias in management earnings forecasts. Applied Financial Economics, 21, 1317–1330.
Ronen, J., & Yaari, V. (2008). Earnings management: Emerging insights in theory, practice, and research. Springer series in accounting scholarship, New York.
Shehata, M. (1991). Self selection bias and the economic consequences of accounting regulations: An application of two-stage switching regression to SFAS no. 2. The Accounting Review, 66, 768–787.
Watts, R. L., & Zimmerman, J. L. (1979). The demand for and supply of accounting theories: The market for excuses. Accounting Review, 54, 273–306.
A Comparative Study of Two Models SV with MCMC Algorithm
62
Ahmed Hachicha, Fatma Hachicha, and Afif Masmoudi
Contents
62.1 Introduction ............................................................... 1698
62.2 The Bayesian Approach and the MCMC Algorithm .............................. 1699
  62.2.1 The Metropolis-Hastings .............................................. 1699
  62.2.2 The Gibbs Sampler .................................................... 1700
62.3 The Stochastic Volatility Model ............................................ 1700
  62.3.1 Autoregressive SV Model with Student's Distribution .................. 1700
  62.3.2 Basic Svol Model ..................................................... 1702
62.4 Empirical Illustration ..................................................... 1703
  62.4.1 The Data ............................................................. 1703
  62.4.2 Estimation of SV Models .............................................. 1703
62.5 Conclusion ................................................................. 1708
Appendix 1 ...................................................................... 1709
Appendix 2 ...................................................................... 1717
Appendix 3 ...................................................................... 1717
Appendix 4 ...................................................................... 1717
Appendix 5 ...................................................................... 1717
References ...................................................................... 1717
A. Hachicha (*) Department of Economic Development, Faculty of Economics and Management of Sfax, University of Sfax, Sfax, Tunisia e-mail: [email protected] F. Hachicha Department of Finance, Faculty of Economics and Management of Sfax, Sfax, Tunisia e-mail: [email protected] A. Masmoudi Department of Mathematics, Faculty of Sciences of Sfax, Sfax, Tunisia e-mail: [email protected] C.-F. Lee, J. Lee (eds.), Handbook of Financial Econometrics and Statistics, DOI 10.1007/978-1-4614-7750-1_62, # Springer Science+Business Media New York 2015
Abstract
This paper examines two asymmetric stochastic volatility models used to describe the volatility dependencies found in most financial returns. The first is the autoregressive stochastic volatility model with Student's t-distribution (ARSV-t), and the second is the basic Svol model of JPR (Journal of Business and Economic Statistics 12(4), 371–417, 1994). In order to estimate these models, our analysis is based on the Markov Chain Monte Carlo (MCMC) method. The techniques used are the Metropolis-Hastings algorithm (Hastings, Biometrika 57, 97–109, 1970) and the Gibbs sampler (Casella and George, The American Statistician 46(3), 167–174, 1992; Gelfand and Smith, Journal of the American Statistical Association 85, 398–409, 1990; Gilks et al. 1993). The empirical results for the Standard and Poor's 500 Composite Index (S&P), CAC 40, Nasdaq, Nikkei, and Dow Jones stock price indexes reveal that the ARSV-t model performs better than the Svol model in terms of the mean squared error (MSE) and the maximum likelihood function. Keywords
Autoregression • Asymmetric stochastic volatility • MCMC • MetropolisHastings • Gibbs sampler • Volatility dependencies • Student’s t-distribution • SVOL • MSE • Financial returns • Stock price indexes
62.1 Introduction
Stochastic volatility (SV) models are workhorses for the modelling and prediction of time-varying volatility on financial markets and are essential tools in risk management, asset pricing, and asset allocation. In financial mathematics and financial economics, stochastic volatility is typically modelled in a continuous-time setting, which is advantageous for derivative pricing and portfolio optimization. Nevertheless, since data are typically only observable at discrete points in time, discrete-time formulations of SV models are equally important in empirical applications. Volatility plays an important role in determining the overall risk of a portfolio and in identifying hedging strategies that make the portfolio neutral with respect to market moves. Moreover, volatility forecasting is also crucial in derivatives trading. Recently, SV models allowing the mean level of volatility to "jump" have been used in the literature; see Chang et al. (2007), Chib et al. (2002), and Eraker et al. (2002). The volatility of financial markets is a subject of constant analysis. Movements in the price of financial assets directly affect the wealth of individuals, companies, charities, and other corporate bodies. Determining whether there are any patterns in the size and frequency of such movements, or in their cause and effect, is critical in devising strategies for investment at the micro level and monetary stability at the macro level. Shephard and Pitt (1997) used improved and efficient Markov Chain Monte Carlo (MCMC) methods to estimate the volatility process "in blocks" rather than one point at a time, as in Jacquier et al. (1994), for a simple SV model. Furthermore, Hsu and Chiao (2011)
analyze the time patterns of individual analysts' relative accuracy rankings in earnings forecasts using a Markov chain model that treats two levels of stochastic persistence. Least squares and maximum likelihood techniques have long been used in parameter estimation problems. However, those techniques provide only point estimates with unknown or approximate uncertainty information. Bayesian inference coupled with the Gibbs sampler is an approach to parameter estimation that exploits modern computing technology, and the estimation results come with exact uncertainty information. Section 62.2 presents the Bayesian approach and the MCMC algorithms. The SV models are introduced in Sect. 62.3, whereas empirical illustrations are given in Sect. 62.4.
62.2 The Bayesian Approach and the MCMC Algorithm
The Bayesian approach differs from the classical methodology, in which the parameters are treated as a set of unknown but fixed quantities. In the Bayesian approach, the parameters are considered as random variables with given prior distributions. We then use the observations (the likelihood) to update these distributions and obtain the posterior distributions. Formally, let X = (X1, . . ., XT) denote the observed data and θ a parameter vector:

P(θ/X) ∝ P(X/θ) P(θ)

where P(θ/X) is the posterior distribution of the parameter θ given the observed data X, P(X/θ) denotes the likelihood of X, and P(θ) denotes the prior distribution of θ. It would seem that, in order to be as objective as possible and to rely on the observations as much as possible, one should use priors that are non-informative. However, this can sometimes create degeneracy issues, and one may have to choose a different prior for this reason. Markov Chain Monte Carlo (MCMC) includes the Gibbs sampler as well as the Metropolis-Hastings (M-H) algorithm.
62.2.1 The Metropolis-Hastings

The Metropolis-Hastings is the baseline for MCMC schemes that simulate a Markov chain θ(t) with P(θ/Y) as its stationary distribution, where θ is the parameter vector and Y a stock price index. For example, we can partition θ into θ1, θ2, and θ3 such that θ = (θ1, θ2, θ3), where each θi can be a scalar, a vector, or a matrix. MCMC algorithms are iterative, so at iteration t we sample in turn from the three conditional distributions. Firstly, we update θ1 by drawing a value θ1(t) from p(θ1/Y, θ2(t−1), θ3(t−1)). Secondly, we draw a value θ2(t) from p(θ2/Y, θ1(t), θ3(t−1)), and finally we draw θ3(t) from p(θ3/Y, θ1(t), θ2(t)). We start the algorithm by selecting initial values θi(0) for the three parameters. Then, sampling from the three conditional distributions in turn will produce a set of Markov chains whose equilibrium distribution can be shown to be the joint posterior distribution that we require.
Following Hastings (1970), a generic step of an M-H algorithm to update parameter θi at iteration t is as follows:
1. Sample θi* from the proposal distribution pt(θi/θi(t−1)).
2. Calculate f = pt(θi(t−1)/θi*) / pt(θi*/θi(t−1)), which is known as the Hastings ratio and which equals 1 for symmetric proposals, as used in pure Metropolis sampling.
3. Calculate st = f · p(θi*/Y, φi) / p(θi(t−1)/Y, φi), where φi denotes the remaining parameters; st is the acceptance ratio and gives the probability of accepting the proposed value.
4. Let θi(t) = θi* with probability min(1, st); otherwise let θi(t) = θi(t−1).
A popular and more efficient method is the acceptance-rejection (A-R) M-H sampling method, which is available whenever the target densities are bounded by a density from which it is easy to sample.
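As a generic illustration of steps 1–4 above (not the authors' code), a single M-H block update can be sketched in Python, with the log full conditional and the proposal density passed in as functions:

import numpy as np

def mh_update(theta_i, log_post, proposal_sample, proposal_logpdf, rng):
    """One Metropolis-Hastings update of a single parameter block.

    log_post(theta) returns the log of the full conditional of theta_i;
    proposal_sample / proposal_logpdf define the (possibly asymmetric) proposal density.
    """
    theta_star = proposal_sample(theta_i)                                               # step 1: propose
    log_f = proposal_logpdf(theta_i, theta_star) - proposal_logpdf(theta_star, theta_i)  # step 2: Hastings ratio (log)
    log_s = log_f + log_post(theta_star) - log_post(theta_i)                             # step 3: acceptance ratio (log)
    if np.log(rng.uniform()) < min(0.0, log_s):                                          # step 4: accept with prob. min(1, s)
        return theta_star
    return theta_i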
62.2.2 The Gibbs Sampler

The Gibbs sampler (Casella and George 1992; Gelfand and Smith 1990; Gilks et al. 1992) is the special M-H algorithm whereby the proposal density for updating θj equals the full conditional p(θj*/θ−j), so that proposals are accepted with probability 1. The Gibbs sampler involves parameter-by-parameter or block-by-block updating, which when completed forms the transition from θ(t) to θ(t+1):
1. θ1(t+1) ~ f1(θ1/θ2(t), θ3(t), . . ., θD(t))
2. θ2(t+1) ~ f2(θ2/θ1(t+1), θ3(t), . . ., θD(t))
. . .
D. θD(t+1) ~ fD(θD/θ1(t+1), θ2(t+1), . . ., θD−1(t+1))
Repeated sampling from M-H samplers such as the Gibbs sampler generates an autocorrelated sequence of numbers that, subject to regularity conditions (ergodicity, etc.), eventually "forgets" the starting values θ(0) = (θ1(0), θ2(0), . . ., θD(0)) used to initialize the chain and converges to the stationary sampling distribution p(θ/Y). In practice, Gibbs and M-H algorithms are often combined, which results in a "hybrid" MCMC procedure.
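A corresponding minimal Gibbs sweep, assuming each block's full conditional sampler is available as a function, might look as follows (illustrative only):

def gibbs_sampler(full_conditionals, theta0, n_iter, rng):
    """Minimal Gibbs sampler: cycle through the D full conditional samplers each sweep.

    `full_conditionals` is a list of functions; the j-th one draws theta_j given the
    current values of all other blocks and returns the new value.
    """
    theta = list(theta0)
    draws = []
    for _ in range(n_iter):
        for j, sample_j in enumerate(full_conditionals):
            theta[j] = sample_j(theta, rng)   # update block j conditional on the rest
        draws.append(list(theta))
    return draws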
62.3 The Stochastic Volatility Model
62.3.1 Autoregressive SV Model with Student's Distribution

In this paper, we will consider the pth order ARSV-t model, ARSV(p)-t, as follows:

Y_t = σ x_t exp(V_t / 2)
V_t = φ1 V_{t−1} + · · · + φp V_{t−p} + η_t
$$x_t = \frac{\varepsilon_t}{\sqrt{k_t/(\nu - 2)}}, \qquad k_t \sim \chi^2(\nu)$$

where k_t is independent of (ε_t, η_t), Y_t is the stock return for the market indexes, and V_t is the log-volatility, which is assumed to follow a stationary AR(p) process with persistence parameter |φ| < 1. By this specification, the conditional distribution of x_t is the standardized t-distribution with mean zero and variance one. Since k_t is independent of (ε_t, η_t), the correlation coefficient between x_t and η_t is also ρ. If φ ~ N(0, 1), then

$$\phi_1 = \left(\sum_{t=1}^{T} V_t V_{t-1} - \phi_2 \sum_{t=1}^{T} V_{t-1} V_{t-2}\right)\left(\sum_{t=1}^{T} V_{t-1}^2\right)^{-1}$$

and

$$\phi_2 = \left(\sum_{t=2}^{T} V_t V_{t-2} - \phi_1 \sum_{t=2}^{T} V_{t-1} V_{t-2}\right)\left(\sum_{t=2}^{T} V_{t-2}^2\right)^{-1}$$

The conditional posterior distribution of the volatility is given by

$$p(V \mid Y, \Theta) \propto \exp\!\left(-\frac{1}{2}\sum_{t=1}^{T} Y_t^2 e^{-V_t} - \frac{1}{2\sigma^2}\sum_{t=1}^{T}\left(V_t - \phi_1 V_{t-1} - \phi_2 V_{t-2}\right)^2 - \frac{1}{2}\sum_{t=1}^{T}\left(V_{t+1} - \phi_1 V_t - \phi_2 V_{t-1}\right)^2\right)$$

The representation of the SV-t model in terms of a scale mixture is particularly useful in an MCMC context, since it allows a non-log-concave sampling problem to be transformed into a log-concave one. This permits sampling algorithms that guarantee convergence in finite time (see Frieze et al. 1994). Allowing log returns to be Student-t distributed naturally changes the behavior of the stochastic volatility process; in the standard SV model, large values of |Y_t| induce large values of V_t.
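A minimal simulation sketch of the ARSV(2)-t model as reconstructed above, using the chi-square scale-mixture representation of the Student-t innovation; the parameter values, and the omission of the leverage correlation ρ between x_t and η_t, are simplifying assumptions for illustration only.

```python
import numpy as np

def simulate_arsv2_t(T, phi1, phi2, sigma, nu, rng):
    """Simulate the ARSV(2)-t model:
        Y_t = sigma * x_t * exp(V_t / 2),
        V_t = phi1 * V_{t-1} + phi2 * V_{t-2} + eta_{t-1},
    where x_t = eps_t / sqrt(k_t / (nu - 2)) and k_t ~ chi^2(nu),
    so x_t has a standardized Student-t distribution.
    (x_t and eta_t are drawn independently here, i.e. rho = 0.)
    """
    V = np.zeros(T)
    Y = np.zeros(T)
    eta = rng.standard_normal(T)
    for t in range(2, T):
        V[t] = phi1 * V[t - 1] + phi2 * V[t - 2] + eta[t - 1]
        k = rng.chisquare(nu)
        x = rng.standard_normal() / np.sqrt(k / (nu - 2))
        Y[t] = sigma * x * np.exp(V[t] / 2.0)
    return Y, V

rng = np.random.default_rng(2)
Y, V = simulate_arsv2_t(T=2_252, phi1=0.45, phi2=0.35, sigma=0.01, nu=8, rng=rng)
print(Y.std(), np.abs(Y).max() / Y.std())   # heavy tails show up in the maximum
```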
62.3.2 Basic Svol Model
Jacquier, Polson, and Rossi (1994), hereafter JPR, introduced the Markov chain Monte Carlo (MCMC) technique for the estimation of the basic Svol model with normally distributed conditional errors:
$$Y_t = \sqrt{V_t}\,\varepsilon_t, \qquad \log(V_t) = \alpha + \delta \log(V_{t-1}) + \sigma_\nu \varepsilon_{\nu t}, \qquad (\varepsilon_t, \varepsilon_{\nu t})' \sim N(0, I_2)$$
Let Θ = (α, δ, σ_ν) be the vector of parameters of the basic SVOL, and V = (V_t), t = 1, ..., T, where α is the intercept. The parameter vector consists of a location α, a volatility persistence δ, and a volatility of volatility σ_ν. The basic Svol specifies zero correlation between the errors of the mean and variance equations. Briefly, the Hammersley-Clifford theorem states that, given a parameter set Θ, a state V_t, and an observation Y_t, we can obtain the joint distribution p(Θ, V | Y) from the conditionals p(Θ | V, Y) and p(V | Θ, Y), under some mild regularity conditions. Therefore, by applying the theorem iteratively, we can break a complicated multidimensional estimation problem into many simple one-dimensional problems. Creating a Markov chain Θ(t) via a Monte Carlo process, the ergodic averaging theorem states that the time average of a parameter will converge toward its posterior mean. Bayes' formula factorizes the posterior distribution into the likelihood function and the prior hypotheses:

$$p(\Theta, V \mid Y) \propto p(Y \mid V, \Theta)\, p(V \mid \Theta)\, p(\Theta)$$

where α is the intercept, δ the volatility persistence, and σ_ν the standard deviation of the shock to log V_t. We use a normal-gamma prior, so that α, δ ~ N and σ_ν² ~ IG (Appendix 1). Then

$$p(\alpha, \delta \mid \sigma_\nu, V, Y) \propto \prod_t p(V_t \mid V_{t-1}, \alpha, \delta, \sigma_\nu)\, p(\alpha, \delta) \propto N$$

and for σ_ν we obtain

$$p(\sigma_\nu^2 \mid \alpha, \delta, V, Y) \propto \prod_t p(V_t \mid V_{t-1}, \alpha, \delta, \sigma_\nu)\, p(\sigma_\nu^2) \propto IG$$
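For reference, a short sketch that simulates the basic Svol data-generating process above; the parameter values are arbitrary and merely produce a return series on a daily-volatility scale.

```python
import numpy as np

def simulate_svol(T, alpha, delta, sigma_v, rng):
    """Simulate the basic Svol model:
        Y_t = sqrt(V_t) * eps_t,
        log(V_t) = alpha + delta * log(V_{t-1}) + sigma_v * eps_{v,t},
    with (eps_t, eps_{v,t}) ~ iid N(0, I_2).
    """
    log_V = np.zeros(T)
    Y = np.zeros(T)
    log_V[0] = alpha / (1.0 - delta)          # start at the unconditional mean
    for t in range(1, T):
        log_V[t] = alpha + delta * log_V[t - 1] + sigma_v * rng.standard_normal()
        Y[t] = np.sqrt(np.exp(log_V[t])) * rng.standard_normal()
    return Y, np.exp(log_V)

rng = np.random.default_rng(3)
Y, V = simulate_svol(T=2_252, alpha=-0.7, delta=0.92, sigma_v=0.4, rng=rng)
print(Y.std(), V.mean())
```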
62.4 Empirical Illustration
62.4.1 The Data
Our empirical analysis focuses on five international financial indexes: the Dow Jones Industrial, the Nikkei, the CAC 40, the S&P 500, and the Nasdaq. The indexes are compiled and provided by Morgan Stanley Capital International. The returns are defined as y_t = 100 × (log S_t − log S_{t−1}). We used the last 2,252 observations for all indexes except the Nikkei, for which we only used 2,201 observations due to a lack of data. The daily stock market indexes cover five different countries over the period 1 January 2000 to 31 December 2008. Table 62.1 reports the mean, standard deviation, median, and the empirical skewness as well as kurtosis of the five series. All series exhibit negative skewness and excess kurtosis, which is a common finding for financial returns.
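A small sketch of how the return series and the statistics reported in Table 62.1 can be computed; the index-level series here is synthetic, and scipy.stats is assumed to be available for the skewness and kurtosis measures.

```python
import numpy as np
from scipy import stats

# Synthetic index levels standing in for, e.g., the CAC 40 closing prices.
rng = np.random.default_rng(4)
S = 4_000 * np.exp(np.cumsum(0.013 * rng.standard_normal(2_253)))

# Returns as defined in the text: y_t = 100 * (log S_t - log S_{t-1}).
y = 100.0 * np.diff(np.log(S))

summary = {
    "Mean": y.mean(),
    "SD": y.std(ddof=1),
    "Median": np.median(y),
    "Skewness": stats.skew(y),
    "Kurtosis": stats.kurtosis(y, fisher=False),   # raw (non-excess) kurtosis
}
for name, value in summary.items():
    print(f"{name:>8s}: {value: .4f}")
```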
62.4.2 Estimation of SV Models
The standard SV model is estimated by running the Gibbs and A-R M-H algorithms based on 15,000 MCMC iterations, of which 5,000 iterations are used as the burn-in period. Tables 62.2 and 62.3 show the estimation results for the basic Svol model and the SV-t model for the daily indexes. The priors on α and δ are independent. The prior on δ is essentially flat over [0, 1]. We impose stationarity for log(V_t) by truncating the prior of δ. Other priors for δ are possible: Geweke (1994a, b) proposes alternative priors that allow the formulation of odds ratios for non-stationarity, whereas Kim et al. (1998) center an informative Beta prior around 0.9.

Table 62.1 Summary statistics for daily returns

            CAC 40    Dow Jones   Nasdaq    Nikkei    S&P
Mean        3.7e-04   2.8e-04     2.5e-04   3.5e-04   2.8e-04
SD          0.013     0.015       0.014     0.005     0.008
Median      5.0e-04   4.0e-04     5.5e-04   3.2e-04   4.5e-04
Skewness    -0.295    -0.368      -0.523    -0.698    -0.523
Kurtosis    5.455     4.522       6.237     3.268     5.659
Table 62.2 Estimation results for the Svol model

            σ_ν              α                δ
CAC 40      0.4317(0.0312)   0.1270(0.0421)   0.7821(0.0621)
Dow Jones   0.4561(0.0421)   0.0059(0.0534)   0.0673(0.0317)
Nasdaq      0.5103(0.0393)   0.1596(0.0332)   0.6112(0.0429)
Nikkei      0.5386(0.0523)   0.1966(0.0493)   0.8535(0.0645)
S&P         0.4435(0.0623)   0.1285(0.0593)   0.7224(0.0423)
Table 62.3 Estimation results for the SV-t model

            Φ1                Φ2               σ                ρ
CAC 40      0.4548(0.0037)    0.5544(0.1524)   0.0154(0.0294)   0.02191(0.0625)
Dow Jones   0.40839(0.0021)   0.6437(0.1789)   0.0205(0.0367)   0.0306(0.0346)
Nasdaq      0.5225(0.0065)    0.4473(0.1326)   0.0131(0.0524)   0.0489(0.0498)
Nikkei      0.4348(0.0059)    0.4865(0.1628)   0.0148(0.0689)   0.0751(0.0255)
S&P         0.2890(0.0046)    0.6133(0.1856)   0.0135(0.0312)   0.0235(0.0568)
Table 62.2 shows the results for the daily indexes. The posteriors of δ are high for the daily series. The posterior means are 0.782, 0.067, 0.611, 0.85, and 0.722 for the CAC 40, Dow Jones, Nasdaq, Nikkei, and S&P, respectively, the highest being for the full-sample Nikkei. This result is not a priori surprising, because the model of Jacquier et al. (1994) can lead to biased volatility forecasts. As with the basic SVOL, there is no apparent evidence of a unit root in volatility. Other factors, such as the exchange rate, can also affect this estimate (O'Brien and Dolde 2000). Under this model, and against the empirical evidence, positive and negative shocks have the same effect on volatility. Table 62.3 shows the Metropolis-Hastings estimates of the autoregressive SV model. The estimates of φ are between 0.554 and 0.643, while those of σ are between 0.15 and 0.205. In contrast, the posteriors of φ for the SV-t model are located higher.1 This is consistent with temporal aggregation (as suggested by Meddahi and Renault 2000). This result confirms the typical persistence reported in the GARCH literature. According to the results, the first volatility factor has higher persistence, while the small values of Φ2 indicate the low persistence of the second volatility factor. The second factor Φ2 plays an important role in the sense that it captures extreme values, which may produce the leverage effect, and can therefore be considered plausible. The estimates of ρ are negative in most cases. Another point to note is that these estimates are relatively higher than those observed by Asai et al. (2006) and Asai (2008). The estimate of ρ for the S&P index is −0.3117 using Monte Carlo simulation and −0.0235 using Metropolis-Hastings. This implies that, for each data set, the innovations in the mean and volatility are negatively correlated. Negative correlations between mean and variance errors can produce a "leverage" effect in which negative (positive) shocks to the mean are associated with increases (decreases) in volatility. The returns of the different indexes are not only affected by market structure (Sharma 2011) but are also deeply influenced by the different crises observed in international markets, e.g., the Asian crisis of 1987 and the Russian crisis of 2002.
1 We choose p = 2 because if p = 1 and ν → ∞, the ARSV-t model reduces to the asymmetric SV model of Harvey and Shephard (1996).
Fig. 62.1 Return for indexes (panels: CAC 40, Nasdaq, S&P, Nikkei, Dow Jones; returns Y plotted against observations)
The markets in our sample are subject to several crises that directly affect the evolution of the index returns. The events of 11 September 2001, the Russian crisis, and especially the beginning of the subprime crisis in the United States in July 2007 are consistent with our results. These results, illustrated in Fig. 62.1, suggest that periods of market crisis or stress increase volatility. The volatility at time t therefore depends on the volatility at t−1 (Engle 1982). When new information arrives, the market can be disrupted, and this affects shareholders' expectations about the evolution of returns.
Fig. 62.2 Smoothed estimates of Vt, basic SVOL, and SV-t model (a: SVOL Nikkei; b: SV-t Nikkei; Vt plotted against observations)
Fig. 62.3 QQ plot of normalized innovations based on the basic Svol model (left) and the SV-t model (right)
The resulting plots of the smoothed volatilities are shown in Fig. 62.2. We present the analysis for the Nikkei index; the other indexes are reported in Appendix 2. The convergence is very remarkable for the Nikkei, as for the Dow Jones, Nasdaq, and CAC 40 indexes. This supports the idea that the algorithm used to estimate the volatility is a good choice. A mis-specified basic Svol model can induce substantial parameter bias and errors in inference about Vt; Geweke (1994a, b) showed that the basic Svol has the same problem with the largest outlier, the October 1987 crash. The Vt series for the Svol model reveals large outliers in crisis periods. The corresponding innovation plots are given in Fig. 62.3 for the two models, basic Svol and SV-t, for the Nikkei index. Appendix 3 shows the QQ plots for the other indexes, respectively the Nasdaq, S&P, Dow Jones, and CAC 40, for the two models. The standardized innovations reveal large outliers when the market is under stress (Hwang and Salmon 2004).
Table 62.4 MSE and likelihood for two models

              CAC 40                  Dow Jones            Nasdaq               Nikkei                  S&P
              SVOL         SV-t       SVOL      SV-t       SVOL      SV-t       SVOL         SV-t       SVOL      SV-t
MSE           0.0210       0.0296     0.1277    0.0040     0.2035    0.0229     0.0241       0.0168     0.0248    0.0395
Likelihood    2.595·10^4   0.0046     0.0374    0.0012     0.2257    0.0158     0.2257·10^4  0.0094     0.0054    0.02016
The advantage of the asymmetric basic SV model is that it is able to capture some aspects of financial markets and the main properties of their volatility behavior (Danielsson 1994; Chaos 1991; Eraker et al. 2000). It is shown that the inclusion of Student-t errors improves the distributional properties of the model only slightly. In fact, we observe that the basic Svol model is not able to capture extreme observations in the tails of the distribution. In contrast, the SV-t model turns out to be more appropriate for accommodating outliers. The corresponding innovation plot for the basic model shows that it is unable to capture the distributional properties of the returns. This is confirmed by the Jarque-Bera normality test and the QQ plot, which reveal departures from normality mainly stemming from extreme innovations. Finally, in order to determine which of the two models is better, we use two performance indicators: the likelihood and the MSE. The likelihood is a function of the parameters of the statistical model and plays a central role in statistical inference. The MSE, also called squared error loss, measures the average of the squared errors. Table 62.4 reports the results for these measures and indicates that the SV-t model is much more efficient than the other. Indeed, in terms of comparison, we are interested in the convergence of the two models. We find that convergence of the SV-t model is fast. Table 62.4 shows the performance of the algorithm and the consequence of using the wrong model on the estimates of volatility. The efficiency is at 60 %. The MCMC is more efficient for all parameters used in these two models. Beyond a certain threshold, all parameters are stable and converge to a certain level. Appendices 4 and 5 show that α, δ, σ, and φ converge and stabilize; this shows the power of MCMC. The results for both simulations show that the algorithm for the SV-t model is fast and converges rapidly with acceptable levels of numerical efficiency. Our sampling therefore provides strong evidence of convergence of the chain.
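As a sketch of how the MSE criterion can be applied, the snippet below compares two volatility estimates against a common proxy; the use of squared returns as the proxy and the stand-in estimates are assumptions for illustration, not the chapter's exact construction.

```python
import numpy as np

def mse(estimate, target):
    """Mean squared error between a volatility estimate and a target proxy."""
    estimate = np.asarray(estimate)
    target = np.asarray(target)
    return np.mean((estimate - target) ** 2)

# y: observed returns; V_svol, V_svt: smoothed volatilities from the two models.
rng = np.random.default_rng(5)
y = 0.01 * rng.standard_normal(2_252)
V_svol = np.full_like(y, y.var())            # crude constant-volatility stand-in
V_svt = 0.5 * y ** 2 + 0.5 * y.var()         # stand-in for a smoother estimate

proxy = y ** 2                               # squared returns as volatility proxy
print("MSE SVOL:", mse(V_svol, proxy))
print("MSE SV-t:", mse(V_svt, proxy))
```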
62.5 Conclusion
We have applied these MCMC methods to the study of various indexes. The ARSV-t model was compared with the Svol model of Jacquier et al. (1994) using the S&P, Dow Jones, Nasdaq, Nikkei, and CAC 40 indexes. The empirical results show that the SV-t model can describe extreme values to a certain extent and is more appropriate for accommodating outliers. Surprisingly, we have frequently observed that the best model in terms of forecast performance is the one based on the Student's t-distribution (ARSV-t). Our results confirm the findings of Asai (2008), who indicates, first, that the ARSV-t model provides a better fit than the MFSV model and, second, that positive and negative shocks do not have the same effect on volatility. Our results also demonstrate the efficiency of the Markov chain for our sample and the convergence and stability of all parameters to a certain level. This paper has made certain contributions, but several extensions are still possible, in particular extensions of the SVOL model.
Appendix 1

The posterior volatility is

$$p(V \mid Y, \Theta) \propto p(Y \mid \Theta, V)\, p(V \mid \Theta) \propto \prod_{t=1}^{T} p(V_t \mid V_{t-1}, V_{t+1}, \Theta, Y_t)$$
Fig. 62.4 Smoothed estimates of Vt, basic SVOL, and SV-t model (panels: CAC40, Nasdaq, S&P, Dow Jones; Vt plotted against observations)
with

$$p(V_t \mid V_{t-1}, V_{t+1}, \Theta, Y_t) \propto p(Y_t \mid V_t, \Theta)\, p(V_t \mid V_{t-1}, \Theta)\, p(V_{t+1} \mid V_t, \Theta)$$

A simple calculation shows that

$$p(V_t \mid V_{t-1}, V_{t+1}, \Theta, Y_t) \propto \frac{1}{V_t^{0.5}} \exp\!\left(-\frac{Y_t^2}{2V_t}\right) \frac{1}{V_t} \exp\!\left(-\frac{(\log V_t - \mu_t)^2}{2\sigma^2}\right)$$

with

$$\mu_t = \frac{\alpha(1 - \delta) + \delta(\log V_{t+1} + \log V_{t-1})}{1 + \delta^2} \qquad \text{and} \qquad \sigma^2 = \frac{\sigma_\nu^2}{1 + \delta^2}$$

The MCMC algorithm consists of the following steps:

$$p(\alpha, \delta \mid \sigma_\nu, V, Y) \sim N, \qquad p(\sigma_\nu^2 \mid \alpha, \delta, V, Y) \sim IG$$
Fig. 62.5 QQ plots of sample data versus standard normal for the basic Svol model (left, SVOL) and the SV-t model (right, SV-t); panels: CAC 40, Nasdaq, S&P, Dow Jones
p(V_t | V_{t−1}, V_{t+1}, Θ, Y_t): Metropolis-Hastings. At iteration (j),

$$\alpha^{(j)} = \frac{\sum_{t=1}^{T} \log V_t^{(j-1)} - \delta^{(j-1)} \sum_{t=1}^{T} \log V_{t-1}^{(j-1)}}{\sigma_\nu^{2\,(j-1)} + T}$$

By following the same approach, the estimator of δ at step (j) is given by

$$\delta^{(j)} = \frac{\sum_{t=1}^{T} \log V_{t-1}^{(j-1)} \left(\log V_t^{(j-1)} - \alpha^{(j)}\right)}{\sum_{t=1}^{T} \left(\log V_{t-1}^{(j-1)}\right)^2 + \sigma_\nu^{2\,(j-1)}}$$

For the parameter σ_ν², the prior density is an inverse gamma, IG(a, b). The expression for the estimator of σ_ν² at step (j) is given by

$$\sigma_\nu^{2\,(j)} = \frac{\frac{1}{2}\sum_{t=1}^{T} \left(\log V_t^{(j-1)} - \alpha^{(j)} - \delta^{(j)} \log V_{t-1}^{(j-1)}\right)^2 + b}{T/2 + a - 1}$$
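A compact sketch of the parameter updates written out above, treated as one deterministic sweep given the current draw of log V_t; the inverse-gamma hyperparameters a and b and the synthetic log-volatility path are placeholders, and the volatility (Metropolis-Hastings) step itself is omitted.

```python
import numpy as np

def svol_parameter_updates(log_V, delta_prev, sigma2_prev, a, b):
    """One sweep of the parameter updates in Appendix 1, as reconstructed above.

    log_V       : current draw of log-volatilities (length T + 1, so that
                  log_V[1:] and log_V[:-1] line up as log V_t and log V_{t-1})
    delta_prev  : delta from the previous iteration
    sigma2_prev : sigma_v^2 from the previous iteration
    a, b        : inverse-gamma prior hyperparameters (placeholders)
    """
    lv_t, lv_lag = log_V[1:], log_V[:-1]
    T = lv_t.size

    alpha = (lv_t.sum() - delta_prev * lv_lag.sum()) / (sigma2_prev + T)
    delta = (lv_lag @ (lv_t - alpha)) / (lv_lag @ lv_lag + sigma2_prev)
    resid = lv_t - alpha - delta * lv_lag
    sigma2 = (0.5 * np.sum(resid ** 2) + b) / (T / 2.0 + a - 1.0)
    return alpha, delta, sigma2

# Illustrative call on a synthetic log-volatility path.
rng = np.random.default_rng(6)
log_V = -8.0 + 0.1 * np.cumsum(0.1 * rng.standard_normal(2_253))
print(svol_parameter_updates(log_V, delta_prev=0.9, sigma2_prev=0.16, a=2.0, b=0.05))
```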
Fig. 62.6 Behavior of the parameters of the basic Svol model (α, δ, and σ traces over MH iterations; panels: CAC40, Nasdaq, Nikkei, S&P, Dow Jones)

Fig. 62.7 Behavior of the parameters of the SV-t model (φ1, φ2, and σ traces over MH iterations; panels: CAC40, Nasdaq, Nikkei, Dow Jones)
Appendix 2 See Fig. 62.4
Appendix 3 See Fig. 62.5
Appendix 4 See Fig. 62.6
Appendix 5 See Fig. 62.7
References

Asai, M. (2008). Autoregressive stochastic volatility models with heavy-tailed distributions: A comparison with multifactor volatility models. Journal of Empirical Finance, 15, 322–345.
Asai, M., McAleer, M., & Yu, J. (2006). Multivariate stochastic volatility. Econometric Reviews, 25(2–3), 145–175.
Casella, G., & George, E. I. (1992). Explaining the Gibbs sampler. The American Statistician, 46(3), 167–174.
Chang, R., Huang, M., Lee, F., & Lu, M. (2007). The jump behavior of foreign exchange market: Analysis of Thai Baht. Review of Pacific Basin Financial Markets and Policies, 10(2), 265–288.
Chib, S., Nardari, F., & Shephard, N. (2002). Markov chain Monte Carlo methods for stochastic volatility models. Journal of Econometrics, 108, 281–316.
Danielsson, J. (1994). Stochastic volatility in asset prices: Estimation with simulated maximum likelihood. Journal of Econometrics, 64, 375–400.
Engle, R. (1982). Autoregressive conditional heteroskedasticity with estimates of the variance of United Kingdom inflation. Econometrica, 50, 987–1007.
Eraker, B., Johannes, M., & Polson, N. (2000). The impact of jumps in volatility and return. Working paper, University of Chicago.
Frieze, A. M., Kannan, R., & Polson, N. (1994). Sampling from log-concave distributions. Annals of Applied Probability, 4, 812–837.
Gelfand, A. E., & Smith, A. F. M. (1990). Sampling-based approaches to calculating marginal densities. Journal of the American Statistical Association, 85, 398–409.
Geweke, J. (1994a). Priors for macroeconomic time series and their application. Econometric Theory, 10, 609–632.
Geweke, J. (1994b). Bayesian comparison of econometric models. Working paper, Federal Reserve Bank of Minneapolis Research Department.
Gilks, W. R., & Wild, P. (1992). Adaptive rejection sampling for Gibbs sampling. Journal of the Royal Statistical Society, Series C, 41(2), 337–348.
Harvey, A. C., & Shephard, N. (1996). Estimation of an asymmetric stochastic volatility model for asset returns. Journal of Business and Economic Statistics, 14, 429–434.
Hastings, W. K. (1970). Monte Carlo sampling methods using Markov chains and their applications. Biometrika, 57, 97–109.
Hsu, D., & Chiao, C. H. (2011). Relative accuracy of analysts' earnings forecasts over time: A Markov chain analysis. Review of Quantitative Finance and Accounting, 37(4), 477–507.
Hwang, S., & Salmon, M. (2004). Market stress and herding. Journal of Empirical Finance, 11, 585–616.
Jacquier, E., Polson, N., & Rossi, P. (1994). Bayesian analysis of stochastic volatility models (with discussion). Journal of Business and Economic Statistics, 12(4), 371–417.
Kim, S., Shephard, N., & Chib, S. (1998). Stochastic volatility: Likelihood inference and comparison with ARCH models. Review of Economic Studies, 65, 365–393.
Meddahi, N., & Renault, E. (2000). Temporal aggregation of volatility models. Document de Travail CIRANO, 2000-22.
O'Brien, T. J., & Dolde, W. (2000). A currency index global capital asset pricing model. European Financial Management, 6(1), 7–18.
Sharma, V. (2011). Stock return and product market competition: Beyond industry concentration. Review of Quantitative Finance and Accounting, 37(3), 283–299.
Shephard, N., & Pitt, M. K. (1997). Likelihood analysis of non-Gaussian measurement time series. Biometrika, 84(3), 653–667.
Internal Control Material Weakness, Analysts Accuracy and Bias, and Brokerage Reputation
63
Li Xu and Alex P. Tang
Contents
63.1 Introduction ................................................................. 1720
63.2 Hypothesis Development ....................................................... 1723
     63.2.1 Internal Control Material Weakness and Forecast Accuracy ............. 1723
     63.2.2 Internal Control Material Weaknesses and Optimistic Forecast Bias .... 1725
     63.2.3 The Impact of the Reputation of Brokerage Houses ..................... 1726
63.3 Sample Selection and Descriptive Statistics ................................. 1727
     63.3.1 The Sample Selection and Matching Procedure .......................... 1727
     63.3.2 Descriptive Statistics ............................................... 1728
     63.3.3 The Sample Firms by Types of ICMW .................................... 1729
63.4 Empirical Tests ............................................................. 1732
     63.4.1 Variable Measurement ................................................. 1732
     63.4.2 Univariate Analysis .................................................. 1732
     63.4.3 Analysis of Forecast Accuracy ........................................ 1734
     63.4.4 Analysis of Forecast Bias ............................................ 1738
     63.4.5 Analysis of Brokerage Reputation ..................................... 1739
63.5 Additional Tests ............................................................ 1741
     63.5.1 ICMW Resolution and Forecast Accuracy and Bias ....................... 1741
     63.5.2 The Differences in the Frequency of Analyst Forecasts Between Highly Reputable Houses and Less Highly Reputable Houses ... 1743
     63.5.3 Other Robustness Tests ............................................... 1744
L. Xu (*)
Washington State University, Richland, WA, USA
e-mail: [email protected]

A.P. Tang
Morgan State University, Baltimore, MD, USA
e-mail: [email protected]

C.-F. Lee, J. Lee (eds.), Handbook of Financial Econometrics and Statistics, DOI 10.1007/978-1-4614-7750-1_63, © Springer Science+Business Media New York 2015
63.6 Conclusion .................................................................. 1744
Appendix 1: Material Weakness Classification Examples ............................ 1745
Appendix 2: Variable Definitions ................................................. 1746
Appendix 3: Matching Procedure ................................................... 1747
Appendix 4: Ordinary Least Squares (OLS) Method .................................. 1748
Appendix 5: Cook's Distance ...................................................... 1748
Appendix 6: Rank Regression ...................................................... 1749
References ....................................................................... 1749
Abstract
We examine the impact of internal control material weaknesses (ICMW hereafter) on sell-side analysts. Using matched firms, we find that ICMW reporting firms have less accurate analyst forecasts relative to non-reporting firms when the reported ICMWs belong to the Pervasive type. ICMW reporting firms have more optimistically biased analyst forecasts compared to non-reporting firms. The optimistic bias exists only in the forecasts issued by analysts affiliated with less highly reputable brokerage houses. The differences in accuracy and bias between ICMW and non-ICMW firms disappear when ICMW disclosing firms stop disclosing ICMWs. Collectively, our results suggest that weaknesses in internal control increase forecasting errors and upward bias for financial analysts. However, a good brokerage reputation can curb the optimistic bias. We use the Ordinary Least Squares (OLS) methodology in the main tests to examine the impact of internal control material weaknesses on sell-side analysts. We match our ICMW firms with non-ICMW firms based on industry, sales, and assets. We re-estimate the models using the rank regression technique to assess the sensitivity of the results to the underlying functional form assumption made by OLS. We use Cook's distance to test for outliers.
Internal control material weakness • Analyst forecast accuracy • Analyst forecast bias • Brokerage reputation • Sarbanes-Oxley act • Ordinary least squares regressions • Rank regressions • Fixed effects • Matching procedure • Cook’s distance
63.1 Introduction
As part of the Sarbanes-Oxley Act of 2002 (SOX), SEC registrants’ executives are now required to certify that they have evaluated the effectiveness of their internal controls over financial reporting (Section 302, effective in August 2002) and to provide an annual report to assess the effectiveness of the internal control structure and procedures (Section 404, effective
in November 2004).1,2 These assessments of internal control requirements have arguably been the most controversial aspect of SOX. On the one hand, many firms complain that internal control problems are inconsequential for financial statement users, and hence the high compliance costs of assessing internal control are not justified (Solomon 2005). On the other hand, a growing chorus of investors claims that good internal controls result in much more reliable corporate financial statements, which benefit financial statement users by reducing their information collection and interpretation costs.3 In this paper we test whether internal control problems are inconsequential for financial statement users by examining the impact of internal control material weaknesses (ICMWs hereafter) on one group of the most important financial statement users – sell-side analysts. We examine the association between the analyst forecast accuracy and bias and the disclosed ICMWs under SOX Sections 302 and 404 and how the brokerage reputation will influence this association. Additionally, we investigate the impact of the types and severity of ICMWs on forecast accuracy. Based on a sample of 727 firms that have disclosed ICMWs since August of 2002, we find that the analysts’ forecasts are less accurate among ICMW reporting firms relative to matched non-reporting firms. When we classify ICMW reporting firms into Pervasive or Contained ICMW reporting firms, the accuracy is significantly lower among Pervasive ICMW reporting firms. Our findings suggest that Pervasive ICMWs significantly increase the complexity of the forecasting task for analysts. In contrast, Contained ICMWs alone do not significantly increase the complexity of the forecasting task for analysts. When we investigate the association between analyst forecast bias and ICMWs,
1 Key points of Section 302 include the following: (1) The signing officers must certify that they are responsible for establishing and maintaining internal control and have designed such internal controls to ensure that material information relating to the registrants and its consolidated subsidiaries is made known to such officers by others within those entities, particularly during the period in which the periodic reports are being prepared. (2) The officers must certify that they "have evaluated the effectiveness of the registrant's internal controls" as of a date within 90 days prior to the report and have presented in the report their conclusions about the effectiveness of their internal controls based on their evaluation as of that date.

2 Key points of Section 404 include the following: (1) Management is required to produce an internal control report as part of each annual Exchange Act report. (2) The report must affirm the responsibility of management for establishing and maintaining an adequate internal control structure and procedures for financial reporting. (3) The report must also contain an assessment, as of the end of the most recent fiscal year of the registrant, of the effectiveness of the internal control structure and procedures of the issuer for financial reporting. (4) External auditors are required to issue an opinion on whether effective internal control over financial reporting was maintained in all material respects by management. This is in addition to the financial statement opinion regarding the accuracy of the financial statements.

3 For example, Donald J. Peters, a portfolio manager at T. Rowe Price Group, says: "The accounting reforms [of SOX] have been a win. It is [now] much easier for financial statement users to have a view of the true economics" of a company (Wall Street Journal, January 29, 2007).
we find that the analysts’ forecasts are more optimistically biased toward ICMW reporting firms compared to matched non-reporting firms. Taken together, our overall findings on accuracy and bias suggest that ICMWs add a unique dimension to forecasting complexity. In addition, we separate the analysts’ brokerage houses into two groups: highly reputable and less highly reputable brokerage houses. Highly reputable brokerage houses value analysts’ reports more than less highly reputable brokerage houses. Hence, analysts are likely to feel constrained from adding an arbitrarily high optimistic bias to their private estimates for fear of hurting the brokerage house’s reputation. In addition, more reputable brokerage firms tend to spend significant resources in collecting information, and thus the access to firms’ private information is relatively less important for analysts from highly reputable brokerage houses (compared to those from less highly reputable brokerage houses). We predict and find that analysts from less highly reputable brokerage houses are more likely to issue optimistic forecasts for ICMW reporting firms. We also examine the association between ICMW and forecast accuracy (and bias) in the post-reporting periods. We find that the differences in accuracy and bias between ICMW reporting firms and control firms disappear when firms stop reporting material weaknesses. This paper makes four major contributions to the accounting literature. First, sell-side analysts are among the most important users of financial reports. Researchers have long been interested in learning about analysts’ use of accounting information (Schipper 1991). While prior studies provide evidence of the link between earnings quality and weaknesses in internal control, exactly how weakness in internal control affects the users of earnings reports directly has been largely ignored.4 This study adds to this research by directly documenting a relation between ICMW and accuracy along with bias of analyst forecasts. The evidence presented in this paper shows that internal control deficiencies can influence the quality of analysts’ forecasts. Secondly, this study finds that not all ICMWs are created equal. The association between ICMW and forecast accuracy depends upon the severity of the reported ICMWs. When we separate material weaknesses into Contained or Pervasive types of weaknesses (based on the severity of the weaknesses), we find that firms identified with Pervasive ICMWs are more likely to be associated with forecast errors. In contrast, the relation between ICMW and accuracy is insignificant among firms identified with Contained ICMWs. Thirdly, we are able to link brokerage reputation to the analysts’ optimistic bias. Conceptually, it makes sense that the optimistic bias should be related to the reputation of the brokerage houses since highly reputable brokerage houses
4 A recent working paper by Kim et al. (2009) confirms our results. They find that internal control quality is inversely associated with analysts' error and forecast dispersion.
are more concerned with the analysts’ forecasts. In addition, highly reputable houses have the resources to conduct more sophisticated analyses. This study presents the evidence to demonstrate that a good brokerage reputation can curb the optimistic bias. Finally, the study also shows that the differences in accuracy and bias between ICMW reporting firms and control firms disappear when firms stop reporting material weaknesses. We provide evidence that ICMWs indeed contribute to lower forecast accuracy and more positive forecast bias. The chapter is organized as follows. Section 63.2 develops hypotheses, and Sect. 63.3 describes sample selection process and forecast properties measurement. Sections 63.4 and 63.5 present the empirical tests and additional analysis, respectively. Conclusions are presented in Sect. 63.6.
63.2 Hypothesis Development
63.2.1 Internal Control Material Weakness and Forecast Accuracy The extant literature on the relation between ICMW, accruals quality, and management’s earnings guidance implies that ICMWs could affect analysts’ forecast accuracy. Doyle et al. (2007a) and Ashbaugh et al. (2008) find that ICMW reporting firms have lower accruals quality. Lobo et al. (2012) link accruals quality with analysts’ forecast accuracy by documenting that firms with lower accruals quality tend to have larger forecast errors. They conclude that analysts are unable to fully resolve the uncertainty in firms with lower accruals quality. Extant literature also suggests that ICMWs could affect management’s earnings guidance. In a recent study, Feng et al. (2009) discover that ICMW reporting firms have less accurate management earnings guidance. Management’s earnings guidance has been shown to be directly related to forecast accuracy, since the earnings guidance provides valuable information for analysts (Chen et al. 2011). Thus, we conjecture that ICMW disclosing firms are associated with less accurate analysts’ earnings forecast. This leads to our first hypothesis in the alternative form as the following: H1 Analysts’ earnings forecast for ICMW reporting firms will be less accurate relative to non-reporting firms (in alternative form). Internal control material weaknesses vary widely with respect to severity and underlying reasons. (See page 196 of Doyle et al. 2007b). Doyle et al. (2007b) find that the type of internal control problem is an important factor when examining determinants of ICMWs. They recommend that the type and severity of ICMWs should be considered by future research on internal control. If a firm’s management lacks the abilities or resources to exercise efficient internal control within the firm, the firm tends to have ICMWs about the overall
control environment (defined as Pervasive ICMWs).5 Alternatively, even if management has sufficient capabilities and resources to prepare accurate and adequate financial statements, a firm may still have internal control deficiencies over financial reporting. Such internal control deficiencies may be related to controls over specific account balances or transaction-level processes (defined as Contained ICMWs).6,7 Doyle et al. (2007a) find that among all ICMW reporting firms, the earnings quality is significantly lower for firms reporting Pervasive ICMWs. In contrast, Contained ICMWs have no impact on earnings quality. In making their forecasts, analysts can use earnings-related information, disaggregated segmental information, and information provided by management (Previts and Bricker 1994; Bouwman and Frishkoff 1995; Rogers and Grant 1997). If analysts use firms’ earnings-related and segmental information to make forecasts, they might have more difficulty in predicting earnings for firms with Pervasive ICMWs. This is because earnings quality of Pervasive type ICMW firms is low (Doyle et al. 2007a). On the contrary, analysts should have no difficulty in predicting earnings for firms with Contained ICMWs, because auditors would be able to mitigate the errors in reported earnings associated with Contained ICMWs. Consistent with this argument, Doyle et al. (2007a) find that account-specific material weaknesses are not associated with lower earnings quality. If analysts use the guidance provided by management or other unaudited reports to make forecasts, it is possible that Contained ICMWs might still introduce errors into the reports since these reports are unaudited. In this case, both Pervasive and Contained ICMWs may increase the forecasting difficulties for analysts. Taken together, the forecast errors are expected to be larger for firms with Pervasive ICMWs than for firms with Contained ICMWs. To compare the forecast accuracy between Pervasive and Contained ICMWs, we classify firms into two groups based on the company’s stated reasons for material weaknesses. The first group of firms discloses only Contained ICMWs
5 For example, DynTek Inc. disclosed the following deficiencies in their 2004 10-K: "The material weaknesses that we have identified relate to the fact that our overall financial reporting structure and current staffing levels are not sufficient to support the complexity of our financial reporting requirements. We have experienced employee turnover in our accounting department including the position of Chief Financial Officer. As a result, we have experienced difficulty with respect to our ability to record, process and summarize all of the information that we need to close our books and records on a timely basis and deliver our reports to the Securities and Exchange Commission within the time frames required under the Commission's rules."

6 For example, Westmoreland Coal Inc. disclosed the following deficiencies in their 2005 10-K: "The company's policies and procedures regarding coal sales contracts with its customers did not provide for a sufficiently detailed, periodic management review of the accounting for payments received. This material weakness resulted in a material overstatement of coal revenues and an overstatement of amortization of capitalized asset retirement costs." Moody's suggests that these types of material weaknesses are "auditable" and thus do not represent as serious a concern regarding the reliability of the financial statements.

7 The detailed classification of Pervasive and Contained ICMWs is provided in Appendix 1.
(defined as G1 firms), which are related to controls over specific accounting choices. The second group of firms discloses Pervasive ICMWs (with or without Contained ICMWs) (defined as G2 firms). Note that unlike G1 firms, which disclose only one type of ICMW (i.e., Contained ICMWs), G2 firms may disclose two types of ICMWs (i.e., Pervasive as well as Contained ICMWs).8 We have the following hypothesis based on the types of ICMWs:
63.2.2 Internal Control Material Weaknesses and Optimistic Forecast Bias Extant literature suggests that sell-side analysts have an inclination to issue optimistic forecasts for several reasons. First, the compensation of analysts is tied to the amount of trade they generate for their brokerage firms. Given widespread unwillingness or inability to sell short, more trades will result from more optimistic forecasts. Moreover, mutual funds, the client group with resources to generate large trades, are precluded by regulation from selling short. Hence, without reputation concerns, analysts will prefer to issue more optimistic forecasts. Secondly, a positive outlook improves the chances of analysts’ brokerage houses winning investment banking deals. A number of prior studies have suggested that initial public offering (IPO) activities may compromise the quality of analysts’ research. For example, Womack (1996) argues that analysts are reluctant to issue unfavorable forecasts if there is an IPO underwriting relationship. Thirdly, prior studies show that analyst forecasts contain private information in addition to a statistical model based only on public information. Hence, access to management is crucial for analysts, as evidenced by the reports from Institutional Investor (a firm that compiles annual analyst rankings) showing that analysts rank the access to management as the sixth most valuable attribute out of 13 attributes (ahead of accuracy of earnings estimates, written reports, stock selection, and financial modeling). As evidenced by Huang et al. (2005), being optimistic has historically helped analysts maintain good relations with management.9
8 The conclusions of this paper remain the same if we classify G2 firms as firms disclosing only Pervasive ICMWs.

9 Our sample period starts after the introduction of Regulation Fair Disclosure (Reg. FD). The goal of Reg. FD is to prohibit management from selectively disclosing private information to analysts. However, as pointed out by recent research (see, e.g., Ke and Yu 2006; Kanagaretnam et al. 2012), there is no empirical evidence of management relations incentive weakening after Reg. FD. In the post-Reg. FD period, there are other incentives for analysts to please management, such as to gain favored participation in conference calls (Libby et al. 2008; Mayew 2008). Anecdotal evidence also shows that Reg. FD does not prevent a company from more subtle forms of retaliations against analysts who issue negative research reports (Solomon and Frank 2003).
The optimistic bias in analyst forecasts is more pronounced when earnings are less predictable (e.g., Lim 2001; Das et al. 1998). Feeling less accountable in uncertain environments, analysts are inclined to issue more optimistic forecasts. Consistently, Zhang (2006) concludes that greater information uncertainty predicts more positive forecast bias. Since both less predictable earnings (Doyle et al. 2007a) and an uncertain information environment (Beneish et al. 2008) are more prevalent among ICMW firms, we expect that analysts will issue positively biased forecasts when firms have ICMWs.10 We, therefore, offer our next hypothesis regarding the relation between forecast bias and ICMWs: H3 Analysts’ earnings forecasts are more positively biased among ICMW reporting firms relative to non-reporting firms (in alternative form).
63.2.3 The Impact of the Reputation of Brokerage Houses As we discussed before, the magnitude of the bias is held in check by reputational concerns. We hypothesize that highly reputable brokerage houses value analysts’ reports more than less highly reputable brokerage houses. Hence, their analysts are likely to feel constrained from adding an arbitrarily high optimistic bias to their private estimates by the fear of hurting the brokerage houses’ reputations. In addition, highly reputable brokerage firms tend to have significant resources to collect information, and thus the access to firms’ private information is relatively less important for analysts from highly reputable brokerage houses.11 Alternatively, analysts from less highly reputable brokerage houses tend to have limited resources in research and thus have more incentives to issue more biased forecasts. If ICMWs indeed increase the cost of information collection and research, these extra costs will exacerbate the need of analysts from less highly reputable brokerage houses to access firms’ private information. Hence, we expect that analysts from less highly reputable brokerage houses will issue more upwardly biased forecasts for ICMW firms (compared to analysts from highly reputable brokerage houses). We, therefore, offer our last hypothesis: H4 The analysts’ optimistic biases associated with ICMW firms are more pronounced for forecasts issued by analysts from less highly reputable brokerage houses (compared to analysts from highly reputable brokerage houses) (in alternative form).
10 Note that our hypothesis is still valid if the forecasts have been made before the disclosure of ICMWs. Doyle et al. (2007a) argue that Sarbanes-Oxley has led to the disclosure of ICMWs that might have existed for some time. Indeed, they find that accrual quality has been lower for ICMW firms relative to non-ICMW firms even in the periods prior to the disclosure of ICMWs.

11 Brown et al. (2009) document stock market response to an analyst's recommendation change and the difference between the analyst's recommendation and the consensus recommendation. The market's reaction is strongly influenced by the analyst's reputation.
63.3 Sample Selection and Descriptive Statistics
63.3.1 The Sample Selection and Matching Procedure
We first use the key words "internal control" and "material weakness" to search the 8-K, 10-Q, and 10-K filings in Lexis/Nexis during the period of August 2002 to December 2006 and obtain 1,275 firms which disclose at least one material weakness (ICMW firms).12 We then cross-check our 1,275 firms against Doyle et al.'s (2007a) 1,210 sample firms, which are obtained by Doyle et al. through searching 10Kwizard.com (10-Ks, 10-Qs, and 8-Ks) from August 1, 2002, to October 31, 2005.13 We find that there are 181 firms in Doyle et al.'s (2007a) sample but not in ours. We subsequently add these 181 firms into our sample to create an initial sample of 1,456 firms. Out of the 1,456 initial sample firms, 952 firms have the required firm characteristics variables (for regressions) in the Compustat and CRSP annual databases. Among these 952 firms, 745 firms have analyst forecast data in the IBES database. Next we identify a sample of matched firms that do not disclose internal control material weaknesses (non-ICMW firms) with similar IBES, CRSP, and Compustat requirements as the ICMW firms. We match ICMW firms with non-ICMW firms by industry, firm size, and sales performance, as measured during the fiscal year in which the ICMW is disclosed. Industry is defined by using the 48 industry codes identified by Fama and French (1997), firm size is measured as total assets (Compustat #6), and sales performance is measured as total sales (Compustat #12). The matching algorithm is similar to that used by Francis et al. (2006). In particular, matches are identified by an algorithm that calculates the distance between each ICMW firm k and its matched non-ICMW counterpart j. Specifically, for each non-ICMW firm j in the same Fama-French industry as ICMW firm k, we calculate the percentage difference in assets, AssetsDIS = |Assets_j − Assets_k| / Assets_k, and the percentage difference in sales, SalesDIS = |Sales_j − Sales_k| / Sales_k. The sum of the two distance measures yields a matching score for each non-ICMW firm j that is in the same industry as ICMW firm k. From the set of matching scores that are less than two, we choose the non-ICMW firm with the smallest matching score for each ICMW firm; we then remove the matched pair (the ICMW and its non-ICMW counterpart) from the lists of ICMW and non-ICMW firms. In some cases, a single non-ICMW firm is the best match for several ICMW firms. In this case, we control for the order in which we match a non-ICMW firm to an ICMW firm by first
12 It could be argued that our sample might miss some firms which have ICMWs but choose not to disclose them. However, discovery and disclosure of material weaknesses are mandatory according to 2004 SEC FAQ #11. We, therefore, use the disclosure of ICMW as a proxy for the existence of ICMW.

13 We thank Sarah McVay for making the data available on her website (http://pages.stern.nyu.edu/smcvay/research/Index.html).
calculating all possible matching scores and then assigning the non-ICMW firm j to the ICMW firm k whose matching score is the smallest among the candidate ICMWs. For the remaining candidate non-ICMWs, we repeat the above steps using the remaining ICMW firms. In total, the application of these procedures produces a final sample of 727 pairs of ICMW and non-ICMW firm-year observations.14
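The matching procedure lends itself to a short pandas sketch. The version below is a simplified greedy pass under the stated distance definition; the column names (firm_id, industry, assets, sales), the input data frames, and the greedy ordering are hypothetical simplifications, and the paper's tie-breaking across ICMW firms is not reproduced.

```python
import pandas as pd

def match_control_firms(icmw, non_icmw, cutoff=2.0):
    """Greedy one-to-one matching of ICMW firms to non-ICMW control firms.

    Score(j, k) = |assets_j - assets_k| / assets_k + |sales_j - sales_k| / sales_k,
    computed only within the same Fama-French industry; scores of two or more
    are discarded, and each control firm is used at most once.
    """
    pairs = []
    for _, k in icmw.iterrows():
        candidates = non_icmw[non_icmw["industry"] == k["industry"]]
        if candidates.empty:
            continue
        score = (
            (candidates["assets"] - k["assets"]).abs() / k["assets"]
            + (candidates["sales"] - k["sales"]).abs() / k["sales"]
        )
        score = score[score < cutoff]
        if score.empty:
            continue
        j = score.idxmin()
        pairs.append((k["firm_id"], non_icmw.loc[j, "firm_id"], score.loc[j]))
        non_icmw = non_icmw.drop(index=j)     # each control firm used once
    return pd.DataFrame(pairs, columns=["icmw_firm", "control_firm", "score"])
```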
63.3.2 Descriptive Statistics Panels A and B in Table 63.1 show the number and percentage of sample firms by industry and by stock exchange, respectively. Industry groups with the largest representations in the sample include durable manufacturers (20.4 %), computers (19.1 %), retail (12.9 %), and financial (12.5 %). Our sample distribution is similar to that of Ghosh and Lubberink (2006) and Beneish et al. (2008). As a comparison, in the last two columns of Table 63.1 we also present the number and percentage of 2003 Compustat population by industry. The industry distributions of our sample firms and 2003 Compustat firms are similar. The industry groups with the largest representations in 2003 Compustat population are also durable manufacturers, computers, and financial. The retail industry has a larger weight in our sample relative to the 2003 Compustat population. In terms of stock exchange, the majority of ICMW firms are listed on NASDAQ (436 firm-years) and NYSE (245 firm-years). Table 63.2 Panel A presents the descriptive statistics for sample firms and matched firms. The median number of analysts following sample firms is smaller than that of analysts following matched firms. The firm size is measured as the natural logarithm of the market value of equity. The median firm size suggests that sample firms are smaller than matched firms (significant at the 0.1 significance level). There are no significant mean and median differences in leverage between sample firms and matched firms. The mean and median profitability of sample firms, measured by return on assets (ROA), are significantly lower than those of matched firms at the 0.01 significance level. The mean and median of book to market ratios (BM) of sample firms are significantly larger than those of matched firms at the 0.01 significance level. The mean and median of percentage change in earnings of sample firms are significantly smaller than those of matched firms at the 0.01 significance level, which suggests a systematic downward shift in reported earnings for firms disclosing ICMWs. Also, the median number of negative earnings of sample firms is significantly greater than that of matched firms at the 0.01 significance level. Taken together, the statistics imply that sample firms are followed by fewer analysts and have smaller market capitalization, higher book to market ratio, and lower profitability than their matched firms. 14
If a firm in our final sample reports internal material weaknesses in multiple years, the firm will show up in our sample multiple times. We have 599 distinct firms showing up once in our final sample and 64 firms showing up twice in our sample. The conclusions of this paper remain the same if we exclude these 64 firms from our analyses.
Table 63.1 Industry and exchange distributions of firms reporting internal control material weakness (ICMW)

Panel A: by industry
                                                                                 Sample            2003 Compustat
Industry name                      SIC codes                                     N       %         N        %
Mining and construction            1000–1999 excluding 1300–1399                 13      1.79      158      2.56
Food                               2000–2111                                     3       0.41      112      1.82
Textiles and printing/publishing   2200–2799                                     22      3.03      210      3.41
Chemicals                          2800–2824, 2840–2899                          9       1.24      135      2.19
Pharmaceuticals                    2830–2836                                     31      4.26      559      9.07
Extractive                         1300–1399, 2900–2999                          25      3.44      196      3.18
Durable manufactures               3000–3999, excluding 3570–3579, 3670–3679     148     20.36     945      15.34
Computers                          3570–3579, 3670–3679                          139     19.12     853      13.84
Transportation                     4000–4899                                     42      5.78      333      5.40
Utilities                          4900–4999                                     27      3.71      287      4.66
Retail                             5000–5999                                     94      12.93     460      7.47
Financial                          6000–6999                                     91      12.52     1,377    22.35
Services                           7000–8999 excluding 7370–7379                 83      11.42     537      8.71
Total                                                                            727     100       6,162    98.80

Panel B: by stock exchange
Stock exchange    N      %
NYSE              245    33.70
NASDAQ            436    59.97
AMEX              29     3.99
OTC               16     2.20
Other             1      0.14
Total             727    100.00
63.3.3 The Sample Firms by Types of ICMW As we discussed in Sect. 63.2, we classify firms disclosing ICMWs into two groups: one group of firms discloses only Contained ICMWs, and the other group of firms discloses Pervasive ICMWs (with or without Contained ICMWs). The classification of Contained or Pervasive ICMWs is similar to that of Moody’s. Contained ICMWs are defined as internal control issues related to controls over specific account balances, or transaction-level processes, or accounting policy interpretations. Pervasive ICMWs are defined as internal control issues related to controls over the control environment or the overall financial reporting process.
Table 63.2 Descriptive statistics for selected variables

Panel A: descriptive statistics for the ICMW sample and the matched sample firms
           ICMW sample (N = 727)       Matched sample (N = 727)
           Mean         Median         Mean      Median
NUMBER     6.183        4.000**        6.614     5.000
MV         6.285        6.165*         6.456     6.343
LEV        0.196        0.157          0.185     0.138
ROA        -0.019***    0.014***       0.019     0.038
BM         0.534***     0.473***       0.488     0.424
ECHG       -0.219***    -0.116***      0.021     0.081
LOSS       0.354        0.000***       0.228     0.000

Panel B: descriptive statistics for the Contained ICMW (G1) and Pervasive ICMW (G2) firms
           G1 firms (N = 348)          G2 firms (N = 379)
           Mean         Median         Mean      Median
ROA        0.001***     0.017          -0.035    0.012
LOSS       0.322*       0.000          0.383     0.000
FOREIGN    0.391*       0.000*         0.459     0.000
ECHG       -0.261       -0.101         -0.183    -0.129

Matched firms consist of firms in the same industry based on the 48 industry codes identified by Fama and French (1997) with the closest market value and sales at the end of the fiscal year.
*, **, *** denote two-tailed significance levels of 10 %, 5 %, and 1 %, respectively, for the differences between the ICMW sample and the matched sample. A t-test is used to test the difference between the means of the ICMW sample and the matched sample, and a median test is used to test the difference between the medians of the ICMW sample and the matched sample.
We match ICMW firms with non-ICMW firms by industry, firm size, and sales performance, as measured in the fiscal year in which the ICMW is disclosed. Industry is defined using the 48 industry codes identified by Fama and French (1997), firm size is measured as total assets (Compustat #6), and sales performance is measured as total sales (Compustat #12). The matching algorithm is similar to that used by Francis et al. (2006).
All the variables are defined in Appendix 2. N is the number of firm-year observations.
In Panel B, *, **, *** denote two-tailed significance levels of 10 %, 5 %, and 1 %, respectively, for the differences between G1 firms and G2 firms. A t-test is used to test the difference between the means of the G1 and G2 samples, and a median test is used to test the difference between the medians of the G1 and G2 samples.
G1 firms are firms that disclose only Contained ICMW, and G2 firms are firms that disclose at least one Pervasive ICMW. The Contained and Pervasive internal control material weaknesses are similar to Moody's classification scheme. A Contained internal control material weakness is defined as internal control issues related to controls over specific account balances, transaction-level processes, or special accounting policy interpretation; a Pervasive internal control material weakness is defined as internal control issues related to controls over the control environment or the overall financial reporting process.
The detailed classification procedures are as follows: we first provide a breakdown of the firms based on firms’ stated reasons for material weaknesses as in Ge and McVay (2005). Firms usually disclose internal control issues in nine areas: Account-Specific, Training and Personnel, Period-End Reporting/Accounting Policies, Revenue Recognition, Segregation of Duties, Account Reconciliation, Subsidiary-Specific, Senior Management, and Technology Issues. We then classify
Contained internal control issues as (1) Account-Specific, (2) Period-End Reporting/Accounting Policies, (3) Revenue Recognition, and (4) Account Reconciliation issues. The rationale for this classification is that these internal control issues are all related to controls over specific account balances, or transaction-level processes, or accounting policy interpretation. We next classify Pervasive internal controls issues as (1) Training and Personnel, (2) Segregation of Duties, (3) Subsidiary-Specific, (4) Senior Management, and (5) Technology Issues. All of these control issues are related to controls over the control environment or the overall financial reporting process. We record 1,372 distinct deficiencies for our 727 firm-year observations since some firms disclose more than one ICMW. Among these 1,372 deficiencies, 431 are Account-Specific deficiencies; 243 are Period-End Reporting/Accounting Policies deficiencies; 138 are Revenue Recognition deficiencies; 90 are Account Reconciliation issues deficiencies; 165 are Training and Personnel deficiencies; 53 are Segregation of Duties deficiencies; 89 are Subsidiary-Specific deficiencies; 56 are Senior Management deficiencies; and 68 are Technology deficiencies. Examples of our material weakness classification scheme are presented in Appendix 1. Among 727 ICMW firm-year observations, 348 firm-year observations disclose Contained ICMWs; 318 firm-year observations disclose both Contained and Pervasive ICMWs; and 61 firm-year observations disclose only Pervasive ICMWs. Hence, there are 348 G1 firm-year observations and 379 G2 firm-year observations (318 firm-year observations plus 61 firm-year observations). We examine and compare G1 and G2 firms’ profitability, business complexity, and changes in earnings. We use return on assets (ROA) and a loss indicator (LOSS) to proxy for profitability. ROA is calculated as earnings before extraordinary items (Compustat #18) scaled by average total assets (Compustat # 6); and LOSS is an indicator variable that equals one if earnings are negative and zero otherwise. As in Ge and McVay (2005), we use the existence of a foreign currency adjustment to proxy for the complexity of the business (FOREIGN) (Compustat Data Item #150). Lastly, we examine the percentage change in earnings. The descriptive statistics provided in Table 63.2 Panel B suggest that G1 firms are, on average, more profitable than G2 firms. The average return on assets (ROA) of G1 firms is significantly higher than that of G2 firms (at the 0.01 significance level); the average loss (LOSS) of G1 firms is significantly lower than that of G2 firms (at the 0.1 significance level). By comparing our business complexity measure, the existence of a foreign currency adjustment (FOREIGN), we find that the business models of G2 firms, on average, are significantly more complex than those of G1 firms (at the 0.1 significance level). Lastly, we find no significant differences in means and medians of earnings changes (ECHG), which suggests that there are no systematic differences in the reporting earnings between G1 and G2 firms. In summary, we find that G2 firms are less profitable and more complex compared to G1 firms. These results imply that the managements of G2 firms may have limited resources to invest in proper internal control (due to lower profitability) and have more difficulty in establishing efficient internal control (due to higher complexity) than G1 firms.
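The mapping from the nine disclosure areas to the two groups can be written compactly. The sketch below is only an illustration, using the area labels listed above; the firm-level data structures are hypothetical:

```python
# Sketch of the Contained vs. Pervasive classification of ICMW disclosures.
CONTAINED = {"Account-Specific", "Period-End Reporting/Accounting Policies",
             "Revenue Recognition", "Account Reconciliation"}
PERVASIVE = {"Training and Personnel", "Segregation of Duties",
             "Subsidiary-Specific", "Senior Management", "Technology Issues"}

def classify_firm(deficiencies):
    """Return 'G1' if a firm-year discloses only Contained ICMWs,
    'G2' if it discloses at least one Pervasive ICMW."""
    kinds = set(deficiencies)
    if kinds & PERVASIVE:
        return "G2"
    if kinds & CONTAINED:
        return "G1"
    return None  # no classified material weakness

# Example: a firm disclosing a revenue recognition and a senior management issue
print(classify_firm(["Revenue Recognition", "Senior Management"]))  # -> 'G2'
```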
63.4 Empirical Tests
63.4.1 Variable Measurement

Based on Kanagaretnam et al. (2012), we define forecast accuracy and bias as follows:
Forecast accuracy (ACCURACY) is calculated as |Forecasted EPS − Actual EPS| / Beginning Stock Price; both forecasted and actual earnings per share are from IBES Summary Files. Because we are interested in assessing the impact of a firm's internal control on its financial statements, we try to focus on a particular announcement date: the annual earnings announcement. Forecast accuracy is computed as the absolute difference between the last median forecasted earnings before the annual earnings announcement and the actual earnings for the year in which ICMWs are disclosed.15 We deflate forecast accuracy by the beginning stock price to facilitate comparisons across firms.
Forecast bias (BIAS) is calculated as (Forecasted EPS − Actual EPS) / Beginning Stock Price; both forecasted and actual earnings per share are from IBES Summary Files. Forecast bias is computed as the difference between the last median forecasted earnings before the annual earnings announcement and the actual earnings for the year in which ICMWs are disclosed. We also deflate forecast bias by the beginning stock price to facilitate comparisons across firms.16
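In code form, the two measures can be computed directly from the last median IBES forecast, the IBES actual, and the beginning-of-year price. This is a minimal sketch; the sign convention follows the table definitions, where ACCURACY is the negative of the scaled absolute error:

```python
# Sketch of the forecast accuracy and bias measures.
# forecast_eps: last median IBES forecast before the annual announcement;
# actual_eps: IBES actual EPS; price0: beginning-of-year stock price.
def forecast_accuracy(forecast_eps, actual_eps, price0):
    # negative of the scaled absolute error, so larger values mean more accurate forecasts
    return -abs(forecast_eps - actual_eps) / price0

def forecast_bias(forecast_eps, actual_eps, price0):
    # positive values indicate optimistic forecasts
    return (forecast_eps - actual_eps) / price0

print(forecast_accuracy(1.10, 1.00, 20.0))  # -0.005
print(forecast_bias(1.10, 1.00, 20.0))      #  0.005
```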
63.4.2 Univariate Analysis

We first examine whether there are significant differences between ICMW firms and their matched firms in forecast accuracy and bias using mean and median tests. The results are reported in Table 63.3. We find that median forecast accuracy for the ICMW sample is significantly smaller than that for matched firms at the 0.01 level, suggesting that internal control material weaknesses are related to less accurate forecasts. In addition, we find that median forecast bias for the ICMW sample is significantly larger than that for matched firms at the 0.01 level, consistent with the notion that internal control material weaknesses are related to more optimistic forecasts. A similar pattern exists when comparing forecast accuracy and bias separately for G1 firms and their matched firms and for G2 firms and their matched firms.
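A sketch of these univariate comparisons is shown below; the placeholder data and the use of scipy's Mood median test (standing in for whichever median test the authors applied) are assumptions:

```python
# Sketch of the mean and median comparisons between ICMW and matched firms.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
icmw = rng.normal(-0.019, 0.05, 727)     # placeholder ACCURACY values, not the real sample
matched = rng.normal(-0.015, 0.05, 727)

t_stat, t_p = stats.ttest_ind(icmw, matched, equal_var=False)   # difference in means
med_stat, med_p, _, _ = stats.median_test(icmw, matched)        # difference in medians
print(f"t-test p = {t_p:.3f}, median-test p = {med_p:.3f}")
```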
15 Note that the ACCURACY is defined so that larger errors correspond to a lower level of accuracy.
16 As a sensitivity test, we calculate ACCURACY and BIAS using the simple average of the measures across the 6 or 12 monthly reporting periods on the IBES before the company's fiscal year end. In other words, we choose all median forecasts across the 6 or 12 monthly reporting periods on IBES before the company's fiscal year ends and then average the median forecasts to create our ACCURACY and BIAS variables. The results are similar to what we report in this paper (not tabulated). When we use forecasts from the prior year instead of the current year, we also get similar results (not tabulated).
Table 63.3 Accuracy and bias statistics for all ICMW firms, Contained ICMW (G1) and Pervasive ICMW firms (G2), and their matched firms
            ICMW firms (N = 727)       Matched firms (N = 727)
            Mean       Median          Mean       Median
ACCURACY    0.019      0.003***        0.015      0.002
BIAS        0.007*     0.001***        0.001      0.000
            G1 firms (N = 348)         Matched firms (N = 348)
            Mean       Median          Mean       Median
ACCURACY    0.017      0.003***        0.022      0.002
BIAS        0.004      0.001***        0.000      0.000
            G2 firms (N = 379)         Matched firms (N = 379)
            Mean       Median          Mean       Median
ACCURACY    0.020***   0.004***        0.009      0.002
BIAS        0.010***   0.001***        0.000      0.000
ACCURACY is forecast accuracy, calculated as the negative of the absolute difference between actual EPS and last median forecasted EPS scaled by stock price. BIAS is forecast bias, calculated as the difference between last median forecasted EPS and actual EPS scaled by stock price. *, **, *** denote two-tailed significance levels of 10 %, 5 %, and 1 %, respectively, for the differences between ICMW firms and matched firms, G1 firms and matched firms, and G2 firms and matched firms. T-test is used to test the difference between the mean of the ICMW sample and the matched sample, and median test is used to test the difference between the median of the ICMW sample and the matched sample Matched firms consist of firms in the same industry based on the 48 industry codes identified by Fama and French (1997) with the closest market value and sales at the end of fiscal year G1 firms are firms that disclose only Contained ICMW, and G2 firms are firms that disclose at least Pervasive ICMW. The Contained and Pervasive internal control material weaknesses are similar to Moody’s classification scheme. The Contained internal control material weakness is defined as the internal control issues related to controls over specific account balances, or transaction-level processes, or special accounting policy interpretation; the Pervasive internal control material weakness is defined as the internal control issues related to controls over the control environment or the overall financial reporting process
The median forecast accuracy of G1 firms is smaller than that of the matched firms at the 0.01 significance level. For G2 firms, the median forecast accuracy is also smaller than that of matched firms at the 0.01 significance level. In terms of optimistic bias, we find that median forecast biases for G1 and G2 samples are larger than those for their matched firms at the 0.01 significance level. These findings suggest that internal control material weaknesses are associated with less accurate and more optimistic forecasts for both G1 and G2 firms. In the next two subsections, we examine the association between ICMW and forecast accuracy (and bias) using multivariate regressions. The definitions of all the independent variables are provided in Appendix 2. Since we have a large number of independent variables incorporated in our regression models, the statistical inference on the variables could be affected by multicollinearity. Multicollinearity is a high degree of correlation (linear dependency) among several independent variables. It commonly occurs when some of the independent variables measure the same concepts or phenomena.
In Table 63.4, we present the pair-wise correlations among all independent variables in the regressions and find that none of the correlations is larger than 0.50 (or smaller than −0.50) except for the correlation between SKEW and EPSVOL. To avoid the multicollinearity issue, we choose not to include these two variables in the same regression model.
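This screening step amounts to computing the pairwise correlation matrix of the regressors and flagging any pair whose absolute correlation exceeds 0.50. A sketch, assuming a hypothetical DataFrame X of the independent variables:

```python
# Sketch of the multicollinearity screen: flag regressor pairs with |r| > 0.5.
import pandas as pd

def flag_collinear(X: pd.DataFrame, threshold: float = 0.5):
    corr = X.corr(method="pearson")
    pairs = []
    cols = corr.columns
    for i in range(len(cols)):
        for j in range(i + 1, len(cols)):
            if abs(corr.iloc[i, j]) > threshold:
                pairs.append((cols[i], cols[j], round(corr.iloc[i, j], 2)))
    return pairs  # e.g. [('SKEW', 'EPSVOL', 0.67)] -> keep only one of the two
```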
63.4.3 Analysis of Forecast Accuracy

The hypotheses to be tested are that analyst accuracy and bias are a function of ICMWs. However, the results will be difficult to interpret if endogeneity is a concern. We include firm-specific fixed effects to control for the possibility that endogeneity arises from omitted unobserved factors (e.g., business models) that may be correlated with both forecast quality and ICMWs. The model employed to test the association between forecast accuracy and ICMW is (H1 and H2):

ACCURACY_{i,t} = a0 + a1 ICMW_{i,t} + a2 NUM_{i,t} + a3 MV_{i,t} + a4 LEV_{i,t} + a5 ROA_{i,t} + a6 BM_{i,t} + a7 EPSVOL_{i,(t−5,t−1)} + a8 ABSECHG_{i,t} + a9 LOSS_{i,t} + a10 SPECIAL_{i,t} + a11 RET_{i,(t−3,t−1)} + a12 DA_{i,t} + e
(63.1) where ACCURACY is forecast accuracy, calculated as the negative of the absolute difference between actual EPS and last median forecasted EPS scaled by stock price. ICMW is an indicator variable that equals one if a firm discloses a material weakness in internal control and zero otherwise. The definitions of the other variables are provided in Appendix 2. In our regression model we first control for earnings characteristics. Prior research identifies earnings volatility (EPSVOL), losses (LOSS), and special items (SPECIAL) as earnings characteristics that can negatively affect forecast accuracy. The forecasting task is more difficult for firms with historically more volatile earnings compared to firms with historically more stable earnings (e.g., Kross et al. 1990; Lim 2001), losses, and special items (Brown and Higgins 2001). In addition, we include absolute earnings changes (ABSECHG) to capture any shift in reported earnings. Prior studies show that forecast errors are larger for larger earnings surprises (e.g., Lang and Lundholm 1996; Duru and Reeb 2002). Moreover, we use absolute abnormal accruals (DA) to control for earnings quality. DA is estimated using the modified Jones model of Larcker et al. (2007). We expect a negative relation between forecast accuracy and DA.17
17 Note that the differences in earnings quality could also be a consequence of the existence of ICMWs. By controlling for absolute abnormal accruals, we may over-control for the impact of ICMWs, but this would bias against us finding any results.
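A sketch of how Eq. 63.1 could be estimated with firm and exchange fixed effects is given below, including a Cook's distance screen like the outlier exclusion mentioned in the notes to Table 63.5. The column names, the synthetic data, and the 4/n cutoff are our assumptions, not details taken from the chapter:

```python
# Sketch of the forecast-accuracy regression (Eq. 63.1) with firm and exchange
# fixed effects, estimated on synthetic placeholder data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 400
cols = ["ACCURACY", "ICMW", "NUM", "MV", "LEV", "ROA", "BM",
        "EPSVOL", "ABSECHG", "LOSS", "SPECIAL", "RET", "DA"]
df = pd.DataFrame({c: rng.normal(size=n) for c in cols})
df["firm_id"] = rng.integers(0, 100, n)                 # firm fixed effects
df["exchange"] = rng.choice(["NYSE", "NASDAQ", "AMEX"], n)

formula = ("ACCURACY ~ ICMW + NUM + MV + LEV + ROA + BM + EPSVOL + ABSECHG"
           " + LOSS + SPECIAL + RET + DA + C(firm_id) + C(exchange)")
first = smf.ols(formula, data=df).fit()

# drop overly influential observations (Cook's distance) and re-estimate
cooks_d = first.get_influence().cooks_distance[0]
fit = smf.ols(formula, data=df[cooks_d < 4.0 / n]).fit()
print(fit.params["ICMW"], fit.tvalues["ICMW"])
```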
Table 63.4 Pearson correlations among the variables used in regression analysis
          NUM      MV       LEV     BM       SPE    NECHG    EPSVOL  SKEW   ICMW    ABSECHG  ECHG     LOSS     ROA    RET
NUM       1.00
MV        0.47***  1.00
LEV       0.21***  0.08     1.00
BM        0.18***  0.24     0.05    1.00
SPE       0.10     0.07***  0.03    0.06     1.00
NECHG     0.17***  0.28***  0.08    0.19***  0.06   1.00
EPSVOL    0.31***  0.29***  0.09    0.01     0.05   0.05     1.00
SKEW      0.07     0.09     0.12**  0.04     0.02   0.05     0.67*** 1.00
ICMW      0.03     0.03     0.03    0.13**   0.01   0.19***  0.04    0.05   1.00
ABSECHG   0.03     0.09     0.01    0.01     0.05   0.10     0.02    0.02   0.01    1.00
ECHG      0.01     0.07     0.08    0.04     0.06   0.35**   0.01    0.01   0.06    0.49***  1.00
LOSS      0.11**   0.18***  0.12**  0.07     0.03   0.33***  0.11**  0.09   0.11**  0.19***  0.18***  1.00
ROA       0.23***  0.29***  0.13**  0.05     0.03   0.33***  0.12**  0.05   0.09    0.19***  0.17***  0.45***  1.00
RET       0.21***  0.07     0.01    0.03     0.09   0.01     0.02    0.03   0.08    0.07     0.02     0.04     0.01   1.00
All the variables are described in Appendix 2
**, *** significant at 0.05 level and 0.01 level, respectively
We next control for other firm characteristics such as size, growth, financial leverage, profitability, and risk. Firm size is measured as the natural logarithm of market value at the end of the year (MV) and has been used in the literature as a proxy for a number of factors. To the extent that size reflects information availability about a firm (other than through annual reports), a positive relation with forecast accuracy is expected (Ho 2004). However, firm size could also proxy for a host of other factors, such as managers' incentives, for which predictions for the relation with forecast accuracy are unclear.18 We measure growth as the natural logarithm of the ratio of the book value of equity to the market value of equity at the end of the year (BM). Dechow and Sloan (1997) and Richardson et al. (2004) find that forecast accuracy and bias are related to measures of growth. Consistent with prior research, we expect firms with low book to market ratios (i.e., high growth firms) to have more accurate forecasts than firms with high book to market ratios (i.e., turnaround and declining firms). We also include the debt to equity ratio (LEV) to proxy for financial leverage and return on assets (ROA) to proxy for profitability. We do not have predictions for the sign of these two variables. Finally, we use the equally weighted market-adjusted cumulative return over the past 3 years (RET) to proxy for firm risk. We expect a negative relation between forecast accuracy and RET.19 Next, we use the natural logarithm of the number of analysts whose forecasts are used in calculating the last median earnings forecast (NUM) to account for the effects of differences in forecast characteristics on forecast accuracy. Lys and Soo (1995) argue that the number of analysts proxies for the intensity of competition in the market. We expect a positive relation between forecast accuracy and analyst following. Finally, we include firm-specific and exchange-specific fixed effects to control for firm-specific and exchange-specific shocks. The results of the accuracy tests are reported in Table 63.5. Column 3 shows the OLS regression results for the overall sample along with the matched sample; Column 4 shows the OLS regression results for G1 sample firms and their matched firms; and Column 5 shows the OLS regression results for G2 sample firms and their matched firms. The results presented in Table 63.5 show that the coefficients on ICMW for the overall sample and G1 sample firms are not significantly different from zero. These findings suggest that forecast accuracy is not significantly different for all ICMW disclosing firms and for Contained ICMW disclosing firms relative to their corresponding matched firms when controlling for other independent variables. In contrast, for G2 sample firms, the coefficient of ICMW in Column 5 is significantly negative (at the 0.1 level for the two-tailed test). The negative coefficient suggests that forecast accuracy is significantly lower for Pervasive ICMW disclosing firms compared with their matched firms. Inferences about the control variables in the regression are generally similar to previous studies. Specifically, firms with lower frequency of negative earnings,
18 Fan and Yeh (2006) find that forecasting error is a negative function of firm size.
19 The results are similar if we use value-weighted market-adjusted cumulative returns.
Table 63.5 OLS regression estimations relating ACCURACY to ICMW firm variables
ACCURACY_{i,t} = a0 + a1 ICMW_{i,t} + a2 NUM_{i,t} + a3 MV_{i,t} + a4 LEV_{i,t} + a5 ROA_{i,t} + a6 BM_{i,t} + a7 EPSVOL_{i,(t−5,t−1)} + a8 ABSECHG_{i,t} + a9 LOSS_{i,t} + a10 SPECIAL_{i,t} + a11 RET_{i,(t−3,t−1)} + a12 DA_{i,t} + e
Dependent variable = ACCURACY
(1)            (2)          (3)                      (4)                      (5)
Independent    Predicted    All sample (N = 1454)    G1 sample (N = 696)      G2 sample (N = 758)
variables      sign         Coefficient (t-stat)     Coefficient (t-stat)     Coefficient (t-stat)
INTERCEPT                   0.043 (3.65***)          0.038 (1.92*)            0.046 (3.19***)
ICMW                        0.005 (1.22)             0.002 (0.36)             0.009 (1.77*)
NUM            +            0.003 (0.92)             0.002 (0.36)             0.007 (1.69*)
MV             +            0.003 (1.41)             0.004 (1.14)             0.002 (0.73)
LEV            +/−          0.005 (0.44)             0.016 (0.80)             0.008 (0.52)
BM             −            0.009 (3.02***)          0.006 (0.99)             0.011 (3.12***)
SPECIAL        −            0.016 (2.57***)          0.024 (2.22**)           0.009 (1.14)
EPSVOL         −            0.000 (1.13)             0.002 (0.92)             0.000 (1.10)
ABSECHG        −            0.000 (0.05)             0.000 (0.20)             0.000 (0.47)
LOSS           −            0.010 (1.75*)            0.006 (0.54)             0.006 (0.76)
ROA            +/−          0.120 (5.69***)          0.181 (3.96***)          0.120 (5.21***)
RET            −            0.002 (0.79)             0.017 (2.93***)          0.005 (1.35)
DA             −            0.044 (1.49)             0.099 (1.67*)            0.026 (0.84)
R-square                    16.11 %                  14.21 %                  23.32 %
Observations include ICMW firms and matched firms that do not disclose ICMW. Matched firms consist of firms in the same industry based on the 48 industry codes identified by Fama and French (1997) with the closest market value and sales at the end of fiscal year. Regressions in the third, fourth, and fifth columns include all ICMW firms and matched firms, G1 firms and matched firms, and G2 firms and matched firms separately. Regressions control for exchange and firm fixed effects. Outliers are excluded using Cook’s (1977) distance statistic. N is the number of firm-year observations. *, **, *** denote two-tailed significance levels of 10 %, 5 %, and 1 %, respectively. All the variables are defined in Appendix 2
lower book to market ratios, smaller abnormal accruals, and a larger analyst following have more accurate forecasts (as evidenced by significant P-values).

f5(i) = k²/(2(t − t1)) if i = 8l + 1, 8l + 2, 8l + 3, or 8l + 4, and k²/(2(t2 − t1)) otherwise, for all i = 1, 2, ..., 16,
L(i) = K / √(8π³ (t2 − t1) t1), i = 2, 3, 5, 8,
L(i) = L(2), i = 1, 4, 6, 7,
L(i) = S''(0) / √(8π³ (t2 − t1) t1), i = 9, 12, 14, 15,
L(i) = L(9), i = 10, 11, 13, 16,
and the rest of the parameters, including f1(i), can be easily derived. Finally, we obtain the following pricing formula in the two-dividend case:

C̈ = e^{−rT} ∫_{−∞}^{b} ∫_{−∞}^{b'} ∫_{k''}^{b''} [O(1) + O(2) + ··· + O(16)] dx dy dz
  = e^{−rT} Σ_{i=1}^{16} L(i) [ H(b'', b', b, A(i), B(i), C(i)) − H(k'', b', b, A(i), B(i), C(i)) ].
Table 65.4 Comparing our pricing results and the exact values in single-dividend case
P(0)    44            46          48          50          52
RR      1.0063271     1.25184     1.45995     1.60138     1.65384
ours    1.0063272     1.25183     1.45994     1.60137     1.65384
diff    1.44794E-07   4.6E-06     1.61E-06    1.81E-06    9.21E-07
Table 65.5 Comparing the pricing results by different models in single-dividend case
P(0)    B        ours     M2       M1
48      1.3456   1.3427   1.3317   1.3641
52      1.5829   1.5796   1.5767   1.6401
56      1.4389   1.431    1.4395   1.5423
60      0.9164   0.9106   0.9266   1.07
64      0.1932   0.1868   0.1932   0.3697
MAE              0.0089   0.0141   0.1765
RSE              0.0054   0.0093   0.1109

65.4 Numerical Results
To show that our approximating formula prices accurately, in this section we give the numerical results of our formula, compared with other pricing schemes. First we show that our pricing formula can exactly price a barrier option in the zero-dividend case. Equation 65.8 given in Reiner and Rubinstein (1991) is our special case when all discrete dividends ci are zero. Table 65.4 gives the comparison of the pricing results by our formula in the single-dividend case and Eq. 65.8, reporting option values for different initial underlying stock prices with r = 0.03, σ = 0.2, K = 50, B = 65, T = 1, t1 = 0.5, and c1 = 0; in the table, "RR" and "ours" stand for the pricing results by Eq. 65.8 and our closed-form approximation for the single-dividend case, respectively, and "diff" gives their difference. As shown in this table, Eq. 65.8 and our pricing formula give almost the same result. Notice that the approximations for the CDFs N(x) and F_{U1,U2}(a, b; O) could cause insignificant errors. As mentioned in Frishling (2002), only model 3 can reflect the real-world phenomenon. Though models 1 and 2 suggested by Roll (1977) and Heath and Jarrow (1988) try to approximate model 3, Table 65.5 shows that their pricing results are less accurate than ours. Table 65.5 reports option values approximated by model 1 (denoted "M1"), model 2 (denoted "M2"), and our formula for different initial underlying stock prices with r = 0.03, σ = 0.2, K = 50, B = 65, T = 1, t1 = 0.5, and c1 = 1; the columns "error" list the difference between their left column and the benchmark (denoted "B"), MAE denotes the maximum absolute error, and RSE stands for the root-mean-square error. Table 65.5 gives the comparison of the pricing
Table 65.6 Comparing the pricing results by different models in single-dividend case. All settings are the same as in Table 65.5, but now we fix P(0) = 50 and compare the pricing results for different dividends c1
c1      B         ours      M2        M1
0.3     1.57589   1.57301   1.57046   1.58565
0.9     1.52022   1.51291   1.50619   1.54856
1.5     1.44776   1.44926   1.439     1.50439
2.1     1.3843    1.38283   1.36935   1.45376
2.7     1.30605   1.31449   1.29772   1.39734
MAE               0.01004   0.01495   0.09737
RMSE              0.00528   0.01067   0.06299
Table 65.7 Comparing the pricing results in two-dividend case. Option values in the two-dividend case are approximated by models 1 and 2 and our formula for different initial underlying stock prices with r = 0.03, σ = 0.2, K = 50, B = 65, T = 1.5, t1 = 0.5, and t2 = t1 + 0.5
P(0)    B          ours       M2         M1
48      1.003312   1.002807   0.996404   1.061938
52      1.048434   1.043798   1.047928   1.151891
56      0.877129   0.873651   0.887206   1.031609
60      0.536375   0.531522   0.54854    0.728746
64      0.110358   0.107166   0.113098   0.319177
MAE                0.004919   0.015314   0.208819
RSE                0.003754   0.009409   0.146997
results by models 1 and 2 and our formula in the single-dividend case, where we use the results of a Monte Carlo simulation with 1,000,000 paths as a benchmark. Table 65.6 gives the same comparison as Table 65.5, except that Table 65.6 compares the pricing results for several different values of c1 with P(0) fixed, whereas Table 65.5 compares the results for different P(0) with c1 fixed. As shown in Table 65.6, our pricing formula produces poorer results as c1 increases, but on average, our pricing results are still more accurate than the others. Furthermore, the dividend c1 is typically quite small, and hence our pricing formula approximates option values accurately. In the two-dividend case, the comparison of pricing results is shown in Table 65.7. As shown in these tables, our formula also produces more accurate results than models 1 and 2. Thus, we can conclude that our formula is better in all cases.
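For readers who want to reproduce a benchmark of this kind, the following Monte Carlo sketch prices a barrier option under model 3, with the stock following geometric Brownian motion between dividend dates and dropping by the cash dividend at t1. The payoff type (an up-and-out call, consistent with K = 50 and barrier B = 65 above the initial price), the discrete barrier monitoring, and the path count are our assumptions about the benchmark setup rather than details taken from the chapter:

```python
# Monte Carlo sketch of a barrier option under model 3 (price drops by the cash
# dividend c1 at t1). Discrete monitoring only approximates a continuously
# monitored barrier; a continuity correction would tighten the estimate.
import numpy as np

def mc_up_and_out_call(S0, K, B, r, sigma, T, t1, c1, n_paths=100_000, n_steps=500):
    rng = np.random.default_rng(0)
    dt = T / n_steps
    S = np.full(n_paths, S0, dtype=float)
    alive = np.ones(n_paths, dtype=bool)
    for step in range(1, n_steps + 1):
        z = rng.standard_normal(n_paths)
        S *= np.exp((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z)
        if step * dt >= t1 > (step - 1) * dt:    # ex-dividend date
            S -= c1
        alive &= S < B                            # knocked out once the barrier is hit
    payoff = np.where(alive, np.maximum(S - K, 0.0), 0.0)
    return np.exp(-r * T) * payoff.mean()

print(mc_up_and_out_call(S0=50, K=50, B=65, r=0.03, sigma=0.2, T=1.0, t1=0.5, c1=1.0))
```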
65.5 Conclusions
There are several different ways to model the stock price process with discrete dividends. As suggested by Frishling (2002), only model 3 can reflect the real-world phenomenon. However, it is hard to price a barrier option under model 3. Our chapter suggests a way to derive approximate analytical pricing formulas for a barrier option under model 3. Though the resulting analytical formulas involve multiple integrals, as shown above, all of them can be reformulated in terms of the
CDF of a multivariate normal distribution. As a result, our pricing formula prices efficiently. Furthermore, the numerical results show that our pricing formulas produce accurate results.
Appendix: Solve the Integration of Exponential Functions by the CDF of Multivariate Normal Distribution

If the price of the underlying asset is assumed to follow the lognormal diffusion process, most option pricing formulas, including the pricing formulas in this chapter, can be expressed in terms of multiple integrations of an exponential function, where the exponent term is a quadratic function of the integrators z1, z2, ···. The integration problem can be numerically solved by reexpressing the formulas in terms of the CDF of a multivariate normal distribution, which can be efficiently evaluated by accurate numerical approximation methods (see Hull 2003). These numerical methods are provided by mathematical software packages, like Matlab and Mathematica. Take the simplest case, the single integral, for example. Under the premise f2 < 0, the integral

∫_{−∞}^{l} e^{f2 x² + f1 x + f0} dx

can be rewritten as

√(−π/f2) e^{(4 f0 f2 − f1²)/(4 f2)} N((l − m)/s),   (65.31)

where

m = −f1/(2 f2),
s = 1/√(−2 f2),
and N(·) denotes the CDF of a univariate standard normal distribution. Note that the above identity is used to derive the pricing formula in the no-dividend case. However, the integration for the multivariate case is not straightforward. To address this problem, we derive a general formula for the multivariate integration with n integrators: z1, z2, ···, zn. Some matrix and vector calculations are employed to simplify the derivation. For simplicity, for any matrix ℶ, we use |ℶ|, ℶ^T, and ℶ^{−1} to denote the determinant, the transpose, and the inverse matrix of ℶ. ℶ_{i,j} stands for the element located at the i-th row and j-th column of ℶ. For any vector ν, we use ν_i to denote the i-th element of ν. We further assume that z = (z1, z2, ···, zn)^T is a column vector with n variables, b is an n × 1 constant vector, d is a constant, and W is an n × n symmetric invertible negative-definite constant matrix. Then the general integral formula is derived in the following theorem:
Theorem 65.5 For any general quadratic form z^T W z + b^T z + d, the n-variate integral of e^{z^T W z + b^T z + d},

∫_{−∞}^{w_n} ∫_{−∞}^{w_{n−1}} ··· ∫_{−∞}^{w_1} e^{z^T W z + b^T z + d} dz,   (65.32)

can be expressed in terms of a CDF of an n-dimensional standard normal distribution,

Φ_{Υ1, Υ2, ..., Υn}(l1, l2, ..., ln; Ω) ≡ (1/√(|Ω|(2π)^n)) ∫_{−∞}^{l_n} ∫_{−∞}^{l_{n−1}} ··· ∫_{−∞}^{l_1} e^{−(1/2) y^T Ω^{−1} y} dy,

where Ω denotes the covariance matrix of an n-variate standard normal random vector (Υ1, Υ2, ..., Υn).

Proof To express the integral in Eq. 65.32 in terms of a CDF of a standard normal distribution, the exponent term z^T W z + b^T z + d should be expressed in terms of the exponent term of a standard normal distribution. That is,

z^T W z + b^T z + d = −(1/2) y^T Ω^{−1} y + d'   (65.33)

for some constant d'. This can be achieved by first deriving a proper constant vector m and a proper diagonal matrix F and then substituting

y = F^{−1} ν   (65.34)

into the left-hand side of Eq. 65.33, where ν is the abbreviation of z − m. The following lemma derives a proper m by completing the square identity for z^T W z + b^T z.

Lemma 65.6 Under the premises that W is a symmetric invertible n × n matrix, and z, b are both n × 1 vectors, we have

z^T W z + b^T z = (z + (1/2) W^{−1} b)^T W (z + (1/2) W^{−1} b) − (1/4) b^T W^{−1} b.   (65.35)

Proof By expanding the right-hand side of Eq. 65.35, we have

(z + (1/2) W^{−1} b)^T W (z + (1/2) W^{−1} b) − (1/4) b^T W^{−1} b = z^T W z + b^T z = the left-hand side of Eq. 65.35.   (65.36)

With Lemma 65.6, we obtain

z^T W z + z^T b + d = ν^T W ν + d',   (65.37)
where

m ≡ −(1/2) W^{−1} b,   (65.38)

d' ≡ d − (1/4) b^T W^{−1} b.   (65.39)

The diagonal matrix F can be derived by equating the right-hand sides of Eq. 65.33 and Eq. 65.37 to get

ν^T W ν + d' = −(1/2) y^T Ω^{−1} y + d'.

Subtracting d' from both sides of the above equation yields

ν^T W ν = −(1/2) y^T Ω^{−1} y   (65.40)
        = −(1/2) ν^T (FΩF)^{−1} ν.   (65.41)

By comparing the right-hand sides of Eqs. 65.40 and 65.41, we have −(1/2)(FΩF)^{−1} = W, which is rewritten as FΩF = (−2W)^{−1}. Recall that F is a diagonal matrix. All diagonal elements of Ω are 1 since Ω is a covariance matrix of multivariate standard normal random variables. Thus, we have (FΩF)_{i,i} = F²_{i,i}, which leads us to obtain

F_{i,j} ≡ √(((−2W)^{−1})_{i,i}) if i = j, and 0 otherwise,   (65.42)

and

Ω ≡ (−2FWF)^{−1}.   (65.43)
Now we can evaluate Eq. 65.32 with d', m, F, and Ω defined above. By applying the change of variable Eq. 65.34, Eq. 65.32 can be rewritten as

∫_{z_n=−∞}^{z_n=w_n} ∫_{z_{n−1}=−∞}^{z_{n−1}=w_{n−1}} ··· ∫_{z_1=−∞}^{z_1=w_1} e^{z^T W z + b^T z + d} dz
 = ∫_{z_n=−∞}^{z_n=w_n} ∫_{z_{n−1}=−∞}^{z_{n−1}=w_{n−1}} ··· ∫_{z_1=−∞}^{z_1=w_1} e^{−(1/2) y^T Ω^{−1} y + d'} |∂z/∂y| dy.   (65.44)
Since the elements in vector y can be represented as ((z_1 − m_1)/F_{1,1}, (z_2 − m_2)/F_{2,2}, ..., (z_n − m_n)/F_{n,n})^T, the Jacobian determinant can be straightforwardly computed to get |∂z/∂y| = |F| = Π_{i=1}^{n} F_{i,i}. Thus, Eq. 65.44 can be further rewritten as the following closed-form formula:

√(π^n / |−W|) e^{d'} Φ_{Υ1, Υ2, ..., Υn}((w_1 − m_1)/F_{1,1}, (w_2 − m_2)/F_{2,2}, ..., (w_n − m_n)/F_{n,n}; Ω),   (65.45)

where |F| √(|Ω|) = √(|FΩF|) = √(|−2W|^{−1}) and |−2W| = 2^n |−W| are substituted into Eq. 65.45. This integration formula is similar to the single integral case given above. Q.E.D.
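As an illustration of how Theorem 65.5 can be used in practice, the sketch below checks Eq. 65.45 numerically in two dimensions against brute-force integration using scipy; the matrices and limits are arbitrary illustrative choices, not values from the chapter:

```python
# Numerical check of Theorem 65.5 for n = 2; W, b, d, and w are illustrative.
import numpy as np
from scipy import integrate
from scipy.stats import multivariate_normal

W = np.array([[-1.0, 0.3], [0.3, -0.8]])    # symmetric negative definite
b = np.array([0.5, -0.2])
d = 0.1
w = np.array([0.7, 1.2])                    # upper integration limits

Winv = np.linalg.inv(W)
m = -0.5 * Winv @ b                         # Eq. 65.38
d_prime = d - 0.25 * b @ Winv @ b           # Eq. 65.39
F = np.diag(np.sqrt(np.diag(np.linalg.inv(-2.0 * W))))   # Eq. 65.42
Omega = np.linalg.inv(-2.0 * F @ W @ F)     # Eq. 65.43

n = len(b)
const = np.sqrt(np.pi ** n / np.linalg.det(-W)) * np.exp(d_prime)
upper = (w - m) / np.diag(F)
closed_form = const * multivariate_normal(mean=np.zeros(n), cov=Omega).cdf(upper)

# Brute-force check on a truncated region (the integrand decays rapidly).
integrand = lambda z1, z2: np.exp(np.array([z1, z2]) @ W @ np.array([z1, z2])
                                  + b @ np.array([z1, z2]) + d)
brute, _ = integrate.dblquad(integrand, -8, w[1], -8, w[0])  # z1 inner, z2 outer
print(closed_form, brute)   # the two values should agree to several decimals
```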
References Bender, R. & Vorst T. (2001). Options on dividends paying stocks. In Proceeding of the 2001 International Conference on Mathematical Finance, Shanghai. Black, F. (1975). Fact and fantasy in the use of options. Financial Analysts Journal, 31(36–41), 61–72. Black, F., & Scholes, M. (1973). The pricing of options and corporate liabilities. Journal of Political Economy, 81, 637. Bos, M., & Vandermark, S. (2002). Finessing fixed dividends. Risk, 15, 157–158. Chiras, D., & Manaster, S. (1978). The informational content of option prices and a test of market efficiency. Journal of Financial Economics, 6, 213–234. Dai, T.-S. (2009). Efficient option pricing on stocks paying discrete or path-dependent dividends with the stair tree. Quantitative Finance, 9, 827–838. Dai, T.-S., & Lyuu, Y. D. (2009). Accurate approximation formulas for stock options with discrete dividends. Applied Economics Letters, 16, 1657–1663. Figlewski, S., & Gao, B. (1999). The adaptive mesh model: A new approach to efficient option pricing. Journal of Financial Economics, 53, 313–351. Frishling, V. (2002). A discrete question. Risk, 15, 115–116. Gaudenzi, M., & Zanette, A. (2009). Pricing American barrier options with discrete dividends by binomial trees. Decisions in Economics and Finance, 32, 129–148. Geske, R. (1979). A note on an analytical valuation formula for unprotected American call options on stocks with known dividends. Journal of Financial Economics, 7(4), 375–380. Heath, D., & Jarrow, R. (1988). Ex-dividend stock price behavior and arbitrage opportunities. Journal of Business, 61, 95–108. Hull, J. (2003). Options, futures, and other derivatives. Prentice Hall. Kim, I., Ramaswamy, K., & Sundaresan, S. (1993). Does default risk in coupons affect the valuation of corporate bonds? A contingent claims model. Financial Management, 22, 117–131. Lando, D. (2004). Credit risk modeling: Theory and applications. Princeton, NJ: Princeton University Press. Leland, H. E. (1994). Corporate debt value, bond covenants, and optimal capital structure. Journal of Finance, 49, 157–196. Leland, H. E., & Toft, K. (1996). Optimal capital structure, endogenous bankruptcy, and the term structure of credit spreads. Journal of Finance, 51, 987–1019. Merton, R. (1973). Theory of rational option pricing. Journal of Economics and Management Science, 4, 141–183.
Merton, R. (1974). On the pricing of corporate debt: The risk structure of interest rates. Journal of Finance, 29, 449–470. Musiela, M., & Rutkowski, M. (1997). Martingale methods in financial modeling. Sydney: Springer. Reiner, E., & Rubinstein, M. (1991). Breaking down the barriers. Risk, 4, 28–35. Roll, R. (1977). An analytic valuation formula for unprotected American call options on stocks with known dividends. Journal of Financial Economics, 5, 251–258. Shreve, E. (2007). Stochastic calculus for finance II: Continuous-time models. New York: Springer Finance. Vellekoop, M., & Nieuwenhuis, J. (2006). Efficient pricing of derivatives on assets with discrete dividends. Applied Mathematical Finance, 13, 265–284.
66 Pension Funds: Financial Econometrics on the Herding Phenomenon in Spain and the United Kingdom
Mercedes Alda García and Luis Ferruz
Contents
66.1 Introduction
66.2 The Herding Phenomenon
66.3 Literature Review
66.4 Methodology
  66.4.1 CAPM and Herding
  66.4.2 Measuring Herding
  66.4.3 Models for Measuring Herding
  66.4.4 The State-Space Models
  66.4.5 Herding with Market and Macroeconomic Variables
  66.4.6 Generalized Herding Measurement in Linear Factor Models
  66.4.7 Robust Estimate of the Betas
66.5 Data and Empirical Results
  66.5.1 Data
  66.5.2 Empirical Results
66.6 Conclusions
Appendix 1: The State-Space Models
Appendix 2: Robust Estimate of the Betas
References
Abstract
This work reflects the impact of Spanish and UK pension funds' investment on market efficiency; specifically, we analyze whether managers' behavior enhances the existence of herding phenomena. To implement this study, we apply a less common methodology: the estimated cross-sectional standard deviations of betas. We also estimate the betas
M. Alda García (*) • L. Ferruz
Facultad de Economía y Empresa, Departamento de Contabilidad y Finanzas, Universidad de Zaragoza, Zaragoza, Spain
e-mail: [email protected]; [email protected]
C.-F. Lee, J. Lee (eds.), Handbook of Financial Econometrics and Statistics, DOI 10.1007/978-1-4614-7750-1_66, © Springer Science+Business Media New York 2015
with an econometric technique less applied in the financial literature: state-space models and the Kalman filter. Additionally, in order to obtain a robust estimation, we apply the Huber estimator. Finally, we apply several models and study the existence of herding towards the market, size, book-to-market, and momentum factors. The results are similar for the two countries and style factors, revealing the existence of herding. Nonetheless, herding is weaker towards the size, book-to-market, and momentum factors.

Keywords
Herding • Pension funds • State-space models • Kalman filter • Huber estimation • Imitation • Behavioral finance • Estimated cross-sectional standard deviations of betas • Herding towards the market • Herding towards size factor • Herding towards book-to-market factor • Herding towards momentum factor
66.1 Introduction
For some time, Western countries have been undergoing a range of demographic and social changes: increased life expectancy, ageing population, shorter active phases, and longer retirement periods. All of these changes, together with increasing concern over the viability of public pension systems, have led to a greater role for complementary pension systems. Together with all these factors, society has become increasingly aware of the need to save for a higher retirement income; as Sanz (1999) observes, the state pension incomes are lower than those received during working life. Thus, together with tax incentives, pension plans are one of the key financial products for saving in developed economies. The need to find a backup to the state pension has caused more and more professional and nonprofessional investors to take out pension plans by investing in pension funds. As a consequence, these products are beginning to take a leading role in the industry of collective investment; according to INVERCO (Spanish Association of Investment and Pension Funds), the investment in such products worldwide in 2010 exceeded 12½ billion euros, of which more than 3 billion euros came from Europe, highlighting the United Kingdom, which represents a third of the total European investment. The origin of pension funds and pension plans can be traced to the foundation of the welfare states, when various governments began to expand public spending, in particular on social welfare: education, health, pensions, housing, and unemployment benefit. Among the different parts of the welfare state, pensions are included under the heading of the Social Welfare Systems, and the European pension systems are organized around the Livonia focus which is divided into three pillars, as Gonza´lez and Garcı´a (2002) notes:
1. The first pillar consists of the Social Security System. It is integrated by the public system, which is mandatory, defined benefit pensions, and pay-as-you-go system, which guarantees a minimum level of pension. The pay-as-you-go system means that active workers pay, with their contributions, the pensions of retirees at that time; although each worker’s pension depends on the contributions made during working life. 2. The second pillar consists of private and complementary occupational plans, within the scope of companies and workers associations. This pillar may be either voluntary or compulsory and may replace or supplement the first pillar, so it may be either private or public. 3. The third pillar consists of individual savings decisions, so it is private and voluntary; an example is the personal pension plans. Therefore, the first pillar is the Public Social Welfare System, while the second and third comprise the Complementary Social Welfare System. The evolution of each of these pillars depends on the evolution of the state system, so if the latter has expanded, providing more generous state pensions, the other two have been developed to a lesser extent, and vice versa. For instance, Ferna´ndez (2011) notes that the third pillar is weak in Spain because public pensions are close to the final salary. Therefore, complementary pension systems have varying characteristics across Europe depending on the private pension’s development (second and third pillars), and we can divide them into three groups: – Countries in which private pension systems have reached a high stage of development: the United Kingdom, Ireland, the Netherlands, and Sweden. – Countries in which private pension systems have reached a considerable degree of development, but are still evolving: Spain, Portugal, Italy, and Germany. – Countries in which private pension systems are less developed: France, Belgium, and some East European countries. With regard to legislation in Europe on these products, it must be noted that each country has its own, so that across Europe we can observe significant differences. Among the different European countries, we focus on Spain and the United Kingdom because they represent two different systems and their industries have different size; as a result, we want to study if these characteristics provide different results on the topic studied: the herding phenomenon. Respect the pension systems, these two markets are the most representative of two different pension systems: Spain belongs to the “Mediterranean model,” which is characterized by generous public pensions. In contrast, the United Kingdom belongs to the “Anglo-Saxon model,” with less generous public pensions, so private pensions are more developed. Equally, the differences in size between the two markets allow us to observe whether they enhance or not the existence of herding. With regard to the United Kingdom, according to the OECD (2010), it is the leading country in Europe (1.1 billion euros) and the second in worldwide terms, just behind the United States.
For its part, Spain occupies seventh place in Europe, with 84 billion euros invested in 2010; given that these products did not come onto the Spanish market until 1988, the country has experienced significant growth over recent years, and this trend could have implications for the existence of herding. We focus on the study of herding because, traditionally, most financial studies examine pension funds' performance and analyze manager behavior; however, market efficiency can also be affected by pension funds' investment, for example, through the herding phenomenon, the aspect we study in this paper. We apply a less common methodology to detect herding, based on state-space models, as in the work of Christie and Huang (1995). Nonetheless, that model suffers from several inadequacies: it supposes that the instability of the market means that the whole market should show negative or positive returns, and it introduces dummy variables arbitrarily. Therefore, we also build on the work of Hwang and Salmon (2004), who measure herding using a single-factor model; in particular, they start from the market return in the CAPM model and use the dispersion of the betas of all market stocks. The rest of the work is organized as follows: in the second section we describe the herding phenomenon, and the third section carries out a literature review on the topic. The fourth section develops the methodology. The fifth section describes the data and presents the empirical results. Finally, we present the main conclusions.
66.2 The Herding Phenomenon
This topic is part of behavioral finance, which focuses on the study of the rationality of investment decisions and the implications of cognitive processes for decision making (Fromlet 2001). Investors' preferences, such as loss aversion, may produce irrational reactions and affect market efficiency (Kahnemann and Tversky 1979; Tversky and Kahnemann 1986). This behavior may imply price fluctuations that are not necessarily related to the arrival of new market information, but rather to the emergence of collective phenomena, like herding (Thaler 1991; Shefrin 2000), affecting the efficiency and stability of the market. In the financial literature, herding arises when investors decide to imitate the decisions of other participants in the market or market movements; that is to say, they imitate market agents who are thought to be better informed, rather than follow their own beliefs and information. We treat the manager as the investor, given that even though individual investors make the investments in pension plans, the managers are responsible for buying and selling in the market; likewise, they also vary the composition of the pension fund portfolio. Therefore, managers are the final investors and they carry out the investment in the market, albeit according to the guidelines established by the members of the pension plans. In financial terms, the herding phenomenon is one of the most widely discussed because, in the field of asset pricing, it helps to explain market
anomalies; however, the difficulty in its measurement and calculation has limited research on it. It is generally accepted that herding can lead to a situation in which market prices cannot reflect all of the information, so that the market becomes unstable and moves towards inefficiency. For that reason, market regulators show an interest in reducing this type of phenomenon. Theoretical and empirical studies have focused on finding causes and implications related to herding. The majority agree that it may be due to both rational and irrational investor behavior. According to Devenow and Welch (1996), the irrational viewpoint focuses on the psychology of the investor, where the investor follows others blindly. On the other hand, rational herding may appear due to diverse causes. The first of them is the existence of imperfect information, that is to say, when it is believed that other market participants are better informed. Banerjee (1992), Bikhchandani et al. (1992), Hirshleifer et al. (1994), Calvo and Mendoza (2000), Avery and Zemsky (1998), Chari and Kehoe (2004), Gompers and Metrick (2001), Puckett and Yan (2007), and Sahut et al. (2011) show evidence of such imperfect information. The second aspect that causes rational herding is the cost of reputation. Scharfstein and Stein (1990), Trueman (1994), Rajan (1994), or Maug and Naik (1996), focusing on agency theory, show evidence of this. These studies prove that mutual fund managers imitate others in order to obtain bonuses as set out in their compensation and reputation schemes. Compensation schemes also cause rational herding, as an investor will be rewarded based on their performance relative to others; therefore, deviations from the market consensus could lead to an undesirable cost. Studies such as those of Roll (1992), Brennan (1993), Rajan (1994), or Maug and Naik (1996) demonstrate this. In addition to these explanations, some authors have considered other factors, like the degree of institutional participation, the spread of opinions, derivatives markets and their sophistication, or uninformed investors. Among these are Patterson and Sharma (2006), Demirer and Kutan (2006), Henker et al. (2006), and Puckett and Yan (2007). Despite the uncertainty surrounding the causes of this behavior, the study of herding in financial markets has followed two lines of investigation. The first one analyzes the tendency of individuals (individual investors). Among them, we highlight the works of Lakonishok et al. (1992a), Grinblatt et al. (1995), Wermers (1999), and Uchida and Nakagawa (2007). The second trend focuses on market herding as a whole, that is to say, as a collective behavior of all participants buying or selling a certain asset at the same time. The most representative studies within this line of investigation are those of Christie and Huang (1995), Chang et al. (2000), Hwang and Salmon (2004), Patterson and Sharma (2006), and Wang (2008). In this paper we focus on this second approach; to this purpose, we observe deviations from the equilibrium implied by CAPM prices, an approach less used in the financial literature.
Based on the works of Christie and Huang (1995) and Hwang and Salmon (2004), we capture herding using observed returns data, instead of measuring it in the same way as Lakonishok et al. (1992a), through detailed records of individual trading activities, which may not be available in many cases. For this reason, in order to detect herding, we use the cross-sectional dispersion of the betas. Nonetheless, as this model suffers from some deficiencies, since it supposes that market betas are static, we build a time-varying distribution of the cross-sectional dispersion of betas. Likewise, as its betas do not take outliers into account, we apply a robust estimation. The estimation method applied is the state-space model, using the Kalman filter. This methodology is relatively novel in the financial literature and, as far as we know, it has not been applied to pension funds, so we obtain new empirical evidence; it also allows us to detect, for the first time, whether pension fund managers produce herding behavior in the markets.
66.3 Literature Review
As we mentioned, we focus on the study of herding considering the market as a whole. In this trend, we find various types of models that detect herding: models of returns’ dispersion and state-space models. In the first ones we can distinguish between linear and nonlinear models. In the linear models, the most common measurement is the cross-sectional standard deviation of returns (henceforth CSSD), while in the nonlinear ones is the crosssectional absolute deviation of returns (henceforth CSAD). Studies based on these models show mixed evidence of this behavior. Christie and Huang (1995) find evidence in American stocks, but not during the market crises. Chang et al. (2000) find this in Taiwan, South Korea, and Japan, but not in Hong Kong and the United States. Lin and Swanson (2003) do not find this in international securities. Gleason et al. (2004) show this for ETFs funds. Bowe and Domuta (2004) also find positive results in the Jakarta stock market. Weiner (2006) finds scarce evidence for herding in the oil market. Demirer and Kutan (2006), together with Tan et al. (2008), study the Chinese market, but they do not find evidence of herding. In the Polish market, Goodfellow et al. (2009) find evidence of individual herding in bear markets, while they do not find evidence for institutional herding. Bohl et al. (2011) observe that restrictions on short-term positions lead to adverse herding in the United States, the United Kingdom, Germany, France, Australia, and South Korea. Economou et al. (2011) detect the presence of herding in Greek, Italian, and Portuguese markets. With respect to the Spanish market, there are different studies that detect herding: Blasco and Ferreruela (2007, 2008) Lillo et al. (2008), and Blasco et al. (2011). On the other hand, the state-space model (used in this work) also provides mixed evidence: Gleason et al. (2003) use it on the European futures markets, but the results display absence of herding. Hwang and Salmon (2004) find it in the Korean and American markets. In addition, they clearly observe it when the markets are
stable and the investors are sure of the futures markets’ direction. The authors conclude that financial crises stimulate a return towards efficiency. Wang (2008) applies this to various markets (developed and emerging) and concludes that herding in emerging markets is greater than in developed ones. Demirer et al. (2010) apply some models to the Taiwanese market, obtaining different results depending on the method. These authors find a lack of herding with the linear method (CSSD), but with the nonlinear model (CSAD) and the statespace model, they find strong evidence. Shapour et al. (2010) also discover such evidence in the Tehran stock market, but no evidence of herding when they study towards size and book-to-market factors. With regard to studies of herding in collective investment instruments (mutual and pension funds), we observe different analyses: Oehler and Goeth-Chi (2000) examine German mutual funds that invest on bond market. The results show herding, but to a lesser degree than in the stock market. Kim and Wei (2002) also get positive evidence in domestic and international Korean mutual funds. Voronkova and Bohl (2005) do not detect an influence of Polish pension funds on the stock market. Wylie (2005) examines this behavior in the portfolio holdings of UK equity mutual fund managers, revealing some modest results. Walter and Weber (2006) analyze whether the German mutual fund managers demonstrate this behavior, and their results confirm it. Lobao et al. (2007) obtain evidence in Portuguese mutual funds, detecting a stronger tendency to herd among medium-cap funds, and a decrease when the stock market corrects itself or is more volatile. Ferruz et al. (2008a, b) notice evidence of this phenomenon in value, growth, and cash stocks in Spanish equity mutual funds. Hsieh et al. (2010) show that mutual funds on 13 emerging Asian countries influence on the existence of herding, and this phenomenon is more pronounced during and after crises; for this reason, they suggest that mutual funds’ behavior may have contributed to the crises. Fong et al. (2011) document the existence of herding due to information cascades, in a sample of US equity mutual funds. Jame (2011) examines the magnitude and effects of herding on pension fund price in the United States. Jame uses the measurement proposed by Lakonishok et al. (1992b) and confirms that pension funds are involved on the development of herding. Given the small number of studies that analyze this topic in pension funds, this paper contributes to the financial literature by studying the influence of Spanish and British pension fund managers in their respective markets.
66.4 Methodology

66.4.1 CAPM and Herding

Herding leads to mispricing, so rational decisions may be disturbed through the use of biased beliefs and views of expected returns and risks. More specifically, in the CAPM model, herding produces biased betas, and they deviate from equilibrium.
In order to observe empirically how this phenomenon affects the betas, we take as a starting point the CAPM model in equilibrium:

E_t(r_it) = β_imt E_t(r_mt)
(66.1)
where r_it and r_mt are the excess return of fund i and the excess market return over the risk-free asset during period t, respectively, β_imt is the systematic risk measure, and E_t(·) is the conditional expectation at time t used to price fund i. The CAPM model assumes that β_imt does not change over time, despite considerable empirical evidence demonstrating that betas are not constant, among others, Harvey (1989), Ferson and Harvey (1991, 1993), and Ferson and Korajczyk (1995). Nonetheless, following Hwang and Salmon (2004), the evidence shows that the betas do not change over time in equilibrium, which means that the variation of betas can be interpreted as a behavioral anomaly, such as herding, rather than as a fundamental change in the beta or in the equilibrium relationship between E_t(r_mt) and E_t(r_it). In this way, the individual cross-sectional dispersion of betas is lower than in equilibrium, given that if all returns were expected to be equal to the market return, all betas would be equal to one and the cross-sectional variance would be zero. In addition, if we assume that E_t(r_mt) represents the market as a whole and the investor first forms a view of the market as a whole and then considers the value of the individual asset, then the investor's behavior is conditional on E_t(r_mt) and the observed beta (β_imt) will be biased, at least in the short term, given E_t(r_mt). In this way, the biased betas appear because the beliefs of the investors change: they follow the market more than they should in equilibrium, and they ignore the equilibrium relation, trying to match the individual asset returns with the market return. In this case the so-called herding towards the market takes place. The opposite behavior is also possible, producing adverse herding. This appears when high betas (larger than one) become higher and low betas (smaller than one) become lower. On this occasion, the individual return becomes more sensitive for large-beta stocks and less sensitive for low-beta stocks. This leads to a reversion to the long-term equilibrium of β_imt. In fact, adverse herding should exist if herding exists, since there must be some systematic adjustment back to the CAPM equilibrium.
66.4.2 Measuring Herding

When there is herding in the market portfolio, the CAPM equilibrium does not hold, and both the beta and the expected return are biased. Therefore, instead of the equilibrium relation in (66.1), Hwang and Salmon (2004) assume that the following relation holds in the presence of herding towards the market:

\frac{E^b_t(r_{it})}{E_t(r_{mt})} = \beta^b_{imt} = \beta_{imt} - h_{mt}(\beta_{imt} - 1)    (66.2)
where E^b_t(r_{it}) is the market's biased short-run conditional expectation of the excess returns of fund i, \beta^b_{imt} is the market beta at time t in the presence of herding, and h_{mt} is a latent herding parameter that changes over time, is not larger than one (h_{mt} \le 1), and is conditional on market fundamentals. When h_{mt} = 0, then \beta^b_{imt} = \beta_{imt} and herding does not exist, producing the CAPM equilibrium. However, when h_{mt} = 1, then \beta^b_{imt} = 1, and there is perfect herding towards the market portfolio in the sense that all individual funds move in the same direction and magnitude as the market portfolio. In general, when the herding parameter h_{mt} is between zero and one (0 < h_{mt} < 1), a certain degree of herding exists in the market, determined by the magnitude of the herding coefficient. Considering the situation described in the previous section, the relationship between the real and biased expected excess fund returns and the fund's beta can be explained as follows. For a fund with \beta_{imt} > 1, we have E_t(r_{it}) > E_t(r_{mt}), and the fund presents herding towards the market, so that E^b_t(r_{it}) moves towards E_t(r_{mt}) and E_t(r_{it}) > E^b_t(r_{it}) > E_t(r_{mt}). As a result, the fund seems less risky than it should be, implying \beta^b_{imt} < \beta_{imt}. On the other hand, for a fund with \beta_{imt} < 1, we have E_t(r_{it}) < E_t(r_{mt}), and the fund presents herding towards the market when E^b_t(r_{it}) moves towards E_t(r_{mt}), which is why E_t(r_{it}) < E^b_t(r_{it}) < E_t(r_{mt}). The fund seems riskier than it should be, implying \beta^b_{imt} > \beta_{imt}. Finally, a fund with a beta equal to one, \beta_{imt} = 1, is neutral to herding. As we have already mentioned, the existence of herding implies adverse herding, allowing h_{mt} < 0; therefore, for a fund with \beta_{imt} > 1 we have E^b_t(r_{it}) > E_t(r_{it}) > E_t(r_{mt}), while a fund with \beta_{imt} < 1 produces E^b_t(r_{it}) < E_t(r_{it}) < E_t(r_{mt}).
66.4.3 Models for Measuring Herding

Herding towards the market portfolio can be captured with the parameter h_{mt} of expression (66.2); however, neither the beta nor the herding parameter is observed. For this reason, we use state-space models in order to extract these parameters. As we aim to measure herding in terms of the market as a whole, we assume that Eq. (66.2) holds for all of the market's assets, so we can calculate herding using all of the assets and not only one, eliminating the effects of idiosyncratic movements in individual betas. Since the cross-sectional mean of the betas (\beta^b_{imt} or \beta_{imt}) is always one, Hwang and Salmon (2004) show that

Std_c(\beta^b_{imt}) = \sqrt{E_c\big[(\beta_{imt} - h_{mt}(\beta_{imt}-1) - 1)^2\big]} = \sqrt{E_c\big[(\beta_{imt}-1)^2\big]}\,(1 - h_{mt}) = Std_c(\beta_{imt})(1 - h_{mt})    (66.3)
where E_c(\cdot) represents the cross-sectional expectation, Std_c(\beta_{imt}) is the cross-sectional standard deviation of the betas in equilibrium, and \sqrt{E_c[(\beta_{imt} - h_{mt}(\beta_{imt}-1) - 1)^2]} is a direct function of the herding parameter. In order to minimize the impact of idiosyncratic changes in \beta_{imt}, a large number of assets is used in the calculation of Std_c(\beta_{imt}); Std_c(\beta_{imt}) may be stochastic, reflecting movements in the equilibrium betas. Nonetheless, as the market-wide Std_c(\beta_{imt}) is not expected to change significantly in the short term, unless the structure of companies changes suddenly, it is assumed that Std_c(\beta_{imt}) does not exhibit any systematic movement and that short-term changes in Std_c(\beta^b_{imt}) are due to changes in h_{mt}, that is to say, to the presence of herding.
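To make relation (66.3) concrete, the following minimal simulation sketch (not part of the original study; all values are hypothetical) generates a cross section of equilibrium betas with mean one and shows that applying the herding transformation of Eq. (66.2) shrinks their cross-sectional standard deviation by the factor (1 - h):

```python
import numpy as np

rng = np.random.default_rng(0)

# Equilibrium betas for N hypothetical funds (cross-sectional mean is one)
N = 500
beta = 1.0 + 0.4 * rng.standard_normal(N)
beta = beta - beta.mean() + 1.0          # recentre so the cross-sectional mean is exactly one

for h in (0.0, 0.3, 0.7):                # herding parameter h_mt
    beta_b = beta - h * (beta - 1.0)     # biased betas under herding, Eq. (66.2)
    ratio = beta_b.std() / beta.std()
    print(f"h = {h:.1f}: Std_c(beta^b)/Std_c(beta) = {ratio:.2f}  (expected {1 - h:.2f})")
```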
66.4.4 The State-Space Models

In the previous section, we remarked that the herding parameter is not observed, so we apply state-space models. These models can be estimated by using the Kalman filter, which is an algorithm to perform filtering on the state-space model. In order to extract herding from relation (66.3), we follow the procedure used by Hwang and Salmon (2004). First, taking logarithms of (66.3),

\log[Std_c(\beta^b_{imt})] = \log[Std_c(\beta_{imt})] + \log(1 - h_{mt})    (66.4)

and considering the assumptions made for Std_c(\beta_{imt}), this component can be written as

\log[Std_c(\beta_{imt})] = \mu_m + \upsilon_{mt}    (66.5)

where \mu_m = E[\log[Std_c(\beta_{imt})]] and \upsilon_{mt} \sim iid(0, \sigma^2_{m\upsilon}). Then

\log[Std_c(\beta^b_{imt})] = \mu_m + H_{mt} + \upsilon_{mt}    (66.6)

where H_{mt} = \log(1 - h_{mt}). We suppose that herding (H_{mt}) evolves over time and follows a dynamic process; for example, assuming a zero-mean AR(1) process, we obtain model (66.7):

\log[Std_c(\beta^b_{imt})] = \mu_m + H_{mt} + \upsilon_{mt}
H_{mt} = \phi_m H_{m,t-1} + \eta_{mt}    (66.7)

where \eta_{mt} \sim iid(0, \sigma^2_{m\eta}).
As a result, we have a standard state-space model, similar to those used in stochastic volatility modeling, which is estimated using the Kalman filter. Furthermore, we focus on the movements of the latent variable H_{mt}. It should be observed that when \sigma^2_{m\eta} = 0, model (66.7) becomes

\log[Std_c(\beta^b_{imt})] = \mu_m + \upsilon_{mt}    (66.8)
This means that herding does not exist, so H_{mt} = 0 for all t. A significant value of \sigma^2_{m\eta} can therefore be interpreted as evidence of the existence of herding, and a significant value of \phi_m supports the autoregressive structure considered. A restriction is that the herding process H_{mt} should be stationary; as we do not expect herding towards the market portfolio to be an explosive process, we require |\phi_m| \le 1.
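As a rough illustration only (this is not the authors' original estimation code), model (66.7) can be cast as an AR(1) signal observed with noise and estimated by maximum likelihood via the Kalman filter. One possible implementation uses statsmodels' SARIMAX with a measurement-error term; the series log_std below stands in for the monthly values of log[Std_c(β^b_imt)] and is simulated here purely as a placeholder.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Placeholder series for log[Std_c(beta^b_imt)]; in practice it comes from Eq. (66.16)
rng = np.random.default_rng(1)
log_std = pd.Series(np.log(rng.uniform(0.5, 1.1, size=141)))

# Model (66.7): y_t = mu_m + H_t + u_t with H_t = phi_m * H_{t-1} + eta_t.
# Demeaning removes mu_m; SARIMAX with measurement_error=True then treats the demeaned
# series as an AR(1) signal (H_t) observed with additive white measurement noise (u_t).
mod = sm.tsa.SARIMAX(log_std - log_std.mean(), order=(1, 0, 0),
                     trend='n', measurement_error=True)
res = mod.fit(disp=False)
print(res.params)   # AR coefficient ~ phi_m, plus the state and measurement error variances

# Smoothed estimate of the latent herding process H_t (first state element corresponds to
# the AR(1) signal in this specification), and hence h_t = 1 - exp(H_t)
H_t = res.smoothed_state[0]
h_t = 1.0 - np.exp(H_t)
```

Models (66.9)-(66.11) below add exogenous regressors to the observation equation; in a sketch of this kind they could be supplied through the exog argument of SARIMAX.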
66.4.5 Herding with Market and Macroeconomic Variables

We explained above that Std_c(\beta^b_{imt}) changes over time depending on the level of herding in the market. However, it is interesting to study whether this behavior, extracted from Std_c(\beta^b_{imt}), is robust to the presence of variables that reflect the state of the market: the degree of volatility, the market return, or variables that reflect macroeconomic fundamentals. Therefore, if the herding parameter becomes insignificant when these variables are included, then the changes in Std_c(\beta^b_{imt}) could be explained by changes in fundamentals rather than by herding. In order to consider the influence of market volatility and the market return, Hwang and Salmon (2004) include them as independent variables in model (66.7), obtaining model (66.9):

\log[Std_c(\beta^b_{imt})] = \mu_m + H_{mt} + c_{m1}\log\sigma_{mt} + c_{m2} r_{mt} + \upsilon_{mt}
H_{mt} = \phi_m H_{m,t-1} + \eta_{mt}    (66.9)

where \log\sigma_{mt} and r_{mt} are the market log volatility and the market return at time t, respectively. In a second step, Hwang and Salmon (2004) include the Fama and French (1993) factors in model (66.9). Nevertheless, we also include the factors of Carhart's (1997) four-factor model, considering the size (SMB), book-to-market (HML), and momentum (PR1YR) factors, as model (66.10) exhibits:

\log[Std_c(\beta^b_{imt})] = \mu_m + H_{mt} + c_{m1}\log\sigma_{mt} + c_{m2} r_{mt} + c_{m3} SMB_t + c_{m4} HML_t + c_{m5} PR1YR_t + \upsilon_{mt}
H_{mt} = \phi_m H_{m,t-1} + \eta_{mt}    (66.10)
Lastly, we add three macroeconomic variables to model (66.9), namely the dividend yield (DY), the time spread (TS), and the short-term interest rate (STIR), in order to consider information variables representative of the economic cycle:

\log[Std_c(\beta^b_{imt})] = \mu_m + H_{mt} + c_{m1}\log\sigma_{mt} + c_{m2} r_{mt} + c_{m6} DY_t + c_{m7} TS_t + c_{m8} STIR_t + \upsilon_{mt}
H_{mt} = \phi_m H_{m,t-1} + \eta_{mt}    (66.11)
66.4.6 Generalized Herding Measurement in Linear Factor Models

The herding measure described above can be generalized to any factor by employing linear factor models. Suppose that the excess return of fund i, r_{it}, follows the linear model

r_{it} = \alpha^b_{it} + \sum_{k=1}^{K} \beta^b_{ikt} f_{kt} + \varepsilon_{it}, \quad i = 1, \ldots, N \text{ and } t = 1, \ldots, T    (66.12)

where \alpha^b_{it} is a time-varying intercept, \beta^b_{ikt} is the coefficient on factor k at time t, f_{kt} is the realized value of factor k at time t, and \varepsilon_{it} has mean zero and variance \sigma^2_\varepsilon. The factors in model (66.12) may be risk factors or factors designed to detect an anomaly. One factor included is the excess market return, as in the conventional linear factor models.¹ The superscript b on the betas indicates that they are biased betas under herding; therefore, herding towards factor k at time t can be captured by

\beta^b_{ikt} = \beta_{ikt} - h_{kt}(\beta_{ikt} - E_c[\beta_{ikt}])    (66.13)

where E_c[\beta_{ikt}] is the cross-sectional expected beta for factor k at time t.

¹ It should be clear that the linear factor model used does not require that the market is in equilibrium or efficient.
66.4.7 Robust Estimate of the Betas

The first step in estimating the different models considered is to estimate the market betas from the CAPM model (66.14) and from the four-factor Carhart model (66.15):

r_{it} = \alpha^b_{it} + \beta^b_{imt} r_{mt} + \varepsilon_{it}    (66.14)
r_{it} = \alpha^b_{it} + \beta^b_{imt} r_{mt} + \beta^b_{iSt} SMB_t + \beta^b_{iHt} HML_t + \beta^b_{iPt} PR1YR_t + \varepsilon_{it}    (66.15)
With these betas we obtain the estimated cross-sectional standard deviation of the betas, which is then used in the state-space models. Although ordinary least squares (OLS) is the most common technique for estimating the beta, it has some drawbacks. Firstly, OLS estimates behave badly when the errors are not drawn from a normal i.i.d. distribution, particularly when the data are heavy tailed, which is very frequent in return data. Furthermore, the existence of outliers may also influence the OLS beta, thus leading to a distorted perspective on the relationship between asset returns and index returns. In order to overcome these disadvantages and provide a better fit, Martín and Simin (2002) indicate that a robust estimation of beta should be implemented. One robust regression method is M-estimation through the Huber estimator. With the different betas estimated, we obtain the cross-sectional standard deviation of the betas on the market portfolio as

Std_c(\hat{\beta}^b_{imt}) = \sqrt{\frac{\sum_{i=1}^{N_t} \left(\hat{\beta}^b_{imt} - \bar{\hat{\beta}}^b_{mt}\right)^2}{N_t}}    (66.16)

where \bar{\hat{\beta}}^b_{mt} = \frac{1}{N_t}\sum_{i=1}^{N_t} \hat{\beta}^b_{imt} and N_t is the number of funds in month t.
Finally, we estimate the four state-space models considered: (66.7), (66.9), (66.10), and (66.11), with the standard deviation of the betas.
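As a minimal sketch of this step (not the authors' code), and assuming hypothetical inputs fund_returns (a pandas DataFrame of fund excess returns) and mkt (a Series of market excess returns), the robust beta of Eq. (66.14) and the cross-sectional dispersion of Eq. (66.16) could be computed with statsmodels' M-estimator and the Huber norm:

```python
import numpy as np
import statsmodels.api as sm

def huber_beta(y, x):
    """Robust CAPM-style beta of one fund via Huber M-estimation (Eq. 66.14)."""
    X = sm.add_constant(np.asarray(x, dtype=float))
    res = sm.RLM(np.asarray(y, dtype=float), X, M=sm.robust.norms.HuberT()).fit()
    return np.asarray(res.params)[1]          # slope on the market excess return

def cross_sectional_std(betas):
    """Cross-sectional standard deviation of the betas in a given month (Eq. 66.16)."""
    b = np.asarray(betas, dtype=float)
    return np.sqrt(np.mean((b - b.mean()) ** 2))

# Hypothetical usage: betas are estimated fund by fund over an estimation window ending in
# month t (the windowing choice is not specified in this excerpt), then collapsed into one
# Std_c value per month before running the state-space models (66.7)-(66.11).
# betas_t = [huber_beta(fund_returns[f].loc[window], mkt.loc[window])
#            for f in fund_returns.columns]
# std_c_t = cross_sectional_std(betas_t)
```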
66.5 Data and Empirical Results
66.5.1 Data

The database was provided by Thomson Reuters. The data comprise the monthly returns obtained by all private pension funds with a European equity investment vocation registered for sale in Spain (84 pension funds) and in the United Kingdom (690 pension funds). The time period analyzed runs from January 1999 to September 2010. We require that the pension funds present data for at least 24 months to ensure the consistency of the analyses; in this way, our database is free of the so-called survivorship bias. The market benchmark used is the MSCI Europe index, given the European equity vocation of the pension funds and the need to use a European benchmark portfolio to assess performance on an appropriate basis. The risk-free asset is proxied by the 1-month Euribor rate.
The macroeconomic variables used are as follows:
• Dividend yield² is the ratio between the dividends paid out by the MSCI Europe in the previous 12 months and the current index price.
• Time spread is the annualized difference between the return on the EMU 10-year bond³ and the 3-month Euribor rate.
• Short-term interest rate is the 3-month Euribor rate.
In the investment style analysis, we consider the four factors of the Carhart (1997) model: excess market return, size (SMB), book-to-market (HML), and momentum (PR1YR). We follow the instructions of Fama and French (1993) to build the size and book-to-market factors. For the size factor (SMB_t), we build the mimicking portfolio as the difference between the portfolio made up of the MSCI Europe small value price, MSCI Europe small core price, and MSCI Europe small growth price indices and the portfolio made up of the MSCI Europe large value price, MSCI Europe large core price, and MSCI Europe large growth price indices. For the book-to-market factor (HML_t), using the monthly returns obtained by the indices, the mimicking portfolio is the difference between the portfolio made up of the MSCI Europe small value price and MSCI Europe large value price indices and the portfolio made up of the MSCI Europe small growth price and MSCI Europe large growth price indices. Finally, the 1-year momentum factor (PR1YR_t) is constructed following Carhart's instructions, in our case with the monthly returns obtained by a group of market indices representative of the geographic universe studied. As we analyze European equity funds, we use the 16 MSCI indices of the countries included in the MSCI Europe index: Austria, Belgium, Denmark, Finland, France, Germany, Greece, Ireland, Italy, the Netherlands, Norway, Portugal, Spain, Sweden, Switzerland, and the United Kingdom. All of the indices have been obtained from the official MSCI website. Based on these 16 market indices, we build the equal-weight average of the indices with the highest 30 % (five) 11-month returns, lagged 1 month, minus the equal-weight average of the indices with the lowest 30 % (five) 11-month returns, lagged 1 month (a construction sketch is given after Table 66.1). The descriptive statistics of the different risk factors are displayed in Table 66.1. Table 66.1 reveals that the excess market return is the factor with the lowest mean return, and it also presents the most negative minimum value. Nonetheless, the maximum value is also found for this factor; hence its standard deviation is somewhat greater than that of the rest.
² For its calculation, we take the difference between the monthly return obtained by the corresponding MSCI gross index and that of the MSCI price index; we then sum the 12 previous values for a given month. Information obtained from MSCI: http://www.msci.com/
³ Data obtained from the Bank of Spain: www.bde.es
Table 66.1 Properties of the risk factors

Variable               Average   Standard deviation   Minimum    Maximum   Kurtosis   Skewness
Market excess return   0.0005    0.0488               -0.1350    0.1323    3.5650     -0.3032
SMB                    0.0051    0.0269               -0.0813    0.0858    3.8340     -0.3169
HML                    0.0017    0.0254               -0.0896    0.0972    6.2798     -0.3345
PR1YR                  0.0042    0.0392               -0.1146    0.1106    3.4888     -0.1526

This table includes the main statistics (average, standard deviation, minimum, maximum, kurtosis, and skewness) for the four risk factors calculated: market excess return, size (SMB), book-to-market (HML), and momentum (PR1YR).
With respect to kurtosis, it is high for all factors (greater than three), indicating leptokurtosis; therefore, the factors are not Gaussian. Furthermore, the negative skewness of all the factors is also noteworthy.
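As an illustration of the momentum-factor construction described above (a sketch only; idx_returns is assumed to be a pandas DataFrame of monthly returns for the 16 MSCI country indices, and the names are hypothetical):

```python
import pandas as pd

def pr1yr_factor(idx_returns: pd.DataFrame) -> pd.Series:
    """Equal-weight top-30% minus bottom-30% index portfolios, ranked on the
    11-month cumulative return lagged one month (five indices on each side of 16)."""
    # 11-month cumulative return ending one month before t (the ranking variable)
    rank_ret = (1.0 + idx_returns).rolling(11).apply(lambda x: x.prod() - 1.0).shift(1)
    factor = pd.Series(index=idx_returns.index, dtype=float)
    for t in idx_returns.index:
        r = rank_ret.loc[t].dropna()
        if len(r) < idx_returns.shape[1]:      # skip months without a full ranking
            continue
        winners = r.nlargest(5).index
        losers = r.nsmallest(5).index
        factor[t] = idx_returns.loc[t, winners].mean() - idx_returns.loc[t, losers].mean()
    return factor
```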
66.5.2 Empirical Results

Firstly, we estimate the betas of models (66.14) and (66.15), and next we calculate the cross-sectional deviations, as described in Sect. 66.4.7. The main characteristics of the standard deviations of the estimated betas are reported in Table 66.2. This table differentiates between the market betas from the CAPM model and the market betas from the Carhart four-factor model. All of them are estimated with the Huber robust technique. The first two columns of Table 66.2 show the cross-sectional standard deviations of the market betas, Std_c(\hat{\beta}^b_{imt}). These are significantly different from zero in all cases. However, the betas from the CAPM model present negative skewness, while the skewness is positive in the market betas from the four-factor model, which is common in series with volatility. We also observe a high kurtosis, revealing non-normality. This is confirmed by the Jarque-Bera test, as we reject the null hypothesis of normality; therefore, the standard deviation of the betas is not Gaussian. The correlation between these two market beta series is high; thus, if there is herding, the two models should reveal similar herding. Finally, the cross-sectional deviations of the betas of the SMB, HML, and PR1YR factors exhibit similar properties: they are non-normal and negatively skewed.
66.5.2.1 Results of Herding Towards the Market Factor

In this section we study herding towards the market factor; that is to say, we start from the market betas of the CAPM and the four-factor models, then calculate the standard deviation of the betas, Std_c(\beta^b_{imt}), and with these we estimate models (66.7), (66.9), (66.10), and (66.11). The results of these models are displayed in Tables 66.3 and 66.4 for the Spanish and British pension funds, respectively.
Table 66.2 Properties of the cross-sectional standard deviation of estimated betas

                              CAPM model          Four-factor Carhart (1997) model
                              Market return       Market return      Beta of SMB     Beta of HML     Beta of
                              beta (A)            beta (B)           factor          factor          PR1YR
Panel A: Pension funds with European equity investment vocation in Spain
Average                       0.7592              0.0163             0.1361          0.0423          0.8030
Standard deviation            0.1663              0.1408             0.2520          0.0887          0.1775
Minimum                       0.0073              0.2947             0.5478          0.1919          0.0037
Maximum                       1.0822              0.3548             0.8198          0.3626          1.1754
Kurtosis                      10.5915             2.5177             3.0209          6.9308          10.2584
Skewness                      2.0215              0.1710             0.1230          1.1898          1.8888
Jarque-Bera test (p-value)    (0.0000)***         (0.0000)***        (0.0000)***     (0.0000)***     (0.0000)***
Correlation A–B               0.9507
Panel B: Pension funds with European equity investment vocation in the United Kingdom
Average                       0.8116              0.2207             0.1275          0.0193          0.8725
Standard deviation            0.1418              0.2129             0.4075          0.1367          0.1346
Minimum                       0.2079              0.3876             1.8963          0.8442          0.2027
Maximum                       1.1433              1.1677             1.4798          0.8482          1.2735
Kurtosis                      6.8994              6.9736             4.5372          9.3433          9.2515
Skewness                      1.3375              1.5837             0.6795          0.1274          1.6848
Jarque-Bera test (p-value)    (0.0000)***         (0.0000)***        (0.0000)***     (0.0000)***     (0.0000)***
Correlation A–B               0.8935

This table is divided into two panels (A and B), corresponding to the Spanish and British pension funds. Each panel includes the main statistics (average, standard deviation, minimum, maximum, kurtosis, skewness, and the Jarque-Bera normality test) of the cross-sectional standard deviation of the estimated market betas in the CAPM model and of the four betas of the Carhart (1997) model: market, size (SMB), book-to-market (HML), and momentum (PR1YR) betas. *** represents significance at the 1 % level.
We interpret the significance of the market variables included in models (66.9), (66.10), and (66.11) as an adjustment in the mean level (\mu_m) of \log[Std_c(\beta^b_{imt})] relative to the equation without herding, so we examine the degree of herding given the state of the market. The Spanish results (Table 66.3) show that model (66.7), both for the market betas from the CAPM model and for those from the Carhart model, presents evidence of quite persistent herding, as the coefficient \hat{\phi}_m is large and significant. Likewise, the standard deviation of \eta_{mt} (\sigma_{m\eta}) is significant; therefore, given the level of market volatility and return, Spanish pension funds herd towards the market portfolio. The coefficients of model (66.9) show that herding is still significant when the two market variables, volatility and return, are included, which suggests that the changes in the volatility of the sensitivity factor, Std_c(\beta^b_{imt}), can be explained by herding rather than by changes in fundamentals.
Table 66.3 Estimates of state-space models for herding towards the market factor in Spain

                 CAPM market return beta   Four-factor Carhart (1997) market return beta
                 Model (66.7)              Model (66.7)        Model (66.9)        Model (66.10)       Model (66.11)
μ_m              0.9100 (0.049)**          0.9780 (0.06)*      0.9723 (0.102)      0.9833 (0.201)      0.9162 (0.100)
φ_m              0.8890 (0.000)***         0.9201 (0.000)***   0.6250 (0.000)***   0.9755 (0.000)***   0.4488 (0.000)***
σ_mυ             0.1931 (0.0000)***        0.1926 (0.135)      0.1550 (0.050)**    0.1331 (0.046)***   0.1510 (0.040)**
σ_mη             0.2837 (0.000)***         0.2487 (0.002)***   0.2009 (0.000)***   0.2770 (0.001)***   0.2609 (0.099)*
log σ_mt                                                       0.0120 (0.000)***   0.0086 (0.000)***   0.0124 (0.000)***
r_mt                                                           0.0004 (0.000)***   0.0034 (0.000)***   0.0108 (0.000)***
SMB                                                                                0.0260 (0.211)
HML                                                                                0.0025 (0.319)
PR1YR                                                                              0.0024 (0.196)
DY                                                                                                     0.0546 (0.778)
TS                                                                                                     0.0518 (0.248)
STIR                                                                                                   0.1869 (0.541)

Model (66.7) includes no exogenous variables; model (66.9) adds the excess market return and volatility; model (66.10) adds the excess market return, volatility, SMB, HML, and PR1YR; model (66.11) adds the excess market return, volatility, DY, TS, and STIR. This table displays the estimates of the state-space models for herding towards the market factor in the Spanish pension funds for the period from January 1999 to September 2010. The first column exhibits the model (66.7) results, calculated with the market return beta from the CAPM model. The remaining columns show the results for models (66.7), (66.9), (66.10), and (66.11) estimated with the market return beta from the four-factor Carhart (1997) model. SMB represents the size factor, HML is the book-to-market factor, PR1YR is the momentum factor, DY is the dividend yield, TS is the time spread, and STIR is the short-term interest rate. *, **, and *** represent significance at the 10 %, 5 %, and 1 % level, respectively.
Table 66.4 Estimates of state-space models for herding towards the market factor in the United Kingdom

                 CAPM market return beta   Four-factor Carhart (1997) market return beta
                 Model (66.7)              Model (66.7)        Model (66.9)        Model (66.10)       Model (66.11)
μ_m              0.9407 (0.073)*           0.7093 (0.181)      0.9794 (0.119)      0.6569 (0.201)      0.2851 (0.099)*
φ_m              0.9048 (0.000)***         0.5399 (0.000)***   0.8295 (0.000)***   0.9011 (0.000)***   0.9736 (0.000)***
σ_mυ             0.2084 (0.201)            0.2080 (0.444)      0.1095 (0.100)      0.1043 (0.000)***   0.1418 (0.000)***
σ_mη             0.1540 (0.075)*           0.0845 (0.108)      0.1110 (0.000)***   0.1031 (0.000)***   0.2050 (0.000)***
log σ_mt                                                       0.0061 (0.005)***   0.0007 (0.009)***   0.0005 (0.091)*
r_mt                                                           0.0955 (0.090)*     0.0004 (0.000)***   0.0136 (0.000)***
SMB                                                                                0.0183 (0.170)
HML                                                                                0.0008 (0.206)
PR1YR                                                                              0.0011 (0.217)
DY                                                                                                     0.0345 (0.561)
TS                                                                                                     0.0188 (0.100)
STIR                                                                                                   0.0101 (0.190)

Model (66.7) includes no exogenous variables; model (66.9) adds the excess market return and volatility; model (66.10) adds the excess market return, volatility, SMB, HML, and PR1YR; model (66.11) adds the excess market return, volatility, DY, TS, and STIR. This table displays the estimates of the state-space models for herding towards the market factor in the UK pension funds for the period from January 1999 to September 2010. The first column exhibits the model (66.7) results, calculated with the market return beta from the CAPM model. The remaining columns show the results for models (66.7), (66.9), (66.10), and (66.11) estimated with the market return beta from the four-factor Carhart (1997) model. SMB represents the size factor, HML is the book-to-market factor, PR1YR is the momentum factor, DY is the dividend yield, TS is the time spread, and STIR is the short-term interest rate. *, **, and *** represent significance at the 10 %, 5 %, and 1 % level, respectively.
Therefore, the betas' dispersion decreases when market volatility rises but increases with the level of the market return, as the logarithm of market volatility and the market return have significant negative and positive coefficients, respectively. As a result, when the market becomes riskier and is falling, Std_c(\beta^b_{imt}) decreases, while it increases when the market becomes less risky and rises. Hence, a reduction in the standard deviation due to the herding process suggests that herd behavior is significant and exists independently of the particular state of the market. Model (66.10) includes the SMB, HML, and PR1YR factors as explanatory variables, but none of them is significant. Moreover, the herding coefficient \hat{\phi}_m increases, so the results are very similar to those of the previous models. Model (66.11) includes three macroeconomic variables, but none of them is significantly different from zero. Nonetheless, herding is still persistent, because the coefficient \hat{\phi}_m is significant. On the other hand, the UK results are reported in Table 66.4, where the coefficients \hat{\phi}_m and \sigma_{m\eta} are significant and persistent. Likewise, the additional variables of models (66.10) and (66.11) are not significant. Nonetheless, the market variables (the logarithm of volatility and the market return) present signs contrary to the previous results, that is to say, positive and negative, respectively. Nevertheless, these results are consistent with previous studies, which find that herding arises most likely during market instability, in other words, in periods of high volatility. As a consequence, we obtain evidence of herding in the two countries analyzed. Nonetheless, we do not observe significant differences between the herding models, as the investment-style factors and the macroeconomic variables are not significant. Therefore, herding is not influenced by the size of the company, the relation between book equity and market equity, or the momentum strategy. However, herding varies with the model applied, so the inclusion of additional variables is useful even though it does not provide more information.
66.5.2.2 Results of Herding Towards Size, Book-to-Market, and Momentum Factors

Starting from the betas estimated for the different factors (size, book-to-market, and momentum) of the four-factor Carhart model (66.15), we also calculate the standard deviation of these betas: Std_c(\beta^b_{iSt}), Std_c(\beta^b_{iHt}), and Std_c(\beta^b_{iPR1YRt}). After that, we repeat the above analysis but, in this case, study herding towards the size, book-to-market, and momentum factors. The results of these analyses are very similar to the previous ones; consequently, we do not display them,⁴ but we show a summary in Table 66.5, indicating whether the variables of the different models are significant. Nonetheless, we discuss the different results next.

⁴ These tables are available upon request.
Table 66.5 Summary of the state-space models for herding towards style factors in Spain and the United Kingdom

                       Herding towards SMB factor                 Herding towards HML factor                 Herding towards PR1YR factor
                       Spain                  United Kingdom      Spain                  United Kingdom      Spain                  United Kingdom
Existence of herding   Yes, + and significant Yes, + and significant Yes, + and significant Yes, + and significant Yes, + and significant Yes, + and significant
Market volatility      Signif.                Signif.              Signif.                Signif.              Signif.                Signif.
Market return          Signif.                Signif.              Signif.                Signif.              Signif.                Signif.
SMB                    No signif.             No signif.           No signif.             No signif.           No signif.             No signif.
HML                    No signif.             No signif.           No signif.             No signif.           No signif.             No signif.
PR1YR                  No signif.             No signif.           No signif.             No signif.           No signif.             No signif.
DY                     No signif.             No signif.           No signif.             No signif.           No signif.             No signif.
TS                     No signif.             No signif.           No signif.             No signif.           No signif.             No signif.
STIR                   No signif.             No signif.           No signif.             No signif.           No signif.             No signif.

This table displays a summary of the state-space models for herding towards the style factors (size, book-to-market, and momentum) in Spain and the United Kingdom, indicating the existence of herding and whether the variables of the different models (66.7), (66.9), (66.10), and (66.11) are significant (signif.) or not significant (no signif.).
We find the same behavior for all factors: herding exists, and the herding coefficients (\hat{\phi}_S, \hat{\phi}_H, \hat{\phi}_{PR1YR}) are greater than 0.7 on average. Likewise, all standard deviations (\sigma_S, \sigma_H, \sigma_{PR1YR}) are highly significant and persistent. However, we notice a lower degree of herding towards the book-to-market factor in the Spanish pension funds, with a parameter \hat{\phi}_H of around 0.3. This behavior is also perceived for herding towards the momentum factor in Spain and the United Kingdom, where the coefficient \hat{\phi}_{PR1YR} is 0.6 on average. Although herding is less intense in these cases, the phenomenon is still significant towards the size (H_{St}), book-to-market (H_{Ht}), and momentum (H_{PR1YRt}) factors, and the results are similar to those for herding towards the market (H_{mt}). Furthermore, we also observe that the additional variables of the four-factor model (66.10) and the macroeconomic variables of model (66.11) are not significant. Lastly, the logarithm of market volatility and the market return, especially the latter, explain less of the cross-sectional standard deviation of the SMB, HML, and PR1YR betas, as the market return is less significant or not significant.
66.5.2.3 Relationship Between the Herding Towards Factors and Countries

Since in the previous section we find similar herding towards the different factors and between countries, we study the herding patterns in greater detail in Table 66.6, which displays the correlations between the different herding coefficients (the coefficients of herding towards the market, size, book-to-market, and momentum factors).
Table 66.6 Relation between the herding of different pension funds in each country

               Spain      Spain      Spain      Spain      UK         UK         UK         UK
               market     SMB        HML        PR1YR      market     SMB        HML        PR1YR
Spain market   1
Spain SMB      0.255***   1
Spain HML      0.156***   0.005***   1
Spain PR1YR    0.134***   0.111***   0.093***   1
UK market      0.053***   0.075***   0.022**    0.286***   1
UK SMB         0.012***   0.009      0.068***   0.051***   0.099***   1
UK HML         0.045***   0.057***   0.041***   0.082***   0.009***   0.118***   1
UK PR1YR       0.034***   0.050***   0.230***   0.096***   0.129***   0.078***   0.128***   1

This table reports the correlation coefficients of the herding measures towards the different factors: market return (denoted market), size (SMB), book-to-market (HML), and momentum (PR1YR), obtained from model (66.9) for the pension funds in Spain and the United Kingdom. *, **, and *** represent significance at the 10 %, 5 %, and 1 % level, respectively.
Although we do not display the results for herding towards the last three factors in the previous section, we calculate the correlations between their coefficients, using the herding measures obtained from model (66.9), as these are, in general, more significant. Table 66.6 exhibits the correlations between herding towards factors (h_{mt}, h_{St}, h_{Ht}, and h_{PR1YRt}) at the 5 % significance level. The results show that among the 36 pairwise correlation coefficients, 15 are significantly positive, 12 are significantly negative, and 4 are not significant. We also detect positive correlations between the common factors in the two countries, except for the momentum factor, so we notice international herding. Overall, we find more positive correlation between the UK factors and between the Spanish and UK factors than among the Spanish factors. In conclusion, we observe a relationship between the herding coefficients for the same factor in Spain and the United Kingdom.
66.6 Conclusions
The pension fund industry has acquired great significance in recent years, especially in Europe, due to doubts about the future viability of public pensions. For this reason, a growing number of works focus on their study, especially on the analysis of their performance and of manager behavior. However, it is also interesting to study the consequences of their investments on the market, as these can affect market efficiency. Specifically, we study the existence of herding, a phenomenon that arises when investors decide to imitate the observed decisions of others instead of following their own beliefs and information. This phenomenon can lead to a situation in which market prices do not reflect all of the information and the market moves towards inefficiency. We analyze whether the behavior of the pension fund managers studied gives rise to herding in the equity markets of Spain and the United Kingdom, applying an approach that is less used in the financial literature: the estimated cross-sectional standard deviations of market betas. The estimation technique is also less common, as we use state-space models estimated with the Kalman filter. Furthermore, in order to carry out a robust estimation of the betas from the CAPM model and from the four-factor model of Carhart (1997), we implement robust M-estimation with the Huber estimator. We apply different models in order to analyze the existence of herding. Firstly, we only detect the existence and persistence of the phenomenon. After that, we add two market variables (volatility and return), then three style factors (size, book-to-market, and momentum), and finally three macroeconomic variables (dividend yield, time spread, and short-term interest rate), in order to examine their influence on herding. In addition, we check the existence of herding towards the size, book-to-market, and momentum factors; for this purpose, we use the estimated betas of these factors in Carhart's model. The analysis of the momentum factor is innovative, as we do not find previous studies that analyze this aspect.
The results obtained are similar for the two countries studied, as well as for the different types of investment. They reveal the existence of herding towards the market and the style factors, showing significant and persistent movements both independently of and conditional on market conditions. Nonetheless, this effect is smaller in the case of herding towards the different style factors. Additionally, we do not find a relevant influence of the macroeconomic variables or of the style factors in any of the models, so these variables do not significantly influence herding. As a consequence, we note that herding arises regardless of company size, the company's situation (growing or not), and the strategies implemented. Hence, as we only study equity pension funds, we confirm that Spanish and British pension fund managers influence the degree of herding in the stock markets. Moreover, we find the existence of herding in all models and markets, so pension fund managers encourage the existence of herding towards the market, size, book-to-market, and momentum factors. As a consequence, managers follow the performance of the market and of the different styles more than they should in equilibrium, so they move towards matching the return on individual assets with that of the market and the styles. Additionally, we discover a relationship between the herding coefficients for the same factor in Spain and the United Kingdom. As a result, the behavior of Spanish and British pension fund managers influences the market and, consequently, the performance of pension funds.
Appendix 1: The State-Space Models

We remarked above that the herding parameter is not observed, so in order to extract it we apply state-space models. A state-space model is defined by two equations:

Y_t = c + S X_t + \epsilon_t    (66.17)
X_t = d + H X_{t-1} + \zeta_t    (66.18)

where X_t is the hidden (state) vector at time t, Y_t is the observation vector at time t, c and d are constant vectors, \epsilon_t is the observation error, and \zeta_t is the state error; \epsilon_t and \zeta_t are both multivariate normally distributed with mean zero and covariance matrices R and Q, respectively. These models can be estimated by using the Kalman filter, which is an algorithm to perform filtering on the state-space model. The estimation of the state equation by the Kalman filter algorithm also yields a smoothed time series, obtained by fixed-interval smoothing, i.e., computing Y_{t|t} = P[Y_t | Y_1, \ldots, Y_{t-1}] for t \le T.
The objective, in formula (66.17), is to minimize the difference between the observation Y_t and the prediction based on the previous observations, Y_{t|t} = P[Y_t | Y_1, \ldots, Y_{t-1}], by recursive maximum likelihood estimation. The Kalman filter can be considered an online estimation procedure, in which the parameters are updated as new observations arrive after the earlier ones have already been processed. On the other hand, the Kalman smoother is used only once the whole series has been observed. The Kalman filter results are close to the maximum likelihood estimates, while the smoother results coincide with the maximum likelihood estimates.
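For illustration only (this is not the authors' code), the filtering recursions for the scalar herding model, with observation y_t = μ + H_t + u_t and state H_t = φ H_{t-1} + η_t, can be written as follows; the parameter values passed to the function are placeholders that would in practice be chosen to maximize the returned log-likelihood.

```python
import numpy as np

def kalman_filter(y, mu, phi, var_u, var_eta):
    """Scalar Kalman filter for y_t = mu + H_t + u_t, H_t = phi*H_{t-1} + eta_t."""
    n = len(y)
    H_pred = 0.0
    P_pred = var_eta / max(1e-12, 1.0 - phi**2)   # stationary prior variance for H_0
    H_filt = np.empty(n)
    loglik = 0.0
    for t in range(n):
        # Prediction error and its variance
        v = y[t] - mu - H_pred
        F = P_pred + var_u
        loglik += -0.5 * (np.log(2 * np.pi * F) + v**2 / F)
        # Update (filtering) step
        K = P_pred / F
        H_upd = H_pred + K * v
        P_upd = (1.0 - K) * P_pred
        H_filt[t] = H_upd
        # One-step-ahead prediction for the next period
        H_pred = phi * H_upd
        P_pred = phi**2 * P_upd + var_eta
    return H_filt, loglik
```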
Appendix 2: Robust Estimate of the Betas

In order to calculate the different betas from the CAPM model and from the four-factor Carhart model, ordinary least squares (OLS) is the most common estimation technique; however, it has some drawbacks. Firstly, OLS estimates behave badly when the errors are not drawn from a normal i.i.d. (independent and identically distributed) distribution, particularly when the data are heavy tailed, which is very frequent in return data. Furthermore, the existence of outliers may also influence the OLS beta, thus leading to a distorted perspective on the relationship between asset returns and index returns. In order to overcome these disadvantages and provide a better fit, Martín and Simin (2002) indicate that a robust estimation of beta should be implemented. One of the most commonly applied methods of robust regression is M-estimation, a generalization of maximum likelihood estimation. In order to explain this estimation method, we consider a linear model as a starting point:

y_i = X_i \beta + e_i, \quad i = 1, \ldots, n    (66.19)

Thus, the fitted model is

\hat{y}_i = X_i \hat{\beta} + \hat{e}_i    (66.20)

The M-estimation principle is to minimize the objective function

\sum_{i=1}^{n} \rho(e_i) = \sum_{i=1}^{n} \rho(y_i - X_i \beta)    (66.21)

where the function \rho(\cdot) gives the contribution of each residual to the objective function.
Table 66.7 Objective functions and weight functions for the ordinary least squares estimation and the Huber estimation

Estimation method               Objective function ρ(e)                Weight function w(e)
Ordinary least squares (OLS)    e^2                                    1
Huber estimation                e^2/2           when |e| ≤ k           1        when |e| ≤ k
                                k|e| - k^2/2    when |e| > k           k/|e|    when |e| > k

This table compares the objective functions and weight functions for the ordinary least squares estimator and the Huber estimator.
If we define \psi = \rho' as the first-order derivative of \rho(\cdot), then by differentiating the objective function with respect to \beta and setting the partial derivatives to zero, we obtain a system of estimating equations:

\sum_{i=1}^{n} \psi(y_i - X_i \beta) X_i' = 0    (66.22)

If the weight function is w(e) = \psi(e)/e and w_i = w(e_i), the estimating equations become

\sum_{i=1}^{n} w_i e_i X_i' = 0    (66.23)
These equations can be solved as a weighted least squares problem, with the objective of minimizing

\sum_{i=1}^{n} w_i^2 e_i^2    (66.24)
The weights depend on the residuals, the residuals depend on the estimated coefficients, and the estimated coefficients depend on the weights, so an iterative procedure is needed in order to solve the problem. Within this iterative procedure, we apply the Huber estimation, given that it allows us to determine the weights, the residuals, and the estimated coefficients. In order to compare the OLS estimator with the robust Huber estimator, Table 66.7 reports the objective functions and weight functions for each of the two methods. In Table 66.7 we observe that both objective functions increase without bound as the residual departs from zero; nonetheless, the Huber objective function increases more slowly. In fact, least squares assigns equal weight to each observation, whereas the weights of the Huber estimator decline for |e| > k, where e is the residual term and k is called the tuning constant of the Huber estimation. A smaller k provides more resistance to outliers but offers lower efficiency when the errors are normally distributed. In the Huber estimation, k usually takes the value k = 1.345 s (where s is the conventional standard deviation), producing 95 % efficiency when the errors are normal while also offering protection against outliers; therefore, this estimation is better than the OLS estimation.
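The iteration described above is commonly implemented as iteratively reweighted least squares; the sketch below (illustrative only, not the authors' code) applies the Huber weight function of Table 66.7 with k equal to 1.345 times a robust scale estimate of the residuals.

```python
import numpy as np

def huber_irls(y, X, k_factor=1.345, tol=1e-8, max_iter=100):
    """Huber M-estimation of y = X beta + e by iteratively reweighted least squares."""
    y = np.asarray(y, dtype=float)
    X = np.column_stack([np.ones(len(y)), np.asarray(X, dtype=float)])  # add an intercept
    beta = np.linalg.lstsq(X, y, rcond=None)[0]                          # OLS starting values
    for _ in range(max_iter):
        e = y - X @ beta
        s = 1.4826 * np.median(np.abs(e - np.median(e)))                 # robust scale (MAD)
        k = k_factor * s
        # Huber weights from Table 66.7: 1 if |e| <= k, k/|e| otherwise
        w = np.where(np.abs(e) <= k, 1.0, k / np.maximum(np.abs(e), 1e-12))
        # Weighted least squares step, solving the estimating equations (66.23)
        beta_new = np.linalg.solve(X.T @ (X * w[:, None]), X.T @ (w * y))
        if np.max(np.abs(beta_new - beta)) < tol:
            return beta_new
        beta = beta_new
    return beta   # the second element is the robust slope (e.g., the market beta)
```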
References

Avery, C., & Zemsky, P. (1998). Multidimensional uncertainty and herd behavior in financial markets. American Economic Review, 88, 724–748.
Banerjee, A. (1992). A simple model of herd behavior. Quarterly Journal of Economics, 107, 797–818.
Bikhchandani, S., Hirshleifer, D., & Welch, I. (1992). A theory of fads, fashion, custom, and cultural change as informational cascades. Journal of Political Economy, 100, 992–1026.
Blasco, N., & Ferreruela, S. (2007). Comportamiento imitador en el mercado bursátil español: Evidencia intradiaria. Revista de Economía Financiera, 13, 56–75.
Blasco, N., & Ferreruela, S. (2008). Testing intentional herding in familiar stocks: An experiment in an international context. The Journal of Behavioral Finance, 9(2), 72–84.
Blasco, N., Corredor, P., & Ferreruela, S. (2011). Detecting intentional herding: What lies beneath intraday data in the Spanish stock market. Journal of the Operational Research Society, 62, 1056–1066.
Bohl, S., Martin, T., & Klein, A. (2011). Short selling bans and institutional investors herding behavior: Evidence from the global financial crisis. Working paper. http://ssrn.com. Accessed 27 Dec 2011.
Bowe, M., & Domuta, D. (2004). Investor herding during financial crisis: A clinical study of the Jakarta stock exchange. Pacific-Basin Finance Journal, 12(4), 387–418.
Brennan, M. (1993). Agency and asset prices. Finance working paper no. 6–93, UCLA.
Calvo, G., & Mendoza, E. (2000). Rational contagion and the globalization of securities markets. Journal of International Economics, 51(1), 79–113.
Carhart, M. (1997). On persistence in mutual fund performance. The Journal of Finance, 52(1), 57–82.
Chang, E. C., Cheng, J. W., & Khorana, A. (2000). An examination of herd behavior in equity markets: An international perspective. Journal of Banking and Finance, 24, 1651–1679.
Chari, V. V., & Kehoe, P. (2004). Financial crises as herds: Overturning the critiques. Journal of Economic Theory, 119(1), 128–150.
Christie, W. G., & Huang, R. D. (1995). Following the pied piper: Do individual returns herd around the market? Financial Analyst Journal, 51(4), 31–37.
Demirer, R., & Kutan, A. (2006). Does herding behavior exist in Chinese stock markets? Journal of International Financial Markets, Institutions and Money, 16(2), 123–142.
Demirer, R., Kutan, A., & Chen, C. (2010). Do investors herd in emerging stock markets? Evidence from the Taiwanese market. Journal of Economic Behavior & Organization, 76(2), 283–295.
Devenow, A., & Welch, I. (1996). Rational herding in financial economics. European Economic Review, 40, 603–615.
Economou, F., Kostakis, A., & Philippas, N. (2011). Cross-country effects in herding behavior: Evidence from four south European markets. Journal of International Financial Markets, Institutions and Money, 21(3), 443–460.
Fama, E. F., & French, K. (1993). Common risk factors in the returns on stocks and bonds. Journal of Financial Economics, 33(1), 3–56.
Fernández, V. (2011). Los fondos de pensiones como activo financiero familiar. Comparativa europea y perspectivas en España. Ahorro familiar en España, 17, 319–332.
Ferruz, L., Sarto, J. L., & Vicente, L. (2008a). Herding behavior in Spanish equity funds. Applied Economics Letters, 15(7), 573–576.
Ferruz, L., Sarto, J. L., & Vicente, L. (2008b). Convergencia estratégica en la industria española de fondos de inversión. El Trimestre Económico, 75(4), 1043–1060.
Ferson, W., & Harvey, C. R. (1991). The variation of economic risk premiums. Journal of Political Economy, 99, 285–315.
Ferson, W., & Harvey, C. R. (1993). The risk and predictability of international equity returns. Review of Financial Studies, 6, 527–566.
Ferson, W., & Korajczyk, R. A. (1995). Do arbitrage pricing models explain the predictability of stock returns? Journal of Business, 68, 309–349.
Fong, K., Gallagher, D. R., Gardner, P., & Swan, P. L. (2011). Follow the leader: Fund managers trading in signal-strength sequence. Accounting & Finance, 51(3), 684–710.
Fromlet, H. (2001). Behavioral finance – theory and practical application. Business Economics, 36, 63–69.
Gleason, K. C., Lee, C. I., & Mathur, I. (2003). Herding behavior in European futures markets. Finance Letters, 1, 5–8.
Gleason, K. C., Mathur, I., & Peterson, M. A. (2004). Analysis of intraday herding behavior among the sector ETFs. Journal of Empirical Finance, 11, 681–694.
Gompers, P. A., & Metrick, A. (2001). Institutional investors and equity prices. Quarterly Journal of Economics, 116(1), 229–259.
González, P., & García, J. C. (2002). Regulación del sector de seguros y planes de pensiones. Revista ICE, Sistema Financiero: Novedades y Tendencias, 801, 51–67.
Goodfellow, C., Bohl, M. T., & Gebka, B. (2009). Together we invest? Individual and institutional investors' trading behavior in Poland. International Review of Financial Analysis, 18(4), 212–221.
Grinblatt, M., Titman, S., & Wermers, R. (1995). Momentum investment strategies, portfolio performance and herding: A study of mutual fund behavior. American Economic Review, 85, 1088–1105.
Harvey, C. R. (1989). Time-varying conditional covariances in tests of asset pricing models. Journal of Financial Economics, 24, 289–317.
Henker, J., Henker, T., & Mitsios, A. (2006). Do investors herd intraday in Australian equities? International Journal of Managerial Finance, 2(3), 196.
Hirshleifer, D., Subrahmanyam, A., & Titman, S. (1994). Security analysis and trading patterns when some investors receive information before others. Journal of Finance, 49(5), 1665–1698.
Hsieh, S., Yang, T.-Y., & Tai, Y. (2010). Positive trading effects and herding behavior in Asian markets: Evidence from mutual funds. The International Journal of Business and Finance Research, 4(2), 177–188.
Hwang, S., & Salmon, M. (2004). Market stress and herding. Journal of Empirical Finance, 11, 585–616.
Jame, R. E. (2011). Pension fund herding and stock return. Working paper. http://ssrn.com. Accessed 21 Dec 2011.
Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of decisions under risk. Econometrica, 47, 262–291.
Kim, K., & Wei, S. (2002). Offshore investment funds: Monsters in emerging markets? Journal of Development Economics, 68(1), 205–224.
Lakonishok, J., Shleifer, A., & Vishny, R. (1992a). The impact of institutional trading on stock prices. Journal of Financial Economics, 32, 23–43.
Lakonishok, J., Shleifer, A., & Vishny, R. (1992b). The structure and performance of the money management industry. Brookings Papers: Microeconomics, 1992, 339–391.
Lillo, F., Moro, E., Vaglica, G., & Mantegna, R. (2008). Specialization of strategies and herding behavior of trading firms in a financial market. New Journal of Physics, 10, 043019 (15 pp).
Lin, A. Y., & Swanson, P. E. (2003). The behavior and performance of foreign investors in emerging equity markets: Evidence from Taiwan. International Review of Finance, 4(3–4), 189–210.
Lobao, J., Serra, A., & Serra, P. (2007). Herding behavior evidence from Portuguese mutual funds. In G. Gregoriou (Ed.), Diversification and portfolio management of mutual funds (pp. 167–197). New York: Palgrave MacMillan.
Martín, R., & Simin, T. (2002). Robust beta mining. In U. Fayyad, G. Grinstein, & A. Wierse (Eds.), Information visualization in data mining and knowledge discovery (pp. 309–312). San Francisco: Morgan Kaufmann.
Maug, E., & Naik, N. (1996). Herding and delegated portfolio management: The impact of relative performance evaluation on asset allocation. IFA working paper 223. http://ssrn.com. Accessed 27 Dec 2011.
OECD. (2010). Pension markets in focus, 7. http://www.oecd.org. Accessed 22 Jul 2011.
Oehler, A., & Goeth-Chi, G. (2000). Institutional herding in bond markets. Working paper, Digital archive for economics and business studies. http://ssrn.com. Accessed 27 Dec 2011.
Patterson, D. M., & Sharma, V. (2006). Do traders follow each other at the NYSE? Working paper, University of Michigan-Dearborn.
Puckett, A., & Yan, X. (2007). The determinants and impact of short-term institutional herding. Working paper. http://ssrn.com. Accessed 27 Dec 2011.
Rajan, R. G. (1994). Why credit policies fluctuate: A theory and some evidence. Quarterly Journal of Economics, 436, 399–442.
Roll, R. (1992). A mean/variance analysis of tracking error. Journal of Portfolio Management, 18(4), 13–22.
Sahut, J. M., Gharbi, S., & Gharbi, H. O. (2011). Stock volatility, institutional ownership and analyst coverage. Bankers Markets & Investors, 110 (Janvier-Février).
Sanz, J. (1999). Los planes de pensiones y jubilación como sistema complementario de las pensiones. Acciones e investigaciones sociales, 9, 157–166.
Scharfstein, D. S., & Stein, J. C. (1990). Herd behavior and investment. American Economic Review, 80, 465–479.
Shapour, M., Raeei, R., Ghalibaf, H., & Golarzi, H. (2010). Analysis of herd behavior of investors in Tehran stock exchange using a state-space model. Journal of Financial Accounting Research, 2(4), 49–60.
Shefrin, H. (2000). Beyond greed and fear: Understanding behavioral finance and the psychology of investing. Cambridge: HBS Press.
Tan, L., Chiang, T. C., Mason, J. R., & Nelling, E. (2008). Herding behavior in Chinese stock markets: An examination of A and B shares. Pacific-Basin Finance Journal, 16, 61–77.
Thaler, R. (1991). Quasi-rational economics. New York: Russell Sage Foundation.
Trueman, B. (1994). Analyst forecasts and herding behavior. Review of Financial Studies, 7, 97–124.
Tversky, A., & Kahneman, D. (1986). Rational choice and the framing of decisions. Journal of Business, 59, 251–278.
Uchida, H., & Nakagawa, R. (2007). Herd behavior in the Japanese loan market: Evidence from bank panel data. Journal of Financial Intermediation, 16, 555–583.
Voronkova, S., & Bohl, M. (2005). Institutional traders' behavior in an emerging stock market: Empirical evidence on Polish pension fund investors. Journal of Business Finance and Accounting, 32(7–8), 1537–1560.
Walter, A., & Weber, F. (2006). Herding in the German mutual fund industry. European Financial Management, 12(3), 375–406.
Wang, D. (2008). Herd behavior towards the market index: Evidence from 21 financial markets. IESE business school working paper. http://ssrn.com. Accessed 27 Dec 2011.
Weiner, R. J. (2006). Do birds of a feather flock together? Speculator herding in the world oil market. Discussion paper, Resources for the Future, RFF DP 06-31. http://ideas.repec.org/p/rff/dpaper/dp-06-31.html. Accessed 27 Dec 2011.
Wermers, R. (1999). Mutual fund herding and the impact on stock prices. Journal of Finance, 54, 581–622.
Wylie, S. (2005). Fund manager herding: A test of the accuracy of empirical results using U.K. data. The Journal of Business, 78(1), 381–403.
67 Estimating the Correlation of Asset Returns: A Quantile Dependence Perspective

Nicholas Sim
Contents
67.1 Introduction ............................................................ 1830
67.2 The C-QQR Model ........................................................ 1834
     67.2.1 Specification .................................................... 1834
     67.2.2 Estimation ....................................................... 1837
67.3 Application ............................................................. 1839
     67.3.1 Dynamic C-QQR Correlation ........................................ 1842
67.4 Conclusion .............................................................. 1851
References ................................................................... 1854
Abstract
In the practice of risk management, an important consideration in the portfolio choice problem is the correlation structure across assets. However, the correlation is an extremely challenging parameter to estimate as it is known to vary substantially over the business cycle and respond to changing market conditions. Focusing on international stock markets, I consider a new approach of estimating correlation that utilizes the idea that the condition of a stock market is related to its return performance, particularly to the conditional quantile of its return, as the lower return quantiles reflect a weak market while the upper quantiles reflect a bullish one. Combining the techniques of quantile regression and copula modeling, I propose the copula quantile-on-quantile regression (C-QQR) approach to construct the correlation between the conditional quantiles of stock returns. The C-QQR approach uses the copula to generate a regression function for modeling the dependence between the conditional quantiles of the stock returns under consideration. It is estimated using a two-step quantile regression procedure, where in principle, the first step is implemented to model the conditional quantile of one stock return, which is then related in the second step to the conditional quantile of another return. The C-QQR approach is then applied to study how the US stock market is correlated with the stock markets of Australia, Hong Kong, Japan, and Singapore.

Keywords
Stock markets • Copula • Correlation • Quantile regression • Quantile dependence • Business cycle • Dynamics • Risk management • Investment • Tail risk • Extreme events • Market uncertainties

N. Sim
School of Economics, University of Adelaide, Adelaide, SA, Australia
e-mail: [email protected]

C.-F. Lee, J. Lee (eds.), Handbook of Financial Econometrics and Statistics, DOI 10.1007/978-1-4614-7750-1_67, © Springer Science+Business Media New York 2015
67.1 Introduction
In the practice of risk management, an important consideration in the portfolio choice problem is the correlation structure across assets. The correlation is especially crucial for conveying the level of portfolio risk as it is the parameter that underscores the extent of how well diversified a given portfolio is. Nevertheless, in practice, estimating the actual level of correlation is a notoriously difficult task, where existing research has overwhelmingly shown that the true correlation, be it the correlation of equities, bonds, or exchange rates, may fluctuate significantly over the business cycle and during extreme events.¹ Over the past two decades, the literature has offered new ways of estimating asset correlation that depart from the premise that the correlation is constant. To complement these existing methodologies, this chapter offers a new perspective based on the concept of dependence between conditional quantiles to motivate a new approach of modeling correlation structure that takes into account that the correlation may be sensitive to the performance of financial markets.

Since the early findings of Erb et al. (1994) and Longin and Solnik (1995), among others, it is well accepted among academics and practitioners that the correlation may respond to changing economic circumstances.² A prime example can be found in the study of how international equity markets are dependent, where the literature has provided ample evidence that the level of dependence tends to be
stronger when markets become bearish (e.g., Erb et al. 1994). In order to capture such salient features about correlation, it is important to address the possibility that the actual level of correlation is contingent on current market conditions. Recent techniques of modeling asset dependence are developed with this objective in mind. They include regime-switching frameworks to model jumps in correlation between normal and bearish states (e.g., Ang and Bekaert 2002; Guidolin and Timmermann 2005); extreme value theory that facilitates estimating the level of dependence during extreme events (e.g., Longin and Solnik 2001; Ang and Chen 2002; Poon et al. 2004; Heffernan and Tawn 2004); the copula approach, where the copula is a function that expresses the dependence structure of assets that also delivers an explicit measure of the dependence between the tail distributions of these assets (e.g., Patton 2006; Bouyé and Salmon 2009; Chollete et al. 2011); and the mixed copula approach that combines several copula functions where a different copula may be specified for a different state where financial markets are found (e.g., Hu 2006; Okimoto 2008). These techniques are widely popular in financial applications and are especially powerful for eliciting the properties of correlation across various market conditions. Nevertheless, there are also some limitations in the scope of how they may be used in financial applications. Take the extreme value theory approach of Longin and Solnik (2001), for example. Extreme value theory is relevant for the study of the conditional correlation between the extreme tail distributions of asset returns, hence is particularly useful for providing results on asymptotic tail dependence. However, as the asymptotic tail dependence is only suitable for describing the level of dependence between markets that are significantly bearish or bullish, the extreme value theory approach may not be amenable for examining the level of dependence when the extent of such market conditions is, loosely speaking, "less severe" or "mild." Similarly, the concept of tail dependence in the copula approach is an asymptotic concept, and like extreme value theory, it does not convey the level of dependence for the different degrees of how bearish or bullish markets are. Regime-switching models can paint a broader picture on the characteristics of correlation, but might also require specifying a large number of regimes that makes them computationally burdensome to estimate. By focusing on modeling the correlation of international stock markets as the application, this chapter contributes to the existing literature by offering a simple approach to estimate the level of dependence that pertains not only to extreme market conditions but also to varying degrees of market bearishness or bullishness. It does so by simply relating the severity (or a lesser extent) of these market conditions to the distributions of the market returns under consideration. For example, we may associate the lower tail of the return distribution with a bearish market and the upper tail of this distribution with a bullish one. And between, say, the 10th and 30th percentiles of a stock return, the 10th return percentile may be perceived as associated with a market more bearish than the market associated with the 30th return percentile.

¹ For instance, equity returns are more highly correlated during business cycle downturns and bear markets (Erb et al. 1994; Longin and Solnik 1995, 2001; Ang and Chen 2002); the same is true for exchange rate returns (Patton 2006; Bouyé and Salmon 2009). Likewise, the deviation in the stock-bond correlation during bear markets is documented by Guidolin and Timmermann (2005).
² For example, Erb et al. (1994) examine the dependence of G-7 equity stock markets by computing semicorrelations and find that the correlation is generally larger when the equity returns and output growth of these countries are below rather than above their respective means, thus when the economies and financial markets in these countries are bearish. Longin and Solnik (1995) examine the dependence of the market returns of Switzerland plus the G-7, less Italy, by using a version of the multivariate GARCH model to show that the correlation is larger in times of greater market uncertainties.
By taking advantage of the fact that the quantile information of a market return reflects the market performance, concepts such as a market being “mildly” or “severely” bearish or bullish can be expressed more concretely by relating them to certain quantiles on the distribution of that market return.
To estimate the level of dependence for specific market conditions, this chapter proposes a quantile dependence approach that looks at how the conditional quantiles of the market returns are correlated. From this perspective, to study how the equity markets of, say, the United States and Japan are dependent when they are bearish (bullish), one suggestion is to construct the correlation between the 10th (90th) conditional percentiles of the United States and Japan market returns. When studying their dependence during less bearish (bullish) times, we may construct the correlation of their return quantiles that are further away from the left (right) tails and closer to the center of the distributions. In other words, from the perspective that the distributional information (in particular, the quantile information) of a market return is indicative of the concurrent market condition, we may model the dependence structure of equity markets in a holistic and flexible way by estimating the correlation between the conditional quantiles of their returns, so that the level of dependence pertaining to a wide range of market conditions may be uncovered. To model the dependence of the return quantiles, a new framework combining the techniques of quantile regression and copula modeling is proposed. Quantile regression is a statistical tool for examining how the quantile of a variable depends on some other conditioning variables and is thus useful in this study, whose main objective is to investigate the link between the distributional (or quantile) information on asset returns on the one hand and the correlation between these assets on the other. Together with the quantile regression technique, I use the copula model, which is popular among practitioners for its flexibility in modeling dependence between variables that may have complex, nonstandard joint distributions.3 When the study of correlation is considered, the copula approach is extremely useful as the correlation structure is summarized by the parameter in the copula function. In the case of the Gaussian or Student-t copula, the copula parameter is itself the correlation coefficient. Using these techniques to estimate how the conditional quantile of an asset return is correlated with the conditional quantile of another asset return, this chapter thereby proposes a methodology dubbed the copula quantile-on-quantile regression (C-QQR) approach. The C-QQR approach is computationally convenient to implement as it is based on a two-step quantile regression procedure. Specifically, when computing the correlation between the market returns of, say, the United States and Japan, the C-QQR approach proceeds by first estimating the conditional quantile of the US return by way of quantile regression on an auxiliary equation, which is not an equation of main interest. Then, using information from this regression, it proceeds to estimate the correlation between the conditional quantiles of the market returns of Japan and the United States. This is achieved by implementing quantile regression on a quantile dependence equation, which contains a parameter that expresses the dependence between the return quantiles of these markets. As the quantile dependence equation articulates how the quantile of a return is related to the quantile of another return, it can be used to examine the entire dependence structure between these assets where their relationship is assumed to be contingent on their quantile information.
3 Introductions to copula models can be found in Nelsen (2006) and Trivedi and Zimmer (2007).
The application in this chapter focuses on modeling how the US market return is correlated with the market returns of Australia, Hong Kong, Japan, and Singapore. I employ the C-QQR approach to estimate the correlation between the 10th–90th conditional percentiles of the US return and the 10th–90th conditional return percentiles of Australia, Hong Kong, Japan, or Singapore (in decile intervals), leading to a total of 81 different correlation estimates for each return pair.4 These estimates exhibit substantial variation, implying that the correlation varies considerably across different levels (thus quantiles) of market returns. In particular, the C-QQR approach shows that the correlation between returns at the center of distributions, such as the correlation between the median returns, is typically weaker. This implies that equity markets are less dependent when conditions are not extreme. It also shows that the correlation is stronger between returns deep in the left tails and that the correlation between the 10th return percentiles is consistently larger than the correlation between the median returns. This observation is in line with the existing evidence that stock markets are more strongly dependent when they are bearish. After computing the C-QQR correlations, it is straightforward to construct a correlation time series by assigning a correlation estimate (from the pool of 81 C-QQR correlation estimates) to the realized returns in each period t. Calling this the dynamic C-QQR correlation, it is interesting to compare this constructed series against estimates that are obtained using conventional methods, such as the celebrated dynamic conditional correlation (DCC) framework of Engle (2002), which is designed for the study of how correlation evolves across time. Interestingly, although the C-QQR and DCC approaches are based on completely unrelated modeling principles – the C-QQR approach is based on quantile regressions and the DCC approach is based on the GARCH framework – the dynamic C-QQR correlation turns out to have similar visual characteristics as the DCC. This underscores another strength of the C-QQR approach – its ability to capture the salient features of the actual correlation dynamics. This chapter draws heavily from Sim (2012) on modeling quantile dependence but contains two important differences. First, it considers a different specification of the auxiliary equation from the one in Sim (2012). As it turns out (the results are suppressed for the sake of conciseness), the estimation outcomes of the C-QQR correlation are not sensitive to using either the current auxiliary equation or the one in Sim (2012), suggesting at first pass that the C-QQR correlation estimates are fairly robust to mis-specification of the auxiliary equation. Second, the application in Sim (2012) is limited in scope as it focuses on modeling the correlation of the US market return with the market and sectoral returns of Australia. Extending this work, I present new results on the correlation of the US market with the stock markets of Hong Kong, Japan, and Singapore in addition to Australia.
4 Take the US-Australia return pair, for example. The 81 correlation estimates consist of estimates of the correlation between the 10th US and 10th Australian return percentiles, the 10th US and 20th Australian return percentiles, and so on, up to the 90th US and 90th Australian return percentiles.
67.2 The C-QQR Model
67.2.1 Specification

To construct the C-QQR model, an auxiliary equation and a quantile dependence equation must be specified. In our application, the auxiliary equation is used to model the conditional quantile of the US return, and the quantile dependence equation relates the conditional quantile of the US market return to the conditional quantile of the market returns of Australia, Hong Kong, Japan, and Singapore. To specify the auxiliary equation, a model is postulated to link the US return (x) to a set of conditioning variables (z) as

$$ x_t = \beta^{\top} z_t + v_t, \qquad (67.1) $$
where vt is the innovation in xt. In (67.1), xt is expressed as an autoregression with 12 lags to capture any mean-reverting behavior of the US stock return, but it should be emphasized that the final correlation estimate is not sensitive to using different numbers of lags. While the auxiliary equation is specified as an autoregression of xt, another possibility is to motivate a model in which the US return depends on certain macroeconomic aggregates5 or a model that is based on a general equilibrium framework with certain restrictions imposed. For example, instead of an autoregression, Sim (2012) takes these considerations into account by specifying the auxiliary equation as a function of US industrial production,6 which is an important determinant of stock returns from both theoretical and empirical perspectives.7 To specify the quantile dependence equation, a dependence function h is postulated to relate the return of Australia, Hong Kong, Japan, or Singapore (y) to the US return (x) as

$$ y_t = h\big(x_t;\, \tilde{\varphi}(u_t, v_t)\big), \qquad (67.2) $$

5 According to the seminal work of Chen, Roll, and Ross (1986), asset prices could also be linked to information about macroeconomic aggregates, as they could influence the discount rate or the dividend stream, given that the asset price is the sum of the discounted future dividend stream (e.g., McQueen and Roley 1993; Flannery and Protopapadakis 2002; Shanken and Weinstein 2006).
6 For example, Balvers et al. (1990) show that a general equilibrium framework with a logarithmic utility function and full capital depreciation can deliver a linear econometric model of the stock return on the log of output.
7 See Balvers et al. (1990) for a theoretical justification of the importance of output as a determinant of stock returns. In the empirical study of Shanken and Weinstein (2006), industrial production is found to be an important determinant of stock returns among other factors, such as expected and unanticipated inflation, the spread in corporate bonds, and the spread in treasury yields.
where vt enters the quantile dependence equation; ut, the innovation in yt, is independent of vt; and yt is monotonic in ut for each xt and vt.8 The main parameter of interest is $\tilde{\varphi}$, which captures the dependence between y and x. In order to allow this dependence parameter to be contingent on information about the US and the other financial market, $\tilde{\varphi}$ in (67.2) is modeled as a function of ut and vt, so that shocks to either the United States or the other financial market may influence the extent to which these markets are dependent. To obtain a parameter that provides information about the correlation between y and x, a copula model will be used to generate the function h. In the bivariate case, the copula is a function that combines the marginal distributions of x and y to yield their joint distribution function. Specifically, for variables x and y with marginal distributions Fx and Fy and joint distribution F, there exists a unique copula function C with copula parameter $\tilde{\varphi}$ (with an abuse of notation) that satisfies

$$ F(x, y) = C\big(F_x(x),\, F_y(y);\, \tilde{\varphi}\big). $$

The copula that is employed in this study is the Gaussian copula, although other copulae such as the Student-t copula may be explored. Letting $\Phi_2(\cdot)$ denote the bivariate standard Gaussian distribution and $\Phi(\cdot)$ denote the standard normal distribution, the Gaussian copula expresses the joint distribution of x and y as

$$ F(y_t, x_t) = \Phi_2\big(\Phi^{-1}(F_y(y_t)),\, \Phi^{-1}(F_x(x_t));\, \tilde{\varphi}\big), \qquad (67.3) $$
and is especially useful as its parameter $\tilde{\varphi}$ is the correlation coefficient. One important advantage in adopting the Gaussian copula is the feasibility of transforming it into a regression model that is amenable to the quantile regression technique. For instance, Bouyé and Salmon (2009) show that the $\tau_y$ copula quantile curve based on the Gaussian copula can be derived from (67.3) as

$$ \tau_y = \frac{\partial \Phi_2\big(\Phi^{-1}(F_y(y_t)),\, \Phi^{-1}(F_x(x_t));\, \tilde{\varphi}\big)}{\partial F_x(x_t)} = \Phi\!\left( \frac{\Phi^{-1}\big(F_y(y_t)\big) - \tilde{\varphi}\, \Phi^{-1}\big(F_x(x_t)\big)}{\sqrt{1 - \tilde{\varphi}^2}} \right), \qquad (67.4) $$
where the first equality follows by definition.9 Rewriting (67.4), a regression model can be written as10

$$ \Phi^{-1}\big(F_y(y_t)\big) = \tilde{\varphi}\, \Phi^{-1}\big(F_x(x_t)\big) + \sqrt{1 - \tilde{\varphi}^2}\; \Phi^{-1}(\tau_y). \qquad (67.5) $$

8 Monotonicity ensures that conditional on xt and vt, the quantile of y can be mapped from the quantile of u.
9 See p. 726 in Bouyé and Salmon (2009).
10 When xt is replaced by yt−1, the model becomes an autoregression in $\Phi^{-1}(F_y(y_t))$, leading to the nonlinear copula quantile autoregression model of Chen et al. (2009).
If the marginal distributions Fx and Fy are standard normal, (67.5) simplifies further into an elegant regression model of

$$ y_t = \tilde{\varphi}\, x_t + \sqrt{1 - \tilde{\varphi}^2}\; \Phi^{-1}(\tau_y). \qquad (67.6) $$

Adapting from (67.6), the quantile dependence equation that is considered in the application can be expressed as

$$ y_t = \tilde{\varphi}(u_t, v_t)\, x_t + \sqrt{1 - \tilde{\varphi}(u_t, v_t)^2}\; \Phi^{-1}(\tau_y), \qquad (67.7) $$
where the correlation parameter is modeled as a function of ut and vt to be consistent with (67.2).

That $\tilde{\varphi}$ is allowed to be influenced by ut and vt is a crucial feature of the quantile dependence approach. First of all, if $\tilde{\varphi}$ were a constant, the dependence parameter would not be affected by the quantile information of the financial markets. Second, the conditional quantiles of y and x are intertwined with the quantiles of u and v, respectively. By postulating $\tilde{\varphi}$ as a function of ut and vt, we are relating the dependence between y and x given by $\tilde{\varphi}$ to the quantile information on u and v, which is in turn linked to the quantile information on y and x. Therefore, allowing $\tilde{\varphi}$ to depend on the quantile information of u and v enables us to capture the level of dependence that is specific to the quantile information of y and x. For example, let us consider how the conditional quantile of the US return can be motivated from the auxiliary equation of (67.1). Equation 67.1 shows that, holding the conditioning variables fixed, any extrinsic variation in the US return (xt) must be attributed to vt. In other words, the conditional quantile of the US return is linked to the quantile of vt, so that the $\tau_x$ conditional quantile of the US return is given as

$$ Q_x(\tau_x \mid z_t) = \beta^{\top} z_t + F_v^{-1}(\tau_x), \qquad (67.8) $$
where the distribution function of v is $F_v(\cdot)$ and its $\tau_x$ quantile is $F_v^{-1}(\tau_x)$. Conditioning on z, (67.8) therefore illustrates how the conditional quantile of x, i.e., $Q_x(\tau_x \mid z_t)$, is intertwined with the quantile of v. By modeling $\tilde{\varphi}$ as a function of v, $\tilde{\varphi}$ may then vary with the quantile information of v and hence of x. Likewise, by modeling $\tilde{\varphi}$ as a function of u, $\tilde{\varphi}$ may vary with the quantile information of y, as the quantile information of u and y are related. To express the concept of quantile dependence using the general quantile dependence equation of (67.2) for our discussion, recall from (67.8) that $Q_x(\tau_x \mid z_t)$ is linked with $F_v^{-1}(\tau_x)$. By conditioning yt on $Q_x(\tau_x \mid z_t)$, logical consistency requires fixing vt at $F_v^{-1}(\tau_x)$ in (67.2) as well. This is because, as we have seen in (67.8), vt cannot vary freely in the construction of $Q_x(\tau_x \mid z_t)$. With this in mind, the $\tau_y$ quantile of the Australian return conditioning on $Q_x(\tau_x \mid z_t)$ can be expressed as
$$ Q_y\big(\tau_y \mid Q_x(\tau_x \mid z_t)\big) = h\Big(Q_x(\tau_x \mid z_t);\; \tilde{\varphi}\big(F_u^{-1}(\tau_y),\, F_v^{-1}(\tau_x)\big)\Big) \equiv h\big(Q_x(\tau_x \mid z_t);\; \varphi(\tau_y, \tau_x)\big), \qquad (67.9) $$

where I have defined $\varphi(\tau_y, \tau_x) \equiv \tilde{\varphi}\big(F_u^{-1}(\tau_y), F_v^{-1}(\tau_x)\big)$. Through the dependence parameter φ(τy, τx), (67.9) summarizes how the τy Australian return quantile is related to the τx US return quantile. If the dependence function h in (67.9) is replaced with the copula-based model in (67.7), then (67.9) expresses the copula quantile-on-quantile regression (C-QQR) model that is to be estimated in this chapter. Before we proceed, it should be emphasized why the US return is chosen for the auxiliary model over the other market returns. In formulating (67.1) and (67.2), two assumptions are made. First, in relation to the auxiliary equation of (67.1), which is modeled as an autoregression of xt, I assume that US economic fundamentals are sufficient for determining the US return beyond the economic fundamentals of the other markets. Hence, information from the other markets would not matter for driving the US return and is excluded from (67.1), an assumption that is likely to be reasonable for relatively smaller economies such as Australia, Hong Kong, and Singapore and perhaps less so for Japan. By including the United States in the quantile dependence equation of (67.2), the second assumption asserts that the US return information is important for influencing the other stock markets. This would be plausible if US fundamentals contribute towards the global economic forces that drive the co-movement between the United States and the other financial markets. Therefore, if information about US fundamentals has global content, and if this is subsumed in the US return, the US return will be a powerful variable for explaining the variation in the market returns of Australia, Hong Kong, Japan, and Singapore. This motivates placing the US return, not the other market returns, as a right-hand-side variable in the quantile dependence equation. Furthermore, it should also be emphasized that (67.1) and (67.2) form a triangular system of simultaneous equations of the type analyzed by Ma and Koenker (2006), who study the dependence between conditional quantiles that is motivated from such a structure. However, there is a fundamental difference between this chapter and Ma and Koenker (2006). While Ma and Koenker (2006) study a parametric model, this chapter does not make a parametric assumption about the function $\tilde{\varphi}(u_t, v_t)$, in order that the data is allowed to speak with respect to the response of $\tilde{\varphi}$ to the quantile information of u and v. Therefore, this requires an alternative method of estimation from Ma and Koenker (2006), which is discussed in the next section.
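To make the mechanics of the Gaussian-copula regression form concrete, the following is a minimal simulation sketch (my own illustration, not code from the chapter): for bivariate standard normal returns with correlation phi, a τ-quantile regression of y on x should recover a slope close to phi and an intercept close to √(1 − phi²) Φ⁻¹(τ), exactly as (67.6) predicts. It assumes numpy, scipy, and statsmodels are available; all variable names are hypothetical.

```python
# Verify (67.6) by simulation: for standard normal margins with correlation phi,
# the tau-quantile regression of y on x has slope ~phi and intercept
# ~sqrt(1 - phi^2) * Phi^{-1}(tau).
import numpy as np
from scipy.stats import norm
import statsmodels.api as sm
from statsmodels.regression.quantile_regression import QuantReg

rng = np.random.default_rng(0)
phi, tau, n = 0.6, 0.1, 50_000

x = rng.standard_normal(n)
y = phi * x + np.sqrt(1 - phi**2) * rng.standard_normal(n)   # corr(x, y) = phi

fit = QuantReg(y, sm.add_constant(x)).fit(q=tau)
print("slope (should be near phi):", fit.params[1])
print("intercept:", fit.params[0],
      "vs. sqrt(1 - phi^2) * Phi^{-1}(tau):", np.sqrt(1 - phi**2) * norm.ppf(tau))
```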
67.2.2 Estimation

This section outlines the procedure for estimating the dependence parameter in the C-QQR model, i.e., $\varphi(\tau_y, \tau_x) \equiv \tilde{\varphi}\big(F_u^{-1}(\tau_y), F_v^{-1}(\tau_x)\big)$ in (67.9). In the context of the C-QQR model where the h function in (67.9) is replaced by the copula-based model in (67.7), φ(τy, τx) expresses the correlation between the τy conditional quantile of y and
the τx conditional quantile of x. Even though the value of φ(τy, τx) is based upon the $\tilde{\varphi}(u_t, v_t)$ parameter in the quantile dependence equation of (67.2),11 the estimate of φ(τy, τx) cannot be obtained by a straightforward application of quantile regression on (67.2), as it is a single equation with two unobservable terms ut and vt. In order to estimate φ(τy, τx), or equivalently $\tilde{\varphi}\big(F_u^{-1}(\tau_y), F_v^{-1}(\tau_x)\big)$, I first anchor vt at $F_v^{-1}(\tau_x)$ in the quantile dependence equation while letting ut be "free" and then implement a τy-quantile regression on this resulting equation to set ut to $F_u^{-1}(\tau_y)$ for an estimate of $\tilde{\varphi}\big(F_u^{-1}(\tau_y), F_v^{-1}(\tau_x)\big)$. To elaborate, let us decompose the quantile dependence equation of (67.2) into two parts:

$$ y_t = h\big(x_t;\, \tilde{\varphi}(u_t, F_v^{-1}(\tau_x))\big) + \underbrace{\Big[ h\big(x_t;\, \tilde{\varphi}(u_t, v_t)\big) - h\big(x_t;\, \tilde{\varphi}(u_t, F_v^{-1}(\tau_x))\big) \Big]}_{\Omega}. \qquad (67.10) $$

In the first part of (67.10), the v-argument in $\tilde{\varphi}$ is anchored at $F_v^{-1}(\tau_x)$, so that $h\big(x_t;\, \tilde{\varphi}(u_t, F_v^{-1}(\tau_x))\big)$ is now a function of a single unobservable, ut. If the Ω portion of (67.10) can be controlled, our target parameter φ(τy, τx) can be estimated by implementing a τy-quantile regression on (67.10), as doing so would, in principle, deliver an estimate of the conditional function $h\big(x_t;\, \tilde{\varphi}(F_u^{-1}(\tau_y), F_v^{-1}(\tau_x))\big)$, which contains our target. To control for Ω, I approximate it by a first-order Taylor expansion of vt around $F_v^{-1}(\tau_x)$ as

$$ \Omega \approx h_{\tilde{\varphi}}\big(x_t;\, \tilde{\varphi}(u_t, F_v^{-1}(\tau_x))\big)\, \tilde{\varphi}_v\big(u_t, F_v^{-1}(\tau_x)\big)\, v_t(\tau_x), \qquad (67.11) $$
where (67.11) uses the definition $v_t(\tau_x) = v_t - F_v^{-1}(\tau_x)$. The function $h_{\tilde{\varphi}}$ is the partial derivative of h with respect to $\tilde{\varphi}$, which is a known expression given that h is specified in (67.7). The parameter $\tilde{\varphi}_v\big(u_t, F_v^{-1}(\tau_x)\big)$ is the partial derivative of $\tilde{\varphi}$ with respect to vt, where its v-argument is evaluated at $F_v^{-1}(\tau_x)$. The functional form of this partial derivative is unknown, as the functional form of $\tilde{\varphi}\big(u_t, F_v^{-1}(\tau_x)\big)$ is not specified. With the first-order Taylor approximation leading to (67.11), the initial problem of controlling for vt in the quantile dependence equation now becomes an issue of controlling for $v_t(\tau_x)$ in (67.11). The new variable $v_t(\tau_x)$ can be estimated as the residual following a τx-quantile regression on the auxiliary model of (67.1).12 Letting $\hat{v}_t(\tau_x)$ denote the estimate of $v_t(\tau_x)$, we can control for Ω using its feasible counterpart

$$ \hat{\Omega} = h_{\tilde{\varphi}}\big(x_t;\, \tilde{\varphi}(u_t, F_v^{-1}(\tau_x))\big)\, \tilde{\varphi}_v\big(u_t, F_v^{-1}(\tau_x)\big)\, \hat{v}_t(\tau_x), $$

so that the quantile dependence equation to be taken to the data is

$$ y_t = h\big(x_t;\, \tilde{\varphi}(u_t, F_v^{-1}(\tau_x))\big) + h_{\tilde{\varphi}}\big(x_t;\, \tilde{\varphi}(u_t, F_v^{-1}(\tau_x))\big)\, \tilde{\varphi}_v\big(u_t, F_v^{-1}(\tau_x)\big)\, \hat{v}_t(\tau_x). \qquad (67.12) $$

11 φ(τy, τx) can be motivated from $\tilde{\varphi}(u_t, v_t)$ by anchoring the u- and v-arguments in $\tilde{\varphi}(u_t, v_t)$ at $F_u^{-1}(\tau_y)$ and $F_v^{-1}(\tau_x)$.
12 Since $Q_x(\tau_x \mid z_t) = \beta^{\top} z_t + F_v^{-1}(\tau_x)$, we may express the auxiliary regression of (67.1), i.e., $x_t = \beta^{\top} z_t + v_t$, as $x_t = \beta^{\top} z_t + F_v^{-1}(\tau_x) + v_t - F_v^{-1}(\tau_x) = Q_x(\tau_x \mid z_t) + v_t - F_v^{-1}(\tau_x) = Q_x(\tau_x \mid z_t) + v_t(\tau_x)$. Therefore, we may estimate $v_t(\tau_x)$ as the residual from a τx-quantile regression on (67.1).
To a first-order approximation, ut is the only unobservable term in (67.12). Therefore, φ(τy, τx) can now be estimated by implementing a τy-quantile regression on (67.12). As compared to the original quantile dependence equation of (67.10), the "revised" quantile dependence equation of (67.12) now contains an additional parameter, $\tilde{\varphi}_v$, which is to be estimated together with the main parameter of interest, $\tilde{\varphi}$. Let $\rho_{\tau}(\cdot)$ be the "check" function in Koenker and Bassett (1978), defined as $\rho_{\tau}(u) = u(\tau - I(u < 0))$, where $I(\cdot)$ is an indicator function. Using (67.12), the dependence between the quantiles of x and y can be estimated by implementing a two-step quantile regression procedure:
1. Obtain residuals $\hat{v}_t(\tau_x)$ from a τx-quantile regression on (67.1), the auxiliary equation, i.e.,

$$ \min_{b} \sum_{t=1}^{T} \rho_{\tau_x}\big(x_t - b^{\top} z_t\big). $$

2. Using $\hat{v}_t(\tau_x)$, estimate $\tilde{\varphi}$ from a τy-quantile regression on (67.12), the quantile dependence equation, i.e.,

$$ \min_{(\tilde{\varphi},\, \tilde{\varphi}_v)} \sum_{t=1}^{T} \rho_{\tau_y}\big(y_t - h(x_t; \tilde{\varphi}) - h_{\tilde{\varphi}}(x_t; \tilde{\varphi})\, \tilde{\varphi}_v\, \hat{v}_t(\tau_x)\big). $$

Step 1 is a standard linear quantile regression and Step 2 is a standard nonlinear quantile regression. The second-step estimate of $\tilde{\varphi}$ will yield the desired estimate of φ(τy, τx). Because the C-QQR approach involves a two-step quantile regression procedure, it can be implemented using statistical packages for quantile regression such as the quantreg package of Koenker (2009) within the R software.13
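For concreteness, the following is a minimal sketch (my own illustration, not the author's code) of the two-step procedure for one (τy, τx) pair, with h taken from the Gaussian-copula model (67.7). The chapter points to R's quantreg (rq and nlrq) for implementation; here the nonlinear second step is done by directly minimizing the check-function objective in Python. The return arrays, the number of lags, and the tanh reparameterization are assumptions of this sketch.

```python
# Two-step C-QQR estimation for a single (tau_y, tau_x) pair, using the
# Gaussian-copula form (67.7): h(x; phi) = phi*x + sqrt(1 - phi^2)*Phi^{-1}(tau_y).
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize
import statsmodels.api as sm
from statsmodels.regression.quantile_regression import QuantReg

def check(u, tau):
    """Koenker-Bassett check function rho_tau(u) = u * (tau - 1{u < 0})."""
    return u * (tau - (u < 0))

def cqqr(y, x, tau_y, tau_x, n_lags=12):
    # Step 1: tau_x-quantile autoregression of the US return on its own lags (67.1);
    # the residuals estimate v_t(tau_x).
    Z = sm.add_constant(np.column_stack([np.roll(x, k) for k in range(1, n_lags + 1)]))
    y, x, Z = y[n_lags:], x[n_lags:], Z[n_lags:]          # drop rows contaminated by wrapped lags
    fit1 = QuantReg(x, Z).fit(q=tau_x)
    v_hat = x - fit1.fittedvalues

    # Step 2: tau_y-quantile regression on (67.12), with h from (67.7) and
    # h_phi its derivative with respect to phi.
    q = norm.ppf(tau_y)
    def objective(theta):
        phi, phi_v = np.tanh(theta[0]), theta[1]           # tanh keeps |phi| < 1
        h = phi * x + np.sqrt(1 - phi**2) * q
        h_phi = x - phi / np.sqrt(1 - phi**2) * q
        return np.sum(check(y - h - h_phi * phi_v * v_hat, tau_y))

    res = minimize(objective, x0=np.array([0.0, 0.0]), method="Nelder-Mead")
    return np.tanh(res.x[0])                               # estimate of phi(tau_y, tau_x)

# Example call (us_ret and aus_ret are hypothetical arrays of monthly returns):
# rho_10_10 = cqqr(aus_ret, us_ret, tau_y=0.1, tau_x=0.1)
```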
67.3 Application
Monthly returns of Australia, Hong Kong, Japan, Singapore, and the United States are constructed from the Datastream-MSCI indices.14 Table 67.1 provides the summary statistics for the period between March 1974 and February 2010. Among Australia, Hong Kong, Japan, and Singapore, the US market is most strongly correlated with the Singapore market (at 0.60) and most weakly correlated with the Japan market (at 0.41). But these are sample correlation coefficients that could differ significantly from actual levels of correlation especially when equity markets are bearish. By considering how
13 For instance, the Steps 1 and 2 regressions can be implemented using the rq and nlrq commands of the quantreg package in R.
14 The monthly returns are constructed as 100 multiplied by the change in the log of the index.
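As a side note, the return construction in footnote 14 amounts to a one-line transformation; a minimal sketch, assuming a hypothetical pandas Series of month-end index levels:

```python
# Monthly return = 100 * change in the log of the index (footnote 14).
# `index_level` is a hypothetical pandas Series of month-end MSCI index levels.
import numpy as np
import pandas as pd

def monthly_log_return(index_level: pd.Series) -> pd.Series:
    return 100 * np.log(index_level).diff().dropna()
```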
Table 67.1 Summary statistics on monthly stock returns (March 1974 to February 2010)

             Mean     Min      Max    Standard deviation   Correlation with United States
Australia    0.67   -44.79    20.11          5.80                       0.58
Hong Kong    1.22   -62.50    36.36          9.19                       0.49
Japan        0.39   -24.38    17.51          5.32                       0.41
Singapore    0.71   -44.55    44.92          7.70                       0.60
the quantiles of these market returns are correlated, the C-QQR approach offers a way of examining their level of dependence that is contingent on specific market conditions. In this application, the correlation between the τx conditional quantile of the US return and the τy conditional quantile of the return of Australia, Japan, Hong Kong, or Singapore is computed. The values of τy and τx are defined on the grid [0.1, 0.2, . . ., 0.9], where the total number of τy and τx combinations on this grid is 81. The C-QQR approach is implemented to estimate the correlation between the 10th–90th percentiles of the US return and the 10th–90th percentiles of the other market return (in decile intervals), for a total of 81 different correlation estimates for each return pair; a code sketch of this grid computation is given at the end of this discussion.

To visualize the C-QQR correlation, Fig. 67.1 plots the C-QQR correlation surfaces by relating the level of correlation on the z-axis to the return quantiles of Australia, Hong Kong, Japan, or Singapore on the x-axis and the return quantiles of the United States on the y-axis. The correlation surfaces offer some important insights. First, they show that the level of dependence varies substantially across the distributions of returns. Second, across all four return pairs, the C-QQR correlations share many common features that echo our existing understanding about how correlations behave under various circumstances. For instance, the C-QQR correlation has a tendency to be weak at the center of the return distributions, implying that markets are less dependent when they are neither bearish nor bullish.

As a side remark, while the correlation between centrally located quantiles may be interpreted as the level of dependence when markets are neither bearish nor bullish, measures such as the sample correlation coefficient may not deliver the same interpretation. In fact, Table 67.2 shows that the sample correlation coefficient appears to be different from the correlation between centrally located return quantiles such as the median returns, where the correlation between the median returns is always smaller than the sample correlation coefficient. This is perhaps not surprising, as the sample correlation coefficient is computed without separating out the effect of extreme events, which inflate the measured level of dependence when they occur.

Of particular relevance to the practice of risk management is the fact that stock markets are more strongly dependent when they are bearish. Through the C-QQR approach, a similar point is made in terms of the stronger dependence between return quantiles in the left tail distributions. This is demonstrated in Fig. 67.2, which plots the correlation along the main diagonals in Fig. 67.1, i.e., the correlation along τy = τx. Figure 67.2 reveals that the correlation usually peaks at around τy = τx = 0.1 (the 10th percentiles of returns). Because these lower return quantiles are associated with markets that are bearish, Fig. 67.2 reiterates a familiar result in the existing literature that the correlation between bear markets would be stronger than usual (e.g., Longin and Solnik 2001; Hu 2006; Chollete et al. 2011).
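Concretely, filling in the decile grid is a double loop over the two-step estimator sketched in Sect. 67.2.2; a minimal sketch reusing the hypothetical cqqr helper and return arrays from above:

```python
# Fill the 9 x 9 matrix of C-QQR correlation estimates over the decile grid,
# reusing the hypothetical cqqr() helper and return arrays from Sect. 67.2.2.
import numpy as np

taus = np.arange(0.1, 1.0, 0.1)                      # 0.1, 0.2, ..., 0.9
cqqr_matrix = np.empty((len(taus), len(taus)))       # rows: tau_y, columns: tau_x

for i, tau_y in enumerate(taus):
    for j, tau_x in enumerate(taus):
        cqqr_matrix[i, j] = cqqr(aus_ret, us_ret, tau_y=tau_y, tau_x=tau_x)
```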
[Figure 67.1: C-QQR correlation surfaces (panels a–d) for the US–Australia, US–Hong Kong, US–Japan, and US–Singapore return pairs.]
Fig. 67.1 C-QQR correlation. This figure plots the correlation between the quantile of the US market return and the quantile of the market return of Australia, Hong Kong, Japan, or Singapore using the C-QQR approach. The x-axis marks the quantiles of the US return and the y-axis marks the quantile of the other return
Note that the C-QQR approach is useful for showing the key locations in the return distributions where most of the movements in correlation take place. Specifically, it offers a new insight that the correlation may deviate sharply even at less extreme return quantiles. In the case of the US-Australia and US-Singapore return pairs, Fig. 67.2 shows that the correlation rises substantially starting from the 30th return percentiles. If the 30th return percentiles are associated with markets that are mildly bearish, this implies that markets do not have to be severely bearish in order to trigger a nontrivial increase in correlation. To evaluate the "performance" of the C-QQR approach informally, it is useful to compare the average correlation based on the C-QQR approach with the sample
Table 67.2 Sample and C-QQR correlations

                      Australia   Hong Kong    Japan    Singapore
Panel A
  Sample                0.5795      0.4867     0.4123     0.5987
  Average C-QQR         0.5697      0.5194     0.4657     0.6318
Panel B
  τy = τx = 0.1         0.8856      0.8358     0.9541     0.8132
  τy = τx = 0.5         0.4625      0.3583     0.3552     0.4757
  τy = τx = 0.9         0.5480      0.7212     0.3439     0.7976

Sample reports the sample correlation coefficient. C-QQR reports the average of the 81 correlation estimates, where each estimate $\hat{\varphi}(\tau_y, \tau_x)$ is specific to τy and τx defined on the grid [0.1, . . ., 0.9]. τy = τx = k reports the correlation between the k quantile of the US market return and the k quantile of the market return of Australia, Hong Kong, Japan, or Singapore, i.e., $\hat{\varphi}(k, k)$
correlation coefficient, which itself reflects the average level of dependence. This "average C-QQR correlation" can be obtained by averaging the 81 C-QQR correlation estimates for each return pair. Panel A of Table 67.2 compares the average C-QQR correlation with the sample correlation coefficient and shows that the two measures of average dependence are quite similar. For example, looking at the US-Australia return pair, the average C-QQR correlation is 0.57 (rounded to two decimal places), which is very close to the sample correlation of 0.58. However, the closeness between the average C-QQR correlation and the sample correlation coefficient is not specific to the US-Australia correlation. For instance, in the case of the US-Hong Kong and US-Singapore return pairs, their average C-QQR correlations are 0.52 and 0.63, respectively, which are very close to their sample correlation coefficients of 0.49 and 0.60. Even in the case of the largest disparity between the two correlation measures, which is found for the US-Japan return pair, the difference between the average C-QQR correlation and the sample correlation is only about 0.05. Given that the average C-QQR correlation delivers a reasonable measure of the average level of dependence as benchmarked by the sample correlation coefficient, one may interpret the C-QQR approach as a technique for decomposing the level of average dependence into levels that are specific to different points in the distribution of returns and thus to a wide spectrum of market conditions.
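The Panel A comparison is then just an average of the 81 grid estimates set against the ordinary sample correlation; a minimal sketch with the hypothetical objects from the earlier sketches:

```python
# Average C-QQR correlation vs. the sample correlation coefficient (Panel A, Table 67.2).
import numpy as np

avg_cqqr = cqqr_matrix.mean()                        # average of the 81 grid estimates
sample_corr = np.corrcoef(us_ret, aus_ret)[0, 1]     # ordinary sample correlation
print(f"average C-QQR: {avg_cqqr:.4f}, sample: {sample_corr:.4f}")
```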
67.3.1 Dynamic C-QQR Correlation

Having obtained the C-QQR correlation estimates, it is straightforward to construct a historical series of correlation. This construction is especially useful for shedding light on the behavior of correlation across time, and in this regard, the C-QQR approach is related to the celebrated Dynamic Conditional Correlation (DCC) framework of Engle (2002) that is designed for the study of the time series behavior of correlation. Therefore, another informal evaluation of the C-QQR approach is to compare its estimates directly with the DCC. This comparison is interesting as the two approaches are completely unrelated – the C-QQR approach is based
[Figure 67.2: C-QQR correlation along τy = τx, plotted against τ for Australia, Hong Kong, Japan, and Singapore (panels a–d).]
Fig. 67.2 C-QQR correlation, τy = τx. For τy = τx, this figure plots the correlation between the τy US return quantile and the τx quantile of the market return of Australia, Hong Kong, Japan, or Singapore using the C-QQR approach. The 95 % bootstrap confidence band is provided
[Figure 67.3: dynamic C-QQR correlation and DCC, in levels, 1974–1979, for Australia, Hong Kong, Japan, and Singapore (panels a–d).]
Fig. 67.3 Dynamic C-QQR correlation and DCC in levels, 1974–1979. The C-QQR correlation and the DCC are plotted for the correlation of the US market return with the market return of Australia, Hong Kong, Japan, or Singapore. The solid line corresponds to the 4-month moving average C-QQR correlation and the dotted line corresponds to the DCC that is estimated using the DCC-GARCH(1,1) specification
on quantile regressions and the DCC approach is based on the GARCH framework. As it turns out, despite the clear difference in the two modeling approaches, the C-QQR approach produces estimates that in some ways are visually comparable to the DCC.

The time series of correlation can be constructed by first matching each realized stock return at period t to the nearest return quantile that is used when estimating the C-QQR correlation. Once that is done for the US return and the return of the other market, we may then match a C-QQR correlation estimate (from the pool of 81 estimates) to these quantiles to obtain an approximate realized correlation at period t. By employing this matching approach for each period, we may construct an approximation of the historical correlation. Calling this the "dynamic C-QQR correlation," the detailed procedure for computing it is outlined as follows (a code sketch of this procedure is given at the end of this discussion):
1. Compute the empirical distributions for the returns to the US market (x) and to the market returns of Australia, Hong Kong, Japan, or Singapore (y) for each period t. Denote them by $\tilde{F}_x(x_t)$ and $\tilde{F}_y(y_t)$.
2. If $\tilde{F}_x(x_t)$ or $\tilde{F}_y(y_t)$ is less than 0.05, add 0.05. If $\tilde{F}_x(x_t)$ or $\tilde{F}_y(y_t)$ is greater than 0.95, subtract 0.05. Call the new series $\tilde{F}_x^{(1)}(x_t)$ and $\tilde{F}_y^{(1)}(y_t)$.
3. Round $\tilde{F}_x^{(1)}(x_t)$ and $\tilde{F}_y^{(1)}(y_t)$ to the nearest first decimal place. The new series $\tilde{F}_x^{(2)}(x_t)$ and $\tilde{F}_y^{(2)}(y_t)$ will be on the grid [0.1, . . ., 0.9].
4. The C-QQR correlation estimates form a 9 × 9 matrix, where each point on the matrix corresponds to a combination of points on two [0.1, . . ., 0.9] grids, with each grid representing the return percentiles of the US market and the other market, respectively. The correlation at time t is obtained by matching $\tilde{F}_x^{(2)}(x_t)$ and $\tilde{F}_y^{(2)}(y_t)$ to the C-QQR correlation matrix of estimates.

For a close-up comparison, Fig. 67.3 plots the dynamic C-QQR correlation (4-month moving average) and the DCC for the first 5-year period in the sample, from 1974 to 1979, and Fig. 67.4 plots these correlations for the last 5-year period in the sample, from 2005 to 2010. Choosing the first and last 5-year periods allows us to observe how the dynamic C-QQR correlation and DCC compare across the two most distant 5-year periods in the sample. Besides comparing their levels, it is also useful to compare their first differences, as doing so helps us gain further insight into how the dynamic C-QQR correlation behaves relative to the DCC. Focusing on the 1974–1979 period, Fig. 67.3 shows that the dynamic C-QQR correlation is similar to the DCC in terms of movements, although not necessarily in terms of magnitude. For instance, comparing the C-QQR correlation and the DCC for the US-Australia return pair, Fig. 67.3 shows a decline in the C-QQR correlation from around July 1974 to April 1975, while the DCC manifests a similar downward motion from January to October 1975. Likewise, the C-QQR correlation trends upwards from April 1976 to January 1977, with the DCC following suit from October 1976 to around the same time.
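A minimal sketch of the four-step matching procedure above (my own illustration; cqqr_matrix, us_ret, and aus_ret are the hypothetical objects from the earlier sketches, with rows of cqqr_matrix indexing τy and columns indexing τx):

```python
# Construct the dynamic C-QQR correlation by matching each period's realized
# returns to the nearest decile grid quantile (steps 1-4 above).
import numpy as np
from scipy.stats import rankdata

def to_grid(returns):
    """Empirical CDF values, pulled away from the tails and rounded to the decile grid."""
    ecdf = rankdata(returns) / len(returns)          # step 1: empirical distribution
    ecdf = np.where(ecdf < 0.05, ecdf + 0.05, ecdf)  # step 2: adjust extreme values inward
    ecdf = np.where(ecdf > 0.95, ecdf - 0.05, ecdf)
    return np.clip(np.round(ecdf, 1), 0.1, 0.9)      # step 3: round onto [0.1, ..., 0.9]

tau_x_t = to_grid(us_ret)                            # grid quantile of the US return at each t
tau_y_t = to_grid(aus_ret)                           # grid quantile of the other market's return

# Step 4: look up the matching entry of the 9 x 9 C-QQR matrix for each period.
row = np.rint(tau_y_t * 10).astype(int) - 1          # 0.1 -> row 0, ..., 0.9 -> row 8
col = np.rint(tau_x_t * 10).astype(int) - 1
dynamic_cqqr = cqqr_matrix[row, col]
```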
[Figure 67.4: dynamic C-QQR correlation and DCC, in levels, 2005–2010, for Australia, Hong Kong, Japan, and Singapore (panels a–d).]
Fig. 67.4 Dynamic C-QQR correlation and DCC in levels, 2005–2010. The C-QQR correlation and the DCC are plotted for the correlation of the US market return with the market return of Australia, Hong Kong, Japan, or Singapore. The solid line corresponds to the 4-month moving average C-QQR correlation and the dotted line corresponds to the DCC that is estimated using the DCC-GARCH(1,1) specification
[Figure 67.5: dynamic C-QQR correlation and DCC, in first difference, 1974–1979, for Australia, Hong Kong, Japan, and Singapore.]
Fig. 67.5 Dynamic C-QQR correlation and DCC in first difference, 1974–1979. The C-QQR correlation and the DCC, in first difference, are plotted for the correlation of the US market return with the market return of Australia, Hong Kong, Japan, or Singapore. The solid line corresponds to the 4-month moving average C-QQR correlation and the dotted line corresponds to the DCC that is estimated using the DCC-GARCH(1,1) specification
Focusing on their first differences, Fig. 67.5 shows that the peaks in the first difference of the dynamic C-QQR correlation are followed by similar peaks in the first difference of the DCC on many occasions, although the sizes of these changes across the C-QQR correlation and the DCC are somewhat different. Besides the US-Australia return pair, the C-QQR correlation and the DCC display noticeable similarities with respect to the US-Hong Kong and US-Singapore return pairs, but least resemble each other in the case of US-Japan.

During the 2005–2010 period, not only do the dynamic C-QQR correlation and the DCC display similar trends, they also appear to be moving along the same path. For instance, when focusing on the US-Australia and US-Hong Kong return pairs, Fig. 67.4 shows that the C-QQR correlation and the DCC track each other closely with similar levels. In terms of their first difference, Fig. 67.6 also shows that while changes in the C-QQR correlation and the DCC occur nearly in tandem, the changes in the DCC are usually preceded by changes in the C-QQR correlation in the same direction. For example, in the case of the US-Australia return pair, Fig. 67.6 shows that the peaks in the first difference of the C-QQR correlation around April 2005, November 2005, and June 2006 are followed by peaks in the first difference of the DCC about a month or two later.

The US-Japan return pair presents an interesting case in the comparison between the dynamic C-QQR correlation and the DCC. For example, I find the DCC in this context to be more or less steady throughout the sample period, including the 1974–1979 and 2005–2010 periods that saw the 1974–1975 US and global recession triggered by the tripling of the price of oil and the global financial crisis that started in 2007. Interestingly, while the C-QQR correlation typically meanders around a steady level throughout the sample period, it is also characterized by sharp upward movements during 1974–1975 and starting from the end of November 2007, the periods when global financial markets were bearish. And, during the times when the C-QQR correlation is free from these large deviations, it is nearly identical to the DCC. This can be seen by comparing the two correlation series during April 1976 to August 1977 and April 1978 to March 1979 in Fig. 67.3, and prior to February 2006 in Fig. 67.4, where the two correlation estimates nearly coincide.

While this exercise does not offer a statistical evaluation of the closeness between the dynamic C-QQR correlation and the DCC, visual inspection of the two correlation series reveals some common features between them, especially during 2005–2010. That there are some similarities between the dynamic C-QQR correlation and the DCC is somewhat surprising since, from the outset, the C-QQR and the DCC approaches are based on completely different modeling paradigms.
[Figure 67.6: dynamic C-QQR correlation and DCC, in first difference, 2005–2010, for Australia, Hong Kong, Japan, and Singapore.]
Fig. 67.6 Dynamic C-QQR correlation and DCC in first difference, 2005–2010. The C-QQR correlation and the DCC, in first difference, are plotted for the correlation of the US market return with the market return of Australia, Hong Kong, Japan, or Singapore. The solid line corresponds to the 4-month moving average C-QQR correlation and the dotted line corresponds to the DCC that is estimated using the DCC-GARCH(1,1) specification

67.4 Conclusion

This chapter discusses a new perspective on modeling correlation, based on the C-QQR approach, which focuses on the correlation between the conditional quantiles of asset returns as a way of uncovering the level of dependence for specific market conditions. The C-QQR approach has the ability to replicate key features of the correlation between stock returns that have been noted before. For instance, it shows that the correlation between lower return quantiles is stronger than that between
centrally located return quantiles, which corroborates the familiar observation that bear markets are more strongly correlated. The C-QQR approach also has the ability to produce a constructed time series of correlation that resembles the DCC even though the C-QQR and DCC approaches are completely unrelated. Given that the C-QQR framework produces correlation estimates that match the findings of existing non-dynamic-based approaches of modeling correlation and the estimates of the dynamic-based DCC approach, it therefore empirically bridges the gap between the dynamic and non-dynamic-based paradigms of modeling correlation. Nevertheless, the application of the concept of quantile dependence in financial economics is still at an early stage, and some issues could be addressed going forward. Firstly, I presented a bivariate version of the C-QQR model for the analysis of pairwise correlation. As financial markets are interrelated, an extension to the multivariate case would be an important direction. Secondly, I use the auxiliary equation to model the US return, and it would be interesting to investigate the implications for the final correlation estimate of doing the opposite, that is, using the auxiliary equation to model the market returns of Australia, Hong Kong, Japan, and Singapore. Finally, in terms of applications, it should be emphasized that the relevance of the C-QQR approach is not confined to the study of equities alone. For instance, one could also look at the correlation between stocks and bonds through the lens of the C-QQR approach and examine the "flight to quality" hypothesis, which emphasizes the tendency of investors to underweight equities in favor of bonds in the face of market uncertainties (e.g., Connolly et al. 2005). It would also be interesting to apply the C-QQR approach to examine issues in macroeconomics, such as studying the nonlinearity in the relationships between macroeconomic aggregates, which has been a topic of considerable interest in recent research.
References Ang, A., & Bekaert, G. (2002). International asset allocation with regime shifts. Review of Financial Studies, 15, 1137–1187. Ang, A., & Chen, J. (2002). Asymmetric correlations of equity portfolios. Journal of Financial Economics, 63, 443–494. Balvers, R. J., Cosimano, T. F., & McDonald, B. (1990). Predicting stock returns in an efficient market. Journal of Finance, 45, 1109–1128. Bouye`, E., & Salmon, M. (2009). Dynamic copula quantile regressions and tail area dynamic dependence in Forex markets. European Journal of Finance, 15, 721–750. Chen, N. F., Roll, R. R., & Ross, S. A. (1986). Economic forces and the stock market. Journal of Business, 59, 383–403. Chen, X., Koenker, R., & Xiao, Z. (2009). Copula-based nonlinear quantile regression autoregression. Econometrics Journal, 12, S50–S67. Chollete, L., de la Pena˜, V., & Lu, C. (2011). International diversification: A copula approach. Journal of Banking and Finance, 35, 403–417. Connolly, R. A., Stivers, C. T., & Sun, L. (2005). Stock market uncertainty and the stock-bond return relation. Journal of Financial and Quantitative Analysis, 40, 161–194. Engle, R. F. (2002). Dynamic conditional correlation: A simple class of multivariate generalized autoregressive conditional heteroskedasticity models. Journal of Business and Economic Statistics, 20, 339–350.
Erb, C. B., Harvey, C. R., & Viskanta, T. E. (1994). Forecasting international equity correlations. Financial Analysts Journal, 50, 32–45.
Flannery, M. J., & Protopapadakis, A. A. (2002). Macroeconomic factors do influence aggregate stock returns. Review of Financial Studies, 15, 751–782.
Guidolin, M., & Timmermann, A. (2005). Economic implications of bull and bear regimes in UK stock and bond returns. Economic Journal, 115, 111–143.
Heffernan, J. E., & Tawn, J. A. (2004). A conditional approach for multivariate extreme values. Journal of the Royal Statistical Society B, 66, 497–546.
Hu, L. (2006). Dependence patterns across financial markets: A mixed copula approach. Applied Financial Economics, 16, 717–729.
Koenker, R. (2009). Quantile regression in R: A vignette. Mimeo.
Koenker, R., & Bassett, G. (1978). Regression quantiles. Econometrica, 46, 33–50.
Longin, F., & Solnik, B. (1995). Is the correlation in international equity returns constant: 1960–1990? Journal of International Money and Finance, 14, 3–26.
Longin, F., & Solnik, B. (2001). Extreme correlations in international equity markets. Journal of Finance, 56, 649–676.
Ma, L., & Koenker, R. (2006). Quantile regression methods for recursive structural equation models. Journal of Econometrics, 134, 471–506.
McQueen, G., & Roley, V. V. (1993). Stock prices, news, and business conditions. Review of Financial Studies, 6, 683–707.
Nelsen, R. B. (2006). An introduction to copulas (2nd ed.). New York: Springer.
Okimoto, T. (2008). New evidence of asymmetric dependence structures in international equity markets. Journal of Financial and Quantitative Analysis, 43, 787–815.
Patton, A. J. (2006). Modelling asymmetric exchange rate dependence. International Economic Review, 47, 527–555.
Poon, S. H., Rockinger, M., & Tawn, J. (2004). Extreme value dependence in financial markets: Diagnostics, models and financial implications. Review of Financial Studies, 17, 581–610.
Shanken, J., & Weinstein, M. I. (2006). Economic forces and the stock market revisited. Journal of Empirical Finance, 13, 129–144.
Sim, N. (2012). Modeling the correlations of asset returns by the dependence between their quantiles. Working paper.
Trivedi, P. K., & Zimmer, D. A. (2007). Copula modeling: An introduction for practitioners. Hanover: Now Publishers.
Multi-criteria Decision Making for Evaluating Mutual Funds Investment Strategies
68
Shin Yun Wang and Cheng-Few Lee
Contents
68.1 Introduction .......................................... 1858
68.2 Review of Mutual Fund Investment .......................................... 1859
68.3 The Method of Multi-criteria Decision Making .......................................... 1860
  68.3.1 l-Measure .......................................... 1861
  68.3.2 The Integral .......................................... 1862
  68.3.3 The Integral Multi-criteria Assessment Methodology .......................................... 1862
68.4 Evaluation Model for Prioritizing the Investment Strategy .......................................... 1863
  68.4.1 Evaluating the Mutual Fund Strategy Hierarchy System .......................................... 1864
  68.4.2 The Process for Evaluating and Prioritizing Mutual Fund Strategies .......................................... 1865
68.5 Empirical Examinations and Discussions .......................................... 1865
  68.5.1 Evaluating the Weights of Criteria .......................................... 1866
  68.5.2 Evaluation and Prioritization of the Mutual Fund Strategy .......................................... 1866
  68.5.3 Comparing with the Empirical Data .......................................... 1866
68.6 Conclusions .......................................... 1868
Appendix 1: The Description of Evaluative Criteria of Mutual Funds .......................................... 1869
Appendix 2: Summary Statistics for Returns of the Mutual Funds .......................................... 1870
Appendix 3: Summary Statistics for Returns of the Mutual Funds .......................................... 1873
Appendix 4 .......................................... 1874
References .......................................... 1876
S.Y. Wang (*)
National Dong Hwa University, Shou-Feng, Hualien, Taiwan
e-mail: [email protected]

C.-F. Lee
Department of Finance and Economics, Rutgers Business School, Rutgers, The State University of New Jersey, Piscataway, NJ, USA
Graduate Institute of Finance, National Chiao Tung University, Hsinchu, Taiwan
e-mail: [email protected]

C.-F. Lee, J. Lee (eds.), Handbook of Financial Econometrics and Statistics, DOI 10.1007/978-1-4614-7750-1_68, © Springer Science+Business Media New York 2015
Abstract
Investors often need to evaluate investment strategies in terms of numerical values based upon various criteria when making investments. This situation can be regarded as a multiple criteria decision-making (MCDM) problem. This is oftentimes the basic assumption in applying a hierarchical system for evaluating strategies for selecting an investment style. We employ criteria measurements to evaluate investment style. To achieve this objective, first, we employ factor analysis to extract independent common factors from those criteria. Second, we construct the evaluation frame using a hierarchical system composed of the above common factors with evaluation criteria and then derive the relative weights with respect to the considered criteria. Third, the synthetic utility value corresponding to each investment style is aggregated from the weights and performance values. Finally, we compare the results with empirical data and find that the MCDM model predicts the rate of return.
Investment strategies • Multiple criteria decision making (MCDM) • Hierarchical system • Investment style • Factor analysis • Synthetic utility value • Performance values
68.1
Introduction
The number of mutual funds has increased exceeding the number of stocks listed on the organized exchange, hence making the selection of mutual funds an onerous task for the investor. In addition, the mutual funds are moving rapidly towards financial market development in response to increasing market demand and the mutual fund industry. Therefore the mutual funds have huge market potential and have been gaining momentum in the financial market. The complexities are numerous, and overcoming these complexities to offer successful selections is a mutual fund manager’s challenge. The mutual fund managers need to evaluate aquatic return so as to reduce its risk to find the optimal combination of invested stocks out of many feasible stocks and distribute the amount of investing funds to many stocks. Because of the limited amount of funds invested into mutual funds, the solution of the portfolio selection problem proposed by Markowitz (1952) has a tendency to increase the number of stocks selected for mutual funds. In a real investment, a fund manager first makes a decision on how much proportion of the investment should go to the market, and then he invests the fund to which stocks which is the stock selection ability. After that, many researchers explained in the presence of market-timing ability that actions will affect the performance of mutual funds. When investing mutual funds, some reports also point out that there are 90 % of investors who will consider the rate of return firstly and then the reputation of mutual fund corporation and
investment risk. Maximizing mutual fund performance is the primary goal of mutual fund managers in a corporation. Usually, the mutual fund return reflects the financial performance of a fund corporation's operations and development. This study explores which criteria can lead to high mutual fund performance. To achieve this purpose, we use the method of multi-criteria decision making (MCDM). MCDM is one of the most widely used decision methodologies in engineering, medicine, economics, law, the environment, public policy, and business. The theory, methodology, and practice of MCDM have experienced a revolutionary process during the last five decades. MCDM methods aim at improving the quality of decisions by making the process more explicit, rational, and efficient. One intriguing problem is that, oftentimes, different methods may yield different answers to the same decision problem. Thus, the issue of evaluating the relative performance of different MCDM methods is raised. One evaluating procedure is to examine the stability of an MCDM method's mathematical process by checking the validity of its proposed rankings. However, within a dynamic and diversified decision-making environment, the traditional quantitative method does not solve the non-quantitative problems of investment selection. Therefore, what is needed is a useful and applicable strategy that addresses the issues of investment selection. We thus propose an MCDM method to evaluate the hierarchy system for selecting investment strategies. In this study, the hierarchical analytic approach is used to determine the weights of criteria from subjective judgment, and a nonadditive integral technique is utilized to evaluate the performance of investment styles (a schematic sketch of this type of nonadditive aggregation is given at the end of this section). Traditionally, researchers have used additive techniques to evaluate the synthetic performance of each criterion.

The rest of the chapter is organized as follows: Mutual fund literature is discussed in the next section. The method of MCDM, including the hierarchical analytic approach and the nonadditive integral evaluation process for MCDM problems, is derived in Sect. 68.3. An illustrative example, which applies the MCDM method to investment, is then presented in Sect. 68.4. We then discuss and show how the MCDM methods in this chapter are effective in Sect. 68.5. Finally, the conclusions are presented in Sect. 68.6.
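The nonadditive aggregation idea mentioned above can be illustrated with a generic Sugeno λ-measure and a Choquet-type integral. The sketch below is only a schematic illustration of that idea under made-up criterion densities and scores; it is not the chapter's own specification, which is developed in Sect. 68.3.

```python
# Schematic nonadditive aggregation: Sugeno lambda-measure + Choquet-type integral.
# Criterion densities (assumed to lie in (0, 1)) and performance scores are made-up.
import numpy as np
from scipy.optimize import brentq

def solve_lambda(densities):
    """Solve prod(1 + lam*g_i) = 1 + lam for the nontrivial lambda (lam = 0 is always a root)."""
    g = np.asarray(densities, dtype=float)
    f = lambda lam: np.prod(1.0 + lam * g) - (1.0 + lam)
    s = g.sum()
    if abs(s - 1.0) < 1e-12:
        return 0.0                                   # additive case
    if s > 1.0:
        return brentq(f, -1.0 + 1e-9, -1e-12)        # lambda lies in (-1, 0)
    return brentq(f, 1e-12, 1e6)                     # lambda lies in (0, inf)

def lambda_measure(subset, densities, lam):
    """g_lambda of a subset of criterion indices, built up from the densities."""
    g = 0.0
    for i in subset:
        g = g + densities[i] + lam * g * densities[i]
    return g

def choquet(scores, densities):
    """Choquet integral of performance scores with respect to g_lambda."""
    lam = solve_lambda(densities)
    order = np.argsort(scores)                       # criteria sorted by score, ascending
    total, prev = 0.0, 0.0
    for k, i in enumerate(order):
        coalition = list(order[k:])                  # criteria with scores >= scores[i]
        total += (scores[i] - prev) * lambda_measure(coalition, densities, lam)
        prev = scores[i]
    return total

# Hypothetical example: three criteria with densities 0.3, 0.4, 0.5 and scores 60, 80, 70.
print(choquet(np.array([60.0, 80.0, 70.0]), [0.3, 0.4, 0.5]))
```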
68.2 Review of Mutual Fund Investment
Mutual fund research abounds in finance literature, and the investment performance of mutual fund managers has been extensively examined. Most of these studies employ a method developed by Jensen (1968, 1969) and later refined by Black et al. (1972) and Blume and Friend (1973). Such a method compares a particular manager’s performance with that of a benchmark index fund. Connor and Korajczyk (1986) develop a method of portfolio performance measurement using a competitive version of the arbitrage pricing theory (APT). However, they ignore any potential market timing by managers. One weakness of the above approach is
that it fails to separate the aggressiveness of a fund manager from the quality of the information he/she possesses. It is apparent that superior performance of a mutual fund manager occurs because of his/her ability to "time" the market and the ability to forecast the returns on individual assets. Jensen (1968) demonstrates that the presence of market-timing ability is an important factor in mutual fund selections. Grant (1977) explains how market-timing actions will affect the results of empirical tests that focus only on microforecasting skills. Fama (1972) indicates that there are two ways for fund managers to obtain abnormal returns. The first one is security analysis, which is the ability of fund managers to identify the potential winning securities. The second one is market timing, which is the ability of portfolio managers to time market cycles and take advantage of this ability in trading securities. Treynor and Mazuy (1966) add a quadratic term to the Jensen function to test for market-timing ability. Chen and Stockum (1986) employ a generalized varying parameter model, which treats Treynor and Mazuy (1966) as a special case, to study mutual funds' stock selectivity and market-timing ability. They find that mutual funds as a group exhibit some evidence of stock selection ability yet no market-timing ability. Jensen (1972) develops theoretical structures for the evaluation of micro- and macroforecasting performance of fund managers where the basis for evaluation is a comparison of the ex post performance of the fund manager with the returns on the market. The Merton (1981) and Henriksson (1984) model differs from Jensen's formulation in that their forecasters follow a more qualitative approach to market timing. Chang and Lewellen (1984) and Henriksson (1984) employ the Merton-Henriksson model in evaluating mutual fund performance and find no evidence of market timing by fund managers. Bhattacharya and Pfleiderer (1983) extend the work of Jensen (1972). By correcting an error made in Jensen, they show that one can use a simple regression technique to obtain accurate measures of timing and selection ability. Lehmann and Modest (1987) combine the APT performance evaluation method with the Treynor and Mazuy (1966) quadratic regression technique. They find statistically significant abnormal timing and selectivity performance by mutual funds. They also examine the impact of alternative benchmarks on the performance of mutual funds and find that performance measures are quite sensitive to the benchmark chosen. Also, Henriksson (1984) finds a negative correlation between the measures of stock selection and market-timing ability. Finally, Lee and Rahman (1990) also empirically examine the market timing and selectivity performance of mutual funds. Furthermore, Jorge et al. (2006) deal with the relevance of benchmark choice for mutual fund performance behavior, and Spitz (1970) studies the relationship between mutual fund performance and cash inflows. Blake and Morey (2000) examine the relation between Morningstar ratings and mutual fund performance. The above-mentioned studies concentrate on a fund manager's security selection and market-timing skills. However, external evaluation, human judgment, and subjective perception also affect the performance of mutual funds. In a real-world setting, the performance of mutual funds involves many criteria. In this article we will discuss these criteria and performance at the same time.
68.3 The Method of Multi-criteria Decision Making
Kuosmanen (2004) and Kopa and Post (2009) use stochastic dominance criteria to test portfolio optimality and efficient diversification. In this section we employ factor analysis to extract four independent common factors from those criteria. At the same time, we construct the evaluation frame using the AHP (analytic hierarchy process), which is composed of the above four common factors with sixteen evaluation criteria, and derive the relative weights with respect to the considered criteria and the synthetic utility value corresponding to each mutual fund investment style. According to the literature review and our questionnaire survey, we employ factor analysis to extract independent common factors from the criteria. At the same time we construct the evaluation framework using a hierarchical system composed of the above common factors with evaluation criteria and derive the relative weights pertinent to the considered criteria. Then the synthetic utility value corresponding to each investment style is aggregated from the weights and performance values. The traditional analytic hierarchy process (AHP) assumes that there is no interaction between any two criteria within the same hierarchy. However, in reality, a criterion is inevitably correlated with other criteria. In this section, we give a brief introduction to some notions from the theory of the measure and the integral, and we describe a hierarchical analytic approach to determine the weighting of subjective judgments.
68.3.1 λ-Measure

The specification for general measures requires the values of a measure for all subsets of X. Let (X, β, g) be a measure space and λ ∈ (−1, ∞). If A ∈ β, B ∈ β, and A ∩ B = ∅, and

g(A \cup B) = g(A) + g(B) + \lambda\, g(A)\, g(B),   (68.1)

then the measure g is λ-additive. This kind of measure is named the λ-measure, or the Sugeno measure. In this chapter we denote this λ-measure by g_\lambda to differentiate it from other measures. Based on the axioms above, the λ-measure of a finite set can be derived from the densities, as indicated in the following equation:

g_\lambda(\{x_1, x_2\}) = g_1 + g_2 + \lambda\, g_1 g_2,   (68.2)

where g_1 and g_2 represent the densities. Let the set X = {x_1, x_2, ..., x_n} and the density of the measure g_i = g_\lambda(\{x_i\}), which can be formulated as follows:

g_\lambda(\{x_1, x_2, \ldots, x_n\}) = \sum_{i=1}^{n} g_i + \lambda \sum_{i_1=1}^{n-1} \sum_{i_2=i_1+1}^{n} g_{i_1} g_{i_2} + \cdots + \lambda^{n-1} g_1 g_2 \cdots g_n.   (68.3)
For an evaluation case with two criteria, A and B, there are three cases based on the above properties:

Case 1: If λ > 0, i.e., g_λ(A ∪ B) > g_λ(A) + g_λ(B), implying that A and B have a multiplicative effect.
Case 2: If λ = 0, i.e., g_λ(A ∪ B) = g_λ(A) + g_λ(B), implying that A and B have an additive effect.
Case 3: If λ < 0, i.e., g_λ(A ∪ B) < g_λ(A) + g_λ(B), implying that A and B have a substitutive effect.

The measure is often used with the integral for aggregating information evaluation by considering the influence of the substitutive and multiplicative effects among all criteria.
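To make the three cases concrete, the following sketch (our own illustration; the density values are hypothetical and not from the chapter) evaluates the λ-measure of a two-criterion set from given densities using the closed product form implied by Eqs. 68.1-68.3 and prints which effect obtains for a negative, zero, and positive λ.

import numpy as np

def lambda_measure(densities, lam):
    # g_lambda of a whole set {x1,...,xn} from its densities g_i (Eq. 68.3).
    # For lam == 0 the measure is additive; otherwise the closed product form
    # g_lambda(X) = (prod_i(1 + lam*g_i) - 1) / lam is used (it also appears in Eq. 68.8).
    densities = np.asarray(densities, dtype=float)
    if abs(lam) < 1e-12:
        return densities.sum()
    return (np.prod(1.0 + lam * densities) - 1.0) / lam

gA, gB = 0.4, 0.3  # hypothetical densities of two criteria A and B
for lam in (-0.5, 0.0, 0.5):
    g_union = lambda_measure([gA, gB], lam)
    g_sum = gA + gB
    if np.isclose(g_union, g_sum):
        effect = "additive"
    elif g_union > g_sum:
        effect = "multiplicative"
    else:
        effect = "substitutive"
    print(f"lambda={lam:+.1f}: g(A u B)={g_union:.3f}, g(A)+g(B)={g_sum:.3f} -> {effect}")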
68.3.2 The Integral

In a measure space (X, β, g), let h be a measurable function defined on the measurable space. Then the integral of h over A with respect to g is defined as

\int_A h(x)\, dg = \sup_{\alpha \in [0,1]} \left[ \alpha \wedge g(A \cap H_\alpha) \right],   (68.4)

where H_\alpha = \{x \in X \mid h(x) \ge \alpha\} and A is the domain of the integral. When A = X, A can be omitted. Next, the integral calculation is described in the following. For the sake of simplification, consider a measure g of (X, ℵ), where X is a finite set. Let h : X → [0, 1] and assume without loss of generality that the function h(x_j) is monotonically decreasing with respect to j, i.e., h(x_1) ≥ h(x_2) ≥ ⋯ ≥ h(x_n). To achieve this, the elements in X can be renumbered. With this, we then have

\int h(x)\, dg = \bigvee_{i=1}^{n} \left[ h(x_i) \wedge g(X_i) \right],   (68.5)

where X_i = \{x_1, x_2, \ldots, x_i\}, i = 1, 2, \ldots, n. In practice, h is the evaluated performance on a particular criterion for the alternatives, and g represents the weight of each criterion. The integral of h with respect to g gives the overall evaluation of the alternative. In addition, we can use the same measure with the Choquet integral, defined as follows:

\int h\, dg = h(x_n) g(X_n) + [h(x_{n-1}) - h(x_n)] g(X_{n-1}) + \cdots + [h(x_1) - h(x_2)] g(X_1).   (68.6)

The integral model can be used in a nonlinear situation since it does not need to assume the independence of each criterion.
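A short sketch of the two aggregations above (our own illustration; the scores and densities are hypothetical): criterion scores h on a [0, 1] scale are sorted in decreasing order, the λ-measure of each cumulative set X_i is built from the densities via the product form of Eq. 68.8, and both the max-min integral of Eq. 68.5 and the Choquet integral of Eq. 68.6 are computed.

import numpy as np

def cumulative_lambda_measures(densities_sorted, lam):
    # g_lambda(X_i) for X_i = {x_1,...,x_i}, with the x's already ordered by h (product form of Eq. 68.8).
    g = []
    for i in range(1, len(densities_sorted) + 1):
        d = np.asarray(densities_sorted[:i], dtype=float)
        g.append(d.sum() if abs(lam) < 1e-12 else (np.prod(1.0 + lam * d) - 1.0) / lam)
    return np.array(g)

def maxmin_integral(h, densities, lam):
    # Eq. 68.5: max over i of min(h(x_i), g(X_i)), with h sorted in decreasing order.
    order = np.argsort(h)[::-1]
    h_sorted = np.asarray(h, dtype=float)[order]
    g = cumulative_lambda_measures(np.asarray(densities, dtype=float)[order], lam)
    return float(np.max(np.minimum(h_sorted, g)))

def choquet_integral(h, densities, lam):
    # Eq. 68.6: h(x_n)g(X_n) + [h(x_{n-1}) - h(x_n)]g(X_{n-1}) + ... + [h(x_1) - h(x_2)]g(X_1).
    order = np.argsort(h)[::-1]
    h_sorted = np.asarray(h, dtype=float)[order]
    g = cumulative_lambda_measures(np.asarray(densities, dtype=float)[order], lam)
    diffs = h_sorted - np.append(h_sorted[1:], 0.0)
    return float(np.sum(diffs * g))

h = [0.80, 0.65, 0.90, 0.40]          # hypothetical performance scores on four criteria
densities = [0.35, 0.25, 0.30, 0.10]  # hypothetical criterion densities
for lam in (0.0, 5.0):
    print(lam, maxmin_integral(h, densities, lam), round(choquet_integral(h, densities, lam), 3))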
68.3.3 The Integral Multi-criteria Assessment Methodology

The integral is used in this study to combine assessments primarily because this model does not need to assume independence among the criteria. A brief overview of the integral is presented here. Assume, under general conditions, that h(x_{k1}) ≥ h(x_{ki}) ≥ ⋯ ≥ h(x_{kn}), where h(x_{ki}) is the performance value of the kth alternative for the ith criterion. The integral of h(x_{kn}) with respect to the measure g_\lambda(X_{kn}) on ℵ (g : ℵ → [0, 1]) can be defined as follows:

\int_k h\, dg = h(x_{kn}) g_\lambda(X_{kn}) + [h(x_{k,n-1}) - h(x_{kn})] g_\lambda(X_{k,n-1}) + \cdots + [h(x_{k1}) - h(x_{k2})] g_\lambda(X_{k1}),   (68.7)

where g_\lambda(X_{k1}) = g_\lambda(\{x_{k1}\}), g_\lambda(X_{k2}) = g_\lambda(\{x_{k1}, x_{k2}\}), \ldots, g_\lambda(X_{kn}) = g_\lambda(\{x_{k1}, x_{k2}, \ldots, x_{kn}\}). The measure of each individual criterion group g_\lambda(X_{kn}) can be expressed as follows:

g_\lambda(X_{kn}) = g_\lambda(\{x_{k1}, x_{k2}, \ldots, x_{kn}\})
 = \sum_{i=1}^{n} g_\lambda(\{x_{ki}\}) + \lambda \sum\sum g_\lambda(\{x_i\}) g_\lambda(\{x_j\}) + \cdots + \lambda^{n-1} g_\lambda(\{x_1\}) \cdots g_\lambda(\{x_n\})
 = \frac{1}{\lambda} \left[ \prod_{i=1}^{n} \left( 1 + \lambda\, g_\lambda(\{x_{ki}\}) \right) - 1 \right],   (68.8)

for −1 < λ < +∞. λ is the parameter that indicates the relationship among the related criteria (if λ = 0, Eq. 68.7 is an additive form; if λ ≠ 0, Eq. 68.7 is a nonadditive form).
68.4 Evaluation Model for Prioritizing the Investment Strategy
We build up a hierarchical system for evaluating investment strategies following Wang and Lee (2011). Its analytical procedure stems from three levels: (i) factors, (ii) criteria, and (iii) investment styles. We employ factor analysis to extract four independent common factors from the various criteria; these factors are (1) market timing, (2) stock selection ability, (3) fund size, and (4) teamwork. We construct the evaluation frame using a hierarchical system composed of the above four common factors with sixteen evaluation criteria. We then derive the relative weights pertinent to the considered criteria. According to the risk of investment, mutual funds with different investment styles are classified as S1, asset allocation style; S2, aggressive growth style; S3, equity income style; S4, growth style; and S5, growth income style. Based on the review of the literature, personal experience, and interviews with senior mutual fund managers, relevance trees are used to create hierarchical strategies for developing the optimal selection strategy of mutual funds.
Fig. 68.1 Relevance system of hierarchy tree for evaluating mutual fund strategy. The goal (performance of mutual fund) is decomposed into four factors with sixteen criteria — market timing (the ratio of fund's market share, market returns, risk-free interest rate, direction of fund flow), stock selection ability (P/E ratio, net asset value/market value, cash flow/market value, net asset value, risk premium), fund size (the market share of mutual fund, the growth rate of mutual fund scale, dividend yield of mutual fund), and teamwork (number of researchers, education of fund manager, known of fund manager, turnover rate of fund manager) — which are used to evaluate five investment styles: S1, asset allocation; S2, aggressive growth; S3, equity income; S4, growth; and S5, growth income.
The elements (nodes) of relevance trees are defined and identified in hierarchical strategies, the combination of which consists of an evaluating mechanism for selecting a mutual fund strategy, as shown in Fig. 68.1.
68.4.1 Evaluating the Mutual Fund Strategy Hierarchy System

Minimum risk or maximum return is usually used as the measurement index in traditional financial evaluation methods. Based on the risk of investment, mutual funds are classified into five investment styles, and we evaluate the funds' performance by the rate of return. Within a dynamic and diversified decision-making environment, the traditional quantitative method does not solve the non-quantitative problems of mutual fund selection. Therefore, what is needed is a useful and applicable strategy that addresses the issues of selecting mutual funds. We propose an MCDM method to evaluate the hierarchy system for selecting mutual fund strategies. The mutual fund performance architecture includes four components: market timing, stock selection ability, fund size, and teamwork. We first discuss conceptual and econometric issues associated with identifying the four components of mutual fund performance. We have chosen a multiple-criteria evaluation method for selecting and prioritizing the mutual fund strategies to optimize the real scenarios faced by managers or investors.
68.4.2 The Process for Evaluating and Prioritizing Mutual Fund Strategies

In this study, we use this MCDM method to evaluate various mutual fund strategies and rank them by performance. The following subsection describes the method of MCDM.
68.4.2.1 The Weights for the Hierarchy Process

An evaluator always perceives the weight of a hierarchy subjectively. Therefore, the uncertain, interactive effects coming from other criteria should be considered when calculating the weight of a specified criterion. The weight w_j corresponding to each criterion is as follows:

w_j = r_j (r_1 + r_2 + \cdots + r_m)^{-1},   (68.9)

where r_j is the geometric mean of each row of the AHP reciprocal matrix:

r_j = (a_{j1} \cdot a_{j2} \cdots a_{jm})^{1/m}.   (68.10)
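A minimal sketch of Eqs. 68.9-68.10 (our own illustration; the pairwise-comparison values are hypothetical): the weight of each criterion is the geometric mean of its row in a reciprocal comparison matrix, normalized by the sum of the row geometric means.

import numpy as np

def ahp_weights(A):
    # Eq. 68.10: r_j = (a_j1 * a_j2 * ... * a_jm)^(1/m), the geometric mean of row j.
    # Eq. 68.9:  w_j = r_j / (r_1 + ... + r_m).
    A = np.asarray(A, dtype=float)
    r = np.prod(A, axis=1) ** (1.0 / A.shape[1])
    return r / r.sum()

# Hypothetical 4x4 reciprocal comparison of the four factors
# (market timing, stock selection ability, fund size, teamwork)
A = np.array([
    [1.0,   3.0,   5.0,   9.0],
    [1/3.0, 1.0,   3.0,   7.0],
    [1/5.0, 1/3.0, 1.0,   5.0],
    [1/9.0, 1/7.0, 1/5.0, 1.0],
])
print(ahp_weights(A).round(3))  # weights sum to 1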
68.4.2.2 The Synthetic Decision

The weights of the different criteria and the performance values need to be combined using integral techniques to generate the synthetic performance of each strategy within the same dimension. Furthermore, we calculate the synthetic performance of each alternative strategy using different λ values. Additionally, the synthetic performance can be computed by a simple additive weighting method under the assumption that the criteria are independent. Since each individual criterion is not completely independent from the others, we use the nonadditive integral technique to find the synthetic performance of each alternative and to investigate how the ordering of the synthetic performances changes with different λ values.
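The following sketch (our own illustration; the style scores and densities are hypothetical) shows how the nonadditive aggregation of Eqs. 68.7-68.8 produces a synthetic performance for each investment style and how the induced ranking can shift as λ varies.

import numpy as np

def synthetic_performance(h, densities, lam):
    # Choquet-type aggregation of criterion scores h with a lambda-measure built from densities (Eqs. 68.7-68.8).
    order = np.argsort(h)[::-1]
    h_s = np.asarray(h, dtype=float)[order]
    d_s = np.asarray(densities, dtype=float)[order]
    g = np.empty(len(h_s))
    for i in range(len(h_s)):
        d = d_s[: i + 1]
        g[i] = d.sum() if abs(lam) < 1e-12 else (np.prod(1.0 + lam * d) - 1.0) / lam
    diffs = h_s - np.append(h_s[1:], 0.0)
    return float(np.sum(diffs * g))

styles = {"growth": [78, 64, 55, 70], "equity income": [66, 72, 60, 58]}  # hypothetical criterion scores
densities = [0.40, 0.30, 0.20, 0.10]                                      # hypothetical densities

for lam in (-0.5, 0.0, 5.0, 40.0):
    scores = {name: synthetic_performance(h, densities, lam) for name, h in styles.items()}
    ranking = sorted(scores, key=scores.get, reverse=True)
    print(f"lambda={lam:>6}: " + ", ".join(f"{name}={scores[name]:.1f}" for name in ranking))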
68.5 Empirical Examinations and Discussions
To demonstrate the practicality of our proposed method of evaluating mutual fund strategies, we conducted an empirical study based on a survey that yielded a total of 30 valid responses from managers of 12 Taiwanese mutual fund companies and researchers at eight research institutions and universities. The majority of the respondents are fund managers responsible for financial or general management. The mutual fund strategy selection process is examined below.
68.5.1 Evaluating the Weights of Criteria

By using the MCDM method, the weights of the factors and criteria are found and are shown in Table 68.1. The empirical evidence shown in Table 68.1 indicates that the weights of the four factors are market timing, 0.524; stock selection ability, 0.318; fund size, 0.141; and teamwork, 0.017, respectively. Therefore, market timing is the most important factor influencing the performance of mutual funds, followed by stock selection ability. Prior studies have simultaneously estimated the magnitudes of these portfolio performance evaluation measures. For example, some results show that, on average, mutual fund managers have positive security selection ability but negative market-timing ability (e.g., Chen and Stockum 1986). Since our results suggest that market timing carries a heavier weight than stock selection ability, mutual fund managers should improve their market-timing ability to enhance their performance.
68.5.2 Evaluation and Prioritization of the Mutual Fund Strategy

In this study, the surveyors define their individual range (from 0 to 100) for the linguistic variables based on their judgments. By ranking weights and synthetic performance values, we can determine the relative importance of the criteria and decide on the best strategies. We apply a λ-measure and the nonadditive integral technique to evaluate investment strategies. The synthetic performance of each alternative using different λ values is shown in Table 68.2. By ranking the synthetic performances for different λ values in Table 68.2, we obtain the mutual fund strategy ranking in Table 68.3. In Table 68.3, our empirical results show that when λ < 0, the aggressive growth style is the most important strategy and the growth style is the second most important strategy. When λ = 0-5, the results show that the growth style is the most important strategy and the equity income style is the second most important strategy. When λ = 10-30, the growth style is the most important strategy, followed by the growth income style. When λ ≥ 40, the results show that the growth income style replaces the growth style at the top, with the growth style becoming the second ranked. On the other hand, when λ ≥ 0, the asset allocation style is the worst strategy, with the smallest synthetic performance. We can thus infer that the less risky the funds are, the lower the performance of the funds will be.
68.5.3 Comparing with the Empirical Data

Monthly returns from January 1980 to September 1996 (201 months) for a sample of 65 US mutual funds are used in this study to generate mutual fund performance. The random sample of mutual funds is provided by Morningstar. Morningstar segregates mutual funds into four basic investment styles on the basis of the manager's portfolio characteristics. Our sample consists of 8 asset allocation (S1), 14 aggressive growth (S2), 10 equity income (S3), 16 growth (S4),
Table 68.1 The weights of criteria for evaluating mutual funds

Criteria                                  Weight
Market timing                             0.524
  The ratio of fund's market share        0.252
  Market returns                          0.495
  Risk-free interest rate                 0.212
  Direction of fund flow                  0.041
Stock selection ability                   0.318
  P/E ratio                               0.252
  Net asset value/market value            0.150
  Cash flow/market value                  0.080
  Net asset value                         0.187
  Risk premium                            0.331
Fund size                                 0.141
  The market share of mutual fund         0.341
  The growth rate of mutual fund scale    0.155
  Dividend yield of mutual fund           0.504
Teamwork                                  0.017
  Number of researchers                   0.293
  Education of fund manager               0.182
  Known of fund manager                   0.429
  Turnover rate of fund manager           0.096
Table 68.2 The synthetic performance of mutual fund style

λ        S1      S2      S3      S4      S5
−1.0     395.3   616.8   457.0   552.2   363.1
−0.5     538.0   901.2   682.5   855.7   382.9
0.0      300.7   311.9   313.0   315.7   310.5
1.0      299.9   310.8   311.8   312.8   309.3
3.0      298.8   307.6   309.5   311.7   307.2
5.0      297.1   306.0   307.8   310.2   305.3
10.0     296.1   303.2   304.5   308.6   304.9
20.0     295.9   300.2   302.1   304.4   303.5
40.0     293.7   297.3   297.6   300.3   302.1
100.0    292.5   294.0   294.5   298.9   301.9
150.0    291.6   292.0   291.7   295.8   300.1
200.0    291.0   290.9   290.5   294.6   299.5

S1 is the asset allocation fund, S2 is the aggressive growth, S3 is the equity income, S4 is the growth, and S5 is the growth income fund.

Table 68.3 The evaluation results of mutual fund strategy

λ                Mutual fund strategy ranking
λ = −1, −0.5     S2 > S4 > S3 > S1 > S5
λ = 0, 1, 3      S4 > S3 > S2 > S5 > S1
λ = 5            S4 > S3 > S5 > S2 > S1
λ = 10, 20       S4 > S5 > S3 > S2 > S1
λ = 40, 100      S5 > S4 > S3 > S2 > S1
λ = 150, 200     S5 > S4 > S2 > S3 > S1

Where S1, asset allocation style; S2, aggressive growth style; S3, equity income style; S4, growth style; and S5, growth income style.
and 17 growth income (S5) mutual funds. The monthly returns on the S&P 500 Index are used for the market returns. Monthly observations of the 30-day Treasury-bill rate are used as a proxy for the risk-free rate. Appendix 3 contains summary statistics for the returns of the mutual funds. All values are computed in excess of the returns on the US T-bills closest to 30 days to maturity. The data contain the mean, standard deviation, maximum, and minimum. Averages for each investment style show that the asset allocation style has the smallest expected return and also the smallest standard deviation. However, the aggressive growth style has the largest maximum return but also the smallest minimum return and the largest standard deviation. In other words, the more aggressive the funds are, the more volatile the fund returns will be. The primary purpose of comparing with the mutual fund performance data is to find the true value of λ. Given the true λ value, we can infer other mutual funds' performance during the same period. For example, in Appendix 3, we find that the pecking order of mutual funds' performance based upon investment styles is S4 > S5 > S3 > S2 > S1, which is the same order as shown in Table 68.3 when λ = 10, 20. Therefore, we can find the λ value for a certain period by comparing with a sample of mutual fund performance data. Based upon this λ value, we can easily predict the performance of other mutual funds.
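A small sketch of the matching step just described (our own illustration, using the synthetic performances reported in Table 68.2 and the style-average mean returns from Appendix 3): the λ values whose synthetic ranking reproduces the empirical pecking order are the ones selected.

# Synthetic performances from Table 68.2 for styles S1-S5 at each lambda
table_68_2 = {
    -1.0:  [395.3, 616.8, 457.0, 552.2, 363.1],
    -0.5:  [538.0, 901.2, 682.5, 855.7, 382.9],
     0.0:  [300.7, 311.9, 313.0, 315.7, 310.5],
     1.0:  [299.9, 310.8, 311.8, 312.8, 309.3],
     3.0:  [298.8, 307.6, 309.5, 311.7, 307.2],
     5.0:  [297.1, 306.0, 307.8, 310.2, 305.3],
    10.0:  [296.1, 303.2, 304.5, 308.6, 304.9],
    20.0:  [295.9, 300.2, 302.1, 304.4, 303.5],
    40.0:  [293.7, 297.3, 297.6, 300.3, 302.1],
   100.0:  [292.5, 294.0, 294.5, 298.9, 301.9],
   150.0:  [291.6, 292.0, 291.7, 295.8, 300.1],
   200.0:  [291.0, 290.9, 290.5, 294.6, 299.5],
}
styles = ["S1", "S2", "S3", "S4", "S5"]
appendix_3_mean = [0.391, 0.459, 0.527, 0.594, 0.544]  # mean monthly excess return by style

def ranking(values):
    # Style labels ordered from best to worst performance.
    return tuple(s for _, s in sorted(zip(values, styles), reverse=True))

empirical_order = ranking(appendix_3_mean)   # S4 > S5 > S3 > S2 > S1
matching_lambdas = [lam for lam, perf in table_68_2.items() if ranking(perf) == empirical_order]
print("empirical pecking order:", " > ".join(empirical_order))
print("lambda values with the same synthetic ranking:", matching_lambdas)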
68.6 Conclusions
This study focuses on providing a mutual fund strategy for mutual fund managers so that they can be successful in their decision making. Our empirical study demonstrates the validity of this method. In this study, the mutual fund strategy stems from four aspects: market timing, stock selection ability, fund size, and teamwork. Picking a mutual fund from the thousands available is not an easy task. Mutual fund managers have difficulty in selecting the proper strategy for reasons such as the uncertain and dynamic environment and the numerous criteria that they are facing. Managers are hence overwhelmed by this vague scenario and do not make proper decisions or allocate resources efficiently. The hierarchical method guides the manager in selecting the investment style of mutual funds in an uncertain environment. We compare our results with the empirical data and find that the MCDM model predicts the rate of return well in certain ranges of λ. Furthermore, we can use this λ value to compute the performance of different mutual funds; thus the nonadditive integral technique is an effective method for predicting mutual fund performance. By ranking weights and synthetic performance values, we determine the relative importance of the criteria, which allows us to decide on the best strategies. We apply a λ-measure and the nonadditive integral technique to evaluate investment strategies. By ranking the synthetic performances for different λ values, we obtain the mutual fund strategy ranking. Our empirical results show that when λ < 0, the aggressive growth style is the most important strategy; when λ = 0-5, the growth style is the most important strategy; when λ = 10-30, the growth style is the most important
strategy, followed by the growth income style strategy. However, when λ ≥ 40, the growth income style replaces the growth style at the top, with the growth style becoming the second ranked. On the other hand, when λ ≥ 0, the asset allocation style is the worst strategy, with the smallest synthetic performance. We can thus infer that the less risky the funds are, the lower the performance of the funds will be, and the more aggressive the funds are, the higher the volatility of the fund performance will be. Few studies have addressed mutual fund strategy planning. This study is a first attempt to formally model the formulation process for a mutual fund strategy using MCDM. We believe that the analysis presented is a significant contribution to the literature and will help to establish groundwork for future research. Even though we are dedicated to setting up the model as completely as possible, there are additional criteria (e.g., tax, expenses, dividends) and methods that could be adopted and added in future research. The mutual fund industry is growing rapidly in the financial markets in response to increasing demand. Therefore, what is needed is a useful and applicable method that addresses the selection of mutual funds. We use an MCDM method to achieve this goal.
Appendix 1 The Description of Evaluative Criteria of Mutual Funds

Market timing: The ability of portfolio managers to time market cycles and take advantage of this ability in trading securities.
The ratio of fund market share: The ratio of the fund invested in securities.
The return of market: The fraction of ups or downs of the deep bid index in the current period divided by the deep bid index in the last period.
Riskless interest rate: The risk-free interest rate is the interest rate that it is assumed can be obtained by investing in financial instruments with no default risk. In practice, most professionals and academics use short-dated government bonds of the currency in question. For Taiwan investments, the Taiwan bank 1-month deposit rate is usually used.
Flowing of cash: Cash flow refers to the amounts of cash being received and spent by a business during a defined period of time, sometimes tied to a specific project. Measurement of cash flow can be used to evaluate the state or performance of a business or project.
Stock selection ability: The ability of fund managers to identify the potential winning securities.
P/E ratio: The P/E ratio (price per share/earnings per share) of a mutual fund is used to measure how cheap or expensive its share price is. The lower the P/E, the less you have to pay for the mutual fund, relative to what you can expect to earn from it.
Net value/market value: The value of an entity's assets less the value of its liabilities, divided by market value.
Cash flowing/market value: Cash receipts minus cash payments over a given period of time, divided by market value; or, equivalently, net profit plus amounts charged off for depreciation, depletion, and amortization, divided by market value.
Net value: Net value is a term used to describe the value of an entity's assets less the value of its liabilities. The term is commonly used in relation to collective investment schemes.
Risk premium: A risk premium is the minimum difference between the expected value of an uncertain bet that a person is willing to take and the certain value to which he is indifferent.
Fund size: The volume and scale of mutual funds.
The market share of mutual fund: It can be expressed as a company's sales revenue (from that market) divided by the total sales revenue available in that market. It can also be expressed as a company's unit sales volume (in a market) divided by the total volume of units sold in that market.
The growth rate of mutual fund scale: The fraction of the increase or decrease of the fund scale in the current period divided by the fund scale in the last period.
Dividend yield of mutual fund: The dividend yield on a mutual fund is the company's annual dividend payments divided by its market cap, or the dividend per share divided by the price per share.
Teamwork: The culture of the mutual fund company.
Number of researchers: The number of researchers for each fund.
Education of fund manager: The fund manager's seniority, quality, and performance.
Known of fund manager: The fund manager's rate of exposure in the media and the number of prizes won.
Turnover rate of fund manager: Whether the fund manager leaves his job temporarily.
Appendix 2 Summary Statistics for Returns of the Mutual Funds

The notations and definitions of the investment styles of mutual funds are in Panel 2.1.

Panel 2.1 Classifications

Aa (Asset allocation): A large part of financial planning is finding an asset allocation that is appropriate for a given person in terms of their appetite for and ability to shoulder risk. The designation of funds into various categories of assets.
Ag (Aggressive growth): Regardless of the investment style or the size of the companies purchased, funds vary widely in their risk and price behavior; an aggressive growth fund is likely to have a high beta and high volatility.
Ei (Equity income): It will invest in common stock but will have a portfolio beta closer to 1.0 than to 2.0. It likely favors stocks with comparatively high dividend yields so as to generate the income its name implies.
G (Growth): The pursuit of capital appreciation is the emphasis with growth funds. This class of funds includes those called aggressive growth funds and those concentrating on more stable and predictable growth.
Gi (Growth income): It pays steady dividends, and it is still predominantly an investment in stocks, although some bonds may be included to increase the income yield of the fund.
Monthly mutual funds are from January 1980 to September 1996 for a sample of 65 US mutual funds. The data are from Morningstar Company. Panel 2.2 Fund name General Securities Franklin Asset Allocation Seligman Income A USAA Income Valley Forge Income Fund of America FBL Growth Common Stock Mathers Asset allocation average American Heritage Alliance Quasar A Keystone Small Co Grth (S-4) Keystone Omega A Invesco Dynamics Security Ultra A Putnam Voyager A Stein Roe Capital Opport Value Line Spec Situations Value Line Leveraged Gr Inv WPG Tudor
Investment style Aa Aa
Mean Standard deviation Maximum Minimum 0.477 5.084 15.389 17.151 0.407 3.743 10.424 19.506
Aa Aa Aa Aa
0.394 0.316 0.293 0.566
2.414 2.024 1.803 2.552
8.474 9.381 9.980 9.166
7.324 5.362 5.573 8.836
Aa
0.273 3.599
10.466
24.088
Aa Aa
0.220 3.910 0.391 2.550
14.405 8.962
14.750 9.464
Ag Ag Ag
0.905 6.446 0.644 6.547 0.433 7.053
28.976 15.747 19.250
33.101 39.250 38.516
Ag Ag Ag Ag Ag
0.473 0.510 0.222 0.808 0.578
6.112 6.009 6.940 5.781 6.783
18.873 17.378 16.297 17.179 17.263
33.240 37.496 43.468 29.425 32.135
Ag
0.145 6.240
13.532
37.496
Ag
0.601 4.970
14.617
29.025
Ag
0.726 6.010
14.749
33.658 (continued)
Fund name Winthrop Aggressive Growth A Delaware Trend A Founders Special Aggressive growth average Smith Barney Equity Income A Van Kampen Am Cap Eqty-Inc A Value Line Income United Income A Oppenheimer Equity Income A Fidelity Equity Income Delaware Decatur Income A Invesco Industrial Income Old Dominion Investors Evergreen Total Return Y Equity income average Guardian Park Avenue A Founders Growth Fortis Growth A Franklin Growth I Fortis Capital A Growth Fund of America Hancock Growth A Franklin Equity I Nationwide growth Neuberger&Berman Focus MSB Neuberger&Berman Partners Neuberger&Berman Manhattan Nicholas Oppenheimer A New England growth A
Investment style Ag
Mean Standard deviation Maximum Minimum 0.476 5.596 17.012 34.921
Ag Ag Ag
0.787 6.536 0.564 5.900 0.459 5.814
14.571 12.905 13.142
42.397 31.861 35.335
Ei
0.601 3.270
7.813
18.782
Ei
0.510 3.530
12.292
22.579
Ei Ei Ei
0.423 3.357 0.714 4.037 0.555 3.422
9.311 11.852 10.071
18.242 13.743 16.524
Ei Ei
0.706 3.612 0.547 3.615
10.608 10.269
19.627 20.235
Ei
0.601 3.705
9.349
20.235
Ei Ei
0.360 3.699 0.508 3.220
11.498 8.074
21.092 13.857
Ei G
0.527 3.238 0.740 4.391
9.094 11.321
18.718 27.965
G G G G G
0.718 0.724 0.570 0.682 0.625
4.986 5.983 4.050 4.791 4.722
13.055 14.520 12.907 12.818 12.226
25.108 30.771 11.706 21.585 23.962
G G G G
0.484 0.469 0.598 0.434
5.381 5.156 4.370 4.366
15.708 12.818 11.444 12.187
25.236 32.135 27.570 25.108
G G
0.517 4.665 0.661 3.612
13.452 9.311
31.178 19.385
G G G G G
0.606 0.710 0.225 0.727 0.608
11.574 10.125 11.321 19.120 11.121
30.500 19.385 31.451 37.207 26.081 (continued)
5.095 4.067 5.234 5.802 4.505
Fund name Growth average Pioneer II A Pilgrim America Magna Cap A Pioneer Philadelphia Penn Square Mutual A Oppenheimer Total Return A Vanguard/Windsor Van Kampen Am Cap Gr & Inc A Van Kampen Am Cap Comstock A Winthrop Growth & Income A Washington Mutual Investors Safeco Equity Seligman Common Stock A Salomon Bros Investors O Security Growth & Income A Selected American Putnam Fund for Grth & Inc A Growth income average
Investment style G Gi Gi
Mean 0.594 0.517 0.611
Standard deviation 4.775 4.386 3.949
Maximum 12.649 10.912 10.843
Minimum 26.255 29.693 22.704
Gi Gi Gi Gi
0.410 0.244 0.504 0.507
4.339 4.004 3.907 4.451
12.293 11.074 11.852 13.861
28.361 23.457 20.724 22.829
Gi Gi
0.726 4.078 0.570 4.781
10.746 15.349
18.542 32.135
Gi
0.599 4.539
13.167
34.921
Gi
0.430 3.987
10.717
24.088
Gi
0.723 3.882
11.409
20.113
Gi Gi
0.587 4.797 0.553 4.224
14.263 11.785
31.042 23.331
Gi
0.583 4.194
11.785
24.980
Gi
0.233 3.825
10.161
19.674
Gi Gi
0.650 3.969 0.637 3.540
13.142 8.456
19.385 22.081
Gi
0.544 3.940
10.380
24.469
Appendix 3 Summary Statistics for Returns of the Mutual Funds

Fund name                   Investment style   Mean    Standard deviation   Maximum   Minimum
Asset allocation average    S1                 0.391   2.550                8.962     −9.464
Aggressive growth average   S2                 0.459   5.814                13.142    −35.335
Equity income average       S3                 0.527   3.238                9.094     −18.718
Growth average              S4                 0.594   4.775                12.649    −26.255
Growth income average       S5                 0.544   3.940                10.380    −24.469
Appendix 4

The proposed MCDM approach consists of eight steps: define the problem, define the evaluation criteria, initial screen, define the preferences on evaluation criteria, define the MCDM method for selection, evaluate the MCDM methods, choose the most suitable method, and conduct sensitivity analysis.

Step 1: Define the problem. The characteristics of the decision-making problem under consideration are addressed in the problem definition step, such as identifying the number of alternatives, attributes, and constraints. The available information about the decision-making problem is the basis on which the most appropriate MCDM techniques will be evaluated and utilized to solve the problem.

Step 2: Define the evaluation criteria. The proper determination of the applicable evaluation criteria is important because they have great influence on the outcome of the MCDM method selection process. However, simply using every criterion in the selection process is not the best approach, because the more criteria used, the more information is required, which will result in higher computational cost. In this study, the characteristics of the MCDM methods will be identified by the relevant evaluation criteria in the form of a questionnaire. Ten questions are defined to capture the advantages, disadvantages, applicability, computational complexity, etc., of each MCDM method, as shown in the following. The defined evaluation criteria will be used as the attributes of an MCDM formulation and as the input data of the decision matrix for method selection:
1. Is the method able to handle an MADM, MODM, or MCDM problem?
2. Does the method evaluate the feasibility of the alternatives?
3. Is the method able to capture uncertainties existing in the problem?
4. What input data are required by the method?
5. What preference information does the method use?
6. What metric does the method use to rank the alternatives?
7. Can the method deal with changing alternatives or requirements?
8. Does the method handle qualitative or quantitative data?
9. Does the method deal with discrete or continuous data?
10. Can the method handle a problem with a hierarchical structure of attributes?

Step 3: Initial screen. In the initial screening step, the dominated and infeasible MCDM methods are eliminated by the dominance and conjunctive methods. An alternative is dominated if there is another alternative that excels it in one or more attributes and equals it in the remainder. The dominated MCDM methods are eliminated by the dominance method, which does not require any assumption or any transformation of attributes. The sieve of dominance takes the following procedure: compare the first two alternatives and, if one is dominated by the other, discard the dominated one; then compare the un-discarded alternative with the third alternative and discard any dominated alternative; then introduce the fourth alternative and repeat this process until the last alternative has been compared. A set of non-dominated alternatives may still possess unacceptable or infeasible attribute values. The conjunctive method is employed to remove the
unacceptable alternatives, in which the decision maker sets up the cutoff value he/she will accept for each of the attributes. Any alternative that has an attribute value worse than the cutoff values will be eliminated.

Step 4: Define the preferences on evaluation criteria. Usually, after the initial screening step is completed, multiple MCDM methods are expected to remain; otherwise we can directly choose the only one left to solve the decision-making problem. With the ten evaluation criteria defined in step 2, the decision maker's preference information on the evaluation criteria is defined. This reflects which criterion is more important to the decision maker when he/she makes decisions on method selection.

Step 5: Define the MCDM method for selection. Existing commonly used MCDM methods are identified and stored in the method base as candidate methods for selection. The simple additive weighting (SAW) method is chosen to select the most suitable MCDM methods considering its simplicity and general acceptability. Basically, the SAW method provides a weighted summation of the attributes of each method, and the one with the highest score is considered the most appropriate method. Though SAW is used in this study, it is worth noting that other MCDM methods can be employed to handle the same MCDM method selection problem.

Step 6: Evaluate the MCDM methods. A mathematical formulation of an appropriateness index (AI) is used to rank the MCDM methods. The method with the highest AI is recommended as the most appropriate method to solve the problem under consideration.

Step 7: Choose the most suitable method. For the optimization of the specification of a grinding wheel, the MCDM method with the highest AI will be selected as the most appropriate method to solve the given decision-making problem. If the DM is satisfied with the final results, he/she can implement the solution and move forward. Otherwise, he/she can go back to step 2, modify the input data or preference information, and repeat the selection process until a satisfactory outcome is obtained. Guidance about how to obtain the final solution by using the selected method is displayed to the DM. In addition, the detailed mathematical calculation steps are also built into the MATLAB-based DSS, which greatly facilitates the decision-making process. Thus, the DM can input the data according to the instructions and get the final results by clicking the corresponding button.

Step 8: Conduct sensitivity analysis. In this section, the selection of an optimized specification of a grinding wheel is conducted to improve the capabilities of the grinding operation products by the proposed MCDM decision support system. It is observed that different decision makers often have different preference information on the evaluation criteria and different answers to the ten questions; thus, sensitivity analysis should be performed on the MCDM method selection algorithm in order to analyze its robustness with respect to parameter variations, such as the variation of the decision maker's preference information and the input data. If the decision maker is satisfied with the final results, he/she can implement the solution and move forward. Otherwise, he/she can go back to step 2, modify the input data or preference information, and repeat the selection process until a satisfactory
outcome is obtained. In this implementation, emphasis is put on explaining the holistic process of the intelligent MCDM decision support system. Thus, the step-by-step problem-solving process is explained and discussed for this decision-making problem.
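As a concrete illustration of the SAW scoring used in Steps 5 and 6, the following sketch (our own; the candidate method names, criterion scores, and weights are hypothetical) normalizes a small decision matrix, applies the weights, and treats the weighted sum as an appropriateness index, picking the method with the highest score.

import numpy as np

def saw_scores(decision_matrix, weights, benefit):
    # Simple additive weighting: normalize each criterion column, then weight and sum.
    # decision_matrix: rows = candidate MCDM methods, columns = evaluation criteria.
    # benefit[j] is True when a larger value of criterion j is better, False when smaller is better.
    X = np.asarray(decision_matrix, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    norm = np.empty_like(X)
    for j in range(X.shape[1]):
        col = X[:, j]
        norm[:, j] = col / col.max() if benefit[j] else col.min() / col
    return norm @ w  # appropriateness index for each method

# Hypothetical scores of three candidate methods on three criteria
# (fit to problem type, ability to handle uncertainty, computational cost)
methods = ["AHP", "TOPSIS", "ELECTRE"]
X = [[9, 6, 2],
     [7, 7, 3],
     [6, 8, 5]]
weights = [0.5, 0.3, 0.2]
benefit = [True, True, False]  # lower computational cost is better
scores = saw_scores(X, weights, benefit)
best = methods[int(np.argmax(scores))]
print(dict(zip(methods, scores.round(3))), "-> choose", best)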
References

Bhattacharya, S., & Pfleiderer, P. (1983). A note on performance evaluation. Technical report 714, Stanford University, Graduate School of Business.
Black, F., Jensen, M. C., & Scholes, M. (1972). The capital asset pricing model: Some empirical tests. In M. C. Jensen (Ed.), Studies in the theory of capital markets. New York: Praeger.
Blake, C. R., & Morey, M. R. (2000). Morningstar ratings and mutual fund performance. Journal of Financial & Quantitative Analysis, 35(3), 451-483.
Blume, M. E., & Friend, I. (1973). A new look at the capital asset pricing model. Journal of Finance, 28, 19-33.
Chang, E. C., & Lewellen, W. G. (1984). Market timing and mutual fund investment performance. Journal of Business, 57, 57-72.
Chen, C. R., & Stockum, S. (1986). Selectivity, market timing, and random beta behavior of mutual funds: A generalized model. Journal of Financial Research, 9(1), 87-96.
Connor, G., & Korajczyk, R. (1986). Performance measure with the arbitrage pricing theory: A new framework for analysis. Journal of Financial Economics, 15(3), 373-394.
Fama, E. F. (1972). Components of investment performance. Journal of Finance, 27(3), 551-567.
Grant, D. (1977). Portfolio performance and the cost of timing decisions. Journal of Finance, 32, 837-846.
Henriksson, R. D. (1984). Market timing and mutual fund performance: An empirical investigation. Journal of Business, 57(1), 73-96.
Jensen, M. C. (1968). The performance of mutual funds in the period 1945-1964. Journal of Finance, 23(2), 389-416.
Jensen, M. C. (1969). Risk, the pricing of capital assets, and the evaluation of investment portfolios. Journal of Business, 42(2), 167-247.
Jensen, M. C. (1972). Optimal utilization of market forecasts and the evaluation of investment performance. In G. P. Szego & K. Shell (Eds.), Mathematical methods in investment and finance. Amsterdam: Elsevier.
Jorge, S., Pilar, G., & Luis, M. D. (2006). Mutual fund performance and benchmark choice: The Spanish case. Applied Financial Economics Letters, 2(5), 317-321.
Kopa, M., & Post, T. (2009). A portfolio optimality test based on the first-order stochastic dominance criterion. Journal of Financial & Quantitative Analysis, 44(5), 1103-1124.
Kuosmanen, T. (2004). Efficient diversification according to stochastic dominance criteria. Management Science, 50(10), 1390-1406.
Lee, C. F., & Rahman, S. (1990). Market timing, selectivity, and mutual fund performance: An empirical investigation. Journal of Business, 63(2), 261-278.
Lehmann, B. N., & Modest, D. M. (1987). Mutual fund performance evaluation: A comparison of benchmarks and benchmark comparisons. Journal of Finance, 42(2), 233-265.
Markowitz, H. (1952). Portfolio selection. Journal of Finance, 7(1), 77-91.
Merton, R. C. (1981). On market timing and investment performance I: An equilibrium theory of value for market forecasts. Journal of Business, 54, 363-406.
Spitz, A. E. (1970). Mutual fund performance and cash inflows. Applied Economics, 2(2), 141-145.
Treynor, J. L., & Mazuy, K. K. (1966). Can mutual funds outguess the market? Harvard Business Review, 44, 131-136.
Wang, S. Y., & Lee, C. F. (2011). Fuzzy multi-criteria decision making for evaluating mutual fund strategies. Applied Economics, 43(24), 3405-3414.
Econometric Analysis of Currency Carry Trade
69
Yu-Jen Wang, Huimin Chung, and Bruce Mizrach
Contents
69.1 Introduction ........................................................... 1878
69.2 Overview of Model and Methodology for Carry Trade ................. 1880
  69.2.1 The Regime-Switching and Logistic Smooth Transition Regression Models ... 1880
  69.2.2 The MCMC Method ................................................ 1882
69.3 Empirical Result ..................................................... 1883
69.4 Conclusion ........................................................... 1888
Appendix: Empirical Study Process ........................................ 1888
References ................................................................ 1889
Abstract
The carry trade is a popular strategy in the currency markets whereby investors fund positions in high interest rate currencies by selling low interest rate currencies to earn the interest rate differential. In this article, we first provide an overview of the risk and return profile of the currency carry trade; second, we introduce two popular models, the regime-switching model and the logistic smooth transition regression model, to analyze carry trade returns, because carry trade returns are highly regime dependent. Finally, an empirical example is illustrated.

Keywords
Carry trade • Uncovered interest parity • Markov chain Monte Carlo • Regime-switch model • Logistic smooth transition regression model
Y.-J. Wang (*) • H. Chung Graduate Institute of Finance, National Chiao Tung University, Hsinchu, Taiwan e-mail: [email protected]; [email protected] B. Mizrach Department of Economics, Rutgers, The State University of New Jersey, New Brunswick, NJ, USA e-mail: [email protected] C.-F. Lee, J. Lee (eds.), Handbook of Financial Econometrics and Statistics, DOI 10.1007/978-1-4614-7750-1_69, # Springer Science+Business Media New York 2015
69.1 Introduction
According to international financial theory, such as uncovered interest parity (UIP), a country's exchange rate is expected to depreciate in the future if that country has a high interest rate. Although investors can potentially profit from the interest rate differential between the two countries, the exchange rate differential may offset the interest rate differential. However, in the past two decades, an enormous amount of empirical research has refuted the UIP theory. This evidence suggests that, once exchange rate returns are combined with a time-varying premium, UIP has usually not held in the past.1 The carry trade, a common international investment strategy, is a good example that runs contrary to UIP theory. A carry trade is built by borrowing the currency of a lower interest rate country and then investing in a higher interest rate country. A large body of research shows that the carry trade is profitable over the long horizon. How the carry trade strategy earns a persistent excess return is an open question. The literature suggests several explanations for the forward premium puzzle. Engel (1984) and Fama (1984) provide a straightforward and theoretically convincing explanation for this puzzle based on the existence of time-varying risk premia. Burnside et al. (2011) refer to the peso problem as an explanation for the high average payoff to the carry trade.2 Baillie and Chang (2011) provide another explanation for this puzzle that focuses on trading behavior. However, because UIP does not always hold in the short term, previous empirical studies have adopted several models that allow for temporary deviations from UIP and have discussed regime dependence, among other factors, to fit carry trade returns. For example, Ichiue and Koyama (2008) use a regime-switching model to examine how exchange rate volatility influences UIP; the failures of UIP usually occur in relatively low volatility environments. In particular, they argue that rapid unwinding of carry trades affects exchange rate volatility. Recently, using daily data from 1985 to 2008, Christiansen and Ranaldo (2011) analyze carry trade returns with a multifactor model. Their main findings suggest high regime dependence of carry trade returns. Clarida et al. (2009) also find significant volatility regime sensitivity for Fama regressions estimated over low and high volatility periods. Sarno et al. (2006)
1 Meese and Rogoff (1983) assume that exchange rates follow the "near-random walk" model and provide evidence to reject UIP. Fama (1984) applies the concept of the forward rate containing a time-varying premium to analyze the relation between the forward exchange rate and the spot exchange rate and points out that high interest rate currencies tend to appreciate rather than depreciate. Froot and Thaler (1990) replace the time-varying premium with the mean return theory to explain foreign exchange anomalies. Burnside et al. (2009) emphasize that the forward premium puzzle can be construed as an adverse selection problem between participants in foreign exchange markets. Brunnermeier et al. (2008) use a liquidity risk factor to explain the excess return of the carry trade. They add the change in the VIX index or the TED spread as liquidity risk factors in the regression and suggest that the market liquidity factor may explain the carry trade's risk premium.
2 The peso problem is a generic term for the effects of small probabilities of large events in empirical work. Burnside et al.'s (2011) approach relies on analyzing the payoffs to a version of the carry trade strategy that does not yield high negative payoffs in the peso state.
provide empirical evidence that the deviations from UIP display significant nonlinearities consistent with theories based on transaction costs or limits to speculation. Menkhoff et al. (2012) explain that low returns occur in times of unexpectedly high volatility, when low interest rate currencies provide a hedge by yielding positive returns. Empirically, however, the literature has had difficulty convincingly establishing that carry trade returns are regime dependent or that the higher profit opportunities of the carry trade usually occur in the lower volatility regime. In sum, massive losses associated with the carry trade usually occur in the higher volatility regime, positive returns usually exist in the low volatility regime, and the carry trade return process does not follow traditional linear models. Baillie and Chang (2011) further find that momentum trading increases carry trade volatility. Brunnermeier et al. (2008) also find that levered market participants gradually build up positions in high-yielding currencies, causing high-yielding currencies to appreciate over time along with speculators' larger positions. In addition, Brunnermeier et al. (2008) find that higher market volatility is associated with carry trade losses. Clarida et al. (2009) use the Fama regression, which produces a positive coefficient that is greater than unity when volatility is in the top quartile.3 Baillie and Chang (2011) find that UIP is more likely to hold in a regime where volatility is unusually high. These results suggest that the momentum effect in the carry trade perhaps exists in the low volatility regime, while the empirical results are mixed in the high volatility regime. However, the carry trade perhaps confronts crash risk in the high volatility regime whether or not UIP holds. Because the carry trade return is highly regime dependent, regime conditions must be considered in the return process model. Ichiue and Koyama (2008) use the regime-switching model to investigate the relation among exchange rate returns, volatilities, and interest rate differentials. Baillie and Chang (2011) use the logistic smooth transition regression (LSTR) model to identify whether the forward FX market is in a regime where the anomaly is present or in a regime where UIP tends to hold. They find that UIP is more likely to hold in a regime where volatility is unusually high, which may be explained by previous theoretical work that links momentum trading to increased volatility and more pronounced reversion to fundamentals. In estimating the model, we suggest Markov chain Monte Carlo (MCMC) methods. The major advantage of the Bayesian MCMC approach is its extreme flexibility. Because the parameters are generated as random draws from posterior distributions, the MCMC method can avoid the thorny maximization problems that arise in, for example, maximum likelihood estimation. This method is well suited to fitting realistic models to complex data sets with threshold values, measurement error, censored or missing observations, multilevel or serial correlation structures, and multiple endpoints. The method is suitable for regime-dependent models because these models usually have a threshold point. For instance, Ichiue and Koyama (2008) use the Bayesian Gibbs sampling method to estimate the parameters
3 The empirical result of Clarida et al. (2009) indicates that UIP is violated in the high volatility regime.
of the regime-switching model, and earlier investigations also used the MCMC method, such as Albert and Chib (1993), Kim et al. (1998), and Kim and Nelson (1999). This study provides a clearer analysis of carry trade return behavior in the different volatility regimes. Our main purpose is to construct an investment strategy that delivers lower volatility and higher returns for the carry trade, where the carry trade returns are constructed from exchange rate and 3-month interbank rate data and the volatility variables are measured by the VIX, which is calculated from the S&P 500 equity-options market, by generalized autoregressive conditional heteroskedasticity (GARCH), and by exponential GARCH (EGARCH).4 According to our empirical results, we find the following. First, we confirm that carry trade returns display a significant momentum effect in the low volatility regime. Second, the VIX-based carry trade performs better than strategies based on the other volatility measures during the subprime period (2007-2008). Third, we can create a strategy with lower downside risk than the buy-and-hold strategy in the long run when we use the GARCH volatility measure. Finally, we also obtain a higher Sortino ratio by using the GARCH volatility measure. The remainder of this study is organized as follows. We introduce two financial econometric models, the regime-switching model and the LSTR model, in Sect. 69.2.1. Section 69.2.2 discusses our estimation approach, based on the MCMC method. We discuss carry trade return behavior and provide a new carry trade trading strategy in Sect. 69.3. Section 69.4 concludes.
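As a rough illustration of two quantities referred to above — a GARCH(1,1) conditional volatility, one of the volatility measures, and the Sortino ratio used to compare strategies — the sketch below writes out the variance recursion directly and computes a downside-risk ratio for a simulated monthly return series. This is our own illustration: the parameter values and the return series are hypothetical, not the chapter's estimates.

import numpy as np

def garch11_volatility(returns, omega, alpha, beta):
    # Conditional variance recursion: sigma2_t = omega + alpha*eps_{t-1}^2 + beta*sigma2_{t-1}.
    r = np.asarray(returns, dtype=float)
    eps = r - r.mean()
    sigma2 = np.empty_like(eps)
    sigma2[0] = eps.var()
    for t in range(1, len(eps)):
        sigma2[t] = omega + alpha * eps[t - 1] ** 2 + beta * sigma2[t - 1]
    return np.sqrt(sigma2)

def sortino_ratio(returns, target=0.0):
    # Mean excess return over the target divided by the downside (below-target) deviation.
    r = np.asarray(returns, dtype=float)
    downside = np.minimum(r - target, 0.0)
    downside_dev = np.sqrt(np.mean(downside ** 2))
    return (r.mean() - target) / downside_dev

rng = np.random.default_rng(0)
carry_returns = rng.normal(0.004, 0.02, size=240)  # hypothetical monthly carry trade returns
vol = garch11_volatility(carry_returns, omega=1e-5, alpha=0.08, beta=0.90)
print("last conditional volatility:", round(vol[-1], 4))
print("Sortino ratio:", round(sortino_ratio(carry_returns), 3))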
69.2 Overview of Model and Methodology for Carry Trade
Based on past studies, regime-dependent models are the most popular way to analyze carry trade returns, because carry trade returns behave differently in different volatility regimes. We introduce two popular regime-dependent models, the regime-switching and logistic smooth transition regression models, in this section. Regime-dependent models usually involve a threshold value, as in the regime-switching model, and we suggest using MCMC methods to avoid this thorny parameter estimation problem.
69.2.1 The Regime-Switching and Logistic Smooth Transition Regression Models

The covered interest parity (CIP) is a non-arbitrage condition. It postulates that the nominal interest differential between two countries (i_t^{*} − i_t) should equal the forward premium (f_t − s_t). It is expressed as

E_t[\Delta s_{t+1}] = i_t^{*} - i_t = f_t - s_t,   (69.1)

4 Wang et al. (2012) suggest that GARCH models with skew density innovations may be another suitable volatility measure for carry trade returns.
where st is the logarithm of the spot exchange rate quoted as the foreign price of domestic currency, ft is the logarithm of the forward rate for a one-period ahead transaction, and it and i*t are the one-period risk-free domestic and foreign interest rates, respectively. The standard test of UIP to estimate the regression is Dstþ1 ¼ a þ bð f t st Þ þ utþ1 :
(69.2)
Under UIP, the null hypothesis is that a ¼ 0 and b ¼ 1 that the error term, ut+1, is serially uncorrelated. The forward premium anomaly generally refers to the widespread phenomenon of a negative slope coefficient being obtained by the ordinary least square estimation of Eq. 69.2. Baillie and Chang (2011) further use the LSTR model, which postulates that the slop coefficient is related nonlinear to the degree of carry and momentum trading over time; a natural approach is to specify the UIP relation in terms of the LSTR model: Dstþ1 ¼ ½a1 þ b1 ð f t st Þð1 Gðzt ; g; cÞÞ þ ½a2 þ b2 ðf t st ÞGðzt ; g; cÞ þ utþ1 ,
(69.3)
where G(·) is a logistic transition function as follows: gðzt cÞ 1 Gðzt ; g; cÞ ¼ 1 þ exp , g > 0, sz t
(69.4)
where $z_t$ is the transition variable, $\sigma_{z_t}$ is the standard deviation of $z_t$, $\gamma$ is a slope parameter, and $c$ is a location parameter. Baillie and Chang choose various transition variables, $z_t$, related to carry and momentum trading. Specifically, they use the interest differentials and the conditional volatility of exchange rates as measured by GARCH(1,1) models of spot exchange rate returns. This modeling approach works well for carry trade analysis because it allows for smooth and continuous adjustment between regimes. Another popular regime-dependent model is the regime-switching model. After Hamilton (1989) proposed the regime-switching model to examine the persistency of recessions and booms, many studies applied this model to exchange rate data. Engel and Hamilton's (1990) two-regime model specifies currency returns as

$$s_{t+1} - s_t = \alpha_i + \sigma_i\,\epsilon_{t+1}, \qquad (69.5)$$
where $i \in \{1, 2\}$ denotes the regime; $\alpha_i$ and $\sigma_i$ denote the trend of the exchange rate and the volatility of the exchange rate return under regime $i$, respectively; and $\epsilon_{t+1} \sim \mathrm{i.i.d.}\ N(0, 1)$. Ichiue and Koyama (2008) employ a four-regime model to discuss carry trade returns. First, they use the following nesting model:

$$s_{t+1} - s_t = \alpha_i + \beta_i\,(i_t^* - i_t) + \sigma_i\,\epsilon_{t+1}. \qquad (69.6)$$
According to the views of market participants, regime switches in exchange rate returns should be interpreted as switches in the relation between the returns and interest rate differentials or switches in market participants' activities between the carry trade and its unwinding, rather than just switches in trends. Ichiue and Koyama assume that the intercept $\alpha_i$ does not switch, that is, $\alpha_i = \alpha$ for all $i = 1, 2$. Second, they define a regime indicator variable that spans the regime space for both the slope and volatility regimes as

$$S_t = \begin{cases} 1 & \text{if } S_t^b = 1 \text{ and } S_t^s = 1 \\ 2 & \text{if } S_t^b = 2 \text{ and } S_t^s = 1 \\ 3 & \text{if } S_t^b = 1 \text{ and } S_t^s = 2 \\ 4 & \text{if } S_t^b = 2 \text{ and } S_t^s = 2, \end{cases} \qquad (69.7)$$
where $S_t^b$ is the slope regime, which indicates the relation between exchange rate returns and interest rate differentials, so that the slope at time $t$ is $\beta_i$ when $S_t^b = i$, $i = 1, 2$. $S_t^s$ is defined as the volatility regime, with the volatility at time $t$ being $\sigma_j$ when $S_t^s = j$, $j = 1, 2$. Finally, Ichiue and Koyama's model can be described as

$$s_{t+n} - s_t = \alpha + \beta_t\,(i_{t,n}^* - i_{t,n}) + \sigma_t\,\epsilon_{t+n}, \qquad (69.8)$$

$$\beta_t = \beta_1\,(I_{1t} + I_{3t}) + \beta_2\,(I_{2t} + I_{4t}), \qquad (69.9)$$

$$\sigma_t = \sigma_1\,(I_{1t} + I_{2t}) + \sigma_2\,(I_{3t} + I_{4t}), \qquad (69.10)$$
where $I_{kt} = 1$ if $S_t = k$ and $I_{kt} = 0$ if $S_t \neq k$, $k = 1, 2, 3, 4$.
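To make the four-regime specification in Eqs. 69.7–69.10 concrete, the following minimal Python sketch simulates exchange rate changes under the model; the regime transition probabilities, parameter values, and sample size are illustrative assumptions, not estimates from the chapter.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_two_state(T, p_stay, rng):
    """Simulate a 2-state Markov chain (states 1, 2) with symmetric staying probability."""
    s = np.empty(T, dtype=int)
    s[0] = 1
    for t in range(1, T):
        s[t] = s[t - 1] if rng.uniform() < p_stay else 3 - s[t - 1]
    return s

T = 500
beta = {1: 1.0, 2: -2.0}      # slope under slope regime S^b = 1, 2 (illustrative)
sigma = {1: 0.005, 2: 0.02}   # volatility under volatility regime S^s = 1, 2 (illustrative)
alpha = 0.0

Sb = simulate_two_state(T, 0.95, rng)                # slope regime S^b_t
Ss = simulate_two_state(T, 0.90, rng)                # volatility regime S^s_t
S = 2 * (Ss - 1) + Sb                                # Eq. 69.7: S_t in {1, 2, 3, 4}

idiff = 0.01 * rng.standard_normal(T)                # interest differential i*_t - i_t (toy data)
beta_t = np.where((S == 1) | (S == 3), beta[1], beta[2])     # Eq. 69.9
sigma_t = np.where((S == 1) | (S == 2), sigma[1], sigma[2])  # Eq. 69.10
ds = alpha + beta_t * idiff + sigma_t * rng.standard_normal(T)  # Eq. 69.8 with n = 1

print("share of low-volatility weeks:", np.mean(sigma_t == sigma[1]))
print("mean simulated return:", ds.mean())
```

In an empirical application, the regime paths and parameters would of course be estimated (for example, by the MCMC methods advocated above) rather than simulated.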
Finally, we have

$$P(t) = c\,\frac{e^{kt}}{m(t)^d} - \frac{e^{kt}}{m(t)^d}\int G(t)\,m(t)^d\,e^{-kt}\,dt. \qquad (79.49),\ (79.50)$$

Changing from an indefinite integral to a definite integral, Eq. 79.49 can be shown as

$$p(t) = \frac{e^{kt}}{m(t)^d}\int_t^T G(s)\,m(s)^d\,e^{-ks}\,ds,$$

which is Eq. 79.19.
Appendix 2: Derivation of Eq. 79.21

This appendix presents a detailed derivation of Eq. 79.21. In Eq. 79.51, the initial value of the firm can be expressed as

$$p(0) = \frac{1}{m(0)^d}\int_0^T \Big\{(a + bI_h)\,A(0)\,e^{th}\,m(t)^{d-1} - a'\,A(0)^2\,r(t)^2\,s(t)^2\,e^{2th}\,m(t)^{d-2}\Big\}\,e^{-kt}\,dt. \qquad (79.51)$$

To maximize firm value, the number of shares outstanding at each point of time should be determined; therefore, the objective function can be written as follows:

$$\max_{\{m(t)\}_{t=0}^{T}}\ p(0). \qquad (79.52)$$

Following the Euler-Lagrange condition (see Chiang 1984), we take first-order conditions on the objective function with respect to $m(t)$, where $t \in [0, T]$, and set these first-order conditions equal to zero:

$$\frac{1}{m(0)^d}\Big\{(d-1)\,(a + bI_h)\,A(0)\,e^{th}\,m(t)^{d-2} - a'\,A(0)^2\,r(t)^2\,s(t)^2\,e^{2th}\,m(t)^{d-3}\,(d-2)\Big\}\,e^{-kt} = 0, \quad t \in [0, T]. \qquad (79.53)$$

Simplifying Eq. 79.53,

$$(d-1)\,(a + bI_h)\,A(0)\,e^{th}\,m(t)^{d-2} - a'\,A(0)^2\,r(t)^2\,s(t)^2\,e^{2th}\,m(t)^{d-3}\,(d-2) = 0, \quad t \in [0, T], \qquad (79.54)$$

which is Eq. 79.21.
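As a cross-check on the algebra, the short sympy sketch below differentiates the integrand of Eq. 79.51 (as reconstructed above) with respect to m(t) and confirms that, after dropping the positive factor e^(-kt)/m(0)^d, the condition reduces to Eq. 79.54. The symbol names are illustrative.

```python
import sympy as sp

# Symbols appearing in the integrand of Eq. 79.51 (a_p stands for a')
m, A0, a, b, Ih, a_p, r, s, h, k, d, t = sp.symbols("m A0 a b I_h a_p r s h k d t", positive=True)

integrand = ((a + b * Ih) * A0 * sp.exp(t * h) * m**(d - 1)
             - a_p * A0**2 * r**2 * s**2 * sp.exp(2 * t * h) * m**(d - 2)) * sp.exp(-k * t)

# First-order condition: derivative of the integrand with respect to m(t)
foc = sp.diff(integrand, m)

# Eq. 79.54 (the first-order condition with the positive factor exp(-k t) removed)
eq_79_54 = ((d - 1) * (a + b * Ih) * A0 * sp.exp(t * h) * m**(d - 2)
            - a_p * A0**2 * r**2 * s**2 * sp.exp(2 * t * h) * m**(d - 3) * (d - 2))

print(sp.simplify(foc - eq_79_54 * sp.exp(-k * t)))  # expected output: 0
```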
Appendix 3: Derivation of Eqs. 79.28 and 79.29

This appendix presents a detailed derivation of both Eqs. 79.28 and 79.29. In Eq. 79.55, the optimal payout ratio with no changes in total risk or systematic risk is

$$D(t)/x(t) = 1 - \frac{h}{a + bI}\left[\frac{-k + h\,e^{(h-k)(T-t)}}{h-k}\right]. \qquad (79.55)$$

Considering the finite growth case, if $(h - k)(T - t) < 1$, then following a Maclaurin expansion, $e^{(h-k)(T-t)}$ can be expressed as

$$e^{(h-k)(T-t)} = 1 + (h-k)(T-t) + \frac{(h-k)^2(T-t)^2}{2!} + \frac{(h-k)^3(T-t)^3}{3!} + \cdots \approx 1 + (h-k)(T-t). \qquad (79.56)$$

Therefore, Eq. 79.31 can be approximately written as

$$D(t)/x(t) \approx 1 - \frac{h}{a + bI}\,\big(1 + h(T-t)\big),$$

which is Eq. 79.28. We further take the partial derivative of Eq. 79.33 with respect to the growth rate. The partial derivative of the optimal payout ratio with respect to the growth rate can then be approximately written as

$$\frac{\partial\big(D(t)/x(t)\big)}{\partial h} \approx -\frac{2h(T-t) + 1}{a + bI},$$

which is Eq. 79.29.
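The following sympy sketch, based on the reconstructed expressions above, expands the exact payout ratio in Eq. 79.55 to first order in (h − k)(T − t) and differentiates the approximation with respect to h; the symbol names are illustrative.

```python
import sympy as sp

h, k, T, t, a, b, I = sp.symbols("h k T t a b I", positive=True)

# Exact optimal payout ratio, Eq. 79.55 (as reconstructed above)
payout_exact = 1 - h / (a + b * I) * (-k + h * sp.exp((h - k) * (T - t))) / (h - k)

# First-order Maclaurin approximation of exp((h - k)(T - t)), Eq. 79.56
payout_approx = payout_exact.subs(sp.exp((h - k) * (T - t)), 1 + (h - k) * (T - t))
payout_approx = sp.simplify(payout_approx)
print(payout_approx)   # expected: 1 - h*(1 + h*(T - t))/(a + b*I), i.e., Eq. 79.28

# Sensitivity of the approximate payout ratio to the growth rate h (Eq. 79.29)
print(sp.simplify(sp.diff(payout_approx, h)))   # expected: -(2*h*(T - t) + 1)/(a + b*I)
```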
Table 79.6 Moving estimates process for testing the break point of structural change

       F-statistics                        F-statistics
c      Beta risk   Total risk     c        Beta risk   Total risk
0.20   120.38      92.78          1.01     658.58      533.26
0.25   147.77      114.94         1.02     646.43      524.02
0.30   205.83      153.89         1.03     636.99      516.94
0.35   252.56      179.52         1.04     621.57      506.64
0.40   333.30      244.85         1.05     619.94      507.41
0.45   395.33      298.05         1.06     610.91      503.46
0.50   421.89      320.42         1.07     616.34      505.71
0.55   445.75      340.36         1.08     610.81      502.08
0.60   505.14      213.02         1.09     608.96      505.16
0.65   532.57      205.06         1.10     609.88      496.72
0.70   572.82      411.85         1.15     610.61      499.96
0.75   600.57      452.04         1.20     575.99      473.31
0.80   633.15      485.64         1.25     543.25      445.85
0.85   674.48      524.66         1.30     532.27      431.02
0.90   675.53      542.20         1.35     505.57      408.10
0.91   682.40      547.18         1.40     487.91      388.90
0.92   678.08      542.58         1.45     446.97      364.58
0.93   682.43      549.74         1.50     408.25      326.46
0.94   679.55      547.81         1.55     383.32      304.93
0.95   679.52      546.92         1.60     345.32      266.34
0.96   675.88      549.83         1.65     327.34      249.22
0.97   668.42      543.73         1.70     309.13      231.42
0.98   669.35      547.23         1.75     296.67      223.43
0.99   664.33      548.16         1.80     278.69      212.60
1.00   622.77      545.64
This table shows the F-statistics of the moving estimates processes. The nonstructural change regression and the structural change regression are as follows:

$$\ln\!\left(\frac{\text{payout ratio}_{i,t}}{1 - \text{payout ratio}_{i,t}}\right) = a + b_1\,\text{Risk}_{i,t} + b_2\,\text{Growth}_{i,t} + b_3\ln(\text{Size})_{i,t} + b_4\,\text{ROA}_{i,t} + e_{i,t}$$

$$\ln\!\left(\frac{\text{payout ratio}_{i,t}}{1 - \text{payout ratio}_{i,t}}\right) = a' + b_1'\,\text{Risk}_{i,t} + b_2'\,D\big(g_{i,t} < c\cdot\text{ROA}_{i,t}\big)\,\text{Risk}_i + b_3'\,\text{Growth}_{i,t} + b_5'\ln(\text{Size})_{i,t} + b_6'\,\text{ROA}_{i,t} + e_{i,t}$$

The dependent variable is the payout ratio with a logistic transformation. The breakpoint, c, is between 0.2 times and 1.8 times the rate of return on total assets. The dummy variable is equal to 1 if a firm's 5-year average growth rate is less than c times its 5-year average ROA and 0 otherwise. The independent variables are beta risk (total risk), the dummy times beta risk (total risk), the growth rate, the log of size, and the rate of return on assets. The F-statistics are computed under the null hypothesis that the relationship between the payout ratio and risk does not depend on the growth rate of a firm.
Fig. 79.2 Moving estimates process for testing the break point of structural change. The figures show the F-statistics of the moving estimates processes. The nonstructural change regression and the structural change regression are as follows:

$$\ln\!\left(\frac{\text{payout ratio}_{i,t}}{1 - \text{payout ratio}_{i,t}}\right) = a + b_1\,\text{Risk}_{i,t} + b_2\,\text{Growth}_{i,t} + b_3\ln(\text{Size})_{i,t} + b_4\,\text{ROA}_{i,t} + e_{i,t}$$

$$\ln\!\left(\frac{\text{payout ratio}_{i,t}}{1 - \text{payout ratio}_{i,t}}\right) = a' + b_1'\,\text{Risk}_{i,t} + b_2'\,D\big(g_{i,t} < c\cdot\text{ROA}_{i,t}\big)\,\text{Risk}_i + b_3'\,\text{Growth}_{i,t} + b_5'\ln(\text{Size})_{i,t} + b_6'\,\text{ROA}_{i,t} + e_{i,t}.$$

The dependent variable is the payout ratio with a logistic transformation. The breakpoint, c, is between 0.2 times and 1.8 times the rate of return on total assets. The dummy variable is equal to 1 if a firm's 5-year average growth rate is less than c times its 5-year average ROA and 0 otherwise. The independent variables are beta risk (total risk), the dummy times beta risk (total risk), the growth rate, the log of size, and the rate of return on assets. The F-statistics are computed under the null hypothesis that the relationship between the payout ratio and risk does not depend on the growth rate of a firm. The risk used in Fig. 79.2a is the beta risk, and the risk used in Fig. 79.2b is the total risk.
Appendix 4: Using the Moving Estimates Process to Find the Structural Change Point in Eq. 79.36

To estimate the empirical breakpoint, we first define the dummy variable associated with risk as $D(g_{i,t} < c\cdot\text{ROA}_{i,t})$. We introduce a no-structural-change model, Eq. 79.57, and a structural change model, Eq. 79.58. In Eq. 79.58, c is a continuous constant, ranging from 0.2 to 1.8 ($c \in [0.2, 1.8]$). The breakpoint of structural change is at $g_{i,t} = c\cdot\text{ROA}_{i,t}$.

$$\ln\!\left(\frac{\text{payout ratio}_{i,t}}{1 - \text{payout ratio}_{i,t}}\right) = a + b_1\,\text{Risk}_{i,t} + b_2\,\text{Growth}_{i,t} + b_3\ln(\text{Size})_{i,t} + b_4\,\text{ROA}_{i,t} + e_{i,t} \qquad (79.57)$$

$$\ln\!\left(\frac{\text{payout ratio}_{i,t}}{1 - \text{payout ratio}_{i,t}}\right) = a' + b_1'\,\text{Risk}_{i,t} + b_2'\,D\big(g_{i,t} < c\cdot\text{ROA}_{i,t}\big)\,\text{Risk}_i + b_3'\,\text{Growth}_{i,t} + b_4'\ln(\text{Size})_{i,t} + b_5'\,\text{ROA}_{i,t} + e_{i,t} \qquad (79.58)$$

By using the moving estimates process, we calculate the F-statistics for all potential structural change points between $g_{i,t} = 0.2\,\text{ROA}_{i,t}$ and $g_{i,t} = 1.8\,\text{ROA}_{i,t}$. The F-test is conducted under the null hypothesis that there is no structural change in the relationship between the payout ratio and risk, that is, that Eq. 79.57 is identical to Eq. 79.58. Finally, we locate the breakpoint of the structural change at the point with the highest F-statistic. From Table 79.6, the process has a clear peak at c = 0.93 when using the beta risk as the independent variable and at c = 0.96 when using the total risk as the independent variable. Figure 79.2 also graphically presents a peak F-statistic at c = 0.93 (0.96) in terms of beta risk (total risk). Results from the moving estimates process indicate that a structural change exists in the relationship between the payout ratio and risk. The breakpoint of the structural change is at $g_{i,t} = 0.93\,\text{ROA}_{i,t}$ or $g_{i,t} = 0.96\,\text{ROA}_{i,t}$.
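A minimal Python sketch of this breakpoint search is given below; it loops over candidate values of c, estimates the restricted and unrestricted regressions by OLS, and records a Chow-type F-statistic. The data frame and column names are hypothetical, and the original chapter uses the moving estimates framework of the R package strucchange (Zeileis et al. 2002) rather than this simplified loop.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def breakpoint_f_stats(df, risk_col, c_grid):
    """For each candidate c, compare the no-break model (Eq. 79.57) with the
    break model (Eq. 79.58) and return the Chow-type F-statistic."""
    y = np.log(df["payout_ratio"] / (1.0 - df["payout_ratio"]))
    base = sm.add_constant(df[[risk_col, "growth", "log_size", "roa"]])
    restricted = sm.OLS(y, base).fit()
    results = {}
    for c in c_grid:
        dummy = (df["growth"] < c * df["roa"]).astype(float)
        full = base.copy()
        full["dummy_x_risk"] = dummy * df[risk_col]
        unrestricted = sm.OLS(y, full).fit()
        q = 1  # number of restrictions (the interaction term)
        f_stat = ((restricted.ssr - unrestricted.ssr) / q) / (
            unrestricted.ssr / unrestricted.df_resid)
        results[c] = f_stat
    return pd.Series(results)

# Usage sketch (df is a hypothetical panel with the columns used above):
# f_beta = breakpoint_f_stats(df, "beta_risk", np.arange(0.20, 1.81, 0.01))
# print(f_beta.idxmax())   # estimated breakpoint c
```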
References

Agrawal, A., & Jayaraman, N. (1994). The dividend policies of all-equity firms: A direct test of the free cash flow theory. Managerial & Decision Economics, 15, 139–148.
Aivazian, V., Booth, L., & Cleary, S. (2003). Do emerging market firms follow different dividend policies from U.S. Firms? Journal of Financial Research, 26, 371–387.
Aoki, M. (1967). Optimization of stochastic systems. New York: Academic Press.
Asquith, P., & Mullins, D. W., Jr. (1983). The impact of initiating dividend payments on shareholders' wealth. Journal of Business, 56, 77–96.
Bar-Yosef, S., & Kolodny, R. (1976). Dividend policy and capital market theory. Review of Economics & Statistics, 58, 181–190.
Bellman, R. (1990). Adaptive control process: A guided tour. Princeton: Princeton University Press. Benartzi, S., Michaely, R., & Thaler, R. (1997). Do changes in dividends signal the future or the past? Journal of Finance, 52, 1007–1034. Bernheim, B. D., & Wantz, A. (1995). A tax-based test of the dividend signaling hypothesis. American Economic Review, 85, 532–551. Bhattacharya, S. (1979). Imperfect information, dividend policy, and “the bird in the hand” fallacy. Bell Journal of Economics, 10, 259–270. Blau, B. M., & Fuller, K. P. (2008). Flexibility and dividends. Journal of Corporate Finance, 14, 133–152. Boehmer, E., Jones, C. M., & Zhang, X. (2010). Shackling short sellers: The 2008 shorting ban, Working paper. Brennan, M. J. (1973). An approach to the valuation of uncertain income streams. Journal of Finance, 28, 661–674. Brook, Y., Charlton, W. T., Jr., & Hendershott, R. J. (1998). Do firms use dividends to signal large future cash flow increases? Financial Management, 27, 46–57. Cameron, A. C., Gelbach, J. B., & Miller, D. L. (2006). Robust inference with multiway Clustering, Working paper. Chiang, A. C. (1984). Fundamental methods of mathematical economics (3rd ed.). New York: McGraw-Hill. Chow, G. C. (1960). Tests of equality between Sets of coefficients in two linear regressions. Econometrica, 28, 591–605. DeAngelo, H., & DeAngelo, L. (2006). The irrelevance of the MM dividend irrelevance theorem. Journal of Financial Economics, 79, 293–315. DeAngelo, H., DeAngelo, L., & Skinner, D. J. (1996). Reversal of fortune: Dividend signaling and the disappearance of sustained earnings growth. Journal of Financial Economics, 40, 341–371. DeAngelo, H., DeAngelo, L., & Stulz, R. M. (2006). Dividend policy and the earned/contributed capital mix: A test of the life-cycle theory. Journal of Financial Economics, 81, 227–254. Denis, D. J., Denis, D. K., & Sarin, A. (1994). The information content of dividend changes: Cash flow signaling¸overinvestment, and dividend clienteles. Journal of Financial & Quantitative Analysis, 29, 567–587. Denis, D. J., & Osobov, I. (2008). Why do firms pay dividends? International evidence on the determinants of dividend policy. Journal of Financial Economics, 89, 62–82. Easterbrook, F. H. (1984). Two agency-cost explanations of dividends. American Economic Review, 74, 650–659. Fama, E. F., & French, K. R. (2001). Disappearing dividends: Changing firm characteristics or lower propensity to pay? Journal of Financial Economics, 60, 3–43. Fenn, G. W., & Liang, N. (2001). Corporate payout policy and managerial stock incentives. Journal of Financial Economics, 60, 45–72. Gabudean, R. C. (2007). Strategic interaction and the co-determination of firms financial policies, Working Paper. Gordon, M. J. (1962). The savings investment and valuation of a corporation. Review of Economics and Statistics, 44, 37–51. Gould, J. P. (1968). Adjustment costs in the theory of investment of the firm. Review of Economic Studies, 35, 47–55. Grullon, G., Michaely, R., & Swaminathan, B. (2002). Are dividend changes a sign of firm maturity? Journal of Business, 75, 387–424. Gujarati, D. N. (2009). Basic econometrics (5th ed.). New York: McGraw-Hill. Hamilton, J. D. (1994). Time series analysis (1st ed.). Princeton: Princeton University Press. Hansen, B. E. (1996). Inference when a nuisance parameter is not identified under the null hypothesis. Econometrica, 64, 413–430. Hansen, B. E. (1999). Threshold effects in nondynamic panels: Estimation, testing, and inference. 
Journal of Econometrics, 93, 345–368.
Hansen, B. E. (2000). Sample splitting and threshold estimation. Econometrica, 68, 575–603. Healy, P. M., & Palepu, K. G. (1988). Earnings information conveyed by dividend Initiations and omissions. Journal of Financial Economics, 21, 149–175. Higgins, R. C. (1977). How much growth can a firm afford? Financial Management, 6, 7–16. Hogg, R. V., & Craig, A. T. (2004). Introduction to mathematical statistics (6th ed.). Englewood Cliffs: Prentice Hall. Howe, K. M., He, J., & Kao, G. W. (1992). One-time cash flow cash and free cash-flow theory: Share repurchases and special dividends. Journal of Finance, 47, 1963–1975. Intriligator, M. D. (2002). Mathematical optimization and economic theory. Philadelphia: Society for Industrial and Applied Mathematics. Jagannathan, M., Stephens, C. P., & Weisbach, M. S. (2000). Financial flexibility and the choice between dividends and stock repurchases. Journal of Financial Economics, 57, 355–384. Jensen, G. R., Solberg, D. P., & Zorn, T. S. (1992). Simultaneous determination of insider ownership, debt, and dividend policies. Journal of Financial & Quantitative Analysis, 27, 247–263. Jensen, M. C. (1986). Agency costs of free cash flow, corporate finance, and takeovers. American Economic Review, 76, 323–329. John, K., & Williams, J. (1985). Dividends, dilution, and taxes: A signaling equilibrium. Journal of Finance, 40, 1053–1070. Kalay, A., & Loewenstein, U. (1986). The informational content of the timing of dividend announcements. Journal of Financial Economics, 16, 373–388. Kao, C., & Wu, C. (1994). Tests of dividend signaling using the March-Merton model: A generalized friction approach. Journal of Business, 67, 45–68. Kreyszig, E. (2010). Advanced engineering mathematics (10th ed.). New York: Wiley. Lang, L. H. P., & Litzenberger, R. H. (1989). Dividend announcements: Cash flow signaling vs. free cash flow hypothesis? Journal of Financial Economics, 24, 181–191. Lee, C. F., & Shi, J. (2010). Application of alternative ODE in finance and econometric research. In: Lee, C. F., Lee A. C., & Lee, J. (Eds.), Handbook of Quantitative Finance and Risk Management. Springer, 1293-1300. Lerner, E. M., & Carleton, W. T. (1966). Financing decisions of the firm. Journal of Finance, 21, 202–214. Lie, E. (2000). Excess funds and agency problems: An empirical study of incremental cash disbursements. Review of Financial Studies, 13, 219–248. Lie, E. (2005). Operating performance following open market share repurchase announcements. Journal of Accounting and Economics, 39, 411–436. Lintner, J. (1962). Dividends, earnings, leverage, stock prices and the supply of capital to corporations. Review of Economics and Statistics, 44, 243–269. Lintner, J. (1963). The cost of capital and optimal financing of corporate growth. Journal of Finance, 18, 292–310. Lintner, J. (1964). Optimal dividends and corporate growth under uncertainty. Quarterly Journal of Economics, 78, 49–95. Lintner, J. (1965). Security prices, risk, and maximal gains from diversification. Journal of Finance, 20, 587–615. Miller, M. H., & Modigliani, F. (1961). Dividend policy, growth, and the valuation of shares. Journal of Business, 34, 411–433. Miller, M. H., & Rock, K. (1985). Dividend policy under asymmetric information. Journal of Finance, 40, 1031–1051. Mossin, J. (1966). Equilibrium in a capital asset market. Econometrica, 34, 768–783. Nissim, D., & Ziv, A. (2001). Dividend changes and future profitability. Journal of Finance, 56, 2111–2133. Petersen, M. A. (2009). 
Estimating standard errors in finance panel data sets: Comparing approaches. Review of Financial Studies, 22, 435–480. Pratt, J. W. (1964). Risk aversion in the small and in the large. Econometrica, 32, 122–136.
Rozeff, M. S. (1982). Growth, beta and agency costs as determinants of dividend payout ratios. Journal of Financial Research, 5, 249–259. Sharpe, W. F. (1964). Capital asset prices: A theory of market equilibrium under conditions of risk. Journal of Finance, 19, 425–442. Thompson, S. B. (2010). Simple formulas for standard errors that cluster by both firm and time, Working paper. Wallingford, B. A., II. (1972a). An inter-temporal approach to the optimization of dividend policy with predetermined investments. Journal of Finance, 27, 627–635. Wallingford, B. A., II. (1972b). A correction to “An inter-temporal approach to the optimization of dividend policy with predetermined investments”. Journal of Finance, 27, 944–945. Yoon, P. S., & Starks, L. T. (1995). Signaling, investment opportunities, and dividend announcements. Review of Financial Studies, 8, 995–1018. Zeileis, A., Leisc, F., Hornik, K., & Kleiber, C. (2002). strucchange: An R package for testing for structural change in linear regression models. Journal of Statistical Software, 7, 1–38.
80 Modeling Asset Returns with Skewness, Kurtosis, and Outliers
Thomas C. Chiang and Jiandong Li
Contents
80.1 Introduction
80.2 The GARCH-Type Model Based on the EGB2 Distribution
  80.2.1 General Specification
  80.2.2 Modeling Financial Time Series Based on Non-Normal Distributions
80.3 Data and Summary Statistics
80.4 Empirical Evidence
  80.4.1 GARCH(1,1) Model Based on the Normal Distribution
  80.4.2 GARCH(1,1) Model Based on the Student's t Distribution
  80.4.3 GARCH(1,1) Model Based on the EGB2 Distribution
  80.4.4 The Impact of Outliers
80.5 Distributional Fit Test
80.6 Implication of EGB2 Distribution
80.7 Conclusions
Appendix 1: Delta Method and Standard Errors of the Skewness and Kurtosis Coefficients of the EGB2 Distribution
References
Abstract
This chapter uses an exponential generalized beta distribution of the second kind (EGB2) to model the returns on 30 Dow Jones industrial stocks. The model accounts for stock return characteristics, including fat tails, peakedness (leptokurtosis), skewness, clustered conditional variance, and leverage effect.
T.C. Chiang (*), Department of Finance, Drexel University, Philadelphia, PA, USA. e-mail: [email protected]
J. Li, Chinese Academy of Finance and Development (CAFD) and Central University of Finance and Economics (CUFE), Beijing, China. e-mail: [email protected]
The evidence suggests that the error assumption based on the EGB2 distribution is capable of taking care of skewness, kurtosis, and peakedness and therefore is also capable of making good predictions on extreme values. The goodness-of-fit statistic provides supporting evidence in favor of the EGB2 distribution in modeling stock returns. This chapter also finds evidence that the leverage effect is diminished when higher moments are considered. The EGB2 distribution used in this chapter is a four-parameter distribution. It has a closed-form density function, and its higher-order moments are finite and explicitly expressed by its parameters. The EGB2 distribution nests many widely used distributions such as the normal distribution, log-normal distribution, Weibull distribution, and standard logistic distribution.

Keywords
Expected stock return • Higher moments • EGB2 distribution • Risk management • Volatility • Conditional skewness • Risk premium
80.1 Introduction
Focusing on economic rationales, financial economists have identified a set of fundamental variables to predict stock returns over time, including market risk, change in interest rate, inflation rate, real activities, default risk, term premium, dividend yields, and earning yields, among other variables. In the cross-sectional analysis, Fama and French (1996) further emphasize size factor (SMB) and value factor (HML). Depending on the frequency of the data being studied, the Monday effect or the January effect is usually added to the model to highlight calendar anomalies. The empirical evidence of statistical significance to justify these variables is rather diverse. The mixed results have been attributed to variations in sample size, frequency, country, market, and/or model specification. As Avramov (2002) argues, the lack of consensus in choosing the “correct” variables may stem from model uncertainty, since the equilibrium asset pricing theories are not explicit about which variables should be included in the predictive regression. To deal with this uncertainty, researchers occasionally resort to a missing variable. It becomes more apparent as GARCH-type models show that financial data demonstrate some sort of volatility clustering phenomenon. To incorporate the conditional variance into the mean equation is definitely helpful in tying stock returns to volatility (See French et al. 1987; Akgiray 1989; Baillie and DeGennaro 1990; and Bollerslev et al. 1992, among others). However, the GARCH-type specification based on a normal distribution is unsatisfactory for use with data that entail extreme values. Recent financial market developments show that significant daily loss occurs more frequently, and the volatility cannot reasonably be predicted from a normal distribution. The popularity of using a normal distribution assumption lies in the fact that the statistical analysis of stock returns can be simplified, allowing the analyst to focus on the first two moments. This simplification, however, misses the information contained in higher order moments.
To account for higher-order moments is important in modeling stock return series for the following reasons. First, from an econometric point of view, Hansen (1994) notes that empirical specifications of asset pricing models are incomplete unless the full conditional model is specified. Estimation and forecasting accuracy depends on the full specification of the distribution moments. Many authors have found that higher-order moments (and co-moments) can serve as explanatory variables for modeling stock returns (Harvey and Siddique 2000; Patton 2004; Ranaldo and Favre 2005; Bali et al. 2008; Boyer et al. 2010). The exclusion of higher-order moments information in the asset return model is bound to result in missing variable and misspecification problems. Second, from the perspective of empirical finance studies, higher-order moments have particular economic meanings. Johnson and Schill (2006) suggest that FamaFrench factors (SMB and HML) can be viewed as proxies for higher-order co-skewness and co-kurtosis. They show that Fama-French loadings generally become insignificant when higher-order systematic co-moments are included in cross-sectional regressions of portfolio returns. Third, for portfolio management, higher-order moments are considered additional risk instruments in constructing the “new” portfolio theory, as argued by Jurczenko and Maillet (2002) and papers cited there. Further, the underlying theory of stochastic dominance (Vinod 2004) suggests that portfolio selection is determined not only by the conditional mean and variance but also by the skewness and kurtosis. The evidence provided by Harvey et al. (2010) and Cvitanic et al. (2008) substantiates the validity of the new portfolio theory. Moreover, in their recent studies, Andersen and Sornette (2001) and Malevergne and Sornette (2005) find that by incorporating higher-order moments risk, it is possible to increase the expected return on the portfolio while lowering its risks. Similarly, Tang (1998) finds that diversification reduces standard deviation but worsens negative skewness and fat tails in his study of the Hong Kong stock market. The evidence thus points to the fact that pricing risk based exclusively on the second moment may be very misleading. In light of this consideration, existing risk management techniques ought to be revised as well. The significance of higher-order moments has been revealed in a series of dramatic market events, such as the market crash in 1987, the Asian crisis in 1997, the financial collapses of LTCM, the bust of internet bubble, and the subprime loan crisis in 2007. To address excess risk, both financial institutions and regulatory agencies demand risk management techniques to deal with occurrences of extreme values. Although Value at Risk (VaR) has been invented to predict a portfolio’s maximum loss over a target horizon in a given confidence interval, the standard VaR models based on normal distribution often underestimate the potential risk. Three approaches have been developed in the literature to deal with higher-order moments. The first approach is to treat higher-order moments as explanatory variables in the stock return equation. The four-moment CAPM by Jurczenko and Maillet (2002) and Ranaldo and Favre (2005) are the examples. The difficulty of this approach lies in how to generate the explanatory variables. Generating explanatory variables usually relies on higher frequency data or a rolling sample method.
The second approach is to apply the GARCH approach to higher conditional moments. Harvey and Siddique (1999) consider the conditional skewness, Brooks et al. (2005) tackle the autoregressive conditional kurtosis, and Conrad et al. (2009) find that individual securities’ volatility, skewness, and kurtosis are strongly related to subsequent returns. Although these studies are capable of extracting information from the higher-order moments and use them to explain the conditional mean, they have not completely resolved the fundamental issue that the dependent variable frequently violates the assumption of a normal distribution.1 This leads to the third approach: applying non-Gaussian distributions to model stock returns so that higher-order moments are naturally incorporated. This chapter falls into the third category. The knowledge that stock returns are not following the Gaussian distribution dates back to the papers by Mandelbrot (1963) and Fama (1965). Subsequent research includes Officer (1972), Clark (1973), McCulloch (1985), Bollerslev (1987), Nelson (1991), Hansen (1994), Liu and Brorsen (1995), and Mittnik et al. (1999) among others. These studies propose the t distribution, skewed t distribution, general error distribution (GED, also known as exponential power distribution), and a-stable Levy distributions. Briefly speaking, the t distribution is symmetric so that it inherently fails to describe the issue of skewness. The GED is not flexible enough to allow for larger innovations. The stable distribution has theoretical appeal because of the generalized central limit theorem; however, its moments are not defined for an order greater than a. In particular, the variance is not defined except for one special case, normal distribution; the skewness and kurtosis are always not defined. Finally, the skewed t distribution used in Hansen (1994) is far from being parsimonious, and it is hard to interpret its parameters because transformations are imposed. Recognizing the weakness of the above distributions, it is necessary to have a model that encompasses the features of asymmetry, a high peak, and fat tails. We find that an exponential generalized beta distribution of the second kind (EGB2) 2 is able to meet the diverse criteria, which forms the research foundation of this chapter. Results emerging from this chapter show that the EGB2 distribution works very well in dealing with high-order moments of individual stock returns. The evidence indicates that AR(1)-GJR-GARCH(1,1) model based on the EGB2 distribution provides a unique specification in handling the stylized facts of stock return behaviors: autocorrelation, conditional heteroskedasticity, leverage effect, skewness, excess kurtosis, and peakedness. 1 Both Harvey and Siddique (1999) and Brooks et al. (2005) use a t distribution. As shown in this paper, a t distribution has heavy tails but is not a good fit for stock return data with regard to peakedness. 2 There are other names for the EGB2 distribution in other nonfinancial fields or in non-American journals; for example, generalized logistic distribution in Wu et al. (2000), z-distribution in Barndorff-Nielsen et al. (1982), the Burr-type distribution in actuarial science in Hogg and Klugman (1983), and four-parameter kappa distribution in geology in Hosking (1994).
This study contributes to the literature in the following aspects. We find that using EGB2 distribution is superior to models based on a normal distribution and t distribution in handling skewness and kurtosis, as is evident by the goodness-of-fit statistics. Second, the prevalence of the risk management method Value at Risk (VaR) can be handled and updated via the EGB2 distribution. It informs investors that omitting higher moments “. . .will lead to a systematic underestimate of the true riskiness of a portfolio, where risk is measured as the likelihood of achieving a loss greater than some threshold” (Brooks et al. 2005, p. 400). Third, this chapter systematically examines all 30 stocks in the Dow Jones industrial index. The individual stocks cover a broad range of assets and reveal a variety of fat tail characteristics. The model encompasses a rich spectrum of asset features that help in guiding portfolio decisions. Fourth, we find that the asymmetric effect (leverage effect) has been diminished when the EGB2 distribution is applied. It implies that the so-called leverage effect is, at least, partially attributable to the model’s misspecification due to the imposition of a normal distribution of return series. The remainder of the chapter is organized as follows. Section 80.2 describes the methodology of the EGB2-GARCH model. Section 80.3 discusses the data. Section 80.4 presents the empirical results on the stock returns by applying different distributions; Sect. 80.5 reports the goodness-of-fit tests; Sect. 80.6 contains the probability evaluation using the EGB2 distribution. Section 80.7 contains conclusions.
80.2 The GARCH-Type Model Based on the EGB2 Distribution
80.2.1 General Specification

The AR(1)-GARCH(1,1)-GJR-EGB2 stock return model can be represented by a system given below:

$$R_t = \phi_0 + \phi_1 R_{m,t} + \phi_2 R_{t-1} + \delta D87 + e_t \qquad (80.1a)$$

$$e_t = \sqrt{h_t}\,u_t \qquad (80.1b)$$

$$h_t = \omega + \alpha e_{t-1}^2 + \beta h_{t-1} + \gamma\, I(e_{t-1} < 0)\,e_{t-1}^2 \qquad (80.1c)$$

$$e_t\,|\,\Im_{t-1} \sim D(0, h_t; z) \qquad (80.1d)$$
Equation 80.1a is the mean equation, where Rt is an individual stock’s excess return (stock return minus the risk-free rate) at time t; et is an error term. The inclusion of an AR(1) term in the mean equation accounts for the autocorrelation arising from nonsynchronous trading or slow price adjustments
(Lo and MacKinlay 1990; Amihud and Mendelson 1987).3 The market's equity premium (stock market return minus the risk-free rate), $R_{m,t}$, at time t is included in the equation to capture the market risk as suggested by the CAPM. The dummy variable, D87, takes a value of unity in the week of October 19, 1987, and 0 otherwise. The series $u_t$ in Eq. 80.1b is the error standardized by the conditional variance. The conditional variance, $h_t$, is assumed to follow a GARCH(1,1) process; $\omega$, $\alpha$, and $\beta > 0$ to ensure a strictly positive conditional variance; and $I$ is an indicator function that takes a value of 1 only when the error term has a negative value. The $\gamma$ is used to capture the asymmetric effect of an extraordinary shock to the variance: bad news usually results in a bigger effect than good news does. In this study, we adopt the asymmetric GARCH approach suggested by Glosten et al. (1993) for its simplicity and effectiveness. The distribution of $e_t$ is assumed to be a general specification conditional on the distribution captured by the parameter $z$. For the normal distribution, the error follows $e_t\,|\,\Im_{t-1} \sim N(0, h_t)$. As variants of the normal distribution, in this chapter we consider two alternatives: the t and EGB2 distributions.
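For readers who want to reproduce a specification of this type, the sketch below estimates an AR(1)-GJR-GARCH(1,1) model with Student's t errors using the Python arch package. The data source and variable names are hypothetical; the market premium and the 1987 dummy of Eq. 80.1a would enter as exogenous regressors (mean="ARX" with the x argument), and the EGB2 error distribution of Eqs. 80.4–80.5 is not built into arch, so it would require a custom likelihood such as the EGB2 density sketch given at the end of Sect. 80.2.2.

```python
import pandas as pd
from arch import arch_model

# Hypothetical weekly excess returns for one stock; replace with CRSP data.
excess_ret = pd.read_csv("weekly_excess_returns.csv", index_col=0, parse_dates=True)["MSFT"]

# AR(1) mean equation with a GJR-GARCH(1,1) variance equation; o=1 adds the
# asymmetric term gamma * I(e_{t-1} < 0) * e_{t-1}^2 of Eq. 80.1c, dist="t" sets t errors.
# (Exogenous regressors such as the market premium and D87 can be added via mean="ARX", x=...)
model = arch_model(excess_ret, mean="AR", lags=1, vol="GARCH", p=1, o=1, q=1, dist="t")
result = model.fit(disp="off")
print(result.summary())
```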
80.2.2 Modeling Financial Time Series Based on Non-Normal Distributions

Student's t distribution is well known for its capacity to capture the fat tail phenomenon. Bollerslev (1987), Bollerslev et al. (1994), and Hueng and McDonald (2005) incorporated the t distribution into the GARCH model specification. The pdf of a normalized Student's t distribution takes the form of

$$t(x; d, s, v) = \frac{\Gamma\!\left(\frac{v+1}{2}\right)}{\Gamma\!\left(\frac{v}{2}\right)\, s\,\sqrt{\pi(v-2)}}\left[1 + \frac{1}{v-2}\left(\frac{x-d}{s}\right)^{2}\right]^{-\frac{v+1}{2}} \qquad (80.2)$$

where x is a random variable, v is the degree of freedom of the t distribution (v > 2), and $\Gamma$ is the gamma function. The excess kurtosis coefficient of the t distribution is $6/(v-4)$ for v > 4. In light of system (80.1a–80.1d), the only change is in the error distribution, which is given by $e_t\,|\,\Im_{t-1} \sim t(0, h_t, v)$. From this perspective,
3 Depending on the significance test of the AR(1) coefficient in the AR(1)-GARCH(1,1) model, the AR(1) term is dropped for some stocks. The following stocks do not have an AR(1) variable: MSFT, HON, DD, GM, IBM, MO, CAT, BA, PFE, AA, DIS, MCD, JPM, and INTC. Stock PG, the only one whose Q(30) statistic shows any significance, adds an AR(4) variable to ensure that the autocorrelation is removed. The rest of this paper follows this pattern. Recent literature suggests that the sign of the AR(1) coefficient, $\phi_2$, can be used to detect feedback trading behavior (Sentana and Wadhwani 1992; Antoniou et al. 2005). Our results show that the coefficient of AR(1) is negative.
both the coefficients and the degree of freedom of the t distribution are estimated simultaneously by maximizing the following log-likelihood function:

$$\log L = T\left[\log\Gamma\!\left(\frac{v+1}{2}\right) - \log\Gamma\!\left(\frac{v}{2}\right) - 0.5\log\big(\pi(v-2)\big)\right] - 0.5\sum_t\left[\log(h_t) + (v+1)\log\!\left(1 + \frac{e_t^2}{h_t(v-2)}\right)\right] \qquad (80.3)$$
Although the t distribution is good at modeling fat tails in time series data, its shortcoming is its built-in symmetrical nature. The distribution is therefore unable to take care of the skewness characteristic present in financial time series. Thus, we turn to an exponential generalized beta distribution of the second kind (EGB2) developed by McDonald (1984, 1991) and McDonald and Xu (1995). EGB2 is attractive because of its simplicity and ease in estimating the parameters.4 There is a closed-form density function for the EGB2 distribution; its higher-order moments are finite and explicitly expressed by its parameters. Moreover, it is flexible and able to accommodate a wider range of data characteristics, such as thick tails and skewness, than the more commonly used normal and log-normal distributions. The EGB2 distribution has the probability density function (pdf) given by

$$EGB2(x; d, s, p, q) = \frac{e^{\,p\left(\frac{x-d}{s}\right)}}{|s|\,B(p,q)\left[1 + e^{\left(\frac{x-d}{s}\right)}\right]^{p+q}} \qquad (80.4)$$
where x is a random variable; d is a location parameter that affects the mean of the distribution; s reflects the scale of the density function; p and q (p > 0 and q > 0) are shape parameters that together determine the skewness and kurtosis of the distribution of the excess return series; B(p, q) is the beta function.5 As suggested by McDonald
4
It is not our intention to exhaust all the non-Gaussian models in our study, which is infeasible. Rather, our strategy is to adopt a distribution that is rich enough to accommodate the features of financial data. To our knowledge, there are different types of flexible parametric distributions parallel to the EGB2 distribution to model both third and fourth moments in the literature. One family of such distributions is a skewed generalized t distribution (SGT) (Theodossiou 1998; Hueng and Brooks 2003). Special cases of SGT include generalized t distribution (McDonald and Newey 1988), skewed t distribution (Hansen 1994), and skewed generalized error distribution (SGED) (Nelson 1991). The skewness and excess kurtosis of SGT are in the range (1, 1) and (1.8, 1), respectively. Another family is inverse hyperbolic sin distribution (IHS) (Johnson 1949 and Johnson et al. 1994). The skewness and excess kurtosis of IHS is in the range (3, 1) and (1, 1). EGB2 has less coverage for skewness and excess kurtosis than SGT and IHS. However, it covers many skewness-kurtosis combinations encountered in practice and performs “impressively” in estimating the slope coefficient in a simulation (Theodossiou et al. 2007). Other families of flexible distributions are also available in the literature. But there isn’t any comparison with the EGB2 distribution. 5 It should be noted that beta function here has nothing to do with the stock’s beta.
(1991), the EGB2 is suitable for coefficient of skewness values between −2 and 2 and coefficient of excess kurtosis values up to 6. The distribution is capable of accommodating fat-tailed and skewed error distributions pertinent to stock return modeling.6 For the standardized EGB2 distribution with shape parameters p and q, the univariate GARCH-EGB2 log-likelihood function is7

$$\log L = T\Big[\log\sqrt{\Omega} - \log B(p,q)\Big] + \sum_t\left[p\left(\frac{\sqrt{\Omega}\,e_t}{\sqrt{h_t}} + \Delta\right) - 0.5\log(h_t) - (p+q)\log\!\left(1 + \exp\!\left(\frac{\sqrt{\Omega}\,e_t}{\sqrt{h_t}} + \Delta\right)\right)\right] \qquad (80.5)$$

where $\Delta = \psi(p) - \psi(q)$, $\Omega = \psi'(p) + \psi'(q)$, and $\psi$ and $\psi'$ represent the digamma and trigamma functions, respectively.8 The BFGS algorithm is used in RATS to conduct maximum likelihood estimation. The skewness and excess kurtosis for the EGB2 distribution are given respectively by

$$\text{Skewness} = g(p,q) = \frac{\psi''(p) - \psi''(q)}{\big(\psi'(p) + \psi'(q)\big)^{1.5}} \qquad (80.6)$$

$$\text{Kurtosis} = h(p,q) = \frac{\psi'''(p) + \psi'''(q)}{\big(\psi'(p) + \psi'(q)\big)^{2}} \qquad (80.7)$$

where $\psi''$ and $\psi'''$ represent the tetragamma and pentagamma functions. Since the skewness and kurtosis coefficients are based on the parameters p and q, the standard deviations of the skewness and kurtosis coefficients can be derived by using a standard delta method (see the appendix for details). By using these measures, we can judge whether the EGB2 distribution correctly handles skewness and kurtosis.
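As an illustration of Eqs. 80.5–80.7, the sketch below evaluates the standardized EGB2 log-density and computes the implied skewness and excess kurtosis from p and q using scipy's polygamma functions; this is a minimal sketch of the formulas above (with illustrative parameter values), not the authors' RATS code.

```python
import numpy as np
from scipy.special import betaln, polygamma

def egb2_std_logpdf(u, p, q):
    """Log-density of the standardized (zero-mean, unit-variance) EGB2, per Eq. 80.5."""
    delta = polygamma(0, p) - polygamma(0, q)   # psi(p) - psi(q)
    omega = polygamma(1, p) + polygamma(1, q)   # psi'(p) + psi'(q)
    z = np.sqrt(omega) * u + delta
    return 0.5 * np.log(omega) - betaln(p, q) + p * z - (p + q) * np.log1p(np.exp(z))

def egb2_skew_kurt(p, q):
    """Skewness (Eq. 80.6) and excess kurtosis (Eq. 80.7) implied by shape parameters p, q."""
    trig = polygamma(1, p) + polygamma(1, q)
    skew = (polygamma(2, p) - polygamma(2, q)) / trig**1.5
    exkurt = (polygamma(3, p) + polygamma(3, q)) / trig**2
    return skew, exkurt

# Example: a mildly left-skewed, fat-tailed case (p < q gives negative skew; see footnote 6)
print(egb2_skew_kurt(0.6, 0.9))
print(egb2_std_logpdf(np.array([-3.0, 0.0, 3.0]), 0.6, 0.9))
```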
80.3 Data and Summary Statistics
When asset returns are analyzed, movements in the Dow Jones Industrial Average (DJIA) are often considered one of the most important pieces of news that indicate 6
Many distributions are nested in the EGB2 distribution. Wang et al. (2001) show that the EGB2 distribution is very powerful in modeling exchange rates that have fat tails and leptokurtosis features. The EGB2 converges to normal distribution as p ¼ q approaches infinity, to log-normal distribution when only p approaches infinity, to the Weibull distribution when p ¼ 1 and q approaches infinity, and to the standard logistic distribution when p ¼ q ¼ 1. It is symmetric (called a Gumbel distribution) for p ¼ q. The EGB2 is positively (negatively) skewed as p > q (p < q) for s > 0. 7 This can be obtained in the appendix of Wang et al. (2001). 8 The digamma function is the logarithmic derivative of the gamma function; the trigamma function is the derivative of the digamma function.
the health of financial markets and investment performance. This chapter uses the DJIA’s 30 stocks as the sample, which represents a group of well-established and diverse companies. The sample covers the period from October 29, 1986, through December 31, 2005. One of the reasons for using this period is its completeness, so we can employ and assess information on all 30 stocks in the sample period.9 This time period also captures the most vigorous recent stock market advancements while covering several major market crashes and financial crises. Following the conventional approach, returns from the Standard & Poor’s 500 (SP 500) index are used to measure the market return. Both daily returns on the S&P 500 index and data on the 30 Dow Jones firms are compounding and including dividend payments. These data are taken from the CRSP database. The short-term interest rate is measured by the 3-month Treasury bill rate, which is taken from the Federal Reserve’s website.10 The daily risk-free rate is measured by using the annual rate divided by 360. The excess stock return is the difference between actual stock returns and the short-term interest rate. Weekly data are used in order to be consistent with industrial practice. For example, Value Line, Bloomberg, and Baseline all use weekly data on stocks to calculate the stocks’ beta. Daily stock returns are seldom used in industry. It also helps to smooth out the volatility for single date outliers. An additional advantage of using weekly observations is that some calendar effects, such as the Monday effect, can be avoided. The excess returns are measured on a weekly basis. Table 80.1 reports summarized statistics for the weekly excess returns. Looking at Table 80.1, we find that six stocks have a positive value for the skewness coefficient and two are significant at the 1 % level, while the remaining 24 stocks show negative values and 13 of them are significant at the 1 % level.11 A negative skewness coefficient means that there are more negative extreme values
9
Stock C (CitiGroup) in Table 80.1 began its trading data on Oct 29, 1986. Within this period, only one stock has one missing value. Stock MO (Philips Morris Co.) was not traded on May 25, 1994, because of “pending news which could affect the stock price.” Philip Morris’ board was meeting to announce whether the company would split its food and tobacco units on May 25, 1994. In this sample period, the most striking event is the market crash on October 19, 1987. This paper considers the 1987 market crash as an outlier in the later part. The week of the 9–11 terrorist attacks has only 1 day of trading information and is incorporated into the next week. 10 http://www.federalreserve.gov/releases/H15/data.htm#top, Treasury bill secondary market rates (serial: tbsm3m) are the averages of the bid rates quoted on a bank discount basis by a sample of primary dealers who report to the Federal Reserve Bank of New York. The rates reported are based on quotes at the official close of the US government securities market for each business day. 11 The sign of the skewness coefficient is related to data frequency. The skewness of the weekly return has nothing to do with the skewness of the daily return. For example, the stock HON (index ¼ 2) shows significant positive skewness in its daily return but significant negative skewness in its weekly return. 12 The skewness coefficient is the relationX between the second order moment and the third order moment. It is calculated by ðT1ÞðTT2Þs3 ðxi mÞ3 where m is the mean of the sample. The literature about the positive and negative values of the distribution skewness is confusing. The skewness in our study is based on the distribution’s moments (Kenney and Keeping 1962).
E.I. DuPont de Nemours & Co.
Exxon Mobil Corp.
General Electric Co.
General Motors Corp.
International Business Machines Corp. IBM
Altria Group Inc.
United Technologies Corp.
Procter & Gamble Co.
4
5
6
7
8
9
10
11
PG
UTX
MO
GM
GE
XOM
DD
KO
Coca-Cola Co.
3
999
999
999
999
999
999
999
999
999
999
HON
Honeywell International Inc.
2
Variance Skewness 0.00243 0.1054 [1.36] 0.00234 0.00189 0.7458 [9.62]*** 0.00264 0.00122 0.2049 [2.64]*** 0.00186 0.00141 0.1613 [2.08]** 0.00253 7.84E-04 0.1485 [1.92]* 0.00297 0.00119 0.1052 [1.36] 8.57E-04 0.00176 0.1895 [2.45]** 0.00168 0.00159 0.0399 [0.51] 0.00361 0.00157 0.3389 [4.37]*** 0.00296 0.00147 1.4454 [18.6]*** 0.00304 0.00125 2.09
Ticker Nobs Mean MSFT 999 0.00623
Index Company name 1 Microsoft Corp.
Table 80.1 Descriptive statistics for weekly excess stock returns: 1986–2005 Kurtosis 1.7434 [11.25]*** 10.4318 [67.30]*** 1.5426 [9.95]*** 1.4728 [9.50]*** 1.3036 [8.41]*** 3.2818 [21.17]*** 2.192 [14.14]*** 2.3767 [15.33]*** 3.9432 [25.44]*** 12.8243 [82.74]*** 24.1258
Peakedness Jarque-Bera Q(30) 1.1322 128.3609 40.0341 0*** 0.1 1.0107 4,622.3848 66.6923 0*** 0*** 1.0786 106.0431 35.541 0*** 0.22 1.0986 94.6198 51.4363 0*** 0.01*** 1.1647 74.4108 119.402 0*** 0*** 1.1407 450.1587 50.5722 0*** 0.01** 1.1856 205.9881 35.2411 0*** 0.23 1.0927 235.388 40.55 0*** 0.09* 1.0472 666.3313 37.7432 0*** 0.16 1.0301 7,193.6328 76.4998 0*** 0*** 1.055 24,955.353 99.2438
Q2(30) 210.3496 0*** 138.3476 0*** 222.6358 0*** 278.872 0*** 208.3203 0*** 197.9357 0*** 26.7299 0.64 198.6334 0*** 79.336 0*** 43.2729 0.06* 45.9705
Caterpillar Inc.
Boeing Co.
Pfizer Inc.
Johnson & Johnson
3 M Co.
Merck & Co. Inc.
Alcoa Inc.
Walt Disney Co.
Hewlett-Packard Co.
McDonald’s Corp.
JPMorgan Chase & Co.
Wal-Mart Stores Inc.
12
13
14
15
16
17
18
19
20
21
22
23
999
999
999
999
999
999
999
999
999
999
WMT 999
JPM
MCD
HPQ
DIS
AA
MRK
MMM 999
JNJ
PFE
BA
CAT
0.00324
0.00252
0.0022
0.00322
0.00246
0.00279
0.00238
0.00229
0.00299
0.00292
0.00243
0.00319
[26.9]*** 0.00187 0.1223 [1.58] 0.00175 0.9363 [12.0]*** 0.00151 0.2471 [3.19]*** 0.00109 0.0269 [0.35] 9.87E-04 0.0023 [0.03] 0.00145 0.3045 [3.93]*** 0.00208 0.4607 [5.95]*** 0.00166 0.2527 [3.26]*** 0.00285 0.1841 [2.38]** 0.00123 0.0701 [0.90] 0.00239 0.0532 [0.69] 0.00161 0.0414
[155.6]*** 3.4189 [22.06]*** 8.9094 [57.48]*** 1.6742 [10.80]*** 2.3118 [14.92]*** 2.3585 [15.22]*** 2.4453 [15.78]*** 6.0484 [39.02]*** 2.9907 [19.30]*** 2.1588 [13.93]*** 1.085 [7.00]*** 1.9238 [12.41]*** 1.3849 1.1689
1.079
1.2158
1.1304
1.2043
1.1334
1.1817
1.0821
1.18
1.2616
1.0939
1.1005
0*** 489.0329 0*** 3,450.0172 0*** 126.8375 0*** 222.5903 0*** 231.5415 0*** 264.3375 0*** 1,558.134 0*** 382.9379 0*** 199.6392 0*** 49.8201 0*** 154.5247 0*** 80.1245
0*** 0.03** 49.6413 63.4811 0.01** 0*** 26.1619 85.6471 0.67 0*** 46.246 95.4955 0.03** 0*** 51.99 123.6244 0.01** 0*** 43.9957 202.3982 0.05** 0*** 34.283 58.3937 0.27 0*** 56.6534 73.5826 0*** 0*** 38.326 77.1463 0.14 0*** 51.4856 186.6745 0.01*** 0*** 36.5428 140.4038 0.19 0*** 42.948 353.6849 0.06* 0*** 47.2936 333.5604 (continued)
Verizon Communications Inc.
AT&T
Home Depot Inc.
American International Group Inc.
Citigroup Inc.
S&P 500 (market)
26
27
28
29
30
31
999
999
999
999
999
999
999
0.00133
0.00422
0.00278
0.00539
0.00196
0.00152
0.0055
Variance Skewness [0.53] 0.0018 0.2135 [2.75]*** 0.00342 0.632 [8.15]*** 0.00116 0.1909 [2.46]** 0.00133 0.2295 [2.96]*** 0.0023 0.4238 [5.47]*** 0.00139 0.3886 [5.01]*** 0.00208 0.1921 [2.48]** 4.62E-04 0.5292 [6.83]***
Kurtosis [8.94]*** 2.7292 [17.61]*** 3.8506 [24.84]*** 1.8996 [12.26]*** 3.1149 [20.10]*** 4.5083 [29.09]*** 2.8457 [18.36]*** 2.9292 [18.90]*** 2.8903 [18.65]***
Peakedness Jarque-Bera 0*** 1.2052 317.6268 0*** 1.2007 683.6673 0*** 1.2103 156.263 0*** 1.1904 412.6303 0*** 1.1297 875.9315 0*** 1.2107 362.2258 0*** 1.1439 363.2863 0*** 1.173 394.3576 0***
Q(30) 0.02** 45.6515 0.03** 29.5866 0.49 55.9734 0*** 51.7092 0.01*** 36.4251 0.19 44.6777 0.04** 42.9245 0.06* 58.9947 0***
Q2(30) 0*** 148.3075 0*** 87.1171 0*** 254.7223 0*** 300.738 0*** 165.8335 0*** 112.3984 0*** 93.5565 0*** 238.3201 0***
The 30 stocks are sorted by permanent CRSP number. nobs are the number of observations. The last row market is measured by the S&P 500. Numbers below coefficients are t-values (with bracket). Numbers below tests are p-values. ***Indicates 1 % significance, **5 %, and *10 %. The standard deviations of skewness and excess kurtosis coefficients are given approximately by (6/T)0.5 and (24/T)0.5, respectively. The peakedness is measured by f0.75-f0.25, the distance between the values of standardized variables at which the cumulative distribution function equals 0.75 and the value at which the cumulative distribution function equals 0.25. The reference value of the standard normal distribution is 1.35. A number of peakedness less than 1.35 means there is a high peak in the probability density function. The normality test is conducted using a Jarque-Bera statistic. The independence test is conducted using a Ljung-Box Q test up to the order of 30. The Q2 test up to the order of 30 is to show volatility clustering
C
AIG
HD
T
VZ
INTC
Intel Corp.
25
0.00284
AXP
American Express Co.
24
999
Ticker Nobs Mean
Index Company name
Table 80.1 (continued)
than positive extreme values in the sample period.12 With respect to the excess kurtosis (kurtosis coefficient minus 3), all of the estimated values are statistically significant at the 1 % level, suggesting a serious fat-tailed problem. The range of the excess kurtosis coefficient is between 1.08 and 24.13. Checking the range of peakedness measured by the inter-quartile range (i.e., the 0.75 fractile minus the 0.25 fractile), we find that it lies between 1.01 and 1.26. This range is much lower than the reference figure, 1.35, indicating the presence of a high peak in the probability density function for all of the stocks under investigation. Testing for dependence, the Ljung-Box Q statistics show that ten stocks are serially autocorrelated, and 27 of the 30 stocks are autocorrelated in the squared term as shown by the Q² test. The latter suggests a volatility clustering phenomenon and is consistent with a GARCH-type specification. Inspection of the Jarque-Bera statistics uniformly rejects normality for all 30 stocks.13 The preliminary statistical results from Table 80.1 clearly indicate that the popular normality assumption does not conform to the weekly returns. The individual stock returns often show positive excess kurtosis (fat tails), accompanied by skewness. The evidence on peakedness is not in agreement with the normal distribution either. Besides the non-Gaussian features, some weekly stock returns exhibit autocorrelation, and almost all of them feature volatility clustering.
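The summary diagnostics reported in Table 80.1 (skewness, excess kurtosis, inter-quartile peakedness, Jarque-Bera, and Ljung-Box Q and Q² statistics) can be reproduced along the following lines; the return series name is hypothetical.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.diagnostic import acorr_ljungbox

def weekly_return_diagnostics(r):
    """Summary statistics in the spirit of Table 80.1 for one weekly excess return series."""
    r = np.asarray(r, dtype=float)
    z = (r - r.mean()) / r.std(ddof=1)
    return {
        "mean": r.mean(),
        "variance": r.var(ddof=1),
        "skewness": stats.skew(r),
        "excess_kurtosis": stats.kurtosis(r),          # already excess (kurtosis - 3)
        # Peakedness: inter-quartile range of the standardized series (1.35 under normality)
        "peakedness": np.quantile(z, 0.75) - np.quantile(z, 0.25),
        "jarque_bera_p": stats.jarque_bera(r).pvalue,
        "Q(30)_p": acorr_ljungbox(r, lags=[30])["lb_pvalue"].iloc[0],
        "Q2(30)_p": acorr_ljungbox(r**2, lags=[30])["lb_pvalue"].iloc[0],
    }

# Usage sketch: diagnostics = weekly_return_diagnostics(excess_ret)  # excess_ret from CRSP
```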
80.4 Empirical Evidence
In this section, we estimate the system of equations from Eq. 80.1a through 80.1d and present evidence of the GARCH(1,1) model based on different distributions. We analyze the impact of outliers on the EGB2 distribution.
80.4.1 GARCH(1,1) Model Based on the Normal Distribution

Table 80.2 reports the estimates of the GARCH(1,1) model based on the assumption that the error series follows a normal distribution, $e_t\,|\,\Im_{t-1} \sim N(0, h_t)$.14 Looking at the t-statistics, the null hypothesis of the absence of skewness is rejected at the 1 % level for 11 out of 30 cases (four positive and seven negative), while the null hypothesis of the absence of excess kurtosis is rejected in all of the cases. Moreover, the Jarque-Bera tests show that all of the return residuals are rejected by assuming
13
All the non-normality features are more remarkable in daily data but less so in monthly data. This is consistent with Brown and Warner’s (1985) report that the non-normal features tend to vanish in low-frequency data, such as monthly observations. Even so, subject to individual monthly stock returns, the Jarque-Bera test rejects the normality for 23 of the 30 stocks at the 1 % level. 14 The standardized residuals are obtained by dividing the estimated regression residuals by its conditional standard deviation. Standardizing the error term makes the distribution comparison feasible. Mean and variance are not reported in the table due to the use of normalization.
Table 80.2 Estimates of the GARCH(1,1) normal distribution: weekly data, 1986–2005
[The table reports, for each of the 30 Dow Jones stocks (index 1–30), the skewness, kurtosis, peakedness, JB, Q(30), and Q2(30) statistics of the standardized residuals, together with the coefficient estimates φ0, φ1, φ2, δ, ω, α, β, and γ.]
The 30 stocks are sorted by permanent CRSP number. Numbers below coefficients are t-values (in brackets). Numbers below tests are p-values. ***Indicates 1 % significance, **5 %, and *10 %. The standard deviations of the skewness and excess kurtosis coefficients are given approximately by (6/T)^0.5 and (24/T)^0.5, respectively. The peakedness is measured by f0.75 − f0.25, the distance between the value of the standardized variable at which the cumulative distribution function equals 0.75 and the value at which it equals 0.25. The reference value for the standard normal distribution is 1.35; a peakedness less than 1.35 means there is a high peak in the probability density function. The normality test is conducted using the Jarque-Bera (JB) statistic. The independence test is conducted using a Ljung-Box Q test up to order 30. The Q2 test up to order 30 is used to detect volatility clustering. The model is
$R_t = \phi_0 + \phi_1 R_{m,t} + \phi_2 R_{t-1} + \delta D_{87} + e_t$ (1.a)
$e_t = \sqrt{h_t}\, z_t$ (1.b)
$h_t = \omega + \alpha e_{t-1}^2 + \beta h_{t-1} + \gamma I(e_{t-1} < 0)\, e_{t-1}^2$ (1.c)
$e_t \mid \Im_{t-1} \sim N(0, h_t)$ (1.d)
The following stocks do not have an AR(1) variable: MSFT, HON, DD, GM, IBM, MO, CAT, BA, PFE, AA, DIS, MCD, JPM, and INTC. Stock PG is the only one to have an AR(4) variable in the mean equation, which ensures that autocorrelation has been removed.
Gaussian distribution. If we check further into the measure of peakedness, the estimated values range from 1.06 to 1.30. All of these figures are lower than the reference point of a standard normal distribution, 1.35, indicating that all of the return series are leptokurtic. It is apparent that the assumption that the residuals of the estimated financial data are normally distributed is invalid.
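For readers who want to reproduce this kind of estimation, the sketch below fits a GJR-GARCH(1,1) market model with normal errors using the `arch` package and then computes the peakedness of the standardized residuals. The series names `r` (stock return) and `rm` (market return) are assumptions, the 1987 crash dummy and the stock-specific AR terms are omitted for brevity, and only the normal and Student's t error distributions are built into `arch` (the EGB2 case would require a custom likelihood); this is an illustration under those assumptions, not the authors' code.

```python
# Sketch: GJR-GARCH(1,1) market model under normal errors, plus the peakedness check.
# Assumes `r` and `rm` are aligned pandas Series of weekly stock and market returns.
import numpy as np
from scipy import stats
from arch import arch_model

am = arch_model(r, x=rm, mean="ARX", lags=1,   # R_t on R_{m,t} and R_{t-1}
                vol="GARCH", p=1, o=1, q=1,    # o=1 adds the GJR leverage term gamma
                dist="normal")
res = am.fit(disp="off")

z = res.std_resid.dropna()                     # standardized residuals e_t / sqrt(h_t)
peak = np.quantile(z, 0.75) - np.quantile(z, 0.25)
print(f"peakedness = {peak:.3f} (standard normal reference: 1.35)")
print("Jarque-Bera:", stats.jarque_bera(z))
```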
80.4.2 GARCH(1,1) Model Based on the Student's t Distribution
Estimating the model with a t distribution indicates that the excess kurtosis has mostly been removed from the estimated residuals. As shown in Table 80.3, 29 stocks have insignificant excess kurtosis coefficients, which demonstrates the effectiveness of the t distribution in modeling excess kurtosis. However, the problem of skewness has not been resolved at all: 18 out of 30 skewness coefficients are significant at the 5 % level or higher, with four significantly positive and eight significantly negative at the 1 % level.15 Another problem emerging from this model is insufficient peakedness of the distribution. The range of the estimated degrees of freedom is (3.9, 11.1), which corresponds to a peakedness range of (1.53, 1.39). Note that the actual peakedness measured in Table 80.3 is in the range (1.02, 1.29), indicating the presence of leptokurtosis. The evidence shows that the t distribution is worse than the normal distribution in modeling peakedness (see Fig. 80.1).
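As a quick check on this comparison, the quartile distance of a t distribution with v degrees of freedom can be computed directly. The values below are roughly in line with the reference numbers reported in parentheses in Table 80.3 (about 1.39 for v near 11), although the authors' exact convention is not spelled out, so treat this as an assumption.

```python
# Sketch: interquartile "peakedness" of the normal and of Student's t(v) distributions.
from scipy import stats

print(round(stats.norm.ppf(0.75) - stats.norm.ppf(0.25), 3))   # ~1.349, the 1.35 reference
for v in (4.0, 6.5, 11.1):                                     # illustrative degrees of freedom
    iqr = stats.t.ppf(0.75, df=v) - stats.t.ppf(0.25, df=v)
    print(v, round(iqr, 3))                                    # shrinks toward 1.35 as v grows
```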
80.4.3 GARCH(1,1) Model Based on the EGB2 Distribution
To advance our study, we reestimate the GARCH(1,1) model by employing the EGB2 distribution. Table 80.4 reports the comparable statistics based on the standardized residuals from the GARCH(1,1) model cum EGB2 distribution: $e_t \mid \Im_{t-1} \sim \mathrm{EGB2}(0, h_t, p, q)$. The results show that the skewness problem has been alleviated for most cases by using the EGB2 distribution: only five stocks still show the presence of skewness. Turning to the statistics of excess kurtosis, we find that the EGB2 distribution works well on some stocks' kurtosis but not all of them; the evidence in Table 80.4 indicates that nine stocks still show excess kurtosis. Table 80.4 also contains the range of p (0.334, 1.776) and of q (0.348, 1.669). The reported p- and q-values suggest that the residuals' distributions are far from
15 To deal with the skewness, a number of skewed t distributions have been proposed (Theodossiou 1998; Hueng and Brooks 2003). One obvious drawback of a skewed t distribution in our study is the outcome of its peakedness measurement, which displays platykurtosis (a flat-topped density). This appears to be the opposite of the leptokurtic stock returns. For this reason, we do not report results from the GARCH model based on a skewed t distribution and focus instead on the EGB2 distribution. However, the results are available upon request.
Table 80.3 Statistics of the standardized errors on the GARCH(1,1)-t distribution: weekly data, 1986–2005
[The table reports, for each of the 30 stocks, the skewness, kurtosis, peakedness (with the reference value of the estimated t distribution in parentheses), Q(30), and Q2(30) statistics of the standardized residuals, together with the coefficient estimates φ0, φ1, φ2, δ, ω, α, β, γ and the estimated degrees of freedom v.]
The 30 stocks are sorted by permanent CRSP number. Numbers below coefficients are t-values (in brackets). Numbers below tests are p-values. ***Indicates 1 % significance, **5 %, and *10 %. The standard deviations of the skewness coefficients are given approximately by (6/T)^0.5. The excess kurtosis coefficient of the t distribution is given by 6/(v − 4) for v > 4; its standard deviation is obtained using the delta method. The peakedness is measured by f0.75 − f0.25, the distance between the value of the standardized variable at which the cumulative distribution function equals 0.75 and the value at which it equals 0.25. The reference value of the standard normal distribution is 1.35. The reference value for the estimated t distribution is reported below the actual peakedness (in parentheses) and is in the range (1.39, 1.53). A peakedness less than the reference value means there is a high peak in the probability density function. The normality test is omitted, since the assumption is a Student's t distribution. The independence test is conducted using a Ljung-Box Q test up to order 30. The Q2 test up to order 30 is used to detect volatility clustering. The model is
$R_t = \phi_0 + \phi_1 R_{m,t} + \phi_2 R_{t-1} + \delta D_{87} + e_t$ (1.a)
$e_t = \sqrt{h_t}\, z_t$ (1.b)
$h_t = \omega + \alpha e_{t-1}^2 + \beta h_{t-1} + \gamma I(e_{t-1} < 0)\, e_{t-1}^2$ (1.c)
$e_t \mid \Im_{t-1} \sim t(0, h_t, v)$ (1.d)
The following stocks do not have an AR(1) variable: MSFT, HON, DD, GM, IBM, MO, CAT, BA, PFE, AA, DIS, MCD, JPM, and INTC. Stock PG is the only one to have an AR(4) variable in the mean equation, which ensures that autocorrelation has been removed.
Fig. 80.1 Comparisons of the probability density functions of three distributions. Note: This chart compares the probability density function (pdf) of three distributions over the standardized variable X. The solid line is the EGB2 distribution, the dashed line is the normal distribution, and the dash-dot line is the Student's t distribution. The distribution parameters are estimated from stock MSFT (index = 1): p = 1.0233; q = 0.7971; v = 6.4635.
Table 80.4 Statistics of the standardized errors on the GARCH(1,1)-EGB2 estimates: weekly data, 1986–2005
[The table reports, for each of the 30 stocks, the skewness, kurtosis, peakedness (with the EGB2 reference value in parentheses), Q(30), and Q2(30) statistics of the standardized residuals, together with the coefficient estimates φ0, φ1, φ2, δ, ω, α, β, γ and the estimated shape parameters p and q.]
The 30 stocks are sorted by permanent CRSP number. Numbers below coefficients are t-values (in brackets). Numbers below the Q test and Q2 test are p-values. ***Indicates 1 % significance, **5 %, and *10 %. The predicted higher moments are given by the formulas $\mathrm{Skewness} = [\psi''(p) - \psi''(q)]/[\psi'(p) + \psi'(q)]^{1.5}$ and $\mathrm{Kurtosis} = [\psi'''(p) + \psi'''(q)]/[\psi'(p) + \psi'(q)]^{2}$. Their standard deviations are obtained using the delta method. The peakedness is measured by f0.75 − f0.25, the distance between the value of the standardized variable at which the cumulative distribution function equals 0.75 and the value at which it equals 0.25. The reference value of the standard normal distribution is 1.35; a peakedness less than 1.35 means there is a high peak in the probability density function. The reference value of the EGB2 distribution is reported below the peakedness in parentheses and is in the range (1.07, 1.26). The normality test is omitted, since the assumption is the EGB2 distribution. The independence test is conducted using a Ljung-Box Q test up to order 30. The Q2 test up to order 30 is used to detect volatility clustering. The estimated model is
$R_t = \phi_0 + \phi_1 R_{m,t} + \phi_2 R_{t-1} + \delta D_{87} + e_t$ (1.a)
$e_t = \sqrt{h_t}\, z_t$ (1.b)
$h_t = \omega + \alpha e_{t-1}^2 + \beta h_{t-1} + \gamma I(e_{t-1} < 0)\, e_{t-1}^2$ (1.c)
$e_t \mid \Im_{t-1} \sim \mathrm{EGB2}(0, h_t, p, q)$ (1.d)
An AR(1) variable is excluded from the following stocks in the estimated equation: MSFT, HON, DD, GM, IBM, MO, CAT, BA, PFE, AA, DIS, MCD, JPM, and INTC. Stock PG contains the only AR(4) variable in the mean equation.
the normal distribution that requires that both p- and q-values approach infinity. Based on the estimated shape parameters, the expected peakedness for the 30 stocks is in the range (1.07, 1.26). The peakedness obtained from the residuals of the mean equation is in the range (1.06, 1.30), conforming to the high peak implied by the EGB2 distribution.
With respect to the beta coefficients, we find that the estimated values are highly significant, ranging from 0.69 to 1.32. The evidence suggests that market risk is still one of the most influential factors for predicting individual stocks. It is of interest to compare the beta values and the associated standard errors across different distributions. As may be seen from Fig. 80.2, where the figures are mainly reproduced from Tables 80.2, 80.3, and 80.4, we find no significant difference among them for the estimated betas. This is not surprising, since the estimates of the betas are obtained from the average effect over the whole probability space. Our finding is consistent with the results from Nelson (1991) and Hansen (1994).
While inspecting the lagged individual stock return variable, about half of the coefficients are negative and statistically significant, indicating that a mean reversion process is present in the weekly data. Turning to the 1987 market crash dummy, the testing results show that 20 out of 30 stocks are significant at the 5 % level, although the signs are mixed. The diverse movements signify the profound impact of an influential observation.
Consistent with most financial data, with a few exceptions, the coefficients of the GARCH equation for each stock are found to be highly significant. One of the most striking results emerging from the estimations is that, when testing the leverage effect, only four stocks are found to be statistically significant at the 5 % level. The number of stocks that present asymmetric effects has been reduced dramatically, as compared with the statistics reported in Table 80.2, where 15 stocks show a significant asymmetric effect. It can be argued that the so-called asymmetric effect may result from building the empirical analysis on a misleading assumption by imposing a normal distribution on the financial data.
One fact in Table 80.4 is a bit disturbing: three stocks show a kurtosis coefficient greater than 6, which is beyond the scope of the EGB2 distribution. Despite this shortcoming and the abovementioned nine stocks with significant kurtosis, we find a significant improvement compared with the models assuming the normal or the t distribution. The predicted skewness and excess kurtosis of the EGB2 distribution are much closer to the observed skewness and kurtosis. Thus, the EGB2 distribution has a good fit, although the results are not perfect.16
Finally, we test for independence using the Q statistics for both return levels and squared returns, and we find that in only three cases in each of the Q and Q2 tests can the null hypothesis be rejected at the 5 % level, and in none at the 1 % level. In general, the models are adequate.
16 Some refinement of the model is contained in the following section.
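The GARCH(1,1)-EGB2 estimates summarized above come from maximizing a likelihood built on the EGB2 density. As a hedged illustration (not the authors' code), the sketch below evaluates the log-density of a zero-mean error with variance h under an EGB2(0, h, p, q) assumption, using the standard EGB2 density f(z) = e^{pz} / [B(p,q)(1+e^z)^{p+q}] together with the mean ψ(p) − ψ(q) and variance ψ′(p) + ψ′(q) of its standardized form to set the location and scale; the parameter values plugged in at the end are simply MSFT's reported shapes.

```python
# Sketch: per-observation EGB2 log-likelihood for a zero-mean error with variance h.
import numpy as np
from scipy.special import betaln, digamma, polygamma

def egb2_loglik(e, h, p, q):
    """Log-density of e ~ EGB2(0, h, p, q), i.e., zero mean and variance h."""
    scale = np.sqrt(h / (polygamma(1, p) + polygamma(1, q)))  # sigma_t
    loc = -scale * (digamma(p) - digamma(q))                  # delta_t, so that E[e] = 0
    z = (e - loc) / scale
    return p * z - betaln(p, q) - (p + q) * np.logaddexp(0.0, z) - np.log(scale)

# Example: evaluate at MSFT's reported shape parameters (p = 1.0233, q = 0.7971).
print(egb2_loglik(e=np.array([0.0, -0.05]), h=4e-4, p=1.0233, q=0.7971))
```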
Fig. 80.2 Comparisons of the beta estimation in different models. Note: The upper panel plots the beta coefficients (beta_OLS, beta_GARCH, beta_T, beta_EGB2) against the stock index; the lower panel presents the corresponding standard deviations of the beta coefficients.
80.4.4 The Impact of Outliers
Theoretically, the EGB2 distribution is feasible for coefficients of skewness in the range (−2, 2) and coefficients of excess kurtosis in (0, 6). However, the statistics in Table 80.4 do not all fall within these desired ranges. Two possible reasons might contribute to this problem. First, the residual series may be contaminated by the presence of outliers. As pointed out by Peña et al. (2001), an outlier can have very serious effects on the properties of the observed time series and can affect the estimated residuals and parameter values. Second, the mean equation and/or the variance equation may be misspecified, although an asymmetric effect has been considered.17
To address this issue, we further investigate the stock return series on which the outliers might more seriously impinge. By investigating the microstructure of the nine stocks with excess kurtosis, we find a common phenomenon: multiple outliers are present. This means that a 1987 market crash dummy is incapable of accommodating multiple extreme values in the data series. For instance, stock UTX (index = 10) has an extreme value of 38 % during the week of the 9–11 terrorist attacks in 2001. To address this issue, we identify and patch the outliers by using intervention analysis as in Box and Tiao (1975) and the extension by Tsay et al. (2000) and Peña et al. (2001). Table 80.5 reports the statistics of the residual analysis for these nine stocks after adding different dummies in the mean equation. The result is rather encouraging, as evidenced by the reduction in the significance of the kurtosis coefficients. It reveals that the kurtosis problem is related to a failure to account for extraordinary events that disturb the data structure, rather than to a failure of the EGB2 distribution. It is evident that after removing the effect of the outliers in a given time series, the EGB2 distribution is capable of addressing financial data with skewness and kurtosis in an appropriate range.18
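A full treatment uses the intervention analysis of Box and Tiao (1975) and Tsay et al. (2000); the sketch below is only a simple thresholding stand-in that flags the largest standardized residuals and builds one dummy per flagged week, which can then be appended to the exogenous regressors of the mean equation in the role of the extreme-value dummies of Table 80.5. The function name, the threshold, and the cap of ten dummies are illustrative assumptions.

```python
# Sketch: flag extreme observations and build mean-equation dummies (simplified stand-in
# for formal intervention analysis; threshold and cap are arbitrary choices).
import pandas as pd

def outlier_dummies(z, threshold=4.0, max_dummies=10):
    """z: pandas Series of standardized residuals. Returns a 0/1 dummy per flagged date."""
    ranked = z.abs().sort_values(ascending=False)
    flagged = ranked[ranked > threshold].index[:max_dummies]
    dummies = pd.DataFrame(0, index=z.index, columns=[f"D_{d}" for d in flagged])
    for d in flagged:
        dummies.loc[d, f"D_{d}"] = 1
    return dummies
```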
80.5 Distributional Fit Test
Previous sections emphasized estimates of parameters pertinent to modeling the skewness and kurtosis of the standardized residuals by applying non-Gaussian distributions. As part of the modeling process, model checking in terms of goodness of fit is also important. Table 80.6 and Fig. 80.3 compare a GARCH(1,1) model based on three distributions: normal, Student's t, and EGB2. The reported (negative) log-likelihood function values clearly show that the EGB2 distribution outperforms the
17 Engle et al. (1987) suggest putting a conditional volatility variable in the mean equation, which is called the GARCH-M model. However, the expected sign of the conditional variance variable is uncertain, according to literature surveys. Since we do not find its statistical significance in our empirical experiment (not reported), the conditional mean term is excluded from our test equation.
18 Longin (1996) proposes the use of a Fréchet distribution, which is able to highlight those extreme price movements. However, his model does not cover whole return distributions but only extreme values.
Table 80.5 Statistics of the nine stocks' standardized errors on the GARCH(1,1)-EGB2 estimates: weekly data, 1986–2005
[The table reports, for the nine stocks with significant excess kurtosis in Table 80.4 (HON, index 2; MO, 9; UTX, 10; PG, 11; CAT, 12; MRK, 17; HPQ, 20; AIG, 29; C, 30), the skewness, kurtosis, peakedness (with the EGB2 reference value in parentheses), Q(30), and Q2(30) statistics, the coefficient estimates φ0, φ1, φ2, ω, α, β, γ, the shape parameters p and q, and N, the number of extreme-value dummies in the mean equation.]
The nine stocks are those with significant excess kurtosis coefficients in Table 80.4. Numbers below coefficients are t-values (in brackets). Numbers below tests are p-values. ***Indicates 1 % significance, **5 %, and *10 %. The predicted higher moments are given by the formulas $\mathrm{Skewness} = [\psi''(p) - \psi''(q)]/[\psi'(p) + \psi'(q)]^{1.5}$ and $\mathrm{Kurtosis} = [\psi'''(p) + \psi'''(q)]/[\psi'(p) + \psi'(q)]^{2}$. Their standard deviations are obtained using the delta method. The peakedness is measured by f0.75 − f0.25, the distance between the value of the standardized variable at which the cumulative distribution function equals 0.75 and the value at which it equals 0.25. The reference value of the standard normal distribution is 1.35; a peakedness less than 1.35 means there is a high peak in the probability density function. The reference value of the EGB2 distribution is reported below the peakedness in parentheses and is in the range (1.12, 1.22). The normality test is omitted, since the assumption is the EGB2 distribution. The independence test is conducted using a Ljung-Box Q test up to order 30. The Q2 test up to order 30 is used to detect volatility clustering. The model is
$R_t = \phi_0 + \phi_1 R_{m,t} + \phi_2 R_{t-1} + \sum_n \delta_n D_{extreme,n} + e_t$ (1.a)
$e_t = \sqrt{h_t}\, z_t$ (1.b)
$h_t = \omega + \alpha e_{t-1}^2 + \beta h_{t-1} + \gamma I(e_{t-1} < 0)\, e_{t-1}^2$ (1.c)
$e_t \mid \Im_{t-1} \sim \mathrm{EGB2}(0, h_t, p, q)$ (1.d)
N in the table represents the number of dummies in the mean equation (at most 10); those dummies represent the extreme values in the individual stocks' return series. The following stocks do not have an AR(1) variable: MSFT, HON, DD, GM, IBM, MO, CAT, BA, PFE, AA, DIS, MCD, JPM, and INTC. Stock PG is the only one to have an AR(4) variable in the mean equation, which ensures that autocorrelation has been removed.
Table 80.6 Fitness comparisons among alternative distributions
[For each of the 30 stocks (index 1–30: MSFT, HON, KO, DD, XOM, GE, GM, IBM, MO, UTX, PG, CAT, BA, PFE, JNJ, MMM, MRK, AA, DIS, HPQ, MCD, JPM, WMT, AXP, INTC, VZ, T, HD, AIG, C), the table reports the negative log-likelihood (−lnL) and the χ2 goodness-of-fit statistic of the GARCH(1,1) model under the normal, Student's t, and EGB2 distributions.]
This table compares the GARCH(1,1) model based on three distributions (normal, Student's t, and EGB2) using the negative logarithm of the likelihood function value (left) and the χ2 goodness-of-fit test statistic (right). The quantiles are computed via 40 intervals. The degrees of freedom (d.f.) are 37 for the EGB2 distribution, 38 for the t distribution, and 39 for the normal distribution. The chi-square critical values at the 1 %, 5 %, and 10 % levels are 59.89, 52.19, and 48.36, respectively, for 37 d.f.; 61.16, 53.39, and 49.51 for 38 d.f.; and 62.43, 54.57, and 50.66 for 39 d.f. ***Indicates 1 % significance, **5 %, and *10 %
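The critical values quoted in the note can be reproduced directly from the chi-square quantile function; the snippet below is a simple check.

```python
# Sketch: chi-square critical values at the 1 %, 5 %, and 10 % levels for d.f. 37, 38, 39.
from scipy import stats

for df in (37, 38, 39):
    print(df, [round(stats.chi2.ppf(1 - a, df), 2) for a in (0.01, 0.05, 0.10)])
# d.f. 37 gives approximately [59.89, 52.19, 48.36], matching the note above.
```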
Fig. 80.3 Comparisons of log-likelihood function values in different models. Note: The figure plots the negative logarithm of the likelihood function (−lnL) for the normal, t, and EGB2 models against the stock index; the greater the likelihood function value, the better the fit of the model.
rival distributions: the normal distribution and the t distribution. However, as noted by Boothe and Glassman (1987), making non-nested distribution comparisons based on log-likelihood values can lead to spurious conclusions.19 Consequently, we calculate goodness-of-fit (GoF) statistics20 to compare the differences between the observed distribution of the standardized residuals and the theoretical distribution based on the estimated shape parameters, following Snedecor and Cochran (1989). The null hypothesis tested by the GoF statistic is that the observed and predicted distribution functions are identical. The statistic is calculated by
$$\mathrm{GoF} = \sum_{i=1}^{k} \frac{(f_i - F_i)^2}{F_i} \qquad (80.8)$$
where $f_i$ is the observed count of actual standardized residuals in the ith data class (interval), $F_i$ is the predicted count derived from the estimated values of the distribution parameters, and k is the number of data intervals used in the distributional comparisons. GoF has an asymptotic chi-square distribution with degrees of freedom equal to the number of intervals minus the number of estimated distribution
19 The normal distribution is a special case of the EGB2 distribution. A likelihood ratio test suggests that there is a significant improvement in the fit of the EGB2 distribution over that of the normal distribution.
20 The chi-square test is an alternative to the Anderson-Darling and Kolmogorov-Smirnov goodness-of-fit tests. The chi-square test and the Anderson-Darling test make use of the specific distribution in calculating critical values. This has the advantage of allowing a more sensitive test and the disadvantage that critical values must be calculated for each distribution.
parameters minus one. For the EGB2 distribution, two parameters are estimated; for the Student's t distribution, one parameter is estimated; for the normal distribution, no parameter is required, since the error term has been standardized.
Table 80.6 reports the results of the χ2 test for the three distributions used in the GARCH(1,1) model. The test power is maximized by choosing the data classes equiprobably (with equal probability). A rule of thumb for the chi-square test is to choose the number of groups starting at 2T^0.4.21 The test results show that the null hypothesis is rejected for 12 stocks under the normal distribution at the 1 % level, for 28 stocks under the t distribution, and for only three stocks under the EGB2 distribution. Furthermore, the χ2 statistics also show that the EGB2 distribution yields lower absolute values. We can conclude that the model based on the EGB2 distribution has the least deviation of the residuals from the theoretical distribution. The evidence suggests that the Student's t distribution is able to solve the kurtosis problem, but it cannot fit the whole error distribution because of the peakedness. Putting the evidence together, it is clear that the EGB2 distribution is superior to the t distribution and the normal distribution in our empirical analysis.
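A minimal sketch of the equiprobable-bin version of Eq. 80.8 is given below; `z` stands for a vector of standardized residuals and `dist` for a scipy frozen distribution fitted to them (both names are assumptions), and equal-probability classes are obtained by binning the probability integral transform.

```python
# Sketch: chi-square goodness-of-fit statistic of Eq. 80.8 with equiprobable classes.
import numpy as np
from scipy import stats

def gof_chi2(z, dist, n_bins=40, n_est_params=0):
    u = dist.cdf(np.asarray(z))                                # probability integral transform
    f_obs, _ = np.histogram(u, bins=n_bins, range=(0.0, 1.0))  # observed counts f_i
    f_exp = np.full(n_bins, len(u) / n_bins)                   # predicted counts F_i
    gof = np.sum((f_obs - f_exp) ** 2 / f_exp)
    dof = n_bins - n_est_params - 1
    return gof, dof, stats.chi2.sf(gof, dof)                   # statistic, d.f., p-value

z = np.random.default_rng(1).standard_normal(999)              # placeholder residuals
print(gof_chi2(z, stats.norm(), n_bins=40, n_est_params=0))
```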
80.6 Implication of EGB2 Distribution
One of the main objectives of analyzing financial data for risk management purposes is to provide an answer to the question: how should we evaluate the probability of extreme values by using statistical distributions? According to the normal distribution, the 1987 market crash of more than 17σ (daily data) would never have happened. However, recent market crashes indicate that big market swings or significant declines in asset prices happen more frequently than we expect. Although VaR is one of the most prevalent risk measures under normal conditions, it cannot deal with those extreme values, since extreme values are not normal. From this perspective, the EGB2 distribution provides a management tool for calculating risk.
Table 80.7 reports the probability of the semivolatility of shocks. Here, we concentrate on the probability of the error term having negative shocks. From this table, we see that the predicted probability for extreme values (beyond 2σ) is greater than that of the normal distribution. For instance, the probabilities of a 5σ and a 7σ shock for MSFT (index = 1) are 4.9E-5 and 8.4E-7, much greater than the 2.8E-7 and 1.3E-12 based on the normal distribution. Yet, the probabilities for the EGB2 distribution within a moderate range (within 2σ) are less than those of the normal distribution. This is an alternative way to characterize the peakedness and fat tails of portfolio returns. Notice that the crossing point between the EGB2 distribution and the normal distribution is in
21 Our sample contains 999 observations; 40 intervals are used. Each group (data class) theoretically has 25 observations. The degrees of freedom are 37, 38, and 39 for the EGB2 distribution, the t distribution, and the normal distribution, respectively. (The chi-square critical values are given in the note to Table 80.6.)
Table 80.7 The probability of negative extreme shocks in the error term
[For each of the 30 stocks, the table reports the probability of a negative shock beyond 1σ, 2σ, 3σ, 4σ, 5σ, 6σ, and 7σ implied by the estimated EGB2 distribution. The last row gives the corresponding normal-distribution probabilities, which are the same for every stock: 0.158655 (1σ), 0.02275 (2σ), 0.00135 (3σ), 3.17E-05 (4σ), 2.87E-07 (5σ), 9.87E-10 (6σ), and 1.28E-12 (7σ).]
The probability is calculated based on the estimated p- and q-values of the EGB2 distribution. It tells how often the error term takes negative extreme values. The probability values based on the normal distribution are the same for all 30 stocks.
the neighborhood of 2σ, where the probabilities of the two distributions take about the same value. This feature implies that VaR at the 95 % confidence level based on the normal distribution is, by chance, consistent with reality. However, beyond this critical level, the VaR method based on the normal distribution leads to an underestimation of forecast losses. The EGB2 distribution, in this regard, provides a broader spectrum of risk information for guiding risk management.
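The comparison in Table 80.7 can be reproduced, at least approximately, by evaluating the left-tail probability of a zero-mean, unit-variance EGB2(p, q) at −kσ. The sketch below treats the EGB2 cdf as the regularized incomplete beta function evaluated at e^z/(1 + e^z), which follows from the EGB2 being the logistic transform of a beta variable; this parameterization and the MSFT shape values plugged in are assumptions for illustration, not the authors' code. With MSFT's shapes it gives roughly 5E-5 at 5σ versus 2.87E-7 under normality, in line with the table.

```python
# Sketch: P(shock < -k sigma) under the normal and under a standardized EGB2(p, q).
import numpy as np
from scipy import stats
from scipy.special import betainc, digamma, polygamma

def egb2_left_tail(k, p, q):
    scale = 1.0 / np.sqrt(polygamma(1, p) + polygamma(1, q))  # unit variance
    loc = -scale * (digamma(p) - digamma(q))                  # zero mean
    z = (-k - loc) / scale
    return betainc(p, q, np.exp(z) / (1.0 + np.exp(z)))       # EGB2 cdf evaluated at -k

for k in (2, 5, 7):
    print(k, stats.norm.cdf(-k), egb2_left_tail(k, p=1.0233, q=0.7971))  # MSFT shapes
```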
80.7 Conclusions
In this chapter, we present empirical evidence on the stock return equation based on market risk, time series patterns, and asymmetric conditional variance for the 30 Dow Jones stocks. Special attention is placed on the issue of modeling skewness, kurtosis, and outlier effects. Although we find no significant difference in the estimated betas and the corresponding standard errors across the distributions, the evidence shows that the exponential generalized beta distribution of the second kind (EGB2) is superior to the Student's t distribution and the normal distribution in dealing with data that demonstrate skewness and excess kurtosis simultaneously. The superiority of the EGB2 distribution in modeling financial data is due not only to its flexibility but also to its closed-form density function. Its higher-order moments are finite and explicitly expressed by its parameters. Thus, the EGB2 model provides a useful tool for forecasting variances involving extreme values. As a result, this model can have practical use for risk management.
Consistent with the findings in the literature, the asymmetric effects are highly significant in the standard GJR-GARCH specification under the assumption of normal distributions. However, by incorporating the heavy-tail information into the distributions, we can reduce the asymmetric effects. Our study confirms that the EGB2 distribution has the capacity to deal with the asymmetric effects. Since excess kurtosis is often caused by outliers, our finding suggests that removing the contamination of outliers from the residuals enhances the performance of the EGB2 distribution.
In short, the GJR-GARCH-type model based on the EGB2 distribution provides a richer framework for modeling stock return volatility. It accommodates several special stock return features, including fat tails, skewness, peakedness, autocorrelation, volatility clustering, and the leverage effect. As a result, this model is effective for empirical estimation and suitable for risk management.
Appendix 1 Delta Method and Standard Errors of the Skewness and Kurtosis Coefficients of the EGB2 Distribution
The delta method, in its essence, expands a function of a random variable about its mean, usually with a one-step Taylor approximation, and then takes the variance. For example, if we want to approximate the variance of G(x), where x is a random variable with mean μ and G(x) is differentiable, we can try
$$G(x) \approx G(\mu) + (x - \mu)\, G'(\mu) \qquad (80.9)$$
so that
$$\mathrm{Var}(G(x)) \approx G'(\mu)^2\, \mathrm{Var}(x) \qquad (80.10)$$
where $G'(\cdot) = dG/dx$. This is a good approximation only if x has a high probability of being close enough to its mean so that the Taylor approximation is still good.
The nth central moment of the EGB2 distribution is given by
$$n\text{th moment} = \sigma^n \left[\psi_{n-1}(p) + (-1)^n \psi_{n-1}(q)\right] \qquad (80.11)$$
where $\psi_n$ is the nth-order polygamma function. Correspondingly, the skewness coefficient is given by
$$\mathrm{Skewness} = g(p,q) = \frac{\psi''(p) - \psi''(q)}{\left[\psi'(p) + \psi'(q)\right]^{1.5}} \qquad (80.12)$$
The variance of the skewness coefficient by the delta method is given by
$$\mathrm{Var(Skewness)} = g_p'(p,q)^2\, \mathrm{var}(p) + g_q'(p,q)^2\, \mathrm{var}(q) + 2\, g_p'(p,q)\, g_q'(p,q)\, \mathrm{cov}(p,q) \qquad (80.13)$$
where
$$g_p'(p,q) = \frac{\psi'''(p)\left[\psi'(p)+\psi'(q)\right] - 1.5\,\psi''(p)\left[\psi''(p)-\psi''(q)\right]}{\left[\psi'(p)+\psi'(q)\right]^{2.5}} \qquad (80.14)$$
$$g_q'(p,q) = \frac{-\psi'''(q)\left[\psi'(p)+\psi'(q)\right] - 1.5\,\psi''(q)\left[\psi''(p)-\psi''(q)\right]}{\left[\psi'(p)+\psi'(q)\right]^{2.5}} \qquad (80.15)$$
Similarly, the excess kurtosis coefficient is given by
$$\mathrm{Kurtosis} = h(p,q) = \frac{\psi'''(p) + \psi'''(q)}{\left[\psi'(p) + \psi'(q)\right]^{2}} \qquad (80.16)$$
The variance of the kurtosis coefficient by the delta method is given by
$$\mathrm{Var(Kurtosis)} = h_p'(p,q)^2\, \mathrm{var}(p) + h_q'(p,q)^2\, \mathrm{var}(q) + 2\, h_p'(p,q)\, h_q'(p,q)\, \mathrm{cov}(p,q) \qquad (80.17)$$
where
$$h_p'(p,q) = \frac{\psi''''(p)\left[\psi'(p)+\psi'(q)\right] - 2\,\psi''(p)\left[\psi'''(p)+\psi'''(q)\right]}{\left[\psi'(p)+\psi'(q)\right]^{3}} \qquad (80.18)$$
$$h_q'(p,q) = \frac{\psi''''(q)\left[\psi'(p)+\psi'(q)\right] - 2\,\psi''(q)\left[\psi'''(p)+\psi'''(q)\right]}{\left[\psi'(p)+\psi'(q)\right]^{3}} \qquad (80.19)$$
In the equations above, higher-order polygamma functions are involved. A polygamma function is the nth derivative of the logarithmic derivative of Γ(z):
$$\psi_n(z) = \frac{d^{\,n+1} \ln \Gamma(z)}{dz^{\,n+1}} \qquad (80.20)$$
which, for n > 0, can be written as
$$\psi_n(z) = (-1)^{n+1}\, n! \sum_{k=0}^{\infty} \frac{1}{(z+k)^{n+1}} \qquad (80.21)$$
which is used to calculate the polygamma functions in this chapter.
Note: This appendix is based on the paper by Wang et al. (2001). However, there are typos and errors in that paper: the standard deviation formula for skewness used by Wang et al. (2001) is incorrect (see the $g_q'(p,q)$ equation in the appendix posted at http://www.econ.queensu.ca/jae/2001-v16.4/wang-fawson-barrett-mcdonald/Appendix4_delta_derivations.pdf). We therefore provide this appendix. Moreover, when we reviewed and replicated that paper using the data supplied by the Journal of Applied Econometrics, the EGB2 distribution did not completely remove the skewness problem shown in their Table 3. In addition, there is a computational error in the JPY series, so its kurtosis problem has not been resolved either.
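The appendix formulas translate directly into code via scipy's polygamma function (polygamma(n, x) returns the nth derivative of the digamma function). The sketch below computes the EGB2 skewness and excess kurtosis of Eqs. 80.12 and 80.16 and their delta-method standard errors from Eqs. 80.13 to 80.19; the covariance matrix supplied in the example is made up purely for illustration.

```python
# Sketch: EGB2 skewness/kurtosis (Eqs. 80.12, 80.16) and delta-method standard errors.
import numpy as np
from scipy.special import polygamma

def egb2_shape_moments(p, q, cov_pq=None):
    d1p, d1q = polygamma(1, p), polygamma(1, q)   # psi'(p), psi'(q)
    d2p, d2q = polygamma(2, p), polygamma(2, q)   # psi''
    d3p, d3q = polygamma(3, p), polygamma(3, q)   # psi'''
    d4p, d4q = polygamma(4, p), polygamma(4, q)   # psi''''
    s = d1p + d1q
    skew = (d2p - d2q) / s**1.5                   # Eq. 80.12
    kurt = (d3p + d3q) / s**2                     # Eq. 80.16 (excess kurtosis)
    if cov_pq is None:
        return skew, kurt
    g = np.array([( d3p * s - 1.5 * d2p * (d2p - d2q)) / s**2.5,   # Eq. 80.14
                  (-d3q * s - 1.5 * d2q * (d2p - d2q)) / s**2.5])  # Eq. 80.15
    h = np.array([( d4p * s - 2.0 * d2p * (d3p + d3q)) / s**3,     # Eq. 80.18
                  ( d4q * s - 2.0 * d2q * (d3p + d3q)) / s**3])    # Eq. 80.19
    cov = np.asarray(cov_pq)
    return skew, kurt, np.sqrt(g @ cov @ g), np.sqrt(h @ cov @ h)  # Eqs. 80.13, 80.17

# Example with MSFT's reported shape parameters and an illustrative covariance matrix.
print(egb2_shape_moments(1.0233, 0.7971, cov_pq=[[0.01, 0.0], [0.0, 0.01]]))
```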
References Akgiray, V. (1989). Conditional heteroskedasticity in time series of stock returns: Evidence and forecasts. Journal of Business, 62, 55–80. Amihud, Y., & Mendelson, H. (1987). Trading mechanism and stock returns: An empirical investigation. Journal of Finance, 42, 533–553. Andersen, J. V., & Sornette, D. (2001). Have your cake and eat it too: Increasing returns while lowering large risks! Journal of Risk Finance, 2, 70–82. Antoniou, A., Koutmos, G., & Pericli, A. (2005). Index futures and positive feedback trading: Evidence from major stock exchanges. Journal of Empirical Finance, 12, 219–238. Avramov, D. (2002). Stock return predictability and model uncertainty. Journal of Financial Economics, 64, 423–458.
Baillie, R. T., & DeGennaro, R. P. (1990). Stock returns and volatility. Journal of Financial and Quantitative Analysis, 25, 203–215. Bali, T. G., Mo, H., & Tang, Y. (2008). The role of autoregressive conditional skewness and kurtosis in the estimation of conditional VaR. Journal of Banking & Finance, 32, 269–282. Barndorff-Nielsen, O., Kent, J., & Sorensen, M. (1982). Normal variance-mean mixtures and z distributions. International Statistical Review, 50, 145–159. Bollerslev, T. (1987). A conditionally heteroskedastic time series model for speculative prices and rates of return. Review of Economics and Statistics, 69, 542–547. Bollerslev, T., Chou, R. Y., & Kroner, K. F. (1992). ARCH modeling in finance: A review of the theory and empirical evidence. Journal of Econometrics, 52, 5–59. Bollerslev, T. R., Engle, F., & Nelson, D. B. (1994). ARCH model. In Engle & McFadden (Eds.), Handbook of econometrics (Vol. IV). Amsterdam: North Holland. Boothe, P., & Glassman, D. (1987). The statistical distribution of exchange rates. Journal of International Economics, 22, 297–319. Box, G. E. P., & Tiao, G. C. (1975). Intervention analysis with applications to economic and environmental problem. Journal of the American Statistical Association, 70, 70–79. Boyer, B., Mitton, T., & Vorkink, K. (2010). Expected idiosyncratic skewness. Review of Financial Studies, 23, 169–202. Brooks, C., Burke, S., Heravi, S., & Persand, G. (2005). Autoregressive conditional kurtosis. Journal of Financial Econometrics, 3, 399–421. Brown, S., & Warner, J. (1985). Using daily stock returns: The case of event studies. Journal of Financial Economics, 14, 3–31. Clark, P. K. (1973). A subordinated stochastic process model with finite variance for speculative prices. Econometrica, 41, 135–155. Conrad, J. S., Dittmar, R. F., & Ghysels, E. (2009). Ex ante skewness and expected stock returns. Available at SSRN: http://ssrn.com/abstract¼1522218 or http://dx.doi.org/10.2139/ ssrn.1522218 (Dec 11). Cvitanic, J., Polimenis, V., & Zapatero, F. (2008). Optimal portfolio allocation with higher moments. Annals Finance, 4, 1–28. Engle, R., Lilien, D., & Robins, R. (1987). Estimating time-varying risk premia in the term structure: The ARCH-M model. Econometrica, 55, 391–408. Fama, E. (1965). The behavior of stock-market prices. The Journal of Business, 38, 34–105. Fama, E., & French, K. (1996). Multifactor explanations of asset pricing anomalies. Journal of Finance, 51, 55–84. French, K., Schwert, G. W., & Stambaugh, R. (1987). Expected stock returns and volatility. Journal of Financial Economics, 19, 3–29. Glosten, L. R., Jagannathan, R., & Runkle, D. E. (1993). On the relation between the expected value and the volatility of the normal excess return on stocks. Journal of Finance, 48, 1779–1801. Hansen, B. (1994). Autoregressive conditional density estimation. International Economic Review, 35, 705–730. Hansen, C. B., McDonald, J. B., & Theodossiou, P. (2006). Some flexible parametric models for partially adaptive estimators of econometric models (Working Paper). Brigham Young University. Harvey, C. R., & Siddique, A. (1999). Autoregressive conditional skewness. Journal of Financial and Quantitative Analysis, 34, 465–487. Harvey, C. R., & Siddique, A. (2000). Conditional skewness in asset pricing tests. Journal of Finance, 55, 1263–1295. Harvey, C. R., Liechty, J. C., Liechty, M. W., & Muller, P. (2003). Portfolio selection with higher moments. Quantitative Finance, 10(5), 469–485. Hogg, R. V., & Klugman, S. A. (1983). 
On the estimation of long tailed skewed distributions with actuarial applications. Journal of Econometrics, 23, 91–102. Hosking, J. R. M. (1994). The four-parameter kappa distribution. IBM Journal of Research and Development, 38, 251–258.
Hueng, C. J., & Brooks, R. (2003). Forecasting asymmetries in aggregate stock market returns: Evidence from higher moments and conditional densities (Working Paper, WP02-08-01). The University of Alabama. http://scholar.google.com.hk/citations?view_op=view_citation&hl=en&user=Rw22qOoAAAAJ&citation_for_view=Rw22qOoAAAAJ:IjCSPb-OGe4C Hueng, C. J., & McDonald, J. (2005). Forecasting asymmetries in aggregate stock market returns: Evidence from conditional skewness. Journal of Empirical Finance, 12, 666–685. Johnson, N. L. (1949). Systems of frequency curves generated by methods of translation. Biometrica, 36, 149–176. Johnson, H., & Schill, M. J. (2006). Asset pricing when returns are nonnormal: Fama-French factors versus higher order systematic comoments. Journal of Business, 79, 923–940. Johnson, N. L., Kotz, S., & Balakrishnan, N. (1994). Continuous univariate distributions (2nd ed., Vol. 1). New York: Wiley. Jurczenko, E., & Maillet, B. (2002). The four moment capital asset pricing model: Some basic results (Working Paper). Paris: Ecole supe´rieure de commerce de Paris. Kenney, J. F., & Keeping, E. S. (1962). Skewness. }7.10 In Mathematics of statistics (pp. 100–101). Princeton: Van Nostrand. Liu, S. M., & Brorsen, B. (1995). Maximum likelihood estimation of a GARCH – STABLE model. Journal of Applied Econometrics, 10, 273–285. Lo, A., & MacKinlay, A. C. (1990). Data-snooping biases in tests of financial asset pricing models. Review of Financial Studies, 3, 431–467. Longin, F. (1996). The asymptotic distribution of extreme stock market returns. Journal of Business, 63, 383–408. Malevergne, Y., & Sornette, D. (2005). Higher moment portfolio theory. The Journal of Portfolio Management, 31(4), 49–55. Mandelbrot, B. B. (1963). The variation of certain speculative prices. Journal of Business, 36, 394–419. McCulloch, J. H. (1985). Interest-risk sensitive deposit insurance premia: Stable ACH estimates. Journal of Banking and Finance, 9, 137–156. McDonald, J. B. (1984). Some generalized functions for the size distribution of income. Econometrica, 52, 647–663. McDonald, J. B. (1991). Parametric models for partially adaptive estimation with skewed and leptokurtic residuals. Economics Letters, 37, 273–278. McDonald, J. B., & Newey, W. K. (1988). Partially adaptive estimation of regression models via the generalized t distribution. Econometric Theory, 4, 428–457. McDonald, J. B., & Xu, Y. J. (1995). A generalization of the beta distribution with applications. Journal of Econometrics, 66, 133–152. Mittnik, S., Rachev, S., Doganoglu, T., & Chenyao, D. (1999). Maximum likelihood estimation of stable paretian models. Mathematical and Computer Modelling, 29, 275–293. Nelson, D. B. (1991). Conditional heteroskedasticity in asset return: A new approach. Econometrica, 59, 347–370. Officer, R. (1972). The distribution of stock returns. Journal of the American Statistical Association, 67, 807–812. Patton, A. J. (2004). On the out-of-sample importance of skewness and asymmetric dependence for ass et al location. Journal of Financial Econometrics, 2, 130–168. Pena, D., Tiao, G. C., & Tsay, R. S. (2001). A course in time series analysis. New York: Wiley. Ranaldo, A., & Favre, L. (2005). How to price hedge funds: From two- to four-moment CAPM (UBS Research Paper). SSRN: http://ssrn.com/abstract=474561. Sentana, E., & Wadhwani, S. (1992). Feedback traders and stock return autocorrelations: Evidence from a century of daily data. Economic Journal, 102, 415–425. Snedecor, G. W., & Cochran, W. G. (1989). 
Statistical methods. Ames: Iowa State Press. Tang, G. Y. N. (1998). Monthly pattern and portfolio effect on higher moments of stock returns: Empirical evidence from Hong Kong. Asia-Pacific Financial Markets, 5, 275–307.
80
Modeling Asset Returns with Skewness, Kurtosis, and Outliers
2215
Theodossiou, P. (1998). Financial data and the skewed generalized t distribution. Management Science, 44, 1650–1661. Theodossiou, P., McDonald, J. B., & Hansen, C. B. (2007). Some flexible parametric models for partially adaptive estimators of econometric models. Economics: The Open-Access, OpenAssessment E-Journal, 1(2007-7), 1–20. http://dx.doi.org/10.5018/economics-ejournal. ja.2007-7. Tsay, R. C., Pen˜a, D., & Pankratz, A. (2000). Outliers in multivariate time series. Biometrica, 87, 789–804. Vinod, H. D. (2004). Ranking mutual funds using unconventional utility theory and stochastic dominance. Journal of Empirical Finance, 11, 353–377. Wang, K. L., Fawson, C., Barrett, C. B., & McDonald, J. B. (2001). A flexible parametric GARCH model with an application to exchange rates. Journal of Applied Econometrics, 16, 521–536. Wu, J. W., Hung, W. L., & Lee, H. M. (2000). Some moments and limit behaviors of the generalized logistic distribution with applications. Proceedings of the National Science Council ROC, 24, 7–14.
81 Does Revenue Momentum Drive or Ride Earnings or Price Momentum?
Hong-Yi Chen, Sheng-Syan Chen, Chin-Wen Hsin, and Cheng-Few Lee
Contents
81.1 Introduction
81.2 Revenue, Earnings, and Price Momentum Strategies
  81.2.1 Measures for Earnings Surprises and Revenue Surprises
  81.2.2 Measuring the Profitability of Revenue, Earnings, and Price Momentum Strategies
81.3 Data and Sample Descriptions
  81.3.1 Data
  81.3.2 Sample Descriptions
  81.3.3 Descriptive Statistics for Stocks Grouped by SURGE, SUE, and Prior Returns
81.4 Empirical Results of Univariate Momentum Strategies
81.5 Interrelation of Revenue, Earnings, and Price Momentum
  81.5.1 Testing for Dominance Among the Momentum Strategies
  81.5.2 Two-Way Sorted Portfolio Returns and Momentum Cross-Contingencies
  81.5.3 Combined Momentum Strategies
81.6 Persistency and Seasonality
  81.6.1 Persistence of Momentum Effects
  81.6.2 Seasonality
81.7 Conclusions
Appendix A: Measures of Earnings and Revenue Surprises
References

This article is a reprint of the article entitled "Does revenue momentum drive or ride earnings or price momentum?" published in the Journal of Banking & Finance (Hong-Yi Chen, Sheng-Syan Chen, Chin-Wen Hsin, and Cheng-Few Lee, Vol. 38, 2014, pp. 166-185). We would like to thank the editor of the Journal of Banking & Finance and Elsevier for permission to reprint the paper.

H.-Y. Chen (*), Department of Finance, National Central University, Taoyuan, Taiwan, e-mail: [email protected]
S.-S. Chen, National Central University, Zhongli City, Taiwan, e-mail: [email protected]
C.-W. Hsin, Yuan Ze University, Zhongli City, Taiwan, e-mail: [email protected]
C.-F. Lee, Department of Finance and Economics, Rutgers Business School, Rutgers, The State University of New Jersey, Piscataway, NJ, USA; Graduate Institute of Finance, National Chiao Tung University, Hsinchu, Taiwan, e-mail: [email protected]
C.-F. Lee, J. Lee (eds.), Handbook of Financial Econometrics and Statistics, DOI 10.1007/978-1-4614-7750-1_81, © Springer Science+Business Media New York 2015
Abstract
This study examines the profits of revenue, earnings, and price momentum strategies in an attempt to understand investor reactions when facing multiple sources of information about firm performance in various scenarios. We first offer evidence that there is no dominating momentum strategy among the revenue, earnings, and price momentums, suggesting that revenue surprises, earnings surprises, and prior returns each carry some exclusive unpriced information content. We next show that the profits of momentum driven by firm fundamental performance information (revenue or earnings) depend upon the accompanying firm market performance information (price), and vice versa. The robust monotonicity in multivariate momentum returns is consistent with the argument that the market underestimates not only the individual pieces of information but also their joint implications for firm performance, particularly when they point in the same direction. A three-way combined momentum strategy may offer a monthly return as high as 1.44 %. The information conveyed by revenue surprises and earnings surprises combined accounts for about 19 % of price momentum effects, a finding that adds to the large literature tracing the sources of price momentum.

Keywords
Revenue surprises • Earnings surprises • Post-earnings-announcement drift • Momentum strategies
81.1 Introduction
Financial economists have long been puzzled by two robust and persistent anomalies in the stock market: price momentum (see Jegadeesh and Titman 1993, 2001; Rouwenhorst 1998), and post-earnings-announcement drift (see Ball and Brown 1968; Foster et al. 1984; Bernard and Thomas 1989; Chan et al. 1996). More recently, Jegadeesh and Livnat (2006b) also find that price reactions to revenue surprises on announcement dates only partially reflect the incremental information conveyed by the surprises. The information contents carried by revenue, earnings and stock prices are intrinsically linked through firm operations and investor evaluation, and there is evidence of mutual predictability for respective future values
(e.g., see Jegadeesh and Livnat 2006b). Nonetheless, investors, aware of the linkages among the information content conveyed by revenue, earnings, and prices (see Ertimur et al. 2003; Raedy et al. 2006; and Heston and Sadka 2008), may still fail to take full account of their joint implications when pricing the stocks. This study investigates how investors price securities when facing multiple information contents of a firm, particularly the firm performance information most accessible to investors – price, earnings, and revenue.1 The long-short momentum strategy, widely used in the literature, provides a venue for detecting market reactions to individual and multiple information contents. Accordingly, this study starts by documenting revenue momentum profits and re-confirming earnings and price momentum profits. These explorations with momentum strategies are expected to yield implications that answer our two research questions. First, among the performance information of revenue surprises, earnings surprises, and prior returns, does each carry some exclusive information content that is not priced by the market? Second, do investors misreact to the joint implications as well as the individual information of firm revenue, earnings, and price? Our first research question is explored by testing momentum dominance. One momentum strategy is said to be dominated if its payoffs can be fully captured by the information measure serving as the sorting criterion of another momentum strategy. Note that our emphasis here is not asset pricing tests; instead, as in Chan et al. (1996) and Heston and Sadka (2008), we focus on the return anomalies based on revenue surprises, earnings surprises, and prior returns. Results from both a pairwise nested comparison and a regression analysis indicate that revenue surprises, earnings surprises, and prior returns each lead to significant momentum returns that cannot be explained away by one another. That is, revenue momentum neither drives nor rides earnings or price momentum. Following the information diffusion hypothesis of Hong and Stein (1999), our evidence then suggests that revenue surprises, earnings surprises, and prior returns each contribute to the phenomenon of gradual information flow, or that each has some exclusive information content that is not priced by the market.2
1 Research in the literature offers some evidence on the information linkage among revenue, earnings, and prices. For example, Lee and Zumwalt (1981) find that revenue information is complementary to earnings information in security rate of return determination. Bagnoli et al. (2001) find that revenue surprises but not earnings surprises can explain stock prices both during and after the internet bubble. Swaminathan and Weintrop (1991) and Ertimur et al. (2003) suggest that the market reacts significantly more strongly to revenue surprises than to expense surprises. Rees and Sivaramakrishnan (2001) and Jegadeesh and Livnat (2006b) also find that, conditional on earnings surprises, there is still a certain extent of market reaction to the information conveyed by revenue surprises. Ghosh et al. (2005) find that sustained increases in earnings are supported by sustained increases in revenues rather than by cost reductions.
2 The asset pricing tests of Chordia and Shivakumar (2006) support that price momentum is subsumed by the systematic component of earnings momentum, even though they also find that earnings surprises and past returns have independent explanatory power for future returns. This latter finding is consistent with the results of Chan et al. (1996) and our results, as reported later. In comparison, Chan et al. (1996), Jegadeesh and Livnat (2006b), and we focus on whether and how firm characteristics, such as revenue surprises, earnings surprises, and prior returns, are related to future cross-sectional returns, while Chordia and Shivakumar (2006) also conduct asset pricing tests.
Further regression tests indicate that earnings surprise and revenue surprise information account for about 14 % and 10 % of price momentum returns, respectively, and that these two fundamental performance measures combined account for just about 19 % of price momentum effects. These results provide additional evidence in the literature on the sources of price momentum (e.g., see Moskowitz and Grinblatt 1999; Lee and Swaminathan 2000; Piotroski 2000; Grundy and Martin 2001; Chordia and Shivakumar 2002, 2005; Ahn et al. 2003; Griffin et al. 2003; Bulkley and Nawosah 2009; Chui et al. 2010; Novy-Marx 2012). Our second research question asks how the market reacts to the joint implications of multiple information measures. The three measures under study all carry important messages about innovations in firm performance and are therefore expected to trigger investor reactions. They are thus ideal targets for studying how investors process multiple pieces of information interactively when pricing stocks. The results from two-way sorted portfolios show that the market anomalies vary monotonically with the joint condition of revenue surprises, earnings surprises, and prior returns, and the anomalies tend to be strongest when stocks show the strongest signals in the same direction. The cross-contingencies of momentums are observed in that the momentum returns driven by fundamental performance information (revenue surprises or earnings surprises) change with the accompanying market performance information (prior returns), and vice versa. This finding, as interpreted by the gradual-information-diffusion model, is consistent with the suggestion that the market not only underreacts to individual firm information but also underestimates the significance of the joint implications of revenue, earnings, and price information.3 These results also have interesting implications for investment strategies: fundamental performance information plays an important role in differentiating future returns among price winners, while market performance information is particularly helpful in predicting future returns for stocks with high surprises in revenue or earnings. Specifically, price winners, compared to price losers, yield higher returns from revenue/earnings momentum strategies; stocks with greater surprises in fundamentals yield greater returns from price momentum. The results of our dominance tests and multivariate momentums suggest that a combined momentum strategy should yield better results than single-criterion momentum strategies. A combined momentum strategy using all three performance measures is found to yield monthly returns as high as 1.44 %, which amounts to an annual return of 17.28 %. Such a combined momentum strategy outperforms single-criterion momentum strategies by at least 0.72 percentage points in monthly return.
3 The firm performance measures, revenue, earnings, and stock price, not only share common origins endogenously but also have added implications for future values of one another. Jegadeesh and Livnat (2006b) have documented evidence on the temporal linkages among these variables. In this study, we focus on the further inquiry of whether investors fully exploit such temporal linkages among these firm performance measures in pricing stocks.
Our conclusions remain robust whether we use raw returns or risk-adjusted returns, whether we include January results or not, and whether we use dependent or independent sorts. Chan et al. (1996), Piotroski (2000), Griffin et al. (2005), Mohanram (2005), Sagi and Seasholes (2007), Asem (2009), and Asness et al. (2013) conduct similar tests on combined momentum strategies using alternative sorting criteria.4 In comparison, our study is the first to document results considering these three types of firm performance information (revenue surprises, earnings surprises, and prior returns) altogether. In terms of persistency, the earnings momentum strategy is found to exhibit the strongest persistence, while the revenue momentum strategy is relatively short-lived. All the same, the short-lived revenue momentum effect is prolonged when the strategy is executed using stocks with the best prior price performance and more positive earnings surprises. In fact, the general conclusion supports our claim of cross-contingencies of momentum as applied to momentum persistence. This study contributes to the finance literature in several respects. First, we specifically identify the profitability of revenue momentum and its relation with earnings surprises and prior returns in terms of momentum strength and persistence. A revenue momentum strategy executed with a 6-month formation period and a 6-month holding period yields an average monthly return of 0.61 % for the period between 1974 and 2009. Second, this study identifies empirical interrelations of anomalies arising from three firm performance measures – revenue, earnings, and price. To the best of our knowledge, we are the first to offer evidence that there is no dominating momentum strategy among the three, and that the profits of momentum driven by firm fundamental performance information (revenue or earnings) depend upon the accompanying firm market performance information (price), and vice versa.5 Third, aside from academic interest, the aforementioned findings may well serve as useful guidance for asset managers seeking profitable investment strategies. Fourth, this study also adds to the large literature attempting to trace the sources of price momentum. Our numbers indicate that the information conveyed by revenue surprises and earnings surprises combined accounts for about 19 % of price momentum effects. Last, our results offer additional evidence to the literature using the behavioral explanation for momentums.6
4 Chan et al. (1996) and Griffin et al. (2005) find that when sorting on prior price performance and earnings surprises together, the profits of a zero-investment portfolio are higher than those of single sorting. Piotroski (2000) and Mohanram (2005) develop fundamental indicators, FSCORE and GSCORE, to separate winners from losers. Sagi and Seasholes (2007) find that the price momentum strategy becomes even more profitable when applied to stocks with high revenue growth volatility, low costs, or valuable growth options. Asness et al. (2013) find that the combination of a value strategy and a momentum strategy can perform better than either one alone. Asem (2009) finds that momentum profits can be enhanced by combining prior price returns and dividend behaviors.
5 Heston and Sadka (2008) and Novy-Marx (2012) also provide evidence that earnings surprises are unable to explain price momentum. However, this study is the first to consider earnings surprises and revenue surprises at the same time in explaining price momentum.
6 Barberis et al. (1998), Daniel et al. (1998), Hong and Stein (1999), Jackson and Johnson (2006), Verardo (2009), and Moskowitz et al. (2012) provide evidence in support of the behavioral explanation of the momentum effect, while Grundy and Martin (2001), Johnson (2002), Ahn et al. (2003), Sagi and Seasholes (2007), Li et al. (2008), Liu and Zhang (2008), and Wang and Wu (2011) attribute the momentum effect to missing risk factors. In addition, Korajczyk and Sadka (2004) and Lesmond et al. (2004) re-examine the profitability of momentum strategies after taking transaction costs into account and obtain mixed results.
Our empirical results are consistent with the suggestion that revenue surprises, earnings surprises, and prior returns each carry some exclusive unpriced information content. Moreover, the monotonicity of abnormal returns found in multivariate momentums suggests that the market underestimates not only the individual information but also the joint implications of multiple signals of firm performance. This suggestion is new to the literature and may also present a venue for tracking the sources of price momentum. The study is organized as follows. In Sect. 81.2, we develop our models and describe the methodologies. In Sect. 81.3, we describe the data. In Sect. 81.4, we report the results of momentum strategies based on a single criterion. In Sect. 81.5, we discuss the empirical results of the exploration of interrelations among revenue, earnings, and price momentums using strategies built on multiple sorting criteria. In Sect. 81.6, we test the persistency and seasonality of momentum strategies. Section 81.7 concludes.
81.2 Revenue, Earnings, and Price Momentum Strategies
81.2.1 Measures for Earnings Surprises and Revenue Surprises

We follow Jegadeesh and Livnat (2006a, b) and measure revenue surprises and earnings surprises based on historical revenues and earnings.7 Assuming that both quarterly revenue and quarterly earnings per share follow a seasonal random walk with a drift, we define the measure of revenue surprises for firm i in quarter t, standardized unexpected revenue growth (SURGE), as

$$SURGE_{i,t} = \frac{Q^{R}_{i,t} - E\left(Q^{R}_{i,t}\right)}{\sigma^{R}_{i,t}}, \qquad (81.1)$$
where $Q^{R}_{i,t}$ is the quarterly revenue of firm i in quarter t, $E(Q^{R}_{i,t})$ is the expected quarterly revenue prior to the earnings announcement, and $\sigma^{R}_{i,t}$ is the standard deviation of quarterly revenue growth. The same method is applied to measure earnings surprises, specifically standardized unexpected earnings (SUE), defined as

$$SUE_{i,t} = \frac{Q^{E}_{i,t} - E\left(Q^{E}_{i,t}\right)}{\sigma^{E}_{i,t}}, \qquad (81.2)$$
7 See the Appendix for a detailed discussion of the measures used to estimate revenue and earnings surprises.
where $Q^{E}_{i,t}$ is the quarterly earnings per share from continuing operations, $E(Q^{E}_{i,t})$ is the expected quarterly earnings per share prior to the earnings announcement, and $\sigma^{E}_{i,t}$ is the standard deviation of quarterly earnings growth.
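A minimal Python sketch of Eqs. (81.1) and (81.2) is given below. It assumes the drift and growth volatility of the seasonal random walk are estimated from the firm's own recent year-over-year changes; the nine-quarter input window and the function name are illustrative assumptions rather than the authors' exact implementation (the precise estimation details are in the Appendix, following Jegadeesh and Livnat 2006a, b).

```python
import numpy as np
import pandas as pd

def standardized_surprise(quarterly: pd.Series) -> float:
    """Standardized surprise for the most recent quarter of a quarterly series
    (revenue for SURGE, EPS from continuing operations for SUE), assuming the
    series follows a seasonal random walk with drift.

    `quarterly` holds the firm's quarterly values ordered oldest to newest,
    with the just-announced quarter last; at least nine quarters are assumed
    here so that several year-over-year growth observations are available.
    """
    q = quarterly.astype(float).to_numpy()
    if len(q) < 9:
        return np.nan
    yoy_growth = q[4:] - q[:-4]            # seasonal (year-over-year) changes Q_t - Q_{t-4}
    drift = yoy_growth[:-1].mean()         # drift estimated from growth before the current quarter
    sigma = yoy_growth[:-1].std(ddof=1)    # standard deviation of quarterly growth
    expected = q[-5] + drift               # E(Q_t) = Q_{t-4} + drift under the seasonal random walk
    return (q[-1] - expected) / sigma      # Eq. (81.1)/(81.2): (Q_t - E(Q_t)) / sigma

# Example: SURGE for one firm from nine quarters of revenue (illustrative numbers).
revenue = pd.Series([100.0, 104.0, 98.0, 110.0, 107.0, 112.0, 105.0, 118.0, 121.0])
surge = standardized_surprise(revenue)
```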
81.2.2 Measuring the Profitability of Revenue, Earnings, and Price Momentum Strategies

We construct all three momentum strategies based on the approach suggested by Jegadeesh and Titman (1993). To evaluate the information effect of earnings surprises on stock returns, we form an earnings momentum strategy analogous to the one designed by Chordia and Shivakumar (2006). At the end of each month, we sort sample firms by SUE and then group the firms into ten deciles.8 Decile 1 includes stocks with the most negative earnings surprises, and Decile 10 includes those with the most positive earnings surprises. The SUEs used in every formation month are obtained from the most recent earnings announcements, made within three months before the formation date. We hold a zero-investment portfolio, long the most positive earnings surprises portfolio and short the most negative earnings surprises portfolio, for K (K = 3, 6, 9, and 12) subsequent months, without rebalancing the portfolios during the holding period. Such a positive-minus-negative strategy (PMN) holds K different long-positive and short-negative portfolios each month. Accordingly, we obtain a series of zero-investment portfolio returns, which are the monthly returns to this earnings momentum strategy. Similarly, we apply this positive-minus-negative method to construct a revenue momentum strategy. In the case of price momentum, we form a zero-investment portfolio each month by taking a long position in the top decile portfolio (winner) and a short position in the bottom decile portfolio (loser), and we hold this winner-minus-loser portfolio (WML) for K subsequent months. We thus obtain a series of zero-investment portfolio returns, i.e., the returns to the price momentum strategy.
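The overlapping-portfolio bookkeeping behind the PMN and WML series can be sketched as follows. The stock-month table, its column names, and the equal weighting within deciles are assumptions for illustration; each calendar month's strategy return averages the long-short spreads of the K cohorts alive in that month, as described above.

```python
import pandas as pd

def momentum_strategy(panel: pd.DataFrame, signal: str, hold: int = 6, n_groups: int = 10) -> pd.Series:
    """Monthly returns of a zero-investment momentum strategy.

    `panel`: hypothetical stock-month table with columns 'permno' (stock id),
    'month' (consecutive integer index), 'ret' (return in that month), and
    `signal` (SUE, SURGE, or prior 6-month return known at the formation date).
    Each formation month a cohort goes long the top and short the bottom decile
    of `signal`; cohorts are held `hold` months without rebalancing.
    """
    df = panel.copy()
    df['decile'] = df.groupby('month')[signal].transform(
        lambda x: pd.qcut(x, n_groups, labels=False))        # 0 = lowest, n_groups-1 = highest
    ret = df.set_index(['permno', 'month'])['ret']

    spreads = {}                                              # calendar month -> list of cohort spreads
    for m, cohort in df.groupby('month'):
        longs = cohort.loc[cohort['decile'] == n_groups - 1, 'permno']
        shorts = cohort.loc[cohort['decile'] == 0, 'permno']
        for h in range(1, hold + 1):
            long_ret = ret.reindex([(p, m + h) for p in longs]).mean()    # equal-weighted
            short_ret = ret.reindex([(p, m + h) for p in shorts]).mean()
            spreads.setdefault(m + h, []).append(long_ret - short_ret)

    # Equal-weight the `hold` overlapping cohorts active in each calendar month.
    return pd.Series({t: pd.Series(v).mean() for t, v in spreads.items()}).sort_index()

# Usage (conceptual): pmn_sue = momentum_strategy(panel, signal='sue', hold=6)
```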
81.3 Data and Sample Descriptions
81.3.1 Data

We collect from Compustat the firm basic information, earnings announcement dates, and firm accounting data.
8 Note that we sort the sample firms into five quintile portfolios on each criterion in our later construction of multivariate momentum strategies. To conform to the same sorting break points, we also test the single momentum strategies based on quintile portfolios and find the results remain similar to those based on decile portfolios.
Stock prices, stock returns, share codes, and exchange codes are retrieved from the Center for Research in Security Prices (CRSP) files. The sample period is from 1974 through 2009. Only common stocks (SHRCD = 10, 11) and firms listed on the New York Stock Exchange, American Stock Exchange, or Nasdaq (EXCHCD = 1, 2, 3, 31, 32, 33) are included in our sample. We exclude from the sample regulated industries (SIC = 4000–4999) and financial institutions (SIC = 6000–6999). We also exclude firms with stock prices below $5 on the formation date, considering that investors generally pay only limited attention to such stocks. For the purpose of estimating their revenue surprises (SURGE), earnings surprises (SUE), and prior price performance, firms in the sample should have at least eight consecutive quarterly earnings announcements and six consecutive monthly returns before each formation month. To examine the return drift following the estimated SURGE, SUE, and prior price performance, firms in the sample need to have at least 12 consecutive monthly returns following each formation month. Firms in the sample should also have corresponding SURGE, SUE, size, and book-to-market factors available in each formation month.
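These screens translate directly into a short filter over a merged CRSP/Compustat stock-month table. The sketch below is a minimal illustration with hypothetical column names ('shrcd', 'exchcd', 'sic', 'price'); it encodes only the share-code, exchange-code, industry, and price filters listed above, not the history requirements.

```python
import pandas as pd

def apply_sample_filters(df: pd.DataFrame) -> pd.DataFrame:
    """Apply the static sample screens described above to a stock-month table."""
    common = df['shrcd'].isin([10, 11])                    # common shares only
    listed = df['exchcd'].isin([1, 2, 3, 31, 32, 33])      # NYSE, AMEX, or Nasdaq
    not_regulated = ~df['sic'].between(4000, 4999)         # drop regulated industries
    not_financial = ~df['sic'].between(6000, 6999)         # drop financial institutions
    not_low_priced = df['price'].abs() >= 5                # drop stocks below $5 at formation
    return df[common & listed & not_regulated & not_financial & not_low_priced]
```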
81.3.2 Sample Descriptions

Table 81.1 presents the summary statistics for firm size, estimates of revenue surprises, and estimates of earnings surprises for our sample firms between 1974 and 2009. Panel A shows that there are 223,831 firm-quarters during the sample period. Median firm market capitalization is $235 million. Panel B and Panel C describe the distributions of the revenue surprises (SURGE) and the earnings surprises (SUE) across firms of different market capitalization and different book-to-market ratio. Around 54 % of revenue surprises and 50 % of earnings surprises are positive.9 The values of SURGE and SUE are expected to be positively correlated. After all, a firm's income statement starts with revenue (sales) and ends with earnings; these two attributes share common firm operational information to a great extent, and their innovations, SURGE and SUE, should be correlated as well. Table 81.2 shows the time-series average of the cross-sectional correlations between 1974 and 2009. Panel A and Panel B present, respectively, the Pearson correlations and Spearman rank correlations. The average of both types of correlations between SURGE and SUE is 0.32, while prior price performance is not as significantly correlated with SURGE or SUE, with average correlations equal to about 0.15 and 0.19, respectively. We then partition the sample by book-to-market ratio (B/M) and size.
9 To ensure that firm accounting information is available to public investors at the time the stock returns are recorded, we follow the approach of Fama and French (1992) and match the accounting data for all fiscal years ending in calendar year t - 1 with the returns from July of year t through June of year t + 1. The market capitalization is calculated as the closing price on the last trading day of June of a year times the number of outstanding shares at the end of June of that year.
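The timing convention in the footnote above can be made concrete with a small sketch. The column names ('permno', 'date', 'prc', 'shrout') are assumptions in the style of CRSP data; the absolute value guards against CRSP's convention of recording bid-ask midpoints as negative prices.

```python
import pandas as pd

def june_market_cap(prices: pd.DataFrame) -> pd.Series:
    """June-end market capitalization per firm-year: last June closing price
    times June shares outstanding, following the Fama-French (1992) timing."""
    june = prices[prices['date'].dt.month == 6].copy()
    june['year'] = june['date'].dt.year
    june = june.sort_values('date').groupby(['permno', 'year']).last()
    return june['prc'].abs() * june['shrout']

def accounting_match_year(return_month: pd.Timestamp) -> int:
    """Fiscal-year cohort matched with a given return month: returns from July of
    year t through June of year t + 1 use fiscal years ending in calendar year t - 1."""
    t = return_month.year if return_month.month >= 7 else return_month.year - 1
    return t - 1
```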
Table 81.1 Summary statistics of sample firm characteristics

Panel A. Sample size and firm market capitalization
       Number of        Market cap (million dollars)
       firm-quarters    Mean     Median   Min    Max
ALL    223,831          2,276    235      0.91   602,433

Panel B. Descriptive statistics of SURGE
          Positive SURGE                      Negative SURGE                      Zero SURGE
          N         Mean   Median   STD       N         Mean   Median   STD       N
ALL       121,525   3.31   2.84     2.34      102,306   3.00   2.56     2.21      0
Growth    45,670    3.63   3.25     2.40      27,829    2.84   2.35     2.21      0
Mid-BM    50,881    3.21   2.73     2.32      46,309    3.05   2.62     2.25      0
Value     24,974    2.91   2.41     2.20      28,168    3.06   2.69     2.15      0
Small     61,827    3.19   2.70     2.31      54,935    2.96   2.57     2.14      0
Mid-size  38,338    3.41   2.98     2.37      30,591    3.02   2.56     2.28      0
Large     21,360    3.45   2.99     2.40      16,780    3.06   2.56     2.33      0

Panel C. Descriptive statistics of SUE
          Positive SUE                        Negative SUE                        Zero SUE
          N         Mean   Median   STD       N         Mean   Median   STD       N
ALL       112,068   2.42   1.89     1.94      111,330   2.92   2.11     2.59      433
Growth    37,928    2.47   1.98     1.92      35,407    2.83   2.10     2.43      164
Mid-BM    48,767    2.41   1.88     1.94      48,221    2.92   2.09     2.60      202
Value     25,373    2.37   1.79     1.95      27,702    3.04   2.17     2.76      67
Small     56,746    4.42   1.87     1.94      59,765    2.86   2.04     2.54      251
Mid-size  35,031    2.43   1.91     1.93      33,773    2.98   2.18     2.65      125
Large     20,291    2.42   1.92     1.92      17,792    3.01   2.21     2.66      57

This table presents the descriptive statistics for major characteristics of our sample stocks. Our sample includes stocks listed on the NYSE, the AMEX, and Nasdaq with data available to compute book-to-market ratios, revenue surprises, and earnings surprises. All financial service operations and utility companies are excluded. Firms with prices below $5 as of the earnings announcement date are also excluded. Panel A lists numbers of firm-quarter observations between January 1974 and December 2009. Panel B and Panel C respectively list the mean and median values for the measure of revenue surprises (SURGE) and for the measure of earnings surprises (SUE) across all firm-quarters in our sample. Statistics for positive surprises, negative surprises, and zero surprises are presented separately. Sample firms are also classified into bottom 30 %, middle 40 %, and top 30 % groups by their respective market capitalizations or book-to-market ratios. The breakpoints for the size subsamples are based on ranked values of market capitalization of NYSE firms. The breakpoints for the book-to-market subsamples are based on ranked values of book-to-market ratio of all sample firms.
Value firms and small firms are found to exhibit slightly higher correlations among SURGE, SUE, and prior price performance than growth firms and large firms, although the differences in correlations across B/M and size groups are not significant. Table 81.2 also shows the fractions of months where non-zero correlations are significant at the 1 % level. These numbers again confirm that the correlations between SURGE and SUE tend to be strongest across various classifications of firms, followed by correlations between SURGE and prior returns, and then those between SUE and prior returns.
Table 81.2 Correlation among revenue surprises, earnings surprises, and prior price performance

Panel A. Pearson correlations among SURGE, SUE, and prior 6-month returns
Correlated variables     All firms    Value        Mid (B/M)    Growth       Small        Mid (size)   Large
(SURGE, SUE)             0.3200***    0.3331***    0.3361***    0.2818***    0.3641***    0.2917***    0.2362***
                         (101.17)     (84.46)      (107.04)     (65.93)      (118.69)     (69.91)      (42.64)
                         [100 %]      [100 %]      [100 %]      [100 %]      [100 %]      [100 %]      [71.1 %]
(SURGE, Prior returns)   0.1458***    0.1272***    0.1263***    0.1353***    0.1686***    0.1304***    0.1061***
                         (44.09)      (33.86)      (35.67)      (35.36)      (55.44)      (29.78)      (17.78)
                         [88.7 %]     [41.5 %]     [64.6 %]     [62.7 %]     [86.9 %]     [55.4 %]     [35.9 %]
(SUE, Prior returns)     0.1868***    0.2120***    0.2015***    0.1496***    0.2330***    0.1523***    0.0959***
                         (65.54)      (57.68)      (54.40)      (47.01)      (75.82)      (40.74)      (20.93)
                         [98.4 %]     [81.9 %]     [92.7 %]     [68.1 %]     [98.8 %]     [67.1 %]     [23.7 %]

Panel B. Spearman rank correlations among SURGE, SUE, and prior 6-month returns
Correlated variables     All firms    Value        Mid (B/M)    Growth       Small        Mid (size)   Large
(SURGE, SUE)             0.3231***    0.3367***    0.3397***    0.2828***    0.3652***    0.2952***    0.2407***
                         (106.09)     (93.92)      (112.08)     (68.22)      (124.45)     (72.92)      (45.40)
                         [100 %]      [100 %]      [100 %]      [99.8 %]     [100 %]      [100 %]      [74.4 %]
(SURGE, Prior returns)   0.1426***    0.1227***    0.1255***    0.1315***    0.1647***    0.1285***    0.1032***
                         (42.61)      (33.68)      (36.33)      (33.09)      (55.45)      (29.37)      (17.58)
                         [86.6 %]     [41.8 %]     [63.4 %]     [58.2 %]     [87.8 %]     [53.3 %]     [35.0 %]
(SUE, Prior returns)     0.1834***    0.2117***    0.1980***    0.1383***    0.2314***    0.1501***    0.0959***
                         (63.98)      (59.29)      (54.79)      (41.56)      (76.68)      (39.50)      (20.21)
                         [97.2 %]     [84.0 %]     [91.1 %]     [62.0 %]     [99.3 %]     [64.8 %]     [23.2 %]

This table presents the correlations among SURGE, SUE, and prior returns of our sample firms. At the end of each month, each sample firm should have its corresponding most current SUE, most current SURGE, and previous 6-month return. SURGE and SUE are winsorized at 5 % and 95 %, setting all SURGE and SUE values greater than the 95th percentile to the value of the 95th percentile and all SURGE and SUE values smaller than the 5th percentile to the value of the 5th percentile. Panel A lists the average Pearson correlations among SUE, SURGE, and prior returns between 1974 and 2009. Panel B lists the average Spearman rank correlations, where all sample firms are grouped into ten portfolios based on SURGE, SUE, and prior 6-month returns independently at the end of each month. The Decile 1 portfolio consists of firms with the lowest value of the attribute (SURGE, SUE, or prior 6-month returns), and Decile 10 consists of firms with the highest value of the attribute. The correlations are calculated at the end of each month. The values reported in the table are monthly averages of those correlations. Sample firms are further classified into bottom 30 %, middle 40 %, and top 30 % groups by their respective market capitalizations or book-to-market ratios at the end of the formation months. The breakpoints for the size subsamples are based on ranked values of market capitalization of NYSE firms. The breakpoints for the book-to-market subsamples are based on ranked values of book-to-market ratio of all sample firms. The numbers in parentheses are the average t-statistics under the null hypothesis that the correlation is zero. ***, **, and * indicate statistical significance at 1 %, 5 %, and 10 %, respectively. Percentages in brackets represent the fraction of the months with non-zero correlations that are significant at the 1 % level.
These preliminary results suggest that revenue surprises and earnings surprises share highly correlated information, while each still has distinctive content, a conclusion consistent with Swaminathan and Weintrop (1991) and Jegadeesh and Livnat (2006b). The information content conveyed by market information, i.e., prior returns, differs more from that carried by the two fundamental information measures, SURGE and SUE.
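The time-series averages of monthly cross-sectional correlations in Table 81.2 can be sketched as follows. The 5 %/95 % winsorization of SURGE and SUE mirrors the table notes; the column names of the stock-month table are assumptions for illustration.

```python
import pandas as pd

def avg_cross_sectional_corr(panel: pd.DataFrame, x: str, y: str,
                             method: str = 'pearson',
                             winsorize=('surge', 'sue')) -> float:
    """Time-series average of monthly cross-sectional correlations between two
    firm attributes (e.g., 'surge' and 'sue', or 'sue' and 'prior_ret_6m').

    `panel`: hypothetical stock-month table with a 'month' column and the two attributes.
    `method` may be 'pearson' or 'spearman'.
    """
    def monthly_corr(cs: pd.DataFrame) -> float:
        cs = cs[[x, y]].copy()
        for col in (x, y):
            if col in winsorize:                          # only SURGE/SUE are winsorized
                lo, hi = cs[col].quantile([0.05, 0.95])
                cs[col] = cs[col].clip(lo, hi)
        return cs[x].corr(cs[y], method=method)

    return panel.groupby('month').apply(monthly_corr).mean()
```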
81.3.3 Descriptive Statistics for Stocks Grouped by SURGE, SUE, and Prior Returns

We next compare the firm characteristics for portfolios characterized by different revenue surprises (SURGE), earnings surprises (SUE), and prior returns. All sample stocks are sorted into quintiles based on their SURGE, SUE, and prior 6-month returns independently. The characteristics of those quintile portfolios are reported in Table 81.3. Several interesting observations emerge. The price level, as expected, is found to be lowest for the price losers (P1). Stocks with negative revenue surprises (R1) or negative earnings surprises (E1) also have lower price levels, although the trend is not as obvious as for price losers. We also find that price losers (P1) and price winners (P5) tend to be smaller stocks. Another interesting observation revealed in the book-to-market ratios is that stocks with the most positive SURGE or the most winning returns tend to be growth stocks. Stocks with the most positive SUE also have lower B/M ratios, but to much less of a degree. This suggests that growth stocks are characterized by strong revenue but not necessarily strong earnings. The last three sections of Table 81.3 list the SURGE, SUE, and prior returns for those sorted portfolios. Stocks with strong SURGE also tend to have higher SUE and higher prior returns. A similar pattern is seen for stocks with high SUE or high prior returns. Stocks with strong SURGE, strong SUE, or winning prior returns tend to excel on all three information dimensions. This relation is consistent with the positive correlations reported in Table 81.2.
81.4 Empirical Results of Univariate Momentum Strategies
Table 81.4 presents the monthly returns to momentum strategies based on firms' revenue surprises (SURGE), earnings surprises (SUE), and prior price performance, respectively termed the revenue momentum, earnings momentum, and price momentum strategies. Decile portfolio results are reported here. We first examine the profitability of revenue momentum. We are interested in knowing whether the well-documented post-announcement revenue drift also enables a profitable investment strategy. Following an approach similar to the earnings momentum strategy of Chordia and Shivakumar (2006), we define a revenue momentum portfolio as a zero-investment portfolio formed by buying stocks with the most positive revenue surprises and selling stocks with the most negative revenue surprises.
Table 81.3 Descriptive statistics of characteristics of various portfolio groups

The table reports, for each quintile portfolio (R1–R5 sorted on SURGE, E1–E5 sorted on SUE, and P1–P5 sorted on prior 6-month returns), the mean, median, and standard deviation of the stock price level, market capitalization, book-to-market ratio, prior 6-month returns, SUE, and SURGE.

This table presents the descriptive statistics of firm characteristics for stocks sorted on SURGE, SUE, and prior returns. All sample stocks are sorted independently according to their SURGE, SUE, and prior 6-month returns. R1 (E1) represents the quintile portfolio of stocks with the most negative SURGE (SUE), and R5 (E5) represents the quintile portfolio of stocks with the most positive SURGE (SUE). Similarly, P1 denotes the quintile portfolio of stocks with the lowest prior 6-month returns, while P5 denotes the portfolio of stocks with the highest prior 6-month returns. Reported characteristics include price level, market capitalization, B/M ratio, SURGE, SUE, and prior 6-month returns for component stocks in each corresponding quintile portfolio. The reported mean values are the equally weighted averages for stocks in each quintile portfolio.
Panel A of Table 81.4 reports significant returns to the revenue momentum strategies. These strategies yield average monthly returns of 0.94 %, 0.93 %, and 0.84 %, respectively, by holding the relative-strength portfolios for 3, 6, and 9 months. This research, to the best of our knowledge, is the first to document specific evidence on the profitability of revenue momentum. We also test with more recent data the profitability of earnings momentum and price momentum strategies, which have both been studied in the literature. Panel B of Table 81.4 reports the results for the earnings momentum strategies. We again find that these positive-minus-negative (PMN) zero-investment portfolios yield significantly positive returns for holding periods ranging from 3 to 12 months. The profit is strongest when the PMN portfolios are held for 3 months, leading to an average monthly return of 0.99 %, significant at the 1 % level. The results are consistent with those of Bernard and Thomas (1989) and Chordia and Shivakumar (2006). Chordia and Shivakumar (2006) find a significant monthly return of 0.96 % on a 6-month holding-period earnings momentum strategy executed over 1972–1999, while we show a significant monthly return of 0.71 % for a sample period extending to 2009.

Table 81.4 Returns to revenue momentum, earnings momentum, and price momentum strategies

Panel A. Revenue momentum returns
Holding period   Low          High         PMN          CAPM_Adj. (1)   FF3_Adj. (2)
3 months         0.0074***    0.0163***    0.0089***    0.0084***       0.0105***
                 (2.56)       (5.37)       (7.19)       (6.88)          (9.22)
6 months         0.0097***    0.0158***    0.0061***    0.0056***       0.0079***
                 (3.34)       (5.17)       (5.10)       (4.71)          (7.32)
9 months         0.0118***    0.0154***    0.0036***    0.0030***       0.0054***
                 (4.01)       (5.03)       (3.03)       (2.58)          (5.16)
12 months        0.0131***    0.0145***    0.0014       0.0010          0.0034***
                 (4.43)       (4.78)       (1.24)       (0.87)          (3.36)

Panel B. Earnings momentum returns
Holding period   Low          High         PMN          CAPM_Adj. (1)   FF3_Adj. (2)
3 months         0.0079***    0.0178***    0.0099***    0.0099***       0.0102***
                 (2.71)       (6.14)       (9.77)       (9.71)          (9.90)
6 months         0.0098***    0.0169***    0.0071***    0.0070***       0.0077***
                 (3.35)       (5.81)       (7.82)       (7.71)          (8.42)
9 months         0.0116***    0.0164***    0.0048***    0.0048***       0.0056***
                 (3.92)       (5.65)       (5.68)       (5.59)          (6.63)
12 months        0.0127***    0.0155***    0.0028***    0.0028***       0.0037***
                 (4.28)       (5.37)       (3.60)       (3.64)          (4.47)

Panel C. Price momentum returns
Holding period   Loser        Winner       WML          CAPM_Adj. (1)   FF3_Adj. (2)
3 months         0.0085**     0.0179***    0.0094***    0.0101***       0.0113***
                 (2.18)       (4.81)       (3.23)       (3.48)          (3.80)
6 months         0.0088**     0.0182***    0.0093***    0.0098***       0.0112***
                 (2.29)       (4.94)       (3.47)       (3.62)          (4.09)
9 months         0.0099***    0.0183***    0.0084***    0.0085**        0.0103***
                 (2.62)       (5.02)       (3.57)       (3.62)          (4.32)
12 months        0.0109***    0.0171***    0.0061***    0.0062***       0.0085***
                 (2.94)       (4.72)       (2.93)       (2.94)          (4.06)

This table presents monthly returns and associated t-statistics for revenue, earnings, and price momentum strategies executed during the period from 1974 through 2009. For the revenue momentum strategy, firms are grouped into ten deciles based on SURGE during each formation month. Decile 1 represents the most negative revenue surprises, and Decile 10 represents the most positive revenue surprises. The values of SURGE for each formation month are computed using the most recent revenue announcements made within three months before the formation date. The zero-investment portfolios, long the most positive revenue surprises portfolio and short the most negative revenue surprises portfolio (PMN), are held for K (K = 3, 6, 9, and 12) subsequent months and are not rebalanced during the holding period. Panel A lists the average monthly returns earned from the portfolio of firms with the most negative SURGE (low), from the portfolio of those with the most positive SURGE (high), and from the revenue momentum strategies (PMN). Earnings momentum strategies are developed with the same approach as the revenue momentum strategies, by buying stocks with the most positive earnings surprises and selling stocks with the most negative earnings surprises; the zero-investment portfolios are then held for K subsequent months. Panel B lists the average monthly returns earned from the portfolio of firms with the most negative SUE (low), from the portfolio of those with the most positive SUE (high), and from the earnings momentum strategies (PMN). For the price momentum strategy, firms are sorted into ten ascending deciles on the basis of previous 6-month returns. Portfolios buying the winner decile and selling the loser decile are held for K subsequent months and not rebalanced during the holding period. The average monthly returns of the winner, loser, and price momentum strategies are presented in Panel C. Risk-adjusted momentum returns are also provided in this table: Adj. (1) is momentum returns adjusted by the CAPM, and Adj. (2) is momentum returns adjusted by the Fama-French 3-factor model. ***, **, and * indicate statistical significance at 1 %, 5 %, and 10 %, respectively.
Panel C shows the performance of price momentum strategies. Similar to the results in Jegadeesh and Titman (1993), price momentum strategies yield average monthly returns of 0.94 %, 0.93 %, 0.84 %, and 0.61 % for the 3-, 6-, 9-, and 12-month holding periods, respectively. A comparison of the three momentum strategies indicates that the highest returns are for price momentum, followed by earnings momentum and revenue momentum. Meanwhile, the profitability of the earnings momentum portfolio deteriorates faster than that of price momentum as the holding period extends from 3 to 12 months.10 The revenue momentum strategy yields the smallest and the shortest-lived profits, with returns diminishing to an insignificant level when the holding period is extended to 12 months.
10 We show later that earnings momentum actually demonstrates stronger persistence than price momentum when the momentum portfolios are held over 2 years.
Following an approach similar to Fama and French (1996) and Jegadeesh and Titman (2001), we implement the capital asset pricing model and a Fama-French three-factor (FF-3) model to examine whether the momentum returns can be explained by pricing factors.11 The last two columns in Panel A of Table 81.4 list the risk-adjusted returns to revenue momentum, which remain significant. The market risk premium, size factor, and book-to-market factor, while serving to capture partial effects of the revenue momentum strategy, are still unable to explain away the abnormal returns entirely. The FF-3 adjusted return for the 6-month holding period remains strong at 0.79 % with a t-statistic equal to 7.32. The risk-adjusted returns to earnings momentum and price momentum in Panels B and C of Table 81.4 are similar to those in the literature (see Jegadeesh and Titman 1993; and Chordia and Shivakumar 2006) and generally confirm the conclusion of Fama (1998) that post-earnings-announcement drift and price momentum profits remain significant.
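The risk adjustments in Table 81.4 are time-series regressions of the monthly long-short returns on the market factor (CAPM) or the three Fama-French factors, with the intercept read off as the adjusted return. A minimal sketch using statsmodels is shown below; the factor column names ('mkt_rf', 'smb', 'hml') are assumptions aligned to data such as Kenneth French's library.

```python
import pandas as pd
import statsmodels.api as sm

def factor_adjusted_alpha(strategy_ret: pd.Series, factors: pd.DataFrame) -> dict:
    """CAPM and Fama-French three-factor alphas of a zero-investment momentum series.

    strategy_ret : monthly long-short portfolio return (a zero-investment spread).
    factors      : DataFrame with columns 'mkt_rf', 'smb', 'hml' on the same monthly index.
    """
    data = pd.concat([strategy_ret.rename('ret'), factors], axis=1).dropna()
    capm = sm.OLS(data['ret'], sm.add_constant(data[['mkt_rf']])).fit()
    ff3 = sm.OLS(data['ret'], sm.add_constant(data[['mkt_rf', 'smb', 'hml']])).fit()
    return {
        'capm_alpha': capm.params['const'], 'capm_t': capm.tvalues['const'],
        'ff3_alpha': ff3.params['const'], 'ff3_t': ff3.tvalues['const'],
    }
```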
81.5 Interrelation of Revenue, Earnings, and Price Momentum
We further examine the interrelation of momentum strategies through tests of dominance, cross-contingencies, and combined strategies. The objective is to find empirical support for hypotheses for our two research questions. First, we hypothesize that revenue surprises, earnings surprises, and prior returns each have some exclusive information content that is not captured by the market. Under this hypothesis, a particular univariate momentum strategy should not be subsumed by another strategy, which we examine through dominance tests. Second, we hypothesize that the market not only underreacts to individual firm information, but also underestimates the significance of the joint implications of revenue, earnings, and price information. Under this hypothesis, return anomalies are likely to be most pronounced when the information variables all point in the same direction.
81.5.1 Testing for Dominance Among the Momentum Strategies

To tackle the interrelation of momentums, we first explore whether any of the three momentum strategies is entirely subsumed by another strategy. Stock price represents the firm value evaluated by investors in the aggregate, given their available information. The most important firm fundamental information for investors is undoubtedly firm earnings, which summarize firm performance. Jegadeesh and Livnat (2006b) point out that an important reference for investors regarding the persistence of firm earnings is offered by firm revenue information. Obviously, these three pieces of firm-specific information, revenue, earnings, and stock price, share significant information content with each other. The anomalies of their corresponding momentums therefore may arise from common sources.
11 We obtain monthly data on the market return, the risk-free rate, and SMB and HML from Kenneth French's website (http://mba.tuck.dartmouth.edu/pages/faculty/ken.french/).
That is, payoffs to a momentum strategy based on one measure, be it revenue surprises, earnings surprises, or prior returns, may be fully captured by another measure. The dominance tests serve to test for such a possibility. We first apply the pairwise nested comparison model introduced by George and Hwang (2004) and test whether one particular momentum strategy dominates another. Table 81.5 reports the results in three panels. Panel A compares the revenue momentum and earnings momentum strategies. In Panel A.1, stocks are first sorted on earnings surprises, with each quintile further sorted on revenue surprises. We find that, when controlling for the level of earnings surprises, the revenue momentum strategy still yields significant profits. The zero-investment portfolio returns for 6-month holding periods range from 0.26 % to 0.36 %. In Panel A.2, stocks are first sorted on revenue surprises, and then on earnings surprises. Likewise, the returns to an earnings momentum strategy, when controlling for the level of revenue surprises, are still significantly positive. These paired results indicate that neither earnings momentum nor revenue momentum dominates the other. We follow the same process in comparing revenue momentum and price momentum strategies. Results in Panel B indicate that all the nested revenue momentum strategies and the nested price momentum strategies are profitable, with the exception of revenue momentum in the loser stock group. In general, we still conclude that neither revenue momentum nor price momentum is dominated by the other. Panel C of Table 81.5 presents the results of the nested momentum strategies based on two-way sorts on earnings surprises and prior returns. Returns to all these nested momentum strategies remain significantly positive. The pairwise nested comparisons suggest that revenue surprises, earnings surprises, and prior returns each convey some unpriced information that is not shared by the others and therefore each further contributes to a momentum effect. A second approach allows us to simultaneously isolate the returns contributed by each momentum portfolio. Taking advantage of George and Hwang's (2004) model, we implement a panel data analysis with six performance dummies:

$$R_{it} = a_{jt} + b_{1jt} R_{i,t-1} + b_{2jt} size_{i,t-1} + b_{3jt} R1_{i,t-j} + b_{4jt} R5_{i,t-j} + b_{5jt} E1_{i,t-j} + b_{6jt} E5_{i,t-j} + b_{7jt} P1_{i,t-j} + b_{8jt} P5_{i,t-j} + e_{it}, \qquad (81.3)$$
where j = 1, ..., 6. We first regress firm i's return in month t on control variables and six dummies for the portfolio ranks. We include the previous-month return $R_{i,t-1}$ to control for the bid-ask bounce effect and the market capitalization $size_{i,t-1}$ to control for the size effect in the cross-sectional regressions. The momentum portfolio dummies, $R1_{i,t-j}$, $R5_{i,t-j}$, $E1_{i,t-j}$, $E5_{i,t-j}$, $P1_{i,t-j}$, and $P5_{i,t-j}$, indicate whether firm i is included in one or more momentum portfolios based on its scores in month t-j. To obtain momentum profits corresponding to the Jegadeesh and Titman (1993) strategies, we average the estimated coefficients of each independent variable over j = 1, ..., 6, and then subtract the coefficient average for the bottom quintile portfolio from that for the top quintile portfolio.
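A minimal Python sketch of this coefficient-averaging procedure is given below. The stock-month table and its dummy column names are assumptions for illustration, and the equal-weighted average over all (month, lag) regressions coincides with averaging over lags within a month and then over time when the panel is balanced.

```python
import pandas as pd
import statsmodels.api as sm

def isolated_momentum_returns(panel: pd.DataFrame, lags=range(1, 7)) -> pd.Series:
    """Isolate each momentum strategy's contribution in the spirit of Eq. (81.3).

    `panel`: hypothetical stock-month table with columns 'month', 'ret',
    'ret_lag1', 'size_lag1', and, for every lag j, 0/1 dummies 'R1_j', 'R5_j',
    'E1_j', 'E5_j', 'P1_j', 'P5_j' marking bottom/top quintile membership on
    SURGE, SUE, and prior returns measured j months before month t.
    One cross-sectional OLS is run per (month, lag); coefficients are averaged,
    and top-minus-bottom differences give the returns attributable to each
    strategy while controlling for the others.
    """
    dummies = ['R1', 'R5', 'E1', 'E5', 'P1', 'P5']
    rows = []
    for m in sorted(panel['month'].unique()):
        cs = panel[panel['month'] == m]
        for j in lags:
            cols = {f'{d}_{j}': d for d in dummies}
            X = cs[['ret_lag1', 'size_lag1'] + list(cols)].rename(columns=cols)
            rows.append(sm.OLS(cs['ret'], sm.add_constant(X)).fit().params)
    avg = pd.DataFrame(rows).mean()           # average over months and lags
    return pd.Series({'R5-R1': avg['R5'] - avg['R1'],
                      'E5-E1': avg['E5'] - avg['E1'],
                      'P5-P1': avg['P5'] - avg['P1']})
```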
Table 81.5 Momentum strategies: two-way dependent sorts by revenue surprises, earnings surprises, and prior returns

Panel A. Revenue momentum vs. earnings momentum
A.1 Revenue momentum in various SUE groups
SUE group    R1 (Low)   R5 (High)   R5-R1    t-stat
E1 (Low)     0.0065     0.0101      0.0036   (3.24)
E2           0.0086     0.0115      0.0028   (2.85)
E3           0.0090     0.0119      0.0029   (3.29)
E4           0.0096     0.0122      0.0026   (2.70)
E5 (High)    0.0116     0.0149      0.0033   (3.22)

A.2 Earnings momentum in various SURGE groups
SURGE group  E1 (Low)   E5 (High)   E5-E1    t-stat
R1 (Low)     0.0064     0.0104      0.0040   (4.66)
R2           0.0079     0.0113      0.0034   (4.91)
R3           0.0089     0.0131      0.0042   (6.03)
R4           0.0096     0.0140      0.0043   (5.59)
R5 (High)    0.0112     0.0152      0.0040   (4.74)

Panel B. Revenue momentum vs. price momentum
B.1 Revenue momentum in various prior-return groups
Prior Ret group  R1 (Low)   R5 (High)   R5-R1    t-stat
P1 (Loser)       0.0070     0.0077      0.0008   (0.67)
P2               0.0083     0.0099      0.0015   (1.82)
P3               0.0091     0.0123      0.0032   (4.33)
P4               0.0089     0.0132      0.0042   (5.53)
P5 (Winner)      0.0106     0.0175      0.0070   (7.03)

B.2 Price momentum in various SURGE groups
SURGE group  P1 (Loser)   P5 (Winner)   P5-P1    t-stat
R1 (Low)     0.0072       0.0095        0.0024   (1.35)
R2           0.0084       0.0110        0.0026   (1.51)
R3           0.0092       0.0135        0.0042   (2.29)
R4           0.0092       0.0149        0.0057   (3.35)
R5 (High)    0.0080       0.0176        0.0096   (4.82)

Panel C. Earnings momentum vs. price momentum
C.1 Earnings momentum in various prior-return groups
Prior Ret group  E1 (Low)   E5 (High)   E5-E1    t-stat
P1 (Loser)       0.0063     0.0096      0.0034   (3.73)
P2               0.0082     0.0106      0.0024   (3.67)
P3               0.0090     0.0126      0.0036   (5.96)
P4               0.0091     0.0137      0.0046   (7.69)
P5 (Winner)      0.0104     0.0178      0.0073   (8.78)

C.2 Price momentum in various SUE groups
SUE group    P1 (Loser)   P5 (Winner)   P5-P1    t-stat
E1 (Low)     0.0066       0.0097        0.0031   (1.62)
E2           0.0083       0.0118        0.0035   (1.80)
E3           0.0081       0.0134        0.0052   (2.87)
E4           0.0096       0.0143        0.0047   (2.65)
E5 (High)    0.0100       0.0177        0.0077   (4.16)

This table presents the results of pairwise nested comparisons between momentum strategies. Panel A shows the comparison between revenue momentum and earnings momentum during the period from 1974 to 2009. In each month, stocks are first sorted into five groups by earnings surprises (revenue surprises) and then further sorted by revenue surprises (earnings surprises) within each group. All portfolios are held for 6 months. The monthly returns of the ten extreme portfolios and the five conditional revenue (earnings) momentum strategies are presented. Paired tests are provided under the hypothesis that the conditional revenue (earnings) momentum profits are the same. Panel B shows the comparison between revenue and price momentum strategies, and Panel C shows the comparison between earnings and price momentum strategies.
These are the returns contributed by each momentum strategy when the contributions from other momentum strategies are controlled for. Panel A of Table 81.6 reports the regression results. The returns isolated for revenue momentum, earnings momentum, and price momentum are listed in the last three rows. The results are all significant in terms of either raw returns or FF-3 factor adjusted returns when all months are included or when all non-January months are included. Note, however, that the isolated returns to revenue momentum (R5-R1) and to price momentum (P5-P1) strategies are no longer significantly positive in January. The insignificant returns in January are consistent with the tax-loss-selling hypothesis, proposing that investors sell poorly performing stocks in October through December and buy them back in January (e.g., see Keim 1989; Odean 1998; Grinblatt and Moskowitz 2004). The overall significant profits contributed by R5-R1 (E5-E1 or P5-P1) indicate market underreactions with respect to the information content of revenue
Panel A. Contribution of momentum returns solely from prior performance information Raw returns All months Jan. Feb.–Dec. Intercept 0.0130 0.0354 0.0110 (4.96) (3.13) (4.14) Ri,t1 0.0412 0.1146 0.0346 (8.95) (6.04) (7.55) Size >0.0001 >0.0001 >0.0001 (2.94) (2.64) (1.88) R1 Dummy 0.0015 0.0042 0.0020 (3.39) (2.81) (4.43) R5 Dummy 0.0013 0.0002 0.0014 (2.18) (0.07) (2.31) E1 Dummy 0.0018 0.0037 0.0017 (5.07) (2.89) (4.42) E5 Dummy 0.0024 0.0045 0.0023 (6.86) (3.61) (6.09) P1 Dummy 0.0026 0.0125 0.0040 (2.14) (1.96) (3.35) P5 Dummy 0.0040 0.0023 0.0041 (2.92) (0.53) (2.88) R5-R1 0.0028 0.0044 0.0035 (3.23) (1.50) (3.82)
Table 81.6 Comparison of revenue, earnings, and price momentum strategies Risk-adjusted returns All months Jan. 0.0051 0.0075 (6.31) (2.79) 0.0371 0.0851 (8.54) (4.60) >0.0001 >0.0001 (1.12) (0.02) 0.0020 0.0015 (4.68) (1.02) 0.0021 0.0019 (4.09) (1.06) 0.0016 0.0028 (4.43) (1.96) 0.0026 0.0055 (6.81) (4.05) 0.0036 0.0072 (3.17) (1.13) 0.0044 0.0033 (3.39) (0.67) 0.0041 0.0004 (5.45) (0.18)
Feb.–Dec. 0.0048 (5.64) 0.0327 (7.48) >0.0001 (0.99) 0.0023 (5.20) 0.0021 (3.81) 0.0015 (4.03) 0.0023 (6.06) 0.0048 (4.43) 0.0045 (3.33) 0.0044 (5.48)
0.0043 0.0082 0.0039 0.0041 (7.89) (4.64) (6.92) (7.45) P5-P1 0.0066 0.0102 0.0081 0.0080 (3.35) (1.15) (4.10) (3.95) Panel B. Univariate price momentum return and conditional price momentum returns Raw returns Risk-adjusted returns Intercept 0.0131 0.0129 0.0131 0.0130 0.0053 0.0051 (4.97) (4.92) (5.03) (4.96) (6.62) (6.36) Ri,t1 0.0404 0.0409 0.0407 0.0412 0.0363 0.0368 (8.72) (8.83) (8.87) (8.95) (8.29) (8.41) Size 0.0003 0.0003 0.0003 >0.0001 0.0001 0.0001 (2.72) (2.75) (2.97) (2.94) (0.83) (0.88) R1 Dummy 0.0021 0.0015 (4.53) (3.39) R5 Dummy 0.0018 0.0013 (2.88) (2.18) E1 Dummy 0.0022 0.0018 0.0021 (5.78) (5.07) (5.53) E5 Dummy 0.0028 0.0024 0.0030 (6.95) (6.86) (7.47) P1 Dummy 0.0034 0.0029 0.0030 0.0026 0.0044 0.0038 (2.74) (2.31) (2.44) (2.14) (3.83) (3.36) P5 Dummy 0.0046 0.0042 0.0043 0.0040 0.0052 0.0047 (3.33) (3.01) (3.14) (2.92) (3.98) (3.59)
E5-E1
0.0051 (6.31) 0.0371 (8.54) >0.0001 (1.12) 0.0020 (4.68) 0.0021 (4.09) 0.0016 (4.43) 0.0026 (6.81) 0.0036 (3.17) 0.0044 (3.39) (continued)
0.0053 (6.58) 0.0368 (8.46) 0.0001 (1.16) 0.0026 (5.69) 0.0026 (4.87)
0.0039 (3.48) 0.0047 (3.64)
0.0038 (6.66) 0.0092 (4.59)
0.0083 (4.34) 0.0039 (0.39)
0.0081 (4.02)
0.0050 (8.07) 0.0070 (3.52) 0.0073 (3.70)
0.0039 (4.26)
0.0028 (3.23) 0.0043 (7.89) 0.0066 (3.35) 0.0096 (4.70)
0.0051 (8.26) 0.0085 (4.18)
0.0086 (4.29)
0.0051 (6.45)
0.0041 (5.45) 0.0041 (7.45) 0.0080 (3.95)
This table presents returns to relative strength portfolios and momentum strategies. Each month during the period from 1974 through 2009, six cross-sectional regressions are estimated for revenue, earnings, and price momentum strategies:
$$R_{it} = a_{jt} + b_{1jt}R_{i,t-1} + b_{2jt}\,\mathrm{size}_{i,t-1} + b_{3jt}R1_{i,t-j} + b_{4jt}R5_{i,t-j} + b_{5jt}E1_{i,t-j} + b_{6jt}E5_{i,t-j} + b_{7jt}P1_{i,t-j} + b_{8jt}P5_{i,t-j} + e_{it},$$
where $R_{it}$ and $\mathrm{size}_{i,t}$ are the return and the market capitalization of stock i in month t, and $R1_{i,t-j}$ ($R5_{i,t-j}$) is the most negative (positive) revenue surprise dummy, which takes the value of 1 if the revenue surprise for stock i is ranked in the bottom (top) quintile in month t−j, and zero otherwise. The dummies with respect to earnings surprises ($E1_{i,t-j}$ and $E5_{i,t-j}$) and the dummies with respect to prior 6-month price returns ($P1_{i,t-j}$ and $P5_{i,t-j}$) are defined analogously to $R1_{i,t-j}$ and $R5_{i,t-j}$. The estimated coefficients of the independent variables are averaged over j = 1, . . ., 6. The numbers reported for raw returns are the time-series averages of these averages. The t-statistics calculated from the time series are in parentheses. The risk-adjusted returns are intercepts from Fama-French 3-factor regressions on raw returns; their t-statistics are in parentheses. Panel A presents returns to relative strength portfolios and momentum strategies attributable solely to each of prior price performance, earnings surprises, and revenue surprises. Panel B presents raw returns and conditional returns of the price momentum strategy
P5-P1
E5-E1
R5-R1
Table 81.6 (continued)
surprises (earnings surprises or prior price performance) unrelated to the other two information measures. The isolated returns are greatest for price momentum (0.66 %), followed by earnings momentum (0.43 %) and then revenue momentum (0.28 %). This is similar to our earlier results on single-criterion momentum. Such a finding again rejects the existence of a dominating momentum strategy among the three. We do not find that information leading to revenue momentum or earnings momentum fully captures the price momentum returns. Similar findings are documented by Chan et al. (1996), Heston and Sadka (2008), and Novy-Marx (2012) for the relation between earnings surprises and price momentum.

We would like to examine specifically how much of the price momentum can be explained by revenue surprise and/or earnings surprise information. For this reason, we perform similar regressions by including only a subset of portfolio dummies. The results are reported in Panel B of Table 81.6. In the case of raw returns, the return to price momentum without isolating other momentum sources is 0.81 %, while it is reduced only to 0.73 % after controlling for revenue momentum, to 0.70 % after controlling for earnings momentum, and to 0.66 % after controlling for both. In other words, information leading to revenue momentum and earnings momentum accounts for about 10 % and 14 % of price momentum, respectively, and the two pieces of information combined account for just about 19 % of price momentum effects. The results for risk-adjusted returns are similar. This conclusion adds to the large literature attempting to trace the sources of price momentum. Our numbers indicate that the information conveyed by revenue surprises or earnings surprises seems to make only a limited contribution to price momentum.

Results of the pairwise nested comparisons in Table 81.5 and the regression analysis in Table 81.6 both support the hypothesis that revenue surprises, earnings surprises, and prior returns each have some unpriced information content that is exclusive to each measure itself. This conclusion also suggests the possibility that one can improve momentum strategies by using all three information measures.
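To make the regression machinery behind Table 81.6 concrete, the following sketch runs a monthly cross-sectional regression of returns on extreme-quintile dummies in the spirit of the specification described in the note to Table 81.6. It is an illustration, not the authors' code: column names such as 'ret', 'ret_lag1', 'size', 'r_q', 'e_q', and 'p_q' are hypothetical, and the sketch uses a single set of dummies rather than averaging coefficients over j = 1, ..., 6 as in the chapter.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def monthly_dummy_regression(df):
    """For each month, regress returns on lagged return, size, and bottom/top
    quintile dummies for revenue surprise (R), earnings surprise (E), and prior
    6-month return (P); then average the monthly coefficients over time."""
    monthly_coefs = []
    for month, g in df.groupby("month"):
        X = pd.DataFrame({
            "const": 1.0,
            "ret_lag1": g["ret_lag1"],
            "size": g["size"],
            "R1": (g["r_q"] == 1).astype(float), "R5": (g["r_q"] == 5).astype(float),
            "E1": (g["e_q"] == 1).astype(float), "E5": (g["e_q"] == 5).astype(float),
            "P1": (g["p_q"] == 1).astype(float), "P5": (g["p_q"] == 5).astype(float),
        })
        monthly_coefs.append(sm.OLS(g["ret"], X).fit().params)
    coefs = pd.DataFrame(monthly_coefs)
    tstats = coefs.mean() / (coefs.std(ddof=1) / np.sqrt(len(coefs)))
    # Isolated momentum returns: spreads between top and bottom dummies
    spreads = pd.Series({
        "R5-R1": (coefs["R5"] - coefs["R1"]).mean(),
        "E5-E1": (coefs["E5"] - coefs["E1"]).mean(),
        "P5-P1": (coefs["P5"] - coefs["P1"]).mean(),
    })
    return coefs.mean(), tstats, spreads
```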
81.5.2 Two-Way Sorted Portfolio Returns and Momentum Cross-Contingencies

Here and in the next section, we examine the momentum strategies using multiple sorting criteria. These results serve to answer the research question of whether investors underestimate the implications of the joint information of revenue surprises, earnings surprises, and prior returns. Given that the market usually informs investors with not just a single piece but multiple pieces of firm information, the incremental information content of additional firm data is likely to be contingent upon other information for the stock. Jegadeesh and Livnat (2006b) suggest that the information content of SURGE has implications for the future value of SUE and that such information linkage is particularly significant when both measures point in the same direction. Jegadeesh and
Livnat (2006a) further find that the market, including financial analysts, underestimates the joint implications of these measures and thus firm market value. Our second research question extends Jegadeesh and Livnat (2006b) by additionally considering the information of prior price performance. We hypothesize that return anomalies should be most pronounced when the joint implications of multiple measures are most underestimated by the market, and this likely occurs when all information variables point in the same direction. In addition, a different but related issue is that any momentum profits driven by one measure may well depend on the accompanying alternative information, which we call the cross-contingencies of momentum. We use multivariate sorted portfolios to test this hypothesis.
81.5.2.1 Two-Way Sorts on Revenue Surprises and Earnings Surprises

We start by testing the performance of investment strategies based on the joint information of revenue surprises and earnings surprises. On each portfolio formation date, we sort stocks into quintiles on the basis of their revenue surprises and then independently into quintiles based on earnings surprises during the 6-month formation period. Panel A of Table 81.7 presents the raw returns of these 25 two-way sorted portfolios. The intersection of R1 and E1, labeled as R1 E1, is the portfolio formed by the stocks with both the lowest SURGE and the lowest SUE, and the intersection of R5 and E5, labeled as R5 E5, represents the portfolio formed by the stocks with both the highest SURGE and the highest SUE.

We first note that the next-period returns of the 25 two-way sorted portfolios increase monotonically with SURGE as well as with SUE. The return to the portfolio with a similar level of SURGE increases with SUE (e.g., the return increases from 0.88 % for R1 E1 to 1.21 % for R1 E5). Similarly, the payoffs to the portfolio of stocks with a similar level of SUE increase with SURGE (e.g., the return increases from 1.21 % for R1 E5 to 1.70 % for R5 E5). That is, stocks that have performed well in terms of revenue and earnings continue to outperform expectations and yield higher future returns.

Panel D of Table 81.7 shows the corresponding risk-adjusted abnormal returns for each of the 5×5 double-sorted portfolios based on SURGE and SUE. The monotonicity we see in raw returns in Panel A persists for the risk-adjusted returns. The most positive abnormal returns are for the portfolio of high-SURGE and high-SUE stocks (R5 E5), while the most negative abnormal returns are for the portfolio of low-SURGE and low-SUE stocks (R1 E1). This provides direct and robust evidence that the return anomalies tend to be most pronounced when SURGE and SUE point in the same direction.

The evidence of monotonicity suggests that the market underreaction is at its extreme when different elements of stock performance information signal in the same direction, i.e., the scenarios of R1 E1 or R5 E5. These are the scenarios where the information of SURGE and SUE is expected to have the most significant joint implications for firm value, while market underestimation of their joint implications is found to be strongest, leading to the most pronounced return drifts in the next period. This observation is consistent with the suggestion by Jegadeesh and Livnat (2006a, b).
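As an illustration of the independent two-way sorting procedure, the sketch below forms 5×5 portfolios each month from two signals and computes the time-series average return of each cell together with the corner-to-corner combined spread. It assumes a long-format pandas DataFrame with hypothetical columns 'month', the two signal columns, and a holding-period return 'fwd_ret'; it is a sketch of the sorting logic, not the authors' implementation.

```python
import pandas as pd

def double_sort_returns(df, key1="surge", key2="sue", ret="fwd_ret", q=5):
    """Independently sort stocks each month into q x q portfolios on two signals
    and report equal-weighted portfolio returns."""
    df = df.copy()
    for key, col in ((key1, "g1"), (key2, "g2")):
        df[col] = df.groupby("month")[key].transform(
            lambda s: pd.qcut(s, q, labels=False, duplicates="drop") + 1)
    # Equal-weighted return of each of the q*q portfolios, month by month
    port = df.groupby(["month", "g1", "g2"])[ret].mean()
    # Time-series average return of each two-way sorted portfolio
    avg = port.groupby(level=["g1", "g2"]).mean().unstack("g2")
    # Combined momentum: long the (q, q) corner, short the (1, 1) corner
    combined = (port.xs((q, q), level=("g1", "g2"))
                - port.xs((1, 1), level=("g1", "g2"))).mean()
    return avg, combined
```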
Table 81.7 Momentum strategies: two-way sorts by revenue surprises, earnings surprises, and prior returns Panel A. Raw returns sorted on revenue surprises (SURGE) and earnings surprise (SUE) SUE Arbitrage returns on portfolios sorted by E1(Low) E2 E3 E4 E5(High) earnings R1(Low) 0.0088 0.0107 0.0109 0.0112 0.0121 0.0049 (4.63) R2 0.0098 0.0112 0.0117 0.0129 0.0139 0.0041 (4.71) SURGE R3 0.0106 0.0121 0.0134 0.0142 0.0154 0.0048 (5.40) R4 0.0108 0.0124 0.0133 0.0137 0.0165 0.0057 (5.84) R5(High) 0.0123 0.0141 0.0141 0.0146 0.0170 0.0039 (3.43) Arbitrage 0.0043 0.0034 0.0032 0.0034 0.0036 returns on (2.86) (2.59) (2.84) (2.84) (2.70) portfolios sorted by revenue Revenue-earnings combined momentum strategy: R5 E5 – R1 E1 0.0081 (6.25) Panel B. Raw returns sorted on revenue surprises (SURGE) and prior price performance Prior price performance Arbitrage returns on portfolios P1(Loser) P2 P3 P4 P5(Winner) sorted by price R1(Low) 0.0089 0.0104 0.0109 0.0109 0.0122 0.0034 (1.45) R2 0.0099 0.0112 0.0121 0.0121 0.0135 0.0036 (1.59) SURGE R3 0.0108 0.0125 0.0133 0.0139 0.0161 0.0053 (2.26) R4 0.0100 0.0125 0.0131 0.0141 0.0176 0.0076 (3.66) R5(High) 0.0090 0.0112 0.0143 0.0156 0.0198 0.0108 (4.67) Arbitrage returns on 0.0001 0.0008 0.0033 0.0048 0.0078 portfolios sorted by (0.06) (0.79) (3.69) (5.21) (6.43) price Revenue-Price combined momentum strategy: R5 P5–R1 P1 0.0109 (4.53) Panel C. Raw returns sorted on earnings surprises (SUE) and prior price performance Prior price performance Arbitrage returns on portfolios P1(Loser) P2 P3 P4 P5(Winner) sorted by price E1(Low) 0.0083 0.0103 0.0109 0.0107 0.0105 0.0045 (1.94) E2 0.0098 0.0115 0.0119 0.0126 0.0141 0.0044 (1.89) SUE E3 0.0099 0.0117 0.0127 0.0134 0.0162 0.0062 (2.73) E4 0.0106 0.0120 0.0133 0.0138 0.0168 0.0062 (2.81) E5(High) 0.0107 0.0127 0.0149 0.0160 0.0201 0.0092 (4.01) Arbitrage returns on 0.0030 0.0023 0.0040 0.0053 0.0078 portfolios sorted by (2.66) (3.10) (5.49) (7.51) (7.79) earnings (continued)
Table 81.7 (continued) Price-revenue combined momentum strategy: E5 P5 – E1 P1 0.0118 (5.47) Panel D. Risk-adjusted returns sorted on revenue surprises (SURGE) and earnings surprise (SUE) SUE Risk-adjusted returns on portfolios sorted by E1(Low) E2 E3 E4 E5(High) earnings R1(Low) 0.0043 0.0026 0.0020 0.0013 0.0005 0.0049 (4.52) R2 0.0029 0.0017 0.0008 0.0002 0.0013 0.0043 (4.79) SURGE R3 0.0016 0.0004 0.0008 0.0017 0.0032 0.0049 (5.38) R4 0.0008 0.0006 0.0011 0.0018 0.0044 0.0052 (5.21) R5(High) 0.0021 0.0027 0.0023 0.0033 0.0054 0.0033 (2.86) Risk-adjusted returns 0.0064 0.0053 0.0043 0.0046 0.0045 on portfolios sorted by (4.78) (4.32) (3.98) (4.16) (3.54) revenue Revenue-earnings combined momentum strategy: R5 E5 – R1 E1 0.0081 (6.25) Panel E. Risk-adjusted returns sorted on revenue surprises (SURGE) and prior price Prior price performance Risk-adjusted returns on portfolios P1(Loser) P2 P3 P4 P5(Winner) sorted by price R1(Low) 0.0050 0.0028 0.0018 0.0015 0.0002 0.0052 (2.22) R2 0.0037 0.0018 0.0004 0.0002 0.0015 0.0052 (2.26) SURGE R3 0.0025 0.0002 0.0009 0.0017 0.0040 0.0066 (2.76) R4 0.0026 0.0002 0.0013 0.0025 0.0059 0.0085 (4.05) R5(High) 0.0032 0.0005 0.0029 0.0044 0.0086 0.0118 (4.98) Risk-adjusted returns 0.0018 0.0023 0.0047 0.0059 0.0087 on portfolios sorted by (1.38) (2.49) (5.80) (6.86) (7.49) price Revenue-price combined momentum strategy: R5 P5 – R1 P1 0.0097 (7.86) Panel F. Risk-adjusted returns sorted on earnings surprises (SUE) and prior price performance Prior price performance Risk-adjusted returns on portfolios P1(Loser) P2 P3 P4 P5(Winner) sorted by price E1(Low) 0.0051 0.0024 0.0012 0.0010 0.0010 0.0062 (2.65) E2 0.0039 0.0014 0.0005 0.0006 0.0027 0.0066 (2.79) SUE E3 0.0033 0.0011 0.0004 0.0013 0.0042 0.0075 (3.20) E4 0.0025 0.0004 0.0012 0.0020 0.0051 0.0076 (3.35) E5(High) 0.0018 0.0003 0.0029 0.0041 0.0083 0.0096 (4.07) (continued)
Table 81.7 (continued) Risk-adjusted returns 0.0036 0.0027 0.0041 0.0051 0.0072 on portfolios sorted by (3.25) (3.64) (5.61) (7.07) (7.05) earnings Price-revenue combined momentum strategy: E5 P5 – E1 P1
0.0133 (6.09)
For each month, we form equal-weighted portfolios according to the breakpoints of two of three firm characteristics: a firm's revenue surprises (SURGE), its earnings surprises (SUE), and its prior 6-month stock performance. Panel A and Panel D present raw returns and risk-adjusted returns of the 25 portfolios independently sorted on SURGE and on SUE. The returns of a revenue-earnings combined momentum strategy are obtained by buying the portfolio of stocks with the best SURGE and the best SUE (SURGE = 5 and SUE = 5) and selling the portfolio of stocks with the poorest SURGE and the poorest SUE (SURGE = 1 and SUE = 1). Panel B and Panel E present raw returns and risk-adjusted returns of the 25 portfolios independently sorted on SURGE and on prior price performance. The returns of a revenue-price combined momentum strategy are obtained by buying stocks in the portfolio with the best SURGE and the highest price performance and selling stocks in the portfolio with the poorest SURGE and the lowest price performance. Panel C and Panel F present the raw returns and risk-adjusted returns of the 25 portfolios independently sorted on SUE and on prior price performance. The returns of an earnings-price combined momentum strategy are obtained by buying stocks in the portfolio with the best SUE and the highest price performance and selling stocks in the portfolio with the poorest SUE and the lowest price performance. We also present the arbitrage returns and risk-adjusted arbitrage returns of single-sorted portfolios based on the quintiles of price performance, SUE, or SURGE at the bottom (and on the right-hand side) of each panel for the purpose of comparison. Risk-adjusted return is the intercept of the Fama-French 3-factor regression where the dependent variable is the arbitrage return or the excess return, which is the difference between the raw return and the risk-free rate
Investors may execute various long-short strategies with those 25 portfolios. Those listed in the farthest right column of Panel A indicate earnings momentum returns for stocks with a particular level of SURGE, while those listed in the last row are returns on revenue momentum for stocks with a given level of SUE.12

We now examine the cross-contingencies of momentum. The revenue momentum measure is 0.36 % per month in the high-SUE subsample E5 and 0.43 % per month in the low-SUE subsample E1. Meanwhile, the earnings momentum measure is 0.39 % per month in the high-SURGE subsample R5 and 0.49 % per month in the low-SURGE subsample R1. We do not observe significant variations in momentum returns across SUE or SURGE. Panel D shows similar patterns when returns to momentum portfolios are adjusted for size and B/M risk factors. All of the profits generated by earnings momentum strategies or revenue momentum strategies remain significantly positive.
81.5.2.2 Two-Way Sorts on Revenue Surprises and Prior Returns

We apply similar sorting procedures based on the joint information of revenue surprises and prior price performance. The results for raw returns, as shown in Panel B of Table 81.7, generally exhibit a pattern similar to Panel A but with the following differences. Although the future returns still rise with SURGE among the average and winner stocks, they become insensitive to SURGE for loser stocks. A closer look at the return for portfolio R1 P1 down to the return for portfolio R5 P1 indicates that loser portfolio returns simply do not vary much with the level of SURGE.

Panel E lists risk-adjusted returns for the 5×5 portfolios sorted on prior returns and SURGE. A similar monotonic pattern, now in relation with SURGE as well as with prior returns, is observed for most of those abnormal returns. That is, stocks that have performed well in terms of revenue (firm fundamental information) and prior returns (firm market information) continue to outperform expectations and yield higher future returns, and vice versa.

As to the cross-contingencies of momentum, the results in Panel B indicate that the revenue momentum strategies executed with winner stocks yield higher returns than those executed with loser stocks. For example, the revenue momentum strategy executed with the most winning stocks yields a monthly return of 0.78 % (R5 P5 – R1 P5), while with the most losing stocks it yields only a monthly return of 0.01 % (R5 P1 – R1 P1). Likewise, the price momentum strategy executed with stocks with greater SURGE yields higher returns than with those with lower SURGE. For example, the price momentum strategy executed with the lowest SURGE stocks yields a monthly return of 0.34 % (R1 P5 – R1 P1), while with the highest SURGE stocks it yields a monthly return as high as 1.08 % (R5 P5 – R5 P1). The difference of 0.74 percentage points between the R1 and R5 subsamples is statistically and economically significant, with price momentum profits more than 200 % higher in R5 than in R1. These observations suggest that the revenue surprise information is least efficient among winner stocks, producing the greatest revenue drift for the next period, and that the prior return information is least efficient among stocks with the most positive SURGE, producing the strongest return continuation. One noteworthy point is that revenue momentum is no longer profitable among loser stocks. Panel E shows similar patterns of momentum cross-contingencies when returns to momentum portfolios are adjusted for size and B/M risk factors.

The message for investment strategy is that prior returns are most helpful in distinguishing future returns among stocks with high SURGE, and the same is true for the implications of revenue surprises for stocks with high prior returns. On the other hand, when a stock is priced unfavorably by the market, the information of revenue surprises does not offer much help in predicting its future returns.

12 Similar to Hong et al. (2000), one may characterize the former strategy as earnings momentum strategies that are "revenue-momentum-neutral" and the latter as revenue momentum strategies that are "earnings-momentum-neutral."
81.5.2.3 Two-Way Sorts on Earnings Surprises and Prior Returns

Panel C of Table 81.7 shows the raw returns for multivariate momentum strategies based on the joint information of earnings surprises and prior returns. Several findings are observed. First, as in the cases shown in Panels A and B, the next-period returns of the 25 two-way sorted portfolios increase monotonically with SUE as well as with prior returns. For example, when a firm has a highly positive earnings surprise (E5) while having had winning stock returns (P5), these two pieces of information together are likely to have particularly strong joint implications for firm value. Such a condition leads to an average monthly return as high as 2.01 % in the next 6-month period, possibly attributable to even greater investor underreactions.

Panel F of Table 81.7 shows the risk-adjusted abnormal returns for each of the 5×5 double-sorted portfolios based on SUE and prior returns. The monotonicity we see in raw returns in Panel C persists for the risk-adjusted returns. The most positive abnormal returns are for the portfolio of high-SUE and high-prior-return stocks (E5 P5), while the most negative abnormal returns are for the portfolio of low-SUE and low-prior-return stocks (E1 P1).

Looking now at the cross-contingencies between earnings momentum and price momentum, the earnings momentum strategy executed with winner stocks yields higher returns (0.78 %) than that executed with loser stocks (0.30 %), and the price momentum strategy executed with positive-SUE stocks yields higher returns (0.92 %) than that executed with negative-SUE stocks (0.45 %). Panel F shows risk-adjusted returns for these momentum strategies and reveals a pattern similar to that in Panel C for raw returns. Results indicate that the market underreactions to price performance are contingent upon the accompanying earnings performance, and vice versa.

Can we reconcile our results on momentum cross-contingencies with the behavioral explanations for momentum returns? Barberis et al. (1998) observe that a conservatism bias might lead investors to underreact to information and then result in momentum profits. The conservatism bias, described by Edwards (1968), suggests that investors underweight new information in updating their prior beliefs. If we accept the conservatism bias explanation for momentum profits, one might interpret our results as follows. Investors update their expectations of stock value using firm fundamental performance information as well as technical information, and their information updates are subject to conservatism biases. The evidence of momentum cross-contingencies suggests that the speed of adjustment to market performance information (historical price) is contingent upon the accompanying fundamental performance information (earnings and/or revenue), and vice versa.

Our results in Panel B and Panel C of Table 81.7 suggest that stock prices suffer from a stronger conservatism bias from investors, and thus delay more in their adjustment to firm fundamental performance information (earnings or revenue), when those stocks experience good news, rather than bad news, as to market performance (prior returns). This then leads to greater earnings or revenue momentum returns for winner stocks than for loser stocks. A similar scenario also leads to greater price momentum returns for high-SUE or high-SURGE stocks than for low-SUE or low-SURGE stocks. This would mean that investors are subject to a conservatism bias that is asymmetric with respect to good news vis-à-vis bad news. That is, investors tend to be even more conservative in reacting to information on firm fundamental performance (market performance) for stocks issuing good news than for those issuing bad news about their market performance (fundamental performance).
81.5.3 Combined Momentum Strategies

The negative results on the dominance tests in Table 81.5 and Table 81.6 mean that each of the information variables, SURGE, SUE, and prior returns, at least to some extent, independently leads to abnormal returns. This then suggests that a combined momentum strategy using more than one of these information measures should offer improved momentum profits. While Chan et al. (1996), Piotroski (2000), Griffin et al. (2005), Mohanram (2005), Sagi and Seasholes (2007), Asness et al. (2013), and Asem (2009) have examined the profitability of combined momentum strategies based on other measures, to the best of our knowledge, we offer the first evidence on the profitability of combined momentum strategies using the three most accessible measures of firm performance, i.e., prior returns, earnings surprises, and revenue surprises, altogether.
81.5.3.1 Bivariate Combined Momentums

Table 81.8 compares and analyzes the combined momentum returns. Panel A shows raw and FF-3 factor adjusted returns to momentum strategies based on one-way, two-way, and three-way sorts. We start with bivariate combined momentums. If we buy stocks with the highest SURGE and the highest SUE (R5 E5) while selling stocks with the lowest SURGE and the lowest SUE (R1 E1), such a revenue-and-earnings combined momentum strategy yields a monthly return as high as 0.81 %, which is higher than the univariate momentum return earned solely on the basis of revenue surprises (0.47 %) or earnings surprises (0.58 %) when using quintile portfolios. This result is also a consequence of our observation that the sorted portfolio returns increase monotonically with both SURGE and SUE.

Panel A of Table 81.8 also shows that investors earn an average monthly return of 1.09 % by buying stocks with the highest SURGE and the most winning prior returns (R5 P5) and selling stocks with the lowest SURGE and the most losing prior returns (R1 P1). This revenue-and-price combined momentum strategy again outperforms the simple revenue momentum (0.47 %) and the simple price momentum strategy (0.72 %). Similarly, an earnings-and-price combined momentum strategy offers an average monthly return of 1.18 %, which outperforms the univariate earnings momentum (0.58 %) and the price momentum strategy (0.72 %). Note that the strategy using SURGE and SUE yields a return (0.81 %) poorer than that using SURGE and prior returns (1.09 %) or that using SUE and prior returns (1.18 %). This suggests that it is important to take advantage of market information (prior returns) as well as firm fundamental information (SURGE and SUE) when it comes to the formulation of investment strategies.

81.5.3.2 Multivariate Combined Momentums

Next, we further sort stocks into quintiles independently and simultaneously based on SURGE, SUE, and prior price performance to obtain three-way sorted portfolios. A revenue-earnings-price combined momentum strategy is performed by buying the stocks with the most positive revenue surprises, the most positive
Panel A. Summary of momentum returns from various single/multiple sorting criteria One-way sorts Two-way sorts Momentum Strategy Raw Adj. Momentum Strategy Raw Adj. Return Return Return Return Mom(R) 0.0047*** 0.0063*** Mom(R + E) 0.0081*** 0.0097*** (4.42) (6.77) (6.25) (7.86) Mom(E) 0.0058*** 0.0063*** Mom(R + P) 0.0109*** 0.0136*** (8.17) (8.81) (4.53) (5.75) Mom(P) 0.0072*** 0.0087*** Mom(E + P) 0.0118*** 0.0133*** (3.36) (4.01) (6.25) (6.09) Panel B. Contribution of momentum returns from single prior performance information Incremental return contribution of revenue Incremental return contribution of earnings momentum momentum Diff. in momentum Return difference Diff. in momentum Return difference strategies strategies Mom(R + P) Mom(P) 0.0038*** Mom(E + P) Mom(P) 0.0048*** (3.91) (6.69)
Table 81.8 Comparisons of assorted single and combined momentum strategies
Raw Return 0.0144*** (6.06)
Adj. Return 0.0168*** (7.12)
Incremental return contribution of price momentum Diff. in momentum Return difference strategies Mom(E + P) Mom(E) 0.0061*** (3.48) (continued)
Mom(R + E + P)
Three-way sorts Momentum Strategy
0.0023*** (2.28) 0.0024*** Mom(R + E + P) Mom(R + P)
Mom(R + E) Mom(R) 0.0035*** (5.76) 0.0033***
(4.04)
0.0063*** (3.58) 0.0062***
Incremental return contribution of (earnings + price) momentum Diff. in momentum Return difference strategies Mom(R + E + P) 0.0096*** Mom(R) (5.54)
Mom(R + E + P) Mom(R + E)
Mom(R + P) Mom(R)
This table presents the return contribution from considering an additional sorting criterion: revenue surprises, earnings surprises, or prior returns. In the table, R, E, and P refer, respectively, to the revenue momentum, earnings momentum, and price momentum strategies. Momentum strategies based on combined criteria are indicated with plus signs. For example, R + P denotes the revenue-price combined momentum strategy, that is, R5 P5 – R1 P1. Panel A summarizes raw returns and risk-adjusted returns obtained from momentum strategies based on one-way sorts, two-way sorts, and three-way sorts. Risk-adjusted return is the intercept of the Fama-French 3-factor regression on the raw return. Panel B lists the return contributions of each additional sorting criterion based on the return differences. The associated t-statistics are in parentheses. Panel C lists the incremental returns obtained by applying two additional sorting criteria. All returns are expressed as monthly returns. ***, **, and * indicate statistical significance at 1 %, 5 %, and 10 %, respectively
(2.70) (4.47) Panel C. Contribution of momentum returns from multiple prior performance information Incremental return contribution of (revenue + Incremental return contribution of (revenue + earnings) momentum price) momentum Diff. in momentum Return difference Diff. in momentum Return difference strategies strategies Mom(R + E + P) 0.0072*** Mom(R + E + P) 0.0085*** Mom(P) Mom(E) (5.47) (4.38)
Mom(R + E + P) Mom(P + E)
Mom(R + E) Mom(E)
Table 81.8 (continued)
earnings surprises, and the highest prior returns (R5 E5 P5), and selling the stocks with the most negative revenue surprises, the most negative earnings surprises, and the lowest prior returns (R1 E1 P1). This leads to a monthly momentum return of 1.44 %, which provides the highest investment returns of all the paired momentum strategies discussed so far.

Panels B and C of Table 81.8 present the differences in portfolio performance, which indicate the incremental contribution to momentum portfolio returns from each additional sorting criterion. The results are straightforward. The joint consideration of each additional performance measure, whether it is revenue surprises, earnings surprises, or prior returns, helps improve the profits of momentum strategies significantly. The net contribution from price momentum is the greatest (0.62 %), followed by earnings momentum (0.33 %) and then revenue momentum (0.24 %). This result further supports the argument that revenue, earnings, and price each convey, to some extent, exclusive but unpriced information.
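A minimal sketch of the three-way combined strategy described above, assuming a long-format DataFrame with hypothetical columns 'month', 'surge', 'sue', 'prior_ret', and the holding-period return 'fwd_ret'; it goes long the R5 E5 P5 corner and short the R1 E1 P1 corner and is illustrative only.

```python
import pandas as pd

def combined_momentum(df, q=5):
    """Three-way combined momentum: each month, independently rank stocks into
    quintiles on SURGE, SUE, and prior return; buy the stocks in the top quintile
    of all three signals and sell those in the bottom quintile of all three."""
    df = df.copy()
    for col in ("surge", "sue", "prior_ret"):
        df[col + "_q"] = df.groupby("month")[col].transform(
            lambda s: pd.qcut(s, q, labels=False, duplicates="drop") + 1)
    top = (df["surge_q"] == q) & (df["sue_q"] == q) & (df["prior_ret_q"] == q)
    bot = (df["surge_q"] == 1) & (df["sue_q"] == 1) & (df["prior_ret_q"] == 1)
    long_leg = df[top].groupby("month")["fwd_ret"].mean()
    short_leg = df[bot].groupby("month")["fwd_ret"].mean()
    strategy = (long_leg - short_leg).dropna()
    return strategy.mean(), strategy   # average monthly return and its time series
```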
81.5.3.3 Dependent Sorts Versus Independent Sorts

With highly correlated sorting criteria, as indicated in Table 81.2, independent multiple sorts may result in portfolios with limited numbers of stocks and therefore insufficient diversification. This will then lead to results that might be confounded by factors other than the intended sorting features. More important, only dependent sorts provide a way to identify the precise conditional momentum returns.

Table 81.9 presents the returns and the associated t-statistics for two-way and three-way sorted combined momentum strategies using independent sorts and dependent sorts in different orders. For two-way sorted combined momentum strategies, dependent sorts are found to generate returns that are insignificantly different from those from independent sorts. For three-way sorted combined momentum strategies, however, the results are found to vary significantly with the sorting method. The three-way dependent sorts, in any order, yield investment strategies that significantly outperform those using independent sorts; independent sorts create an average monthly return of 1.44 %, while dependent sorts lead to an average monthly return ranging from 1.66 % to 1.89 %. To keep the presentation simple, however, we report results from only independent sorts in Tables 81.7 and 81.8. Note that the general conclusions we have drawn remain unchanged with dependent sorts.
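The difference between the two sorting schemes can be made explicit in code. The sketch below (illustrative, not the authors' implementation) performs dependent, i.e., sequential, quintile sorts in a chosen order, so that the breakpoints for each later signal are computed within the cells formed by the earlier ones; the closing comment shows the independent-sort counterpart.

```python
import pandas as pd

def dependent_sort(df, order=("surge", "sue", "prior_ret"), q=5):
    """Dependent (sequential) sorts: each later signal is cut into quintiles
    *within* the groups defined by the earlier signals, so cells remain
    reasonably populated even when the signals are correlated."""
    df = df.copy()
    group_cols = ["month"]
    for sig in order:
        df[sig + "_q"] = df.groupby(group_cols)[sig].transform(
            lambda s: pd.qcut(s.rank(method="first"), q,
                              labels=False, duplicates="drop") + 1)
        group_cols = group_cols + [sig + "_q"]
    return df

# Independent sorts would instead cut each signal within the month only, e.g.
# df[sig + "_q"] = df.groupby("month")[sig].transform(
#     lambda s: pd.qcut(s, 5, labels=False, duplicates="drop") + 1)
```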
81.6 Persistency and Seasonality
81.6.1 Persistence of Momentum Effects

We next examine the persistence of momentum effects driven by revenue surprises, earnings surprises, and prior price performance. Stock prices tend to adjust slowly to information, and abnormal returns will not continue once information is fully incorporated into prices. Following the argument of conservatism bias (see Edwards 1968; and Barberis et al. 1998), an examination of the persistence of
Table 81.9 Returns of combined momentum strategies – a comparison between dependent sorts and independent sorts Momentum Strategies
Mom(R + E) Dep_sorts – Indep_sorts (t-statistic only)
Mom(R + P)
Independent sorts Dependent sorts SURGE | SUE | SUE SURGE 0.0084*** 0.0088*** 0.0081*** (6.25) (6.95) (6.88) (0.55) (1.49)
0.0109*** (4.53)
Dep_sorts – Indep_sorts (t-statistic only) Mom(E + P)
0.0118*** (5.47)
Dep_sorts – Indep_sorts (t-statistic only)
Mom (R + E + P)
0.0144*** (6.06)
Dep_sorts – Indep_sorts (t-statistic only)
P6 | SURGE 0.0104*** (4.66) (1.17)
SURGE | P6 0.0106*** (5.19) (0.57)
P6 | SUE 0.0111*** (5.24) (1.76)
SUE | P6 0.0115*** (6.20) (0.55)
P6| SURGE| P6|SUE| SUE|P6| SURGE| SURGE| P6|SUE SURGE SURGE SUE|P6 SUE 0.0175*** 0.0166*** 0.0189*** 0.0188*** 0.0171***
SUE| SURGE| P6 0.0168***
(4.16) (1.86)
(4.36) (1.39)
(4.12) (1.45)
(4.29) (2.44)
(4.45) (2.60)
(4.47) (1.61)
This table presents returns and the associated t-statistics from two-way and three-way sorted combined momentum strategies, which are formed using independent sorts or dependent sorts. A momentum strategy formed on the basis of multiple criteria, which we call combined momentum strategy, is said to apply independent sorts if portfolios are independently sorted into quintiles according to their SURGE, SUE, and prior price performance, with the partition points being independent across these criteria. A combined momentum strategy is said to apply dependent sorts if portfolios are sorted into quintiles according to their SURGE, SUE, and prior price performance, with a particular sorting order. For example, a two-way sorted momentum strategy based on SURGE and SUE using dependent sorts could be formed by first sorting on SURGE then on SUE (SUE| SURGE) or first sorting on SUE then on SURGE (SURGE|SUE). We present here the returns of momentum strategies following all possible sequences of two-way dependent sorts and three-way dependent sorts.***, **, and * indicate statistical significance at 1 %, 5 %, and 10 %, respectively
momentum returns will reveal the speed of adjustment in reaction to revenue surprises, earnings surprises, and prior returns. More interestingly, the variations of persistence in conditional momentums will demonstrate how one element of information (e.g., revenue surprises) affects the speed of adjustment to another (e.g., prior returns).

Table 81.10 presents the cumulative returns from revenue, earnings, and price momentum strategies. The formation period is kept at 6 months, and the cumulative returns are calculated up to 36 months after the event time. Panel A shows that the zero-investment portfolios built upon revenue surprises maintain their return momentum for 6 months. The buy-and-hold returns drop to insignificance 21 months after the portfolio formation. In Panel B, the profits of earnings momentum portfolios, although not as high as those on price momentum in the short term, demonstrate greater persistence than price momentum, with the cumulative returns continuing to drift upward for 25 months after portfolio formation. The cumulative returns still remain significant at 4.65 % three years after portfolio formation. Panel C shows that the profits to the price momentum portfolio drift upward for 11 months after portfolio formation and start to reverse thereafter. The cumulative returns remain significant at 3.22 % 36 months after portfolio formation.

Figure 81.1 compares the cumulative returns to those three univariate momentum strategies. Price momentum generates the highest cumulative returns in the short term (for a 1-year holding period), while earnings momentum demonstrates the most persistent performance, as cumulative returns continue to grow up to 2 years after portfolio formation. On the other hand, the payoffs to revenue momentum seem to be neither as persistent nor as strong as the other two strategies.

Figure 81.2 presents the cumulative returns for momentum strategies conditional on alternative performance measures. Figure 81.2a, b present the cumulative returns of revenue momentum conditional on high-low SUEs and prior returns. They show that the revenue momentums remain short-lived, regardless of the level of SUE or the level of prior returns. The portfolio returns to a revenue momentum strategy with loser stocks not only quickly dissipate in the short term but actually reverse to negative returns starting 7 months after portfolio formation. Figure 81.2c, d demonstrate the cumulative returns for earnings momentums conditional on high-low SURGE and prior returns. Figure 81.2c shows that the earnings momentum returns remain similar for the low-SURGE and the high-SURGE stocks during the first 20 months after portfolio formation. Such a finding of momentum contingencies in fact conforms to our results in Panel A of Table 81.8. More interestingly, as we hold the portfolio for over 20 months, the earnings momentum strategy with low-SURGE stocks starts deteriorating, while the strategy with high-SURGE stocks still maintains significantly positive returns up to 36 months after the portfolio formation. Figure 81.2d, on the other hand, shows that earnings momentum effects are both greater and longer-lasting for winner stocks than for loser stocks. The caveat on investment strategy is that earnings momentum returns are higher and longer-lived when applied to stocks with a superior price history in the past 6 months.
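The event-time calculations behind Table 81.10 and Fig. 81.1 can be sketched as follows, assuming a hypothetical DataFrame `monthly_spreads` whose rows are formation months and whose k-th column holds the spread return earned k months after formation by the portfolio formed that month. The t-statistics here ignore the overlap of holding periods across formation months and are therefore only indicative.

```python
import numpy as np
import pandas as pd

def cumulative_momentum_returns(monthly_spreads, horizon=36):
    """Average cumulative (buy-and-hold) returns of a zero-cost momentum
    portfolio over event months 1..horizon, obtained by summing monthly returns
    from the formation month and averaging across formation months."""
    cum = monthly_spreads.iloc[:, :horizon].cumsum(axis=1)
    avg_cum = cum.mean(axis=0)                               # average cumulative return
    se = cum.std(axis=0, ddof=1) / np.sqrt(cum.shape[0])
    tstat = avg_cum / se                                     # naive, no overlap correction
    return pd.DataFrame({"cum_return": avg_cum, "t": tstat})
```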
Panel A. Revenue momentum t Negative SURGE Positive SURGE (month) (%) (%) 1 0.68 1.69 2 1.50 3.26 3 2.45 4.66 4 3.57 6.06 5 4.78 7.49 6 6.13 8.87 7 7.49 10.21 8 8.92 11.51 9 10.40 12.82 10 11.93 14.02 11 13.44 15.19 12 14.95 16.39 13 16.26 17.57 14 17.59 18.78 15 18.86 20.01 16 20.23 21.33 17 21.61 22.67 18 22.96 24.03 19 24.40 25.38 20 25.94 26.79 21 27.45 28.13 22 28.91 29.48 23 30.37 30.91 24 31.83 32.38 PMN (%) 1.02*** 1.75*** 2.21*** 2.49*** 2.71*** 2.75*** 2.72*** 2.59*** 2.42*** 2.09*** 1.76*** 1.44*** 1.31*** 1.19*** 1.15*** 1.09** 1.07** 1.07** 0.98** 0.85* 0.68 0.57 0.54 0.55
Panel B. Earnings momentum t (month) Negative SUE Positive SUE (%) (%) 1 0.66 1.83 2 1.53 3.47 3 2.50 4.95 4 3.59 6.42 5 4.76 7.92 6 5.97 9.40 7 7.19 10.80 8 8.51 12.14 9 9.88 13.50 10 11.27 14.75 11 12.66 16.00 12 14.05 17.33 13 15.28 18.69 14 16.49 20.05 15 17.66 21.42 16 18.95 22.89 17 20.26 24.41 18 21.56 25.94 19 22.90 27.44 20 24.34 28.95 21 25.77 30.41 22 27.21 31.88 23 28.67 33.39 24 30.18 34.90
Table 81.10 Cumulative returns from revenue, earnings, and price momentum strategies PMN (%) 1.17*** 1.94*** 2.44*** 2.83*** 3.17*** 3.43*** 3.61*** 3.63*** 3.62*** 3.49*** 3.34*** 3.28*** 3.41*** 3.57*** 3.76*** 3.94*** 4.15*** 4.37*** 4.54*** 4.62*** 4.64*** 4.67*** 4.72*** 4.72***
Panel C. Price momentum t (month) Loser Winner (%) (%) 1 1.12 1.48 2 1.96 3.19 3 2.79 4.76 4 3.69 6.42 5 4.67 8.10 6 5.66 9.86 7 6.64 11.62 8 7.76 13.21 9 9.00 14.77 10 10.22 16.18 11 11.52 17.54 12 12.91 18.80 13 14.31 19.91 14 15.73 21.04 15 17.13 22.19 16 18.64 23.43 17 20.15 24.72 18 21.59 26.09 19 22.91 27.71 20 24.33 29.29 21 25.79 30.89 22 27.23 32.38 23 28.74 33.90 24 30.29 35.41
WMN (%) 0.36 1.23*** 1.97*** 2.74*** 3.4*** 4.21*** 4.99*** 5.45*** 5.78*** 5.95*** 6.02*** 5.89*** 5.60*** 5.31*** 5.06*** 4.79*** 4.57*** 4.50*** 4.79*** 4.96*** 5.10*** 5.14*** 5.17*** 5.12***
33.24 34.68 36.08 37.53 39.06 40.58 42.13 43.77 45.38 46.96 48.49 50.06
33.79 35.19 36.57 37.98 39.41 40.85 42.38 43.93 45.44 46.95 48.46 49.97
0.54 0.51 0.49 0.45 0.35 0.26 0.25 0.16 0.06 0.01 0.03 0.10
25 26 27 28 29 30 31 32 33 34 35 36
31.62 33.11 34.54 36.01 37.52 39.04 40.54 42.08 43.60 45.11 46.59 48.15
36.36 37.80 39.22 40.69 42.18 43.62 45.14 46.66 48.14 49.62 51.20 52.80
4.74*** 4.69*** 4.68*** 4.67*** 4.66*** 4.58*** 4.60*** 4.59*** 4.53*** 4.51*** 4.60*** 4.65*** 25 26 27 28 29 30 31 32 33 34 35 36
31.87 33.48 35.03 36.67 38.38 40.00 41.54 43.10 44.70 46.35 47.86 49.46
36.67 38.00 39.29 40.61 41.90 43.31 44.86 46.50 48.09 49.61 51.15 52.68
4.79*** 4.52*** 4.26*** 3.94*** 3.53*** 3.31*** 3.32*** 3.39*** 3.39*** 3.27*** 3.29*** 3.22***
This table reports the cumulative returns of zero-cost momentum portfolio in each month following the formation period. t is the month after portfolio formation. Three different momentum strategies are tested. The sample period is from 1974 through 2009. Panel A reports the results from the revenue momentum strategy, where sample firms are grouped into five groups based on the measure SURGE during each formation month. The revenue momentum portfolios are formed by buying stocks with the most positive SURGE and selling stocks with the most negative SURGE. Listed are the cumulative portfolio returns for the portfolio with the most negative SURGE, the portfolio with the most positive SURGE, and the revenue momentum portfolio. Panel B reports the results from the earnings momentum strategy, where firms are grouped into five groups based on the measure SUE during each formation month. The earnings momentum portfolios are formed by buying stocks with the most positive SUE and selling stocks with the most negative SUE. Listed are the cumulative portfolio returns for the portfolio with the most negative SUE, the portfolio with the most positive SUE, and the earnings momentum portfolio. Panel C reports the results from the price momentum strategy. The price momentum portfolios are formed by buying Quintile 1 (winner) stocks and selling Quintile 5 (loser) stocks on the basis of previous 6 months returns. Listed are the cumulative portfolio returns for the loser portfolio, the winner portfolio, and the price momentum portfolio.***, **, and * indicate statistical significance at 1 %, 5 %, and 10 %, respectively
25 26 27 28 29 30 31 32 33 34 35 36
Fig. 81.1 Persistence of momentum effects. This figure shows the average cumulative returns of relative strength portfolios with respect to revenue surprises, earnings surprises, and prior price performance [Mom(R), Mom(E), and Mom(P)]. Each relative strength portfolio buys stocks in the highest quintile and sells stocks in the lowest quintile on every formation date and is held for 36 months. The cumulative returns are calculated by adding monthly returns from formation month t to month t + i (x-axis: holding period in months; y-axis: monthly return in %)
In Figure 81.2e, f, price momentum strategies yield higher and more persistent returns for stocks with positive SUE or SURGE than for stocks with negative SUE or SURGE. A comparison of Fig. 81.2e, f also finds that high SURGE serves as a more effective driver than high SUE for stocks to exhibit greater and more persistent price momentum.

These observations on momentum persistence provide further support for our claim of momentum cross-contingencies. We find that the persistence of a momentum, just like the magnitude of the momentum returns, depends on the accompanying content of other firm information. Such cross-contingencies are again not as strong in the relation between revenue momentum and SUE or between earnings momentum and SURGE, as shown in Fig. 81.2a, c. Results suggest that investors update their expectations based on the joint information of revenue surprises, earnings surprises, and prior price performance, and the speed of adjustment to firm fundamental information (SURGE or SUE) depends on the prevailing content of firm market information (prior returns), and vice versa.
Fig. 81.2 Cumulative returns of momentum effect conditional on performance measure. These figures show the average cumulative returns of relative strength portfolios with respect to revenue surprises, earnings surprises, and prior price performance conditional on one another. The holding period is up to 36 months. The cumulative profits are calculated by adding monthly returns from formation month t to month t + i. (a) Cumulative returns of revenue momentum conditional on SUE. (b) Cumulative returns of revenue momentum conditional on prior price performance. (c) Cumulative returns of earnings momentum conditional on SURGE. (d) Cumulative returns of earnings momentum conditional on prior price performance. (e) Cumulative returns of price momentum conditional on SURGE. (f) Cumulative returns of price momentum conditional on SUE

81.6.2 Seasonality

Jegadeesh and Titman (1993), Heston and Sadka (2008), Asness et al. (2013), Novy-Marx (2012), and Yao (2012) find that prior return winners outperform losers in all months except January, leading to positive profits for a price momentum strategy in all months except January but negative profits for that strategy in January. Chordia and Shivakumar (2006) also find significant seasonality effects in returns to the earnings momentum strategy. Do a revenue momentum strategy and combined momentum strategies exhibit similar seasonalities?

Table 81.11 presents results for tests of seasonal patterns in returns to univariate momentum strategies and combined momentum strategies. For all types of momentum strategies, momentum profits in January are either negative or insignificantly different from zero. F-tests reject the hypothesis that the returns to momentum strategies are equal in January and non-January months. We therefore conclude that, as in findings elsewhere, there is seasonality in momentum strategies, and revenue surprises, earnings surprises, and prior returns all yield significantly positive returns only in non-January months.
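A simple way to reproduce this kind of seasonality test is to regress a strategy's monthly returns on a January dummy, as sketched below (assuming a return series indexed by month-end dates); the F-test on the dummy plays the role of the equality test reported in Table 81.11. This is an illustration, not the authors' code.

```python
import pandas as pd
import statsmodels.api as sm

def january_seasonality(strategy_returns):
    """Regress monthly strategy returns on a January dummy: the intercept is the
    average Feb.-Dec. return, the dummy coefficient the January difference, and
    the F-test checks whether January and non-January returns are equal."""
    jan = pd.Series((strategy_returns.index.month == 1).astype(float),
                    index=strategy_returns.index, name="january")
    X = sm.add_constant(jan)
    res = sm.OLS(strategy_returns, X).fit()
    return res.params, res.f_test("january = 0")
```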
Table 81.11 Returns of momentum strategies in January and non-January months

Momentum strategy   All months          Jan.             Feb.-Dec.           F-statistic   p-value
Mom(R)              0.0047*** (4.42)    0.0061 (1.59)    0.0057*** (5.19)    31.66
Mom(E)              0.0058*** (8.17)    0.0026 (0.72)    0.0061*** (8.67)
Mom(P)              0.0072*** (3.36)    0.0134 (1.32)    0.0090*** (4.25)
Mom(R + E)          0.0081*** (6.25)    0.0062 (1.09)    0.0094*** (7.22)
Mom(R + P)          0.0109*** (4.53)    0.0164 (1.44)    0.0134*** (5.62)
Mom(E + P)          0.0118*** (5.47)    0.0082 (0.73)    0.0136*** (6.48)
Mom(R + E + P)      0.0144*** (6.06)    0.0131 (1.11)    0.0169*** (7.26)
(A1) For $t \ge t_0 = \max(k, k')$, the change times of $u_t$ form a renewal process with i.i.d. inter-arrival times that are geometrically distributed with parameter $p$, or equivalently, $I_t := 1_{\{u_t \ne u_{t-1}\}}$ are i.i.d. Bernoulli($p$) with $P(I_t = 1) = p$, $I_{t_0} = 1$, and there is no change point prior to $t_0$.
(A2) $u_t = (1 - I_t)\,u_{t-1} + I_t\,(z_t^T, \gamma_t)^T$, where $(z_1^T, \gamma_1)^T, (z_2^T, \gamma_2)^T, \ldots$ are i.i.d. random vectors such that $z_t \mid \gamma_t \sim \mathrm{Normal}\bigl(z, V/(2\gamma_t)\bigr)$ and $\gamma_t \sim \chi^2_d/\rho$, where $\chi^2_d$ denotes the chi-square distribution with $d$ degrees of freedom.

(A3) The processes $\{I_t\}$, $\{u_t\}$, and $\{(x_t, \epsilon_t)\}$ are independent.

This model generalizes the approach developed by Lai et al. (2005), who consider the special case of AR models with occasional jumps in regression parameters and error variances. The last paragraph of Sect. 93.3.3 of Lai and Xing (2010) gives a brief introduction to the model; we provide more complete details here, together with an updated account that includes some recent results.
85.3.1 Closed-Form Recursive Filters

Conditions (A1)–(A3) specify a Markov chain with unobserved states $(I_t, u_t)$. The observations $(x_t, y_t)$ are such that $(y_t - \beta_t^T x_t)/\nu_t$ forms a GARCH process. This hidden Markov model (HMM) has hyperparameters $p, z, V, \rho, d, a_1, \ldots, a_k, b_1, \ldots, b_{k'}$. To estimate $u_t$ assuming known hyperparameters, let $J_n = \max\{t \le n : I_t = 1\}$ and note that $n \ge J_n \ge k$ by (A1). Define $\mathcal{Y}_n = (x_1, y_1, \ldots, x_n, y_n)$ and $\mathcal{Y}_{j,n} = (x_j, y_j, \ldots, x_n, y_n)$. The estimates $\hat\beta_n$ and $\hat\nu_n^2$ based on $\mathcal{Y}_n$ are weighted averages of $\hat\beta_{n,j}$ and $\hat\nu_{n,j}^2$ based on $\mathcal{Y}_{j,n}$, with the weights $\pi_{n,j}$ to be specified. The $\hat\beta_{n,j}$ and $\hat\nu_{n,j}^2$ can be computed recursively (with increasing $n$ and fixed $j$). Initializing at $n = j - 1$ with $\hat\beta_{n,j} = z$, $\hat V_{n,j} = V$, and $\hat\nu^2_{n,j} = \rho/(2d)$, define for $n \ge j$

$$\hat h_{n,j} = \Bigl(1 - \sum_{i=1}^{k} a_i - \sum_{l=1}^{k'} b_l\Bigr) + \sum_{l=1}^{k'} b_l\,\hat h_{n-l,j} + \sum_{i=1}^{k} a_i\,\frac{\bigl(y_{n-i} - \hat\beta_{n-i,j}^T x_{n-i}\bigr)^2}{\hat\nu^2_{n-i,j}}, \qquad (85.12a)$$

$$V_{n,j} = V_{n-1,j} - \frac{V_{n-1,j}\,x_n x_n^T\,V_{n-1,j}}{\hat h_{n,j} + x_n^T V_{n-1,j} x_n}, \qquad (85.12b)$$

$$\hat\beta_{n,j} = \hat\beta_{n-1,j} + \frac{V_{n-1,j}\,x_n\,\bigl(y_n - \hat\beta_{n-1,j}^T x_n\bigr)}{\hat h_{n,j} + x_n^T V_{n-1,j} x_n}, \qquad (85.12c)$$

$$\hat\nu^2_{n,j} = \frac{d + n - j - 2}{d + n - j - 1}\,\hat\nu^2_{n-1,j} + \frac{1}{d + n - j - 1}\cdot\frac{\bigl(y_n - \hat\beta_{n-1,j}^T x_n\bigr)^2}{\hat h_{n,j} + x_n^T V_{n-1,j} x_n}. \qquad (85.12d)$$
The weights $\pi_{n,j}$ are given recursively by

$$\pi_{n,j} \propto \pi^*_{n,j} := \begin{cases} p\,f_{nn}/f_{00} & \text{if } j = n,\\ (1-p)\,\pi_{n-1,j}\,f_{nj}/f_{n-1,j} & \text{if } j \le n-1, \end{cases} \qquad (85.13)$$

where, letting $z_{n,j} = V_{n,j}\bigl(V^{-1}z + \sum_{t=j}^{n} x_t y_t/\hat h_{t,j}\bigr)$ and

$$r_{n,j} = \frac{1}{2}\Bigl(\rho + z^T V^{-1} z - z_{n,j}^T V_{n,j}^{-1} z_{n,j} + \sum_{t=j}^{n} y_t^2/\hat h_{t,j}\Bigr), \qquad (85.14)$$

$\pi_{n,j} = \pi^*_{n,j}\big/\sum_{j'=1}^{n} \pi^*_{n,j'}$, and the $f_{nj}$ are given explicitly by

$$f_{nj} = |V_{n,j}|^{1/2}\,\Gamma\bigl((d+n-j+1)/2\bigr)\,r_{n,j}^{-(d+n-j+1)/2}, \qquad f_{00} = |V|^{1/2}\,\Gamma(d/2)\,(\rho/2)^{-d/2}. \qquad (85.15)$$
These formulas are extensions of those for the special case $x_t = (y_{t-1}, \ldots, y_{t-k})^T$ and $h_t \equiv 1$ considered by Lai et al. (2005). The extension from the case $h_t \equiv 1$ to more general known $h_t$ basically amounts to extending ordinary least squares estimation (given the most recent change point $J_n$) to generalized least squares estimation.
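To illustrate how the recursions (85.12a)–(85.12d) can be implemented, the sketch below performs one update step for a single candidate change time j, specialized to GARCH(1,1). The dictionary-based bookkeeping and the initialization of h and of the standardized squared residual at 1.0 are assumptions of this sketch, not part of the published algorithm.

```python
import numpy as np

def filter_update_j(state, x_new, y_new, d, a, b):
    """One step of (85.12a)-(85.12d) for a single candidate change time j, with
    GARCH(1,1).  `state` holds beta, V, nu2, the previous h, the previous
    standardized squared residual eps2 = (y - beta'x)^2 / nu2, and nobs, the
    number of observations processed since the change time j."""
    beta, V, nu2 = state["beta"], state["V"], state["nu2"]
    h = (1.0 - a - b) + b * state["h"] + a * state["eps2"]        # (85.12a)
    Vx = V @ x_new
    denom = h + x_new @ Vx
    err = y_new - beta @ x_new
    V_new = V - np.outer(Vx, Vx) / denom                          # (85.12b)
    beta_new = beta + Vx * err / denom                            # (85.12c)
    m = d + state["nobs"] - 1.0                                   # plays the role of d + n - j - 1
    nu2_new = ((m - 1.0) / m) * nu2 + (err ** 2 / denom) / m      # (85.12d)
    eps2_new = (y_new - beta_new @ x_new) ** 2 / nu2_new          # feeds the next (85.12a)
    return {"beta": beta_new, "V": V_new, "nu2": nu2_new, "h": h,
            "eps2": eps2_new, "nobs": state["nobs"] + 1}

# A fresh candidate j is started from the prior, e.g. (z, V0, rho, d assumed given):
# state0 = {"beta": z, "V": V0, "nu2": rho / (2 * d), "h": 1.0, "eps2": 1.0, "nobs": 0}
```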
85.3.2 Estimation of Hyperparameters

The above Bayesian filter involves $z$, $V$, $\rho$, $d$, $p$, and $\theta = (a_1, \ldots, a_k, b_1, \ldots, b_{k'})$. Note that $z$ is the prior mean of $\beta_t$ and $\rho/(d-2)$ is the prior mean of $2\nu_t^2$ at times $t$ when parameter changes occur. As noted in Lai and Xing (2008), it is more convenient to represent the $\chi^2_d/\rho$ prior distribution for $(2\nu_t^2)^{-1}$ as a gamma$(d/2, \rho/2)$ distribution so that $d$ does not need to be an integer. The recursions (85.12b) and (85.12c) are basically recursions for ridge regression, which shrinks the generalized least squares estimate (using the weights $\hat h_{t,j}$) towards $z$, with $V^{-1}$ and $\sum_{t=j}^{n} x_t x_t^T$ being the matrix weights for the shrinkage target and the generalized least squares estimator, respectively. The shrinkage target $z$ and its associated weight matrix $V^{-1}$ are relevant when $n - j$ is small but become increasingly negligible with increasing $n - j$.

We can estimate $z$, $V$, $\rho$, and $d$ by applying the method of moments to the stationary distribution of the Markov chain $(I_t, u_t, \epsilon_t)$ that is partially observed via $(x_t, y_t)$. Details are given in the next paragraph. With $z$, $V$, $\rho$, and $d$ replaced by these estimates, we then estimate $\theta$ and $p$ by maximum likelihood, noting that the log-likelihood function $\ell_n$ based on $y_1, \ldots, y_n$ has the representation

$$\ell_n(\theta, p) = \sum_{t=1}^{n} \log\Bigl[\sum_{j=1}^{t} \pi^*_{t,j}(\theta, p)\Bigr], \qquad (85.16)$$

where $\pi^*_{t,j}$ is given by Eq. 85.13.
Lai and Xing (2013) use method-of-moments estimates of $z$, $V$, $\rho$, and $d$ based on $(x_t, y_t)$, $1 \le t \le n$. From (A2) and (A3), it follows that $E(\beta_t) = z$, $\mathrm{Cov}(\beta_t) = (E\nu_t^2)\,V$, and $E(x_t y_t) = x_t x_t^T z$. From the $n - L$ moving windows $\{(x_t, y_t) : s \le t \le s + L\}$ of these data, compute the least squares estimates

$$\hat\beta^{(s)} = \Bigl(\sum_{s \le t \le s+L} x_t x_t^T\Bigr)^{-1} \sum_{s \le t \le s+L} x_t y_t. \qquad (85.17)$$

Each $\hat\beta^{(s)}$ is a method-of-moments estimate of $z$, and so is $\bar\beta = (n-L)^{-1}\sum_{s=1}^{n-L} \hat\beta^{(s)}$. If an oracle would reveal the change times up to time $n$, then one would segment the time series accordingly and use the least squares estimate for each segment to estimate the regression parameter for that segment. The average of these least squares estimates over the segments would provide a method-of-moments estimate of $z$. Similarly, the average squared residual in each segment is a method-of-moments estimate of $E(\nu_t^2) = \rho/[2(d-2)]$, and so is the average of these values over the segments; see Engle and Mezrich (1996). In ignorance of the change points, the segments are replaced by moving windows of length $L + 1$ in Eq. 85.9, and $z$ is estimated by the average $\bar\beta$ of the $\hat\beta^{(s)}$. Likewise, $\rho$ and $d$ can be estimated by equating the mean and variance of the inverted gamma distribution for $\nu_t^2$ to their sample counterparts for the average squared residuals:

$$\frac{\hat\rho}{2(\hat d - 2)} = \bar\nu := (n-L)^{-1}\sum_{s=1}^{n-L}\Bigl[\sum_{s \le t \le s+L}\bigl(y_t - x_t^T\hat\beta^{(s)}\bigr)^2\big/(L+1)\Bigr],$$
$$\frac{\hat\rho^2}{2(\hat d - 2)^2(\hat d - 4)} = (n-L)^{-1}\sum_{s=1}^{n-L}\Bigl[\sum_{s \le t \le s+L}\bigl(y_t - x_t^T\hat\beta^{(s)}\bigr)^2\big/(L+1)\Bigr]^2 - \bar\nu^2. \qquad (85.18)$$

Similarly, Lai and Xing (2013) estimate $V$ by

$$\hat V = \bigl[2(\hat d - 2)/\hat\rho\bigr]\,(n-L)^{-1}\sum_{s=1}^{n-L}\bigl(\hat\beta^{(s)} - \bar\beta\bigr)\bigl(\hat\beta^{(s)} - \bar\beta\bigr)^T. \qquad (85.19)$$
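The moment-matching steps (85.17)–(85.19) translate directly into code. The sketch below computes window OLS estimates and inverts the two moment equations for d and ρ; the closed-form inversion follows from the displayed moments and is an illustration under these assumptions, not the authors' implementation.

```python
import numpy as np

def moment_hyperparameters(X, y, L):
    """Method-of-moments estimates of z, V, rho and d from moving windows of
    length L + 1, following (85.17)-(85.19).  X has shape (n, p), y has shape (n,)."""
    n, p = X.shape
    betas, msr = [], []                      # window OLS estimates and mean squared residuals
    for s in range(n - L):
        Xs, ys = X[s:s + L + 1], y[s:s + L + 1]
        b, *_ = np.linalg.lstsq(Xs, ys, rcond=None)          # (85.17)
        betas.append(b)
        msr.append(np.mean((ys - Xs @ b) ** 2))
    betas, msr = np.asarray(betas), np.asarray(msr)
    z_hat = betas.mean(axis=0)
    m1, m2 = msr.mean(), msr.var(ddof=0)
    # (85.18): m1 = rho / (2(d-2)) and m2 = rho^2 / (2(d-2)^2(d-4))  =>
    d_hat = 4.0 + 2.0 * m1 ** 2 / m2
    rho_hat = 2.0 * (d_hat - 2.0) * m1
    # (85.19): V = Cov(beta^(s)) * 2(d-2)/rho
    dev = betas - z_hat
    V_hat = (2.0 * (d_hat - 2.0) / rho_hat) * (dev.T @ dev) / (n - L)
    return z_hat, V_hat, rho_hat, d_hat
```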
85.3.3 Bounded Complexity Mixture Approximations

Although Eq. 85.13 provides closed-form recursions for updating the weights $\pi_{t,i}$, $1 \le i \le t$, the number of weights increases with $t$, resulting in rapidly increasing computational complexity and memory requirements for estimating $u_n$ as $n$ increases. A natural idea to reduce the complexity and to facilitate the use of parallel algorithms for the recursions is to keep only a fixed number $M$ of weights at every stage $n$ (which is tantamount to setting the other weights to 0).
Lai and Xing (2013) keep the most recent $m$ weights $\pi_{n,i}$ (with $n - m < i \le n$) and the largest $M - m$ of the remaining weights, where $1 \le m < M$. Specifically, let $K_{n-1}$ denote the set of indices $i$ for which $\pi_{n-1,i}$ is kept at stage $n-1$; thus $K_{n-1} \supset \{n-1, \ldots, n-m\}$. At stage $n$, define $\pi^*_{n,i}$ by Eq. 85.5 for $i \in \{n\} \cup K_{n-1}$, and let $i_n$ be the index not belonging to $\{n, n-1, \ldots, n-m+1\}$ such that

$$\pi^*_{n,i_n} = \min\bigl\{\pi^*_{n,j} : j \in K_{n-1} \text{ and } j \le n-m\bigr\}, \qquad (85.20)$$

choosing $i_n$ to be the one farthest from $n$ if the minimizing set in (85.20) has more than one element. Define $K_n = \{n\} \cup (K_{n-1} - \{i_n\})$, and let

$$\pi_{n,i} = \pi^*_{n,i}\Big/\sum_{j \in K_n} \pi^*_{n,j}. \qquad (85.21)$$

Lai and Xing (2013) use these bounded complexity mixtures (BCMIX) not only to approximate the filters $(\beta_t, \nu_t) \mid \mathcal{Y}_t$ but also to approximate the likelihood function (85.8), in which $\sum_{j=1}^{t}$ is replaced by $\sum_{j \in K_t}$. They use a grid of the form $\{2^j/n : j_0 \le j \le j_1\}$, where $j_0 < 0 < j_1$ are integers, to search for the maximum $\hat p_n$ of $\ell_n(p, \hat\theta_n)$ over the grid. Letting $\lambda = (p, \theta)$, $\hat\lambda_n = (\hat p_n, \hat\theta_n)$ is used to replace $\lambda$ in the recursions (85.4) and (85.5). The update $\hat\theta_n$ of the GARCH parameters after observing $(x_n, y_n)$ uses simply a single iteration of the Newton-Raphson procedure to maximize $\ell_n(\hat p_{n-1}, \theta)$ when $n \ge n_0$ and uses more iterations until convergence for small $n$. Therefore, relatively fast updates of the hyperparameter estimates can be used to implement the adaptive BCMIX filters.
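The pruning rule (85.20)–(85.21) amounts to keeping the m most recent indices plus the largest of the rest and dropping the smallest eligible weight, with ties going to the index farthest from n. A small illustrative helper (generic Python, not the authors' code) is sketched below.

```python
def bcmix_prune(weights, n, M, m):
    """Bounded complexity mixture (BCMIX) pruning.  `weights` maps candidate
    change times i <= n to their unnormalised weights pi*_{n,i}; at most M
    indices are kept, the m most recent ones unconditionally."""
    keep = dict(weights)
    while len(keep) > M:
        eligible = {i: w for i, w in keep.items() if i <= n - m}
        # smallest weight; among ties, the index farthest from n (smallest i)
        i_drop = min(eligible, key=lambda i: (eligible[i], i))
        del keep[i_drop]
    total = sum(keep.values())
    return {i: w / total for i, w in keep.items()}   # renormalise as in (85.21)
```

In the sequential algorithm only one new index is added per stage, so the while loop removes at most one index at each call; the loop form simply makes the helper usable from an arbitrary starting set.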
85.3.4 Sequential BCMIX Forecasts
The AR-GARCH model is often used to forecast future returns and their volatilities for portfolio optimization and risk management; see Sects. 6.4.1 and 12.2.3 of Lai and Xing (2008). Incorporating exogenous inputs and change points into the model improves the forecasts. For the change-point ARX-GARCH model, first assume that the hyperparameters are known. Since $\mathbf{x}_{n+1}$ consists of $y_n, \ldots, y_{n-k+1}$ and other input variables up to time n, and since time n + 1 has prior probability p of being a change point, the forecast $\hat y_{n+1|n}$ of $y_{n+1}$ given $\mathcal{Y}_n$ is related to the filter $\hat{\mathbf{b}}_n$ given in Sect. 85.1 by
$$\hat y_{n+1|n} = p\,\mathbf{z}^T\mathbf{x}_{n+1} + (1-p)\,\hat{\mathbf{b}}_n^T\mathbf{x}_{n+1}, \qquad (85.22)$$
noting that $\mathbf{b}_{n+1}$ is equal to $\mathbf{b}_n$ with probability $1 - p$. Similarly, the forecast $\hat\nu^2_{n+1|n}$ of $\nu^2_{n+1}$ given $\mathcal{Y}_n$ is
$$\hat\nu^2_{n+1|n} = \frac{p\,\rho}{2(\delta - 2)} + (1-p)\,\hat\nu^2_{n|n}, \qquad (85.23)$$
assuming $\delta > 4$ in view of Eq. 85.10. Note that the conditional variance in the GARCH model involves $\nu_n^2 h_n$ rather than $\nu_n^2$. Lai and Xing (2013) can use Eq. 85.4a with n replaced by n + 1 to forecast $h_{n+1}$. In particular, for GARCH(1,1), they forecast $h_{n+1}$ by $h_{n+1,j} = (1 - a - b) + b\,\hat h_{n,j} + a\bigl(y_n - \hat{\mathbf{b}}_{n,j}^T\mathbf{x}_n\bigr)^2/\hat\nu^2_{n,j}$ and use the weights $p_{n,j}$ in Eq. 85.13 to weight
$$\hat\nu^2_{n+1,j}\,\hat h_{n+1,j} = \left\{\frac{p\,\rho}{2(\delta - 2)} + (1-p)\,\hat\nu^2_{n,j}\right\}\left\{(1 - a - b) + b\,\hat h_{n,j} + \frac{a\bigl(y_n - \hat{\mathbf{b}}_{n,j}^T\mathbf{x}_n\bigr)^2}{\hat\nu^2_{n,j}}\right\}. \qquad (85.24)$$
85.3.5 BCMIX Smoothers
Lai and Xing (2013) begin by deriving the Bayes estimate (smoother) of $u_t = (\mathbf{b}_t^T, \nu_t)^T$ given $\mathcal{Y}_n$ for $1 \le t \le n$ in the "oracle" setting in which the $h_t$ are specified exactly (by the oracle), so that there are explicit recursive representations of the posterior mean of $u_t$ given $\mathcal{Y}_n$ for $1 \le t \le n$. To obtain the optimal smoother $E(u_t \mid \mathcal{Y}_n)$ for $1 \le t \le n$, they use Bayes theorem to combine the forward filter $u_t \mid \mathcal{Y}_t$ with the backward filter $u_t \mid \mathcal{Y}_{t+1,n}$. Because the $h_t$ are assumed known in $\bigl(y_t - \mathbf{b}_t^T\mathbf{x}_t\bigr)/\sqrt{h_t} = \nu_t \epsilon_t$ and the $\epsilon_t$ are i.i.d. standard normal, the backward filter has the same form as the forward filter. In fact, assumptions (A1)–(A3) define a reversible Markov chain of jump times and jump magnitudes, assuming $I_{n_0+1} = 1$ and no change points afterwards. Let $\pi$ denote the density function of the stationary distribution. Letting $\tilde J_{t+1} = \min\{s \ge t+1 : I_s = 1\}$ and $q_{t+1,j} = P\bigl(\tilde J_{t+1} = j \mid \mathcal{Y}_{t+1,n}\bigr)$ for $j \ge t+1$, one can reverse time and obtain a backward filter that is similar to the forward filter:
$$f\bigl(u_t \mid \mathcal{Y}_{t+1,n}\bigr) = p\,\pi(u_t) + (1-p)\sum_{j=t+1}^{n} q_{t+1,j}\, f\bigl(u_{t+1} \mid \mathcal{Y}_{t+1,n},\, \tilde J_{t+1} = j\bigr),$$
in which
$$q_{t+1,j} \propto q^{*}_{t+1,j} = \begin{cases} p\, f_{jj}/f_{00} & \text{if } j = t+1, \\ (1-p)\, q_{t+2,j}\, f_{t+1,j}/f_{t+2,j} & \text{if } j \ge t+2. \end{cases}$$
Application of Bayes theorem then yields
$$f(u_t \mid \mathcal{Y}_n) \;\propto\; f(u_t \mid \mathcal{Y}_t)\, f\bigl(u_t \mid \mathcal{Y}_{t+1,n}\bigr)/\pi(u_t) \;\propto\; p\sum_{i=1}^{t} p_{i,t}\, f(u_t \mid \mathcal{Y}_n, C_{it}) + (1-p)\sum_{1\le i\le t < j\le n} a_{ijt}\, f(u_t \mid \mathcal{Y}_n, C_{ijt}), \qquad (85.26)$$
with $A_t$ denoting the corresponding normalizing constant. Moreover, the posterior probability of having a change point at time t is given by
$$P(I_{t+1} = 1 \mid \mathcal{Y}_n) = \sum_{1 \le i \le t} P(C_{it} \mid \mathcal{Y}_n) = p/A_t.$$
The next step is to approximate $a_{ijt}$ by $\hat a_{ijt}$, which replaces the $h_t$, actually unknown, by the estimates $\hat h_{j,i}$ defined recursively for $j \ge i$ by Eq. 85.12a. As in Sect. 85.3.1, we assume known hyperparameters p and h for the time being. Using the BCMIX approximation to the forward and backward filters, we approximate the sum in Eq. 85.25 and that defining $A_t$ in Eq. 85.26 by
$$\mathbf{b}_t \mid \tau_t, \mathcal{Y}_n \;\sim\; \sum_{i\in K_t,\; j\in\{t\}\cup\tilde K_{t+1}} a_{ijt}\, N\bigl(\mathbf{z}_{j,i},\, \mathbf{V}_{j,i}/(2\tau_{j,i})\bigr), \qquad \tau_{j,i} \mid \mathcal{Y}_n \sim \chi^2_{\delta+j-i+1}/\rho_{j,i}, \qquad (85.27)$$
where $K_t$ is the same as that in Sect. 85.3.3 for the forward filter and $\tilde K_{t+1}$ is the corresponding set for the backward filter. Assuming known h and p, the BCMIX estimates for $\mathbf{b}_t$ and $\nu_t$ given $\mathcal{Y}_n$ are
$$\hat{\mathbf{b}}_{t|n} = \sum_{i\in K_t,\; j\in\{t\}\cup\tilde K_{t+1}} a_{ijt}\,\mathbf{z}_{j,i}, \qquad \hat\nu^2_{t|n} = \sum_{i\in K_t,\; j\in\{t\}\cup\tilde K_{t+1}} a_{ijt}\,\frac{\rho_{j,i}}{\delta + j - i - 1},$$
$$\hat\tau_{t|n} = \sum_{i\in K_t,\; j\in\{t\}\cup\tilde K_{t+1}} a_{ijt}\,\frac{\delta + j - i + 1}{2\,\rho_{j,i}}. \qquad (85.28)$$
The conditional probability of a change point at time $t\ (\le n)$ given $\mathcal{Y}_n$ is estimated by
$$\hat P(I_{t+1} = 1 \mid \mathcal{Y}_n) = p/A_t. \qquad (85.29)$$
Without assuming p and h to be known, we can use the BCMIX approximation in the log-likelihood function (85.16) based on $\mathcal{Y}_n$ and evaluate its maximizer $(\hat p, \hat{\mathbf{h}})$. Replacing (p, h) by $(\hat p, \hat{\mathbf{h}})$ in Eq. 85.28 yields the BCMIX empirical Bayes smoother.
85.3.6 Application to Segmentation
In principle, the frequentist approach to multiple change-point problems for regression models reviewed in Sect. 85.1 can be extended to ARX-GARCH models by maximizing the log-likelihood over the locations of the change points and the piecewise constant parameters when it is assumed that there are k change points. This optimization problem, however, is much more difficult than that for regression models and only constitutes an inner loop of an algorithm whose outer loop is another minimization, over k, of a suitably chosen model selection criterion to determine k. For computational and analytic tractability, the frequentist approach typically assumes that k is small relative to n and that adjacent change points are sufficiently far apart so that the segments are identifiable except for relatively small neighborhoods of change points; see Bai and Perron (1998). Lai and Xing (2011) formulate these assumptions for the piecewise constant parameter vectors $u_t$ as follows:
(B1) The true change points occur at $t_1^{(n)} < \cdots < t_k^{(n)}$ such that $\liminf_{n\to\infty} n^{-1}\bigl(t_i^{(n)} - t_{i-1}^{(n)}\bigr) > 0$ for $1 \le i \le k+1$, with $t_0^{(n)} = 0$ and $t_{k+1}^{(n)} = n$.
(B2) There exists $d > 0$, which does not depend on n, such that $\min_{1\le i\le k}\bigl\|u_{t_i^{(n)}} - u_{t_{i-1}^{(n)}}\bigr\| \ge d$ for all large n.
In the context of ARX-GARCH models, Lai and Xing (2013) also assume that the stochastic regressors satisfy the stability condition:
(B3) $\max_{1\le t\le n}\|\mathbf{x}_t\|^2/n \to 0$ and $\sum_{t=1}^{n}\mathbf{x}_t\mathbf{x}_t^T/n$ converges almost surely to a positive definite nonrandom matrix.
Under (B1)–(B3) and assuming that $m \ge |\log n|^{1+\epsilon}$ and $M - m = O(1)$ as $n \to \infty$, for some $\epsilon > 0$, Lai and Xing (2013) have shown that the BCMIX smoother $\hat u_{t|n}$ satisfies
$$\max_{1\le t\le n:\ \min_{1\le i\le k}|t - t_i^{(n)}| \ge m}\bigl\|\hat u_{t|n} - u_t\bigr\| \to 0, \quad \text{as } n \to \infty, \qquad (85.30)$$
uniformly in $a_1/n \le p \le a_2/n$. They apply this result to estimate the change times $t_1^{(n)}, \ldots, t_k^{(n)}$ in (B1) as follows. Let
$$D_t = \bigl\|\hat u_{t+m} - \hat u_{t-m}\bigr\|^2, \qquad (85.31)$$
and let $\hat t_1$ be the maximizer of $D_t$ over $m < t < n - m$. After $\hat t_1, \ldots, \hat t_{j-1}$ have been defined, define
$$\hat t_j = \arg\max_{t:\ m < \cdots}$$

$$\sigma\sqrt{t} = -N^{-1}\!\left[\frac{C_1 - C_2}{e^{-rt}(K_2 - K_1)}\right] \pm z \quad \text{when } S \gtrless e^{-rt}\sqrt{K_1 K_2}, \qquad
z = \sqrt{\left\{N^{-1}\!\left[\frac{C_1 - C_2}{e^{-rt}(K_2 - K_1)}\right]\right\}^2 + \ln\!\left(\frac{S^2}{e^{-2rt}K_1 K_2}\right)}. \qquad (90.17)$$

It is clear that if the stock price is less than the lower exercise price $K_1$ (i.e., both call options are out of the money) and we had chosen the value with the plus sign of $z$ in Eq. 90.17, the ISD calculated by Eq. 90.17 will be overstated. The advantage of this formula is that a sufficient condition for calculating the ISD by Eq. 90.17 only requires that there exist any two consecutive call option values with different exercise prices. However, the accuracy of this formula depends on the magnitude of the deviation between these two exercise prices. Ang et al. (2012) further extend this approach to include a third option and derive a third formula. Similar to Eq. 90.16, if there is a third call option $C_3$ with exercise price $K_3$, then the following Eq. 90.18 must hold for $K_2$, $K_3$ and $C_2$, $C_3$:
$$\bigl(\sigma\sqrt{t}\bigr)^2 + 2N^{-1}\!\left[\frac{C_2 - C_3}{e^{-rt}(K_3 - K_2)}\right]\sigma\sqrt{t} - \ln\!\left(\frac{S^2}{e^{-2rt}K_2 K_3}\right) = 0. \qquad (90.18)$$
Given the constant variance assumption in the Black–Scholes option model, the following Eq. 90.19 is derived by subtracting Eq. 90.18 from Eq. 90.16:
$$\sigma\sqrt{t} = \frac{\ln(K_3/K_1)}{2\left\{N^{-1}\!\left[\dfrac{C_1 - C_2}{e^{-rt}(K_2 - K_1)}\right] - N^{-1}\!\left[\dfrac{C_2 - C_3}{e^{-rt}(K_3 - K_2)}\right]\right\}}. \qquad (90.19)$$
An advantage of using Eq. 90.19 rather than Eq. 90.17 to estimate the ISD is that it circumvents the sign issue that appears in Eq. 90.17. However, a drawback of Eq. 90.19 is that at least three call options must exist, instead of the two required for Eq. 90.17. Equation 90.19 provides a simple formula for calculating the ISD because all option values and exercise prices are given, and the inverse of the standard cumulative normal distribution function is also available in an Excel spreadsheet. Ang et al. (2012) state that this third formula in Eq. 90.19 is a more accurate method for estimating the ISD based on their simulation results.
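For illustration, Eq. 90.19 can be evaluated in a few lines of MATLAB. The option prices, strikes, rate, and maturity below are hypothetical inputs (not taken from the chapter's data), and norminv requires the Statistics Toolbox.

% Implied standard deviation from three consecutive call prices via Eq. 90.19 (hypothetical inputs).
C = [12.50 9.00 6.20];      % call prices C1, C2, C3
K = [95 100 105];           % exercise prices K1 < K2 < K3
r = 0.03; t = 0.25;         % risk-free rate and time to maturity in years
d12   = norminv((C(1)-C(2))/(exp(-r*t)*(K(2)-K(1))));
d23   = norminv((C(2)-C(3))/(exp(-r*t)*(K(3)-K(2))));
isd   = log(K(3)/K(1))/(2*(d12 - d23));   % sigma*sqrt(t) from Eq. 90.19
sigma = isd/sqrt(t)                       % annualized ISD

With these assumed inputs the annualized ISD comes out in the neighborhood of 27 %, illustrating that the formula needs only the three prices, the two threshold ratios, and the strike ratio.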
90.4 Illustration of Estimating Implied Standard Deviation by MATLAB
The data used in this study for estimating the ISD are the call options on S&P 500 index futures, which are traded at the Chicago Mercantile Exchange (CME).2 According to Eq. 90.6, we need the market price of the call option on S&P 500 index futures, the annualized risk-free rate, the S&P 500 index futures price, the exercise price, and the maturity date of the contracts as input variables to calculate the ISD of call options on S&P 500 index futures. Daily closing-price data of S&P 500 index futures and options on S&P 500 index futures were gathered from Datastream for two periods: options expiring in March, June, and September 2010; options expiring in March, June, and September 2011; and the S&P 500 index futures from October 1, 2008, to November 4, 2011. The S&P 500 spot price is based on the closing price of the S&P 500 index on Yahoo! Finance3 during the same period as the S&P 500 index futures data. The risk-free rate used in the Black model is based on the 3-month Treasury bill rate from the Federal Reserve Bank of St. Louis.4 The selection of these futures option contracts is based on the length of their trading history. The futures options expiring in March, June, September, and December have over 1 year of trading dates (above 252 observations), while other options have only about 100 observations. Therefore, we choose only the futures options with the longer trading period to investigate the distributional statistics of these ISD series. Studying two different time periods (2010 and 2011) of call options on S&P 500 index futures allows the examination of ISD characteristics and movements over time as well as the effects of different market climates. The tolerance level used follows the same formula as shown in Eq. 90.5, with the tolerance level Q set equal to 0.000001:
$$\left|\frac{\sigma_1 - \sigma_0}{\sigma_0}\right| < 0.000001$$
This chapter utilizes the Financial Toolbox in MATLAB to calculate the implied volatility for futures options; the function call is as follows5:

Volatility = blsimpv(Price, Strike, Rate, Time, Value, Limit, Tolerance, Class)
2 Nowadays Chicago Mercantile Exchange (CME), Chicago Board of Trade (CBOT), New York Mercantile Exchange (NYMEX), and Commodity Exchange (COMEX) are merged and operate as designated contract markets (DCM) of the CME Group, which is the world's leading and most diverse derivatives marketplace. Website of CME Group: http://www.cmegroup.com/
3 Website of Yahoo! Finance is as follows: http://finance.yahoo.com
4 Website of Federal Reserve Bank of St. Louis: http://research.stlouisfed.org/
5 The syntax and the code from the m-file source of MATLAB for the implied volatility function of futures options are presented in Appendix 1. Detailed information on the function and an example of calculating the implied volatility for a futures option can also be found on the MathWorks website: http://www.mathworks.com/help/toolbox/finance/blkimpv.html
where blsimpv is the function name in MATLAB; Price, Strike, Rate, Time, Value, Limit, Tolerance, and Class are input variables; and Volatility is the annualized ISD (also called implied volatility). The advantages of this function are that it allows an upper bound on the implied volatility (the Limit variable) and an adjustable implied volatility termination tolerance (the Tolerance variable), in general equal to 0.000001.

A summary of the ISD distributional statistics for S&P 500 index futures call options in 2010 and 2011 appears in Table 90.1. The most noteworthy feature of this table is the significantly different mean values of the ISD that occur for different exercise prices. The means and variability of the ISD in 2010 and 2011 appear to be inversely related to the exercise price. Comparing the mean ISDs across time periods, it is quite evident that the ISDs in 2011 are significantly smaller. Also, the time-to-maturity effect observed by Park and Sears (1985) can be identified: the September options in 2011 possess a higher mean ISD than those maturing in June and March with the same strike price.

The other statistical measures listed in Table 90.1 are the relative skewness and relative kurtosis of the ISD series, along with the studentized range. Skewness measures lopsidedness in the distribution and might be considered indicative of a series of large outliers at some point in the time series of the ISDs. Kurtosis measures the peakedness of the distribution relative to the normal and has been found to affect the stability of variance (see Lee and Wu 1985). The studentized range gives an overall indication as to whether the measured degrees of skewness and kurtosis have deviated significantly from the levels implied by a normality assumption for the ISD series. Although an interpretation of the effects of skewness and kurtosis on the ISD series needs more careful analysis, a few general observations are warranted at this point. Both the 2010 and 2011 ISD statistics present a picture very different from a normal distribution, certainly challenging any normality assumptions within the Black–Scholes option pricing framework. Using significance tests on the results of Table 90.1 in accordance with the Jarque–Bera test, the 2010 and 2011 skewness and kurtosis measures indicate a high proportion of statistical significance. We also utilize a simple back-of-the-envelope test based on the studentized range to identify whether the individual ISD series approximate a normal distribution. Studentized ranges larger than 4 in both 2010 and 2011 indicate that a normal distribution significantly understates the maximum magnitude of deviation in the individual ISD series. As a final point in this brief examination of ISD skewness and kurtosis, note the statistics for the MAR10 1075, MAR11 1200, and MAR11 1250 contracts. The relative size of these contracts' skewness and kurtosis measures reflects the high degree of instability that their ISDs exhibited during the last 10 days of the contracts' lives. Such instability is consistent across contracts. However, these distortions remain in the computed skewness and kurtosis measures only for these particular contracts, emphasizing how a few large outliers can magnify the size of these statistics. For example, the S&P 500 futures price jumped on
Table 90.1 Distributional statistics for the ISD series of call options on S&P 500 index futures

Option series a               Mean   Std. dev.  CV b   Skewness  Kurtosis  Studentized range c  Observations
Call futures options in 2010
MAR10 1075 (C070WC)           0.230  0.032      0.141  2.908     14.898    10.336               251
JUN10 1050 (B243UE)           0.263  0.050      0.191  0.987     0.943     6.729                434
JUN10 1100 (B243UF)           0.247  0.047      0.189  0.718     0.569     4.299                434
SEP10 1100 (C9210T)           0.216  0.024      0.111  0.928     1.539     6.092                259
SEP10 1200 (C9210U)           0.191  0.022      0.117  0.982     2.194     6.178                257
Call futures options in 2011
MAR11 1200 (D039NR)           0.206  0.040      0.195  5.108     36.483    10.190               384
MAR11 1250 (D1843V)           0.188  0.027      0.145  3.739     25.527    10.636               324
MAR11 1300 (D039NT)           0.176  0.021      0.118  1.104     4.787     8.588                384
JUN11 1325 (B513XF)           0.165  0.016      0.095  1.831     12.656    10.103               200
JUN11 1350 (A850CJ)           0.161  0.018      0.113  0.228     1.856     8.653                234
SEP11 1250 (B9370T)           0.200  0.031      0.152  2.274     6.875     7.562                248
SEP11 1300 (B778PK)           0.185  0.024      0.131  2.279     6.861     7.399                253
SEP11 1350 (B9370V)           0.170  0.025      0.147  2.212     5.848     6.040                470

a Option series contain the name and code of futures options with information on the strike price and the expiration month; for example, SEP11 1350 (B9370V) represents the futures call option expiring in September 2011 with strike price $1,350, and the parentheses give the code of this futures option in Datastream
b CV represents the coefficient of variation, i.e., the standard deviation of the option series divided by its mean value
c Studentized range is the difference between the maximum and minimum of the observations divided by the standard deviation of the sample
January 18, 2010, and plunged on February 2, 2011, which caused the ISD of these particular contracts to increase sharply on those dates. Thus, while still of interest, any skewness and kurtosis measures must be calculated and interpreted with caution. One difficulty in discerning the correct value for the volatility parameter in the option pricing model is due to its fluctuation over time. Therefore, since an accurate estimate of this variable is essential for correctly pricing an option, it would seem
that time series and cross-sectional analysis of this variable would be as important as the conventional study of security price movements. Moreover, by examining the ISD series of each call option on S&P 500 index futures over time as well as within different time sets, the unique relationships between the underlying stochastic process and the pricing influences of differing exercise prices, maturity dates, and market sentiment (and, indirectly, volume) might be revealed in a way that could be modeled more efficiently. Therefore, we should consider autoregressive–moving-average (ARMA) models or cross-sectional time series regression models to analyze the ISD series and forecast the price of call options on S&P 500 index futures by predicting the future ISD of these options.
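For completeness, the distributional statistics reported in Table 90.1 can be computed for any ISD series along the following lines. This is a sketch with a placeholder series rather than the chapter's data; skewness and kurtosis require the Statistics Toolbox, and relative kurtosis is taken here to mean excess kurtosis.

% Distributional statistics of an ISD series as in Table 90.1 (placeholder series).
isd  = 0.20 + 0.03*randn(250,1);                       % replace with an actual daily ISD series
stat = [mean(isd), std(isd), std(isd)/mean(isd), ...   % mean, std. dev., coefficient of variation
        skewness(isd), kurtosis(isd)-3, ...            % relative skewness and (excess) kurtosis
        (max(isd)-min(isd))/std(isd)]                  % studentized range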
90.5 Summary and Concluding Remarks
The estimation of implied volatility has become one of the most important topics in option pricing research because the standard deviation of the underlying asset return, which is an important factor in the Black–Scholes option pricing model, cannot be observed directly. The purpose of this chapter is to review the different theoretical methods used to estimate the implied standard deviation and to show how the implied volatility can be estimated in empirical work. We review the OLS method and a Taylor series expansion method for estimating the ISD in the previous literature. Three formulas for estimating the ISD by applying a Taylor series expansion to the Black–Scholes option pricing model can be derived from one, two, and three options, respectively. Because these formulas neglect the remainder terms of the Taylor series expansion, their accuracy depends on how close the underlying asset price is to the present value of the exercise price of an option. In the empirical work, we illustrate how MATLAB can be used to estimate the implied volatility of call options on S&P 500 index futures in 2010 and 2011. The results show that the time series of implied volatility significantly violate the assumption of constant volatility in the Black–Scholes option pricing model. The skewness and kurtosis measures reflect the instability and fluctuation of the ISD series over time. Therefore, in future research on the ISD, we should consider autoregressive–moving-average (ARMA) models or cross-sectional time series regression models to analyze and predict the ISD series in order to forecast the future price of call options on S&P 500 index futures.
Appendix 1: The Syntax and Code for the Implied Volatility Function of Futures Options in MATLAB

The function used in this chapter for estimating the implied volatility of European call options on index futures is as follows:
Syntax

Volatility = blsimpv(Price, Strike, Rate, Time, Value, Limit, ...
                     Tolerance, Class)

The input variables, which can be a scalar, vector, or matrix, of the function for estimating implied volatility are described in Table 90.2. The code from the m-file source of MATLAB for the implied volatility function of futures options is shown below:

function volatility = blkimpv(F, X, r, T, value, varargin)
% BLKIMPV Implied volatility from Black's model for futures options.
% Compute the implied volatility of a futures price from the market
% value of European futures options using Black's model.
%
% Volatility = blkimpv(Price, Strike, Rate, Time, Value)
% Volatility = blkimpv(Price, Strike, Rate, Time, Value, Limit, ...
% Tolerance, Class)
%
% Optional Inputs: Limit, Tolerance, Class.
%
% Inputs:
% Price - Current price of the underlying asset (i.e., a futures contract).
%
Table 90.2 The description of input variables used in the blsimpv function in MATLAB

Price                 Current price of the underlying asset (a futures contract)
Strike                Exercise price of the futures option
Rate                  Annualized, continuously compounded risk-free rate of return over the life of the option, expressed as a positive decimal number
Time                  Time to expiration of the option, expressed in years
Value                 Price of a European futures option from which the implied volatility of the underlying asset is derived
Limit (optional)      Positive scalar representing the upper bound of the implied volatility search interval. If Limit is empty or unspecified, the default = 10, or 1,000 % per annum
Tolerance (optional)  Implied volatility termination tolerance. A positive scalar. Default = 1e-6
Class (optional)      Option class (call or put) indicating the option type from which the implied volatility is derived. May be either a logical indicator or a cell array of characters. To specify call options, set Class = true or Class = {'call'}; to specify put options, set Class = false or Class = {'put'}. If Class is empty or unspecified, the default is a call option
% Strike - Strike (i.e., exercise) price of the futures option.
%
% Rate - Annualized continuously compounded risk-free rate of return
% over the life of the option, expressed as a positive decimal number.
%
% Time - Time to expiration of the option, expressed in years.
%
% Value - Price (i.e., value) of a European futures option from which
% the implied volatility is derived.
%
% Optional Inputs:
% Limit - Positive scalar representing the upper bound of the implied
% volatility search interval. If empty or missing, the default is 10,
% or 1000% per annum.
%
% Tolerance - Positive scalar implied volatility termination tolerance.
% If empty or missing, the default is 1e-6.
%
% Class - Option class (i.e., whether a call or put) indicating the
% option type from which the implied volatility is derived. This may
% be either a logical indicator or a cell array of characters. To
% specify call options, set Class = true or Class = {'call'}; to specify
% put options, set Class = false or Class = {'put'}. If empty or missing,
% the default is a call option.
%
% Output:
% Volatility - Implied volatility derived from European futures option
% prices, expressed as a decimal number. If no solution is found, a
% NaN (i.e., Not-a-Number) is returned.
%
% Example:
% Consider a European call futures option trading at $1.1166, with an
% exercise price of $20 that expires in 4 months. Assume the current
% underlying futures price is also $20 and that the riskfree rate is 9%
% per annum. Furthermore, assume we are interested in implied volatilities
% no greater than 0.5 (i.e., 50% per annum). Under these conditions, any
% of the following commands
%
% Volatility = blkimpv(20, 20, 0.09, 4/12, 1.1166, 0.5)
% Volatility = blkimpv(20, 20, 0.09, 4/12, 1.1166, 0.5, [], {'Call'})
% Volatility = blkimpv(20, 20, 0.09, 4/12, 1.1166, 0.5, [], true)
%
% return an implied volatility of 0.25, or 25%, per annum.
%
% Notes:
% (1) The input arguments Price, Strike, Rate, Time, Value, and Class may be
% scalars, vectors, or matrices. If scalars, then that value is used to
% compute the implied volatility from all options. If more than one of
% these inputs is a vector or matrix, then the dimensions of all
% non-scalar inputs must be the same.
% (2) Ensure that Rate and Time are expressed in consistent units of time.
%
% See also BLKPRICE, BLSPRICE, BLSIMPV.

% Copyright 1995-2003 The MathWorks, Inc.
% $Revision: 1.4.2.2 $ $Date: 2004/01/08 03:06:15 $

% References:
% Hull, J.C., "Options, Futures, and Other Derivatives", Prentice Hall,
% 5th edition, 2003, pp. 287-288.
% Black, F., "The Pricing of Commodity Contracts," Journal of Financial
% Economics, March 3, 1976, pp. 167-79.
%

% Implement Black's model for European futures options as a wrapper
% around a general Black-Scholes option model.
%
% In this context, Black's model is simply a special case of a
% Black-Scholes model in which the futures/forward contract is
% the underlying asset and the dividend yield = the riskfree rate.
%

if nargin < 5
    error('Finance:blkimpv:TooFewInputs', ...
          'Specify Price, Strike, Rate, Time, and Value.')
end

switch nargin
    case 5
        [limit, tol, optionClass] = deal([]);
    case 6
        [limit, tol, optionClass] = deal(varargin{1}, [], []);
    case 7
        [limit, tol, optionClass] = deal(varargin{1}, varargin{2}, []);
    case 8
        [limit, tol, optionClass] = deal(varargin{1:3});
    otherwise
        error('Finance:blkimpv:TooManyInputs', 'Too many inputs.')
end

try
    volatility = blsimpv(F, X, r, T, value, limit, r, tol, optionClass);
catch
    errorStruct = lasterror;
    errorStruct.identifier = strrep(errorStruct.identifier, 'blsimpv', 'blkimpv');
    errorStruct.message    = strrep(errorStruct.message, 'blsimpv', 'blkimpv');
    rethrow(errorStruct);
end
References

Ang, J. S., Jou, G. D., & Lai, T. Y. (2009). Alternative formulas to compute implied standard deviation. Review of Pacific Basin Financial Markets and Policies, 12, 159–176.
Ang, J. S., Jou, G. D., & Lai, T. Y. (2012). A comparison of formulas to compute implied standard deviation. In C. F. Lee, A. C. Lee, & J. C. Lee (Eds.), Encyclopedia of finance (2nd ed.). New York: Springer, forthcoming.
Beckers, S. (1981). Standard deviation implied in option prices as predictors of future stock price variability. Journal of Banking and Finance, 5, 363–381.
Black, F. (1975). Fact and fantasy in the use of options. Financial Analysts Journal, 31(4), 36–41, 61–72.
Black, F. (1976). The pricing of commodity contracts. Journal of Financial Economics, 3, 167–179.
Black, F., & Scholes, M. (1973). The pricing of options and corporate liabilities. Journal of Political Economy, 81, 637–654.
Brenner, M., & Subrahmanyam, M. G. (1988). A simple formula to compute the implied standard deviation. Financial Analysts Journal, 44, 80–83.
Brenner, M., Courtadon, G., & Subrahmanyam, M. (1985). Options on the spot and options on futures. Journal of Finance, 40(5), 1303–1317.
Chance, D. M. (1986). Empirical tests of the pricing of index call options. Advances in Futures and Options Research, 1, 141–166.
Chance, D. (1996). A generalized simple formula to compute the implied volatility. The Financial Review, 31, 859–867.
Corrado, C. J., & Miller, T. W. (1996). A note on a simple, accurate formula to compute implied standard deviations. Journal of Banking and Finance, 20, 595–603.
Corrado, C. J., & Miller, T. W. (2004). Tweaking implied volatility (Working paper). Auckland: Massey University-Albany.
Garman, M. S., & Klass, M. J. (1980). On the estimation of security price volatility from historical data. Journal of Business, 53, 67–78.
Hallerback, W. G. P. M. (2004). An improved estimator for Black-Scholes-Merton implied volatility. ERIM Report Series. Erasmus University.
Hull, J. C. (2011). Options, futures, and other derivatives (8th ed.). Englewood Cliffs: Prentice-Hall.
Lai, T. Y., Lee, C. F., & Tucker, A. L. (1992). An alternative method for obtaining the implied standard deviation. Journal of Financial Engineering, 1, 369–375.
Latane, H. A., & Rendleman, R. J. (1976). Standard deviation of stock price ratios implied by option premia. Journal of Finance, 31, 369–382.
Lee, C. F., & Wu, C. (1985). The impacts of kurtosis on risk stability. Financial Review, 20, 263–270.
Li, S. (2005). A new formula for computing implied volatility. Applied Mathematics and Computation, 170, 611–625.
Macbeth, J. D., & Merville, L. J. (1979). An empirical examination of the Black-Scholes call option pricing model. Journal of Finance, 34(5), 1173–1186.
Manaster, S., & Koehler, G. (1982). The calculation of implied variances from the Black–Scholes model: A note. The Journal of Finance, 37, 227–230.
Merton, R. C. (1973). The theory of rational option pricing. Bell Journal of Economics and Management Science, 4, 141–183.
Merton, R. C. (1976). Option pricing when underlying stock returns are discontinuous. Journal of Financial Economics, 3(1&2), 125–144.
Park, H. Y., & Sears, R. S. (1985). Changing volatility and the pricing of options on stock index futures. Journal of Financial Research, 8, 265–296.
Ramaswamy, K., & Sundaresan, S. M. (1985). The valuation of options on futures contracts. Journal of Finance, 40(5), 1319–1340.
Schwert, G. W. (1989). Why does stock market volatility change over time? Journal of Finance, 44, 1115–1154.
Whaley, R. (1982). Valuations of American call options on dividend paying stocks: Empirical test. Journal of Financial Economics, 10, 29–58.
Wolf, A. (1982). Fundamentals of commodity options on futures. Journal of Futures Markets, 2(4), 391–408.
Measuring Credit Risk in a Factor Copula Model
91
Jow-Ran Chang and An-Chi Chen
Contents
91.1 Introduction ................................................................. 2496
91.2 Methodology ................................................................ 2497
  91.2.1 CreditMetrics ......................................................... 2497
  91.2.2 Copula Function ...................................................... 2501
  91.2.3 Factor Copula Model .................................................. 2504
91.3 Experimental Results ........................................................ 2507
  91.3.1 Data .................................................................. 2507
  91.3.2 Simulation ............................................................ 2507
  91.3.3 Discussion ............................................................ 2509
91.4 Conclusion .................................................................. 2516
References ....................................................................... 2517
Abstract
In this chapter, we provide a new approach to estimate future credit risk on a target portfolio based on the framework of CreditMetrics™ by J.P. Morgan. However, we adopt the perspective of the factor copula and bring the principal component analysis concept into the factor structure to construct a more appropriate dependence structure among credits. In order to examine the proposed method, we use real market data instead of simulated data. We also develop a tool for risk analysis which is convenient to use, especially for banking loan businesses. The results show that assuming normally distributed dependence structures will indeed lead to risk
J.-R. Chang (*)
National Tsing Hua University, Hsinchu City, Taiwan
e-mail: [email protected]

A.-C. Chen
KGI Securities Co. Ltd., Taipei, Taiwan
e-mail: [email protected]
underestimation. On the other hand, our proposed method captures the features of risk better and shows the fat-tail effects conspicuously even though the factors are assumed to be normally distributed.

Keywords
Credit risk • Credit VaR • Default correlation • Copula • Factor copula • Principal component analysis
91.1 Introduction
Credit risk generally refers to the risk that a counterparty fails to fulfill its contractual obligations. The history of financial institutions has shown that many bank failures were due to credit risk. For integrity and regulatory purposes, financial institutions attempt to quantify credit risk as well as market risk. Credit risk has great influence on all financial institutions as long as they have contractual agreements. The evolution of credit risk measurement has progressed for a long time. Many credit risk measurement models have been published, such as CreditMetrics by J.P. Morgan and CreditRisk+ by Credit Suisse. On the other side, the New Basel Accords (Basel II Accords), which are recommendations on banking laws and regulations, construct a standard to promote greater stability in the financial system. The Basel II Accords allow banks to estimate credit risk by using either a standardized model or an internal model approach, based on their own risk management system. The former approach is based on external credit ratings provided by external credit assessment institutions; it describes the weights, which fall into five categories for banks and sovereigns and four categories for corporations. The latter approach allows banks to use their internal estimation of creditworthiness, subject to regulatory approval. How should a bank build a credit risk measurement model after it has constructed internal customer credit ratings? How should it estimate default probabilities and default correlations? This chapter attempts to implement a credit risk model tool which links to the internal banking database and produces the relevant reports automatically. The developed model facilitates banks in boosting their risk management capability. The dispersion of the credit losses, however, critically depends on the correlations between default events. Several factors such as industry sectors and corporation sizes will affect the correlation between every two default events. The CreditMetrics™ model (1997) issued by J.P. Morgan proposed a binomial normal distribution to describe the correlations (dependence structures). In order to describe the dependence structure between two default events in more detail, we adopt a copula function instead of the binomial normal distribution to express the dependence structure. When estimating credit portfolio losses, both the individual default rates of each firm and the joint default probabilities across all firms need to be considered. These features are similar to the valuation process of a collateralized debt obligation (CDO). A CDO is a way of creating securities with widely different risk characteristics from a portfolio of debt instruments. The estimating process is almost the
same between our goal and CDO pricing; we focus on how to estimate the risks. Most of the CDO pricing literature adopts copula functions to capture the default correlations. Li (2000) extended Sklar's (1959) result, showing that a copula function can be applied to solve the financial problem of default correlation. Li (2000) pointed out that if the dependence structure were assumed to be normally distributed through the binomial normal probability density function, the joint transformation probability would be consistent with the result from using a normal copula function. But this assumption is too strong: it has been found that most financial data exhibit skewness or fat tails. Bouye et al. (2000) and Embrechts et al. (1999) pointed out that the estimated VaR would be understated if the dependence structure were described by a normal copula rather than by the actual data. Hull and White (2004) combined factor analysis and copula functions into a factor copula concept to investigate reasonable CDO spreads. Finding a suitable correlation structure to describe the dependence between every two default events, while reducing the computational complexity, is our main objective. Bielecki et al. (2012) apply the Markov copula approach to model joint default between the counterparty and the reference name in a CDS contract. This chapter aims to:
1. Construct an efficient model to describe the dependence structure
2. Use this constructed model to analyze overall credit, marginal, and industrial risks
3. Build up an automatic tool for a banking system to analyze its internal credit risks
91.2 Methodology
91.2.1 CreditMetrics
This chapter adopts the main framework of CreditMetrics (Gupton et al. 1997) and calculates credit risks by using real commercial bank loans. The calculation dataset for this chapter is derived from a certain commercial bank in Taiwan. Although some conditions may differ from the situations proposed by CreditMetrics, the CreditMetrics calculation process can still be appropriately applied in this chapter. For instance, the number of rating degrees in the S&P rating category adopted by CreditMetrics is 7, i.e., AAA to C, but in this loan dataset there are 9 instead. The following is an introduction to the CreditMetrics model framework. The model can be roughly divided into three components, i.e., value at risk due to credit, exposures, and correlations, respectively, as shown in Fig. 91.1. In this section, these three components and how this model values credit risk will be briefly introduced. Further details can be found in the CreditMetrics technical document.
91.2.1.1 Value at Risk Due to Credit
The process of valuing value at risk due to credit can be decomposed into three steps. For simplicity, we assume there is only one stand-alone instrument, which is
Fig. 91.1 Structure of CreditMetrics model (three components: exposures, correlations, and value at risk due to credit)

Table 91.1 One-year transition matrix

Initial rating   Rating at year-end (%)
                 AAA    AA     A      BBB    BB     B      CCC    D
AAA              90.81  8.33   0.68   0.06   0.12   0      0      0
AA               0.70   90.65  7.79   0.64   0.06   0.14   0.02   0
A                0.09   2.27   91.05  5.52   0.74   0.26   0.01   0.06
BBB              0.02   0.33   5.95   86.93  5.30   1.17   0.12   0.18
BB               0.03   0.14   0.67   7.73   80.53  8.84   1.00   1.06
B                0      0.11   0.24   0.91   6.48   83.46  4.07   5.20
CCC              0.22   0      0.22   1.30   2.38   11.24  64.86  19.79

Source: J.P. Morgan's CreditMetrics – technical document (1997)
a corporation bond. (The bond's properties are similar to a loan's, as both receive a certain amount of cash flow every period and the principal at maturity.) This bond has a 5-year maturity and pays an annual coupon at the rate of 5 %; it is used to illustrate the calculation process where necessary. Some modifications to fit real situations will be considered later. In Step 1, CreditMetrics assumes that all risks of a portfolio are due to credit rating changes, whether defaulting or rating migration. It is important to estimate not only the likelihood of default but also the chance of migrating to any possible credit quality state at the risk horizon. Therefore, a standard system that evaluates rating changes over a certain time horizon is necessary. This information is represented more concisely in a transition matrix. A transition matrix can be calculated by observing the historical pattern of rating changes and defaults. Transition matrices are published by the S&P and Moody's rating agencies or calculated by banks' private internal rating systems. Besides, the transition matrix should be estimated for the same time interval (risk horizon), which can be defined by user demand, usually a 1-year period. Table 91.1 is an example of a 1-year transition matrix. In the transition matrix table, AAA is the highest credit rating and D is the lowest; D also represents that default occurs. According to the above transition matrix table, a company which is at the AA level at the beginning of the year has a probability of 0.64 % of going down to the BBB level at the end of the year. By the same
Table 91.2 Recovery rates by seniority class

                                 Recovery rate of Taiwan debt business research using TEJ data
Class                            Mean (%)   Standard deviation (%)
Loan               Secured       55.38      35.26
                   Unsecured     33.27      30.29
Corporation bond   Secured       67.99      26.13
                   Unsecured     36.15      37.17

Source: Da-Bai Shen et al. (2003), Research of Taiwan recovery rate with TEJ Data Bank
way, a company which is at the CCC level at the beginning of the year has a probability of 2.38 % of going up to the BB level at the end of the year. In this chapter, the transition matrix is treated as external data.1 In Step 1, we describe the likelihood of migration to any possible quality state (AAA to CCC) at the risk horizon. Step 2 is valuation: the value at the risk horizon must be determined. According to the different states, the valuation falls into two categories. First, in the event of a default, the recovery rate of the relevant seniority class is needed. Second, in the event of an up(down)grade, the change in credit spread that results from the rating migration must be estimated. In the default category, Table 91.2 shows the recovery rates by seniority class which this chapter adopts to revalue instruments. For instance, if the holding bond (5-year maturity paying an annual coupon at the rate of 5 %) is unsecured and default occurs, the recovery value will be estimated using the mean value, which is 36.15 %. In the rating migration category, revaluation determines the cash flows that result from holding the instrument (the corporation bond position). Assuming a face value of $100, the bond pays $5 (an annual coupon at the rate of 5 %) at the end of each of the next 4 years. The value V of the bond, assuming the bond upgrades to level A, is calculated by the formula below:
$$V = 5 + \frac{5}{1 + 3.72\%} + \frac{5}{(1 + 4.32\%)^2} + \frac{5}{(1 + 4.93\%)^3} + \frac{105}{(1 + 5.32\%)^4} = 108.66$$
The discount rates in the above formula come from the forward zero curves shown in Table 91.3, which is derived from the CreditMetrics technical document. This chapter does not focus on how to calculate forward zero curves; they are also treated as external input data. Step 3 is to estimate the volatility of value due to credit quality changes for this stand-alone exposure (level A, corporation bond). From Step 1 and Step 2, the likelihood of all possible outcomes and the distribution of values within each outcome are known. CreditMetrics used two measures to calculate the risk estimate:
1 We do not focus on how to model the probability of default (PD) but focus on how to establish the dependence structure. The 1-year transition matrix is a necessary input to our model.
Table 91.3 One-year forward zero curves by credit rating category (%)

Category   Year 1   Year 2   Year 3   Year 4
AAA        3.60     4.17     4.73     5.12
AA         3.65     4.22     4.78     5.17
A          3.72     4.32     4.93     5.32
BBB        4.10     4.67     5.25     5.63
BB         5.55     6.02     6.78     7.27
B          6.05     7.02     8.03     8.52
CCC        15.05    15.02    14.03    13.52

Source: J.P. Morgan's CreditMetrics – technical document (1997)
One is the standard deviation, and the other is the percentile level. Besides these two measures, this chapter also computes the marginal VaR, which denotes the incremental VaR due to adding one new instrument to the portfolio.
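As a minimal sketch of these risk measures (with a placeholder vector of simulated horizon values rather than the chapter's data; prctile requires the Statistics Toolbox):

% Risk measures from simulated year-end portfolio values (placeholder data).
pv    = 100 + 2*randn(10000,1) - 20*(rand(10000,1) < 0.01);  % hypothetical values with a default tail
sdev  = std(pv);                       % standard deviation of value due to credit quality changes
var99 = mean(pv) - prctile(pv, 1);     % 99% credit VaR: loss at the 1st percentile relative to the mean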
91.2.1.2 Exposures
As discussed above, the instrument is limited to corporation bonds. CreditMetrics allows the following generic exposure types:
1. Non-interest-bearing receivables
2. Bonds and loans
3. Commitments to lend
4. Financial letters of credit
5. Market-driven instruments (swaps, forwards, etc.)
The exposure type this chapter aims at is loans. The credit risk calculation process for loans is similar to that for bonds in the previous example. The only difference is that loans do not pay coupons; instead, loans receive interest. But the CreditMetrics model can definitely fit the goal of this chapter, which is to estimate credit risks in the banking loan business.

91.2.1.3 Correlations
In most circumstances, there is usually more than one instrument in a target portfolio. Now, multiple exposures are taken into consideration. In order to extend the methodology to a portfolio of multiple exposures, estimating the contribution to risk brought by the effect of nonzero credit quality correlations is necessary. Thus, the estimation of the joint likelihood of credit quality co-movement is the next problem to be resolved. There are many academic papers which address the problem of estimating correlations within a credit portfolio. For example, Gollinger and Morgan (1993) used time series of default likelihoods to correlate default likelihood, and Stevenson and Fadil (1995) correlated the default experience across 33 industry groups. On the other hand, CreditMetrics proposed a method to estimate default correlation based on several assumptions:
(A) A firm's asset value is the process which drives its credit rating changes and defaults.
(B) The asset returns are normally distributed.
(C) Two asset returns are correlated and bivariate normally distributed, and multiple asset returns are correlated and multivariate normally distributed.
Fig. 91.2 Distribution of asset returns with rating change thresholds (horizontal axis: asset return over one year; thresholds ZCCC, ZB, ZBBB, ZA, ZAA, ZAAA separate the regions "firm defaults," "downgrade to B," "firm remains BB rated," and "upgrade to BBB")
According to assumption A, the individual thresholds of one firm can be calculated. For a two-exposure portfolio whose credit ratings are level B and level AA and whose standard deviations of returns are $\sigma$ and $\sigma'$, respectively, it only remains to specify the correlation $\rho$ between the two asset returns. The covariance matrix for the bivariate normal distribution is
$$\Sigma = \begin{pmatrix} \sigma^2 & \rho\sigma\sigma' \\ \rho\sigma\sigma' & \sigma'^2 \end{pmatrix}.$$
Then the joint probability of co-movement that the two firms stay in given credit ratings can be described by the following formula:
$$\Pr\bigl(Z_{BB} < R_1 < Z_B,\; Z'_{AAA} < R_2 < Z'_{AA}\bigr) = \int_{Z_{BB}}^{Z_B}\int_{Z'_{AAA}}^{Z'_{AA}} f(r, r'; \Sigma)\, dr'\, dr,$$
where $Z_{BB}$, $Z_B$, $Z'_{AAA}$, $Z'_{AA}$ are the thresholds. Figure 91.2 illustrates this probability calculation. These three assumptions regarding the estimation of the default correlation are too strong, especially the assumption that multiple asset returns are multivariate normally distributed. In the next section, a better way of using copulas to examine the default correlation is proposed.
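As a sketch of this calculation, the joint rating co-movement probability is a bivariate normal rectangle probability; the thresholds and correlation below are assumed values, and mvncdf is in the Statistics Toolbox.

% Joint probability that two standardized asset returns fall between their rating thresholds.
rho   = 0.20;                          % assumed asset-return correlation
Sigma = [1 rho; rho 1];                % standardized asset returns
lo    = [-1.23  1.37];                 % assumed lower thresholds for obligor 1 and obligor 2
hi    = [ 1.37  3.12];                 % assumed upper thresholds
p     = mvncdf(lo, hi, [0 0], Sigma)   % P(lo1 < R1 < hi1, lo2 < R2 < hi2)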
91.2.2 Copula Function
Consider a portfolio consisting of m credits. The marginal distribution of each individual credit risk (default occurrence) can be constructed by using either the historical approach or the market-implicit approach (a credit curve derived from market information). But the question is: how do we describe the joint distribution or
co-movement between these risks (default correlation)? In a sense, every joint distribution function for a vector of risk factors implicitly contains both a description of the marginal behavior of the individual risk factors and a description of their dependence structure. The simplest assumption about the dependence structure is mutual independence among the credit risks. However, the independence assumption for the credit risks is obviously not realistic. Undoubtedly, the default rate for a group of credits tends to be higher when the economy is in a recession and lower in a boom. This implies that each credit is subject to the same factors from the macroeconomic environment and that there exists some form of dependence among the credits. The copula approach provides a way of isolating the description of the dependence structure. That is, the copula provides a solution for specifying a joint distribution of risks with given marginal distributions. Of course, this problem has no unique solution. There are many different techniques in statistics which can specify a joint distribution with given marginal distributions and a correlation structure. In the following section, the copula function is briefly introduced.
91.2.2.1 Copula Function
An m-dimensional copula is a distribution function on $[0,1]^m$ with standard uniform marginal distributions:
$$C(\mathbf{u}) = C(u_1, u_2, \ldots, u_m). \qquad (91.1)$$
C is called a copula function. The copula function C is a mapping of the form $C: [0,1]^m \to [0,1]$, i.e., a mapping of the m-dimensional unit cube $[0,1]^m$ such that every marginal distribution is uniform on the interval [0, 1]. The following two properties must hold:
1. $C(u_1, u_2, \ldots, u_m, \Sigma)$ is increasing in each component $u_i$.
2. $C(1, \ldots, 1, u_i, 1, \ldots, 1, \Sigma) = u_i$ for all $i \in \{1, \ldots, m\}$, $u_i \in [0, 1]$.
91.2.2.2 Sklar's Theorem
Sklar (1959) underlined the applications of the copula. Let F(·) be an m-dimensional joint distribution function with marginal distributions $F_1, F_2, \ldots, F_m$. There exists a copula $C: [0,1]^m \to [0,1]$ such that
$$F(x_1, x_2, \ldots, x_m) = C\bigl(F_1(x_1), F_2(x_2), \ldots, F_m(x_m)\bigr). \qquad (91.2)$$
If the margins are continuous, then C is unique. For any $x_1, \ldots, x_m$ in $\Re = [-\infty, \infty]$ and X with joint distribution function F,
$$F(x_1, x_2, \ldots, x_m) = \Pr\bigl[F_1(X_1) \le F_1(x_1), F_2(X_2) \le F_2(x_2), \ldots, F_m(X_m) \le F_m(x_m)\bigr]. \qquad (91.3)$$
According to Eq. 91.2, the distribution function of $(F_1(X_1), F_2(X_2), \ldots, F_m(X_m))$ is a copula. Let $x_i = F_i^{-1}(u_i)$; then
$$C(u_1, u_2, \ldots, u_m) = F\bigl(F_1^{-1}(u_1), F_2^{-1}(u_2), \ldots, F_m^{-1}(u_m)\bigr). \qquad (91.4)$$
This gives an explicit representation of C in terms of F and its margins.
91.2.2.3 Copula of F
Li (2000) used the copula function conversely. The copula function links univariate marginals to their full multivariate distribution. For m uniform random variables $U_1, U_2, \ldots, U_m$, the joint distribution function C is defined as
$$C(u_1, u_2, \ldots, u_m; \Sigma) = \Pr[U_1 \le u_1, U_2 \le u_2, \ldots, U_m \le u_m], \qquad (91.5)$$
where $\Sigma$ is the correlation matrix of $U_1, U_2, \ldots, U_m$. For given univariate marginal distribution functions $F_1(x_1), F_2(x_2), \ldots, F_m(x_m)$ and $x_i = F_i^{-1}(u_i)$, the joint distribution function F can be described as follows:
$$F(x_1, x_2, \ldots, x_m) = C\bigl(F_1(x_1), F_2(x_2), \ldots, F_m(x_m), \Sigma\bigr). \qquad (91.6)$$
The joint distribution function F is thus defined by using a copula. The property can easily be shown as follows:
$$\begin{aligned} C\bigl(F_1(x_1), F_2(x_2), \ldots, F_m(x_m), \Sigma\bigr) &= \Pr\bigl[U_1 \le F_1(x_1), U_2 \le F_2(x_2), \ldots, U_m \le F_m(x_m)\bigr] \\ &= \Pr\bigl[F_1^{-1}(U_1) \le x_1, F_2^{-1}(U_2) \le x_2, \ldots, F_m^{-1}(U_m) \le x_m\bigr] \\ &= \Pr[X_1 \le x_1, X_2 \le x_2, \ldots, X_m \le x_m] \\ &= F(x_1, x_2, \ldots, x_m). \end{aligned}$$
The marginal distribution of $X_i$ is
$$\begin{aligned} C\bigl(F_1(+\infty), F_2(+\infty), \ldots, F_i(x_i), \ldots, F_m(+\infty), \Sigma\bigr) &= \Pr[X_1 \le +\infty, X_2 \le +\infty, \ldots, X_i \le x_i, \ldots, X_m \le +\infty] \\ &= \Pr[X_i \le x_i] \\ &= F_i(x_i). \end{aligned} \qquad (91.7)$$
Li showed that, with given marginal functions, we can construct the joint distribution through some copula accordingly. But what kind of copula should be chosen to correspond to the realistic joint distribution of a portfolio? For example, CreditMetrics chose the Gaussian copula to construct the multivariate distribution. By Eq. 91.6, this Gaussian copula is given by
$$C^{Ga}(\mathbf{u}; \Sigma) = \Pr\bigl(\Phi(X_1) \le u_1, \Phi(X_2) \le u_2, \ldots, \Phi(X_m) \le u_m; \Sigma\bigr) = \Phi_\Sigma\bigl(\Phi^{-1}(u_1), \Phi^{-1}(u_2), \ldots, \Phi^{-1}(u_m)\bigr), \qquad (91.8)$$
where $\Phi$ denotes the standard univariate normal distribution function, $\Phi^{-1}$ denotes the inverse of the univariate normal distribution function, and $\Phi_\Sigma$ denotes the multivariate normal distribution function. In order to describe the construction process easily, we discuss only two random variables $u_1$ and $u_2$ to demonstrate the Gaussian copula:
$$C^{Ga}(u_1, u_2; \rho) = \int_{-\infty}^{\Phi^{-1}(u_1)}\int_{-\infty}^{\Phi^{-1}(u_2)} \frac{1}{2\pi\sqrt{1-\rho^2}}\, \exp\!\left\{-\frac{v_1^2 - 2\rho v_1 v_2 + v_2^2}{2(1-\rho^2)}\right\} dv_2\, dv_1, \qquad (91.9)$$
where $\rho$ denotes the correlation of $u_1$ and $u_2$. Equation 91.9 is also equivalent to the bivariate normal copula, which can be written as follows:
$$C(u_1, u_2, \rho) = \Phi_2\bigl(\Phi^{-1}(u_1), \Phi^{-1}(u_2)\bigr). \qquad (91.10)$$
Thus, with the given individual distributions (e.g., credit migration over a 1-year horizon) of each credit asset within a portfolio, we can obtain the joint distribution and default correlation of this portfolio through a copula function. In our methodology, we do not use the copula function directly. In the next section, we bring in the concept of the factor copula as a further improvement in forming the default correlation. Using a factor copula has two advantages. One is to avoid constructing a high-dimensional correlation matrix: if there are many instruments (N > 1,000) in our portfolio, we need to store an N-by-N correlation matrix, so scalability is one problem. The other advantage is the faster computation time because of the lower dimension.
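To make Eqs. 91.8–91.10 concrete, the bivariate normal copula value and a simulation from it can be obtained as follows; the correlation and marginal levels are assumed inputs, and the functions are from the Statistics Toolbox.

% Bivariate normal (Gaussian) copula of Eq. 91.10 and simulated dependent uniforms.
rho = 0.3; u1 = 0.9; u2 = 0.7;                                   % assumed correlation and marginal levels
C   = mvncdf([norminv(u1) norminv(u2)], [0 0], [1 rho; rho 1])   % C(u1, u2; rho) as in Eq. 91.10
z   = mvnrnd([0 0], [1 rho; rho 1], 10000);                      % correlated standard normals
u   = normcdf(z);                                                % uniforms whose dependence is the Gaussian copula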
91.2.3 Factor Copula Model
In this section, copula models that have a factor structure are introduced. The model is called a factor copula because it describes the dependence structure between random variables not from the perspective of a particular copula form, such as the Gaussian copula, but from a factor model. Factor copula models have been broadly used to price collateralized debt obligations (CDO) and credit default swaps (CDS). The main idea of the factor copula model is that, conditional on a certain macro environment, credit default events are independent of each other, and the main causes that affect default events come from underlying market economic conditions. This model provides another way to avoid dealing with the multivariate normal (high-dimensional) simulation problem. Continuing the above example, a portfolio consists of m credits. First we consider the simplest example, which contains only one factor, and define $V_i$ as the asset value of the ith credit under the single-factor copula model. Then this ith credit asset value can be expressed by one factor M (the mutual factor), chosen from macroeconomic factors, and one error term $\epsilon_i$:
$$V_i = r_i M + \sqrt{1 - r_i^2}\,\epsilon_i, \qquad (91.11)$$
where $r_i$ is the weight of M, and the mutual factor M is independent of $\epsilon_i$. Let the marginal distributions of $V_1, V_2, \ldots, V_m$ be $F_i$, $i = 1, 2, \ldots, m$. Then the m-dimensional copula function can be written as
$$C(u_1, u_2, \ldots, u_m) = F\bigl(F_1^{-1}(u_1), F_2^{-1}(u_2), \ldots, F_m^{-1}(u_m)\bigr) = \Pr\bigl[V_1 \le F_1^{-1}(u_1), V_2 \le F_2^{-1}(u_2), \ldots, V_m \le F_m^{-1}(u_m)\bigr], \qquad (91.12)$$
where F is the joint cumulative distribution function of $V_1, V_2, \ldots, V_m$. Since M and the $\epsilon_i$ are independent of each other, by the iterated expectation theorem Eq. 91.12 can be written as
$$\begin{aligned} C(u_1, u_2, \ldots, u_m) &= E\Bigl\{\Pr\bigl[V_1 \le F_1^{-1}(u_1), V_2 \le F_2^{-1}(u_2), \ldots, V_m \le F_m^{-1}(u_m)\bigr] \,\Big|\, M\Bigr\} \\ &= E\left\{\prod_{i=1}^{m}\Pr\Bigl[r_i M + \sqrt{1 - r_i^2}\,\epsilon_i \le F_i^{-1}(u_i) \,\Big|\, M\Bigr]\right\} \\ &= E\left\{\prod_{i=1}^{m} F_{\epsilon,i}\!\left(\frac{F_i^{-1}(u_i) - r_i M}{\sqrt{1 - r_i^2}}\right) \,\Bigg|\, M\right\} \\ &= \int\prod_{i=1}^{m} F_{\epsilon,i}\!\left(\frac{F_i^{-1}(u_i) - r_i M}{\sqrt{1 - r_i^2}}\right) g(M)\, dM. \qquad (91.13) \end{aligned}$$
Using the above formula, the m-dimensional copula function can be derived. Moreover, according to Eq. 91.13, the joint cumulative distribution F can also be derived:
$$F(t_1, t_2, \ldots, t_m) = \int\prod_{i=1}^{m} F_{\epsilon,i}\!\left(\frac{F_i^{-1}\bigl(F_{T,i}(t_i)\bigr) - r_i M}{\sqrt{1 - r_i^2}}\right) g(M)\, dM. \qquad (91.14)$$
Let $F_i(t_i) = \Pr(T_i \le t_i)$ represent the default probability of credit i (default occurs before time $t_i$), where $F_i$ is the marginal cumulative distribution. We note here that CDX pricing cares about when the default time $T_i$ occurs. Under the same environment (systematic factor M) (Andersen and Sidenius 2004), the default probability $\Pr(T_i \le t_i)$ will be equal to $\Pr(V_i \le c_i)$, the probability that the asset value $V_i$ is below its threshold $c_i$. Then the joint default probability of these m credits can be described as follows:
$$F(c_1, c_2, \ldots, c_m) = \Pr(V_1 \le c_1, V_2 \le c_2, \ldots, V_m \le c_m).$$
Now, we bring in the concept of principal component analysis (PCA). PCA is used to reduce high-dimensional or multivariable problems. If someone would like to explain one thing (or some movement of random variables), he/she has to
Fig. 91.3 Architecture of proposed model
gather interpreting variables related to those variable movements or their correlations. Once the interpreting variables become too numerous or complicated, it becomes harder to explain those random variables, and complex problems arise. Principal component analysis provides a way to extract approximate interpreting variables that cover the maximum variance of the variables. Those representative variables may not be "real" variables; virtual variables are allowed, depending on their explanatory meaning. We do not discuss the PCA calculation process here; further details can be found in Jorion (2000). Based on the factor model, the asset value of each of the m credits with covariance matrix $\Sigma$ can be described as follows:
$$V_i = r_{i1} y_1 + r_{i2} y_2 + \cdots + r_{im} y_m + \epsilon_i, \qquad (91.15)$$
where yi are common factors between these m credits and rij is the weight (factor loading) of each factor. The factors are independent of each other. The question is: how to determinate those yi factors and their loading? We use PCA to derive the factor loading. Factor loadings are based on listed price of those companies in the portfolio to calculate their dependence structure. The experimental results will be shown in the next section. We note here that the dependence structure among assets have been absorbed into factor loadings (Fig. 91.3).
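To make this step concrete, the following Python sketch (ours, not the chapter's original MATLAB/MySQL tool) illustrates one way to extract the loadings $r_{ij}$ of Eq. 91.15 from an equity-return correlation matrix by eigendecomposition, retaining enough principal components to reach a chosen explained-variance level. The simulated return data and the 80 % cutoff are illustrative assumptions only.

```python
import numpy as np

def pca_factor_loadings(returns, explained=0.80):
    """Extract factor loadings r_ij of Eq. 91.15 from equity returns.

    returns   : (T, m) array of (e.g., weekly) equity returns, one column per obligor.
    explained : fraction of total variance the retained factors must cover.
    """
    corr = np.corrcoef(returns, rowvar=False)            # m x m correlation matrix
    eigval, eigvec = np.linalg.eigh(corr)                 # eigenvalues in ascending order
    order = np.argsort(eigval)[::-1]                      # sort descending
    eigval, eigvec = eigval[order], eigvec[:, order]
    k = np.searchsorted(np.cumsum(eigval) / eigval.sum(), explained) + 1
    # Scale so that sum_j r_ij^2 is the variance explained for credit i;
    # the residual variance is assigned to the idiosyncratic term eps_i.
    loadings = eigvec[:, :k] * np.sqrt(eigval[:k])        # m x k matrix of r_ij
    idio_var = 1.0 - (loadings ** 2).sum(axis=1)
    return loadings, np.sqrt(np.clip(idio_var, 0.0, None))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # hypothetical data: 150 weeks of returns for 40 listed obligors
    common = rng.standard_normal((150, 3))
    noise = rng.standard_normal((150, 40))
    returns = common @ rng.uniform(0.2, 0.6, (3, 40)) + 0.5 * noise
    r, idio = pca_factor_loadings(returns, explained=0.80)
    print("number of retained factors:", r.shape[1])
```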
91.3 Experimental Results
The purpose of this chapter is to estimate credit risk by using principal component analysis to construct the dependence structure without imposing any assumption on the functional form of the copula. In other words, the data themselves are used to describe the dependence structure.
91.3.1 Data
In order to analyze credit VaR empirically with the proposed method, this investigation adopts internal loan account data, loan application data, and customer information data from a commercial bank operating in the Taiwan market. For data reliability, all sampled obligors are companies listed on the Taiwan stock market rather than hypothetical ones; that is, the portfolio pool contains only loans to listed companies and excludes loans to unlisted companies. Given the period covered by these data, two future portfolio values can be estimated: the values at the end of 2003 and 2004, respectively. All required data are downloaded automatically from the database system to the workspace for computation. Before going into the details of the experiments, the relevant data and the experimental environment are introduced as follows.
91.3.1.1 Requirements of Data Input
1. Commercial bank internal data: The internal data contain nearly 40,000 entries of customer data, 50,000 entries of loan data, and 3,000 entries of application data, including maturity dates, outstanding amounts, credit ratings, lending interest rates, market type, etc., up to December 31, 2004.
2. One-year transition matrix: The data were extracted from Yang (2005), who used the same commercial bank's historical data to estimate a transition matrix that obeys a Markov chain (Table 91.4).
3. Zero forward rates: Following Yang (2005), the term structure of credit spreads is estimated from the computed transition matrix; the corresponding risk-free interest rates are then added, and zero forward rates are calculated by discounting zero spot rates (Table 91.5).
4. Listed share prices on the exchange and over-the-counter markets: We collected weekly listed share prices of all companies on the exchange and over-the-counter markets in Taiwan from January 1, 2000, to December 31, 2003, through the Taiwan Economic Journal Data Bank (TEJ).
91.3.2 Simulation
In this section, the simulation procedure for analyzing banking VaR is briefly introduced. There are two main experimental methods, A and B. Method A is the approach proposed in this chapter, which uses factor analysis to explain the dependence
Table 91.4 One-year transition matrix (commercial data)
                         Rating at year-end (%)
Initial rating     1       2       3       4       5       6       7       8       9       D
1               100.00    0       0       0       0       0       0       0       0       0
2                 3.53   70.81    8.90    5.75    6.29    1.39    0.19    2.74    0.06    0.34
3                10.76    0.03   72.24    0.24   10.39    5.78    0.31    0.09    0.06    0.10
4                 1.80    1.36    5.85   57.13   18.75   11.31    2.45    0.32    0.70    0.33
5                 0.14    0.44    1.58    2.39   75.47   16.97    1.49    0.61    0.49    0.42
6                 0.09    0.06    0.94    2.44   13.66   70.58    6.95    1.68    0.76    2.81
7                 0.05    0.05    0.27    3.72    3.75   14.49   66.39    8.05    0.12    3.11
8                 0.01    0       0.03    0.45    0.21    1.34    2.00   77.10    0.44   18.42
9                 0       0       0.02    0.09    1.46    1.80    1.36    3.08   70.06   22.13
D                 0       0       0       0       0       0       0       0       0     100.00
Source: Yang (2005)

Table 91.5 One-year forward zero curves by credit rating category (commercial data)
                                          Yield (%)
Credit rating   1 year  2 year  3 year  4 year  5 year  6 year  7 year  8 year  9 year
1                1.69    2.08    2.15    2.25    2.41    2.53    2.58    2.62    2.70
2                2.57    2.88    3.19    3.44    3.72    3.94    4.07    4.18    4.33
3                2.02    2.41    2.63    2.85    3.11    3.32    3.45    3.56    3.71
4                2.60    2.93    3.28    3.59    3.91    4.17    4.34    4.48    4.65
5                2.79    3.10    3.48    3.81    4.15    4.42    4.60    4.75    4.93
6                4.61    5.02    5.16    5.31    5.51    5.67    5.76    5.83    5.93
7                6.03    6.16    6.56    6.83    7.07    7.23    7.28    7.31    7.36
8               22.92   23.27   22.54   21.91   21.36   20.78   20.15   19.52   18.94
9               27.51   27.82   26.40   25.17   24.09   23.03   21.97   20.97   20.08
Source: Yang (2005)
structure and to simulate the distribution of future values. Method B is the contrast method, the one traditionally and popularly used in most applications such as CreditMetrics; we call it the multi-normal (normal/Gaussian copula) simulation method. Both methods need three input data tables: the credit transition matrix, the forward zero curves, and the share prices of each corporation in the portfolio pool. The detail of the normal copula procedure is not repeated here; readers can refer to the CreditMetrics technical document (Gupton et al. 1997). The process of the factor analysis method is as follows:
1. Extract from the database system the data entries that have not matured as of the given date, including credit ratings, outstanding amounts, and interest rates.
2. According to the input transition matrix, calculate standardized thresholds for each credit rating.
3. Use the share prices of the corporations in the portfolio pool to calculate equity correlations.
4. Use principal component analysis to obtain the factor loadings for all factors and, under the assumption that these factors obey some distribution (e.g., the standard normal distribution), simulate the future asset values and the corresponding future credit ratings.
5. According to the possible credit ratings, discount the outstanding amounts by their own forward zero curves to evaluate the distribution of future values.
6. Display the analysis results.
A compact sketch of the Monte Carlo core of steps 2-5 is given below.
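The following Python sketch illustrates the Monte Carlo core of steps 2-5 (threshold calculation, factor simulation, rating migration, and revaluation). It is a simplified illustration under hypothetical inputs: the three-rating transition matrix, discount rates, loan amounts, and one-factor loadings below are placeholders, not the commercial-bank data or the MATLAB/MySQL tool used in this chapter.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)

# --- hypothetical inputs (placeholders for the bank's internal data) ------------------
trans = np.array([[0.90, 0.08, 0.02],        # 3-rating one-year transition matrix;
                  [0.10, 0.80, 0.10],        # rows: current rating, cols: year-end rating,
                  [0.05, 0.15, 0.80]])       # the last rating plays the role of "default"
zero_rate = np.array([0.02, 0.05, 0.40])     # one-year discount rate per year-end rating
amount = np.array([100.0, 80.0, 50.0, 70.0]) # outstanding loan amounts
rating = np.array([0, 0, 1, 1])              # current rating index of each obligor
loadings = np.array([[0.6], [0.5], [0.4], [0.3]])   # r_ij from PCA (one factor here)

# Step 2: standardized thresholds; cum[i, j] = P(year-end rating >= j | current rating i)
cum = np.cumsum(trans[:, ::-1], axis=1)[:, ::-1]
thresholds = norm.ppf(np.clip(cum, 1e-12, 1.0 - 1e-12))

def simulate(n_sims=100_000, factor_dist="normal", df=5):
    idio = np.sqrt(1.0 - (loadings ** 2).sum(axis=1))         # idiosyncratic weights
    k = loadings.shape[1]
    if factor_dist == "normal":
        y = rng.standard_normal((n_sims, k))
    else:
        y = rng.standard_t(df, size=(n_sims, k))               # Student-t factors
    eps = rng.standard_normal((n_sims, len(amount)))
    v = y @ loadings.T + idio * eps                            # asset values, Eq. 91.15
    # Step 4: lower asset value means worse year-end rating; the new rating index is the
    # number of thresholds of the obligor's current-rating row lying above V, minus one.
    c = thresholds[rating]                                     # (obligors, ratings)
    new_rating = (v[:, :, None] < c[None, :, :]).sum(axis=2) - 1
    new_rating = np.clip(new_rating, 0, trans.shape[1] - 1)
    # Step 5: discount outstanding amounts by the forward zero rate of the new rating
    value = (amount / (1.0 + zero_rate[new_rating])).sum(axis=1)
    return value, np.mean(value) - np.quantile(value, 0.05)    # (distribution, credit VaR 95 %)

if __name__ == "__main__":
    _, var95 = simulate(20_000, factor_dist="t", df=5)
    print("credit VaR (95 %):", round(var95, 2))
```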
91.3.3 Discussion

91.3.3.1 Tools and Interfaces Preview
For facility and convenience, this chapter uses MATLAB and MySQL to construct an application tool that helps users analyze future portfolio values more efficiently. The tool's interactive interface is organized as follows.
Basic Information of Experimental Data (Pie Charts). The non-computing data and pie charts give the user a basic view of the loan information. These charts present the proportions of three key elements: loan amount by company, by industry, and by credit rating, helping the user form an overview of the portfolio of concern (Fig. 91.4 shows pie charts of loan amount weights in terms of enterprise, industry, and credit rating).
Information According to Experimental Data (Statistics). Besides the graphic charts, the second part provides a numerical analysis. Part I is the extraction of the company data whose maturity exceeds the given number of months, and Part II is the extraction of the essential data of the top-weighted companies. Parts I and II extract data without any computation; the only processing is sorting and removing some unused data (Figs. 91.5 and 91.6).
Set Criteria and Derive Fundamental Experimental Results. This portion is the core of the proposed tool; it provides several computational functions. The parameters that users must set themselves are:
1. Estimated year.
2. Confidence level.
3. Number of simulation runs. Of course, the more simulation runs the user chooses, the more computation time is needed.
4. Percentage of explained factors, defined for the PCA method. Using the eigenvalues of the normalized asset (equity) values, we can determine the explained percentage.
5. The option to estimate all or only a portion of the companies in the portfolio pool. The portion is sorted according to the loan amount of each enterprise, and the user can choose the companies of most concern. The computational results are written to a text file for further analysis.
Fig. 91.4 Interface of part I
Fig. 91.5 Interface of part II
Fig. 91.6 Companies' data downloaded from the part II interface
6. Distribution of the factors. This is also defined for the PCA method. There are two distributions the user can choose: the standard normal distribution or Student's t-distribution. The default degrees of freedom for the Student's t-distribution is set to one (Fig. 91.7).
Report of Overall VaR Contributors. The user may be more interested in the details of the risk profile at various levels. In this part, industries are divided into 19 sectors and credits into nine rating levels, which allows the user to see visually where the risk is concentrated (Fig. 91.8).
Fig. 91.7 Interface of part III
91.3.3.2 Experimental Results and Discussion
Table 91.6 presents the experimental results. For objectivity, the number of simulation runs is set to 100,000 in all cases, which is large enough to obtain stable numerical results.2 Based on the provided data, the 1-year future portfolio values of the listed corporations at the end of 2003 and 2004 can be estimated: standing at the beginning of 2003, we estimate the portfolio value on December 31, 2003, and standing at the beginning of 2004, we estimate the portfolio value on December 31, 2004. The following tables list the experimental results of the factor copula method under different factor distributions, compared with the multi-normal method of CreditMetrics. The top of each table gives the parameter settings, and the remaining fields are the experimental results. We note here that in Eq. 91.15,

$$V_i = r_{i1} y_1 + r_{i2} y_2 + \cdots + r_{im} y_m + \varepsilon_i,$$

the factors $y_1, y_2, \ldots, y_m$ in the following table are taken to be either standard normally distributed or Student-t distributed (with 2, 5, and 10 degrees of freedom).
2 We have examined the number of simulation runs; 100,000 runs are enough to obtain a stable computational result.
Fig. 91.8 VaR contribution of individual credits and industries
Several messages can be derived from the above table. First, the risk of the future portfolio value computed by the multi-normal method is obviously less than that computed by the proposed method; the risk amount of the proposed method is 3-5 times that of the multi-normal method. This result corresponds to most research findings that copula functions capture the fat-tail phenomenon, which prevails in practical markets, more adequately. Second, the distribution of the future portfolio value under the proposed method is more dispersed than under the multi-normal method, whose histogram concentrates near a value of about 4,000,000 with a peak frequency of roughly 50,000, versus roughly 17,000 for the proposed method. Third, it is clear that the risks simulated with Student's t-distributed factors are larger than those with normally distributed factors, and the risk amounts converge as the degrees of freedom become larger. Fourth, the portfolio mean under the proposed method is smaller than under the multi-normal method, but its standard deviation is much larger; the possible portfolio values under the proposed method thus tend to be worth less and to fluctuate more. The discrepancies between the two methods lead to some inferences. First, the proposed method provides another way to estimate more realistic credit risks of
Table 91.6 Experimental result of estimated portfolio value at the end of 2003
Parameter setting - Estimate year: 2003; Simulation runs: 100,000; Percentage of explained factors: 100.00 %; Involved listed enterprises: 40; Loan account entries: 119

Factor distribution assumption: normal distribution F ~ N(0, 1)
                Credit VaR 95 %    Credit VaR 99 %     Portfolio mean      Portfolio s.d.
Multi-normal      192,113.4991       641,022.0124     3,931,003.1086        136,821.3770
PCA               726,778.6308     1,029,766.9285     3,812,565.6170        258,628.5713

Factor distribution assumption: Student's t-distribution, degrees of freedom = 2
                Credit VaR 95 %    Credit VaR 99 %     Portfolio mean      Portfolio s.d.
Multi-normal      191,838.2019       620,603.6273     3,930,460.5935        136,405.9177
PCA             1,134,175.1655     1,825,884.8901     3,398,906.5097        579,328.2159

Factor distribution assumption: Student's t-distribution, degrees of freedom = 5
                Credit VaR 95 %    Credit VaR 99 %     Portfolio mean      Portfolio s.d.
Multi-normal      192,758.7482       610,618.5048     3,930,923.6708        135,089.0618
PCA               839,129.6162     1,171,057.2562     3,728,010.5847        337,913.7886

Factor distribution assumption: Student's t-distribution, degrees of freedom = 10
                Credit VaR 95 %    Credit VaR 99 %     Portfolio mean      Portfolio s.d.
Multi-normal      192,899.0228       600,121.1074     3,930,525.7612        137,470.3856
PCA               773,811.8411     1,080,769.3036     3,779,346.2750        291,769.4291

(In the printed chapter, each panel of Table 91.6 is accompanied by histograms of the simulated portfolio value distribution under the multi-normal method and the PCA method; only the summary statistics are reproduced here.)
Table 91.7 CPU time for factor computation (simulation runs: 100,000; year: 2003)
                                Explained ratio (seconds)
Method           100 %     90-95 %    80-85 %    70-80 %    Below 60 %
Multi-normal     2.5470    2.6090     2.2350     2.2500     2.3444
PCA              1.2030    0.7810     0.7030     0.6720     0.6090
a portfolio containing risky credits through market data, and it captures fat-tail events more notably. Second, the computation time of the proposed method is shorter than that of the multi-normal method. As Table 91.7 shows, even when fully explained factors are used, the proposed method is still faster than the multi-normal method, and the computation time decreases further as the required explained ratio is set lower.
Table 91.8 Individual credit VaR of top 5 industries
                                     Credit VaR
Industry                 Multi-normal method    PCA method
(No. 1) Electronics               40,341          252,980
(No. 2) Plastic                   42,259           42,049
(No. 3) Transportation            22,752           22,391
(No. 4) Construction               7,011            7,007
(No. 5) Textile                    2,884            2,765
Table 91.9 Estimated portfolio value at the end of 2004 with different explained levels (95 % confidence level, F ~ N(0, 1))
Explained level    100 %        90-95 %      80-85 %      70-80 %      60-70 %
Multi-normal       208,329.40   208,684.38   209,079.72   208,686.22   207,710.63
PCA                699,892.33   237,612.60   200,057.74   187,717.73   183,894.91
This means that fewer factors are needed for the expected explained level. Third, Table 91.8, which reports the individual credit VaR contributions to the whole portfolio across the 19 industries, shows that the main risk comes from the electronics industry. In the commercial data, electronics-industry customers account for more than half of the loan account entries (63/119). The credit VaR of the electronics industry computed by the proposed method is about six times that computed by the multi-normal method. This reveals that the multi-normal method lacks the ability to capture concentration risks; in contrast, under the factor structure, the mutual factor loadings extracted from the correlations among companies express the risks more realistically. Fourth, for finite degrees of freedom, the t-distribution has fatter tails than the Gaussian distribution and is known to generate tail dependence in the joint distribution. Table 91.9 shows the impact on the risk amount of using different numbers of factors: the risk decreases as the explained level decreases, a trade-off between computation time and the risk amount one is willing to accept. Most research and reports state that an 80 % explained level is large enough to be accepted.
91.4 Conclusion
Credit risk and default correlation issues have been probed in recent research, and many solutions have been proposed. We take another view of credit risks and the related tasks. From our perspective, although the loan credits in the target portfolio have widely different risk characteristics from a portfolio of debt instruments, their properties and behavior are in the main the same. In this chapter, we propose a new approach that connects principal component analysis and copula functions to estimate the credit risks of bank loan businesses. The advantage of this approach is that we do not need to specify
particular copula functions to describe the dependence structure among credits. Instead, we use a factor structure that covers a market factor and an idiosyncratic factor, and the computed risks exhibit the heavy-tail phenomenon. Another benefit is that the approach reduces the difficulty of estimating the parameters that copula functions would otherwise require. It thus provides an alternative with better performance than the conventional method of assuming that the dependence structure is normally distributed. In order to describe the risk features and other messages that bank policymakers may wish to know, we wrote a tool for risk estimation and result display. It contains a basic data preview, which simply downloads data from the database and performs some statistical analyses; it also provides different parameter settings, uses Monte Carlo simulation to calculate credit VaR, and finally gives an overview of individual credit VaR contributions. The experimental results are consistent with previous studies: the risk will be underestimated relative to real risks if the dependence structure is assumed to be normally distributed. The approach and tool still have room for improvement, such as recovery rate estimation, how to choose the distributions of the factors, and a friendlier user interface.
References
Andersen, L., & Sidenius, J. (2004). Extensions to the Gaussian copula: Random recovery and random factor loadings. Journal of Credit Risk, 1(1), 29–70.
Bielecki, T. R., Crepey, S., Jeanblanc, M., & Zargari, B. (2012). Valuation and hedging of CDS counterparty exposure in a Markov copula model. International Journal of Theoretical and Applied Finance, 15(1), 85–123.
Bouyé, E., Durrleman, V., Nikeghbali, A., Riboulet, G., & Roncalli, T. (2000). Copulas for finance – A reading guide and some applications. Working paper, Groupe de Recherche Opérationnelle, Crédit Lyonnais.
Embrechts, P., McNeil, A., & Straumann, D. (1999). Correlation and dependence in risk management: Properties and pitfalls. Zurich: Mimeo, ETH Zentrum.
Gollinger, T. L., & Morgan, J. B. (1993). Calculation of an efficient frontier for a commercial loan portfolio. Journal of Portfolio Management, 19, 39–46.
Gupton, G. M., Finger, C. C., & Bhatia, M. (1997). CreditMetrics – technical document. New York: Morgan Guaranty Trust Company.
Hull, J., & White, A. (2004). Valuation of a CDO and an n-th to default CDS without Monte Carlo simulation. Journal of Derivatives, 12(2), 8–23.
Jorion, P. (2000). Value at risk. New York: McGraw-Hill.
Li, D. X. (2000). On default correlation: A copula function approach. Journal of Fixed Income, 9(4), 43–54.
Shen, D. B., Tsai, C. C., & Jing, Y. K. (2003). Research of Taiwan recovery rate with TEJ Data Bank. Accounting Research Monthly, 215, 113–125.
Sklar, A. (1959). Fonctions de répartition à n dimensions et leurs marges. Publications de l'Institut de Statistique de l'Université de Paris, 8, 229–231.
Stevenson, B. G., & Fadil, M. W. (1995). Modern portfolio theory: Can it work for commercial loans? Commercial Lending Review, 10(2), 4–12.
Yang, T. C. (2005). The pricing of credit risk of portfolio based on listed corporations in the Taiwan market. Master thesis, National Tsing Hua University, Taiwan.
Instantaneous Volatility Estimation by Nonparametric Fourier Transform Methods
92
Chuan-Hsiang Han
Contents
92.1 Introduction
92.2 Volatility Estimation: Introduction to Fourier Transform Method
  92.2.1 Price Correction Schemes: Bias Reduction
92.3 Simulation Tests
  92.3.1 Case I: Local Volatility Model
  92.3.2 Case II: Stochastic Volatility Model
92.4 Volatility Estimation Under Different Sampling Frequencies
  92.4.1 Volatility Daily Effect
  92.4.2 Multiple Risk Factors of Volatility
92.5 Hypothesis of Linearity Between the Instantaneous Volatility and VIX
  92.5.1 Theoretical Result
  92.5.2 Empirical Results
92.6 Concluding Remarks and Future Works
References
Abstract
Malliavin and Mancino (2009) proposed a nonparametric Fourier transform method to estimate the instantaneous volatility under the assumption that the underlying asset price process is a semi-martingale. Based on this theoretical result, this chapter first conducts some simulation tests to justify the effectiveness of the Fourier transform method. Two correction schemes are proposed to improve the accuracy of volatility estimation. By means of these Fourier transform methods, some documented phenomena such as volatility daily
Work supported by NSC 101-2115-M-007-011, Taiwan.
C.-H. Han
Department of Quantitative Finance, National Tsing Hua University, Hsinchu, Taiwan, Republic of China
e-mail: [email protected]
effect and multiple risk factors of volatility can be observed. Then, a linear hypothesis between the instantaneous volatility and VIX derived from Zhang and Zhu (2006) is investigated. We extend their result and adopt a general linear test for empirical analysis.

Keywords
Information content • Instantaneous volatility • Fourier transform method • Bias reduction • Correction method • Local volatility • Stochastic volatility • VIX • Volatility daily effect • Online estimation
92.1 Introduction
Volatility estimation has been recognized as a core problem in the development of modern financial markets, and an enormous literature is devoted to this subject. The concept of "information content" is introduced to categorize this huge body of studies. More precisely, backward and forward information contents of volatility are distinguished by time. For any given time point, the backward information content refers to the usage of a set of past information; a segment of historical data on an underlying risky asset, such as historical stock prices, is a typical example. In contrast, the forward information content refers to the usage of financial data that contain risk exposure in the future; financial derivatives such as futures and options are typical examples. The backward information content of volatility is extensively investigated in the fields of financial statistics and econometrics; see Tsai (2005), Engle (2009), and references therein. The forward information content of volatility is almost exclusively studied in the fields of mathematical finance and financial engineering. Within all these academic fields, parametric models play the key role in analyzing financial data such as stocks and options, because certain mathematical structures allow analytic or computational assessments of the relevant estimation procedures. Relatively few results on nonparametric models can be found for analyzing volatility. The Dupire formula and VIX (Gatheral 2006) are frontier approaches that compute certain kinds of volatility using traded option data; of course, these volatilities contain forward information. In the context of backward information, the dual concept of VIX, i.e., the integrated variance, is the historical volatility squared, which is defined as the variance of the standardized returns and is easy to calculate. The instantaneous volatility, denoted by $\sigma_t$, can be viewed as the dual concept of the Dupire formula, in which the volatility $\sigma(T, K)$ depends on the maturity $T$ and the strike price $K$. Unfortunately, estimation of the instantaneous volatility is hard. One accessible way to estimate it comes directly from differentiating the quadratic variation $\langle \cdot,\cdot\rangle_t$ of the underlying price process $S$. That is, for small $\Delta > 0$,
$$\sigma_t^2 \approx \frac{\langle S, S\rangle_{t+\Delta} - \langle S, S\rangle_t}{\Delta}.$$
This approximation is theoretically consistent but not plausible for practical implementation, mainly because the differentiation is sensitive to the data frequency, as seen from the above approximation; see Zhang and Mykland (2005) for improved methods in this direction. Only recently did Malliavin and Mancino (2009) propose a Fourier transform method for estimating the instantaneous volatility. This alternative approach is integral based rather than differentiation based, unlike the aforementioned difference approximation. The authors claim that the approach is particularly suitable for the analysis of high-frequency time series and for the computation of cross volatilities. However, Reno (2008) warns that the Fourier algorithm performs badly near the boundaries of the estimated volatility time series, i.e., the estimates for the first and last 1 % of the series are not accurate enough, and recommends discarding those estimates near the boundary. Yet this compromise may constitute a drawback in estimation: for example, exclusively following Reno (2008) and dropping the most recent 1 % of volatility estimates will distort the prediction of a short-horizon volatility, say 1-day volatility. To avoid this "boundary effect" pitfall, price correction schemes that match the estimated volatility with observed price returns have been proposed in Han et al. (2014) and Han (2014). These schemes only require solving some regression equations derived from the maximum likelihood method, so they are easy to implement. Additional advantages include (i) no loss of data observations and (ii) reduction of the volatility bias generated by the Fourier transform method. Based on these developed Fourier transform methods for estimating the instantaneous volatility, this chapter further conducts two empirical studies as applications. The first empirical study investigates the dynamic behavior of volatility under three different sampling frequencies: high, medium, and low. In particular, the daily effect (a U shape of volatility) and evidence of multiple risk factors of volatility are observed; these observations are consistent with empirical findings in the financial literature. The second empirical study reveals the linear relationship between VIX and the instantaneous volatility: we derive a theoretical result and apply a general linear test to examine the linearity. Two datasets are used for the empirical examination, TAIEX (January 2001-March 2011) and the S&P 500 index together with its VIX (January 1990-January 2011); the data period covers both tranquil and turbulent times. The organization of this chapter is as follows. Section 92.2 reviews the Fourier transform method and two price correction schemes, a linear and a nonlinear correction method. Section 92.3 conducts simulation tests for typical local and stochastic volatility models. Section 92.4 investigates the dynamic behavior of volatility under different sampling frequencies. Section 92.5 conducts a linear test for the instantaneous volatility and VIX. Section 92.6 concludes the chapter.
92.2 Volatility Estimation: Introduction to Fourier Transform Method
The Fourier transform method (Malliavin and Mancino 2009) is a nonparametric method for estimating a multivariate volatility process. Its main idea is to reconstruct the volatility time series in terms of a sine and cosine basis under the following continuous semimartingale assumption. Let $u_t$ be the log-price of an underlying asset $S$ at time $t$, i.e., $u_t = \ln S_t$, and follow a diffusion process

$$du_t = \mu_t\,dt + \sigma_t\,dW_t, \qquad (92.1)$$

where $\mu_t$ is the instantaneous growth rate and $W_t$ is a one-dimensional standard Brownian motion. One can estimate the volatility time series $\sigma_t$ with the following steps.

• Step 1: Compute the Fourier coefficients of the underlying $u_t$ as follows:

$$a_0(du) = \frac{1}{2\pi}\int_0^{2\pi} du_t, \qquad (92.2)$$
$$a_k(du) = \frac{1}{\pi}\int_0^{2\pi} \cos(kt)\,du_t, \qquad (92.3)$$
$$b_k(du) = \frac{1}{\pi}\int_0^{2\pi} \sin(kt)\,du_t, \qquad (92.4)$$

for any $k \ge 1$, so that $u(t) = a_0 + \sum_{k=1}^{\infty}\bigl(-\tfrac{b_k(du)}{k}\cos(kt) + \tfrac{a_k(du)}{k}\sin(kt)\bigr)$. Note that the original time interval $[0, T]$ can always be rescaled to $[0, 2\pi]$, as shown in the above integrals.

• Step 2: Compute the Fourier coefficients of the variance $\sigma_t^2$ as follows: for $k \ge 0$,

$$a_k(\sigma^2) = \lim_{N\to\infty} \frac{\pi}{2N+1}\sum_{s=-N}^{N-k}\bigl[a_s(du)\,a_{s+k}(du) + b_s(du)\,b_{s+k}(du)\bigr], \qquad (92.5)$$
$$b_k(\sigma^2) = \lim_{N\to\infty} \frac{\pi}{2N+1}\sum_{s=-N}^{N-k}\bigl[a_s(du)\,b_{s+k}(du) - b_s(du)\,a_{s+k}(du)\bigr], \qquad (92.6)$$

in which $a_s(du)$ and $b_s(du)$ for $s \le 0$ are defined by

$$a_s(du) = \begin{cases} a_s(du), & s > 0 \\ 0, & s = 0 \\ a_{-s}(du), & s < 0 \end{cases} \qquad\text{and}\qquad b_s(du) = \begin{cases} b_s(du), & s > 0 \\ 0, & s = 0 \\ -b_{-s}(du), & s < 0. \end{cases}$$

• Step 3: Reconstruct the variance time series $\sigma_t^2$ by

$$\sigma_t^2 = \lim_{N\to\infty}\sum_{k=0}^{N} \varphi(\delta k)\bigl[a_k(\sigma^2)\cos(kt) + b_k(\sigma^2)\sin(kt)\bigr], \qquad (92.7)$$

where $\varphi(x) = \frac{\sin^2(x)}{x^2}$ is a smoothing function with the initial condition $\varphi(0) = 1$, and $\delta \le 1$ is a smoothing parameter typically specified as $\delta = 1/50$ (Reno 2008).

From Eqs. 92.2, 92.3, and 92.4 it is observed that the integration error of the Fourier coefficients is inversely related to the data frequency: the higher the data frequency, the more accurate each integral becomes. This Fourier transform method is easy to implement because, as shown in Eqs. 92.5 and 92.6, the Fourier coefficients of the variance time series can be approximated by a finite sum of products of the $a$ and $b$ coefficients. This integration-based method accordingly avoids the drawbacks inherited by the traditional methods based on differentiating the quadratic variation.
92.2.1 Price Correction Schemes: Bias Reduction
It is documented that this Fourier transform method incurs a "boundary effect." Reno (2008) notes that the Fourier algorithm provides inaccurate estimates of the volatility time series near the boundary of the data and suggests discarding the first and last 1 % of the estimated series for better estimation. This compromise conflicts with the Markov property, a key assumption in stochastic financial theory (Shreve 2000): when one is about to compute value at risk, evaluate option prices, or hedge financial derivatives, the most recent volatility estimate is needed for the computational task. Two correction schemes that remedy this boundary deficit are reviewed as follows. Recall that $u_t$ defined in Eq. 92.1 is the natural logarithm of the asset price. Based on the Euler discretization, the increment of the log-price can be approximated by $\sigma_t\sqrt{\Delta t}\,\varepsilon_t$. That is,

$$\Delta u_t \approx \sigma_t \sqrt{\Delta t}\,\varepsilon_t, \qquad (92.8)$$

where $\Delta t$ denotes a small discretized time interval and $\varepsilon_t$ denotes a sequence of i.i.d. standard normal random variables. This approximation is derived by neglecting the drift term of small order $\Delta t$ and using the increment distribution of Brownian motion, $\Delta W_t = \sqrt{\Delta t}\,\varepsilon_t$. Given a set of discrete observations of log returns, let $\hat{\sigma}_t$ denote the volatility time series estimated from the original Fourier transform method. Two correction schemes, a nonlinear and a linear correction method, have been proposed in Han et al. (2014) and Han (2014), respectively. These corrected Fourier transform methods are effective in reducing the bias of volatility estimation known as the boundary effect.
1. Nonlinear Correction Method: This method applies a linear transformation to the natural logarithm of the estimated variance process $\hat{\sigma}_t^2$ in order to guarantee the positiveness of the corrected volatility. That is, we transform $\hat{Y}_t = 2\ln\hat{\sigma}_t$ to $a + b\hat{Y}_t$ so that the corrected volatility $\sigma_t = \exp\bigl((a + b\hat{Y}_t)/2\bigr) > 0$ satisfies $\Delta u_t \approx \exp\bigl((a + b\hat{Y}_t)/2\bigr)\sqrt{\Delta t}\,\varepsilon_t$, where $\Delta u_t = u_{t+1} - u_t$ and $a$ and $b$ denote the correction variables. This linear transformation of $\hat{Y}_t$ can be understood as a first-order approximation to a possible nonlinear transformation of the estimated volatility $\hat{\sigma}_t$. We can then use the maximum likelihood method to regress out the correction variables via the relationship between the logarithm of the squared standardized return $\Delta u_t/\sqrt{\Delta t}$ and the driving volatility process $a + b\hat{Y}_t$:

$$\ln\Bigl(\frac{\Delta u_t}{\sqrt{\Delta t}}\Bigr)^{2} = a + b\hat{Y}_t + \ln\varepsilon_t^2. \qquad (92.9)$$

2. Linear Correction Method: This method directly applies a linear transformation to the estimated volatility:

$$\sigma_t = a + b\hat{\sigma}_t. \qquad (92.10)$$

Substituting this corrected volatility $\sigma_t$ into Eq. 92.8 gives $\Delta u_t \approx (a + b\hat{\sigma}_t)\sqrt{\Delta t}\,\varepsilon_t$. For regression purposes, we take squares on both sides and then the natural logarithm to obtain the following nonlinear equation:

$$\ln\Bigl(\frac{\Delta u_t}{\sqrt{\Delta t}}\Bigr)^{2} \approx \ln\bigl(a + b\hat{\sigma}_t\bigr)^{2} + \ln\varepsilon_t^2.$$

We remark that there is no guarantee that the corrected volatility estimate defined in Eq. 92.10 remains positive; this is a disadvantage compared with the nonlinear correction method. Note that these correction methods do not involve any model parameters, so they retain the spirit of non-parameterization.
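A minimal sketch of the two correction regressions is given below. It approximates the maximum likelihood fitting with simple least squares and centers the $\ln\varepsilon^2$ noise by its known mean $E[\ln\varepsilon^2]\approx -1.27$; these implementation choices are ours and are not necessarily identical to those of Han et al. (2014) and Han (2014).

```python
import numpy as np
from scipy.optimize import least_squares

LOG_CHI2_MEAN = -1.2704   # E[ln eps^2] for eps ~ N(0, 1), used to center the noise term

def nonlinear_correction(du, sigma_hat, dt):
    """Fit a, b of Eq. 92.9 and return exp((a + b * Y_hat) / 2), which stays positive."""
    y_hat = 2.0 * np.log(sigma_hat)                          # Y_hat_t = 2 ln sigma_hat_t
    lhs = np.log((du / np.sqrt(dt)) ** 2) - LOG_CHI2_MEAN    # centered log squared return
    X = np.column_stack([np.ones_like(y_hat), y_hat])
    (a, b), *_ = np.linalg.lstsq(X, lhs, rcond=None)
    return np.exp((a + b * y_hat) / 2.0)

def linear_correction(du, sigma_hat, dt):
    """Fit a, b of Eq. 92.10 by nonlinear least squares; the result may not stay positive."""
    lhs = np.log((du / np.sqrt(dt)) ** 2) - LOG_CHI2_MEAN
    resid = lambda p: np.log((p[0] + p[1] * sigma_hat) ** 2) - lhs
    sol = least_squares(resid, x0=[0.0, 1.0])
    a, b = sol.x
    return a + b * sigma_hat
```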
92.3 Simulation Tests
In this section, two well-known volatility models, a local volatility model and a stochastic volatility model, are considered for simulation studies in order to justify the effectiveness of these Fourier transform methods.
92.3.1 Case I: Local Volatility Model
A local volatility model of the following form,

$$dS_t = a(m - S_t)\,dt + b\,S_t^{\gamma}\,dW_t,$$
is employed. Model parameters are taken from the estimation results of Jiang (1998): $a = 0.093$, $m = 0.079$, $b = 0.794$, and $\gamma = 1.474$. Sample paths of the process $S_t$ and its volatility $\sigma_t = b S_t^{\gamma}$ are generated by the Euler discretization with time step size $\Delta t = 1/250$ and a total sample size of 5,000. Based on the original Fourier transform method and the two proposed price correction schemes, three volatility time series are estimated and compared with the actual volatility series generated from the local volatility model. The error criteria are the mean squared error (MSE) and the maximum absolute error (MAE). The comparison results are:
1. MSE: 7.52E-04 (original Fourier method), 1.19E-05 (nonlinear correction method), and 7.61E-06 (linear correction method)
2. MAE: 0.04 (original Fourier method), 0.02 (nonlinear correction method), and 0.01 (linear correction method)
The price correction methods reduce both error criteria by at least half in this simulated example, and many other simulation studies show similar bias reduction.
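For reference, a condensed Python sketch of this simulation test (Euler scheme plus the two error criteria) is shown below. It assumes the fourier_spot_variance function from the sketch in Sect. 92.2 is in scope; the starting level and the choice to feed the level $S_t$ itself to the estimator (so that its output is the spot variance of $S$, i.e., $(bS^{\gamma})^2$) are our illustrative choices, so the numbers will not reproduce the chapter's exact results.

```python
import numpy as np

# assumes fourier_spot_variance() from the sketch in Sect. 92.2 is already defined/imported
a, m, b, gamma = 0.093, 0.079, 0.794, 1.474      # parameter values quoted from Jiang (1998)
dt, n = 1.0 / 250.0, 5000
rng = np.random.default_rng(7)

S = np.empty(n + 1)
S[0] = m                                          # hypothetical starting level (long-run mean)
for i in range(n):                                # Euler scheme: dS = a(m - S)dt + b S^gamma dW
    dW = np.sqrt(dt) * rng.standard_normal()
    S[i + 1] = max(S[i] + a * (m - S[i]) * dt + b * S[i] ** gamma * dW, 1e-8)

true_vol = b * S ** gamma                         # sigma_t = b S_t^gamma

# feed the level S so the estimator returns the spot variance of S itself, i.e. (b S^gamma)^2;
# the factor 2*pi/(n*dt) maps the variance from the rescaled clock [0, 2*pi] back to calendar time
_, var_hat = fourier_spot_variance(S, n_modes=500)   # moderate mode count keeps the demo light
est_vol = np.sqrt(var_hat * 2.0 * np.pi / (n * dt))

print("MSE:", np.mean((est_vol - true_vol[1:]) ** 2))
print("MAE:", np.max(np.abs(est_vol - true_vol[1:])))
```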
92.3.2 Case II: Stochastic Volatility Model
Stochastic volatility models often possess the mean-reverting property. Among various models, the Ornstein-Uhlenbeck (OU) process is often taken as the driving one-factor volatility model, also known in finance as the exp-OU model. It is defined as
$$dS_t = \mu S_t\,dt + \exp(Y_t/2)\,S_t\,dW_t^{(1)}, \qquad dY_t = a(m - Y_t)\,dt + b\,dW_t^{(2)}, \qquad (92.11)$$

where $S_t$ denotes the underlying risky asset price, $\mu$ the return rate, and $W_t^{(1)}$ and $W_t^{(2)}$ are two correlated Brownian motions with correlation $\rho$. The volatility process is $\sigma_t = \exp(Y_t/2)$, where $Y_t$ is the Ornstein-Uhlenbeck process, $m$ denotes its long-run mean, $a$ the rate of mean reversion, and $b$ the vol-vol (volatility of volatility). Model parameters are set as follows: $\mu = 0.01$, $S_0 = 50$, $Y_0 = -2$, $m = -2$, $a = 5$, $b = 1$, and $\rho = 0$. To simulate sample paths, the time discretization is set to $\Delta t = 1/5{,}000$ and the total number of samples is 5,000. The simulation study proceeds as follows. First, the time series of the volatility $\sigma_t = \exp(Y_t/2)$ and the asset price $S_t$ are simulated. Using the original Fourier transform method and the nonlinear correction method, two volatility time series are then estimated and compared with the true volatility time series. The comparison results are:
1. Mean squared error: 0.0324 (original FTM), 0.0025 (corrected FTM)
2. Maximum absolute error: 0.3504 (original FTM), 0.1563 (corrected FTM)
These simulated results show that more than half of the error is removed by the corrected FTM, and similar reductions are found across a large number of simulations. It is worth noting that the employed nonlinear correction method is based on the maximum likelihood method, which is fairly easy to compute numerically. This section has presented simulation tests of the two corrected Fourier transform methods for instantaneous volatility estimation; they effectively reduce the bias generated by the truncation errors of the original Fourier transform method. Next, we apply these methods to analyze the volatility of indices such as the S&P 500 in the USA and TAIEX (Taiwan Stock Exchange Capitalization Weighted Stock Index) in Taiwan.
92.4 Volatility Estimation Under Different Sampling Frequencies
92.4.1 Volatility Daily Effect
It is well documented that volatility demonstrates a daily effect at high sampling frequencies. This effect produces a U-shaped pattern of volatility, often observed in intraday data. TAIEX, compiled by the Taiwan Stock Exchange, is updated every 15 s and is publicly available; such easy access to the data enables an exploration of the daily effect. The sampled data period covers the four consecutive trading days from February 10 to 15, 2011. In Fig. 92.1, the bottom line, labeled Vol with magnitude on the left Y-axis, shows four U shapes of the instantaneous volatility. Using an average-type deseasoning technique (see Wild and Seber 1999), the time series of deseasoned volatility, labeled Vol_Deseasoned with magnitude on the right Y-axis, is shown in the middle part of that figure; a minimal sketch of this kind of seasonal adjustment is given below. The deseasoned volatility is often used to measure the actual activity of volatility.
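The "average-type" deseasoning can be summarized in a few lines: the average volatility in each intraday time slot across days defines the seasonal (U-shaped) profile, and each observation is divided by its slot's profile value. This generic sketch is ours; the exact normalization behind Fig. 92.1 may differ in detail.

```python
import numpy as np

def deseason(vol, slots_per_day):
    """Average-type deseasoning of an intraday volatility series.

    vol           : 1-D array of intraday volatility estimates, day after day.
    slots_per_day : number of intraday observations per trading day.
    Returns (seasonal_profile, deseasoned_series).
    """
    n_days = vol.size // slots_per_day
    grid = vol[: n_days * slots_per_day].reshape(n_days, slots_per_day)
    seasonal = grid.mean(axis=0)                    # average U-shape across days
    seasonal = seasonal / seasonal.mean()           # normalize so the profile averages to one
    deseasoned = grid / seasonal                    # remove the time-of-day pattern
    return seasonal, deseasoned.ravel()

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    slots, days = 64, 4
    u_shape = 0.10 + 0.08 * (np.linspace(-1, 1, slots) ** 2)   # hypothetical U-shaped profile
    vol = np.tile(u_shape, days) * np.exp(0.1 * rng.standard_normal(slots * days))
    profile, flat = deseason(vol, slots)
    print("profile range:", profile.min().round(3), "-", profile.max().round(3))
```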
92.4.2 Multiple Risk Factors of Volatility
From the statistical point of view, volatility is a latent variable, and its estimation has attracted a great deal of attention during the last two decades; see the review papers of Broto and Ruiz (2004), Molina et al. (2010), Yu (2010), and references therein. By virtue of the nonparametric Fourier transform method, volatility is represented as a Fourier transformation of the underlying asset prices, which can be calculated but is subject to some truncation error. Given such an estimate, modified by our correction schemes, one can further estimate the parameters of a prescribed model. Therefore, a new approach to estimating the exp-OU stochastic volatility model parameters can be given as follows:
Step 1: Use a corrected Fourier transform method to estimate the instantaneous volatility $\hat{\sigma}_t$. By taking the natural logarithm, $\hat{Y}_t = 2\ln\hat{\sigma}_t$, the time series of the driving OU volatility factor becomes available.
Vol
12:44:15
11:57:15
11:10:15
10:23:15
09:36:15
13:19:15
12:32:15
11:45:15
10:58:15
10:11:15
09:24:15
13:07:15
12:20:15
11:33:15
10:46:15
09:59:15
09:12:15
12:55:15
12:08:15
11:21:15
10:34:15
09:47:15
0.2 0.18 0.16 0.14 0.12 0.1 0.08 0.06 0.04 0.02 0 09:00:15
1 0.9 0.8 0.7 0.6 0.5 0.4 0.3 0.2 0.1 0
Fig. 92.1 2011/02/102011/02/15 TAIEX’s estimated Ivol (every 15 s)
Step 2: Discretize the OU process and use the maximum likelihood method to estimate its three parameters. A detailed description can be found in Han et al. (2014); the estimators of the parameters defined in Eq. 92.11 are given below:

$$\hat{a} = \frac{1}{\Delta t}\left[1 - \frac{(N-1)\sum_{t=1}^{N-1}\hat{Y}_t\hat{Y}_{t+1} - \Bigl(\sum_{t=1}^{N-1}\hat{Y}_t\Bigr)\Bigl(\sum_{t=2}^{N}\hat{Y}_t\Bigr)}{(N-1)\sum_{t=1}^{N-1}\hat{Y}_t^2 - \Bigl(\sum_{t=1}^{N-1}\hat{Y}_t\Bigr)^{2}}\right],$$

$$\hat{b} = \sqrt{\frac{1}{(N-1)\Delta t}\sum_{t=1}^{N-1}\Bigl[\hat{Y}_{t+1} - \hat{a}\hat{m}\Delta t - (1 - \hat{a}\Delta t)\hat{Y}_t\Bigr]^{2}},$$

$$\hat{m} = \frac{1}{\hat{a}\Delta t}\left[\frac{\Bigl(\sum_{t=2}^{N}\hat{Y}_t\Bigr)\Bigl(\sum_{t=1}^{N-1}\hat{Y}_t^2\Bigr) - \Bigl(\sum_{t=1}^{N-1}\hat{Y}_t\Bigr)\Bigl(\sum_{t=1}^{N-1}\hat{Y}_t\hat{Y}_{t+1}\Bigr)}{(N-1)\sum_{t=1}^{N-1}\hat{Y}_t^2 - \Bigl(\sum_{t=1}^{N-1}\hat{Y}_t\Bigr)^{2}}\right].$$

These are the ordinary least-squares (maximum likelihood) estimators of the discretized regression $\hat{Y}_{t+1} = \hat{a}\hat{m}\Delta t + (1 - \hat{a}\Delta t)\hat{Y}_t + \hat{b}\sqrt{\Delta t}\,\varepsilon_{t+1}$.
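In practice these closed-form sums amount to an AR(1) regression, so they can be computed in a few lines. The sketch below (ours) fits $(a, m, b)$ from a discretely observed $\hat{Y}$ path; the demo parameters mirror the exp-OU example of Sect. 92.3.2 and are illustrative only.

```python
import numpy as np

def fit_ou(y, dt):
    """Estimate (a, m, b) of dY = a(m - Y)dt + b dW from a discretely observed path y."""
    y0, y1 = y[:-1], y[1:]
    # OLS regression y1 = intercept + slope * y0 (equivalent to the closed-form sums in the text)
    slope, intercept = np.polyfit(y0, y1, 1)
    a_hat = (1.0 - slope) / dt                        # slope     = 1 - a * dt
    m_hat = intercept / (a_hat * dt)                  # intercept = a * m * dt
    resid = y1 - intercept - slope * y0
    b_hat = np.sqrt(np.mean(resid ** 2) / dt)         # residual std scaled by sqrt(dt)
    return a_hat, m_hat, b_hat

if __name__ == "__main__":
    rng = np.random.default_rng(11)
    a, m, b, dt, n = 5.0, -2.0, 1.0, 1.0 / 250.0, 20000
    y = np.empty(n); y[0] = m
    for i in range(n - 1):                            # Euler scheme for the OU driving factor
        y[i + 1] = y[i] + a * (m - y[i]) * dt + b * np.sqrt(dt) * rng.standard_normal()
    print(fit_ou(y, dt))
```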
These estimators are employed to estimate the model parameters under three different sampling frequencies: high (5 mins_1 day, 5-minute data over one trading day), medium (1 day_2 years, daily data over 2 years), and low (1 week_10 years, weekly data over 10 years). The parameter $m$, the long-run mean, is referred to as the risk level of volatility. Figure 92.2 illustrates its estimation from January 3 to March 31, 2011. More precisely, for any given date we use historical data of 1 day, 2 years, and 10 years separately and then apply the estimator $\hat{m}$ for the parameter $m$ under
Fig. 92.2 2011/01/03-2011/03/31 long-run mean (m) of the exp-OU stochastic volatility model
these three different frequencies. Our dataset is TAIEX, and the data source is the Taiwan Stock Exchange. The two other model parameters, the rate of mean reversion $a$ and the vol-vol $b$, are related to the invariance property of time scales; see Fouque et al. (2000) for detailed discussions. From Table 92.1, both the mean-reverting rate $a$ and the vol-vol $b$ are proportional to the data sampling frequency: the higher the data frequency, the larger these parameters are. Note that the parameters are well separated, especially the rate of mean reversion. These estimates show strong evidence of multiple time scales of volatility.
92.5 Hypothesis of Linearity Between the Instantaneous Volatility and VIX
VIX (Volatility Index) is compiled by the Chicago Board Options Exchange (CBOE). Its formula is mainly based on out-of-the-money S&P 500 index options with weights depending on the corresponding strike prices; the full definition and formula can be found in Hull (2008) or the VIX white paper published by the CBOE.1 VIX is often used to measure the aggregated risk exposure over the next 30 calendar days. Hence, the information content of VIX is forward, as opposed to its dual backward information content of volatility, i.e., the 30-day historical volatility.
1 www.cboe.com/micro/vix/vixwhite.pdf
Table 92.1 Model parameter estimations under different data frequencies
                     a            (a_std)        b          (b_std)
5 mins_1 day         11,661.39    (2,681.39)     114.83     (14.70)
1 day_2 years        23.81        (2.08)         6.29       (0.27)
1 week_10 years      2.62         (0.17)         2.18       (0.07)
In addition, due to a strong negative correlation with the S&P 500 index, VIX is popularly used as a fear gauge.
92.5.1 Theoretical Result
Zhang and Zhu (2006) obtained a linear relationship between the VIX squared and the instantaneous variance under the Heston (1993) model. The result is based on the following derivation. Under a risk-neutral probability measure, the instantaneous variance $V_s$ is assumed to follow a square-root process

$$dV_s = \bigl[am - (a + \lambda)V_s\bigr]\,ds + b\sqrt{V_s}\,dW_s,$$

where $m$ denotes the long-run mean, $a$ the rate of mean reversion, $b$ the volatility of variance, $W_s$ a standard Brownian motion, and $\lambda$ the volatility risk premium arising from the change of probability measure. Taking mathematical expectations on both sides of the stochastic differential equation, we obtain

$$dE_t[V_s] = \bigl\{am - (a + \lambda)E_t[V_s]\bigr\}\,ds. \qquad (92.12)$$

This holds because an exchange of integration and differentiation is assumed and the conditional expectation (at time $t$) of the Brownian motion term is zero; the notation $E_t[\cdot]$ denotes the conditional expectation given the filtration at time $t$ under a risk-neutral probability measure. Equation 92.12 is a linear ordinary differential equation, and its solution is

$$E_t[V_s] = \frac{am}{a + \lambda} + \Bigl(V_t - \frac{am}{a + \lambda}\Bigr)e^{-(a + \lambda)(s - t)}.$$

Since VIX$^2$ can also be defined as a variance swap rate (Hull 2008), one can evaluate it by its definition:

$$\mathrm{VIX}_t^2 = E_t\Bigl[\frac{1}{\tau_0}\int_t^{t+\tau_0} V_s\,ds\Bigr] = \frac{1}{\tau_0}\int_t^{t+\tau_0} E_t[V_s]\,ds,$$

where $\tau_0$ is defined as $30/365$. In summary, under the Heston model a linear relationship between VIX$^2$ and the instantaneous variance $V_t$ is deduced:

$$\mathrm{VIX}_t^2 = A + B V_t,$$

where the constant coefficients are

$$A = \frac{am}{a + \lambda}\Bigl(1 - \frac{1 - e^{-(a+\lambda)\tau_0}}{(a + \lambda)\tau_0}\Bigr), \qquad B = \frac{1 - e^{-(a+\lambda)\tau_0}}{(a + \lambda)\tau_0}.$$
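As a quick numerical sanity check of these coefficients, the closed-form $A$ and $B$ can be compared with a direct numerical average of $E_t[V_s]$ over the next $\tau_0$ years; the parameter values in the sketch below are arbitrary illustrations, not estimates.

```python
import numpy as np

def vix2_closed_form(v_t, a, lam, m, tau0=30.0 / 365.0):
    """Heston-model VIX^2 = A + B * V_t (Zhang and Zhu 2006)."""
    kappa = a + lam                                   # risk-neutral mean-reversion speed
    B = (1.0 - np.exp(-kappa * tau0)) / (kappa * tau0)
    A = a * m / kappa * (1.0 - B)
    return A + B * v_t, A, B

def vix2_numeric(v_t, a, lam, m, tau0=30.0 / 365.0, n=20000):
    """Average E_t[V_s] over [t, t + tau0] by the trapezoid rule."""
    kappa = a + lam
    s = np.linspace(0.0, tau0, n)
    ev = a * m / kappa + (v_t - a * m / kappa) * np.exp(-kappa * s)   # E_t[V_{t+s}]
    integral = np.sum((ev[:-1] + ev[1:]) * (s[1] - s[0]) / 2.0)
    return integral / tau0

if __name__ == "__main__":
    v_t, a, lam, m = 0.04, 5.0, 0.5, 0.05             # illustrative parameters only
    closed, A, B = vix2_closed_form(v_t, a, lam, m)
    print(f"A = {A:.6f}, B = {B:.6f}")
    print(f"closed form: {closed:.8f}   numeric: {vix2_numeric(v_t, a, lam, m):.8f}")
```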
Next, we generalize this result to a larger class of models in which the long-run mean $m$ and the vol-vol can be time dependent but deterministic. Even under this larger model class, we can still derive a linear relationship between the VIX squared and the instantaneous variance.

Theorem 1 Assume that the instantaneous volatility $\sigma_t$ follows a mean-reverting process

$$d\sigma_t = a\bigl(m(t) - \sigma_t\bigr)dt + g(\sigma_t)\,dW_t, \qquad (92.13)$$

where the functions $m$ and $g$ are such that the classical assumptions for stochastic differential equations (see Oksendal 1998) are satisfied. Then a linear relationship holds between the VIX squared and the instantaneous variance:

$$\mathrm{VIX}_t^2 = A + B\sigma_t^2,$$

where $A = \frac{1}{\tau}\int_0^{\tau}\int_0^{t} e^{-2a(t-s)}h(s)\,ds\,dt$ and $B = \frac{1}{2a\tau}\bigl(1 - e^{-2a\tau}\bigr)$, with $h$ as defined in the proof below.
Proof Apply Ito's lemma to $\sigma_t^2$ to obtain

$$d\sigma_t^2 = \bigl(2am(t)\sigma_t - 2a\sigma_t^2 + g^2(\sigma_t)\bigr)dt + 2\sigma_t g(\sigma_t)\,dW_t.$$

Then take a mathematical expectation on both sides:

$$dE\bigl[\sigma_t^2\bigr] = \bigl(-2aE\bigl[\sigma_t^2\bigr] + h(t)\bigr)dt, \qquad (92.14)$$

where $h(t) = E\bigl[2am(t)\sigma_t + g^2(\sigma_t)\bigr]$ is a time-dependent deterministic function. This is a linear ordinary differential equation with an inhomogeneous term; its unique solution is

$$E\bigl[\sigma_t^2\bigr] = \sigma_0^2 e^{-2at} + \int_0^t e^{-2a(t-s)}h(s)\,ds,$$

and VIX$^2$, defined as $\mathrm{VIX}_0^2 = \frac{1}{\tau}\int_0^{\tau} E\bigl[\sigma_t^2\bigr]\,dt$, equals $A + B\sigma_0^2$ with coefficients $A = \frac{1}{\tau}\int_0^{\tau}\int_0^{t} e^{-2a(t-s)}h(s)\,ds\,dt$ and $B = \frac{1}{2a\tau}\bigl(1 - e^{-2a\tau}\bigr)$.
Fig. 92.3 Daily VIX, denoted by VIX/100, and the estimated instantaneous volatility, denoted by F_adj_MLE_Vol, of S&P 500 index within the time period from January 2, 1990, to January 31, 2011
92.5.2 Empirical Results
The previous theorem advocates a linear relationship between the VIX squared and the instantaneous variance. To the best of our knowledge, no empirical analysis of this theoretical result has been reported, possibly because estimation of the instantaneous variance used to be difficult. By virtue of the corrected Fourier transform methods, one can estimate the time series $\sigma_t^2$. Daily VIX and the estimated instantaneous volatility of the S&P 500 index from January 2, 1990, to January 31, 2011, are shown in Fig. 92.3; the data source is Yahoo! Finance. We adopt the general linear test approach (see Kutner et al. 2005) to examine the linear relationship between VIX$^2$ and the instantaneous variance $\sigma^2$. Under the full model $Y_i = \beta_0 + \beta_1 X_i + \varepsilon_i$, the null and alternative hypotheses are

$$H_0: \beta_1 = 0, \qquad H_1: \beta_1 \ne 0.$$

One can use the least squares method to obtain the sum of squared errors of the full model, denoted SSE(F), with degrees of freedom $df_F = n - 2$. Similarly, under the null hypothesis the reduced model is $Y_i = \beta_0 + \varepsilon_i$, whose sum of squared errors, denoted SSE(R), has degrees of freedom $df_R = n - 1$. The F statistic is defined by
$$F^{*} = \frac{\bigl(\mathrm{SSE}(R) - \mathrm{SSE}(F)\bigr)/(df_R - df_F)}{\mathrm{SSE}(F)/df_F} \sim F\bigl(1 - \alpha,\; df_R - df_F,\; df_F\bigr),$$

where the confidence level is $(1 - \alpha)$. Our sample size is $n = 1{,}091$, and the estimated linear equation is $\mathrm{VIX}^2 = 0.039342 + 0.180687\,V$. Given the significance level $\alpha = 1\%$, the value of the F statistic equals $6{,}192.8 > F(0.99, 1, n - 2) = 6.639621$, so the null hypothesis is rejected.
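The general linear test itself takes only a few lines; the sketch below uses synthetic data whose coefficients merely mimic the magnitudes reported above, not the actual S&P 500/VIX sample.

```python
import numpy as np
from scipy.stats import f as f_dist

def general_linear_test(y, x, alpha=0.01):
    """F test of H0: beta1 = 0 in y = beta0 + beta1 * x + eps (general linear test)."""
    n = y.size
    X = np.column_stack([np.ones(n), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    sse_full = np.sum((y - X @ beta) ** 2)             # SSE(F), full model, df_F = n - 2
    sse_red = np.sum((y - y.mean()) ** 2)              # SSE(R) for y = beta0 + eps, df_R = n - 1
    df_f, df_r = n - 2, n - 1
    f_stat = ((sse_red - sse_full) / (df_r - df_f)) / (sse_full / df_f)
    crit = f_dist.ppf(1.0 - alpha, df_r - df_f, df_f)
    return beta, f_stat, crit, f_stat > crit

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    v = 0.02 + 0.05 * rng.random(1091)                 # synthetic "instantaneous variance"
    vix2 = 0.039 + 0.18 * v + 0.002 * rng.standard_normal(v.size)
    beta, f_stat, crit, reject = general_linear_test(vix2, v)
    print("beta:", np.round(beta, 4), " F* =", round(f_stat, 1),
          " critical =", round(crit, 3), " reject H0:", reject)
```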
92.6 Concluding Remarks and Future Works
This chapter has presented nonparametric Fourier transform methods for estimating the instantaneous volatility in one dimension. Based on these results, we conducted simulation tests for a local volatility model and a stochastic volatility model, and two empirical studies: (1) volatility behavior under different sampling frequencies and (2) the linear hypothesis between the instantaneous volatility and VIX. As a matter of fact, the method can also be used in the high-dimensional case, which allows the estimation of dynamic volatility matrices; the whole approach may thus be suitable for studying portfolio risk management, systemic risk analysis, and related subjects. We leave these as future works.
References
Broto, C., & Ruiz, E. (2004). Estimation methods for stochastic volatility models: A survey. Journal of Economic Surveys, 18(5), 613–649.
Engle, R. (2009). Anticipating correlations: A new paradigm for risk management. Princeton, NJ: Princeton University Press.
Fouque, J.-P., Papanicolaou, G., & Sircar, R. (2000). Derivatives in financial markets with stochastic volatility. Cambridge: Cambridge University Press.
Gatheral, J. (2006). The volatility surface. Hoboken, NJ: Wiley.
Han, C.-H., Liu, W.-H., & Chen, T.-Y. (2014). VaR/CVaR estimation under stochastic volatility models. To appear in International Journal of Theoretical and Applied Finance.
Han, C.-H., Chang, C.-H., Kuo, C.-S., & Yu, S.-T. (2014). Robust hedging performance and volatility risk in option markets: Application to Standard and Poor's 500 and Taiwan index options. To appear in the special issue "Advances in Financial Risk Management and Economic Policy Uncertainty" of International Review of Economics and Finance.
Heston, S. (1993). A closed-form solution for options with stochastic volatility with applications to bond and currency options. Review of Financial Studies, 6(2), 327–343.
Hull, J. (2008). Options, futures, and other derivatives (7th ed.). Prentice Hall.
Jiang, G. J. (1998). Nonparametric modeling of U.S. interest rate term structure dynamics and implications on the prices of derivative securities. The Journal of Financial and Quantitative Analysis, 33(4), 465–497.
Kutner, M. H., Nachtsheim, C. J., Neter, J., & Li, W. (2005). Applied linear statistical models (5th ed.). McGraw-Hill/Irwin.
Malliavin, P., & Mancino, M. E. (2009). A Fourier transform method for nonparametric estimation of multivariate volatility. Annals of Statistics, 37(4), 1983–2010.
Molina, G., Han, C.-H., & Fouque, J.-P. (2010). MCMC estimation of multiscale stochastic volatility models. In Handbook of quantitative finance and risk management. Springer.
Oksendal, B. (1998). Stochastic differential equations (5th ed.). Springer.
Reno, R. (2008). Nonparametric estimation of the diffusion coefficient of stochastic volatility models. Econometric Theory, 24(5), 1174–1206.
Shreve, S. E. (2000). Stochastic calculus for finance II: Continuous-time models. Springer.
Tsai, R. S. (2005). Analysis of financial time series (2nd ed.). Wiley.
Wild, C. J., & Seber, G. A. F. (1999). Chance encounters: A first course in data analysis and inference. Wiley.
Yu, J. (2010). Simulation-based estimation methods for financial time series models. In J.-C. Duan, J. E. Gentle, & W. Hardle (Eds.), Handbook of computational finance. Springer.
Zhang, L., & Mykland, P. (2005). A tale of two time scales: Determining integrated volatility with noisy high-frequency data. Journal of the American Statistical Association, 100, 1394–1411.
Zhang, J. E., & Zhu, Y. (2006). VIX futures. The Journal of Futures Markets, 26(6), 521–531.
A Dynamic CAPM with Supply Effect Theory and Empirical Results
93
Cheng-Few Lee, Chiung-Min Tsai, and Alice C. Lee
Contents
93.1 Introduction
93.2 Development of Multiperiod Asset Pricing Model with Supply Effect
  93.2.1 The Demand Function for Capital Assets
  93.2.2 Supply Function of Securities
  93.2.3 Multiperiod Equilibrium Model
  93.2.4 Derivation of Simultaneous Equation System
  93.2.5 Test of Supply Effect
93.3 Data and Empirical Results
  93.3.1 Data and Descriptive Statistics
  93.3.2 Dynamic CAPM with Supply Side Effect
93.4 Summary
Appendix 1: Modeling the Price Process
Appendix 2: Identification of the Simultaneous Equation System
Appendix 3: Derivation of the Formula Used to Estimate dt
References
C.-F. Lee (*)
Department of Finance and Economics, Rutgers Business School, Rutgers, The State University of New Jersey, Piscataway, NJ, USA
Graduate Institute of Finance, National Chiao Tung University, Hsinchu, Taiwan
e-mail: [email protected]; [email protected]
C.-M. Tsai
Central Bank of the Republic of China (Taiwan), Taipei, Taiwan, Republic of China
e-mail: [email protected]
A.C. Lee
State Street Corp., USA
e-mail: [email protected]
Abstract
Breeden (1979), Grinols (1984), and Cox et al. (1985) have described the importance of the supply side for capital asset pricing. Black (1976) derives a dynamic, multiperiod CAPM, integrating endogenous demand and supply. However, Black's theoretically elegant model has never been empirically tested for its implications in dynamic asset pricing. We first theoretically extend Black's CAPM. Then, we use price, dividend per share, and earnings per share to test the existence of the supply effect with US equity data. We find that the supply effect is important in the US domestic stock market. This finding holds when we break the companies listed in the S&P 500 into ten portfolios by level of payout ratio, and it also holds consistently when we use individual stock data. A simultaneous equation system is constructed through a standard structural form of a multiperiod equation to represent the dynamic relationship between the supply of and demand for capital assets. The equation system is exactly identified under our specification. Two hypotheses related to the supply effect are then tested with respect to the parameters of the reduced-form system. The equation system is estimated by the seemingly unrelated regression (SUR) method, since SUR allows us to estimate the system simultaneously while accounting for correlated errors.
CAPM • Asset • Endogenous supply • Simultaneous equations • Reduced form • Seemingly unrelated regression (SUR) • Exactly identified • Cost of capital • Quadratic cost • Partial adjustment
93.1
Introduction
Breeden (1979), Grinols (1984), and Cox et al. (1985) have described the importance of the supply side for capital asset pricing. Cox et al. (1985) study a restricted technology that allows them to solve their model explicitly for its reduced form. Grinols (1984) focuses on describing market optimality and the supply decisions that guide firms in incomplete markets in the absence of investor unanimity. Black (1976) extends the static CAPM of Sharpe (1964), Lintner (1965), and Mossin (1966) by explicitly allowing for an endogenous supply effect of risky securities, thereby deriving a dynamic asset pricing model.¹ Black modifies the static model by explicitly allowing for the existence of a supply effect of risky securities. In addition, the demand side for risky securities is derived from a negative exponential function for the investor's utility of wealth. Black finds that the static CAPM is unnecessarily restrictive in its neglect of the supply side and proposes that his dynamic generalization of the static CAPM can provide the basis for many empirical tests, particularly with regard to the intertemporal aspects and the role of the endogenous supply side. Assuming that there is a quadratic cost structure for retiring or issuing securities and that the demand for securities may deviate from supply due to anticipated and unanticipated random shocks, Black concludes that if the supply of a risky asset is responsive to its price, large price changes will be spread over time as specified by the dynamic capital asset pricing model. One important implication of Black's model is that the efficient market hypothesis holds only if the supply of securities is fixed and independent of current prices. In short, Black's dynamic generalization of the static wealth-based CAPM adopts an endogenous supply side of risky securities by equating the quantities of risky securities demanded and supplied.
¹ This dynamic asset pricing model is different from Merton's (1973) intertemporal asset pricing model in two key aspects. First, Black's model is derived in the form of simultaneous equations. Second, Black's model is derived in terms of price changes, whereas Merton's model is derived in terms of rates of return.
Lee and Gweon (1986) extend Black's framework to allow time-varying dividend payments and then test for the existence of a supply effect under market equilibrium. Their results reject the null hypothesis of no supply effect in the US domestic stock market. The rejection seems to imply a violation of the efficient market hypothesis in the US stock market.
It is worth noting that some recent studies also relate portfolio returns to trading volume (e.g., Campbell et al. 1993; Lo and Wang 2000). Studying the relationship between aggregate stock market trading volume and the serial correlation of daily stock returns, Campbell et al. (1993) suggest that a stock price decline on a high-volume day is more likely than a stock price decline on a low-volume day. They propose the explanation that trading volume arises when random shifts in the stock demand of non-informational traders are accommodated by risk-averse market makers. Lo and Wang (2000) also examine the CAPM in an intertemporal setting. They derive an intertemporal CAPM (ICAPM) by defining preferences over wealth instead of consumption and by introducing three state variables into the exponential form of the investor's preferences, as we do in this chapter. This state-dependent utility function allows one to capture the dynamic nature of the investment problem without explicitly solving a dynamic optimization problem. Thus, the marginal utility of wealth depends not only on the dividend of the portfolio but also on future state variables. This dependence introduces dynamic hedging motives into investors' portfolio choices; that is, it induces investors to care about future market conditions when choosing their portfolios. In equilibrium, the model also implies that an investor's utility depends not only on his wealth but also on the stock payoffs directly. This "market spirit," in their terminology, affects the investor's demand for stocks. In other words, even for an investor who holds no stocks, utility fluctuates with the payoffs of the stock index.
Black (1976), Lee and Gweon (1986), and Lo and Wang (2000) develop models by using either outstanding shares or trading volumes as the variables connecting the decisions in two different periods, unlike the consumption-based CAPM, which uses consumption or macroeconomic information. Black (1976) and Lee and Gweon (1986) both derive their dynamic generalization models from the
wealth-based CAPM by adopting an endogenous supply schedule of risky securities.² Thus, information on the quantities demanded and supplied can now play a role in determining asset prices. This offers a wealth-based model as an alternative way to investigate the intertemporal CAPM.
In this chapter, we first extend Black's dynamic, simultaneous CAPM theoretically so that the existence of a supply effect in the asset pricing process can be tested. We use two datasets of price per share and dividend per share to test for the existence of a supply effect with US equity data. The first dataset consists of most companies listed in the S&P 500; the second consists of the companies listed in the Dow Jones Index. We find that the supply effect is important in the US stock market. This finding holds when we break the companies listed in the S&P 500 into ten portfolios, and it also holds with individual stock data. For example, the existence of a supply effect holds consistently in most portfolios when we test the hypotheses using individual stocks in groups of 30 companies. We also find that one cannot reject the existence of a supply effect using the stocks listed in the Dow Jones Index.
This chapter is structured as follows. In Sect. 93.2, a simultaneous equation system of asset pricing is constructed through a standard structural form of a multiperiod equation to represent the dynamic relationship between supply and demand for capital assets; the hypotheses implied by the model are also presented there. Section 93.3 describes the two sets of data used in this chapter and presents the empirical findings for the hypotheses and tests constructed in the previous section. Our summary is presented in Sect. 93.4.
93.2
Development of Multiperiod Asset Pricing Model with Supply Effect
Based on the framework of Black (1976), we derive a multiperiod equilibrium asset pricing model in this section. Black modifies the static wealth-based CAPM by explicitly allowing for the endogenous supply effect of risky securities. The demand for securities is based on the well-known models of Tobin (1958) and Markowitz (1959). However, Black further assumes a quadratic cost function for changing the short-term capital structure under a long-run optimality condition. He also assumes that the demand for securities may deviate from supply due to anticipated and unanticipated random shocks. Lee and Gweon (1986) modify and extend Black's framework to allow time-varying dividends and then test the existence of the supply effect. Lee and Gweon's model differs from Black's in two major assumptions: (1) the model allows for time-varying dividends, unlike Black's assumption of constant dividends, and (2) there is only one random, unanticipated shock on the supply side instead of two shocks, anticipated and unanticipated, as in Black's model. We follow the Lee and Gweon set of assumptions. In this section, we develop a simultaneous equation asset pricing model. First, we derive the demand function for capital assets; then we derive the supply function of securities. Next, we derive the multiperiod equilibrium model and develop the simultaneous equation system for testing the existence of supply effects. Finally, the hypotheses for testing the supply effect are developed.
² It should be noted that Lo and Wang's model does not explicitly introduce the supply equation in asset pricing determination. Also, one can identify the hedging portfolio using volume data in the Lo and Wang model setting.
93.2.1 The Demand Function for Capital Assets
The demand equation for the assets is derived under the standard assumptions of the CAPM.³ An investor's objective is to maximize the expected utility of wealth. A negative exponential utility function of wealth is assumed:

U = a − h·exp(−b·W_{t+1}),   (93.1)

where the terminal wealth W_{t+1} = W_t(1 + R_t); W_t is initial wealth; and R_t is the rate of return on the portfolio. The parameters a, b, and h are assumed to be constants.
The dollar returns on the N marketable risky securities can be represented by

X_{j,t+1} = P_{j,t+1} − P_{j,t} + D_{j,t+1},   j = 1, ..., N,   (93.2)

where
P_{j,t+1} = (random) price of security j at time t + 1,
P_{j,t} = price of security j at time t,
D_{j,t+1} = (random) dividend or coupon on security j at time t + 1.
These three variables are assumed to be jointly normally distributed. Taking the expected value of Eq. 93.2 at time t, the expected return on each security, x_{j,t+1}, can be written as

x_{j,t+1} = E_t X_{j,t+1} = E_t P_{j,t+1} − P_{j,t} + E_t D_{j,t+1},   j = 1, ..., N,   (93.3)

where E_t P_{j,t+1} = E(P_{j,t+1} | Ω_t), E_t D_{j,t+1} = E(D_{j,t+1} | Ω_t), E_t X_{j,t+1} = E(X_{j,t+1} | Ω_t), and Ω_t is the information set available at time t.
³ The basic assumptions are as follows: (1) a single-period moving horizon for all investors; (2) no transaction costs or taxes on individuals; (3) the existence of a risk-free asset with rate of return r*; (4) evaluation of the uncertain returns from investments in terms of expected return and variance of end-of-period wealth; and (5) unlimited short sales or borrowing of the risk-free asset.
Then, a typical investor's expected end-of-period wealth is

w_{t+1} = E_t W_{t+1} = W_t + r*(W_t − q_{t+1}′P_t) + q_{t+1}′x_{t+1},   (93.4)

where
P_t = (P_{1,t}, P_{2,t}, P_{3,t}, ..., P_{N,t})′,
x_{t+1} = (x_{1,t+1}, x_{2,t+1}, x_{3,t+1}, ..., x_{N,t+1})′ = E_tP_{t+1} − P_t + E_tD_{t+1},
q_{t+1} = (q_{1,t+1}, q_{2,t+1}, q_{3,t+1}, ..., q_{N,t+1})′,
q_{j,t+1} = number of units of security j after reconstruction of the portfolio,
r* = risk-free rate.
In Eq. 93.4, the first term on the right-hand side is the initial wealth, the second term is the return on the risk-free investment, and the last term is the return on the portfolio of risky securities. The variance of W_{t+1} can be written as

V(W_{t+1}) = E(W_{t+1} − w_{t+1})(W_{t+1} − w_{t+1})′ = q_{t+1}′ S q_{t+1},   (93.5)

where S = E(X_{t+1} − x_{t+1})(X_{t+1} − x_{t+1})′ is the covariance matrix of the returns of the risky securities. Maximization of the expected utility of W_{t+1} is equivalent to

Max  w_{t+1} − (b/2) V(W_{t+1}).   (93.6)

By substituting Eqs. 93.4 and 93.5 into Eq. 93.6, Eq. 93.6 can be rewritten as

Max  (1 + r*)W_t + q_{t+1}′(x_{t+1} − r*P_t) − (b/2) q_{t+1}′ S q_{t+1}.   (93.7)

Differentiating Eq. 93.7 with respect to q_{t+1}, one can solve for the optimal portfolio:

q_{t+1} = b⁻¹ S⁻¹ (x_{t+1} − r*P_t).   (93.8)

Under the assumption of homogeneous expectations, i.e., all investors have the same probability beliefs about future returns, the aggregate demand for risky securities can be summed as

Q_{t+1} = Σ_{k=1}^{m} q^k_{t+1} = c S⁻¹ [E_t P_{t+1} − (1 + r*)P_t + E_t D_{t+1}],   (93.9)

where c = Σ_k (b^k)⁻¹. In the standard CAPM, the supply of securities is fixed, denoted Q*. Then, Eq. 93.9 can be rearranged as P_t = (1/r*)(x_{t+1} − c⁻¹ S Q*), where c⁻¹ is the market price of risk. This equation is similar to Lintner's (1965) well-known equation for capital asset pricing.
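To make Eqs. 93.8 and 93.9 concrete, the following sketch (not part of the original chapter; all parameter values are hypothetical) computes the individual and aggregate demand for a three-security market with NumPy.

```python
import numpy as np

# Hypothetical inputs for three risky securities; all numbers are illustrative only.
b = 2.0                                   # investor's risk-aversion parameter
r_star = 0.01                             # risk-free rate per period
P_t = np.array([50.0, 30.0, 20.0])        # current prices P_t
x_next = np.array([2.5, 1.8, 1.1])        # expected dollar returns x_{t+1}
S = np.array([[4.0, 1.0, 0.5],
              [1.0, 3.0, 0.4],
              [0.5, 0.4, 2.0]])           # covariance matrix of dollar returns

# Eq. 93.8: optimal units demanded by a single investor
q_next = np.linalg.solve(S, x_next - r_star * P_t) / b
print("individual demand q_{t+1}:", q_next)

# Eq. 93.9: aggregate demand with m identical investors, c = sum_k (b^k)^{-1}
m = 1000
c = m / b
Q_next = c * np.linalg.solve(S, x_next - r_star * P_t)
print("aggregate demand Q_{t+1}:", Q_next)
```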
93.2.2 Supply Function of Securities
An endogenous supply side to the model is derived in this section, and we present our resulting hypotheses, mainly regarding market imperfections. For example, the existence of taxes causes firms to borrow more since interest expense is tax-deductible. The penalties for changing contractual payments (i.e., direct and indirect bankruptcy costs) are material in magnitude, so the value of the firm would be reduced if firms increased borrowing. Another imperfection is the prohibition of short sales of some securities.⁴ The costs generated by market imperfections reduce the value of a firm, and thus a firm has incentives to minimize these costs. Three more related assumptions are made here. First, a firm cannot issue a risk-free security; second, the adjustment costs of changing the capital structure are quadratic; and third, the firm is not seeking to raise new funds from the market. It is assumed that there exists a solution to the optimal capital structure and that the firm has to determine the optimal level of additional investment. The one-period objective of the firm is to achieve the minimum cost of capital, with adjustment costs involved in changing the quantity vector Q_{i,t+1}:

Min  E_t D_{i,t+1}′ Q_{i,t+1} + (1/2) ΔQ_{i,t+1}′ A_i ΔQ_{i,t+1},   subject to  P_{i,t}′ ΔQ_{i,t+1} = 0,   (93.10)

where A_i is an n_i × n_i positive-definite matrix of coefficients measuring the assumed quadratic costs of adjustment. If these costs are high enough, firms tend to stop seeking to raise new funds or to retire old securities. The solution to Eq. 93.10 is

ΔQ_{i,t+1} = A_i⁻¹(λ_i P_{i,t} − E_t D_{i,t+1}),   (93.11)

where λ_i is the scalar Lagrangian multiplier. Aggregating Eq. 93.11 over the N firms, the supply function is given by

ΔQ_{t+1} = A⁻¹(B P_t − E_t D_{t+1}),   (93.12)

where A⁻¹ = diag(A_1⁻¹, A_2⁻¹, ..., A_N⁻¹), B = diag(λ_1 I, λ_2 I, ..., λ_N I), and Q = (Q_1, Q_2, ..., Q_N)′.
⁴ Theories as to why taxes and penalties affect capital structure are first proposed by Modigliani and Miller (1958) and then Miller (1977). Another market imperfection, the prohibition on short sales of securities, can generate "shadow risk premiums" and thus provide further incentives for firms to reduce their cost of capital by diversifying their securities.
Equation 93.12 implies that a lower price for a security will increase the amount retired of that security. In other words, the amount of each security newly issued is positively related to its own price and is negatively related to its required return and the prices of other securities.
93.2.3 Multiperiod Equilibrium Model
The aggregate demand for risky securities presented by Eq. 93.9 can be seen as a difference equation; the prices of risky securities are determined in a multiperiod framework. The aggregate supply schedule has a similar structure. As a result, the model can be summarized by the following equations for demand and supply, respectively:

Q_{t+1} = c S⁻¹ [E_t P_{t+1} − (1 + r*)P_t + E_t D_{t+1}],   (93.9)

ΔQ_{t+1} = A⁻¹(B P_t − E_t D_{t+1}).   (93.12)

Differencing Eq. 93.9 for periods t and t + 1 and equating the result with Eq. 93.12 gives a new equation relating demand and supply for securities:

c S⁻¹ [E_t P_{t+1} − E_{t−1}P_t − (1 + r*)(P_t − P_{t−1}) + E_t D_{t+1} − E_{t−1}D_t] = A⁻¹(B P_t − E_t D_{t+1}) + V_t,   (93.13)

where V_t is included to take into account possible discrepancies in the system. Here, V_t is assumed to be a random disturbance with zero expected value and no autocorrelation. Equation 93.13 is a second-order system of stochastic difference equations in P_t and the conditional expectations E_{t−1}P_t and E_{t−1}D_t. Taking the conditional expectation at time t − 1 of Eq. 93.13, and using the properties E_{t−1}[E_tP_{t+1}] = E_{t−1}P_{t+1} and E_{t−1}[V_t] = 0, Eq. 93.13 becomes

c S⁻¹ [E_{t−1}P_{t+1} − E_{t−1}P_t − (1 + r*)(E_{t−1}P_t − P_{t−1}) + E_{t−1}D_{t+1} − E_{t−1}D_t] = A⁻¹(B E_{t−1}P_t − E_{t−1}D_{t+1}).   (93.13′)

Subtracting Eq. 93.13′ from Eq. 93.13 yields

[(1 + r*) c S⁻¹ + A⁻¹B](P_t − E_{t−1}P_t) = c S⁻¹ (E_t P_{t+1} − E_{t−1}P_{t+1}) + (c S⁻¹ + A⁻¹)(E_t D_{t+1} − E_{t−1}D_{t+1}) − V_t.   (93.14)
Equation 93.14 shows that prediction errors in prices (the left hand side) due to unexpected disturbance are a function of expectation adjustments in price (first term on the right hand side) and dividends (the second term on the right hand side) two periods ahead. This equation can be seen as a generalized capital asset pricing model.
One important implication of the model is that the supply-side effect can be examined by assuming that the adjustment costs are large enough to keep firms from seeking to raise new funds or retiring old securities. In other words, the assumption of high enough adjustment costs causes the inverse of the matrix A in Eq. 93.14 to vanish. The model is therefore reduced to the following certainty-equivalent relationship:

P_t − E_{t−1}P_t = (1 + r*)⁻¹(E_t P_{t+1} − E_{t−1}P_{t+1}) + (1 + r*)⁻¹(E_t D_{t+1} − E_{t−1}D_{t+1}) + U_t,   (93.15)

where U_t = −c⁻¹ S (1 + r*)⁻¹ V_t. Equation 93.15 suggests that the current forecast error in price is determined by the sum of the expectation adjustments in its own next-period price and dividend, discounted at the rate 1 + r*.
93.2.4 Derivation of Simultaneous Equation System
From Eq. 93.15, if the price series follows a random walk, then it can be represented as P_t = P_{t−1} + a_t, where a_t is white noise. It follows that E_{t−1}P_t = P_{t−1}, E_tP_{t+1} = P_t, and E_{t−1}P_{t+1} = P_{t−1}. According to the results in Appendix 1, the assumption that prices follow a random walk seems to be reasonable for both datasets. As a result, Eq. 93.14 becomes

(r* c S⁻¹ + A⁻¹B)(P_t − P_{t−1}) − (c S⁻¹ + A⁻¹)(E_t D_{t+1} − E_{t−1}D_{t+1}) = −V_t.   (93.16)

Equation 93.16 can be rewritten as

G p_t + H d_t = V_t,   (93.17)

where G = −(r* c S⁻¹ + A⁻¹B), H = c S⁻¹ + A⁻¹, d_t = E_tD_{t+1} − E_{t−1}D_{t+1}, and p_t = P_t − P_{t−1}. If Eq. 93.17 is exactly identified and the matrix G is assumed to be nonsingular, then, as shown in Greene (2004), the reduced form of this model may be written as⁵

p_t = Π d_t + U_t,   (93.18)
⁵ The identification of the simultaneous equation system can be found in Appendix 2.
where Π is the n × n matrix of reduced-form coefficients and U_t is a column vector of n reduced-form disturbances, with

Π = −G⁻¹H   and   U_t = G⁻¹V_t.   (93.19)
Equations 93.18 and 93.19 are used to test the existence of supply effect in the next section.
93.2.5 Test of Supply Effect
Since the simultaneous equation system in Eq. 93.17 is exactly identified, it can be estimated through its reduced form, Eq. 93.18. A proof of the identification of Eq. 93.17 is given in Appendix 2. That is, Eq. 93.18, p_t = Π d_t + U_t, can be used to test the supply effect. For example, in the case of two portfolios, the coefficient matrices G and H in Eq. 93.17 can be written as⁶

G = [ g_11  g_12 ]   [ −(r*cs_11 + a_1b_1)   −r*cs_12            ]
    [ g_21  g_22 ] = [ −r*cs_21              −(r*cs_22 + a_2b_2) ],

H = [ h_11  h_12 ]   [ cs_11 + a_1   cs_12       ]
    [ h_21  h_22 ] = [ cs_21         cs_22 + a_2 ].   (93.20)

Since Π = −G⁻¹H from Eq. 93.19, the elements of Π can be calculated as

π_11 = [(r*cs_22 + a_2b_2)(cs_11 + a_1) − r*cs_12 cs_21] / |G|,
π_12 = [(r*cs_22 + a_2b_2)cs_12 − r*cs_12(cs_22 + a_2)] / |G|,
π_21 = [−r*cs_21(cs_11 + a_1) + (r*cs_11 + a_1b_1)cs_21] / |G|,
π_22 = [−r*cs_21 cs_12 + (r*cs_11 + a_1b_1)(cs_22 + a_2)] / |G|.   (93.21)

From Eq. 93.21, if there is a high enough quadratic cost of adjustment, or if a_1 = a_2 = 0, then with s_12 = s_21 the matrix becomes a scalar matrix: the off-diagonal elements are all zero and the diagonal elements are all equal to r*c²(s_11 s_22 − s_12²)/|G|. In other words, if the cost of adjustment is high enough, firms tend to stop seeking to raise new funds or to retire old securities; mathematically, this is represented by all off-diagonal elements being zero and all diagonal elements being equal to each other in the matrix Π.
⁶ s_ij is the (i, j) element of the variance–covariance matrix of returns; a_i and b_i are the supply adjustment cost and the overall cost of capital of firm i, respectively.
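The collapse of Π to a scalar matrix when the adjustment-cost terms vanish can be verified numerically. The sketch below is our own illustration under the sign convention Π = −G⁻¹H used above; the covariance matrix and the values of c, r*, a_i, and b_i are arbitrary.

```python
import numpy as np

def pi_matrix(S_inv, c, r_star, a, b):
    """Reduced-form matrix Pi = -G^{-1}H for the two-asset case of Eq. 93.20.
    a[i] is the adjustment-cost term of firm i, b[i] its cost of capital."""
    A_inv = np.diag(a)
    A_inv_B = np.diag([a[0] * b[0], a[1] * b[1]])
    G = -(r_star * c * S_inv + A_inv_B)
    H = c * S_inv + A_inv
    return -np.linalg.solve(G, H)        # -G^{-1}H

S = np.array([[0.010, 0.004],
              [0.004, 0.020]])           # illustrative covariance matrix
S_inv = np.linalg.inv(S)
c, r_star = 500.0, 0.02

# Supply effect present: nonzero adjustment terms give nonzero off-diagonals
print(pi_matrix(S_inv, c, r_star, a=[3.0, 5.0], b=[0.08, 0.10]))

# Supply fixed (a1 = a2 = 0): Pi reduces to (1/r*) times the identity matrix
print(pi_matrix(S_inv, c, r_star, a=[0.0, 0.0], b=[0.08, 0.10]))
```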
In general, this can be extended to the case of N portfolios, for which Eq. 93.18 becomes

[ p_1t ]   [ π_11  π_12  ⋯  π_1N ] [ d_1t ]   [ u_1t ]
[ p_2t ] = [ π_21  π_22  ⋯  π_2N ] [ d_2t ] + [ u_2t ]
[  ⋮   ]   [  ⋮     ⋮    ⋱    ⋮  ] [  ⋮   ]   [  ⋮   ]
[ p_Nt ]   [ π_N1  π_N2  ⋯  π_NN ] [ d_Nt ]   [ u_Nt ].   (93.22)

Equation 93.22 shows that if an investor expects a change in the prediction of the next dividend because of additional information (e.g., a change in earnings) arriving during the current period, then the price of the security changes. For the US equity market, if one believes that the way expectation errors in dividends are built into the current price is the same for all securities, then price changes are influenced only by each security's own dividend expectation errors. Otherwise, if the supply of securities is flexible, the change in price would be influenced by the expectation adjustments in the dividends of all other stocks as well as by that in its own dividend. Therefore, two hypotheses related to the supply effect are to be tested regarding the parameters of the reduced-form system in Eq. 93.18:
Hypothesis 1: All the off-diagonal elements in the coefficient matrix Π are zero if the supply effect does not exist.
Hypothesis 2: All the diagonal elements in the coefficient matrix Π are equal in magnitude if the supply effect does not exist.
These two hypotheses should be satisfied jointly. That is, if the supply effect does not exist, price changes of a security should be a function of its own dividend expectation adjustments, and the coefficients should be equal across securities. In the model described in Eq. 93.16, if an investor expects a change in the prediction of the next dividend due to additional information during the current period, then the price of the security changes. Under the assumption of efficiency in the domestic stock market, if the supply of securities is fixed, then the way expectation errors in dividends are built into the current price is the same for all securities; this implies that price changes are influenced only by each security's own dividend expectation adjustments. If the supply of securities is flexible, then the change in price is influenced by the expectation adjustments in the dividends of all other securities as well as by that of its own dividend.
93.3
Data and Empirical Results
In this section, we carry out the tests for the US domestic stock market. Most details of the model, the methodology, and the hypotheses for the empirical tests were discussed in Sect. 93.2. However, before testing the hypotheses, some further details of the related tests needed to support the assumptions used in the model are also briefly discussed.
This section examines the hypotheses derived earlier for the US domestic stock market, first using the companies listed in the S&P 500 and then using the companies listed in the Dow Jones Index. If the supply of risky assets is responsive to price, then large price changes, which are due to changes in the expectation of future dividends, will be spread over time; in other words, a supply effect exists in the US domestic stock market. This implies that a dynamic rather than a static CAPM should be used for testing capital asset pricing in the equity markets of the United States.
93.3.1 Data and Descriptive Statistics
Three hundred companies are selected from the S&P 500 and grouped into ten portfolios, each containing 30 companies, by their payout ratios. The data are obtained from the Compustat North America industrial quarterly file and run from the first quarter of 1981 to the last quarter of 2002. The companies selected satisfy two criteria. First, the company appears in the S&P 500 at some point during 1981 through 2002. Second, the company must have complete data available, including price, dividend, earnings per share, and shares outstanding, over the 88 quarters (22 years). Firms are eliminated from the sample if either of the following occurs: (i) reported earnings are trivial or negative; (ii) reported dividends are trivial. Three hundred fourteen firms remain after these screens. Finally, after excluding the fourteen companies with the highest and lowest average payout ratios, the remaining 300 firms are grouped into ten portfolios of 30 companies each by payout ratio. Figure 93.1 compares the S&P 500 index with the value-weighted index of the 300 selected firms (M); the two series track each other closely before the third quarter of 1999 but show some differences afterwards.
To group the 300 firms, the payout ratio for each firm in each year is computed by dividing the sum of four quarters' dividends by the sum of four quarters' earnings; the yearly ratios are then averaged over the 22-year period. The 30 firms with the highest average payout ratio comprise portfolio one, and so on. The value-weighted average price, dividend, and earnings of each portfolio are then computed. Characteristics and summary statistics of the ten portfolios are presented in Tables 93.1 and 93.2, respectively. Table 93.1 reports the return, payout ratio, size, and beta of the ten portfolios. The results suggest an inverse relationship between return and payout ratio; the relationship between payout ratio and beta, however, is not so clear. This finding is similar to that of Fama and French (1992). Table 93.2 shows the first four moments of the quarterly returns of the market portfolio and the ten portfolios.
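A rough sketch of the grouping procedure just described is given below, using pandas. The file name and column layout are hypothetical; actual Compustat item names differ.

```python
import pandas as pd

# Hypothetical quarterly panel, one row per firm-quarter, with columns
#   gvkey, year, quarter, dps (dividend/share), eps (earnings/share), price, shares
df = pd.read_csv("sp500_quarterly.csv")

# Yearly payout ratio = (sum of four quarterly dividends) / (sum of four quarterly earnings),
# averaged over the sample period for each firm.
yearly = df.groupby(["gvkey", "year"])[["dps", "eps"]].sum()
avg_payout = (yearly["dps"] / yearly["eps"]).groupby("gvkey").mean()

# Rank firms by average payout ratio and assign ten portfolios of 30 firms each
# (portfolio 1 = highest payout ratio).
rank = avg_payout.rank(ascending=False, method="first")
assignment = ((rank - 1) // 30 + 1).astype(int).rename("portfolio").reset_index()

# Value-weighted portfolio price, dividend, and earnings for every quarter.
df = df.merge(assignment, on="gvkey")
df["mktcap"] = df["price"] * df["shares"]

def value_weighted(group):
    w = group["mktcap"] / group["mktcap"].sum()
    return pd.Series({"P": (w * group["price"]).sum(),
                      "D": (w * group["dps"]).sum(),
                      "E": (w * group["eps"]).sum()})

portfolio_series = df.groupby(["portfolio", "year", "quarter"]).apply(value_weighted)
```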
Fig. 93.1 Comparison of the S&P 500 index and the value-weighted market portfolio (M), quarterly, 1982-2002 (index level on the vertical axis)
Table 93.1 Characteristics of ten portfolios

Portfolio(a)   Return(b)   Payout(c)   Size (000)   Beta (M)
1              0.0351      0.7831      193,051      0.7028
2              0.0316      0.7372      358,168      0.8878
3              0.0381      0.5700      332,240      0.8776
4              0.0343      0.5522      141,496      1.0541
5              0.0410      0.5025      475,874      1.1481
6              0.0362      0.4578      267,429      1.0545
7              0.0431      0.3944      196,265      1.1850
8              0.0336      0.3593      243,459      1.0092
9              0.0382      0.2907      211,769      0.9487
10             0.0454      0.1381      284,600      1.1007

(a) The 30 firms with the highest payout ratio comprise portfolio one, and so on
(b) The price, dividend, and earnings of each portfolio are value-weighted averages of the 30 firms included in the same category
(c) The payout ratio for each firm in each year is found by dividing the sum of four quarters' dividends by the sum of four quarters' earnings; the yearly ratios are then averaged over the 22-year period
The coefficients of skewness and kurtosis and the Jarque-Bera statistics show that one cannot reject the hypothesis that the log returns of most portfolios are normal. The kurtosis statistics for most sample portfolios are close to three, which indicates that heavy tails are not an issue. Additionally, the Jarque-Bera coefficients indicate that the hypothesis of a Gaussian distribution cannot be rejected for most portfolios. It therefore seems unnecessary to deal with heteroskedasticity when estimating the domestic stock market model with quarterly data.
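For reference, the skewness, kurtosis, and Jarque-Bera statistics reported in Table 93.2 can be reproduced for any return series along the following lines (a SciPy-based sketch; the series here is simulated only to keep the snippet self-contained).

```python
import numpy as np
from scipy import stats

# Placeholder for 88 quarterly log returns of one portfolio
ret = np.random.default_rng(0).normal(0.036, 0.071, 88)

skew = stats.skew(ret)
kurt = stats.kurtosis(ret, fisher=False)      # raw kurtosis, close to 3 under normality
jb_stat, jb_pvalue = stats.jarque_bera(ret)
print(f"skewness={skew:.4f} kurtosis={kurt:.4f} JB={jb_stat:.4f} (p={jb_pvalue:.4f})")
```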
Table 93.2 Summary statistics of portfolio quarterly returns(a)

Portfolio          Mean (quarterly)   Std. dev. (quarterly)   Skewness   Kurtosis   Jarque-Bera
Market portfolio   0.0364             0.0710                  0.4604     3.9742     6.5142*
Portfolio 1        0.0351             0.0683                  0.5612     3.8010     6.8925*
Portfolio 2        0.0316             0.0766                  1.1123     5.5480     41.470**
Portfolio 3        0.0381             0.0768                  0.3302     2.8459     1.6672*
Portfolio 4        0.0343             0.0853                  0.1320     3.3064     0.5928
Portfolio 5        0.0410             0.0876                  0.4370     3.8062     5.1251
Portfolio 6        0.0362             0.0837                  0.2638     3.6861     2.7153
Portfolio 7        0.0431             0.0919                  0.1902     3.3274     0.9132
Portfolio 8        0.0336             0.0906                  0.2798     3.3290     1.5276
Portfolio 9        0.0382             0.0791                  0.2949     3.8571     3.9236
Portfolio 10       0.0454             0.0985                  0.0154     2.8371     0.0996

(a) Quarterly returns from 1981:Q1 to 2002:Q4 are calculated
* and ** denote statistical significance at the 5 % and 1 % levels, respectively
93.3.2 Dynamic CAPM with Supply Side Effect
If one believes that the stock market is efficient (i.e., that the way expectation errors in dividends are built into the current price is the same for all securities), then price changes should be influenced only by each security's own dividend expectation errors. Otherwise, if the supply of securities is flexible, the change in price would be influenced by the expectation adjustments in the dividends of other portfolios as well as by that in its own dividend. Thus, two hypotheses related to the supply effect are to be tested and should be satisfied jointly in order to examine whether a supply effect exists.
Recall from the previous section that the structural-form equations are exactly identified and that the series of expectation adjustments in dividends, d_t, are exogenous variables (d_t can be estimated from earnings per share and dividends per share by using a partial adjustment model, as presented in Appendix 3). The reduced-form equations can therefore be used to test the supply effect. That is, Eq. 93.22 is examined through the following hypotheses:
Hypothesis 1: All the off-diagonal elements in the coefficient matrix Π are zero if the supply effect does not exist.
Hypothesis 2: All the diagonal elements in the coefficient matrix Π are equal in magnitude if the supply effect does not exist.
These two hypotheses should be satisfied jointly. That is, if the supply effect does not exist, price changes of each portfolio should be a function of its own dividend expectation adjustments, and the coefficients should be equal across all portfolios. The estimated coefficients of the simultaneous equation system for the ten portfolios are summarized in Table 93.3.⁷
⁷ The results are similar when using either the FIML or the SUR approach. We report the SUR estimates.
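As a sketch of the estimation step, note that every equation in Eq. 93.22 shares the same regressors (all ten d_jt series); in that case SUR point estimates coincide with equation-by-equation OLS. The snippet below (our own illustration, not the authors' code) exploits this and then forms the cross-equation covariance needed for the joint Wald test of Hypothesis 2. The arrays p and d are assumed to hold the T×10 matrices of price changes and dividend-expectation adjustments.

```python
import numpy as np

def sur_identical_regressors(p, d):
    """Estimate p_t = Pi d_t + u_t when every equation shares the same regressors.
    p, d : (T, N) arrays. Returns Pi, the error covariance, and the coefficient covariance."""
    T, N = p.shape
    X = d
    XtX_inv = np.linalg.inv(X.T @ X)
    Pi = (XtX_inv @ X.T @ p).T                 # Pi[i, j]: effect of d_j on p_i
    U = p - X @ Pi.T                           # residuals, equation by equation
    Sigma = U.T @ U / (T - N)                  # cross-equation error covariance
    V = np.kron(Sigma, XtX_inv)                # cov of stacked coefficients (eq. 1, eq. 2, ...)
    return Pi, Sigma, V

def wald_equal_diagonal(Pi, V):
    """Wald statistic for Hypothesis 2: all diagonal elements of Pi equal (chi2 with N-1 df)."""
    N = Pi.shape[0]
    theta = Pi.reshape(-1)                     # [pi_11..pi_1N, pi_21..pi_2N, ...]
    R = np.zeros((N - 1, N * N))
    for k in range(N - 1):
        R[k, k * N + k] = 1.0                  # pi_kk
        R[k, (k + 1) * N + (k + 1)] = -1.0     # minus pi_(k+1)(k+1)
    diff = R @ theta
    return diff @ np.linalg.solve(R @ V @ R.T, diff)
```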
Table 93.3 Coefficients for matrix Π′ (ten portfolios): estimated reduced-form coefficients of each of the ten portfolio equations, with standard errors in parentheses and t-statistics in brackets; the R² and F-statistic of each equation are also reported. Standard errors in ( ) and t-statistics in [ ].
Results in Table 93.3 indicate that the estimated diagonal elements seem to vary across portfolios and most of the off-diagonal elements are significantly different from zero. However, simply inspecting the elements of the matrix Π cannot by itself justify accepting or rejecting the null hypotheses derived for testing the supply effect; two tests should be carried out separately to check whether the two hypotheses are both satisfied.
For the first hypothesis, the test of the supply effect on the off-diagonal elements, the following regression, in accordance with Eq. 93.22, is run for each portfolio:

p_{i,t} = b_i d_{i,t} + Σ_{j≠i} b_j d_{j,t} + e_{i,t},   i, j = 1, ..., 10.   (93.23)

The null hypothesis can then be written as H0: b_j = 0, j = 1, ..., 10, j ≠ i. The results are reported in Table 93.4. Two test statistics are used: the first uses an F distribution with 9 and 76 degrees of freedom, and the second uses a chi-squared distribution with 9 degrees of freedom. The null hypothesis is rejected at the 5 % significance level in six out of ten portfolios, and only two portfolios cannot be rejected at the 10 % significance level. This indicates that the null hypothesis can be rejected at conventional levels of significance.
For the second hypothesis, concerning all diagonal elements of Eq. 93.22, the following null hypothesis is tested:

H0: π_{i,i} = π_{j,j}   for all i, j = 1, ..., 10.

To perform this test, we estimate Eq. 93.22 simultaneously and calculate the Wald statistic under nine restrictions on the equation system. Under these nine restrictions, the Wald test statistic has a chi-square distribution with 9 degrees of freedom. The statistic is 18.858, which is greater than the 5 % critical value of 16.92. Since the statistic corresponds to a p-value of 0.0265, one can reject the null hypothesis at the 5 % level but not at the 1 % level. In other words, the diagonal elements are not equal to each other in magnitude. In conclusion, these empirical results are sufficient to reject the two null hypotheses of the nonexistence of a supply effect in the US stock market.
To check whether individual stocks hold up to the same testing, we use individual stock data in groups of 30 companies. The results are summarized in Table 93.5, which shows that the conclusion above is sustained with individual stock data. More specifically, the diagonal elements are not equal to each other at any conventional significance level, and the off-diagonal elements are significantly different from zero in each group of 30 individual stocks. We also find that one cannot reject the existence of a supply effect using the stocks listed in the Dow Jones Index. Again, to test the supply effect on the off-diagonal elements, Eq. 93.23 is run for each company:

p_{i,t} = b_i d_{i,t} + Σ_{j≠i} b_j d_{j,t} + e_{i,t},   i, j = 1, ..., 29.   (93.23′)
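A sketch of the per-equation test of Hypothesis 1 with statsmodels is given below; p_i and d are assumed to be already constructed as in Eq. 93.23, and the function is our own illustration rather than the authors' code.

```python
import numpy as np
import statsmodels.api as sm

def off_diagonal_test(p_i, d, i):
    """Joint test of H0: b_j = 0 for all j != i in Eq. 93.23.
    p_i : (T,) price changes of portfolio i;  d : (T, N) dividend-expectation adjustments."""
    T, N = d.shape
    res = sm.OLS(p_i, d).fit()                 # no intercept, as in Eq. 93.23
    R = np.delete(np.eye(N), i, axis=0)        # restriction matrix picking the N-1 cross terms
    f_result = res.f_test(R)                   # F(N-1, T-N) version
    chi2_result = res.wald_test(R, use_f=False)  # chi-squared(N-1) version
    return f_result, chi2_result
```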
Table 93.4 Test of supply effect on off-diagonal elements of matrix Π(a,b)

Portfolio      R²       F-statistic   p-value   Chi-square   p-value
Portfolio 1    0.1518   1.7392        0.0872    17.392*      0.0661
Portfolio 2    0.1308   1.4261        0.1852    14.261       0.1614
Portfolio 3    0.4095   5.4896        0.0000    53.896***    0.0000
Portfolio 4    0.1535   1.9240        0.0607    17.316**     0.0440
Portfolio 5    0.1706   1.9511        0.0509    19.511**     0.0342
Portfolio 6    0.2009   1.2094        0.2988    12.094       0.2788
Portfolio 7    0.2021   1.8161        0.0718    18.161*      0.0523
Portfolio 8    0.1849   1.9599        0.0497    19.599**     0.0333
Portfolio 9    0.1561   1.8730        0.0622    18.730**     0.0438
Portfolio 10   0.3041   3.5331        0.0007    35.331***    0.0001

(a) p_{i,t} = b_i′ d_{i,t} + Σ_{j≠i} b_j′ d_{j,t} + e′_{i,t}, i, j = 1, ..., 10. Hypothesis: all b_j = 0, j = 1, ..., 10, j ≠ i
(b) The first test uses an F distribution with 9 and 76 degrees of freedom, and the second uses a chi-squared distribution with 9 degrees of freedom
*, **, and *** denote statistical significance at the 10 %, 5 %, and 1 % levels, respectively
The null hypothesis can then be written as H0: b_j = 0, j = 1, ..., 29, j ≠ i. The results are summarized in Table 93.6. The null hypothesis is rejected at the 1 % significance level in 26 out of 29 companies. For the second hypothesis, on all diagonal elements, the following null hypothesis is also tested: H0: π_{i,i} = π_{j,j} for all i, j = 1, ..., 29. The Wald test statistic has a chi-square distribution with 28 degrees of freedom. The statistic is 86.35; that is, one can reject this null hypothesis at the 1 % significance level.
93.4
Summary
We examine an asset pricing model that incorporates a firm's decision concerning the supply of risky securities into the CAPM. The model focuses on the firm's financing decision by explicitly introducing the firm's supply of risky securities into the static CAPM and by allowing the supply of risky securities to be a function of the security price. Expected returns are thus endogenously determined by both demand and supply decisions within the model; in other words, the supply effect may be an important factor in capital asset pricing. Our objective is to investigate the existence of a supply effect in the US stock market. We find that the supply effect is important in the US stock market. This finding holds when we break the companies listed in the S&P 500 into ten portfolios, and it also holds with individual stock data. These test results show that the two null hypotheses of the nonexistence of a supply effect are not jointly satisfied. This evidence seems sufficient to support the existence of a supply effect, implying a violation of the assumptions of the one-period static CAPM and suggesting that a dynamic asset pricing model may be a better choice for the US domestic stock market.
Table 93.5 Test of supply effect (by individual stock)

Test of the diagonal elements: H0: π_ii = π_jj for all i, j = 1, 2, ..., 30. Test of the supply effect on the off-diagonal elements: p_{i,t} = b_i′ d_{i,t} + Σ_{j≠i} b_j′ d_{j,t} + e′_{i,t}, i, j = 1, 2, ..., 30; H0: all b_j = 0, j ≠ i (the last three columns report, for each significance level, the number of the 30 equations in which H0 is rejected).

Group      Diagonal-elements test                          Rejections at 1 %   5 %   10 %
Group 1    χ² = 113.65, p = 0.0000 → reject H0 at 1 %      23                  25    25
Group 2    χ² = 52.08,  p = 0.0053 → reject H0 at 1 %      21                  24    25
Group 3    χ² = 86.53,  p = 0.0000 → reject H0 at 1 %      26                  27    28
Group 4    χ² = 88.58,  p = 0.0000 → reject H0 at 1 %      21                  24    25
Group 5    χ² = 101.14, p = 0.0000 → reject H0 at 1 %      25                  26    28
Group 6    χ² = 69.14,  p = 0.0000 → reject H0 at 1 %      17                  21    22
Group 7    χ² = 181.10, p = 0.0000 → reject H0 at 1 %      29                  30    30
Group 8    χ² = 116.97, p = 0.0000 → reject H0 at 1 %      29                  29    29
Group 9    χ² = 117.44, p = 0.0000 → reject H0 at 1 %      27                  28    29
Group 10   χ² = 109.50, p = 0.0000 → reject H0 at 1 %      25                  27    27
For future research, we will first modify the simultaneous equation asset pricing model defined in Eqs. 93.9 and 93.12 to allow for testing the existence of market disequilibrium in dynamic asset pricing. We will then use the disequilibrium estimation methods developed by Amemiya (1974), Fair and Jaffee (1972), and Quandt (1988) to test whether prices adjust in response to excess demand in the equity market.
Appendix 1: Modeling the Price Process
In Sect. 93.2.4, Eq. 93.16 is derived from Eq. 93.15 under the assumption that the price series follow a random walk process. Thus, before further discussion, we should test the order of integration of these price series. From Hamilton (1994), two widely used unit root tests are the Dickey-Fuller (DF) and the augmented Dickey-Fuller (ADF) tests. The former can be represented as

P_t = μ + γP_{t−1} + e_t,

and the latter can be written as

ΔP_t = μ + γP_{t−1} + δ_1ΔP_{t−1} + δ_2ΔP_{t−2} + ... + δ_pΔP_{t−p} + e_t.
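The sketch below shows how such unit root tests can be run with statsmodels (the series is simulated here only to keep the snippet self-contained); the Phillips-Perron variant reported in Table 93.7 is available in, e.g., the arch package.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

# Illustrative quarterly price level series generated as a random walk
rng = np.random.default_rng(1)
P = 100 + np.cumsum(rng.normal(0, 2, 88))

for name, series in [("level", P), ("first difference", np.diff(P))]:
    stat, pvalue, *_ = adfuller(series, regression="c")
    print(f"ADF on {name}: stat={stat:.2f}, p-value={pvalue:.3f}")
```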
Table 93.6 Test of supply effect (companies listed in the Dow Jones Index)

GVKEY   Security i                     R² of equation i   Chi-square (H0: all b_j = 0, j ≠ i)   p-value
1300    Honeywell International Inc    0.7088             137.08                                0.0000
1356    Alcoa Inc                      0.6716             120.84                                0.0000
1447    American Express               0.4799             55.47                                 0.0015
1581    AT&T Corp                      0.5980             56.16                                 0.0012
2285    Boeing Co                      0.5291             66.75                                 0.0001
2817    Caterpillar Inc                0.5887             83.10                                 0.0000
2968    JPMorgan Chase & Co            0.5352             68.12                                 0.0000
3144    Coca-Cola Co                   0.5927             87.04                                 0.0000
3243    Citigroup Inc                  0.6082             88.63                                 0.0000
3980    Disney (Walt) Co               0.6457             104.06                                0.0000
4087    Du Pont (E I) De Nemours       0.6231             98.37                                 0.0000
4194    Eastman Kodak Co               0.3793             36.14                                 0.1416
4503    Exxon Mobil Corp               0.5653             76.50                                 0.0000
5047    General Electric Co            0.5425             61.17                                 0.0003
5073    General Motors Corp            0.5372             66.73                                 0.0001
5606    Hewlett-Packard Co             0.4755             53.61                                 0.0025
5680    Home Depot Inc                 0.6753             106.00                                0.0000
6008    Intel Corp                     0.5174             60.05                                 0.0004
6066    Intl Business Machines Corp    0.5596             75.31                                 0.0000
6104    Intl Paper Co                  0.5512             72.58                                 0.0000
6266    Johnson & Johnson              0.5211             67.59                                 0.0000
7154    McDonalds Corp                 0.4416             45.53                                 0.0195
7257    Merck & Co                     0.4109             40.82                                 0.0558
7435    3M CO                          0.6344             105.07                                0.0000
8543    Altria Group Inc               0.5751             72.25                                 0.0000
8762    Procter & Gamble Co            0.5816             84.19                                 0.0000
9899    SBC Communications Inc         0.5486             72.81                                 0.0000
10983   United Technologies Corp       0.6595             116.19                                0.0000
11259   Wal-Mart Stores                0.6488             111.85                                0.0000

Test of the off-diagonal elements: p_{i,t} = b_i′ d_{i,t} + Σ_{j≠i} b_j′ d_{j,t} + e′_{i,t}, for i, j = 1, ..., 29; null hypothesis H0: all b_j = 0, j = 1, 2, ..., 29, j ≠ i
Test of supply effect on the diagonal elements, H0: π_ii = π_jj for all i, j = 1, 2, ..., 29. Result: χ² = 86.35, p-value = 0.0000 → reject H0 at 1 %
Microsoft Corp. is not included since it had paid dividends only twice during the whole sample period
Similarly, for the US stock market, the Phillips-Perron test is used to check whether the value-weighted price of the market portfolio follows a random walk process. The results of the tests for each series are summarized in Table 93.7. It appears that one cannot reject the hypothesis that all the series follow a random walk: the null hypothesis of a unit root in levels cannot be rejected for any series, but it is rejected for the first difference of the price of every portfolio. This result is consistent with most studies, which find that financial price series follow a random walk process.
Table 93.7 Unit root tests for P_t

                                P_t = μ + γP_{t−1} + e_t               Phillips-Perron test(a)
Portfolio           Estimated γ (std. error)   Adj. R²        Level     1st difference(b)
Market portfolio    1.0060 (0.0159)            0.9788         0.52      8.48**
S&P 500             0.9864 (0.0164)            0.9769         0.90      9.59**
Portfolio 1         0.9883 (0.0172)            0.9746         0.56      8.67**
Portfolio 2         0.9877 (0.0146)            0.9815         0.97      9.42**
Portfolio 3         0.9913 (0.0149)            0.9809         0.51      13.90**
Portfolio 4         0.9935 (0.0143)            0.9825         0.61      7.66**
Portfolio 5         0.9933 (0.0158)            0.9787         0.43      9.34**
Portfolio 6         0.9950 (0.0150)            0.9808         0.32      8.66**
Portfolio 7         0.9892 (0.0155)            0.9793         0.64      9.08**
Portfolio 8         0.9879 (0.0166)            0.9762         0.74      9.37**
Portfolio 9         0.9939 (0.0116)            0.9884         0.74      7.04**
Portfolio 10        0.9889 (0.0182)            0.9716         0.69      9.07**

* 5 % significance level; ** 1 % significance level
(a) The process is assumed to be a random walk without drift
(b) The null hypothesis of a zero intercept term, μ, cannot be rejected at the 5 % or 1 % level for any portfolio
Appendix 2: Identification of the Simultaneous Equation System
Note that, given that G is nonsingular, Π = −G⁻¹H in Eq. 93.19 can be written as

A W = 0,   (93.24)

where

A = [ G  H ] = [ g_11 g_12 ... g_1n  h_11 h_12 ... h_1n ;
                 g_21 g_22 ... g_2n  h_21 h_22 ... h_2n ;
                 ...                                     ;
                 g_n1 g_n2 ... g_nn  h_n1 h_n2 ... h_nn ],

W = [ Π ; I_n ] = [ π_11 π_12 ... π_1n ;
                    π_21 π_22 ... π_2n ;
                    ...                 ;
                    π_n1 π_n2 ... π_nn ;
                    1    0    ... 0    ;
                    0    1    ... 0    ;
                    ...                 ;
                    0    0    ... 1    ].

That is, A is the matrix of all structural coefficients in the model, with dimension n × 2n, and W is the 2n × n matrix formed by stacking Π on top of the n × n identity matrix. The first equation in Eq. 93.24 can be expressed as

A_1 W = 0,   (93.25)

where A_1 is the first row of A, i.e., A_1 = [ g_11 g_12 ... g_1n  h_11 h_12 ... h_1n ].
Since the elements of Π can be consistently estimated, and I_n is the identity matrix, Eq. 93.25 contains 2n unknowns in terms of the π's. Thus, there should be n restrictions on the parameters to solve Eq. 93.25 uniquely. First, one can impose the normalization rule g_11 = 1, which removes one unknown. As a result, at least n − 1 independent restrictions are needed in order to solve Eq. 93.25. It can be illustrated that the system represented by Eq. 93.17 is exactly identified in the case of three endogenous and three exogenous variables; the cases with more variables are entirely similar. For n = 3, Eq. 93.17 can be expressed in the form

−[ r*cs_11 + a_1b_1   r*cs_12            r*cs_13          ] [ p_1t ]   [ cs_11 + a_1   cs_12         cs_13       ] [ d_1t ]   [ v_1t ]
 [ r*cs_21            r*cs_22 + a_2b_2   r*cs_23          ] [ p_2t ] + [ cs_21         cs_22 + a_2   cs_23       ] [ d_2t ] = [ v_2t ],   (93.26)
 [ r*cs_31            r*cs_32            r*cs_33 + a_3b_3 ] [ p_3t ]   [ cs_31         cs_32         cs_33 + a_3 ] [ d_3t ]   [ v_3t ]

where
r* = the risk-free rate (a scalar),
s_ij = elements of the variance-covariance matrix of returns,
a_i = inverse of the supply adjustment cost of firm i,
b_i = overall cost of capital of firm i.
In the notation of Eq. 93.17, the n = 3 case is

[ g_11  g_12  g_13 ] [ p_1t ]   [ h_11  h_12  h_13 ] [ d_1t ]   [ v_1t ]
[ g_21  g_22  g_23 ] [ p_2t ] + [ h_21  h_22  h_23 ] [ d_2t ] = [ v_2t ].   (93.27)
[ g_31  g_32  g_33 ] [ p_3t ]   [ h_31  h_32  h_33 ] [ d_3t ]   [ v_3t ]

Comparing Eq. 93.26 with Eq. 93.27, the prior restrictions on the first equation take the form g_12 = −r*h_12 and g_13 = −r*h_13, and so on. Thus, the restriction matrix for the first equation is of the form

F = [ 0  1  0  0  r*  0 ]
    [ 0  0  1  0  0   r* ].   (93.28)

Then, combining Eq. 93.25 and the restrictions on the parameters of the first equation gives

                                        [ π_11  π_12  π_13  0   0  ]
                                        [ π_21  π_22  π_23  1   0  ]
[ g_11  g_12  g_13  h_11  h_12  h_13 ]  [ π_31  π_32  π_33  0   1  ] = [ 0  0  0  0  0 ].   (93.29)
                                        [ 1     0     0     0   0  ]
                                        [ 0     1     0     r*  0  ]
                                        [ 0     0     1     0   r* ]
That is, expanding Eq. 93.29, we have

g_11π_11 + g_12π_21 + g_13π_31 + h_11 = 0,
g_11π_12 + g_12π_22 + g_13π_32 + h_12 = 0,
g_11π_13 + g_12π_23 + g_13π_33 + h_13 = 0,
g_12 + r*h_12 = 0,  and
g_13 + r*h_13 = 0.   (93.30)

The last two (n − 1 = 3 − 1 = 2) equations in Eq. 93.30 give the values of h_12 and h_13, and the normalization condition g_11 = 1 allows us to solve Eq. 93.25 uniquely in terms of the π's. That is, in the case n = 3, the first equation represented by Eq. 93.25, A_1W = 0, can be rewritten as Eq. 93.30. Since only three unknowns, g_12, g_13, and h_11, are left in the first three equations of Eq. 93.30, the first equation A_1 is exactly identified. Similarly, it can be shown that the second and the third equations are also exactly identified.
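The exact identification of the first equation can also be checked numerically: given a 3 × 3 reduced-form matrix Π and r*, the five linear equations in Eq. 93.30 (with g_11 = 1) determine g_12, g_13, h_11, h_12, h_13 uniquely. The following sketch is our own illustration.

```python
import numpy as np

def recover_first_equation(Pi, r_star):
    """Solve Eq. 93.30 for (g12, g13, h11, h12, h13) given a 3x3 reduced-form matrix Pi,
    the normalization g11 = 1, and the restrictions g12 + r*h12 = g13 + r*h13 = 0."""
    p = Pi
    M = np.array([
        [p[1, 0], p[2, 0], 1, 0, 0],       # g12*pi21 + g13*pi31 + h11 = -pi11
        [p[1, 1], p[2, 1], 0, 1, 0],       # g12*pi22 + g13*pi32 + h12 = -pi12
        [p[1, 2], p[2, 2], 0, 0, 1],       # g12*pi23 + g13*pi33 + h13 = -pi13
        [1, 0, 0, r_star, 0],              # g12 + r*h12 = 0
        [0, 1, 0, 0, r_star],              # g13 + r*h13 = 0
    ])
    rhs = np.array([-p[0, 0], -p[0, 1], -p[0, 2], 0.0, 0.0])
    g12, g13, h11, h12, h13 = np.linalg.solve(M, rhs)
    return g12, g13, h11, h12, h13
```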
Appendix 3: Derivation of the Formula Used to Estimate dt
To derive the formula for estimating d_t, we first define the partial adjustment model as

D_t = a_1 + a_2 D_{t−1} + a_3 E_t + u_t,   (93.31)

where D_t and D_{t−1} are dividends per share in periods t and t − 1, E_t is earnings per share in period t, and u_t is the error term in period t. Similarly,

D_{t+1} = a_1 + a_2 D_t + a_3 E_{t+1} + u_{t+1}.   (93.31′)

And thus,

E_{t−1}[D_t] = a_1 + a_2 D_{t−1} + a_3 E_{t−1}[E_t],   (93.32)
E_t[D_{t+1}] = a_1 + a_2 D_t + a_3 E_t[E_{t+1}],   (93.33)
E_{t−1}[D_{t+1}] = a_1 + a_2 E_{t−1}[D_t] + a_3 E_{t−1}[E_{t+1}].   (93.34)

Substituting Eq. 93.32 into Eq. 93.34, we have

E_{t−1}[D_{t+1}] = a_1 + a_1a_2 + a_2² D_{t−1} + a_2a_3 E_{t−1}[E_t] + a_3 E_{t−1}[E_{t+1}].   (93.34′)

Subtracting Eq. 93.34′ from Eq. 93.33 on both sides, we have

E_t[D_{t+1}] − E_{t−1}[D_{t+1}] = −a_1a_2 + a_2 D_t − a_2² D_{t−1} − a_2a_3 E_{t−1}[E_t] + a_3 E_t[E_{t+1}] − a_3 E_{t−1}[E_{t+1}].   (93.35)

Equation 93.35 can be evaluated depending upon whether E_t follows a random walk.
Case 1: E_t follows an AR(p) process.
If the time series of E_t is assumed to be stationary and to follow an AR(p) process, then after taking seasonal differences we obtain

dE_t = r_0 + r_1 dE_{t−1} + r_2 dE_{t−2} + r_3 dE_{t−3} + r_4 dE_{t−4} + e_t,   (93.36)

where dE_t = E_t − E_{t−4}. The expectation adjustment in seasonally differenced earnings, i.e., the revision in the forecast of future seasonally differenced earnings, can be solved as

E_t[dE_{t+1}] − E_{t−1}[dE_{t+1}] = r_1(dE_t − r_0 − r_1 dE_{t−1} − r_2 dE_{t−2} − r_3 dE_{t−3} − r_4 dE_{t−4}).   (93.37)

Since E_t[dE_{t+1}] − E_{t−1}[dE_{t+1}] = E_t[E_{t+1}] − E_{t−1}[E_{t+1}], we have

E_t[E_{t+1}] − E_{t−1}[E_{t+1}] = r_1(dE_t − r_0 − r_1 dE_{t−1} − r_2 dE_{t−2} − r_3 dE_{t−3} − r_4 dE_{t−4}).   (93.38)

Furthermore, from Eq. 93.36, we have

E_{t−1}[dE_t] = r_0 + r_1 dE_{t−1} + r_2 dE_{t−2} + r_3 dE_{t−3} + r_4 dE_{t−4}.   (93.39)

Similarly, E_{t−1}[dE_t] = E_{t−1}[E_t − E_{t−4}] = E_{t−1}[E_t] − E_{t−4}; thus, E_{t−1}[E_t] can be found from

E_{t−1}[E_t] = r_0 + r_1 dE_{t−1} + r_2 dE_{t−2} + r_3 dE_{t−3} + r_4 dE_{t−4} + E_{t−4}.   (93.40)

Finally, the expectation adjustment in dividends, d_t, can be found by plugging Eqs. 93.38 and 93.40 into Eq. 93.35:

d_t ≡ E_t[D_{t+1}] − E_{t−1}[D_{t+1}]
    = −a_1a_2 + a_2 D_t − a_2² D_{t−1} − a_2a_3 (r_0 + r_1 dE_{t−1} + r_2 dE_{t−2} + r_3 dE_{t−3} + r_4 dE_{t−4} + E_{t−4})
      + a_3 r_1 (dE_t − r_0 − r_1 dE_{t−1} − r_2 dE_{t−2} − r_3 dE_{t−3} − r_4 dE_{t−4}),   (93.41)

or

d_t = C_0 + C_1 D_t + C_2 D_{t−1} + C_3 dE_t + C_4 dE_{t−1} + C_5 dE_{t−2} + C_6 dE_{t−3} + C_7 dE_{t−4} + C_8 E_{t−4},   (93.42)

where C_0 to C_8 are functions of a_1 to a_3 and r_0 to r_4. That is, the expectation adjustment in dividends, d_t, can be found from the coefficients estimated in Eqs. 93.31 and 93.36, i.e., a_1 to a_3 and r_0 to r_4, and the observable time series of D_t and E_t.
Case 2 Et follows a random walk process.
If the series of earnings, E_t, follows a random walk process, i.e., E_t[E_{t+1}] = E_t, E_{t−1}[E_t] = E_{t−1}, and E_{t−1}[E_{t+1}] = E_{t−1}, then Eq. 93.35 reduces to

d_t ≡ E_t[D_{t+1}] − E_{t−1}[D_{t+1}] = C_0 + C_1 D_t + C_2 D_{t−1} + C_3 E_t + C_4 E_{t−1},   (93.43)

where C_0 = −a_1a_2, C_1 = a_2, C_2 = −a_2², C_3 = a_3, and C_4 = −a_3(1 + a_2). That is, the expectation adjustment in dividends, d_t, can be computed from the observable time series of D_t and E_t. In this study, we assume that E_t follows a random walk process; therefore, we use Eq. 93.43 rather than Eq. 93.42 to estimate d_t in Eqs. 93.22 and 93.23 in the text.
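A sketch of how d_t can be computed from observable dividend and earnings series under the random-walk case is given below; the OLS step estimates a_1 to a_3 from Eq. 93.31, and the function is our own illustration rather than the authors' code.

```python
import numpy as np
import statsmodels.api as sm

def dividend_expectation_adjustment(D, E):
    """Compute d_t = E_t[D_{t+1}] - E_{t-1}[D_{t+1}] under Eq. 93.43 (random-walk earnings).
    D, E : one-dimensional arrays of dividends and earnings per share."""
    # Eq. 93.31: D_t = a1 + a2 D_{t-1} + a3 E_t + u_t, estimated by OLS
    y = D[1:]
    X = sm.add_constant(np.column_stack([D[:-1], E[1:]]))
    a1, a2, a3 = sm.OLS(y, X).fit().params
    # Eq. 93.43 with C0 = -a1*a2, C1 = a2, C2 = -a2**2, C3 = a3, C4 = -a3*(1 + a2)
    d = (-a1 * a2 + a2 * D[1:] - a2 ** 2 * D[:-1]
         + a3 * E[1:] - a3 * (1 + a2) * E[:-1])
    return d   # element k corresponds to d_t for t = k + 1
```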
References
Amemiya, T. (1974). A note on a Fair and Jaffee model. Econometrica, 42, 759–762.
Black, S. W. (1976). Rational response to shocks in a dynamic model of capital asset pricing. American Economic Review, 66, 767–779.
Breeden, D. T. (1979). An intertemporal asset pricing model with stochastic consumption and investment opportunities. Journal of Financial Economics, 7, 265–296.
Campbell, J. Y., Grossman, S. J., & Wang, J. (1993). Trading volume and serial correlation in stock returns. Quarterly Journal of Economics, 108, 905–939.
Cox, J. C., Ingersoll, J. E., Jr., & Ross, S. A. (1985). An intertemporal general equilibrium model of asset prices. Econometrica, 53, 363–384.
Fair, R. C., & Jaffee, D. M. (1972). Methods of estimation for markets in disequilibrium. Econometrica, 40, 497–514.
Fama, E., & French, K. R. (1992). The cross-section of expected stock returns. Journal of Finance, 47, 427–465.
Greene, W. H. (2004). Econometric analysis (4th ed.). Upper Saddle River: Prentice Hall.
Grinols, E. L. (1984). Production and risk leveling in the intertemporal capital asset pricing model. The Journal of Finance, 39(5), 1571–1595.
Hamilton, J. D. (1994). Time series analysis. Princeton: Princeton University Press.
Lee, C. F., & Gweon, S. C. (1986). Rational expectation, supply effect and stock price adjustment. Paper presented at the 1986 Econometric Society Annual Meeting.
Lintner, J. (1965). The valuation of risky assets and the selection of risky investments in stock portfolios and capital budgets. Review of Economics and Statistics, 47, 13–37.
Lo, A. W., & Wang, J. (2000). Trading volume: Definition, data analysis, and implications of portfolio theory. Review of Financial Studies, 13, 257–300.
Markowitz, H. (1959). Portfolio selection: Efficient diversification of investments. New York: Wiley.
Merton, R. C. (1973). An intertemporal capital asset pricing model. Econometrica, 41, 867–887.
Miller, M. H. (1977). Debt and taxes. Journal of Finance, 32(2), 261–275.
Modigliani, F., & Miller, M. (1958). The cost of capital, corporation finance and the theory of investment. American Economic Review, 48, 261–297.
Mossin, J. (1966). Equilibrium in a capital asset market. Econometrica, 35, 768–783.
Quandt, R. E. (1988). The econometrics of disequilibrium. New York: Basil Blackwell.
Sharpe, W. (1964). Capital asset prices: A theory of market equilibrium under conditions of risk. Journal of Finance, 19, 425–442.
Tobin, J. (1958). Liquidity preferences as behavior toward risk. Review of Economic Studies, 25, 65–86.
94  A Generalized Model for Optimum Futures Hedge Ratio
Cheng-Few Lee, Jang-Yi Lee, Kehluh Wang, and Yuan-Chung Sheu
Contents
94.1 Introduction ..... 2562
94.2 GIG and GH Distributions ..... 2563
  94.2.1 The Generalized Hyperbolic Distributions ..... 2563
  94.2.2 Multivariate Modeling ..... 2565
94.3 Futures Hedge Ratios ..... 2566
  94.3.1 Minimum Variance Hedge Ratio ..... 2567
  94.3.2 Sharpe Hedge Ratio ..... 2567
  94.3.3 Minimum Generalized Semivariance Hedge Ratio ..... 2569
94.4 Estimation and Simulation ..... 2570
  94.4.1 Kernel Density Estimators ..... 2570
  94.4.2 Maximum Likelihood Estimation ..... 2572
  94.4.3 Simulation of Generalized Hyperbolic Random Variables ..... 2573
94.5 Concluding Remarks ..... 2574
References ..... 2575
C.-F. Lee (*) Department of Finance and Economics, Rutgers Business School, Rutgers, The State University of New Jersey, Piscataway, NJ, USA
Graduate Institute of Finance, National Chiao Tung University, Hsinchu, Taiwan
e-mail: [email protected]
J.-Y. Lee Tunghai University, Taichung, Taiwan
K. Wang Graduate Institute of Finance, National Chiao Tung University, Hsinchu, Taiwan
e-mail: [email protected]
Y.-C. Sheu National Chiao-Tung University, Hsinchu, Taiwan
e-mail: [email protected]
C.-F. Lee, J. Lee (eds.), Handbook of Financial Econometrics and Statistics, DOI 10.1007/978-1-4614-7750-1_94, © Springer Science+Business Media New York 2015
Abstract
Under martingale and joint-normality assumptions, various optimal hedge ratios are identical to the minimum variance hedge ratio. As empirical studies usually reject the joint-normality assumption, we propose the generalized hyperbolic distribution as the joint log-return distribution of the spot and futures. Using the parameters of this distribution, we derive several of the most widely used optimal hedge ratios: minimum variance, maximum Sharpe measure, and minimum generalized semivariance. Under mild assumptions on the parameters, we find that these hedge ratios are identical. Regarding the equivalence of these optimal hedge ratios, our analysis suggests that the martingale property plays a much more important role than the joint distribution assumption. To estimate these optimal hedge ratios, we first write down the log-likelihood functions for symmetric hyperbolic distributions. We then estimate the parameters by maximizing the log-likelihood functions. Using these MLE parameters for the generalized hyperbolic distributions, we obtain the minimum variance hedge ratio and the optimal Sharpe hedge ratio. Also, based on the MLE parameters and numerical methods, we can calculate the minimum generalized semivariance hedge ratio.

Keywords
Optimal hedge ratio • Generalized hyperbolic distribution • Martingale property • Minimum variance hedge ratio • Minimum generalized semivariance • Maximum Sharpe measure • Joint-normality assumption • Hedging effectiveness
94.1 Introduction
Because of their low transaction costs, high liquidity, high leverage, and the ease of taking short positions, stock index futures are among the most successful innovations in the financial markets. Besides speculative trading, they are widely used to hedge against the market risk of the spot position. One of the most important issues for investors and portfolio managers is to calculate the optimal futures hedge ratio, the proportion of the position taken in futures to the size of the spot position, so that the risk exposure can be minimized. The optimal hedge ratios typically depend on the objective functions under consideration. In the literature on futures hedging, there are two different types of objective functions: the risk function to be minimized and the utility function to be maximized. Johnson (1960) obtains the minimum variance hedge ratio by minimizing the variance of the change in the value of the hedged portfolio. On the other hand, as Adams and Montesi (1995) indicate, corporate managers are more concerned with downside risk than with upside variation. A measure of the downside risk is the generalized semivariance (GSV) where
the risk is computed from the expectation of a power function of shortfalls from the target return (Bawa 1975, 1978; Fishburn 1977). De Jong et al. (1997) and Lien and Tse (1998, 2000, 2001) have calculated several GSV-minimizing hedge ratios. Regarding the utility function approach, we consider the Sharpe measure (SM) criterion, i.e., the ratio of the portfolio's excess return to its volatility. Howard and D'Antonio (1984) formulate the optimal hedge ratio by maximizing the Sharpe measure.

Normally, these optimal hedge ratios under different approaches are not the same. However, with the joint-normality and martingale assumptions, they are identical to the minimum variance hedge ratio. Unfortunately, many empirical studies indicate that major markets typically reject the joint-normality assumption (Chen et al. 2001; Lien and Tse 1998). In particular, the fat-tail property of the return distribution affects the hedging effectiveness substantially. It is therefore useful to examine the nature of the optimal hedge ratios under more realistic assumptions.

In this paper we introduce the bivariate generalized hyperbolic distributions as alternative joint distributions for returns in the spot and futures markets. Barndorff-Nielsen (1977, 1978) develops the generalized hyperbolic (GH) distributions as a mixture of the normal distribution and the generalized inverse Gaussian (GIG) distribution, the latter first proposed in 1946 by Etienne Halphen. The class of the generalized hyperbolic distributions includes the hyperbolic distributions, the normal inverse Gaussian distributions, and the variance-gamma distributions, while the normal distribution is a limiting case of the generalized hyperbolic distributions. Uses of the generalized hyperbolic distributions have been increasing in the finance literature. To model the log returns of some financial assets, Eberlein and Keller (1995) consider the hyperbolic distribution and Barndorff-Nielsen (1995) proposes the normal inverse Gaussian distribution. For more recent applications of the generalized hyperbolic distributions in finance, see Bibby and Sørensen (2003), Eberlein et al. (1998), Rydberg (1997, 1999), Küchler et al. (1999), and Bingham and Kiesel (2001).

In terms of the parameters of the bivariate hyperbolic distributions, we develop in this paper the minimum variance hedge ratio, the GSV-minimizing hedge ratio, and the SM-maximizing hedge ratio. Moreover, the relationships between these hedge ratios are explored. In particular, under the martingale assumption, we can still obtain the result that these hedge ratios are the same as the minimum variance hedge ratio (see Theorems 2.1, 2.4 and Proposition 2.2). Based on the maximum likelihood estimation of the parameters and numerical methods, we calculate and compare the different hedge ratios for TAIEX futures and S&P 500 futures.

The chapter is divided into five sections. Section 94.2 introduces the definitions and some basic properties of GIG and GH distributions. In Sect. 94.3, we study the optimal hedge ratios under different approaches and express these ratios in terms of the parameters of GH distributions. In Sect. 94.4, we discuss the kernel density estimators and the MLE method for the parameter estimation problem. The last section provides the concluding remarks.
94.2 GIG and GH Distributions
94.2.1 The Generalized Hyperbolic Distributions

To introduce the generalized hyperbolic distribution, we first recall some basic properties of generalized inverse Gaussian (GIG) distributions. Note that for any δ, γ > 0 and λ ∈ R, the function

$$d_{GIG(\lambda,\delta,\gamma)}(x) = \frac{(\gamma/\delta)^{\lambda}}{2K_{\lambda}(\delta\gamma)}\,x^{\lambda-1}\,e^{-\frac{1}{2}\left(\delta^{2}x^{-1}+\gamma^{2}x\right)}, \quad x>0 \qquad (94.1)$$

is a probability density function on (0, ∞). Here, the function

$$K_{\lambda}(x) = \frac{1}{2}\int_{0}^{\infty}u^{\lambda-1}\,e^{-\frac{1}{2}x\left(u^{-1}+u\right)}\,du, \quad x>0 \qquad (94.2)$$
is the Bessel function of the third kind with index λ. The distribution with the density function d_{GIG(λ,δ,γ)}(x) on the positive half-line is called a generalized inverse Gaussian (GIG) distribution with parameters λ, δ, γ and is denoted by GIG(λ, δ, γ). The moment generating function of the generalized inverse Gaussian distribution is given by

$$M_{GIG(\lambda,\delta,\gamma)}(u) = \int_{0}^{\infty}e^{ux}\,d_{GIG(\lambda,\delta,\gamma)}(x)\,dx = \left(\frac{\gamma}{\sqrt{\gamma^{2}-2u}}\right)^{\lambda}\frac{K_{\lambda}\!\left(\delta\sqrt{\gamma^{2}-2u}\right)}{K_{\lambda}(\delta\gamma)} \qquad (94.3)$$

with the restriction 2u < γ². From this, we obtain

$$E[GIG] = \frac{\delta}{\gamma}\,\frac{K_{\lambda+1}(\delta\gamma)}{K_{\lambda}(\delta\gamma)},$$

$$Var[GIG] = \left(\frac{\delta}{\gamma}\right)^{2}\left[\frac{K_{\lambda+2}(\delta\gamma)}{K_{\lambda}(\delta\gamma)} - \frac{K_{\lambda+1}^{2}(\delta\gamma)}{K_{\lambda}^{2}(\delta\gamma)}\right].$$
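To make these formulas concrete, the following sketch evaluates the GIG density (Eq. 94.1) and the Bessel-function expressions for its mean and variance. It uses SciPy's `kv`, the modified Bessel function of the second kind, which corresponds to K_λ here; the parameter values are arbitrary illustrations, not estimates from this chapter.

```python
import numpy as np
from scipy.special import kv  # K_lambda, the Bessel function of the third kind


def gig_pdf(x, lam, delta, gamma):
    """Density of GIG(lambda, delta, gamma), Eq. 94.1, for x > 0."""
    x = np.asarray(x, dtype=float)
    norm = (gamma / delta) ** lam / (2.0 * kv(lam, delta * gamma))
    return norm * x ** (lam - 1.0) * np.exp(-0.5 * (delta ** 2 / x + gamma ** 2 * x))


def gig_mean_var(lam, delta, gamma):
    """Mean and variance of GIG(lambda, delta, gamma) from Bessel-function ratios."""
    dg = delta * gamma
    k0, k1, k2 = kv(lam, dg), kv(lam + 1.0, dg), kv(lam + 2.0, dg)
    mean = (delta / gamma) * k1 / k0
    var = (delta / gamma) ** 2 * (k2 / k0 - (k1 / k0) ** 2)
    return mean, var


if __name__ == "__main__":
    lam, delta, gamma = 1.0, 0.5, 2.0          # illustrative parameters
    x = np.linspace(0.01, 5.0, 1000)
    pdf = gig_pdf(x, lam, delta, gamma)
    print("density integrates to about", np.trapz(pdf, x))  # should be close to 1
    print("mean, variance:", gig_mean_var(lam, delta, gamma))
```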
Barndorff-Nielsen (1977) introduced the class of generalized hyperbolic (GH) distributions as mean-variance mixtures of normal distributions. More precisely, one says that a random variable Z has the generalized hyperbolic distribution GH(λ, α, β, δ, μ) if

$$Z\,|\,Y=y \;\sim\; N(\mu+\beta y,\,y),$$
where Y is a random variable with distribution GIG(λ, δ, √(α² − β²)) and N(μ + βy, y) denotes the normal distribution with mean μ + βy and variance y. From this, one can easily verify that the density function for GH(λ, α, β, δ, μ) is given by the formula

$$d_{GH(\lambda,\alpha,\beta,\delta,\mu)}(x) = \int_{0}^{\infty} d_{N(\mu+\beta y,\,y)}(x)\;d_{GIG\left(\lambda,\delta,\sqrt{\alpha^{2}-\beta^{2}}\right)}(y)\,dy = \frac{(\gamma/\delta)^{\lambda}}{\sqrt{2\pi}\,K_{\lambda}(\delta\gamma)}\;e^{\beta(x-\mu)}\;\frac{\left[\delta^{2}+(x-\mu)^{2}\right]^{\frac{\lambda}{2}-\frac{1}{4}}}{\alpha^{\lambda-\frac{1}{2}}}\;K_{\lambda-\frac{1}{2}}\!\left(\alpha\sqrt{\delta^{2}+(x-\mu)^{2}}\right)$$

where γ = √(α² − β²). The class of hyperbolic distributions is the subclass of GH distributions obtained when λ is equal to 1. We write H(α, β, δ, μ) instead of GH(1, α, β, δ, μ). Using the fact that K_{1/2}(z) = (π/2z)^{1/2} e^{−z}, one obtains that the density for H(α, β, δ, μ) is

$$d_{H(\alpha,\beta,\delta,\mu)}(x) = \frac{\sqrt{\alpha^{2}-\beta^{2}}}{2\alpha\delta\,K_{1}\!\left(\delta\sqrt{\alpha^{2}-\beta^{2}}\right)}\;e^{-\alpha\sqrt{\delta^{2}+(x-\mu)^{2}}+\beta(x-\mu)}. \qquad (94.4)$$
The normal inverse Gaussian (NIG) distributions were introduced to finance by Barndorff-Nielsen (1995). It is the subclass of the generalized hyperbolic distributions obtained for λ equal to −1/2. The density of the NIG distribution is given by

$$d_{NIG(\alpha,\beta,\delta,\mu)}(x) = \frac{\delta\alpha}{\pi}\,\frac{e^{\delta\gamma+\beta(x-\mu)}}{\sqrt{\delta^{2}+(x-\mu)^{2}}}\;K_{1}\!\left(\alpha\sqrt{\delta^{2}+(x-\mu)^{2}}\right).$$
94.2.2 Multivariate Modeling

In finance one does not look at a single asset but at a collection of assets. Since the assets in the market are typically highly correlated, it is natural to use multivariate distributions. A straightforward way of introducing multivariate generalized hyperbolic (MGH) distributions is via mixtures of multivariate normal distributions with the generalized inverse Gaussian distributions. In fact, the multivariate generalized hyperbolic distributions were introduced and investigated in Barndorff-Nielsen (1978). Let Δ be a symmetric positive-definite d × d matrix with determinant |Δ| = 1. Assume that λ ∈ R, β, μ ∈ R^d, δ > 0, and α² > β′Δβ. We say that a d-dimensional random vector Z has the multivariate generalized hyperbolic distribution MGH(λ, α, β, δ, μ, Δ) with parameters (λ, α, β, δ, μ, Δ) if

$$Z\,|\,Y=y \;\sim\; N_{d}(\mu+y\Delta\beta,\;y\Delta),$$
where N_d(A, B) denotes the d-dimensional normal distribution with mean vector A and covariance matrix B, and Y is distributed as GIG(λ, δ, √(α² − β′Δβ)). Here we notice that the generalized hyperbolic distributions are symmetric if and only if β = (0, ..., 0)′. For λ = (d + 1)/2 we obtain the multivariate hyperbolic distributions. For λ = −1/2 we obtain the multivariate normal inverse Gaussian distribution. The density function of the distribution MGH(λ, α, β, δ, μ, Δ) is given by the formula

$$d_{MGH}(x) = c_{d}\;\frac{K_{\lambda-d/2}\!\left(\alpha\sqrt{\delta^{2}+(x-\mu)'\Delta^{-1}(x-\mu)}\right)}{\left(\alpha^{-1}\sqrt{\delta^{2}+(x-\mu)'\Delta^{-1}(x-\mu)}\right)^{d/2-\lambda}}\;e^{\beta'(x-\mu)} \qquad (94.5)$$

where

$$c_{d} = \frac{\left(\left(\alpha^{2}-\beta'\Delta\beta\right)/\delta^{2}\right)^{\lambda/2}}{(2\pi)^{d/2}\,K_{\lambda}\!\left(\delta\sqrt{\alpha^{2}-\beta'\Delta\beta}\right)}.$$

The mean and covariance of MGH are given by

$$E[MGH(\lambda,\alpha,\beta,\delta,\mu,\Delta)] = \mu + \Delta\beta\,E[GIG(\lambda,\delta,\gamma)], \qquad (94.6)$$

$$Var[MGH(\lambda,\alpha,\beta,\delta,\mu,\Delta)] = \Delta\,E[GIG(\lambda,\delta,\gamma)] + \Delta\beta\beta'\Delta\,Var[GIG(\lambda,\delta,\gamma)] \qquad (94.7)$$

where γ = √(α² − β′Δβ). (For details, see, e.g., Blæsild (1981).)
94.3 Futures Hedge Ratios
We consider a decision maker. At the decision date (t = 0), the agent engages in the production of Q (Q > 0) commodity units for sale at the terminal date (t = 1) at the random cash price P₁. In addition, at the decision date the agent can sell X commodity units in the futures market at the price F₀ but must repurchase them at the terminal date at the random futures price F₁. Let the initial wealth be V₀ = P₀Q and the end-of-period wealth be V₁ = P₁Q + (F₀ − F₁)X. Then we consider the wealth return

$$\tilde r_{\theta} = \frac{V_{1}-V_{0}}{V_{0}} = \frac{P_{1}Q + F_{0}X - F_{1}X - P_{0}Q}{P_{0}Q} = \frac{P_{1}-P_{0}}{P_{0}} - \frac{F_{1}-F_{0}}{F_{0}}\,\frac{F_{0}X}{P_{0}Q} = \tilde r_{p} - \theta\,\tilde r_{f} \qquad (94.8)$$

where r̃_p = (P₁ − P₀)/P₀ and r̃_f = (F₁ − F₀)/F₀ are one-period returns on the spot and futures positions, respectively, h = X/Q is the hedge ratio, and θ = h(F₀/P₀). (Note that θ is the so-called adjusted hedge ratio.) The main objective of hedging is to choose the optimal hedge ratio θ. However, the optimal hedge ratio will depend on a particular objective function to be
optimized. We recall some of the most widely used theoretical approaches to the optimal futures hedge ratio and compute these optimal ratios explicitly in terms of the parameters of MGH distributions. For a comprehensive review of futures hedge ratios, see Chen et al. (2003).
94.3.1 Minimum Variance Hedge Ratio

The most widely used hedge ratio is the minimum variance hedge ratio, which is known as the MV hedge ratio. The objective function to be minimized is the variance of r̃_θ. Clearly we have Var[r̃_θ] = σ²_rp + θ²σ²_rf − 2θρσ_rpσ_rf, where σ_rp and σ_rf are the standard deviations of r̃_p and r̃_f, respectively, and ρ is the correlation coefficient between r̃_p and r̃_f. The MV hedge ratio is obtained by minimizing Var[r̃_θ]. A simple calculation shows that the MV hedge ratio is given by

$$\theta_{MV} = \rho\,\frac{\sigma_{r_p}}{\sigma_{r_f}}. \qquad (94.9)$$
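As a simple illustration of Eq. 94.9, the MV hedge ratio can be computed directly from sample moments of spot and futures returns; equivalently, it is Cov(r̃_p, r̃_f)/Var(r̃_f). The simulated series below are placeholders for actual return data.

```python
import numpy as np


def mv_hedge_ratio(r_p, r_f):
    """Minimum variance hedge ratio (Eq. 94.9): rho*sigma_p/sigma_f = Cov(r_p, r_f)/Var(r_f)."""
    r_p, r_f = np.asarray(r_p, float), np.asarray(r_f, float)
    return np.cov(r_p, r_f, ddof=1)[0, 1] / np.var(r_f, ddof=1)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    r_f = rng.normal(0.0, 0.012, 1000)                # futures returns (placeholder)
    r_p = 0.9 * r_f + rng.normal(0.0, 0.004, 1000)    # correlated spot returns (placeholder)
    print("theta_MV =", mv_hedge_ratio(r_p, r_f))
```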
Theorem 2.1 Assume (r̃_f, r̃_p)′ is distributed as MGH(λ, α, β, δ, μ, Δ), where β = (β₁, β₂)′, μ = (μ₁, μ₂)′, and Δ = (Δ₁₁ Δ₁₂; Δ₂₁ Δ₂₂) is symmetric. Then we have

$$\theta_{MV} = \frac{\Delta_{12}\,E[GIG] + d_{fp}}{\Delta_{11}\,E[GIG] + d_{ff}} \qquad (94.10)$$

where GIG = GIG(λ, δ, √(α² − β′Δβ)) and

$$d_{ff} = \left(\beta_{1}^{2}\Delta_{11}^{2} + 2\beta_{1}\beta_{2}\Delta_{11}\Delta_{12} + \beta_{2}^{2}\Delta_{12}^{2}\right)Var[GIG],$$

$$d_{fp} = \left(\beta_{1}^{2}\Delta_{11}\Delta_{12} + \beta_{1}\beta_{2}\left(\Delta_{11}\Delta_{22}+\Delta_{12}^{2}\right) + \beta_{2}^{2}\Delta_{12}\Delta_{22}\right)Var[GIG].$$

In particular, if β = (0, ..., 0)′, then θ_MV = Δ₁₂/Δ₁₁.
94.3.2 Sharpe Hedge Ratio

We consider the optimal hedge ratio that incorporates both risk and expected return. Howard and D'Antonio (1984) considered the optimal level of futures contracts by maximizing the ratio of the portfolio's excess return to its volatility, that is,

$$\max_{\theta}\;\frac{\mu_{r_p}-\theta\mu_{r_f}-r_{L}}{\sigma_{\theta}}, \qquad (94.11)$$
where σ_θ is the standard deviation of r̃_θ, μ_rp and μ_rf are the expected values of r̃_p and r̃_f, respectively, and r_L is the risk-free interest rate. Consider the function

$$r(\theta) = \frac{\mu_{r_p}-\theta\mu_{r_f}-r_{L}}{\sigma_{\theta}}.$$

Then we have

$$r'(\theta) = \frac{-\theta\left[\sigma_{r_f}^{2}\left(\mu_{r_p}-r_{L}\right)-\mu_{r_f}\sigma_{r_f r_p}\right]+\left[\left(\mu_{r_p}-r_{L}\right)\sigma_{r_p r_f}-\sigma_{r_p}^{2}\mu_{r_f}\right]}{\sigma_{\theta}^{3}} \qquad (94.12)$$

where σ_rf rp = Cov(r̃_p, r̃_f) and, hence, the critical point of r(θ) is given by

$$\theta_{s} = \frac{\sigma_{r_p}^{2}\mu_{r_f}-\rho\sigma_{r_p}\sigma_{r_f}\left(\mu_{r_p}-r_{L}\right)}{\rho\sigma_{r_p}\sigma_{r_f}\mu_{r_f}-\sigma_{r_f}^{2}\left(\mu_{r_p}-r_{L}\right)}. \qquad (94.13)$$

It follows from Eq. 94.12 that if μ_rp − r_L > ρ(σ_rp/σ_rf)μ_rf, then r′(θ) > 0 for θ < θ_s and r′(θ) < 0 for θ > θ_s. Hence, θ_s is the optimal hedge ratio (Sharpe hedge ratio) for Eq. 94.11. Similarly, if μ_rp − r_L < ρ(σ_rp/σ_rf)μ_rf, then r(θ) has a minimum at θ_s. (Note that if μ_rp − r_L = ρ(σ_rp/σ_rf)μ_rf, then r(θ) is strictly monotonic in θ.)

The measure of hedging effectiveness (abbreviated HE) is given in Howard and D'Antonio (1984) by

$$HE = r(\theta_{s})\Big/\frac{\mu_{r_p}-r_{L}}{\sigma_{r_p}}. \qquad (94.14)$$
Write

$$z = \frac{\mu_{r_f}/\sigma_{r_f}}{\left(\mu_{r_p}-r_{L}\right)/\sigma_{r_p}}.$$

(z is also called the risk-return relative.) Then we have

$$\theta_{s} = \frac{\sigma_{r_p}}{\sigma_{r_f}}\,\frac{\rho-z}{1-z\rho}$$

and

$$HE = \sqrt{\frac{(\rho-z)^{2}}{1-\rho^{2}}+1}. \qquad (94.15)$$

Clearly the last equality implies that
$$HE\;\begin{cases} > 1 & \text{when } \rho \neq z, \\ = 1 & \text{when } \rho = z. \end{cases}$$
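As an illustration, the following sketch computes the Sharpe hedge ratio θ_s (Eq. 94.13, written with the covariance σ_rf rp = ρσ_rpσ_rf) and the hedging effectiveness of Eq. 94.15 from sample moments. The return series and the risk-free rate are illustrative placeholders, not the TAIEX or S&P 500 data used later in the chapter.

```python
import numpy as np


def sharpe_hedge_ratio(r_p, r_f, r_l=0.0):
    """Sharpe hedge ratio theta_s (Eq. 94.13 in covariance form) from sample moments."""
    mu_p, mu_f = np.mean(r_p), np.mean(r_f)
    var_p, var_f = np.var(r_p, ddof=1), np.var(r_f, ddof=1)
    cov_fp = np.cov(r_p, r_f, ddof=1)[0, 1]
    return (var_p * mu_f - cov_fp * (mu_p - r_l)) / (cov_fp * mu_f - var_f * (mu_p - r_l))


def hedging_effectiveness(r_p, r_f, r_l=0.0):
    """HE = sqrt((rho - z)^2 / (1 - rho^2) + 1), Eq. 94.15."""
    mu_p, mu_f = np.mean(r_p), np.mean(r_f)
    s_p, s_f = np.std(r_p, ddof=1), np.std(r_f, ddof=1)
    rho = np.corrcoef(r_p, r_f)[0, 1]
    z = (mu_f / s_f) / ((mu_p - r_l) / s_p)
    return np.sqrt((rho - z) ** 2 / (1.0 - rho ** 2) + 1.0)


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    r_f = rng.normal(0.0002, 0.012, 1000)                       # placeholder futures returns
    r_p = 0.0003 + 0.9 * r_f + rng.normal(0.0, 0.004, 1000)     # placeholder spot returns
    print("theta_s =", sharpe_hedge_ratio(r_p, r_f, r_l=0.0001))
    print("HE      =", hedging_effectiveness(r_p, r_f, r_l=0.0001))
```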
Moreover, without any distributional assumption, we have the following relationship between θ_s and θ_MV. In particular, if the expected return on the futures contract is zero and μ_rp > r_L, then the Sharpe hedge ratio reduces to the minimum variance hedge ratio.

Proposition 2.2 Assume μ_rp > r_L and 1 > zρ. Then we have

$$\begin{cases} \theta_{s} > \theta_{MV} & \text{when } \mu_{r_f} < 0, \\ \theta_{s} = \theta_{MV} & \text{when } \mu_{r_f} = 0, \\ \theta_{s} < \theta_{MV} & \text{when } \mu_{r_f} > 0. \end{cases}$$
Recall that σ_rf rp = Cov(r̃_p, r̃_f). Then we have

$$\theta_{s} = \frac{\sigma_{r_p}^{2}\mu_{r_f}-\sigma_{r_f r_p}\left(\mu_{r_p}-r_{L}\right)}{\sigma_{r_f r_p}\mu_{r_f}-\sigma_{r_f}^{2}\left(\mu_{r_p}-r_{L}\right)}. \qquad (94.16)$$
From this and by Eqs. 94.6 and 94.7, we obtain
Theorem 2.3 Assume (r̃_f, r̃_p)′ is distributed as in Theorem 2.1. Assume that

$$\zeta_{fp}\left[\mu_{1}+\left(\beta_{1}\Delta_{11}+\beta_{2}\Delta_{12}\right)E[GIG]\right] < \zeta_{ff}\left[\mu_{2}+\left(\beta_{1}\Delta_{21}+\beta_{2}\Delta_{22}\right)E[GIG]-r_{L}\right].$$

Then we have

$$\theta_{s} = \frac{\zeta_{pp}\left[\mu_{1}+\left(\beta_{1}\Delta_{11}+\beta_{2}\Delta_{12}\right)E[GIG]\right]-\zeta_{fp}\left[\mu_{2}+\left(\beta_{1}\Delta_{21}+\beta_{2}\Delta_{22}\right)E[GIG]-r_{L}\right]}{\zeta_{fp}\left[\mu_{1}+\left(\beta_{1}\Delta_{11}+\beta_{2}\Delta_{12}\right)E[GIG]\right]-\zeta_{ff}\left[\mu_{2}+\left(\beta_{1}\Delta_{21}+\beta_{2}\Delta_{22}\right)E[GIG]-r_{L}\right]} \qquad (94.17)$$

where d_ff, d_fp, GIG are the same as in Theorem 2.1 and

$$d_{pp} = \left(\beta_{1}^{2}\Delta_{21}^{2}+2\beta_{1}\beta_{2}\Delta_{21}\Delta_{22}+\beta_{2}^{2}\Delta_{22}^{2}\right)Var[GIG],$$

$$\zeta_{ff} = \Delta_{11}\,E[GIG]+d_{ff}, \quad \zeta_{fp} = \Delta_{12}\,E[GIG]+d_{fp}, \quad \zeta_{pp} = \Delta_{22}\,E[GIG]+d_{pp}.$$
94.3.3 Minimum Generalized Semivariance Hedge Ratio

In this case, the optimal hedge ratio is obtained by minimizing the generalized semivariance (GSV) given below:
$$L_{n}(c;X) = \int_{-\infty}^{c}(c-x)^{n}\,dF(x), \quad n>0, \qquad (94.18)$$

where F(·) is the probability distribution function of the return X. The GSV is specified by two parameters: the target return c and the power of the shortfall n. (Note that if the density function of X is symmetric about c, then we obtain L₂(c, X) = Var(X)/2. Hence, in this case, the GSV approach is the same as that of the minimum variance.) The GSV, due to its emphasis on the returns below the target return, is consistent with the risk perceived by managers (see Lien and Tse 2001). For the futures hedge, we consider L_n(c; θ) = L_n(c, r̃_p − θr̃_f). Under some conditions on the joint distribution, we obtain that the minimum GSV hedge ratio is the same as the minimum variance hedge ratio.
Theorem 2.4 Assume (r̃_f, r̃_p)′ is the same as in Theorem 2.1. If β = 0 and μ₁ = μ_rf = 0, then the minimum GSV hedge ratio is the same as the minimum variance hedge ratio, i.e., θ_GSV = θ_MV = Δ₁₂/Δ₁₁.

In empirical studies, the true distribution is unknown or complicated. Then θ_GSV can be estimated from the sample by using the so-called empirical distribution method adopted in, e.g., Price et al. (1982) and Harlow (1991). Suppose we have m observations of (r̃_f, r̃_p), say, (r_f(i), r_p(i)), i = 1, 2, ..., m. From this, the GSV can be estimated by the formula

$$L_{n}^{obs}(c;\theta) = \frac{1}{m}\sum_{i=1}^{m}\left(c-r_{i,\theta}\right)^{n}\,I\!\left(r_{i,\theta}\leq c\right), \qquad (94.19)$$

where r_{i,θ} = r_p(i) − θr_f(i). Given c and n, numerical methods can be used to search for the hedge ratio that minimizes the sample GSV, L_n^obs(c; θ).
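A minimal sketch of the empirical GSV of Eq. 94.19 combined with a simple grid search over θ follows. The target return c, the power n, the grid range, and the simulated fat-tailed returns are all illustrative choices rather than the specifications used in the chapter's empirical work.

```python
import numpy as np


def empirical_gsv(theta, r_p, r_f, c=0.0, n=2.0):
    """Sample GSV L_n^obs(c; theta) of Eq. 94.19 for the hedged return r_p - theta*r_f."""
    shortfall = c - (r_p - theta * r_f)
    return np.mean(np.where(shortfall > 0.0, shortfall ** n, 0.0))


def min_gsv_hedge_ratio(r_p, r_f, c=0.0, n=2.0, grid=None):
    """Grid search for the hedge ratio minimizing the sample GSV."""
    if grid is None:
        grid = np.linspace(0.0, 2.0, 2001)
    values = np.array([empirical_gsv(t, r_p, r_f, c, n) for t in grid])
    return grid[np.argmin(values)]


if __name__ == "__main__":
    rng = np.random.default_rng(2)
    r_f = rng.standard_t(df=5, size=2000) * 0.01                     # fat-tailed futures returns
    r_p = 0.85 * r_f + rng.standard_t(df=5, size=2000) * 0.004       # correlated spot returns
    print("theta_GSV =", min_gsv_hedge_ratio(r_p, r_f, c=0.0, n=2.0))
```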
94.4 Estimation and Simulation
94.4.1 Kernel Density Estimators

Assume that we have n independent observations x₁, ..., xₙ from the random variable X with the unknown density function f. The kernel density estimator for the estimation of f is given by

$$\hat f_{h}(x) = \frac{1}{nh}\sum_{i=1}^{n}K\!\left(\frac{x-x_{i}}{h}\right), \quad x \in \mathbb{R}, \qquad (94.20)$$

where K is a so-called kernel function and h is the bandwidth. In this chapter we work with the Gaussian kernel K(x) = (1/√(2π)) exp(−x²/2) and h = (4/3)^{1/5} σ n^{−1/5}.
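The following small sketch implements Eq. 94.20 with the Gaussian kernel and the bandwidth rule just quoted; the sample standard deviation is used for σ, and the data are simulated for illustration.

```python
import numpy as np


def gaussian_kde(x_grid, sample):
    """Kernel density estimator of Eq. 94.20 with a Gaussian kernel and
    bandwidth h = (4/3)^(1/5) * sigma * n^(-1/5)."""
    sample = np.asarray(sample, dtype=float)
    n = sample.size
    h = (4.0 / 3.0) ** 0.2 * np.std(sample, ddof=1) * n ** (-0.2)
    u = (np.asarray(x_grid, float)[:, None] - sample[None, :]) / h
    kernel = np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)
    return kernel.sum(axis=1) / (n * h)


if __name__ == "__main__":
    rng = np.random.default_rng(3)
    sample = rng.normal(0.0, 1.0, 500)          # placeholder for log-return data
    grid = np.linspace(-4.0, 4.0, 201)
    dens = gaussian_kde(grid, sample)
    print("estimated density at 0:", dens[np.abs(grid).argmin()])
```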
Fig. 94.1 Normal density and Gaussian kernel density estimators
Fig. 94.2 Log-densities of daily log returns of major indices and futures (2000–2004)
(For more details, see Scott (1979).) Meanwhile, it is worth noting that Lien and Tse (2000) proposed the kernel density estimation method to estimate the probability distribution of the portfolio return for every θ, and grid search methods were then applied to find the optimum GSV hedge ratio (Figs. 94.1 and 94.2).
94.4.2 Maximum Likelihood Estimation

We focus on how to estimate the parameters of a density function f(x; Θ), where Θ is the set of parameters to be estimated. Suppose that we have n independent observations x₁, ..., xₙ of a random variable X with the density function f(x; Θ). The maximum likelihood estimator Θ̂_MLE is the parameter set that maximizes the likelihood function

$$L(\Theta) = \prod_{i=1}^{n}f(x_{i};\Theta).$$

Clearly this is equivalent to maximizing the logarithm of the likelihood function

$$\log L(\Theta) = \sum_{i=1}^{n}\log f(x_{i};\Theta).$$

The log-likelihood function for the hyperbolic distribution H(α, β, δ, μ) is given by

$$\ell_{H(\alpha,\beta,\delta,\mu)}(\Theta) = n\left[\frac{1}{2}\log\left(\alpha^{2}-\beta^{2}\right)-\log 2-\log\alpha-\log\delta-\log K_{1}\!\left(\delta\sqrt{\alpha^{2}-\beta^{2}}\right)\right] + \sum_{i=1}^{n}\left[-\alpha\sqrt{\delta^{2}+(x_{i}-\mu)^{2}}+\beta(x_{i}-\mu)\right].$$
The symmetric MGH density function is given by the formula

$$\frac{(\alpha/\delta)^{\lambda}}{(2\pi)^{d/2}K_{\lambda}(\alpha\delta)}\;\frac{K_{\lambda-\frac{d}{2}}\!\left(\alpha\sqrt{\delta^{2}+(x-\mu)'\Delta^{-1}(x-\mu)}\right)}{\left(\alpha^{-1}\sqrt{\delta^{2}+(x-\mu)'\Delta^{-1}(x-\mu)}\right)^{\frac{d}{2}-\lambda}},$$

and, in particular, the two-dimensional symmetric hyperbolic distributions (i.e., β = 0 and λ = 3/2) have the density

$$d_{H_{2}}(x) = \frac{(\alpha/\delta)^{3/2}}{2^{3/2}\sqrt{\pi}\,\alpha\,K_{3/2}(\alpha\delta)}\;e^{-\alpha\sqrt{\delta^{2}+(x-\mu)'\Delta^{-1}(x-\mu)}}.$$

From this, we obtain the log-likelihood function for two-dimensional symmetric hyperbolic distributions (Figs. 94.3 and 94.4):
Fig. 94.3 Estimated symmetric H2 distributions
$$\ell_{H_{2}} = n\left[\frac{3}{2}\log\frac{\alpha}{\delta}-\frac{3}{2}\log 2-\frac{1}{2}\log\pi-\log\alpha-\log K_{3/2}(\alpha\delta)\right] - \sum_{i=1}^{n}\alpha\sqrt{\delta^{2}+(x_{i}-\mu)'\Delta^{-1}(x_{i}-\mu)}.$$
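As an illustration, the following sketch maximizes ℓ_H2 numerically with a general-purpose optimizer. For simplicity it fixes Δ at the identity matrix (which satisfies |Δ| = 1) and estimates only α, δ, and μ, whereas the chapter also estimates Δ; the log-scale parameterization of α and δ and the simulated data are assumptions of this sketch.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import kv


def neg_loglik_h2(params, x, delta_inv):
    """Negative two-dimensional symmetric hyperbolic log-likelihood (l_H2),
    with alpha and delta parameterized on the log scale so they stay positive."""
    log_alpha, log_delta, mu1, mu2 = params
    alpha, delta = np.exp(log_alpha), np.exp(log_delta)
    n = x.shape[0]
    d = x - np.array([mu1, mu2])
    quad = np.einsum("ij,jk,ik->i", d, delta_inv, d)   # (x-mu)' Delta^{-1} (x-mu)
    const = (1.5 * np.log(alpha / delta) - 1.5 * np.log(2.0) - 0.5 * np.log(np.pi)
             - np.log(alpha) - np.log(kv(1.5, alpha * delta)))
    loglik = n * const - alpha * np.sum(np.sqrt(delta ** 2 + quad))
    return -loglik


if __name__ == "__main__":
    rng = np.random.default_rng(4)
    x = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.6], [0.6, 1.0]], size=1000)  # placeholder data
    delta_inv = np.eye(2)                               # Delta fixed at the identity in this sketch
    start = np.array([0.0, 0.0, 0.0, 0.0])              # log(alpha)=log(delta)=0, mu=(0,0)
    res = minimize(neg_loglik_h2, start, args=(x, delta_inv), method="Nelder-Mead")
    log_a, log_d, m1, m2 = res.x
    print("alpha, delta, mu:", np.exp(log_a), np.exp(log_d), (m1, m2))
```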
94.4.3 Simulation of Generalized Hyperbolic Random Variables

From the representation of the GH distribution as a conditional normal distribution mixed with the generalized inverse Gaussian, a schematic representation of the algorithm reads as follows (a sketch in code follows the second list below):
1. Sample Y from the GIG(λ, δ, γ) distribution.
2. Sample ε from N(0, 1).
3. Return X = μ + βY + √Y ε.
Similarly, for simulating an MGH-distributed random vector, we have:
1. Set Δ = LᵀL via Cholesky decomposition.
2. Sample Y from the GIG(λ, δ, γ) distribution.
3. Sample Z from N(0, I), where I is the d × d identity matrix.
4. Return X = μ + YΔβ + √Y LᵀZ.
The efficiency of the above algorithms depends on the method of sampling the generalized inverse Gaussian distribution. Atkinson (1982) applied a rejection algorithm to sample from the GIG distribution. We adopt this method for simulating the estimated hyperbolic random variables.
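The sketch below follows the two algorithms above. Instead of Atkinson's rejection method it draws the GIG mixing variable with SciPy's `geninvgauss`; mapping GIG(λ, δ, γ) to that parameterization as `geninvgauss(p=λ, b=δγ, scale=δ/γ)` is an assumption of this sketch, and all parameter values are illustrative.

```python
import numpy as np
from scipy.stats import geninvgauss


def sample_gig(lam, delta, gamma, size, rng):
    """Draw from GIG(lambda, delta, gamma) via scipy's geninvgauss,
    assuming the mapping p = lambda, b = delta*gamma, scale = delta/gamma."""
    return geninvgauss.rvs(p=lam, b=delta * gamma, scale=delta / gamma,
                           size=size, random_state=rng)


def sample_gh(lam, alpha, beta, delta, mu, size, rng):
    """Univariate GH via the normal mean-variance mixture representation."""
    gamma = np.sqrt(alpha ** 2 - beta ** 2)
    y = sample_gig(lam, delta, gamma, size, rng)
    return mu + beta * y + np.sqrt(y) * rng.standard_normal(size)


def sample_mgh(lam, alpha, beta, delta, mu, Delta, size, rng):
    """Multivariate GH: Delta = L'L (Cholesky), X = mu + Y*Delta*beta + sqrt(Y)*L'Z."""
    beta, mu = np.asarray(beta, float), np.asarray(mu, float)
    L = np.linalg.cholesky(Delta).T                 # upper factor, so that L'L = Delta
    gamma = np.sqrt(alpha ** 2 - beta @ Delta @ beta)
    y = sample_gig(lam, delta, gamma, size, rng)
    z = rng.standard_normal((size, len(mu)))
    return mu + y[:, None] * (Delta @ beta) + np.sqrt(y)[:, None] * (z @ L)


if __name__ == "__main__":
    rng = np.random.default_rng(5)
    x = sample_gh(lam=1.0, alpha=2.0, beta=0.3, delta=1.0, mu=0.0, size=5000, rng=rng)
    print("GH sample mean/std:", x.mean(), x.std())
```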
Fig. 94.4 Sharpe value
94.5 Concluding Remarks
Although there are many different theoretical approaches to the optimal futures hedge ratio, under the martingale and joint-normality assumptions the various optimal hedge ratios are identical to the minimum variance hedge ratio. However, empirical studies show that major market data reject the joint-normality assumption. In this paper we propose the generalized hyperbolic distribution as the joint log-return distribution of the spot and futures. In terms of the parameters of the generalized hyperbolic distributions, we obtain several of the most widely used optimal hedge ratios: minimum variance, maximum Sharpe measure, and minimum generalized semivariance. In particular, under mild assumptions on the parameters, we show that these theoretical approaches are equivalent. To estimate these optimal hedge ratios, we first write down the log-likelihood functions for symmetric hyperbolic distributions. We then estimate the parameters by maximizing the log-likelihood functions. Using these MLE parameters for the GH distributions, we obtain the MV hedge ratio and the optimal Sharpe hedge ratio by Theorems 2.1 and 2.3, respectively. Also, based on the MLE parameters and numerical methods, we calculate the minimum generalized semivariance hedge ratio.
Regarding the equivalence of these three optimal hedge ratios, our results suggest that the martingale property plays a much more important role than the joint distribution assumption. However, conditional heteroskedasticity and stochastic volatility are observed in many spot and futures price series. This implies that the optimal hedge strategy should be time-dependent. To account for this dynamic property, parametric specifications of the joint distribution are required. Based on our work here, it would be interesting to extend the results to time-varying hedge ratios.
References Adams, J., & Montesi, C. J. (1995). Major issues related to hedge accounting. Newark: Financial Accounting Standard Board. Atkinson, A. C. (1982). The simulation of generalized inverse Gaussian and hyperbolic random variables. SIAM Journal of Scientific and Statistical Computing, 3, 502–515. Barndorff-Nielsen, O. E. (1977). Exponentially decreasing distributions for the logarithm of particle size. Proceedings of the Royal Society London A, 353, 401–419. Barndorff-Nielsen, O. E. (1978). Hyperbolic distributions and distributions on hyperbolae. Scandinavian Journal of Statistics, 5, 151–157. Barndorff-Nielsen, O. E. (1995). Normal inverse Gaussian distributions and the modeling of stock returns. Research Report no. 300, Department of Theoretical Statistics, Aarhus University. Bawa, V. S. (1975). Optimal rules for ordering uncertain prospects. Journal of Financial Economics, 2, 95–121. Bawa, V. S. (1978). Safety-first, stochastic dominance, and optimal portfolio choice. Journal of Financial and Quantitative Analysis, 13, 255–271. Bibby, B. M., & Sørensen, M. (2003). Hyperbolic processes in finance. In S. T. Rachev (Ed.), Handbook of heavy tailed distributions in finance (pp. 211–248). Amsterdam/The Netherlands: Elsevier Science. Bingham, N. H., & Kiesel, R. (2001). Modelling asset returns with hyperbolic distribution. In J. Knight & S. Satchell (Eds.), Return distribution in Finance (pp. 1–20). Oxford/Great Britain: Butterworth-Heinemann. Blæsid, P. (1981). The two-dimensional hyperbolic distribution and related distribution with an application to Johannsen’s bean data. Biometrika, 68, 251–263. Chen, S. S., Lee, C. F., & Shrestha, K. (2001). On a mean-generalized semivariance approach to determining the hedge ratio. Journal of Futures Markets, 21, 581–598. Chen, S. S., Lee, C. F., & Shrestha, K. (2003). Futures hedge ratio: A review. The Quarterly Review of Economics and Finance, 43, 433–465. De Jong, A., De Roon, F., & Veld, C. (1997). Out-of-sample hedging effectiveness of currency futures for alternative models and hedging strategies. Journal of Futures Markets, 17, 817–837. Eberlein, E., & Keller, U. (1995). Hyperbolic distributions in finance. Bernoulli, 1, 281–299. Eberlein, E., Keller, U., & Prause, K. (1998). New insights into smile, mispricing and value at risk: The hyperbolic model. Journal of Business, 71, 371–406. Fishburn, P. C. (1977). Mean-risk analysis with risk associated with below-target returns. American Economic Review, 67, 116–126. Harlow, W. V. (1991). Asset allocation in a downside-risk framework. Financial Analysts Journal, 47, 28–40. Howard, C. T., & D’Antonio, L. J. (1984). A risk-return measure of hedging effectiveness. Journal of Financial and Quantitative Analysis, 19, 101–112. Johnson, L. L. (1960). The theory of hedging and speculation in commodity futures. Review of Economic Studies, 27, 139–151.
Küchler, U., Neumann, K., Sørensen, M., & Streller, A. (1999). Stock returns and hyperbolic distributions. Mathematical and Computer Modelling, 29, 1–15. Lien, D., & Tse, Y. K. (1998). Hedging time-varying downside risk. Journal of Futures Markets, 18, 705–722. Lien, D., & Tse, Y. K. (2000). Hedging downside risk with futures contracts. Applied Financial Economics, 10, 163–170. Lien, D., & Tse, Y. K. (2001). Hedging downside risk: Futures vs. options. International Review of Economics and Finance, 10, 159–169. Price, K., Price, B., & Nantel, T. J. (1982). Variance and lower partial moment measures of systematic risk: Some analytical and empirical results. Journal of Finance, 37, 843–855. Rydberg, T. H. (1997). The normal inverse Gaussian Levy process: Simulation and approximation. Communications in Statistics: Stochastic Models, 13, 887–910. Rydberg, T. H. (1999). Generalized hyperbolic diffusion processes with applications in finance. Mathematical Finance, 9, 183–201. Scott, W. (1979). On optimal and data-based histograms. Biometrika, 33, 605–610.
95 Instrumental Variables Approach to Correct for Endogeneity in Finance

Chia-Jane Wang
Contents
95.1 Introduction
95.2 Endogeneity: The Statistical Issue
95.3 Instrumental Variable Approach to Endogeneity
95.3.1 Instrumental Variables and Two-Stage Least Square (2SLS)
95.3.2 Hypothesis Testing with 2SLS
95.3.3 Instrumental Variables and Generalized Method of Moments (GMM)
95.3.4 Hypothesis Testing Using GMM
95.4 Validity of Instrumental Variables
95.4.1 Test for Exogeneity of Instruments
95.4.2 Whether IV Estimator Is Really Needed
95.5 Identification and Inferences with Weak Instruments
95.5.1 Problems with Weak Instruments and Diagnosis
95.5.2 Possible Cures and Inferences with Weak Instruments
95.6 Empirical Applications in Corporate Finance
95.7 Conclusion
References
Abstract
The endogeneity problem has received a mixed treatment in corporate finance research. Although many studies implicitly acknowledge its existence, the literature does not consistently account for endogeneity using formal econometric methods. This chapter reviews the instrumental variables (IV) approach to endogeneity from the point of view of a finance researcher who is implementing instrumental variables methods in empirical studies. This chapter is organized into two parts. Part I discusses the general procedure of the instrumental variables approach, including two-stage least square (2SLS) and generalized method
C.-J. Wang Manhattan College, Riverdale, NY, USA. e-mail: [email protected]
of moments (GMM), the related diagnostic statistics for assessing the validity of instruments, which are important but not used very often in finance applications, and some recent advances in econometrics research on weak instruments. Part II surveys corporate finance applications of instrumental variables. We found that the instrumental variables used in finance studies are usually chosen arbitrarily, and very few diagnostic statistics are performed to assess the adequacy of IV estimation. The resulting IV estimates thus are questionable. Keywords
Endogeneity • OLS • Instrumental variables (IV) estimation • Simultaneous equations • 2SLS • GMM • Overidentifying restrictions • Exogeneity test • Weak instruments • Anderson-Rubin statistic • Empirical corporate finance
95.1 Introduction
Corporate finance often involves a number of decisions that are intertwined and endogenously chosen by managers and/or debtholders and/or shareholders. For example, in order to maximize value, firms must form an effective board of directors and grant their managers an optimal pay-performance compensation contract. Debtholders have to decide how much debt to hold and with what features (junior or senior, convertible, callable, maturity length, etc.) the debt should be structured. Given that these endogenously chosen variables are interrelated and are often partially driven by unobservable firm characteristics, the endogeneity problem can make the standard OLS results hard to interpret (Hermalin and Weisbach 2003). The usual approach to the endogeneity problem is to implement the instrumental variables estimation method. In particular, one chooses instrumental variables that are correlated with the endogenous regressors but uncorrelated with the structural equation errors, and then employs a two-stage least square (2SLS) procedure. The endogeneity problem has received a mixed treatment in corporate finance research. Although many studies implicitly acknowledge its existence, the literature does not consistently account for endogeneity using formal econometric methods. There is an increasing emphasis on addressing the endogeneity problem in recent work, and the simultaneous equations model is now being used more commonly. However, the instrumental variables are often chosen arbitrarily and few diagnostic statistics are performed to assess the adequacy of IV estimation. This chapter reviews the instrumental variables approach to endogeneity from the point of view of a finance researcher who is implementing instrumental variables methods in empirical studies. We do not say much about the distribution theory and the mathematical proofs of estimation methods and test statistics, as they are well covered in econometrics textbooks and articles. The chapter proceeds as follows: Sect. 95.2 describes the statistical issue raised by endogeneity. Section 95.3 discusses
the commonly used instrumental variables approaches to endogeneity: the two-stage least squares estimation (2SLS) and the generalized method of moments (GMM) estimation. Section 95.4 discusses the conditions of a valid instrument and the related diagnostic statistics, which are critical but not frequently used in finance applications. Section 95.5 considers the weak instrument problem, that is, the instruments are exogenous but have weak explanatory power for explaining the endogenous regressor. Some recent advances in econometrics research on statistical inference with weak instruments are briefly discussed. Section 95.6 surveys corporate finance applications of instrumental variables. Section 95.7 concludes.
95.2 Endogeneity: The Statistical Issue
To illustrate the endogeneity issue, assume that we wish to estimate the parameter b of a linear regression model for a population of firms

$$Y = Xb + u \qquad (95.1)$$

where Y is the dependent variable, which is typically an outcome such as return or profitability, X is the regressor that explains the outcome, and u is the unobservable random disturbance or error. If u satisfies the classical regression conditions, the parameter b can be consistently estimated by the standard OLS procedure. This can be shown from the probability limit of the OLS estimator

$$\operatorname{plim}\,b_{OLS} = b + Cov(X,u)/Var(X) \qquad (95.2)$$
When the disturbance and the regressor are not correlated, and hence the second term is zero, the OLS estimator will be consistent. But if the disturbance is correlated with the regressor, that is, if the explanatory variable in Eq. 95.1 is potentially endogenous,¹ the usual OLS estimation generally results in a biased estimator.

In applied finance work, endogeneity is often caused by omitted variables and/or simultaneity. The omitted variables problem arises when the explanatory variable X is hard to measure or depends on some unobservable factors that are part of the error term u, so that X and u are correlated. The correlation of explanatory variables with unobservables may be due to self-selection: the firm makes the choice of X for a reason that is unobservable. An example is a firm making a debt issue, in which the terms and structure of the debt offering are likely to be correlated with unobserved private information held by the firm. The Heckman two-step procedure (Heckman 1979) is extensively used in corporate finance for modeling this omitted variable.² Endogeneity may also be caused by simultaneity, in which one or more of the explanatory variables are determined simultaneously along with the dependent variable. For example, if Y is firm value and X is management compensation, the management compensation contract is partly determined by the anticipated firm value Y from choosing X, and the usual OLS estimator is biased in this situation.

¹ Traditionally, a variable is defined as endogenous if it is determined within the context of a model. However, in applied econometrics "endogenous" is used more broadly to describe the situation where an explanatory variable is correlated with the disturbance and the resulting estimator is biased (Wooldridge 2002).

² The finance literature using self-selection models has little interest in estimating the endogenous decision itself (the parameter b in Eq. 95.1) but is more interested in using self-selection models to reveal and test for the private information that influences the decision. In contrast, this chapter focuses on how to implement the IV approach to estimate the parameter consistently. Readers interested in finance applications of self-selection models are referred to Li and Prabhala (2007).
95.3 Instrumental Variable Approach to Endogeneity
95.3.1 Instrumental Variables and Two-Stage Least Square (2SLS)

The instrumental variable (IV) approach provides a general solution to the problem of endogenous explanatory variables. To see how, consider the following setup:

$$Y = Xb + u \qquad (95.3)$$

where Y is an N × 1 vector of observations on the dependent variable, X is an N × K matrix of explanatory variables, and u is an unobservable mean-zero N × 1 vector of disturbances correlated with some elements of X. To use the IV approach, we need at least as many instruments as explanatory variables in the model. The instruments should be sufficiently correlated with the endogenous explanatory variables but asymptotically uncorrelated with u; the explanatory variables that are exogenous can serve as their own instruments since they are uncorrelated with u. Specifically, the valid instruments must satisfy the orthogonality condition, and there must be no linear dependencies among the exogenous variables. In a just-identified setup, using an N × K matrix Z to instrument for X, the IV estimator can be solved as

$$b_{IV} = (Z'X)^{-1}(Z'Y) = (Z'X)^{-1}Z'(Xb+u) \qquad (95.4)$$

$$b_{IV} = b + (Z'X)^{-1}Z'u \qquad (95.5)$$

If Z and u are not correlated and Z'X has full rank,

$$E(Z'u) = 0 \qquad (95.6)$$

$$\operatorname{rank}\,E(Z'X) = K \qquad (95.7)$$
The second term of the right hand side of Eq. 95.5 becomes 0, bIV will be consistent and the unique solution for the true parameters b when the conditions (95.6) and (95.7) hold. When we have more exogenous variables than needed to identify the parameters, for example, if the instrumental variables Z is an N L matrix, where L > K, the model is said to be overidentified, and there are (L K) overidentifying restrictions because (L K) instruments could be discarded and the parameters can still be identified. If L < K, the parameters cannot be identified. Therefore, the order condition L K is necessary for the rank condition, which requires that Z is sufficiently related to X so Z0 K has full column rank K. In an overidentifying model, any linear combinations of the instruments can also be used as instruments. Under homoskedasticity, the two-stage least square (2SLS) is the most efficient IV estimator out of all possible linear combinations of the valid instruments since the method of 2SLS chooses the most highly correlated with the endogenous explanatory variable. The name “two-stage least squares” comes from its two-step procedure: Step 1: Obtain the fitted value of each endogenous explanatory variable from regressing each endogenous explanatory variable on all instrumental variables. This is called the first-stage regression. Using the matrix notation, the matrix of ^ ¼ ZðZ0 ZÞ1 Z0 X (if any xi is exogenous, the fitted values can be expressed as X its fitted value is itself). Step 2: Run the OLS regression of the dependent variable on the exogenous explanatory variables in the structural Eq. 95.1 and the fitted values obtained from the first-stage regression in place of the observations on the endogenous variables. This is the second-stage regression. The IV/2SLS estimator that 0 1 0 ^ can be written as b^ ¼ X ^X ^ Y . Substitute uses the instruments X X ^ this 2SLS estimator can be written as Z(Z0 Z)1Z0 X for X; h i1 1 1 b^ ¼ X0 ZðZ0 ZÞ Z0 X X0 ZðZ0 ZÞ Z0 Y
(95.8)
In practice, it is best to use a software package with 2SLS command rather than carry out the two-step procedure because the reported OLS standard errors of the second-stage regression are not the 2SLS standard errors. The 2SLS residuals and covariance matrix are calculated by the original observations on the explanatory variables instead of the fitted values of the explanatory variables. In searching valid instrument, both the orthogonality condition and the rank condition are equally important. When an instrument is not fully exogenous and the correlation between the instrument and the explanatory variable is too small, the bias in IV estimator may not be smaller than the bias in OLS estimator. To see this, we compare the bias in the OLS estimator with the bias in the IV estimator for model (95.3): bOLS b ¼ CovðX; uÞ=VarðXÞ ¼ su rx, u =sx
(95.9)
bIV b ¼ CovðZ; uÞ=CovðX; ZÞ ¼ su rz, u =sx rx, z
(95.10)
where ri,j is the correlation coefficient between variable i and variable j. Thus, the IV estimator has smaller bias than OLS estimator only if r2x,z is larger than r2z,u/ r2x,u.3 Although this comparison involves the population correlations with unobservable variable u and cannot be estimated directly, it indicates that when the instrument is only weakly correlated to the endogenous regressor, the IV estimator is not likely to improve upon on the OLS (see Bartels 1991 for the discussions of quasi-instrumental variables). Even if the instruments are perfectly exogenous, the low relevance of the instruments can increase asymptotic standard errors and reduce the power of the tests. We can see from Eq. 95.10 that the inconsistency in the IV estimator can get extremely large if the correlation between X and Z gets close to zero and makes the IV estimator undesirable. The orthogonality of an instrument is difficult to ascertain because we cannot test the correlation between one observable instrument and the unobservable disturbance. But in the case of overidentifying model (i.e., the number of instruments exceeding the number of regressors), the overidentifying restrictions test can be used to evaluate the validity of the additional instruments under the assumption that at least one instrument is valid. We will discuss the tests for validity of the instrument in Sect. 95.4. For checking the rank condition, often we estimate the reduced form for each endogenous explanatory to make sure that at least one of the instruments not in X is significant. If the reduced form regression fits poorly, the model is said to suffer from weak instrument problem and the standard asymptotic theory cannot be employed to make inference. We will discuss the weak instruments problem in Sect. 95.5.
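Before turning to hypothesis testing, the following minimal sketch computes the 2SLS estimator of Eq. 95.8 directly from the matrix formula, so that the reported standard errors are based on the 2SLS residuals (built from the original X, not the first-stage fitted values). The simulated data, the instrument strength, and the homoskedasticity assumption are all illustrative.

```python
import numpy as np


def two_sls(y, X, Z):
    """2SLS estimator b = [X'Z(Z'Z)^{-1}Z'X]^{-1} X'Z(Z'Z)^{-1}Z'Y (Eq. 95.8)
    with conventional (homoskedastic) standard errors from the 2SLS residuals."""
    n, k = X.shape
    Pz = Z @ np.linalg.solve(Z.T @ Z, Z.T)          # projection onto the instrument space
    Xhat = Pz @ X                                    # first-stage fitted values
    b = np.linalg.solve(Xhat.T @ X, Xhat.T @ y)      # second stage
    u = y - X @ b                                    # residuals use the ORIGINAL X, not Xhat
    sigma2 = u @ u / (n - k)
    cov = sigma2 * np.linalg.inv(Xhat.T @ Xhat)
    return b, np.sqrt(np.diag(cov))


if __name__ == "__main__":
    rng = np.random.default_rng(6)
    n = 2000
    z = rng.normal(size=n)                           # exogenous instrument
    e = rng.normal(size=(n, 2))
    x_endog = 0.8 * z + e[:, 0]                      # endogenous regressor
    u = 0.5 * e[:, 0] + e[:, 1]                      # error correlated with x_endog, not with z
    y = 1.0 + 2.0 * x_endog + u
    X = np.column_stack([np.ones(n), x_endog])
    Z = np.column_stack([np.ones(n), z])
    b, se = two_sls(y, X, Z)
    print("2SLS estimates:", b, "s.e.:", se)
```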
95.3.2 Hypothesis Testing with 2SLS Testing hypotheses about a single parameter estimate in model (95.3) is straightforward using an asymptotic t-statistic. For testing restrictions on multiple parameters, Wooldridge (2002) provides a method to compute a residual-based F-statistic. Rewrite the Eq. 95.3 into a partitioned model Y ¼ X1 b1 þ X2 b2 þ u
(95.11)
where X1 is N K1, X2 is N K2, and K1 + K2 ¼ K. Let Z denote an N L matrix of instruments and assume the rank and the orthogonality conditions hold. Our interest is to test the K2 restrictions: H0 : b2 ¼ 0 against H1 : b2 6¼ 0
(95.12)
In order to calculate the F-statistics for 2SLS, we need to calculate the sum of ^ r; the squared residuals from the restricted second-stage regression, denoted as SSR sum of squared residuals from the unrestricted second-stage regression, denoted as ^ ur; and the sum of squared residuals from unrestricted 2SLS, denoted as SSRur. SSR
95
The F-statistics is calculated as ^ r SSR ^ ur ðN KÞ SSR F FK2 , NK K2 SSRur
(95.13)
When the homoskedasticity assumptions cannot be made, we need to calculate the heteroskedasticity-robust standard errors for 2SLS. Some statistical packages compute these standard errors using a simple command. Wooldridge (2002) shows the robust standard errors can be computed in the following steps: Step 1: Apply 2SLS procedures and obtain the 2SLS residual, denoted ^u i for ^ 2 , where each observation i, i ¼ 1. . .N, then use ^u i to calculate the SSR and s 0 1 SSR ^X ^ ^ 2 NK ^2 X , and Var b^j , where Var b^j ¼ s , and the standard error is s denoted as se b^j , j ¼ 1. . .K. Step 2: Obtain the fitted value of each explanatory variables, denoted as x^ ij , j ¼ 1. . .K. Step 3: Regress each element of ^x ij on all other ^x ik where k 6¼ j, and obtain the residuals from the regressions, denoted as ^r ij for each j. Step 4: Compute the heteroskedasticity-robust standard errors of b^j , denoted as seheter b^ : j
2 32 1=2 N 1=2 se b^j XN 4 5 1 ^j ¼ ^r ^u (95.14) , where m seheter b^j ¼ i¼1 ij i ^ ^j ðN KÞ s m
95.3.3 Instrumental Variables and Generalized Method of Moments (GMM) When we have a system of equations, with sufficient instruments we can still apply 2SLS procedure to each single equation in the system to obtain acceptable results. However, in many cases we can obtain more efficient estimators by estimating parameters in the system equations jointly. This system instrumental variables estimation approach is based on the principle of the generalized method of moments (GMM). As discussed in the previous section, the orthogonality conditions require that the valid instruments are uncorrelated to the disturbance, i.e., the population moment condition E(Z0 m) is equal to zero. Thus, by this principle the optimal parameter estimate is chosen so that the corresponding sample moment is also equal to zero. To show this, reconsider the linear model (95.3) for a random sample from the population, with Z as an N L matrix of instruments orthogonal to u for the set of L linear equations in K unknowns and assume the rank condition holds, the sample moment condition must satisfy
N1
N X
Z0i ðYi Xi b^ ¼ 0
(95.15)
i¼1
where the i subscript is included for notational clarity. When we have an exactly identified model where L ¼ K and Z0 X is invertible, we can solve for the general1 ized method of moments (GMM) estimator b^ in full matrix notation as b^ ¼ ðZ0 XÞ Z0 Y, which is the same IV estimator obtained in Eq. 95.4. When L > K, the model is overidentified; we cannot choose K unknowns to satisfy the L equations and generally the Eq. 95.15 will not have a solution. If we cannot set the sample moment condition exactly equal to zero, we can at least choose the parameter estimate b^ so that the vector in Eq. 95.15 is as close to zero as possible. One idea is to minimize the squared Euclidean length of the L 1 vector in Eq. 95.15 and the optimal GMM estimator b^ is the minimizer. point for GMM estimation is to specify the GMM criterion function The starting ^ Y , which is the sample moment in Eq. 95.15 in a quadratic form with an Q b; ^: L L symmetric and positive definite weighting matrix W " #0 " # N N X X 0 0 ^ ^ ^ ^ Q b; Y Z Yi Xi b W Z Yi Xi b i
i
i¼1
(95.16)
i¼1
^ The GMM estimator b can be obtained by minimizing the criterion function Q ^ Y over b^ and the unique solution in full matrix notation is b; ^ 0 X 1 X0 ZWZ ^ 0Y b^ ¼ X0 ZWZ
(95.17)
It can be shown that if the rank condition holds and the chosen weighting matrix ^ W is positive definite, the resulting GMM estimator is asymptotically consistent. There is no shortage of such matrix. However, it is important that we choose the weighting matrix that produces the GMM estimator with the smallest asymptoticvariance. To do this, first we need to find the asymptotic variance matrix of pffiffiffiffi ^ N b b . Plug Eq. 95.17 into Eq. 95.3 to get ^ 0 X 1 X0 ZWZ ^ 0u b^ ¼ b þ X0 ZWZ (95.18) pffiffiffiffi Thus, the asymptotic variance matrix Avar N b^ b can be shown as h i pffiffiffiffi 1 Avar N b^ b ¼ Var ðC0 WCÞ C0 WZ0 u 1
¼ ðC0 WCÞ C0 WSWCðC0 WCÞ
1
(95.19)
where Z0 X C, S Var(Z 0 u). It can be shown that if the weighting matrix W is chosen such that W ¼ S1, the GMM estimator has the least asymptotic variance. The details can be found in Hayashi (2000, p. 212). By setting W S1, the Eq. 95.19 can be simplified as
95
pffiffiffiffi 1 0 1 1 N b^ b ¼ C0 S1 C ¼ X ZS ZX
(95.20)
Avar
Therefore, the efficiency of the GMM estimator depends on if we can consistently estimate S (the variance of the asymptotic distribution of Z0 u). Since the 2SLS estimator is consistent though not necessarily efficient, it can be used as an initial estimator to obtain the residuals, which in turn can be used as the estimator for S. The two-step efficient GMM procedure is given in Wooldridge (2002) as follows: Step 1: Use the 2SLS estimator as an initial estimator since it is consistent, denoted as e b, to obtain the 2SLS residuals by e m i ¼ Yi Xi e b, i ¼ 1, 2,. . . ,N. For a system of equations, apply 2SLS equation by equation. This allows the possibility that different instruments are used for different system equations. Step 2: Estimate S, the variance of the asymptotic distribution of Z0 u by
^ ¼ N1 S
N X
0
0
Zi e u ie u i Zii
(95.21)
i¼1
^ 1 as the weighting matrix to obtain the efficient GMM Then use W ¼ S ^ 1 into Eq. 95.17 and obtain the efficient GMM estimator estimator. We can plug S 1 ^ 1 Z0 X ^ 1 Z0 Y b^ ¼ X0 ZS X0 ZS
(95.22)
^ of the optimal GMM Following Eq. 95.19, the asymptotic variance matrix V estimator can be estimated as 1 ^ 1 ZX ^ ¼ X 0 ZS V (95.23) ^ are the asymptotic The square roots of diagonal elements of this matrix V standard errors of the optimal GMM estimator.
95.3.4 Hypothesis Testing Using GMM The t-statistics for the hypothesis testing after GMM estimation can be directly computed by using the asymptotic standard errors obtained from the variance ^ . For testing multiple restrictions, the GMM criterion function can be matrix V used to calculate the statistic. For example, suppose our interest is to test Q restrictions for the K unknowns in the system; thus, the Wald statistics is a limiting null w2Q. To apply this statistics, we need to assume the optimal weighting matrix W ¼ S1 is chosen to obtain the GMM estimator with and without imposing the Q restrictions. Define the residuals evaluated at the unrestricted GMM estimator b^u as ^u u Yi Xi b^u and the residuals evaluated at the restricted GMM estimator b^r as ^u r Yi Xi b^r . By plugging the calculated
residuals ^ u u and ^ u r into the criterion function (95.16) for the unrestricted model and the restricted model, respectively, the criterion function statistic is computed as the difference between the two criterion function values, divided by the sample size N. The criterion function statistic has chi-square distribution with Q degrees of freedom.
95.4
Validity of Instrumental Variables
95.4.1 Test for Exogeneity of Instruments When researchers explore various alternative ways to solve the endogeneity problem and consider the instrumental variables estimation to be the most promising econometric approach, the next step is to find and justify the instruments used. The orthogonality condition (95.6) is the sufficient condition in order for a set of instruments to be valid. When the condition is not satisfied, the IV estimator is inconsistent. In practice, we cannot test whether an observable instrument is uncorrelated with the unobservable disturbance. But when we have more instruments than needed to identify an equation, we can use any subset of the instruments for estimation and the estimates should not be significantly different if the instruments are valid. Following the principle of Hausman (1978), we can test whether the additional instruments are valid by comparing the estimates of an overidentified model with those of a just-identified model. The model we wish to test is Y ¼ Xb þ u
(95.24)
where X is a vector of 1 K explanatory variables with some elements correlated with u, Z is a vector of 1 L instruments, and L > K so the model has L K overidentifying restrictions and we can use any 1 K subset of Z as instruments for X. Let Z1 be a vector of 1 (L K) extra instruments. The overidentifying restrictions require that E(Z1 0 u) ¼ 0. The LM statistic can be computed by regressing the 2SLS residuals from the original model (95.24) on the full set of instruments Z and use the obtained uncentered R2 times N. The asymptotic distribution of the statistic is w2LK. Another way to test the overidentifying restrictions is to use a test statistic based on the difference between the minimized values of the IV criterion function for the overidentified model and the just-identified model. By the moment condition, the minimized value of the criterion function is equal to zero for the just-identified model. Thus, the test statistic is the minimized value of the criterion function for the overidentified model (95.24), divided by the estimate of the error variance from the same model. This test is called Sargan’s test, after Sargan (1958), and numerically the Sargan’s test is identical to the LM statistic discussed above. The usefulness of the overidentifying restrictions test is that if we cannot reject the null, we can have some confidence in the overall set of instruments used. This suggests that it is preferable to have an overidentified model for
95
empirical application, because the overidentifying restrictions can and should always be tested. However, we also need to note that the overidentifying restrictions test is performed under the assumption that at least one instrument is valid. If this assumption does not hold, we will have a situation that all instruments have similar bias and the test will not reject the null. It could also be that the test has low power 4 for detecting the endogeneity of some of the instruments. The heteroskedasticity-robust test can be computed as follows. Let H be any 1 (L K) subset of Z. Regress each element of H onto the fitted value of X and collect the residuals, denoted as ^n. The asymptotic w2LK test statistic is obtained as N minus the sum of squared residuals from regressing 1 on ^u 0 ^n . For the system equations using GMM estimation, the test of overidentifying restrictions can be computed by using a similar criterion function-based procedure. The overidentification test statistic is the minimized criterion function (95.16) evaluated at the optimal (efficient) GMM estimator, and there is no need to be divided by an estimator of error variance because the GMM criterion function takes account of the covariance matrix of the error terms: " N
1=2
N X i¼1
Z0i
# " # N 0 X 1=2 0 ^ N Yi Xi b^ W Zi Yi Xi b^ w2 LK
(95.25)
i¼1
The GMM estimator using the optimal weighting matrix is called the minimum chi-square estimator because the GMM estimator b^ is chosen to make the criterion function minimum. If the chosen weighting matrix is not optimal, the expression (95.25) fails to hold. This test statistic is often called Hansen’s J-statistic after Hansen (1982) or Hansen-Sargan statistic for its close relationship with the Sargan’s test in the IV/2SLS estimation. The Hansen’s J-statistic is distributed as chi-square in the number of overidentifying restrictions, (L K), since K degrees of freedom are lost for having estimated K parameters,5 and it is consistent in the presence of heteroskedasticity. It is strongly suggested that an overidentifying restrictions test on instruments should always be performed before formally using the IV estimation, but we also need to be cautious in interpreting the test results. There are several situations in which the null will be rejected. One possibility is that the model is correctly specified, and some of the instruments are indeed correlated with the disturbance and thus are invalid. The other possibility is that the model is not correctly specified, for instance, some variables are omitted from the regression function. In either case, the overidentifying test statistic leads us to reject the null hypothesis but we cannot be certain about which is the case. Another problem is that in small samples, the actual size of the Hansen’s J-test far exceeds the nominal size and the test usually rejects too often. 4
⁴ If the instruments are only weakly related to the endogenous explanatory variables, the power of the test can be low.
⁵ A potential problem is that the test is not consistent against some failures of the orthogonality conditions because of the loss of degrees of freedom from L to L − K.
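To make the mechanics of the N·R² (Sargan/LM) version of the overidentification test concrete, the following minimal sketch computes 2SLS residuals and the statistic for a single equation in Python with NumPy and SciPy; the function names and the layout of y, X, and Z are illustrative assumptions and not taken from the chapter, and in practice a statistical package's built-in routines would normally be used.

```python
import numpy as np
from scipy import stats

def two_sls(y, X, Z):
    """2SLS for y = X*beta + u with instrument matrix Z (N x L, L >= K)."""
    # First stage: fitted values of X from the full instrument set
    X_hat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]
    beta = np.linalg.lstsq(X_hat, y, rcond=None)[0]
    resid = y - X @ beta            # residuals use the original X, not X_hat
    return beta, resid

def sargan_test(y, X, Z):
    """Sargan/LM overidentification test: N times the uncentered R^2 from
    regressing the 2SLS residuals on the full instrument set Z."""
    N, L = Z.shape
    K = X.shape[1]
    _, u_hat = two_sls(y, X, Z)
    fitted = Z @ np.linalg.lstsq(Z, u_hat, rcond=None)[0]
    r2_uncentered = (fitted @ fitted) / (u_hat @ u_hat)
    stat = N * r2_uncentered
    df = L - K                      # number of overidentifying restrictions
    p_value = stats.chi2.sf(stat, df)
    return stat, df, p_value
```

The heteroskedasticity-robust variant described above can be obtained analogously, by regressing 1 on the products of the 2SLS residuals and the residuals from regressing an (L − K) subset of Z on the fitted values of X, and using N minus the resulting sum of squared residuals as the statistic.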
95.4.2 Whether the IV Estimator Is Really Needed
In many cases, we may suspect that certain explanatory variables are endogenous but do not know whether the IV estimation is reasonably preferred to the OLS estimation. However, it is important to recognize that using instrumental variables to address the endogeneity problem in empirical research involves a trade-off between efficiency and consistency. As discussed in Bartels (1991), the asymptotic mean square error of the IV estimator can be partitioned into three components:
\mathrm{AMSE}(\hat{\beta}_{IV}) = \frac{\sigma_u^2}{N\sigma_X^2}\left[ 1 + \frac{1-\rho_{XZ}^2}{\rho_{XZ}^2} + \frac{N\rho_{Zu}^2}{\rho_{XZ}^2} \right]   (95.26)
where u is the structural error, X is the explanatory variable, Z is the instrumental variable, and N is the sample size. The first term in the bracket of (95.26) corresponds to the asymptotic variance of the OLS estimator, the second term corresponds to the additional asymptotic variance produced by using the IV estimator rather than the OLS estimator, and the third term captures the inconsistency that arises if the instruments are not truly exogenous. Thus, even when we find perfectly exogenous instruments so that ρ²_Zu = 0, the asymptotic variance of the IV estimator exceeds that of the OLS estimator by the factor 1/ρ²_XZ. As the correlation between X and Z gets close to zero, the standard error of the IV estimator can become extremely large. Because the IV estimator is less efficient than the OLS estimator, unless we have strong evidence that the explanatory variable is endogenous, the IV estimation is not preferred to the OLS estimation. Therefore, it is always useful to run a test for endogeneity before using the IV approach. This endogeneity test dates back to Durbin (1954) and was subsequently extended by Wu (1973) and Hausman (1978); thus, tests of this type are referred to as Durbin-Wu-Hausman tests. To illustrate the procedure, write a population model as

Y_1 = \beta Y_2 + \gamma Z_1 + u   (95.27)
where Y₁ is the dependent variable, Y₂ is a 1 × K₂ vector of possibly endogenous explanatory variables, Z₁ is a 1 × K₁ vector of exogenous explanatory variables, and K₁ + K₂ = K; also, Z₁ is a subset of the 1 × L exogenous variables Z, where Z is assumed to satisfy the orthogonality condition and E(Z′Y₂) has full rank. Our interest is to test the null hypothesis that Y₂ is actually exogenous:

H₀: Y₁ = βY₂ + γZ₁ + u,  E(Y₂′u) = 0
against
H₁: Y₁ = βY₂ + γZ₁ + u,  E(Y₂′u) ≠ 0

Under H₀ both the IV and the OLS estimators are consistent, and they should not differ significantly. Under H₁ only the IV estimator is consistent. Thus, the original idea of the endogeneity test is to check whether the IV estimator is significantly different from the OLS estimator. The test can also be made by
just comparing the estimated coefficients of the parameters of interest, which is β̂ in our case. The appropriate Hausman statistic can thus be calculated as

\left( \hat{\beta}_{IV} - \hat{\beta}_{OLS} \right)' \left[ \mathrm{Var}(\hat{\beta}_{IV}) - \mathrm{Var}(\hat{\beta}_{OLS}) \right]^{-1} \left( \hat{\beta}_{IV} - \hat{\beta}_{OLS} \right).⁶

We often use Hausman's (1978) regression-based form of the test, which is easier to compute and asymptotically equivalent to the original endogeneity test. Following the procedure described in Wooldridge (2002), the first step is to regress each element of the K₂ possibly endogenous variables on all exogenous variables Z. The 1 × K₂ vector of the obtained reduced-form errors is denoted as ν. Since Z is uncorrelated with the structural error u and the reduced-form error ν, Y₂ is endogenous if and only if u and ν are correlated. To test this, we can project u onto ν as

u = \lambda \nu + e   (95.28)
where ν is uncorrelated with e and e is uncorrelated with Z; thus, testing whether Y₂ is exogenous is equivalent to jointly testing whether λ (the K₂ restrictions) is significantly different from 0. By plugging Eq. 95.28 into Eq. 95.27, we obtain the equation

Y_1 = \beta Y_2 + \gamma Z_1 + \lambda \nu + e   (95.29)
Since e is uncorrelated with ν, Z₁, and Y₂ by construction, the test of the null hypothesis can be carried out with a standard joint F-test on λ in an OLS regression (for a single endogenous variable, a t-statistic can be used in the same procedure), and the F-statistic has the F(K₂, N − K − K₂) distribution. If we reject the null H₀: λ = 0, there is evidence that at least some elements of Y₂ are indeed endogenous, so the use of the IV approach is justified, assuming the instruments are valid. If heteroskedasticity is suspected under H₀, the test can be made robust to heteroskedasticity in u (since u = e under H₀) by computing heteroskedasticity-robust standard errors.⁷ Another useful feature of this regression-based form of the test is that the OLS estimates of β and λ in Eq. 95.29 are identical to the 2SLS estimates for Eq. 95.27 using Z as the instruments (see Davidson and MacKinnon 1993 for details), which allows us to examine whether the differences between the OLS and 2SLS point estimates are practically significant.
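A minimal sketch of the regression-based Durbin-Wu-Hausman procedure just described might look as follows; the argument layout and function names are illustrative assumptions (Z₁ is assumed to contain any constant term), not the chapter's own implementation.

```python
import numpy as np
from scipy import stats

def dwh_test(y1, Y2, Z1, Z):
    """Regression-based Durbin-Wu-Hausman test of H0: Y2 is exogenous.
    y1: (N,) dependent variable; Y2: (N, K2) suspect regressors;
    Z1: (N, K1) included exogenous regressors; Z: (N, L) all exogenous variables."""
    N, K2 = Y2.shape
    K1 = Z1.shape[1]
    # Step 1: reduced-form residuals from regressing each element of Y2 on Z
    V = Y2 - Z @ np.linalg.lstsq(Z, Y2, rcond=None)[0]
    # Step 2: augmented OLS regression y1 = Y2*b + Z1*g + V*lambda + e
    W_unres = np.column_stack([Y2, Z1, V])
    W_res = np.column_stack([Y2, Z1])
    e_unres = y1 - W_unres @ np.linalg.lstsq(W_unres, y1, rcond=None)[0]
    e_res = y1 - W_res @ np.linalg.lstsq(W_res, y1, rcond=None)[0]
    # Joint F-test that the K2 coefficients on V (lambda) are all zero
    ssr_u, ssr_r = e_unres @ e_unres, e_res @ e_res
    df_denom = N - K1 - 2 * K2
    F = ((ssr_r - ssr_u) / K2) / (ssr_u / df_denom)
    p_value = stats.f.sf(F, K2, df_denom)
    return F, p_value
```

Rejecting the null here is the signal that the IV approach, rather than OLS, is warranted, provided the instruments themselves are defensible.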
⁶ If the assumption of homoskedasticity cannot be maintained, this standard error is invalid because the asymptotic variance of the difference is no longer the difference in asymptotic variances.
⁷ Since the robust (Huber-White) standard errors are asymptotically valid in the presence of heteroskedasticity of unknown form, including homoskedasticity, these standard errors are often reported in empirical research, especially when the sample size is large. Several statistical packages such as Stata now report these standard errors with a simple command, so it is easy to obtain the heteroskedasticity-robust standard errors.
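As a small illustration of the robust standard errors mentioned in footnote 7, a minimal sketch of the Huber-White covariance estimator follows; the interface is hypothetical and ignores the finite-sample corrections that packages such as Stata apply by default.

```python
import numpy as np

def white_se(X, resid):
    """Heteroskedasticity-robust (Huber-White) standard errors for an OLS fit
    with design matrix X (N x p) and residual vector resid (N,)."""
    XtX_inv = np.linalg.inv(X.T @ X)
    meat = (X * resid[:, None] ** 2).T @ X      # sum_i u_i^2 * x_i x_i'
    cov = XtX_inv @ meat @ XtX_inv              # sandwich covariance
    return np.sqrt(np.diag(cov))
```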
95.5 Identification and Inferences with Weak Instruments
95.5.1 Problems with Weak Instruments and Diagnosis
As stated at the very beginning of this chapter, a valid instrument should be asymptotically uncorrelated with the structural error (orthogonality) but sufficiently correlated with the endogenous explanatory variable for which it is supposed to serve as an instrument (relevance). The relevance condition is critical for the structural model to be identified. If the instruments have little relevance, they will not enter the first-stage regression; the sampling distributions of IV statistics are then generally not normal, and the IV estimates and standard tests are unreliable.
A simple way to detect the presence of weak instruments is to look at the first-stage F-statistic for the null hypothesis that the coefficients on the instruments are jointly equal to zero. Staiger and Stock (1997) suggest that the instruments be considered weak if the first-stage F-statistic is less than 10. Some empirical studies use the R² of the first-stage regression as a measure of instrument relevance. However, Shea (1997) argues that for a model with multiple endogenous regressors, when the instruments are highly collinear, IV may work poorly even if R² is high in each first-stage regression. For instance, suppose both the vector of endogenous regressors X and the vector of instruments Z have rank 2, while only one element of Z is highly correlated with X. In this situation, the regression of each element of X onto Z will have a high R² even though β in the structural model may not be identified. Instead, Shea (1997) proposes a partial R², which measures instrument relevance by taking the intercorrelations among the instruments into account. The idea is that the instruments work best when the part of the instruments important to one endogenous regressor is linearly independent of the part important to the other endogenous regressor. Using the example above, Shea's partial R² is the squared correlation between the component of one endogenous regressor X₁ orthogonal to the other endogenous regressor X₂ (i.e., the residuals from regressing X₁ onto X₂) and the component of the fitted values of X₁ orthogonal to the fitted values of X₂ (i.e., the residuals from regressing the fitted values of X₁ onto the fitted values of X₂). Shea's partial R² can be corrected for degrees of freedom by

\bar{R}_p^2 = 1 - \left[ (N-1)/(N-K) \right]\left( 1 - R_p^2 \right)   (95.30)
where R̄²_p is the corrected partial R², R²_p is the uncorrected partial R², N is the sample size, and K is the number of exogenous variables in the system.
Another type of test for instrument relevance is a test of whether the equation is identified. Rothenberg et al. (1984) show that, at a formal level, the strength of instruments can be characterized in terms of the so-called concentration parameter associated with the first-stage regression. The author considers a simple linear model with a single endogenous regressor X and no included exogenous regressors:

Y = X\beta + u   (95.31)
X = Z\Pi + v   (95.32)
where Y and X are N × 1 vectors of observations on the endogenous variables, Z is an N × K matrix of instruments, and u and v are N × 1 error vectors. The errors are assumed to be i.i.d. N(0, Σ), where σ²_u, σ_uv, and σ²_v are the elements of Σ, and the correlation between the error terms is ρ = σ_uv/(σ_u σ_v). In matrix notation the 2SLS estimator can be written as β̂_2SLS = (X′P_Z X)^{-1}(X′P_Z Y), where the idempotent matrix P_Z = Z(Z′Z)^{-1}Z′. The concentration parameter μ² associated with the first-stage reduced form is defined as

\mu^2 = \Pi' Z' Z \Pi / \sigma_v^2   (95.33)

The concentration parameter μ² can be interpreted in terms of F, the first-stage F-statistic for testing the hypothesis Π = 0 (i.e., that the instruments do not enter the first-stage regression). Let F̃ be the infeasible counterpart of F using the true value of σ²_v. Then KF̃ has a noncentral chi-squared distribution with K degrees of freedom and noncentrality parameter μ². When the sample size is large, E(F̃) ≅ μ²/K + 1; thus, for large values of μ²/K, (F − 1) can be used as an estimator of μ²/K. Rothenberg et al. (1984) show that the concentration parameter μ² plays an important role in the approximation to the distributions of 2SLS estimators and test statistics. The author emphasizes that for the normal approximation to the distribution of the 2SLS estimator to be precise, the concentration parameter μ² must be large. Thus, a small F-statistic (i.e., a small value of μ²/K) can indicate the presence of weak instruments.
Several tests have been developed that use the concentration matrix to test for weak identification. Cragg and Donald (1993) propose a statistic using the minimum eigenvalue of the concentration matrix to test the null hypothesis of underidentification, which occurs when the concentration matrix is singular. Stock and Yogo (2002) argue that when the concentration matrix is nonsingular but its eigenvalues are sufficiently small, inferences based on conventional normal approximations are misleading even though the parameters may be identified. Thus, the minimum eigenvalue of the concentration matrix, μ²/K, can be used to detect the presence of weak instruments. By this principle, Stock and Yogo (2002) develop two alternative quantitative definitions of weak instruments. A set of instruments is considered weak if μ²/K is small enough that the bias of the IV estimator relative to the bias of the OLS estimator exceeds a certain threshold, depending on the researcher's tolerance, for example, 10 %. Alternatively, a set of instruments is considered weak if μ²/K is small enough that a conventional α-level Wald test based on the IV statistic has an actual size exceeding a certain threshold, again depending on the researcher's tolerance. Stock and Yogo (2002) propose using the first-stage F-statistic for making inferences about μ²/K and develop critical values of the F-statistic corresponding to the weak instrument threshold on μ²/K. For example, if a researcher requires the 2SLS relative bias to be no more than 10 %, then for a model with three instruments the computed first-stage F-statistic has to be larger than 9.08, corresponding to a threshold value of μ²/K larger than 3.71, so that the null
hypothesis that the 2SLS relative bias is less than or equal to 10 % (i.e., instruments are weak) will be rejected. The readers are referred to Stock and Yogo (2002) for a tabulation of the critical values for weak instrument tests with multiple endogenous regressors.
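The first-stage diagnostics discussed in this subsection can be computed directly. The sketch below is illustrative only (names and argument layout are assumptions, and intercepts are assumed to be included in the exogenous-regressor matrices): it returns the first-stage F-statistic for the excluded instruments, to be compared with the Staiger-Stock rule of thumb of 10 or with the Stock-Yogo critical values, and a two-regressor version of Shea's partial R².

```python
import numpy as np

def first_stage_F(x, Z1, Z_excl):
    """First-stage F-statistic for the excluded instruments Z_excl in the
    regression of one endogenous regressor x on [Z1, Z_excl]."""
    N = len(x)
    X_full = np.column_stack([Z1, Z_excl])
    e_full = x - X_full @ np.linalg.lstsq(X_full, x, rcond=None)[0]
    e_restr = x - Z1 @ np.linalg.lstsq(Z1, x, rcond=None)[0]
    q = Z_excl.shape[1]
    df = N - X_full.shape[1]
    F = ((e_restr @ e_restr - e_full @ e_full) / q) / (e_full @ e_full / df)
    return F   # Staiger-Stock rule of thumb: instruments are suspect if F < 10

def shea_partial_r2(X1, X2, X1_hat, X2_hat):
    """Shea's partial R^2 for X1 with two endogenous regressors: squared correlation
    between the part of X1 orthogonal to X2 and the part of X1's first-stage fitted
    values orthogonal to X2's first-stage fitted values (fitted values supplied)."""
    r1 = X1 - X2 * np.linalg.lstsq(X2.reshape(-1, 1), X1, rcond=None)[0]
    r2 = X1_hat - X2_hat * np.linalg.lstsq(X2_hat.reshape(-1, 1), X1_hat, rcond=None)[0]
    return np.corrcoef(r1, r2)[0, 1] ** 2
```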
95.5.2 Possible Cures and Inferences with Weak Instruments
Many of the key issues of weak instruments have been studied for decades; however, most of the research on estimation and inference robust to weak instruments is quite recent, and its applications in finance remain to be seen. Therefore, this section only touches upon the topic and refers the reader to the original articles for details. In their survey of weak instruments and identification, Stock et al. (2002) consider the Anderson-Rubin statistic "fully robust" to weak instruments, in the sense that the procedure has the correct size regardless of the value of the concentration parameter. For testing the null hypothesis β = β₀, the Anderson-Rubin statistic (Anderson and Rubin 1949) is computed as

AR(\beta_0) = \frac{(Y - X\beta_0)' P_Z (Y - X\beta_0)/K}{(Y - X\beta_0)' M_Z (Y - X\beta_0)/(N-K)} \sim F_{K, N-K}   (95.34)
where P_Z = Z(Z′Z)^{-1}Z′ and M_Z = I − P_Z. Testing the null hypothesis that the coefficients of the endogenous regressors in the structural equation are jointly equal to zero is numerically equivalent to estimating the reduced form of the equation (with the full set of instruments as regressors) and testing that the coefficients of the excluded instruments are jointly equal to zero. Therefore, the AR statistic is often used for testing overidentifying restrictions. The Anderson-Rubin procedure provides valid tests and confidence sets under weak instrument asymptotics,⁸ but it has low power when too many instruments are added (see Dufour 2003). Berkowitz et al. (2012) argue that when there is a mild violation of the orthogonality condition, the Anderson and Rubin (1949) test may be oversized. To correct this problem, the authors fractionally resample the Anderson-Rubin test by modifying Wu's (1990) resampling technique and obtain valid but more conservative critical values. Other fully robust procedures discussed in Stock et al. (2002) include Moreira's (2002) conditional test, which fixes the size distortion of the test in the presence of weak instruments and can be used to make reliable inferences about the coefficients of the endogenous variables in the structural equation, and Kleibergen's (2002) statistic.
⁸ Stock et al. (2002) define weak instrument asymptotics as the alternative asymptotic methods that can be used to analyze IV statistics in the presence of weak instruments. Weak instrument asymptotics involves a sequence of models chosen to keep the concentration parameter constant as the sample size N → ∞ with the number of instruments held fixed.
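A sketch of the Anderson-Rubin statistic in Eq. 95.34 follows; the hypothesized value beta0 and the variable layout are illustrative assumptions.

```python
import numpy as np
from scipy import stats

def anderson_rubin(y, X, Z, beta0):
    """Anderson-Rubin statistic for H0: beta = beta0 in y = X*beta + u,
    with instrument matrix Z (N x K); its size is robust to weak instruments."""
    N, K = Z.shape
    e0 = y - X @ beta0
    P_e0 = Z @ np.linalg.lstsq(Z, e0, rcond=None)[0]   # P_Z (y - X*beta0)
    M_e0 = e0 - P_e0                                   # M_Z (y - X*beta0)
    AR = (P_e0 @ P_e0 / K) / (M_e0 @ M_e0 / (N - K))
    p_value = stats.f.sf(AR, K, N - K)
    return AR, p_value
```

Inverting this test over a grid of beta0 values yields a weak-instrument-robust confidence set.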
Kleibergen's statistic is robust under both conventional and weak instrument asymptotics. Stock et al. (2002) also consider several k-class estimators that are "partially robust" to weak instruments, in the sense that these k-class estimators are more reliable than 2SLS. The k-class is a set of estimators defined by the following estimating equation with an arbitrary scalar k:

\hat{\beta}(k) = \left[ X'(I - kM_Z)X \right]^{-1} \left[ X'(I - kM_Z)Y \right]   (95.35)
This class includes 2SLS, LIML, and Fuller-k. 2SLS is a k-class estimator with k equal to 1; LIML is a k-class estimator with k equal to the LIML eigenvalue; Fuller-k, also called Fuller's modified LIML and proposed by Fuller (1977), sets k = k_LIML − a/(N − K), where K is the total number of instruments and a is the Fuller constant, and the Fuller estimator with a = 1 yields results that are unbiased to second order with fixed instruments and normal errors. The Jackknife 2SLS estimator, proposed by Angrist et al. (1999), is asymptotically equivalent to a k-class estimator with k = 1 + K/(N − K) (Chao and Swanson 2005). For a discussion of LIML and k-class estimators, see Davidson and MacKinnon (1993). Stock et al. (2002) find that LIML, Fuller-k, and Jackknife estimators have lower critical values for the weak instrument test than 2SLS (so the null will not be rejected too often) and thus are more reliable when the instruments are weak. Anderson et al. (2010, 2011) show that the LIML estimator performs well in terms of bounded loss functions and probabilities in the presence of many weak instruments. However, Hahn et al. (2004) argue that, because it lacks finite moments, LIML sometimes performs well but sometimes poorly in weak instrument situations. They find that the interquartile range and the root MSE of LIML often far exceed those of 2SLS and hence suggest extreme caution in using LIML in the presence of weak instruments. Instead, Hahn et al. (2004) recommend the Jackknife 2SLS estimator and the Fuller-k estimator because these two estimators do not have the "no moment" problem that LIML has. Theoretical calculations and simulations show that the Jackknife 2SLS estimator improves on 2SLS when many instruments are used and the weak instrument problem therefore usually occurs (see Chao and Swanson (2005) and Angrist et al. (1999)). Hahn et al. (2004) also find that the bias and mean square error of the Fuller-k estimator are smaller than those of 2SLS and LIML.
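The k-class family in Eq. 95.35 is straightforward to compute once k is chosen; the sketch below is illustrative (k_LIML itself, the smallest root of a generalized eigenvalue problem involving [Y X], is assumed to be supplied rather than computed here).

```python
import numpy as np

def k_class(y, X, Z, k):
    """k-class estimator beta(k) = [X'(I - k*M_Z)X]^{-1} X'(I - k*M_Z)y.
    k = 0 gives OLS, k = 1 gives 2SLS; LIML and Fuller-k use other values of k."""
    P = Z @ np.linalg.pinv(Z.T @ Z) @ Z.T      # projection onto the instrument space
    M = np.eye(len(y)) - P                     # annihilator M_Z
    A = X.T @ X - k * (X.T @ M @ X)            # X'(I - k*M_Z)X
    b = X.T @ y - k * (X.T @ M @ y)            # X'(I - k*M_Z)y
    return np.linalg.solve(A, b)

def fuller_k(k_liml, N, K, a=1.0):
    """Fuller's modification of the LIML k: k = k_LIML - a/(N - K)."""
    return k_liml - a / (N - K)
```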
95.6 Empirical Applications in Corporate Finance
In order to provide some insight into the use of IV estimation by finance researchers, we follow the methodology used by Larcker and Rusticus (2010) to conduct a search using the key words “2SLS,” “simultaneous equations,” “instrumental variables,” and “endogeneity” for papers published in Journal of Finance, Journal of Financial Economics, Review of Financial Studies, and Journal of Financial and Quantitative Analysis during the period from 1997 to 2012.
Table 95.1 Finance research that uses instrumental variable methods

Capital structure/leverage ratio: Faulkender and Petersen (RFS 2006); Yan (JFQA 2006); Molina (JF 2005); Johnson (RFS 2003); Desai et al. (JF 2004); Harvey et al. (JFE 2004)
Agency/ownership structure/governance: Bitler et al. (JF 2005); Ortiz-Molina (JFQA 2006); Palia (RFS 2001); Cho (JFE 1998); Wei et al. (JFQA 2005); Daines (JFE 2001); Coles et al. (JFE 2012); Wintoki et al. (JFE forthcoming)
Pricing of public offering: Cliff and Denis (JF 2004); Lee and Wahal (JFE 2004); Lowry and Shu (JFE 2002)
Debt covenants: Dennis et al. (JFQA 2000); Chen et al. (JF 2007)
Macro/product market: Thorsten et al. (JFE 2000); Campello (JFE 2006); Garmaise (RFS 2008)
Financial institutions: Ljungqvist et al. (JF 2006); Berger et al. (JFE 2005)
Microstructure: Brown et al. (JF 2008); Conrad et al. (JFE 2003); Kavajecz and Odders-White (RFS 2001)
Diversification/acquisition: Campa and Kedia (JF 2002); Hsieh and Walkling (JFE 2005)
Venture capital/private equity: Gompers and Lerner (JFE 2000)
The sample is based on an electronic search for the term “2SLS,” “instrumental variables,” “simultaneous equations,” and “endogeneity” for papers published in Journal of Finance, Journal of Financial Economics, Review of Financial Studies, and Journal of Financial and Quantitative Analysis during the period from 1997 to 2008
As shown in Table 95.1, our search produced 30 published articles that use the instrumental variables approach to address endogeneity bias in studies of capital structure, agency/ownership structure, pricing of public offerings, debt covenants, financial institutions, microstructure, diversification/acquisition, venture capital/private equity, product markets, and macroeconomics. Compared with the survey of Larcker and Rusticus (2010) on IV applications in accounting research (which found 42 such articles in the recent decade), our list shows that IV estimation is less commonly used in finance research⁹ and is most often employed in corporate finance-related studies. Similar to the finding of Larcker and Rusticus (2010) for accounting research, there is little attempt in empirical finance research to develop a formal structural equation to identify endogenous and exogenous variables in the first place. Most studies take the endogeneity of the variables of interest for granted, and only
⁹ Another reason we found a smaller number of finance papers using IV may be that we restrict our keywords to those appearing in the abstract. Thus, our data may be more representative of finance research that uses IV as its main test.
Table 95.2 Descriptive statistics for finance research that uses instrumental variables methods

A. Types of IV applications
Standard two-stage least squares: capital structure/leverage ratio (4); agency/ownership structure/governance (7); pricing of public offering (1); debt covenants (2); financial institutions (1); microstructure (2); diversification/acquisition (1). Total: 18
Two-stage Heckman: pricing of public offering/debt covenants (2); financial institutions (1); microstructure (1); diversification/acquisition (1); venture capital/private equity (1). Total: 6
GMM/3SLS: capital structure/leverage ratio/governance (3); macro/product market (3). Total: 6

B. Features of IV application (column percentages in parentheses)
                                        Two-stage    Heckman     GMM/3SLS    Total
Specification/justification
  Discussion of model/instruments       9 (50 %)     3 (50 %)    4 (66 %)    16 (53 %)
  Endogeneity test                      3 (17 %)     1 (17 %)    1 (17 %)    5 (17 %)
Reported statistics: IV relevance
  First-stage regression                7 (39 %)     2 (33 %)    3 (50 %)    12 (40 %)
  F-statistic for strength of IV        3 (17 %)     2 (33 %)    3 (50 %)    8 (27 %)
  Partial R2 for IV                     0            1 (17 %)    0           1
IV orthogonality
  Overidentifying restrictions test     2 (11 %)     0           4 (66 %)    6 (20 %)
Concerns on WI                          1 (6 %)      1 (17 %)    2 (33 %)    4 (13 %)
Total                                   18           6           6           30
17 % conducted the endogeneity test (Hausman test). Almost all decided to use the IV approach to address the endogeneity problem directly, without considering the alternatives. As we discussed in Sects. 95.3.1 and 95.5.1, without valid instruments the IV estimator can produce even more biased results than the OLS estimator, and finding good instruments can be very challenging. Therefore, the researcher should investigate the nature of the endogeneity and explore alternative methods before selecting the IV approach. We found that only Sorensen (2007) and Mitton (2006)
evaluated the situation and, for lack of good instruments, chose alternative methods and fixed effects models instead to address the endogeneity problem.
Table 95.2 shows that the standard two-stage least squares regression method is the most commonly used procedure (18 out of 30), and the GMM/3SLS procedures are concentrated in studies of financial economics. Half of the studies did not formally discuss why a specific variable was selected as an instrumental variable, and even fewer explicitly justified, theoretically or statistically, the validity of the selected instruments by examining their orthogonality and relevance. All studies have overidentified models, but only 20 % performed the overidentifying restrictions test to examine the orthogonality of the instruments. One study even cited the low R² in the first stage as evidence that the instruments do not appear to be related to the error terms. This raises great concern about the validity of the instruments used and the possible bias in the resulting IV estimates. Forty percent reported the first-stage results along with R²; however, it should be noted that the first-stage R² does not represent the relevance of the excluded instruments but rather the overall explanatory power of all exogenous variables. Thus, the strength of instruments can be overstated if judged by the first-stage R². Only 27 % formally computed the first-stage F-statistic for instrument relevance, one study also used Shea's partial R² (Shea 1997), and none used Cragg and Donald's underidentification test (Cragg and Donald 1993) or Stock and Yogo's weak instrument test (Stock and Yogo 2002). Given the absence of weak instrument tests in most finance studies, many of the estimation results obtained with the IV approach are questionable.
It is also not surprising that only four studies addressed concerns about the weak instrument problem. Ljungqvist et al. (2006) used five bank pressure proxies as instruments for analyst behavior in their two-step Heckman MLE model for competing underwriting mandates but detected a weak instrument problem from the low F-statistic. They interpreted the insignificant estimates in the second step as possibly the result of weak instruments. Beck et al. (2000) discussed a possible weak instrument problem (though not judged by any test) associated with a difference estimator using the lagged value of the dependent variable, and then decided to use an alternative method, estimating the regression in differences jointly with the regression in levels to reduce the potential finite-sample bias. It appears that the recently developed weak-instrument-robust estimators and inference procedures have not yet been applied in finance research.
95.7 Conclusion
The purpose of this chapter is to present a practical procedure for using the instrumental variables approach to solve the endogeneity problem in empirical finance studies. The endogeneity problem has received a mixed treatment in finance research. The literature does not consistently account for endogeneity using formal econometric methods. When the IV approach is used, the instrumental variables are
often chosen arbitrarily, and few diagnostic statistics are computed to assess the adequacy of the IV estimation. Given the challenge of finding good instruments, it is important that the researcher analyze the nature of the endogeneity and the direction of the bias, if possible, and then explore alternative empirical approaches so the problem can be addressed more appropriately. For example, in the presence of omitted variables, if the unobservable effects, which are part of the error term, can be treated as random variables rather than as parameters to be estimated, panel data models can be used to obtain consistent estimates.
When the IV approach is considered the most appropriate estimation method, the researcher needs to find and justify the instruments theoretically and statistically. One way to describe an instrumental variable is that a valid instrument Z for the potentially endogenous variable X should be redundant in the structural equation when X is already included, which means that Z will not affect the dependent variable Y in any way other than through X. To examine the orthogonality statistically, an overidentified model is preferred, so that the overidentifying restrictions test can be used. If the null of the OID test cannot be rejected, we can have some confidence in the orthogonality of the instruments overall. To examine instrument relevance, the partial R² and the first-stage F-statistic on the joint significance of the instruments should be computed at a minimum. The newly developed weak instrument tests (Cragg and Donald 1993; Stock and Yogo 2002) can also be used as a robustness check, especially in finite samples. The researcher should keep in mind that although the IV estimation method provides a general solution to the endogeneity problem, without strong and exogenous instruments the IV estimator is more biased and inconsistent than the simple OLS estimator.
References Anderson, T. W., & Rubin, H. (1949). Estimation of the parameters of a single equation in a complete system of stochastic equations. Annuals of Mathematical Statistics, 20, 46–63. Anderson, T. W., Kunitomo, N., & Matsushita, Y. (2010). On the asymptotic optimality of the LIML estimator with possibly many instruments. Journal of Econometrics, 157, 191–204. Anderson, T. W., Kunitomo, N., & Matsushita, Y. (2011). On finite sample properties of alternative estimators of coefficients in a structural equation with many instruments. Journal of Econometrics, 165, 58–69. Angrist, J. D., Imbens, G. W., & Krueger, A. B. (1999). Jackknife instrumental variables estimation. Journal of Applied Econometrics, 14, 57–67. Bartels, L. M. (1991). Instrumental and ‘quasi-instrumental’ variables. American Journal of Political Science, 35, 777–800. Beck, T., Levine, R., & Loayza, N. (2000). Finance and the sources of growth. Journal of Financial Economics, 58, 261–300. Berger, A. N., Miller, N. H., Petersen, M. A., Rajan, R. G., & Stein, J. C. (2005). Does function follow organizational form? Evidence from the lending practices of large and small banks. Journal of Financial Economics, 76, 237–269.
Berkowitz, D., Caner, M., & Ying, F. (2012). The validity of instruments revisited. Journal of Econometrics, 166, 255–266. Bitler, M. P., Moskowitz, T. J., & Vissing-Jorgensen, A. (2005). Testing agency theory with entrepreneur effort and wealth. Journal of Finance, 60, 539–576. Brown, J. R., Ivkovic, Z., Smith, P. A., & Weisbenner, S. (2008). Neighbors matter: Causal community effects and stock market participation. Journal of Finance, 63, 1509–1531. Campa, J. M., & Kedia, S. (2002). Explaining the diversification discount. Journal of Finance, 57, 1731–1762. Campello, M. (2006). Debt financing: Does it boost or hurt firm performance in product markets? Journal of Financial Economics, 82, 135–172. Chao, J. C., & Swanson, N. R. (2005). Consistent estimation with a large number of weak instruments. Econometrica, 73, 1673–1692. Chen, L. D., Lesmond, A., & Wei, J. (2007). Corporate yield spreads and bond liquidity. Journal of Finance, 62, 119–149. Cho, M. (1998). Ownership structure, investment and the corporate value: An empirical analysis. Journal of Financial Economics, 47, 103–121. Cliff, M. T., & Denis, D. J. (2004). Do initial public offering firms purchase analyst coverage with underpricing? Journal of Finance, 59, 2871–2901. Coles, J. L., Lemmon, M. L., & Meschke, F. J. (2012). Structural models and endogeneity in corporate finance: The link between managerial ownership and corporate performance. Journal of Financial Economics, 103, 149–168. Conrad, J., Johnson, K. M., & Wahal, S. (2003). Institutional trading and alternative trading systems. Journal of Financial Economics, 70, 99–134. Cragg, J. G., & Donald, S. G. (1993). Testing identifiability and specification in instrumental variable models. Econometric Theory, 9, 222–240. Daines, R. (2001). Does delaware law improve firm value? Journal of Financial Economics, 62, 525–558. Davidson, R., & MacKinnon, J. G. (1993). Estimation and inference in econometrics. New York: Oxford University Press. Dennis, S., Nandy, D., & Sharpe, I. G. (2000). The determinants of contract terms in bank revolving credit agreements. Journal of Financial and Quantitative Analysis, 35, 87–110. Desai, M. A., Foley, C. F., & Hines, J. R. (2004). A multinational perspective on capital structure choice and internal capital markets. Journal of Finance, 59, 2451–2487. Dufour, J. (2003). Identification, weak instruments and statistical inference in econometrics. Canadian Journal of Economics, 36, 767–808. Durbin, J. (1954). Errors in variables. Review of the International Statistical Institute, 22, 23–32. Faulkender, M., & Petersen, M. A. (2006). Does the source of capital affect capital structure? Review of Financial Studies, 19, 45–79. Fuller, W. A. (1977). Some properties of a modification of the limited information estimator. Econometrica, 45, 939–953. Garmaise, M. J. (2008). Production in entrepreneurial firms: The effects of financial constraints on labor and capital. Review of Financial Studies, 21, 543–577. Gompers, P., & Lerner, J. (2000). Money chasing deals? The impact of fund inflows on private equity valuations. Journal of Financial Economics, 55, 281–325. Hahn, J., Hausman, J., & Kuersteiner, G. (2004). Estimation with weak instruments: Accuracy of higher-order bias and MSE approximation. Econometrics Journal, 7, 272–306. Hansen, L. P. (1982). Large sample properties of generalized method of moments estimators. Econometrica, 50, 1029–1055. Harvey, C. R., Lins, K. V., & Roper, A. H. (2004). 
The effect of capital structure when expected agency costs are extreme. Journal of Financial Economics, 74, 3–30. Hausman, J. A. (1978). Specification tests in econometrics. Econometrica, 46, 1251–1271.
Hayashi, F. (2000). Econometrics. Princeton: Princeton University Press. Heckman, J. (1979). Sample selection as a specification error. Econometrica, 47, 1531–161. Hermalin, B. E., & Weisbach, M. S. (2003). Boards of directors as an endogenously determined institution: A survey of the economic literature. Federal Reserve Bank of New York Economic Policy Review, 9, 7–26. Hsieh, J., & Walkling, R. A. (2005). Determinants and implications of arbitrage holdings in acquisitions. Journal of Financial Economics, 77, 605–648. Johnson, S. A. (2003). Debt maturity and the effects of growth opportunities and liquidity risk on leverage. Review of Financial Studies, 16, 209–236. Kavajecz, K. A., & Odders-White, E. R. (2001). An examination of changes in specialists’ posted price schedules. Review of Financial Studies, 14, 681–704. Kleibergen, F. (2002). Pivotal statistics for testing structural parameters in instrumental variables regression. Econometrica, 70, 1781–1803. Larcker, D. F., & Rusticus, T. O. (2010). On the use of instrumental variables in accounting research. Journal of Accounting and Economics, 49, 186–205. Li, K., & Prabhala, N.R. (2007). Self-Selection models in corporate finance. In: Handbook of Corporate finance: Empirical corporate finance, Handbooks in finance series (Chapter 2). Amsterdam: Elsevier/North Holland. Lee, P. M., & Wahal, S. (2004). Grandstanding, certification and the underpricing of venture capital backed IPOs. Journal of Financial Economics, 73, 375–407. Ljungqvist, A., Marston, F., & Wilhelm, W. J. (2006). Competing for securities underwriting mandates: Banking relationships and analyst recommendations. Journal of Finance, 61, 301–340. Lowry, M., & Shu, S. (2002). Litigation risk and IPO underpricing. Journal of Financial Economics, 65, 309–335. Mitton, T. (2006). Stock market liberalization and operating performance at the firm level. Journal of Financial Economics, 81, 625–647. Molina, C. A. (2005). Are firms underleveraged? An examination of the effect of leverage on default probabilities. Journal of Finance, 60, 1427–1459. Moreira, M. J. (2002). Tests with correct size in the simultaneous equations model. Berkeley: University of California. Ortiz-Molina, H. (2006). Top management incentives and the pricing of corporate public debt. Journal of Financial and Quantitative Analysis, 41, 317–340. Palia, D. (2001). The endogeneity of managerial compensation in firm valuation: A solution. Review of Financial Studies, 14, 735–764. Rothenberg, T. J., Griliches, Z., & Intriligator, M. D. (1984). Approximating the distributions of econometric estimators and test statistics In: Handbook of econometrics: Vol. II. Handbooks in economics series. Amsterdam: Elsevier/North Holland. Sargan, J. D. (1958). The estimation of economic relationships using instrumental variables. Econometrica, 26, 393–415. Shea, J. (1997). Instrument relevance in multivariate linear models: A simple measure. Review of Economics and Statistics, 79, 348–352. Sorensen, M. (2007). How smart is smart money? A two-sided matching model of venture capital. The Journal of Finance, 62, 2725–2762. Staiger, D., & Stock, J. H. (1997). Instrumental variables regression with weak instruments. Econometrica, 65, 557–586. Stock, J. H., Wright, J. H., & Yogo, M. (2002). A survey of weak instruments and weak identification in generalized method of moments. Journal of Business and Economic Statistics, 20, 518–529. Stock, J. H., & Yogo, M. (2002). 
Testing for weak instruments in linear IV regression (National Bureau of Economic Research (NBER) Technical Working Paper 284). Cambridge, MA: National Bureau of Economic Research.
Thorsten, B., Ross, L., & Loayza, N. (2000). Finance and the sources of growth. Journal of Financial Economics, 58, 261–300. Wei, Z., Xie, F., & Zhang, S. (2005). Ownership structure and firm value in China’s privatized firms: 1991–2001. Journal of Financial and Quantitative Analysis, 40, 87–108. Wintoki, M. B., Linck, J. S., & Netter, J. M. (2012). Endogeneity and the dynamics of internal corporate governance. Journal of Financial Economics, 105, 581–606. Wooldridge, J. M. (2002). Econometric analysis of cross section and panel data. Cambridge, MA: MIT Press. Wu, D. (1973). Alternative tests of independence between stochastic regressors and disturbances. Econometrica, 41, 733–750. Wu, C. (1990). On the asymptotic properties of the jackknife histogram. Annals of Statistics, 18, 1438–1452. Yan, A. (2006). Leasing and debt financing: Substitutes or complements? Journal of Financial and Quantitative Analysis, 41, 709–731.
96 Application of Poisson Mixtures in the Estimation of Probability of Informed Trading
Emily Lin and Cheng-Few Lee
Contents
96.1 Introduction ..... 2602
96.2 The Problem with PIN ..... 2604
96.3 The Estimation Methodology ..... 2605
96.4 Empirical Results ..... 2606
  96.4.1 Preliminary Results ..... 2606
  96.4.2 Application of PIN ..... 2608
96.5 Conclusion ..... 2609
Appendix 1: Poisson Mixtures ..... 2609
  Generalized Poisson Mixtures ..... 2609
  Binomial Poisson Mixtures ..... 2610
Appendix 2: Estimation by EM Algorithm ..... 2610
  Numerical Example ..... 2612
  Numerical Analysis Method ..... 2616
  The Precision of Parameter Estimation ..... 2616
  Different μ (μb, μs) Case ..... 2616
References ..... 2618
Abstract
This research first discusses the evolution of the probability of informed trading in the finance literature. Motivated by asymmetric effects, e.g., return and trading volume in up and down markets, this study modifies a mixture of the Poisson
E. Lin (*) St. John's University, New Taipei City, Taiwan e-mail: [email protected]
C.-F. Lee Department of Finance and Economics, Rutgers Business School, Rutgers, The State University of New Jersey, Piscataway, NJ, USA; Graduate Institute of Finance, National Chiao Tung University, Hsinchu, Taiwan e-mail: [email protected]; [email protected]
C.-F. Lee, J. Lee (eds.), Handbook of Financial Econometrics and Statistics, DOI 10.1007/978-1-4614-7750-1_96, © Springer Science+Business Media New York 2015
distribution model by allowing different arrival rates of informed buys and sells, in order to measure the probability of informed trading proposed by Easley et al. (Journal of Finance 51:1405–1436, 1996). By applying the expectation–maximization (EM) algorithm to estimate the parameters of the model, we derive a set of equations for maximum likelihood estimation, and these equations are encoded in a SAS Macro utilizing SAS/IML for implementation of the methodology.
Keywords
Probability of informed trading (PIN) • Expectation–maximization (EM) algorithm • A mixture of Poisson distribution • Asset-pricing returns • Order imbalance • Information asymmetry • Bid–ask spreads • Market microstructure • Trade direction • Errors in variables • GARCH
96.1 Introduction
This study investigates the probability of informed trading, PIN, which is widely used in the existing literature and was introduced by Easley et al. (1996). Easley et al. (1996, 2002, 2008) propose the same arrival rate of informed orders, μ, for both bad and good events, and the likelihood is given by

L(\theta|B,S) = (1-\alpha)\, e^{-\varepsilon_b}\frac{\varepsilon_b^B}{B!}\, e^{-\varepsilon_s}\frac{\varepsilon_s^S}{S!} + \alpha\delta\, e^{-\varepsilon_b}\frac{\varepsilon_b^B}{B!}\, e^{-(\mu+\varepsilon_s)}\frac{(\mu+\varepsilon_s)^S}{S!} + \alpha(1-\delta)\, e^{-(\mu+\varepsilon_b)}\frac{(\mu+\varepsilon_b)^B}{B!}\, e^{-\varepsilon_s}\frac{\varepsilon_s^S}{S!}   (96.1)
where α is the probability of new information, δ is the probability that new information is bad news, μ is the arrival rate of informed buy orders and also that of informed sell orders, and ε_b and ε_s are the arrival rates of uninformed buyers and sellers. The model remained a static approach until Easley, Engle, O'Hara, and Wu (2008) modeled time-varying arrival rates of informed and uninformed traders. We allow the arrival rate of informed buyers to differ from that of informed sellers in order to match the empirical environment. Furthermore, we examine intraday data and allow more than one informational event per day. The modified model is given by

L(\theta|B,S) = (1-\alpha)\, e^{-\varepsilon_b}\frac{\varepsilon_b^B}{B!}\, e^{-\varepsilon_s}\frac{\varepsilon_s^S}{S!} + \alpha\delta\, e^{-\varepsilon_b}\frac{\varepsilon_b^B}{B!}\, e^{-(\mu_s+\varepsilon_s)}\frac{(\mu_s+\varepsilon_s)^S}{S!} + \alpha(1-\delta)\, e^{-(\mu_b+\varepsilon_b)}\frac{(\mu_b+\varepsilon_b)^B}{B!}\, e^{-\varepsilon_s}\frac{\varepsilon_s^S}{S!}   (96.2)
where μ_b is the arrival rate of informed buyers and μ_s is the arrival rate of informed sellers. The function provides the structure necessary to extract information on the
parameters θ = (α, δ, μ, ε_b, ε_s) from the observable variables, buys and sells, in order to measure PIN = αμ/(αμ + ε_s + ε_b). For the parameters in our model, θ* = (α, δ, μ_b, μ_s, ε_b, ε_s), the probability of informed trading is

PIN = \frac{\alpha\left(\delta|\mu_s| + (1-\delta)|\mu_b|\right)}{\alpha\left(\delta|\mu_s| + (1-\delta)|\mu_b|\right) + \varepsilon_s + \varepsilon_b}

The buys and sells follow one of three Poisson processes on each day. The likelihood of observing any sequence of orders that contains B buys and S sells on a no-event day is given by
e^{-\varepsilon_b}\frac{\varepsilon_b^B}{B!}\, e^{-\varepsilon_s}\frac{\varepsilon_s^S}{S!}   (96.3)
Similarly, on a bad-event day, the likelihood of observing any sequence of orders that contains B buys and S sells is

e^{-\varepsilon_b}\frac{\varepsilon_b^B}{B!}\, e^{-(\mu+\varepsilon_s)}\frac{(\mu+\varepsilon_s)^S}{S!}   (96.4)
Finally, on a good-event day, this likelihood is

e^{-(\mu+\varepsilon_b)}\frac{(\mu+\varepsilon_b)^B}{B!}\, e^{-\varepsilon_s}\frac{\varepsilon_s^S}{S!}   (96.5)
To estimate the order arrival rates of the buy and sell processes, we need only consider the total number of buys, B, and the total number of sells, S, on any day. The likelihood of observing B buys and S sells on a day of unknown type is a mixture of Poisson distributions, the weighted average of Eqs. 96.3, 96.4, and 96.5 using the probabilities of each type of day occurring, which yields Eq. 96.1. Church and Gale (1995) claim that a mixture of Poisson distributions fits the data better than the standard Poisson, producing more accurate estimates of the variance. Johnson and Kotz (1969, pp. 135–136) survey a number of applications of the negative binomial in a variety of fields and conclude that "the negative binomial is frequently used as a substitute for the Poisson, when the strict requirements of the Poisson is doubtful." This is because the negative binomial can be viewed as a continuous mixture of infinitely many Poissons, as suggested by Bookstein and Swanson (1974, p. 317). Because days are independent, the likelihood of observing the data M = (B_i, S_i)_{i=1}^{I} over I days is just the product of the daily likelihoods. Therefore,

L(\theta|M) = \prod_{i=1}^{I} L(\theta|B_i, S_i)   (96.6)
To estimate the parameter vector θ from any data set M, we maximize the likelihood defined in Eq. 96.6.
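To make the mixture structure of Eqs. 96.2-96.6 concrete, a minimal sketch follows in Python with SciPy; the chapter's own implementation is a SAS Macro using SAS/IML, so the function names and layout here are purely illustrative. The total log-likelihood over I days is the sum of the daily values, in line with Eq. 96.6.

```python
import numpy as np
from scipy.stats import poisson
from scipy.special import logsumexp

def log_likelihood_day(B, S, alpha, delta, mu_b, mu_s, eps_b, eps_s):
    """Daily log-likelihood for the modified model (Eq. 96.2): a mixture of the
    no-event, bad-event, and good-event Poisson regimes."""
    log_no   = np.log(1 - alpha) + poisson.logpmf(B, eps_b) + poisson.logpmf(S, eps_s)
    log_bad  = np.log(alpha) + np.log(delta) \
               + poisson.logpmf(B, eps_b) + poisson.logpmf(S, mu_s + eps_s)
    log_good = np.log(alpha) + np.log(1 - delta) \
               + poisson.logpmf(B, mu_b + eps_b) + poisson.logpmf(S, eps_s)
    return logsumexp([log_no, log_bad, log_good])   # log of the mixture density

def pin(alpha, delta, mu_b, mu_s, eps_b, eps_s):
    """PIN for the modified model; setting mu_b = mu_s = mu recovers the standard PIN."""
    informed = alpha * (delta * abs(mu_s) + (1 - delta) * abs(mu_b))
    return informed / (informed + eps_b + eps_s)

def informed_order_imbalance(mu_b, mu_s):
    """Informed order-imbalance proxy (mu_b - mu_s)/(mu_b + mu_s) discussed later in the text."""
    return (mu_b - mu_s) / (mu_b + mu_s)
```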
96.2 The Problem with PIN
The derivation of PIN requires the classification of trades into buyer- or seller-initiated trades, and therefore errors can arise from misclassification. Ellis et al. (2000) show that, for NASDAQ trades, the Lee and Ready (1991) trade classification algorithm correctly classifies 81.05 % of the trades, with the lowest rate of success among trades that take place inside the spread. Specifically, the authors note that "the success rate for classifying trades inside the quotes . . . is substantially lower, falling to approximately 60 % for midpoint trades and to only 55 % for trades that are inside the quotes but not at the midpoint." For NYSE trades, Odders-White (2000) reports a success rate of 85 % for the entire sample. Boehmer et al. (2007) and Lei and Wu (2005) also point out that misclassification of trades may bias PIN estimates and that arrival rates may not be symmetric and may be time varying.
To avoid this criticism or error, Popescu and Kumar (2008) use observed bid and ask quotes, assume different depths at the bid and the ask, and include order processing costs when estimating the probability of informed trading, extending the model developed by Copeland and Galai (1983),¹ which is the first model to examine intraday informed trading under an option framework. The measure of Popescu and Kumar (2008) can be computed at any point in time and thus can be used to estimate changes in the level of information asymmetry over a short interval. Although this methodology does not require trades to be distinguished as buyer initiated or seller initiated, the estimation of order processing costs based on three bid–ask spread structure models may introduce bias into the estimated PIN. These three models include Glosten and Harris (1988) and Madhavan and Smidt (1991), which model the revision in trade price, and a proposed model that combines Hasbrouck (1991), Foster and Viswanathan (1993), and Brennan and Subrahmanyam (1996) to model transaction size and price revision. Copeland and Galai (1983) extend Bagehot (1971), which employs the bid–ask spread to derive an adverse selection cost, use an option idea to describe the option value contained in the market maker's quoted spread, and consider the value of a straddle strategy as the adverse selection cost. Bollen et al. (2004) also follow the same strategy as Copeland and Galai (1983) to measure the adverse selection cost. Because the bid–ask spread carries unrealized price information and hides future value, it seems more reasonable to view the bid–ask spread from the perspective of an option.
¹ Copeland and Galai (1983) model informed trading as (1 − P_I)[P_BL(A − S₀) + P_SL(B − S₀) + P_NL·0], where B is the bid price, A is the ask price, S₀ is the dealer's estimate of the "true" value of a security (B < S₀ < A), and P_I is the probability that the next trade originates from an informed trader, while P_BL, P_SL, and P_NL are the conditional probabilities that the next liquidity trader will buy, sell, or not trade when he/she faces the market maker. Popescu and Kumar (2008) revise it into (1 − P_I)[P_BL·D_A·(A − S₀) + P_SL·D_B·(B − S₀) + P_NL·0], where D_A and D_B denote the depth at the ask and at the bid.
Based on the setting of Easley et al. (1996, 2002, 2008), the informed order imbalance is unavailable. We obtained the informed order imbalance in November 2008, and our work is the first to measure informed trading based on an order imbalance signal. Although not fully explored here, this framework allows one to measure the informed order imbalance by (μ_b − μ_s)/(μ_b + μ_s). The measure is a proxy for informed trading and is discussed in Lin et al. (2013), while the relationship between PIN and arbitrage opportunities is examined in Chang and Lin (2014). Like PIN, this measure is an estimated variable, and so it is potentially subject to errors-in-variables bias. To correct the errors-in-variables problem in PIN, Easley et al. (2002) suggest creating an instrumental variable to use in place of the variable in question. Lee and Chen (2012) describe five other methods² that can correct this bias. Duarte and Young (2009) also allow the arrival rate of informed buyers, μ_b, to differ from that of informed sellers, μ_s, and overcome the estimation dilemma of the standard PIN. In addition, Duarte and Young (2009) apply a time-varying technique to examine whether PIN is priced and include symmetric order-flow shocks to capture the positive correlation they find between buys and sells. Chang and Lin (2014) do not do so, as they observe no significant correlation between buys and sells in their data. Easley et al. (2012) take a similar approach with a time-varying technique, easing the estimation of PIN in high-volume markets, and refer to it as VPIN. Easley et al. (2012) claim that VPIN is updated in volume time and does not require the intermediate estimation of nonobservable parameters or the application of numerical methods. Nevertheless, Easley et al. (2002) show that PIN, as a proxy for the risk of informed trading, is priced. The results of Easley et al. (2002) provide evidence that information plays a deeper role beyond what is captured in spreads. Duarte and Young (2009) propose a model that decomposes PIN into two components, one related to asymmetric information and one related to illiquidity. On the contrary, they find that the PIN component related to asymmetric information is not priced, while the PIN component related to illiquidity is priced. This contrary finding remains an open question for finance researchers.
96.3 The Estimation Methodology
To solve the likelihood function in Eqs. 96.1 and 96.2, we apply an expectation–maximization (EM) algorithm. In statistics, an expectation–maximization (EM) algorithm is an iterative method for finding maximum likelihood or maximum a posteriori estimates of parameters in statistical models where the model depends on unobserved latent variables. The EM iteration alternates between
² The other five methods are (i) the classical estimation method, either unconstrained or constrained type, (ii) the grouping method, (iii) the mathematical programming method, (iv) the maximum likelihood method, and (v) LISREL and MIMIC methods.
performing an expectation (E) step, which creates a function for the expectation of the log-likelihood evaluated using the current estimate for the parameters, and a maximization (M) step, which computes parameters maximizing the expected log-likelihood found in step E. These parameter estimates are then used to determine the distribution of the latent variables in the next E step. We can derive a set of equations for maximum likelihood estimation when the observed data consists of complete pairs. These equations are encoded in a SAS Macro utilizing SAS/IML for implementation of the methodology. McLachlan and Krishnan (2008) have discussed this algorithm in detail. In addition, Appendix 2 has presented the estimation procedure of PIN.
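As one possible illustration of the E- and M-steps just described, the following sketch implements the EM recursion for the modified mixture in Eq. 96.2. The starting values, the fixed number of iterations, and the lower bound that keeps μ_b and μ_s nonnegative are our own assumptions; the chapter's actual implementation is the SAS/IML macro mentioned above.

```python
import numpy as np
from scipy.stats import poisson
from scipy.special import logsumexp

def em_pin(B, S, n_iter=500):
    """Illustrative EM estimation of (alpha, delta, mu_b, mu_s, eps_b, eps_s)
    from daily buy counts B and sell counts S under the mixture in Eq. 96.2."""
    B, S = np.asarray(B, float), np.asarray(S, float)
    alpha, delta = 0.4, 0.5                      # assumed starting values
    eps_b, eps_s = 0.8 * B.mean(), 0.8 * S.mean()
    mu_b, mu_s = 0.4 * B.mean(), 0.4 * S.mean()
    for _ in range(n_iter):
        # E-step: log posterior weights of the no-, bad-, and good-event regimes per day
        log_w = np.column_stack([
            np.log(1 - alpha) + poisson.logpmf(B, eps_b) + poisson.logpmf(S, eps_s),
            np.log(alpha) + np.log(delta) + poisson.logpmf(B, eps_b) + poisson.logpmf(S, mu_s + eps_s),
            np.log(alpha) + np.log(1 - delta) + poisson.logpmf(B, mu_b + eps_b) + poisson.logpmf(S, eps_s),
        ])
        w = np.exp(log_w - logsumexp(log_w, axis=1, keepdims=True))
        w_no, w_bad, w_good = w[:, 0], w[:, 1], w[:, 2]
        # M-step: closed-form updates of the mixing weights and the Poisson rates
        alpha = (w_bad + w_good).mean()
        delta = w_bad.sum() / (w_bad + w_good).sum()
        eps_b = ((w_no + w_bad) * B).sum() / (w_no + w_bad).sum()
        eps_s = ((w_no + w_good) * S).sum() / (w_no + w_good).sum()
        mu_b = max((w_good * B).sum() / w_good.sum() - eps_b, 1e-8)
        mu_s = max((w_bad * S).sum() / w_bad.sum() - eps_s, 1e-8)
    return alpha, delta, mu_b, mu_s, eps_b, eps_s
```

The resulting parameter estimates can then be plugged into the PIN expression given after Eq. 96.2.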
96.4 Empirical Results
The fact that PIN affects asset-pricing returns, in a way consistent with economic analysis, motivates the empirical work in Chang and Lin (2014), which explores how PIN, under various cross-section and time-series sample splits, is related to the cash-futures basis, defined as the futures price minus the stock price. PIN is calculated as per Eq. 96.1 or 96.2 to test this relationship. The more significant regression coefficient on the PIN derived from different arrival rates of informed trades at the buy side and sell side supports the conjecture that the revised PIN captures the asymmetric information in the cash-futures basis spread better than the standard PIN does. To better understand the difference between these two PIN measures, this study further compares the distribution of the parameters in each model. The comparison estimates the parameters of the two models using 5-min intraday data sourced from the Taiwan index futures and Taiwan stock index markets. To assign a trade direction, two general approaches are used to infer the direction of a trade: (1) compare the trade price to the bid/ask prices of the prevailing quote or (2) compare the trade price to adjacent trades (the technique commonly known as the "tick test"). In this study, the algorithm of Lee and Ready (1991) is used for classifying the index futures data, while the tick test³ is used for the stock index data because quote prices are not available at the market level.
96.4.1 Preliminary Results Both Tables 96.1 and 96.2 contain time-series averages from March 24, 1999, through September 22, 2005, of means, medians, standard deviations, and the
³ The tick test classifies each trade into four categories: an uptick, a downtick, a zero-uptick, and a zero-downtick. A trade is an uptick (downtick) if the price is higher (lower) than the price of the previous trade. When the price is the same as the previous trade (a zero tick) and the last price change was an uptick, the trade is a zero-uptick. Similarly, if the last price change was a downtick, the trade is a zero-downtick.
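A minimal sketch of the tick-test rule described in footnote 3 follows; the function name is illustrative, and unclassifiable leading trades are simply left at zero.

```python
import numpy as np

def tick_test(prices):
    """Tick-test classification: +1 for upticks and zero-upticks (buyer-initiated),
    -1 for downticks and zero-downticks (seller-initiated)."""
    prices = np.asarray(prices, float)
    direction = np.zeros(len(prices), dtype=int)
    last = 0
    for t in range(1, len(prices)):
        change = prices[t] - prices[t - 1]
        if change > 0:
            last = 1           # uptick
        elif change < 0:
            last = -1          # downtick
        # zero tick: inherit the sign of the last non-zero price change
        direction[t] = last
    return direction           # direction[0] remains 0 (unclassified)
```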
Table 96.1 Parameter summary statistics in the Taiwan index futures market (values in parentheses are from the model with different informed arrival rates, Eq. 96.2)

Parameter   Mean               Median             Standard deviation   Median standard error
α           0.416 (0.601)      0.408 (0.598)      0.126 (0.178)        0.003 (0.004)
δ           0.514 (0.546)      0.516 (0.561)      0.197 (0.204)        0.005 (0.005)
μ           173                160.352            109.167              2.692
μ_b         (147.437)          (146.797)          (132.094)            (3.257)
μ_s         (27.419)           (66.203)           (171.272)            (4.223)
ε_b         75.983 (77.256)    69.405 (70.914)    45.941 (42.375)      1.133 (1.045)
ε_s         81.688 (129.918)   76.936 (84.524)    47.363 (119.959)     1.168 (2.958)
PIN         0.300 (0.305)      0.296 (0.299)      0.067 (0.064)        0.002 (0.002)
Table 96.2 Parameter summary statistics in the Taiwan stock index market (values in parentheses are from the model with different informed arrival rates, Eq. 96.2)

Parameter   Mean                 Median               Standard deviation   Median standard error
α           0.651 (0.658)        0.655 (0.661)        0.085 (0.074)        0.002 (0.002)
δ           0.517 (0.517)        0.520 (0.517)        0.115 (0.108)        0.003 (0.003)
μ           903,591              843,264              352,935              8,689
μ_b         (892,352)            (833,270)            (356,513)            (8,776)
μ_s         (907,195)            (838,015)            (373,210)            (9,187)
ε_b         579,489 (577,746)    536,094 (535,663)    219,850 (221,193)    5,412 (5,445)
ε_s         577,863 (574,816)    536,838 (536,306)    216,781 (215,968)    5,337 (5,316)
PIN         0.334 (0.337)        0.332 (0.334)        0.047 (0.042)        0.001 (0.001)
median of parameter standard errors from the likelihood estimation. Parameters as per Eqs. 96.1 and 96.2 are estimated for the index futures and the stock index in the Taiwan market. For the Taiwan index futures market we obtain θ̂_fut = (α̂, δ̂, μ̂, ε̂_b, ε̂_s) = (0.42, 0.51, 173, 76, 82) and an estimated PIN_fut = 0.2997 for the case of the same arrival rate of informed trades, μ^fut, at the buy side and sell side. Meanwhile, we obtain θ̂*_fut = (α̂, δ̂, μ̂_b, μ̂_s, ε̂_b, ε̂_s) = (0.60, 0.55, 147, 27, 77, 129) and an estimated PIN*_fut = 0.3046 for the case of different arrival rates of informed trades, μ_b^fut and μ_s^fut, at the buy side and sell side. On 1,005 out of 1,645 days (61 %), the value of μ^fut lies between μ_b^fut and μ_s^fut.
Similarly, for the Taiwan stock index we obtain θ̂_ind = (α̂, δ̂, μ̂, ε̂_b, ε̂_s) = (0.65, 0.52, 903591, 579489, 577863) and an estimated PIN_ind = 0.3343 for the case of the same μ^ind, and θ̂*_ind = (α̂, δ̂, μ̂_b, μ̂_s, ε̂_b, ε̂_s) = (0.66, 0.52, 892352, 907195, 577746, 574816) and an estimated PIN*_ind = 0.3366 for the case of different arrival rates, μ_b^ind and μ_s^ind. On 1,637 out of 1,650 days (99 %), the value of μ^ind lies between μ_b^ind and μ_s^ind. In Table 96.1, each parameter in the two models has a similar distribution, except μ and the arrival rate of uninformed sell orders, ε_s, while in Table 96.2 there is no such exception in the distribution of any parameter across the two models. The PIN value estimated from each model is similar for both the index futures and the stock index markets. The differing results of the two models may stem from volatile data or from order imbalance between buys and sells. We observe that the Taiwan index futures market is much more volatile than the Taiwan stock index. The model of Easley et al. assumes a single informed arrival rate for buys on a good-event day and for sells on a bad-event day. This assumption might fit an individual stock better than a market index, because information tends to be clearly good or bad for an individual stock more often than for the market as a whole. Likewise, the model of Easley et al. appears more appropriate for a stable market or one with little order imbalance.
96.4.2 Application of PIN

The maturity effect may also play an informational role in the market by helping the market incorporate certain types of information into prices. In Table 96.3, PIN derived from the model with differential arrival rates of informed trades, denoted pin(f) for index futures and pin(s) for the stock index, is calculated to explore how they behave around these events.

Table 96.3 Asymmetry of maturity effect. Dependent variables: Rhat, pin(f), pin(s); Panel A reports the volume effect and Panel B the option introduction.

                 Rhat     pin(f)   pin(s)
A/F 2000/10/24   1.5039   0.2960   0.3290
B/F 2000/10/24   1.2485   0.2856   0.3540
P-value          0.0004

$$\Gamma_S \equiv \frac{\partial^2 C(t,\tau)}{\partial S^2} = \frac{\partial \Pi_1}{\partial S} = \frac{1}{\pi}\int_0^{\infty} \operatorname{Re}\!\left[(i\phi)^{-1} e^{-i\phi \ln[K]}\, \frac{\partial f_1}{\partial S}\right] d\phi > 0 \tag{98.43}$$

$$\Gamma_V \equiv \frac{\partial^2 C(t,\tau)}{\partial V^2} = S(t)\,\frac{\partial^2 \Pi_1}{\partial V^2} - K B(t,\tau)\,\frac{\partial^2 \Pi_2}{\partial V^2} \tag{98.44}$$

$$\Gamma_R \equiv \frac{\partial^2 C(t,\tau)}{\partial R^2} = S(t)\,\frac{\partial^2 \Pi_1}{\partial R^2} - K\!\left[\frac{\partial^2 B(t,\tau)}{\partial R^2}\,\Pi_2 + 2\,\frac{\partial B(t,\tau)}{\partial R}\,\frac{\partial \Pi_2}{\partial R} + B(t,\tau)\,\frac{\partial^2 \Pi_2}{\partial R^2}\right] \tag{98.45}$$

$$\Gamma_{S,V} \equiv \frac{\partial^2 C(t,\tau)}{\partial S\,\partial V} = \frac{\partial \Pi_1}{\partial V} = \frac{1}{\pi}\int_0^{\infty} \operatorname{Re}\!\left[(i\phi)^{-1} e^{-i\phi \ln[K]}\, \frac{\partial f_1}{\partial V}\right] d\phi \tag{98.46}$$

where, for $g = V, R$ and $j = 1, 2$,

$$\frac{\partial^2 \Pi_j}{\partial g^2} = \frac{1}{\pi}\int_0^{\infty} \operatorname{Re}\!\left[(i\phi)^{-1} e^{-i\phi \ln[K]}\, \frac{\partial^2 f_j}{\partial g^2}\right] d\phi. \tag{98.47}$$
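Because Eqs. 98.44 and 98.45 are simply second derivatives of the pricing formula C(t, τ) = S(t)Π1 − KB(t, τ)Π2, they can be sanity-checked numerically once a pricing routine is at hand. The sketch below assumes hypothetical callables pi1(S, V, R), pi2(S, V, R), and bond(R) for the model's probabilities and discount bond (none of these are defined in this excerpt) and approximates Γ_V and Γ_R by central finite differences; it is a verification device, not the chapter's closed-form Fourier expressions.

```python
def call_price(S, V, R, K, pi1, pi2, bond):
    # C(t, tau) = S * Pi1 - K * B(t, tau) * Pi2 in the model's notation
    return S * pi1(S, V, R) - K * bond(R) * pi2(S, V, R)

def second_derivative(f, x, h=1e-4):
    # central-difference approximation of f''(x)
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / (h * h)

def gamma_V(S, V, R, K, pi1, pi2, bond, h=1e-4):
    # numerical counterpart of Eq. 98.44
    return second_derivative(lambda v: call_price(S, v, R, K, pi1, pi2, bond), V, h)

def gamma_R(S, V, R, K, pi1, pi2, bond, h=1e-4):
    # numerical counterpart of Eq. 98.45; the three product-rule terms are
    # captured automatically because B(t, tau) is differentiated along with Pi2
    return second_derivative(lambda r: call_price(S, V, r, K, pi1, pi2, bond), R, h)
```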
Acknowledgments We would like to thank Sanjiv Das, Ranjan D'Mello, Hélyette Geman, Eric Ghysels, Frank Hatheway, Stewart Hodges, Ravi Jagannathan, Andrew Karolyi, Bill Kracaw, C. F. Lee, Dilip Madan, Louis Scott, René Stulz, Stephen Taylor, Siegfried Trautmann, Alex Triantis, and Alan White for their helpful suggestions. Any remaining errors are our responsibility alone.
99 The Le Châtelier Principle of the Capital Market Equilibrium

Chin W. Yang, Ken Hung, and Matthew D. Brigida
Contents
99.1 Introduction
99.2 The Le Chatelier Principle of the Markowitz Model
99.3 Simulation Results
99.4 Policy Implications of the Le Chatelier Principle
99.5 Concluding Remarks
References
Abstract
This chapter provides a theoretical underpinning for the potential inefficiency created by the diversification constraints of the Investment Company Act. The Le Chatelier principle is well known in thermodynamics: a disturbed system tends to adjust to a new equilibrium by counteracting the disturbance as far as possible. In capital market equilibrium, added constraints on the portfolio weight of each stock can lead to inefficiency, manifested in an efficiency frontier that shifts to the right. According to the empirical study, the potential loss can amount to millions of dollars, coupled with a higher risk-free rate and greater transaction and information costs.
C.W. Yang (*)
Clarion University of Pennsylvania, Clarion, PA, USA
National Chung Cheng University, Chia-yi, Taiwan
e-mail: [email protected]; [email protected]; [email protected]

K. Hung
Division of International Banking & Finance Studies, Texas A&M International University, Laredo, TX, USA
e-mail: [email protected]

M.D. Brigida
Department of Finance, Clarion University of Pennsylvania, Clarion, PA, USA
e-mail: [email protected]
Keywords
Markowitz model • Efficient frontiers • With constraints • Without constraints • Le Chatelier principle • Thermodynamics • Capital market equilibrium • Diversified mutual funds • Quadratic programming • Investment Company Act
99.1 Introduction
In the wake of a growing trend of deregulation in various industries (e.g., utilities, banking, and airlines), it has become increasingly important to study how the market responds to exogenous perturbations as the system is gradually constrained. According to the Le Chatelier principle of thermodynamics, a system tends to adjust to a new equilibrium by counteracting the change as far as possible. The principle was introduced into economics by Samuelson (1949, 1960, 1972) and Silberberg (1971, 1974, 1978) and applied by Labys and Yang (1996) to a class of spatial equilibrium models: linear programming, fixed-demand, quadratic programming, and full-fledged spatial equilibrium formulations. More recently, it has been applied to optimal taxation by Diamond and Mirrlees (2002).

According to subchapter M of the Investment Company Act of 1940, a diversified mutual fund cannot have more than 5 % of total assets invested in any single company, and its acquisition of securities cannot exceed 10 % of the acquired company's value. By meeting this diversification threshold, funds are treated as "pass-through" entities, so capital gains and income taxes accrue to the fund's investors. On the one hand, this diversification rule reduces portfolio risk, in line with the fundamental result of investment theory. On the other hand, a growing number of researchers have raised questions about the potential inefficiency arising from the Investment Company Act (see Elton and Gruber 1991; Roe 1991; Francis 1993; Kohn 1994). Further, Almazan et al. (2004) document inefficiencies arising from a broad set of mutual fund investment constraints. With the exception of the work by Cohen and Pogue (1967), Frost and Savarino (1988), and Loviscek and Yang (1997), there is very little evidence to refute or support this conjecture. Empirical findings from Loviscek and Yang (1997), based on over 300 growth mutual funds evaluated by Value Line, show that the average weight of the company given the greatest share of a fund's assets was 4.29 %. However, the Le Chatelier principle has not been scrutinized in the finance literature in connection with the Investment Company Act. The objective of this chapter is to investigate the Le Chatelier principle applied to capital market equilibrium within the framework of the Markowitz portfolio selection model.
99.2 The Le Chatelier Principle of the Markowitz Model
In a portfolio of n securities, Markowitz (1952, 1956, 1959, 1990, 1991) formulated the portfolio selection model as the quadratic program shown below:
$$\min_{x_i}\; v = \sum_{i \in I} \sum_{j \in J} x_i x_j \sigma_{ij} \tag{99.1}$$

subject to

$$\sum_{i \in I} r_i x_i \ge k \tag{99.2}$$

$$\sum_{i \in I} x_i = 1 \tag{99.3}$$

$$x_i \ge 0 \quad \forall\, i \in I \tag{99.4}$$

where
x_i = proportion of investment in security i
σ_ii = variance of the rate of return of security i
σ_ij = covariance of the rates of return of securities i and j
r_i = expected rate of return of security i
k = minimum rate of return of the portfolio
I and J are sets of positive integers.

The resulting Lagrange function is therefore

$$L = v + \lambda\Big(k - \sum_{i \in I} r_i x_i\Big) + \gamma\Big(1 - \sum_{i \in I} x_i\Big) \tag{99.5}$$
The solution to the Markowitz model is well known (Markowitz 1959). The Lagrange multiplier of constraint Eq. 99.2 has the usual economic interpretation: the change in total risk in response to an infinitesimally small change in k while all other decision variables adjust to their new equilibrium levels, i.e., λ = dv/dk. Hence, the Lagrange multiplier is of utmost importance in determining the shape of the efficiency frontier curve in the capital market. Note that in the Markowitz model the values of the x_i are free to vary anywhere between 0 and 1. In reality, however, the proportion invested in each security often cannot exceed a certain percentage, in order to ensure adequate diversification. As the maximum investment proportion in each security decreases from 99 % to 1 %, the solution to the portfolio selection model becomes more constrained, i.e., the optimal x's are bounded within a narrower range as the constraint is tightened. The impact on the objective function v is straightforward: as the system is gradually constrained, the limited freedom of the optimal x's gives rise to a higher and higher risk level as k is increased. For example, if the parameter k is increased gradually, the Le Chatelier principle implies that in the original Markowitz minimization system the isorisk contour has the smallest curvature, reflecting the most efficient adjustment mechanism:

$$\operatorname{abs}\!\left(\frac{\partial^2 v}{\partial k^2}\right) \;\le\; \operatorname{abs}\!\left(\frac{\partial^2 v^{*}}{\partial k^2}\right) \;\le\; \operatorname{abs}\!\left(\frac{\partial^2 v^{**}}{\partial k^2}\right) \tag{99.6}$$
where v* and v** are the objective functions (total portfolio risk) corresponding to the additional constraints x_i ≤ s* and x_i ≤ s** for all i, with s* > s** representing the different investment proportions allowed under v* and v**, and abs denotes absolute value. Via the envelope theorem (Dixit 1990) we have

$$\frac{d\,v(x_i(k))}{dk} = \frac{d\,L(x_i(k),k)}{dk} = \frac{\partial L(x_i(k),k)}{\partial k} = \lambda\Big|_{x_i = x_i(k)} \tag{99.7}$$
Hence, Eq. 99.6 can be rewritten as

$$\operatorname{abs}\!\left(\frac{\partial \lambda}{\partial k}\right) \;\le\; \operatorname{abs}\!\left(\frac{\partial \lambda^{*}}{\partial k}\right) \;\le\; \operatorname{abs}\!\left(\frac{\partial \lambda^{**}}{\partial k}\right) \tag{99.8}$$
Equation 99.8 states that the Lagrange multiplier of the original Markowitz portfolio selection model is less sensitive to an infinitesimally small change in k than that of the model in which the constraints are gradually tightened. Note that the Lagrange multiplier λ is the reciprocal of the slope of the efficiency frontier curve frequently drawn in investment textbooks; hence, the original Markowitz model has the steepest slope for a given set of x_i's. However, the efficiency frontier curve of the Markowitz minimization system has a vertical segment corresponding to a range of low k's and a constant v. Only within this range do the optimal x's remain equal under the various degrees of constraint: there, constraint Eq. 99.2 is not active, the Lagrange multiplier is zero, and the equality relation holds in Eq. 99.8. Outside this range, the slopes of the efficiency frontier curves differ, as implied by Eq. 99.8.
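A quick way to see the ordering in Eqs. 99.6 and 99.8 numerically is to solve the program (99.1)-(99.4) with and without an upper bound on every weight and compare the minimized risk along a grid of k. The sketch below uses scipy.optimize and made-up expected returns and covariances; it is not the five-stock data set used in Sect. 99.3.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative data only (not the chapter's five-stock example).
r = np.array([0.06, 0.08, 0.10, 0.12, 0.14])          # expected returns
A = np.array([[0.9, 0.2, 0.1, 0.0, 0.1],
              [0.2, 1.1, 0.3, 0.2, 0.0],
              [0.1, 0.3, 1.3, 0.2, 0.1],
              [0.0, 0.2, 0.2, 1.6, 0.3],
              [0.1, 0.0, 0.1, 0.3, 2.0]]) * 1e-2       # covariance matrix (sigma_ij)

def min_risk(k, cap=1.0):
    """Minimize v = x' Sigma x subject to r'x >= k, sum(x) = 1, 0 <= x_i <= cap."""
    n = len(r)
    cons = [{"type": "ineq", "fun": lambda x: r @ x - k},   # Eq. 99.2
            {"type": "eq",   "fun": lambda x: x.sum() - 1.0}]  # Eq. 99.3
    res = minimize(lambda x: x @ A @ x, np.full(n, 1.0 / n),
                   bounds=[(0.0, cap)] * n, constraints=cons)
    return res.fun

for k in (0.08, 0.10, 0.12):
    v_free, v_cap = min_risk(k), min_risk(k, cap=0.40)
    print(f"k={k:.2f}  v(unconstrained)={v_free:.6f}  v(x_i<=0.40)={v_cap:.6f}")
# The capped frontier lies (weakly) above the unconstrained one and, per Eq. 99.6,
# its minimized risk bends upward in k at least as fast as the unconstrained risk.
```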
99.3 Simulation Results
To verify the result implied by the Le Chatelier principle, we employ a five-stock portfolio with the additional constraints x_i ≤ 50 % and x_i ≤ 40 %. The numerical solutions are reported in Table 99.1. An examination of Table 99.1 indicates that the efficiency frontier curve is vertical, and all optimal x's are identical, for 0.001 < k < 0.075. Beyond that point, the solutions for the x's begin to differ across the three models. Note that x_4 stays at its maximum possible value of 0.4 throughout the simulation for k > 0.075 in the model with the tightest constraint, x_i ≤ 0.4. In the case of x_i ≤ 0.5, a relatively loosely constrained Markowitz system, all the optimal decision variables remain the same as in the original Markowitz model for 0.01 < k < 0.1; beyond that range, x_4 is capped at 0.5. As can be seen from Table 99.1, the total risk v responds less strongly to changes in k in the original unconstrained Markowitz system than in the constrained systems. In other words, the original Markowitz minimization system guarantees
Table 99.1 Simulation results

Least constrained solution (original Markowitz model):
k (%)   v (10^5)   x1 %    x2 %   x3 %    x4 %    x5 %
1       257.2      39.19   0      31.87   28.94   0
2       257.2      39.19   0      31.87   28.94   0
3       257.2      39.19   0      31.87   28.94   0
4       257.2      39.19   0      31.87   28.94   0
5       257.2      39.19   0      31.87   28.94   0
6       257.2      39.19   0      31.87   28.94   0
7       260.8      35.02   0      32.6    32.38   0
7.5     274.8      30.54   0      32.77   36.69   0
8       299.3      25.82   0      33.27   40.91   0
8.5     333.1      21.65   0      33.26   43.63   1.45
9       371.2      17.82   0      32.92   45.73   3.53
9.5     413.2      14.05   0      32.53   47.64   5.79
10      459        9.68    0.58   32.17   49.59   7.98
10.5    508.3      4.83    1.96   31.44   51.56   10.2
11      560.9      0       3.53   30.46   53.55   12.46
11.5    619.9      0       1.34   27.91   55.8    14.95
12      687.5      0       0      24.31   58.11   17.58
12.5    765.4      0       0      19.02   60.68   20.3
13      854.3      0       0      13.73   63.2    23.07
13.5    954        0       0      8.45    65.72   25.83
14      1,064.6    0       0      3.16    68.25   28.59
14.5    1,309.1    0       0      0       55.63   44.37
15      2,847.3    0       0      0       20      80
15.29   4,402      0       0      0       0       100

Solution with x_i ≤ 0.5:
k (%)   v (10^5)   x1 %    x2 %   x3 %    x4 %    x5 %
1       257.2      39.19   0      31.87   28.94   0
2       257.2      39.19   0      31.87   28.94   0
3       257.2      39.19   0      31.87   28.94   0
4       257.2      39.19   0      31.87   28.94   0
5       257.2      39.19   0      31.87   28.94   0
6       257.2      39.19   0      31.87   28.94   0
7       260.8      35.02   0      32.6    32.38   0
7.5     274.8      30.54   0      32.77   36.69   0
8       299.3      25.82   0      33.27   40.91   0
8.5     333.1      21.65   0      33.26   43.63   1.45
9       371.2      17.82   0      32.92   45.73   3.53
9.5     413.2      14.05   0      32.53   47.64   5.79
10      459        9.68    0.58   32.17   49.59   7.98
10.5    509.5      4.25    2.1    32.23   50      11.42
11      567.5      0       2.66   32.03   50      15.31
11.5    637.4      0       0      30.39   50      19.62
12      724.5      0       0      25.39   50      24.62
12.5    826.7      0       0      20.52   50      29.48
13      949.7      0       0      15.53   50      34.48
13.5    1,086.8    0       0      10.65   50      39.45

Solution with x_i ≤ 0.4:
k (%)   v (10^5)   x1 %    x2 %   x3 %    x4 %    x5 %
1       257.2      39.19   0      31.87   28.94   0
2       257.2      39.19   0      31.87   28.94   0
3       257.2      39.19   0      31.87   28.94   0
4       257.2      39.19   0      31.87   28.94   0
5       257.2      39.19   0      31.87   28.94   0
6       257.2      39.19   0      31.87   28.94   0
7       260.8      35.02   0      32.6    32.38   0
7.5     274.8      30.54   0      32.77   36.69   0
8       300.5      24.91   0      34.55   40      5.39
8.5     340.2      20.42   0      35.34   40      4.24
9       387.7      15.93   0      36.13   40      7.94
9.5     443        11.44   0      36.92   40      11.64
10      506.2      6.95    0      37.71   40      15.34
10.5    576.7      1.23    1.93   37.7    40      19.15
11      656.5      0       0.21   36.45   40      23.34
11.5    751.7      0       0      31.79   40      28.22
12      866.3      0       0      26.79   40      33.22
12.5    995.2      0       0      21.91   40      38.09

[Figure omitted; surviving labels: K (%), Sigma, Xi]