
Econophysics and Economics of Games, Social Choices and Quantitative Techniques

New Economic Windows

Series Editors: Marisa Faggini, Mauro Gallegati, Alan Kirman

Series Editorial Board

Jaime Gil Aluja
Departament d'Economia i Organització d'Empreses, Universitat de Barcelona, Spain

Fortunato Arecchi
Dipartimento di Fisica, Università degli Studi di Firenze and INOA, Italy

David Colander
Department of Economics, Middlebury College, Middlebury, VT, USA

Richard H. Day
Department of Economics, University of Southern California, Los Angeles, USA

Steve Keen
School of Economics and Finance, University of Western Sydney, Australia

Marji Lines
Dipartimento di Scienze Statistiche, Università degli Studi di Udine, Italy

Thomas Lux
Department of Economics, University of Kiel, Germany

Alfredo Medio
Dipartimento di Scienze Statistiche, Università degli Studi di Udine, Italy

Paul Ormerod
Directors of Environment Business-Volterra Consulting, London, UK

Peter Richmond
School of Physics, Trinity College, Dublin 2, Ireland

J. Barkley Rosser
Department of Economics, James Madison University, Harrisonburg, VA, USA

Sorin Solomon
Racah Institute of Physics, The Hebrew University of Jerusalem, Israel

Pietro Terna
Dipartimento di Scienze Economiche e Finanziarie, Università degli Studi di Torino, Italy

Kumaraswamy (Vela) Velupillai
Department of Economics, National University of Ireland, Ireland

Nicolas Vriend
Department of Economics, Queen Mary University of London, UK

Lotfi Zadeh
Computer Science Division, University of California Berkeley, USA

Editorial Assistant: Anna Parziale
Dipartimento di Studi Economici, Università degli Studi di Napoli "Parthenope", Italy

Banasri Basu · Bikas K. Chakrabarti · Satya R. Chakravarty · Kausik Gangopadhyay
Editors

Econophysics and Economics of Games, Social Choices and Quantitative Techniques


Editors

Banasri Basu
Physics and Applied Mathematics Unit, Indian Statistical Institute, Kolkata 700108, India

Satya R. Chakravarty
Economic Research Unit, Indian Statistical Institute, Kolkata 700108, India

Bikas K. Chakrabarti
Centre for Applied Mathematics and Computational Science, Saha Institute of Nuclear Physics, Kolkata 700064, India, and Economic Research Unit, Indian Statistical Institute, Kolkata 700108, India

Kausik Gangopadhyay
Economic Research Unit, Indian Statistical Institute, Kolkata 700108, India; presently at Indian Institute of Management Kozhikode, Kozhikode 673570, India

ISBN 978-88-470-1500-5
e-ISBN 978-88-470-1501-2
DOI 10.1007/978-88-470-1501-2
Springer Dordrecht Heidelberg London New York

Library of Congress Control Number: 2009936774

© Springer-Verlag Italia 2010

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilm or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable to prosecution under the German Copyright Law.

The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher and the authors accept no legal responsibility for any damage caused by improper use of the instructions and programs contained in this book and the CD. Although the software has been tested with extreme care, errors in the software cannot be excluded.

Cover design: Simona Colombo, Milano
Final data processing: le-tex publishing services GmbH, Leipzig, Germany
Printing and binding: Grafiche Porpora, Segrate (MI)
Printed on acid-free paper

Springer is part of Springer Science+Business Media (www.springer.com)

Preface

The combined efforts of physicists and economists in recent years in analyzing and modeling various dynamic phenomena in monetary and social systems have led to encouraging developments, generally classified under the title of Econophysics. These developments share a common ambition with the already established field of Quantitative Economics. This volume intends to offer the reader a glimpse of these two parallel initiatives by collecting, in one cover, review papers written by well-known experts on the respective research frontiers.

This massive book presents a unique combination of research papers contributed almost equally by physicists and economists, with additional contributions from computer scientists and mathematicians. It consists of two parts. The first part concentrates on the econophysics of games and social choices and is the proceedings of the Econophys-Kolkata IV workshop, held at the Indian Statistical Institute and the Saha Institute of Nuclear Physics, both in Kolkata, during March 9-13, 2009. The second part consists of contributions to quantitative economics by experts, in connection with the Platinum Jubilee celebration of the Indian Statistical Institute. In this connection, a Foreword for the volume has been written by Sankar K. Pal, Director of the Indian Statistical Institute. Both parts focus mostly on frontier problems in games and social choices.

The first part of the book deals with several recent developments in econophysics. Game theory is integral to the formulation of modern economic analysis. Games often display situations where the social optimum cannot be reached as a result of non-cooperation between different agents. The Kolkata Paise Restaurant problem is an example of such a game, where the outcome of a non-dictatorial allocation is far inferior to the social optimum. Asim Ghosh, Anindya Sundar Chakrabarti, and Bikas K. Chakrabarti study this problem under some homogeneous learning strategies, when the agents are symmetric in nature. Debasis Mishra and Manipushpak Mitra characterize the optimal rules to allocate a set of jobs to a set of heterogeneous machines. Edward W. Piotrowski, Jan Sładkowski, and Anna Szczypińska study how investors facing different games gather information and form decisions despite being unaware of the complete structure of the game; they apply reinforced learning methods to study the market.


The Prisoner's Dilemma is a classic game. There are mechanisms to implement cooperation in this game and ensure the socially optimal outcome; György Szabó, Attila Szolnoki, and Jeromos Vukov show how the efficiency of such a mechanism can be improved. The applications of the theory of quantum mechanics are pervasive. Recently, the new interdisciplinary stream of quantum games has been investigating possible improvements in the strategies applied to classical games. Tad Hogg, David A. Fattal, Kay-Yut Chen, and Saikat Guha apply the theory of quantum information to economic systems. They show that quantum information technology offers a new paradigm for various economic applications and provides new ways to construct and formulate economic protocols, particularly in the context of auctions. In another article, Sudhakar Yarlagadda uses many-body entangled states to ensure coordination in some games where coordination could not otherwise be obtained.

Another pertinent contribution of econophysics is the distributional analysis of different social phenomena. Starting with simple processes, researchers have matched the complicated empirically observed distributions with remarkable success. One recurring distribution in this literature is named after Pareto. Jun-ichi Inoue and Jun Ohkubo investigate equilibrium properties of disordered urn models and discuss the conditions for the heavy-tailed power law (Pareto) in the occupation probability, using the statistical physics of disordered spin systems. In the next article, Davide Fiaschi and Matteo Marsili construct an economic environment in which a large number of firms and households interact through the capital and labor markets; in that model economy, the top tail of the equilibrium wealth distribution is well represented by a Pareto distribution. A kinetic model for wealth distribution including taxation and redistribution is put forward by Giuseppe Toscani and Carlo Brugna, and the impact of the model parameters on the Pareto exponent is analyzed numerically. Bertram Düring's article is concerned with the formation of bimodal income and wealth distributions in a society, along with opinion formation in a heterogeneous society.

The Pareto law is omnipresent in nature. Besides wealth and income distributions, it is also observed in the contexts of city-size distribution and behavior in financial markets. Kausik Gangopadhyay and Banasri Basu investigate the relationship between two well-accepted empirical propositions regarding the distribution of population in cities, namely Gibrat's law and Zipf's law, using the Chinese census data; they also build a relevant theoretical framework incorporating the formation of special economic zones. A mean-field model of financial markets is proposed by Vikram S. V. and Sitabhra Sinha; this model reproduces the long tailed distributions and volatility correlations without explicit agent-agent interaction or prior assumptions about individual strategies. Prasanta K. Panigrahi, Sayantan Ghosh, P. Manimaran, and Dilip P. Ahalpara analyze Bombay Stock Exchange price data using recently developed wavelet-based methods; comparison of this method with Fourier power spectrum analysis characterizes the periodic nature and correlation properties of the time series. A dynamic nonlinear modeling of industrial growth data is performed by Arnab K. Ray.

In any social discipline, the measurement of inequality and welfare is fundamental for quantitative analysis. John Angle, François Nielsen, and Enrico Scalas


propose the Kuznets curve for the measurement of income inequality in a society comprising two types of workers: the poor unskilled and the rich skilled. An entropy-based performance index is suggested by Vijay A. Singh, Praveen Pathak, and Pratyush Pandey for monitoring the teaching-learning process; they elucidate their proposition through a survey-based empirical analysis. Jisnu Basu, Bijan Sarkar, and Ardhendu Bhattacharya's article uses the concept of thermodynamics to evaluate the technology level in an industrial supply chain, with empirical illustrations. In the concluding session of the Econophys-Kolkata IV workshop, there were many stimulating discussions on the course of the discipline of Econophysics; some of them are noted in the last section of Part I.

The papers in Part II span emerging and classical areas in both theoretical and applied economics. They involve applications of mathematical and econometric techniques. Some papers argue about the feasibility and implementation of new economic policies arising from changes in a country over the past half-century. Recently, the literature on multi-utility representation of binary relations has received significant attention. The article of Kuntal Banerjee and Ram Sewak Dubey demonstrates the impossibility of representation of intergenerational preferences in the multi-utility framework under certain restrictions on the cardinality of the set of utilities. A reformulation of public policies, like expenditure on public health and education and the design of social security systems, may be necessary when the standard of living and the composition of the population change; this problem receives attention from S. Subramanian, who investigates a particular approach to ethical aspects of population change in the context of inequality measurement. Generation of income and its distribution are often explained by stochastic processes; the contribution of Satya R. Chakravarty and Subir Ghosh uses an "economics approach" to derive a size distribution of income.

The Indian economy, as well as the discipline of development economics, has undergone substantial changes over the past half-century. Consequently, many new areas of economic policy require better information; new theories and empirical research methodologies need surveys to be designed and implemented in different ways. There also exist problems with coordination across different sources of data and with the under-utilization of existing information. An account by Dilip Mookherjee argues the need for a comprehensive reassessment of the Indian Statistical System, with a view to proposing vital changes and amendments. Sudeshna Maitra's contribution, in the important area of health economics, examines the role of parental education in influencing child health care using recent Indian data. A paper by V. K. Ramachandran, Madhura Swaminathan, and Aparajita Bakshi attempts to assess whether it is possible to simultaneously achieve the objectives of food security in rice production and large-scale diversification in crop production in an Indian state. Estimation of equivalence scales is an important topic of research in applied economics; Amita Majumder and Manisha Chakrabarty's contribution provides a concise account of a two-step estimation procedure using Engel curve analysis, based on a single cross-section of data on household consumer expenditure, accompanied by an illustration using Indian consumer expenditure data.
One of the major issues in the recent empirical growth literature is 'convergence'.


In their contribution, Samarjit Das and Manisha Chakrabarty develop a new test for 'absolute convergence'. The proposed methodology is applied to check whether there is absolute convergence in terms of real per capita income across different OECD countries. Economic growth has always been an important area of research in economics. Soumya Datta and Anjan Mukherji examine the robustness of the results on Goodwin's growth cycles and demonstrate that if the equation determining the rate of change of the real wages depends only on the employment rate, Goodwin's conclusions follow; but the Goodwin cycles disappear if the share of wages is admitted into this equation. Bidisha Chakraborty and Manash Ranjan Gupta's article develops the policy implications of a redistributive tax on the incomes of the rich to finance an education subsidy to the poor in a less developed economy.

Two contributions on the theory of international trade deal with two different issues. Sajal Lahiri's paper on trade and conflict resolution examines the effect of foreign aid and of a tax on arms exports on countries involved in war efforts. It is demonstrated that while foreign aid to the countries engaged in war is likely to increase war efforts, the opposite effect is likely to arise from a tax on exports of arms. Brati Sankar Chakraborty and Abhirup Sarkar's contribution traces the effect of trade on the skill premium in the trading countries and shows that even under full factor equalization the skill premium rises all over the trading world; moreover, the wage inequality in each country keeps rising gradually over time as more countries participate in trade.

The next set of papers takes the reader to different issues in game theory and industrial organization. Manipushpak Mitra and Arunava Sen analyze allocation problems and look for a mechanism in which truth-telling is a dominant strategy for each agent and the outcome in every state is efficient. It is shown that in the homogeneous-good case an impossibility arises with diverse preferences; however, a possibility result emerges if a package assignment problem is considered. Anirban Kar examines the problem of allocating the costs associated with a project among the set of individuals who derive benefit from the project; a class of cost allocation rules over the efficient network is constructed and their fairness properties are investigated. In their contribution, Chirantan Ganguly and Indrajit Ray consider the Battle of the Sexes game with private information and allow cheap talk before the game is played. It is shown that if the best fully revealing symmetric cheap talk equilibrium exists, then it has a desirable property. The paper of Anirban Kar, Manipushpak Mitra, and Suresh Mutuswami analytically identifies a situation in which two solution concepts in cooperative game theory, the prenucleolus and the Shapley value, coincide. Sonali Roy's paper provides an understanding of the measurement of voting power. It is shown that if the voters can be ranked in terms of their influence over a decision-making process, then the Johnston index can be a useful indicator of voting power; conditions for the Johnston index to rank the voters in the same order as the Banzhaf-Coleman and Shapley-Shubik voting power indices are also identified. Krishnendu Ghosh Dastidar's paper examines the effects of an increase in market size and of the entry of additional firms on equilibrium configurations in a homogeneous-good market. It is demonstrated that the conventional wisdom regarding the effects of an


increase in market size may not hold unambiguously; however, existing results on the effects of additional entry are reconfirmed.

We extend our sincere gratitude to the participants of the workshop, Econophys-Kolkata IV, as well as to the other contributors to this volume. Our special thanks go to Mauro Gallegati of the editorial board of the New Economic Windows series; his encouragement and support enabled us to publish this volume in this esteemed series once again. It may be mentioned here that the proceedings of the previous sessions of Econophys-Kolkata (I, II, and III) have been published in this very series, respectively under the titles Econophysics of Wealth Distributions (Springer, Milan, 2005), Econophysics of Stock and other Markets (Springer, Milan, 2006), and Econophysics of Markets and Business Networks (Springer, Milan, 2007). We appreciate very much the prompt and cordial support from Marina Forlizzi (Springer, Milan) regarding these publications. We are also grateful for the support from the Collaboration Project "ISI-ERU & SINP-CAMCS", funded by the Centre for Applied Mathematics and Computational Science, Saha Institute of Nuclear Physics, Kolkata, and for the infrastructure provided by the Indian Statistical Institute, Kolkata.

We sincerely hope the readers will enjoy the novelty and richness of the recent research ideas in both economics and econophysics of various game theoretic and social choice models, as well as the quantitative techniques (even some quantum mechanical techniques!) developed to handle them. We also hope these papers will inspire new researchers to venture into and contribute to these rapidly growing fields.

Kolkata, June 2009

Banasri Basu
Bikas K. Chakrabarti
Satya Ranjan Chakravarty
Kausik Gangopadhyay

Foreword

The Indian Statistical Institute (ISI) was established on 17th December, 1931 by the great visionary Prof. Prasanta Chandra Mahalanobis to promote research in the theory and applications of statistics as a new scientific discipline in India. In 1959, Pandit Jawaharlal Nehru, the then Prime Minister of India, introduced the ISI Act in the parliament and designated it an Institution of National Importance because of its remarkable achievements in statistical work as well as its contribution to economic planning.

Today, the Indian Statistical Institute occupies a prestigious position in the academic firmament. It has been a haven for bright and talented academics working in a number of disciplines. Its research faculty has done India proud in the arenas of Statistics, Mathematics, Economics, and Computer Science, among others. Over seventy-five years, it has grown into a massive banyan tree, like the institute emblem. The Institute now serves the nation as a unified and monolithic organization from different places: the Headquarters in Kolkata; three centers in Delhi, Bangalore, and Chennai; a network of five SQC-OR Units located at Mumbai, Pune, Baroda, Hyderabad, and Coimbatore; and a branch (field station) at Giridih.

The Platinum Jubilee celebrations of ISI were launched by the Honorable Prime Minister, Prof. Manmohan Singh, on December 24, 2006, and the Govt. of India has declared 29th June as "Statistics Day", to commemorate nationally the birthday of Prof. Mahalanobis.

Prof. Mahalanobis was a great believer in interdisciplinary research, because he thought that it would promote the development not only of Statistics but also of the other natural and social sciences. To promote interdisciplinary research, major strides were made in the areas of computer science, statistical quality control, economics, biological and social sciences, and physical and earth sciences. The Institute's motto of 'unity in diversity' has been the guiding principle of all its activities since its inception. It highlights the unifying role of statistics in relation to various scientific activities.

In tune with this hallowed tradition, a comprehensive academic programme, involving Nobel Laureates, Fellows of the Royal Society, an Abel Prize winner, and other dignitaries, has been implemented throughout the Platinum Jubilee year,


highlighting the emerging areas of ongoing frontline research in its various scientific divisions, centers, and outlying units. It includes international and national-level seminars, symposia, conferences, and workshops, as well as series of special lectures. As an outcome of these events, the Institute is bringing out a series of comprehensive volumes in different subjects, including those published under the title Statistical Science and Interdisciplinary Research by World Scientific Press, Singapore.

The present volume, titled "Econophysics and Economics of Games, Social Choices and Quantitative Techniques", is one such outcome, published by Springer Verlag. It deals with frontier problems in games and social choices, and has thirty-six chapters written by eminent physicists and economists from different parts of the world. The chapters are divided into two parts. The first part, consisting of eighteen articles, discusses the development of the econophysics of games and social choices. The remaining eighteen articles in Part Two concentrate on recent advances in quantitative economics, ranging from classical to modern areas, both theoretical and applied, using mathematical and econometric techniques. I believe the state-of-the-art studies presented in this book will be very useful to researchers as well as practitioners.

Thanks to the contributors for their excellent research contributions, and to the volume editors Dr. Banasri Basu, Prof. Bikas K. Chakrabarti, Prof. Satya R. Chakravarty, and Dr. Kausik Gangopadhyay for their sincere effort in bringing out the volume nicely in time. Thanks are also due to Springer Verlag for their initiative in publishing the book and being a part of the Platinum Jubilee endeavor of the Institute.

Kolkata, June 2009

Sankar K. Pal
Director, Indian Statistical Institute

Contents

Part I Econophysics of Games and Social Choices

Kolkata Paise Restaurant Problem in Some Uniform Learning Strategy Limits . . . 3
Asim Ghosh, Anindya Sundar Chakrabarti, and Bikas K. Chakrabarti

Cycle Monotonicity in Scheduling Models . . . 10
Debasis Mishra and Manipushpak Mitra

Reinforced Learning in Market Games . . . 17
Edward W. Piotrowski, Jan Sładkowski, and Anna Szczypińska

Mechanisms Supporting Cooperation for the Evolutionary Prisoner's Dilemma Games . . . 24
György Szabó, Attila Szolnoki and Jeromos Vukov

Economic Applications of Quantum Information Processing . . . 32
Tad Hogg, David A. Fattal, Kay-Yut Chen, and Saikat Guha

Using Many-Body Entanglement for Coordinated Action in Game Theory Problems . . . 44
Sudhakar Yarlagadda

Condensation Phenomena and Pareto Distribution in Disordered Urn Models . . . 52
Jun-ichi Inoue and Jun Ohkubo

Economic Interactions and the Distribution of Wealth . . . 61
Davide Fiaschi and Matteo Marsili

Wealth Redistribution in Boltzmann-like Models of Conservative Economies . . . 71
Giuseppe Toscani and Carlo Brugna


Multi-species Models in Econo- and Sociophysics . . . 83
Bertram Düring

The Morphology of Urban Agglomerations for Developing Countries: A Case Study with China . . . 90
Kausik Gangopadhyay and Banasri Basu

A Mean-Field Model of Financial Markets: Reproducing Long Tailed Distributions and Volatility Correlations . . . 98
Vikram S. V. and Sitabhra Sinha

Statistical Properties of Fluctuations: A Method to Check Market Behavior . . . 110
Prasanta K. Panigrahi, Sayantan Ghosh, P. Manimaran, and Dilip P. Ahalpara

Modeling Saturation in Industrial Growth . . . 119
Arnab K. Ray

The Kuznets Curve and the Inequality Process . . . 125
John Angle, François Nielsen, and Enrico Scalas

Monitoring the Teaching - Learning Process via an Entropy Based Index . . . 139
Vijay A. Singh, Praveen Pathak, and Pratyush Pandey

Technology Level in the Industrial Supply Chain: Thermodynamic Concept . . . 147
Jisnu Basu, Bijan Sarkar, and Ardhendu Bhattacharya

Discussions and Comments in Econophys Kolkata IV . . . 154
Abhirup Sarkar, Sitabhra Sinha, Bikas K. Chakrabarti, A.M. Tishin, and V.I. Zverev

Part II Contributions to Quantitative Economics

On Multi-Utility Representation of Equitable Intergenerational Preferences . . . 175
Kuntal Banerjee and Ram Sewak Dubey

Variable Populations and Inequality-Sensitive Ethical Judgments . . . 181
S. Subramanian

A Model of Income Distribution . . . 192
Satya R. Chakravarty and Subir Ghosh

Statistical Database of the Indian Economy: Need for New Directions . . . 204
Dilip Mookherjee


Does Parental Education Protect Child Health? Some Evidence from Rural Udaipur . . . 213
Sudeshna Maitra

Food Security and Crop Diversification: Can West Bengal Achieve Both? . . . 233
V. K. Ramachandran, Madhura Swaminathan, and Aparajita Bakshi

Estimating Equivalence Scales Through Engel Curve Analysis . . . 241
Amita Majumder and Manisha Chakrabarty

Testing for Absolute Convergence: A Panel Data Approach . . . 252
Samarjit Das and Manisha Chakrabarty

Goodwin's Growth Cycles: A Reconsideration . . . 263
Soumya Datta and Anjan Mukherji

Human Capital Accumulation, Economic Growth and Educational Subsidy Policy in a Dual Economy . . . 277
Bidisha Chakraborty and Manash Ranjan Gupta

Arms Trade and Conflict Resolution: A Trade-Theoretic Analysis . . . 293
Sajal Lahiri

Trade and Wage Inequality with Endogenous Skill Formation . . . 306
Brati Sankar Chakraborty and Abhirup Sarkar

Dominant Strategy Implementation in Multi-unit Allocation Problems . . . 320
Manipushpak Mitra and Arunava Sen

Allocation through Reduction on Minimum Cost Spanning Tree Games . . . 331
Anirban Kar

Unmediated and Mediated Communication Equilibria of Battle of the Sexes with Incomplete Information . . . 347
Chirantan Ganguly and Indrajit Ray

A Characterization Result on the Coincidence of the Prenucleolus and the Shapley Value . . . 362
Anirban Kar, Manipushpak Mitra, and Suresh Mutuswami

The Ordinal Equivalence of the Johnston Index and the Established Notions of Power . . . 372
Sonali Roy

Reflecting on Market Size and Entry under Oligopoly . . . 381
Krishnendu Ghosh Dastidar

Part I

Econophysics of Games and Social Choices

Kolkata Paise Restaurant Problem in Some Uniform Learning Strategy Limits

Asim Ghosh, Anindya Sundar Chakrabarti, and Bikas K. Chakrabarti

Abstract We study the dynamics, in some uniform learning strategy limits, of a probabilistic version of the "Kolkata Paise Restaurant" problem, where N agents choose among N equally priced but differently ranked restaurants every evening, each trying to get dinner at the best possible ranked restaurant (each restaurant serves only one customer; the rest arriving there go without dinner that evening). We consider the learning to be uniform among the agents and assume that each follows the same probabilistic strategy, dependent on the information about the past successes in the game. The numerical results for the utilization of the restaurants in some limiting cases are examined analytically.

Asim Ghosh
Theoretical Condensed Matter Physics Division, Saha Institute of Nuclear Physics, 1/AF Bidhannagar, Kolkata 700 064, India. e-mail: [email protected]

Anindya Sundar Chakrabarti
Indian Statistical Institute, 203 Barrackpore Trunk Road, Kolkata 700108, India. e-mail: [email protected]

Bikas K. Chakrabarti
Centre for Applied Mathematics & Computational Science and Theoretical Condensed Matter Physics Division, Saha Institute of Nuclear Physics, Kolkata 700064, India, and Economic Research Unit, Indian Statistical Institute, Kolkata 700108, India. e-mail: [email protected]

1 Introduction

The Kolkata Paise Restaurant (KPR) problem (see [1]) is a repeated game, played between a large number of agents having no interaction among themselves. In KPR, N prospective customers choose from N restaurants each evening (time t) in a parallel decision mode. Each restaurant has an identical price but a different rank k (agreed upon by all the N agents) and can serve only one customer.


If more than one agent arrives at any restaurant on any evening, one of them is chosen randomly and served, and the rest do not get dinner that evening. Information regarding the agent distributions etc. for earlier evenings is available to everyone. Each evening, each agent makes his/her decision independently of the others. Each agent has the objective of arriving at the highest possible ranked restaurant, avoiding the crowd, so that he or she gets dinner there. Because of fluctuations (in avoiding herding behavior), more than one agent may choose the same restaurant, and all of them, except the one randomly chosen by the restaurant, then miss dinner that evening; they are likely to change their strategy for choosing the respective restaurants next evening. As can easily be seen, no arrangement of the agent distribution among the restaurants can satisfy everybody on any evening, and the dynamics of optimal choice continues for ever. On a collective level, we look for the fraction (f) of customers getting dinner in any evening and also its distribution for various strategies of the game. It might be interesting to note here that for KPR, most of the strategies will give a low average (over evenings) value of resource utilization (average fraction f¯ < 1 of the restaurants occupied on any evening). Here we study the dynamics when each agent chooses the k-th ranked restaurant with probability

pk(t) = k^α exp(−nk(t−1)/T)/z, where z = ∑_{k=1}^{N} k^α exp(−nk(t−1)/T),   (1)

nk(t−1) being the number of agents who arrived at the k-th ranked restaurant on the previous evening (t−1), α ≥ 0 a rank-sensitivity exponent, and T > 0 a noise parameter. For α > 0 and T > 0, the probability for any agent to choose a particular restaurant increases with its rank k and decreases with the past popularity of the same restaurant (given by the number nk(t−1) of agents arriving at that restaurant on the previous evening). For α = 0 and T → ∞, pk(t) = 1/N corresponds to the random choice (rank-independent) case. For α = 0, T → 0, the agents avoid those restaurants visited last evening and choose again randomly among the rest. For α = 1 and T → ∞, the game corresponds to a strictly rank-dependent choice case. We concentrate on these three special limits.


2 Numerical Analysis

2.1 Random-Choice

For the case where α = 0 and T → ∞, the probability pk(t) becomes independent of k and equal to 1/N. For the simulation we take 1000 restaurants and 1000 agents, and on each evening t an agent selects any restaurant with equal probability p = 1/N. All averages have been made over 10^6 time steps. We study the variation of the probability D(f) of the agents getting dinner versus their fraction f. The numerical analysis shows that the mean and the mode of the distribution occur around f ≃ 0.63 and that the distribution D(f) is a Gaussian around that value (see Figure 1).
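A minimal simulation sketch of this limit follows (our illustration, not the authors' code; the number of evenings and the random seed are arbitrary choices here, far smaller than the 10^6 steps used in the text but enough to reproduce the mean):

import numpy as np

# Random-choice limit (alpha = 0, T -> infinity): uniform independent choices.
rng = np.random.default_rng(0)
N = 1000                    # number of agents = number of restaurants
evenings = 2000

fractions = []
for _ in range(evenings):
    choices = rng.integers(0, N, size=N)   # each agent picks uniformly at random
    occupied = np.unique(choices).size     # restaurants with at least one arrival,
                                           # each of which serves exactly one customer
    fractions.append(occupied / N)

print(np.mean(fractions))   # ~0.632, i.e. 1 - exp(-1)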

[Figure 1: plot of D(f) against f; legend: avoiding-past-crowd choice, strict-rank-dependent choice, random choice.]

Fig. 1 Numerical simulation results for the distribution D(f) of the fraction f of people getting dinner any evening (or fraction of restaurants occupied on any evening) against f for different limits of α and T. All the simulations have been done for N = 1000 (number of restaurants and agents) and the statistics have been obtained after averaging over 10^6 time steps (evenings) after stabilization

2.2 Strict-Rank-Dependent Choice

For α = 1, T → ∞, pk(t) = k/z with z = ∑k k. In this case, each agent chooses the restaurant having rank k with a probability given strictly by its rank k. Here also we take 1000 agents and 1000 restaurants and average over 10^6 time steps for obtaining the statistics. Figure 1 shows that D(f) is again a Gaussian and that its maximum occurs at f ≃ 0.58 ≡ f¯.
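The same sketch adapts directly to this limit by sampling with probabilities proportional to the ranks (again our illustrative code, with arbitrary run length and seed):

import numpy as np

# Strict-rank-dependent limit (alpha = 1, T -> infinity): p_k = k / sum_k k.
rng = np.random.default_rng(0)
N = 1000
p = np.arange(1, N + 1) / (N * (N + 1) / 2)

fractions = []
for _ in range(2000):
    choices = rng.choice(N, size=N, p=p)   # each agent samples a restaurant ~ p_k
    fractions.append(np.unique(choices).size / N)

print(np.mean(fractions))   # ~0.58, matching the peak of D(f) in Figure 1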

[Figure 2: plot of the average fraction of utilization (f¯) against the noise parameter T; legend: α = 0.0, α = 1.0.]

Fig. 2 Numerical simulation results for the average resource utilization fraction (f¯) against the noise parameter T for different values of α (> 0)

2.3 Avoiding-Past-Crowd Choice

In this case an agent chooses randomly among those restaurants which went vacant in the previous evening: with probability pk(t) = exp(−nk(t−1)/T)/z, where z = ∑k exp(−nk(t−1)/T) and T → 0, one gets pk → 0 for those values of k for which nk(t−1) > 0, and pk = 1/N′ for the other values of k, where N′ is the number of restaurants vacant at time t − 1. For the numerical studies we again take N = 1000 and average the statistics over 10^6 time steps. Figure 1 shows the Gaussian distribution D(f) of the restaurant utilization fraction f. The average utilization fraction f¯ is seen to be around 0.46.
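A sketch of this limit (ours, with arbitrary run length, seed, and burn-in; the initial condition of an empty first evening is our assumption) iterates the avoid-the-crowd rule directly:

import numpy as np

# Avoiding-past-crowd limit (alpha = 0, T -> 0).
rng = np.random.default_rng(0)
N = 1000
occupied = np.zeros(N, dtype=bool)         # evening 0: every restaurant vacant

fractions = []
for _ in range(2000):
    vacant = np.flatnonzero(~occupied)     # restaurants nobody visited last evening
    choices = rng.choice(vacant, size=N)   # all N agents choose uniformly among them
    occupied = np.zeros(N, dtype=bool)
    occupied[np.unique(choices)] = True
    fractions.append(occupied.mean())

print(np.mean(fractions[100:]))   # ~0.46 after discarding a short transient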

3 Analytical Results

3.1 Random-Choice Case

Suppose there are λN agents and N restaurants. An agent can select any restaurant with equal probability. Therefore, the probability that a single restaurant is chosen by m agents is given by a Poisson distribution in the limit N → ∞:

D(m) = (λN choose m) p^m (1 − p)^(λN−m), with p = 1/N,
     = (λ^m/m!) exp(−λ) as N → ∞.   (2)


Therefore the fraction of restaurants not chosen by any agent is given by D(m = 0) = exp(−λ), which implies that the average fraction of restaurants occupied on any evening is given by [1]

f¯ = 1 − exp(−λ) ≃ 0.63 for λ = 1   (3)

in the KPR problem. The distribution of the fraction of utilization will be Gaussian around this average.

3.2 Strict-Rank-Dependent Choice

In this case, an agent goes to the k-th ranked restaurant with probability pk(t) = k/∑k; that is, pk(t) given by (1) in the limit α = 1, T → ∞. Starting with N restaurants and N agents, we make N/2 pairs of restaurants, where each pair has restaurants ranked k and N + 1 − k with 1 ≤ k ≤ N/2. Therefore, an agent chooses any pair of restaurants with uniform probability p = 2/N; that is, N agents choose randomly from N/2 pairs of restaurants. Therefore the fraction of pairs selected by the agents is (from Eq. (2))

f0 = 1 − exp(−λ) ≃ 0.86 for λ = 2.   (4)

Also, the expected number of restaurants occupied in a pair of restaurants with ranks k and N + 1 − k, chosen by a pair of agents, is

Ek = 1 × k²/(N+1)² + 1 × (N+1−k)²/(N+1)² + 2 × 2k(N+1−k)/(N+1)².   (5)

Therefore, the fraction of restaurants occupied by pairs of agents is

f1 = (1/N) ∑_{k=1,...,N/2} Ek ≃ 0.67.   (6)

Hence, the actual fraction of restaurants occupied by the agents is

f¯ = f0 · f1 ≃ 0.58.   (7)

Again, this compares well with the numerical observation of the most probable distribution position (see Figures 1 and 2).
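The estimates in Eqs. (4)-(7) are easy to reproduce; the following short script (ours, for illustration) evaluates them directly for N = 1000:

import numpy as np

# Evaluate Eqs. (4)-(7) for the strict-rank-dependent limit.
N = 1000
k = np.arange(1, N // 2 + 1)   # ranks k, paired with ranks N + 1 - k

f0 = 1 - np.exp(-2)                                               # Eq. (4), ~0.86
E_k = (k**2 + (N + 1 - k)**2 + 4 * k * (N + 1 - k)) / (N + 1)**2  # Eq. (5)
f1 = E_k.sum() / N                                                # Eq. (6), ~0.67
print(f0 * f1)                                                    # Eq. (7), ~0.58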

3.3 Avoiding-Past-Crowd Choice

We consider here the case where each agent chooses, on any evening (t), randomly among the restaurants to which nobody had gone on the last evening (t − 1).


This corresponds to the case where α = 0 and T → 0 in Eq. (1). Our numerical simulation result for the distribution D(f) of the fraction f of utilized restaurants is again Gaussian, with the most probable peak at f¯ ≃ 0.46 (see Figures 1 and 2). This can be explained in the following way: as the fraction f¯ of restaurants visited by the agents on the last evening is avoided by the agents this evening, the number of available restaurants is N(1 − f¯) for this evening, and they are chosen randomly by all the N agents. Hence, when fitted to Eq. (2), λ = 1/(1 − f¯). Therefore, following Eq. (2), we can write the equation for f¯ as

(1 − f¯) [1 − exp(−1/(1 − f¯))] = f¯.   (8)

The solution of this equation gives f¯ ≃ 0.46. This result agrees well with the numerical results for this limit (see Figures 1 and 2; α = 0, T → 0).
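Eq. (8) can be solved by simple fixed-point iteration; a minimal sketch (the iteration scheme and starting point are our choices, not the paper's):

import math

# Solve Eq. (8) by fixed-point iteration; the map is contracting near the
# solution, so the iterates converge quickly.
f = 0.5
for _ in range(100):
    f = (1 - f) * (1 - math.exp(-1.0 / (1 - f)))
print(f)   # ~0.457, i.e. f-bar ~ 0.46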

4 Summary and Discussion

We consider here a game where N agents (prospective customers) attempt to choose every evening (t) from N equally priced restaurants (hence no budget consideration is important for any individual agent), each capable of serving only one customer and having a well-defined rank k (= 1, ..., N) agreed upon by all the agents. The decision on every evening (t) is made by each agent independently, based on the information about the rank k of the restaurants and their past popularity, given by nk(t−1), ..., nk(0) in general. We consider here cases where each agent chooses the k-th ranked restaurant with probability pk(t) given by Eq. (1). The utilization fraction f of those restaurants on every evening is studied and the distributions D(f) are shown in Figure 1 for some special cases. From the numerical studies, we find the distributions to be Gaussian, with the most probable utilization fraction f¯ ≃ 0.63, 0.58 and 0.46 for the cases α = 0, T → ∞; α = 1, T → ∞; and α = 0, T → 0, respectively. The analytical estimates for f¯ in these limits are also given, and they agree very well with the numerical observations.

The KPR problem (see also the Kolkata Restaurant Problem [2]) has, in principle, a 'trivial' solution (dictated from outside) where each agent gets into one of the respective restaurants (full utilization, with f = 1) starting on the first evening, and gets the best possible sharing of their ranks as well when each one shifts to the next ranked restaurant (with periodic boundary) on the successive evenings. However, this can be extremely difficult to achieve in the KPR game, even after a long trial time, when each agent decides in parallel (or democratically) on their own, based on past experience and information regarding the history of the entire system of agents and restaurants. The problem becomes truly difficult in the N → ∞ limit. The KPR problem has similarity with the Minority Game Problem [3, 4], as in both games herding behavior is punished and diversity is encouraged. Also, both involve learning by the agents from their past successes etc. Of course, KPR has some simple exact solution limits, a few of which are discussed here.


In none of the cases considered here are the learning strategies individualistic; rather, all the agents choose following the probability given by Eq. (1). In a few different limits of such a learning strategy, the average utilization fraction f¯ and its distribution are obtained and compared with the analytic estimates, which are reasonably close. Needless to mention, the real challenge is to design algorithms of learning mixed strategies (e.g., from the pool discussed here) by the agents so that the simple 'dictated' solution emerges eventually, even when everyone decides on the basis of their own information independently.

Acknowledgements We are grateful to Arnab Chatterjee and Manipushpak Mitra for their important comments and suggestions.

References

1. A.S. Chakrabarti, B.K. Chakrabarti, A. Chatterjee, M. Mitra, The Kolkata Paise Restaurant Problem and Resource Utilization, Physica A 388 (2009) 2420-2426
2. B.K. Chakrabarti, Kolkata Restaurant Problem as a Generalised El Farol Bar Problem, in Econophysics of Markets and Business Networks, Eds. A. Chatterjee and B.K. Chakrabarti, New Economic Windows Series, Springer, Milan (2007), pp. 239-246
3. D. Challet, M. Marsili, Y.-C. Zhang, Minority Games: Interacting Agents in Financial Markets, Oxford University Press, Oxford (2005)
4. D. Challet, Model of Financial Market Information Ecology, in Econophysics of Stock and other Markets, Eds. A. Chatterjee and B.K. Chakrabarti, Springer, Milan (2006), pp. 101-112

Cycle Monotonicity in Scheduling Models

Debasis Mishra and Manipushpak Mitra

Abstract We study the scheduling model in which a set of jobs has to be allocated across a set of machines whose processing speeds can differ. We assume that the waiting cost of each job is private information and that all jobs take identical processing time on any given machine. By allowing for monetary transfers, we show that an allocation rule is strategyproof if and only if it is non-increasing in completion time. We also identify the unique class of transfer rules that guarantee strategyproofness of any allocation rule satisfying non-increasingness in completion time.

1 Introduction

In this paper we address the incentive issue in the scheduling model with non-identical machines.¹ The scheduling model with non-identical machines is a setup characterized by the following features: (a) there are n agents and m machines, (b) each agent has exactly one job to process using these machines, (c) each machine can process one job at a time, (d) jobs are identical in the sense that all jobs have the same completion time on any given machine, and (e) machines are non-identical in the sense that the completion time for any given job may differ across machines.

In the scheduling context, an extensively studied allocation rule is the efficient allocation rule, which minimizes the aggregate waiting cost. Maniquet [2] notes that scheduling models with the efficient allocation rule capture many economic environments.

Debasis Mishra
Planning Unit, Indian Statistical Institute, 7, SJSS Marg, New Delhi-110016, India. e-mail: [email protected]

Manipushpak Mitra
Economic Research Unit, Indian Statistical Institute, 203, B. T. Road, Kolkata-700108, India. e-mail: [email protected]

¹ This problem was introduced in Mitra [3].


However, there are situations where we want to implement an allocation rule that is different from efficiency. For example, there may be some well-defined priority across the set of agents. In an academic institute, faculty members may be given priority over students in using computers (or printers or photocopiers); that is, the objective is to process the jobs of the faculty members first (possibly in an efficient way) and, after they are through, the jobs of the students are processed. There may be further priority within faculty members (professors and non-professors, etc.) and/or within students (graduates and undergraduates, etc.) and so on. We can have a situation where a set of agents always gets some fixed allotted processing slots in a given order while the remaining agents are allotted the remaining processing slots. We can also have a situation where there is a subset of jobs that needs to be processed in a given order while the other jobs have no such order. Given the possibility of situations where the objective is not one of efficiency, we ask the following general question: if the waiting costs of the agents are private information, what are the allocation rules for which one can find transfer rules that ensure strategyproofness (or truth-telling in dominant strategies)? We identify the complete class of allocation rules for which one can find transfer rules that satisfy strategyproofness for the scheduling model with non-identical machines.

In this context, a very general result, using directed graphs, is due to Rochet [5] and Rockafellar [6]. They show that an allocation rule is strategyproof if and only if it satisfies cycle monotonicity. Cycle monotonicity requires that there is no cycle of negative length in an underlying directed graph. We show that cycle monotonicity for the scheduling model simply means that the allocation rule is non-increasing in completion time. Given any allocation rule with non-increasing completion time, we also identify the complete class of transfer rules that implement it. Deriving such transfer rules is quite transparent because for the scheduling model with multiple non-identical machines we have the revenue equivalence property. The revenue equivalence property was first introduced by Myerson [4] in the context of optimal auction design. That this property holds for the scheduling problems is a consequence of the works of Rockafellar [6] and Krishna and Maenner [1].

2 The Model

Let N = {1, . . . , n}, n ≥ 2, be the set of agents, each having one job to process, and let M = {1, . . . , m} be the set of machines. Each machine can process the jobs sequentially. A job cannot be stopped once its processing is started. Serving any agent takes the same amount of time on any given machine. Given any machine q ∈ M, the speed of completing a job is cq ∈ (0, 1]. Therefore, the processing speeds of the different machines are captured by the vector c = (c1, . . . , cm) ∈ (0, 1]^m. We assume without loss of generality that 0 < c1 ≤ . . . ≤ cm = 1. Each agent is identified with a waiting cost (or type) wi ∈ ℜ+, that is, the cost of waiting per unit of time. A typical profile of waiting costs is w = (w1, . . . , wn) ∈ ℜ+^n. For any profile w ∈ ℜ+^n, the server allocates


n jobs through m machines. In this context, the server selects a vector of n numbers from the set {kc1, . . . , kcm}_{k=1}^{n} for this allocation problem. Let z = (z(1), . . . , z(n)) represent the selected numbers from the set {kc1, . . . , kcm}_{k=1}^{n}, and assume without loss of generality that 0 < z(1) ≤ . . . ≤ z(n). We refer to z as the completion time vector. An allocation is simply an onto function σ : N → N. If agent i's rank is σi under some allocation vector σ = (σ1, . . . , σn), then the total cost of agent i with waiting cost wi is z(σi)wi. Let Σ(N) be the set of all possible allocation vectors of agents in N. We assume that the utility function of each agent i ∈ N is quasi-linear and of the form Ui(σi, τi; wi) = −z(σi)wi + τi, where τi is the monetary transfer to agent i. An allocation of the n jobs to z can be done in many ways. An allocation rule is a mapping σ : ℜ+^n → Σ(N) that specifies for each profile w ∈ ℜ+^n an allocation vector σ(w) ∈ Σ(N).

Definition 1. An allocation rule σ : ℜ+^n → Σ(N) is non-increasing in completion time if, for all i ∈ N, z(σi(wi, w−i)) is non-increasing in wi for any given w−i ∈ ℜ+^{n−1}.

A transfer rule is a mapping τ : ℜ+^n → ℜ^n that specifies for each profile w ∈ ℜ+^n a transfer vector τ(w) = (τ1(w), . . . , τn(w)) ∈ ℜ^n. A mechanism (σ, τ) consists of an allocation rule σ and a transfer rule τ. In this paper we are interested in allocation rules that are strategyproof.

Definition 2. The allocation rule σ is strategyproof (or dominant strategy incentive compatible) if there exists a transfer rule τ such that for every agent i ∈ N, every wi, w′i ∈ ℜ+ and every w−i ∈ ℜ+^{n−1}, we have

−z(σi(wi, w−i))wi + τi(wi, w−i) ≥ −z(σi(w′i, w−i))wi + τi(w′i, w−i).

From the above definition it follows that, to identify allocation rules that are strategyproof, one can without loss of generality look at the single agent case by fixing the profiles of all other agents at some w−i. Hence for the next two sections we suppress w−i and simply write that a decision rule is strategyproof if there exists a transfer rule τ such that for every agent i and every t, s ∈ ℜ+,

−z(σi(t))t + τi(t) ≥ −z(σi(s))t + τi(s).   (1)

Two obvious implications of (1) are the following:
(1a) For all t, s ∈ ℜ+ such that σi(t) = σi(s), τi(t) = τi(s).
(1b) For all t, s ∈ ℜ+, z(σi(t))t + z(σi(s))s ≤ z(σi(t))s + z(σi(s))t.

3 Directed Graphs and Strategyproofness

A directed graph is a tuple (T, E), where T is called the set of nodes and E is called the set of edges. An edge is an ordered pair of nodes. The set T can be finite or infinite. A complete directed graph is a directed graph (T, E) in which for every


i, j ∈ T (i ≠ j), there is an edge from i to j.² For our purposes, we restrict attention to complete directed graphs and call them graphs. We will associate with a graph (T, E) a length function l : E → ℜ. A (finite) path in a graph (T, E) is a sequence of distinct nodes P = (t1, . . . , tk). The length of a path P is the sum of the lengths of the edges in that path, that is, l(P) = l(t1, t2) + . . . + l(tk−1, tk). A (finite) cycle in a graph (T, E) is a sequence C = (t1, . . . , tk, t1) such that (t1, . . . , tk) is a path. The length of a cycle C is the sum of the lengths of the edges in that cycle, that is, l(C) = l(t1, t2) + . . . + l(tk−1, tk) + l(tk, t1).

We define the notion of strategyproofness for the scheduling model in terms of directed graphs. First assume that the set of nodes T = ℜ+ represents the type (or waiting cost) of a typical agent i ∈ N. Let l(s, t) = −z(σi(t))t + z(σi(s))t. Therefore, condition (1) implies that an allocation rule σ is strategyproof if there exists a transfer rule τ such that for all i ∈ N,

τi(s) − τi(t) ≤ l(s, t)   ∀ s, t ∈ ℜ+.   (2)

² We rule out the possibility of edges from a node to itself.

A well-known result in graph theory states that inequality (2) has a solution if and only if the corresponding graph has no cycle of negative length. This property is known as cycle monotonicity.

Definition 3. The allocation rule σ satisfies cycle monotonicity if for each agent i ∈ N, σi satisfies the following property: for every finite sequence of distinct types (t1, . . . , tk) ∈ ℜ+^k (k ≥ 2), we have l(C) = l(t1, t2) + . . . + l(tk−1, tk) + l(tk, t1) ≥ 0.

Theorem 1. The allocation rule σ is strategyproof if and only if it satisfies cycle monotonicity.

The proof of Theorem 1 follows from Rochet [5] and Rockafellar [6].

4 The Main Result

When will an allocation rule for the scheduling model with non-identical machines satisfy cycle monotonicity?

Theorem 2. The allocation rule σ satisfies cycle monotonicity if and only if σ is non-increasing in completion time.

Proof: Consider any agent i ∈ N. If σ satisfies cycle monotonicity, then for agent i ∈ N and for any waiting cost pair s, t ∈ ℜ+ such that s > t, it is necessary that l(s, t) + l(t, s) ≥ 0. The condition l(s, t) + l(t, s) ≥ 0 ⇒ −z(σi(t))t + z(σi(s))t − z(σi(s))s + z(σi(t))s ≥ 0 ⇒ [z(σi(s)) − z(σi(t))](t − s) ≥ 0 ⇒ z(σi(s)) ≤ z(σi(t)). Since the selection of i is arbitrary, it follows that cycle monotonicity implies that the allocation rule σ is non-increasing in completion time.

Using non-increasingness in completion time of the allocation rule σ, it follows that if s > t then z(σi(s)) ≤ z(σi(t)) ⇒ [z(σi(s)) − z(σi(t))](t − s) ≥ 0 ⇒ −z(σi(t))t + z(σi(s))t − z(σi(s))s + z(σi(t))s ≥ 0 ⇒ l(s, t) + l(t, s) ≥ 0. So any cycle with two nodes has non-negative length. Now consider a cycle with (k + 1) nodes and assume that any cycle involving fewer than (k + 1) nodes has non-negative length. Let the cycle be (t1, . . . , tk, tk+1, t1) and assume without loss of generality that tk+1 > tj for all j ∈ {1, . . . , k}. Observe first that l(tk, tk+1) + l(tk+1, t1) − l(tk, t1) = [z(σi(tk+1)) − z(σi(tk))](t1 − tk+1) ≥ 0, since tk+1 > tk, tk+1 > t1, and the allocation rule σ is non-increasing in completion time. Therefore, we have

(A) l(tk, tk+1) + l(tk+1, t1) ≥ l(tk, t1).

Using (A), it follows that the length of the cycle (t1, . . . , tk, tk+1, t1) is l(t1, t2) + . . . + l(tk−1, tk) + l(tk, tk+1) + l(tk+1, t1) ≥ l(t1, t2) + . . . + l(tk−1, tk) + l(tk, t1) ≥ 0. The last inequality follows from our induction hypothesis. Thus, if the allocation rule σ is non-increasing in completion time, then it is cycle monotonic. □

The proof of Theorem 2 uses arguments that are very similar to the ones used by Vohra [7].
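To make Theorem 2 concrete, here is a small self-contained check; the three-rank allocation rule, its thresholds, and the type grid below are our own illustrative choices, not from the paper. It enumerates all cycles with up to four nodes in the induced type graph and confirms that none has negative length.

import itertools

# Toy single-agent allocation rule, non-increasing in completion time:
# three ranks with completion times z, assigned by thresholds on the
# reported waiting cost. All numbers are illustrative.
z = {1: 1.0, 2: 2.0, 3: 3.0}              # z(rank)

def sigma(t):
    # higher waiting cost -> smaller completion time z(sigma(t))
    return 1 if t >= 2.0 else (2 if t >= 1.0 else 3)

def l(s, t):
    # edge length from type s to type t, as defined in Section 3
    return (z[sigma(s)] - z[sigma(t)]) * t

types = [0.25 * i for i in range(1, 13)]  # a grid of waiting costs
worst = min(
    sum(l(c[i], c[(i + 1) % len(c)]) for i in range(len(c)))
    for k in (2, 3, 4)
    for c in itertools.permutations(types, k)
)
print(worst >= 0)   # True: no cycle of negative length, as Theorem 2 predicts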

5 Transfer Rules and Revenue Equivalence

Consider any allocation rule σ with non-increasing completion time. Select any i ∈ N and fix the profile of all other agents at some w−i ∈ ℜ+^{n−1}. Given w−i, let S(w−i) = {p1, . . . , pR} ⊆ {1, . . . , N} represent the set of ranks possible for agent i, where p1 < p2 < . . . < pR and R is a positive integer. Note that it is possible that R = 1. Let T^0 = ∞ and T^R = 0. If R > 1, then consider the numbers T^1, . . . , T^{R−1} ∈ ℜ+ such that 0 = T^R < T^{R−1} < . . . < T^1 < T^0 = ∞ and σi(t^1, w−i) = p1 < σi(t^2, w−i) = p2 < . . . < σi(t^{R−1}, w−i) = p_{R−1} < σi(t^R, w−i) = pR for all t^r ∈ (T^r, T^{r−1}) with r = 1, . . . , R. Therefore, the numbers {T^r}_{r=1}^{R−1} represent the waiting costs at ('just' above or 'just' below) which the rank of agent i changes given w−i. Note that for states like (T^r, w−i) (where r = 1, . . . , R − 1), σi(T^r, w−i) ∈ {pr, p_{r+1}} and we select pr or p_{r+1} in any arbitrary way. Let the transfer for agent i with type t^r be

τi(t^r, w−i) = hi(w−i) if r = R, and
τi(t^r, w−i) = − ∑_{s=r}^{R−1} [z(p_{s+1}) − z(p_s)] T^s + hi(w−i) if r < R and R > 1,  (3)

where hi : ℜ+^{n−1} → ℜ is any function that depends on the profile of waiting costs of all but agent i. Note that if R = 1 then r = R = 1 and τi(t^1, w−i) = hi(w−i).

Theorem 3. An allocation rule σ is strategyproof if and only if the transfer rule is such that the transfer of each agent i ∈ N is given by (3).

Proof: Fix w−i and consider any pair (t^{r+1}, t^r) ∈ (T^{r+1}, T^r) × (T^r, T^{r−1}). By applying (1a) when the actual state is (t^{r+1}, w−i) (respectively (t^r, w−i)) and the misreport of agent i


is t^r (respectively t^{r+1}), we get

[z(p_{r+1}) − z(p_r)] t^{r+1} ≤ τi(t^{r+1}, w−i) − τi(t^r, w−i) ≤ [z(p_{r+1}) − z(p_r)] t^r.  (4)

Using condition (1a) and the fact that condition (4) must hold for all (t^{r+1}, t^r) ∈ (T^{r+1}, T^r) × (T^r, T^{r−1}), it follows that

τi(t^{r+1}, w−i) − τi(t^r, w−i) = [z(p_{r+1}) − z(p_r)] T^r.  (5)

Condition (5) must hold for all r ∈ {1, . . . , R − 1}. By setting τi(t^R, w−i) = hi(w−i) and then solving condition (5) recursively, we get (3). The other part of the proof, that is, that strategyproofness follows if the transfer rule satisfies (3), is quite easy and is hence omitted. ∎

Remark 1. One can also prove the only if part of Theorem 3 using directed graphs and revenue equivalence. In this remark we only give a sketch of this proof. Consider an agent i and fix w−i ∈ ℜ+^{n−1}. As earlier, we suppress the dependence on w−i in the notation here. Consider the type graph (T, E), where the edge length from type wi + δ to wi is [z(σi(wi + δ)) − z(σi(wi))] wi. A possible transfer function which makes an allocation rule strategyproof is obtained by fixing a node in this type graph, say node 0, and taking the negative of the shortest path length from every node to 0.3 The proof of this claim can be found, for example, in Vohra [7]. To compute the shortest path from any node wi to node 0, note that for δ > 0 and

0 < x < wi, we have

l(x + 2δ, x + δ) + l(x + δ, x) = [z(σi(x + 2δ)) − z(σi(x + δ))](x + δ) + [z(σi(x + δ)) − z(σi(x))] x ≤ [z(σi(x + 2δ)) − z(σi(x))] x = l(x + 2δ, x),

where the inequality comes from non-increasingness in completion time of the allocation rule σ. Hence, the shortest path from wi to 0 can be computed by taking the sum of edges involving arbitrarily close nodes from wi to 0. This implies that a possible payment for type wi is

− ∫_{wi}^{0} [z(σi(s + ds)) − z(σi(s))] s.

But note that z(σi(s + ds)) − z(σi(s)) = 0 if σi(s + ds) = σi(s). Since the possible set of ranks for agent i is finite, this value changes at a finite number of points. Now, suppose agent i gets a rank of pr when his type is wi = t^r. By non-increasingness, he gets the highest rank at 0. So, the above integral reduces to − ∑_{s=r}^{R−1} [z(p_{s+1}) − z(p_s)] T^s. Now, an allocation rule satisfies revenue equivalence if for all w−i and for all wi, τi(wi, w−i) = τi′(wi, w−i) + hi(w−i) for every pair of payment functions τ and τ′, where hi is an arbitrary function that depends on w−i. If the type space is connected and the value function is continuous in type, revenue equivalence holds (see, for example, Krishna and Maenner [1] and Vohra [7]). In our model, the type space is ℜ+ and the value function is linear in type. Hence, revenue equivalence holds and the result follows.

3. The shortest path in a directed graph from a node s to another node t is the path which has the minimum length over all the paths from s to t.
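As an aside (not part of the paper), the recursive structure of (3) and (5) is easy to implement once the jump points T^1 > . . . > T^{R−1} and the ranks p1 < . . . < pR are known; the function below and its toy inputs are our own illustration.

```python
def transfers(z, ranks, T, h):
    # Transfers (3): tau[r-1] is tau_i(t^r, w_{-i}) for a type in (T^r, T^(r-1)).
    # ranks = [p1, ..., pR] with p1 < ... < pR, T = [T^1, ..., T^(R-1)],
    # h = h_i(w_{-i}); z maps a rank p to its completion time z(p).
    R = len(ranks)
    tau = [0.0] * R
    tau[R - 1] = h                      # boundary condition tau_i(t^R) = h_i
    for r in range(R - 1, 0, -1):       # recursion (5), solved downwards
        tau[r - 1] = tau[r] - (z(ranks[r]) - z(ranks[r - 1])) * T[r - 1]
    return tau

# Toy instance: three possible ranks with completion times z(p) = p,
# rank changes at waiting costs T^1 = 2.0 > T^2 = 1.0, and h = 0.
print(transfers(lambda p: p, [1, 2, 3], [2.0, 1.0], 0.0))   # [-3.0, -1.0, 0.0]
```

In this toy run the agent with the highest waiting cost is served first and pays the most, in line with the shortest-path construction of Remark 1.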


Acknowledgements The authors are grateful to Hervé Moulin for suggesting this type of scheduling model with non-identical machines. The authors are also thankful to S. R. Chakravarty for his comments.

References

1. Krishna, V. and E. Maenner 2001. Convex Potentials with an Application to Mechanism Design. Econometrica, 69, 1113-1119
2. Maniquet, F. 2003. A characterization of the Shapley value in queueing problems. Journal of Economic Theory, 109, 90-103
3. Mitra, M. 2006. Information Extraction in Scheduling Problems with Non-identical Machines. In Bikas K. Chakrabarti and Arnab Chatterjee (Eds.), Econophysics of Stocks and Markets, pp. 175-182, New Economic Window Series, Springer Verlag Italia, Milan
4. Myerson, R. B. 1981. Optimal Auction Design. Mathematics of Operations Research, 6, 58-73
5. Rochet, J. C. 1987. A Necessary and Sufficient Condition for Rationalizability in a Quasi-linear Context. Journal of Mathematical Economics, 16, 191-200
6. Rockafellar, R. T. 1970. Convex Analysis, Princeton University Press
7. Vohra, R. 2008. Paths, Cycles and Mechanism Design. Manuscript, Kellogg School of Management, Northwestern University

Reinforced Learning in Market Games

Edward W. Piotrowski, Jan Sładkowski, and Anna Szczypińska

Abstract Financial market investors are involved in many games – they must interact with other agents to achieve their goals. Among these are games directly connected with their activity on markets, but one cannot neglect other aspects that influence human decisions and their performance as investors. Distinguishing all subgames is usually beyond hope and resource consuming. In this paper we study how investors facing many different games gather information and form their decisions despite being unaware of the complete structure of the game. To this end we apply reinforcement learning methods to the Information Theory Model of Markets (ITMM). Following Mengel, we can try to distinguish a class Γ of games and possible actions (strategies) a^i_m for the i-th agent. Each agent divides the whole class of games into subclasses she/he perceives as analogous and therefore adopts the same strategy for a given subclass. The criteria for partitioning are based on profit and cost analysis. The analogy classes and strategies are updated at various stages through the process of learning. We will study the asymptotic behavior of the process and attempt to identify its crucial stages, e.g., the existence of possible fixed points or optimal strategies. Although we focus more on the instrumental aspects of agents' behavior, various algorithms can be put forward and used for automatic investment. This line of research can be continued in various directions.

Edward W. Piotrowski
Institute of Mathematics, University of Białystok, Lipowa 41, Pl 15424 Białystok, Poland. e-mail: [email protected]
Jan Sładkowski
Institute of Physics, University of Silesia, Uniwersytecka 4, Pl 40007 Katowice, Poland. e-mail: [email protected]
Anna Szczypińska
Institute of Physics, University of Silesia, Uniwersytecka 4, Pl 40007 Katowice, Poland. e-mail: [email protected]


Motto: "The central problem for gamblers is to find positive expectation bets. But the gambler also needs to know how to manage his money, i.e. how much to bet. In the stock market (more inclusively, the securities markets) the problem is similar but more complex. The gambler, who is now an investor, looks for excess risk adjusted return." Edward O. Thorp

1 Introduction

Noise or structure? We face this question almost always while analyzing large data sets. Pattern discovery is one of the primary concerns in various fields of research, commerce and industry. Models of optimal behavior often belong to that class of problems. We restrict ourselves to discrete-event systems, that is, dynamic systems with discrete inputs and outputs: the behavior can be described in terms of discrete state changes [1]. The goal of an agent in such a dynamic environment is to make optimal decisions over time. One usually has to discard a vast amount of data (information) to obtain a concise model or algorithm. Therefore prediction of individual agent behavior is often burdened with large errors. The prediction game algorithm can be described as follows:

FOR n = 1, 2, . . .
  Reality (Nature) announces xn ∈ X
  Predictor announces γn ∈ Γ
  Reality (Nature) announces yn ∈ Y to be compared with γn
END FOR

where xn ∈ X is the data upon which the prediction γn ∈ Γ of yn ∈ Y is made at each round n. Note that the presence of black swans could make the whole idea of prediction questionable [2], as we may only have a vague idea what the set Y is. The prediction quality is measured by some utility function υ : Γ × Y → R. One can view such a process as a communication channel that transmits information from the past to the future [3]. The gathering of information, often indirect and incomplete, is referred to as measurement. Learning theory deals with the abilities and limitations of algorithms that learn or estimate functions from data. Learning helps with optimal behavior decisions by adjusting the agent's strategies to the information gathered over time. Agents can base their action choices on predictions of the state of the environment or on rewards received during the process. For example, a Markov decision process can be formulated as the problem of finding a strategy π that maximizes the expected sum of discounted rewards:


υ(s, π) = r(s, a_π) + β ∑_{s′} p(s′|s, a_π) υ(s′, π),

where s is the initial state, a_π the action induced by the strategy π, r the reward at stage t, and β the discount factor; υ is called the value function. p(s′|s, a_π) denotes the (conditional) probability of reaching the state s′ from the state s as the result of an action a.1 It can be shown that, in the case of an infinite horizon, an optimal strategy π∗ such that (the Bellman optimality equation)

υ(s, π∗) = max_a {r(s, a) + β ∑_{s′} p(s′|s, a) υ(s′, π∗)}

exists. In reinforcement learning, the agent receives rewards from the environment (Nature) and uses them as feedback for its actions. Reinforcement learning has its roots in statistics, cybernetics, psychology, neuroscience, computer science. . . . In its standard formulation, the agent must improve his/her performance in a game through trial-and-error interaction with a dynamical environment. There are two ways of finding the optimal strategy: strategy iteration – one directly manipulates the strategy; and value iteration – one approximates the optimal value function. Therefore two classes of algorithms are usually considered: strategy (policy) iteration algorithms and value iteration algorithms. In the following section we discuss the adequacy of reinforced learning in market games.
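As a concrete illustration of the Bellman optimality equation, the following sketch (ours, not the authors') approximates υ(s, π∗) by value iteration on a made-up two-state, two-action environment; the transition probabilities and rewards are arbitrary placeholders.

```python
import numpy as np

# Toy MDP (our invention): p[a, s, s'] transition probabilities, r[s, a] rewards.
p = np.array([[[0.9, 0.1], [0.2, 0.8]],    # action 0
              [[0.5, 0.5], [0.6, 0.4]]])   # action 1
r = np.array([[1.0, 0.0],
              [2.0, 0.5]])
beta = 0.9                                  # discount factor

v = np.zeros(2)                             # initial value function
for _ in range(1000):
    q = r + beta * np.einsum('ast,t->sa', p, v)   # Q(s, a) = r + beta E[v(s')]
    v_new = q.max(axis=1)                   # Bellman optimality backup
    if np.max(np.abs(v_new - v)) < 1e-12:
        break
    v = v_new

print(v, q.argmax(axis=1))                  # optimal values and a greedy strategy
```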

2 Reinforcement Learning in Market Games

Can reinforcement learning help with market game analysis? Could it be used for finding optimal strategies? It is not easy to answer this question because it involves the problem of real-time decision making: one often has to (re)act as quickly as possible. Consider model-free reinforcement learning, Q-learning.2 In this approach one defines the value of an action, Q(s, a), as the discounted return if the action a following from the strategy π is applied:

Q∗(s, a) = r(s, a) + β ∑_{s′} p(s′|s, a) υ(s′, π∗)

then

υ(s, π∗) = max_a Q∗(s, a)

1. In a more formal setting it would be a transition kernel for the process of consecutive actions and observations.
2. This is obviously a value iteration, but in market games there is a natural value function – the profit.


In Q-learning, the agent starts with an arbitrary Q(s, a), at each stage t observes the result (reward) rt of his/her action, and then updates the value of Q according to the rule:

Q_{t+1}(s, a) = (1 − αt) Qt(s, a) + αt (rt + β max_b Qt(s, b)),

where αt ∈ [0, 1) is the learning rate, which needs to decay over time for the learning algorithm to converge. This approach is frequently used in the stochastic games setting. Watkins and Dayan proved that this sequence converges provided all states and actions have been visited/performed infinitely often [7]. Therefore we can anticipate only slow convergence. Indeed, various theoretical and experimental analyses [8]-[10] showed that even very simple games might require 10^8 steps! If a well-shaped stock trend is formed, one can expect that some sort of adversarial equilibria (no agent is hurt by any change of the others' strategies),

R_i(π_1, . . . , π_n) ≤ R_i(π′_1, . . . , π′_{i−1}, π_i, π′_{i+1}, . . . , π′_n),

where the R's are the pay-off functions and the π's the one-stage strategies, or coordination equilibria (all agents achieve their highest possible return),

R_i(π_1, . . . , π_n) = max_{a_1,...,a_n} R_i(a_1, . . . , a_n),

are formed. The problem is that these can be easily identified with technical analysis3 tools, and then there is no need to resort to learning algorithms. In the most interesting games, neither adversarial equilibria nor coordination equilibria exist. This type of learning is much more subtle and, up to now, there is no satisfactory analysis of it in the field of reinforcement learning. Therefore a compromise is needed; for example, we must be willing to accept returns that might not be optimal. The models discussed in the following subsections belong to that class and seem to be tractable by learning algorithms.
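A minimal sketch (ours) of the Q-learning rule quoted above, run on a two-state placeholder environment; we bootstrap from the successor state, as in the standard Watkins–Dayan formulation, and let αt decay so that the usual stochastic-approximation conditions hold. The environment's rewards and transitions are invented purely to exercise the update.

```python
import random

n_states, n_actions, beta = 2, 2, 0.9
Q = [[0.0] * n_actions for _ in range(n_states)]

def step(s, a):
    # Placeholder environment: reward and next state are made up here.
    r = 1.0 if (s, a) == (0, 1) else 0.1
    return r, random.randrange(n_states)

s = 0
for t in range(1, 100001):
    a = random.randrange(n_actions)          # explore uniformly
    r, s_next = step(s, a)
    alpha = t ** -0.6                        # decaying learning rate alpha_t
    Q[s][a] = (1 - alpha) * Q[s][a] + alpha * (r + beta * max(Q[s_next]))
    s = s_next

print(Q)    # approximates Q*(s, a) once every pair has been visited often
```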

2.1 Kelly's Criterion

Kelly's criterion [12] can be successfully applied in horse betting or blackjack when one can discern biases [13], even though its optimality and convergence can be proven only in the asymptotic cases. The simplest form of Kelly's formula is:

Θ = W − (1 − W)/R,

where:
• Θ = percentage of capital to be put into a single trade,
• W = historical winning percentage of a trading system,
• R = historical average win/loss ratio.

Originally, Kelly's formula involves finding the "bias ratio" in a biased game. If the game is repeated infinitely often, then one should put at stake the percentage of one's capital equal to the bias ratio. Therefore one can easily construct various learning algorithms that perform the task of finding an environment in which Kelly's approach can be applied effectively (bias search + horizon of the investment) [14, 15].

3. We understand the term technical analysis as simplified hypothesis-testing methods that can be applied in real time [11].
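For concreteness, a few lines (ours; the trade history is made up) that estimate W and R from past trades and apply the formula:

```python
def kelly_fraction(W, R):
    # Theta = W - (1 - W) / R: fraction of capital to risk on a single trade
    return W - (1.0 - W) / R

# W and R estimated from a hypothetical trade history:
trades = [120.0, -80.0, 45.0, -60.0, 150.0, 30.0, -40.0, 90.0]
wins = [x for x in trades if x > 0]
losses = [-x for x in trades if x < 0]
W = len(wins) / len(trades)                                  # winning percentage
R = (sum(wins) / len(wins)) / (sum(losses) / len(losses))    # win/loss ratio
print(kelly_fraction(W, R))                                  # about 0.37 here
```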

2.2 MMM Model

Most market/trading activity can be reduced to the following simple scenario: one buys a good and tries to sell it, possibly with a profit. In this section we present a simple mathematical model of such activities (repeated many times). The simplest possible market consists in exchanging two goods, which we would call the asset and the money and denote by Θ and $, respectively. Our model [16] consists in the repetition of two simple basic moves (in principle, the process is continued endlessly):

1. The first move consists in a rational buying (see below) of the asset Θ (exchanging $ for Θ).
2. The second move consists in a random selling of the amount of the asset Θ purchased in the first move (exchanging Θ for $).

(One can easily reverse the buying and selling strategies.) We have analyzed the model in the case where the trader fixes a maximal price he is willing to pay for the asset Θ and then, if the asset is bought, after some time sells it at random [16]. The expected value of the profit after the whole cycle is

ρη(a) = − ∫_{−∞}^{−a} p η(p) dp / (1 + ∫_{−∞}^{−a} η(p) dp),



where a is the withdrawal price. The maximal value of the function ρ is attained at a_max, a fixed point of ρ that fulfills the condition ρ(a_max) = a_max. The simplest version of the strategy is as follows: there is an optimal strategy that fixes the withdrawal price at the level of the historical average profit.4 Task: find an implementation of a reinforcement learning algorithm that can be used effectively on markets. We should control both the probability distribution η and the profit "quality". A wide class of probability distribution functions to be taken into account enlarges the number of learning steps and makes real-time implementation questionable.

4. Or else: do not try to outperform yourself.


2.3 Learning Across Games

An interesting approach was put forward by Mengel [17]. One can easily give examples of situations where agents cannot discern in which game they are taking part (e.g. the games may have the same set of actions). Distinguishing all possible games and learning separately for all of them requires a large amount of alertness, time and resources (costs). Therefore the agent should try to identify some classes of situations she/he perceives as analogous and therefore takes the same actions in them. The learning algorithm should update both the partition of the set of games and the actions to be taken:

• Agents repeatedly play a game (randomly) drawn from a set Γ.
• Agents partition the set of all games into subsets (classes) of games they learn not to discriminate (they see them as analogous).
• Agents update both the propensities to use the partitions and the attractions towards using their possible strategies/actions.

The asymptotic behavior and computational complexity of such a process are discussed in Ref. [17]. Stochastic approximation works in this case (approximation through a system of deterministic differential equations is possible). It would be interesting to analyze the following problems.

Problem 1: Identify possible "classes of market games".
Problem 2: Identify a "universal" set of strategies.

For example, on the stock exchange one can try the brute force approach: admit as strategies buying/selling at all possible price levels and identify classes of games with trends. Unfortunately, such an approach generates huge transaction costs. On the derivative markets this can be reduced, as the leverage of the ratio of transaction cost to price movement is much lower. We envisage that an agent may try to optimize among various classes of technical analysis tools; a schematic sketch of the coupled updates is given below.
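The sketch below is entirely our own stylization of this setup (Mengel's actual specification differs in details): propensities over candidate partitions of Γ and attractions over actions within each analogy class are both reinforced by realized payoffs, with a complexity cost charged per partition. The games, actions, costs and payoffs are placeholders.

```python
import random

games = ['g1', 'g2', 'g3']
actions = ['buy', 'sell', 'hold']
partitions = [                          # candidate analogy partitions of Gamma
    (('g1',), ('g2',), ('g3',)),        # fine: discriminate every game
    (('g1', 'g2'), ('g3',)),            # coarse: treat g1 and g2 as analogous
]
cost = [0.2, 0.1]                       # complexity cost per partition
prop = [1.0, 1.0]                       # propensities over partitions
attr = {(k, cl, a): 1.0 for k in range(2)
        for cl in partitions[k] for a in actions}

def pick(weights):                      # index chosen with probability ~ weight
    r = random.random() * sum(weights)
    for i, w in enumerate(weights):
        r -= w
        if r <= 0:
            return i
    return len(weights) - 1

def payoff(g, a):                       # placeholder payoffs, invented here
    return random.random() + (0.5 if a == 'buy' and g != 'g3' else 0.0)

for _ in range(5000):
    g = random.choice(games)            # nature draws a game from Gamma
    k = pick(prop)                      # choose a partition
    cl = next(c for c in partitions[k] if g in c)   # analogy class of g
    a = actions[pick([attr[(k, cl, x)] for x in actions])]
    u = payoff(g, a)
    attr[(k, cl, a)] += u                          # reinforce action in class
    prop[k] = max(prop[k] + u - cost[k], 0.01)     # reinforce partition net of cost
```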

3 Conclusion

In conclusion, we would like to stress the following points.

• Algorithms are simple but the computation is complex, time and resource consuming.
• Learning across games could be used to "fit" technical analysis models.
• Dynamic proportional investing (Kelly) should be the easiest to implement. But here we envisage problems analogous to heat (entropy) in thermodynamics, and the exploration of knowledge might, in cases of high effectiveness, involve paradoxes [14] analogous to those arising when speed approaches the speed of light [15].
• One can envisage learning (information) models of markets/portfolio theory.
• Implementation should be carefully tested – transaction costs can "kill" even crafty algorithms [18].


• Quantum algorithms/computers, if they ever come true, might change the situation in a dramatic way: we would have powerful algorithms at our disposal and the learning limits would certainly broaden [19]-[21].
• We have neglected the fact that the uncertain sequences might not be of a probabilistic nature, but the learning across games approach easily copes with such a case.

References

1. Cassandras, C. G., Lafortune, S., Introduction to Discrete Event Systems, volume 11 of The Kluwer International Series in Discrete Event Dynamic Systems, Kluwer Academic Publishers, Boston, MA (1999)
2. Taleb, N. N., The Black Swan: The Impact of the Highly Improbable, Random House, Inc. (2008)
3. Crutchfield, J. P., Shalizi, C. R., Physical Review E 59 (1999) 275
4. Benveniste, A., Metevier, M., Priouret, P., Adaptive Algorithms and Stochastic Approximation, Berlin: Springer Verlag (1990)
5. Fudenberg, D., Levine, D. K., The Theory of Learning in Games, Cambridge: MIT Press (1998)
6. Kuschner, H. J., Lin, G. G., Stochastic Approximation and Recursive Algorithms and Applications, New York: Springer (2003)
7. Watkins, C. J. C. H., Dayan, P., Q-learning, Machine Learning 8 (1992) 279
8. Farias, V. F., Moallemi, C. C., Weissman, T., Van Roy, B., Universal Reinforcement Learning, arXiv:0707.3087v1 [cs.IT]
9. Shneyerov, A., Chi Leung Wong, A., The rate of convergence to a perfect competition of a simple matching and bargaining mechanism, Univ. of British Columbia preprint (2007)
10. Littman, M. L., Markov games as a framework for multi-agent reinforcement learning, Brown Univ. preprint
11. Fliess, M., Join, C., A mathematical proof of the existence of trends in financial time series, arXiv:0901.1945v1 [q-fin.ST]
12. Kelly, J. L., Jr., A New Interpretation of Information Rate, The Bell System Technical Journal 35 (1956) 917
13. Thorp, E. O., Optimal gambling systems for favorable games, Revue de l'Institut International de Statistique / Review of the International Statistical Institute 37 (1969) 273
14. Piotrowski, E. W., Schroeder, M., Kelly criterion revisited: optimal bets, Eur. Phys. J. B 57 (2007) 201
15. Piotrowski, E. W., Łuczka, J., The relativistic velocity addition law optimizes a forecast gambler's profit, submitted to Physica A; arXiv:0709.4137v1 [physics.data-an]
16. Piotrowski, E. W., Sładkowski, J., The Merchandising Mathematician Model, Physica A 318 (2003) 496
17. Mengel, F., Learning across games, Univ. of Alicante report WP-AD 2007-05
18. Piotrowski, E. W., Sładkowski, J., Arbitrage risk induced by transaction costs, Physica A 331 (2004) 233
19. Piotrowski, E. W., Fixed point theorem for simple quantum strategies in quantum market games, Physica A 324 (2003) 196
20. Piotrowski, E. W., Sładkowski, J., Quantum computer: an appliance for playing market games, International Journal of Quantum Information 2 (2004) 495
21. Miakisz, K., Piotrowski, E. W., Sładkowski, J., Quantization of Games: Towards Quantum Artificial Intelligence, Theoretical Computer Science 358 (2006) 15

Mechanisms Supporting Cooperation for the Evolutionary Prisoner's Dilemma Games

György Szabó, Attila Szolnoki and Jeromos Vukov

Abstract We survey evolutionary Prisoner's Dilemma games where the players are located on the sites of a graph, their income comes from games with their neighbors, and the players try to maximize their income by adopting one of the more successful neighboring strategies with a probability dependent on the payoff difference. We briefly discuss the mechanisms supporting the maintenance of cooperation if the players are located on a lattice or on a so-called scale-free network. With knowledge of these mechanisms, we can introduce additional personal features yielding a relevant improvement in the maintenance of cooperative behavior even for a spatial connectivity structure. Discussing several examples, we show that the efficiency of these mechanisms can be improved by considering co-evolutionary games where players are allowed to modify not only their strategy but also the connectivity structure and their capability to transfer their strategy.

György Szabó
Research Institute for Technical Physics and Materials Science, POB 49, H-1525 Budapest, Hungary. e-mail: [email protected]
Attila Szolnoki
Research Institute for Technical Physics and Materials Science, POB 49, H-1525 Budapest, Hungary. e-mail: [email protected]
Jeromos Vukov
Research Institute for Technical Physics and Materials Science, POB 49, H-1525 Budapest, Hungary. e-mail: [email protected]

1 Introduction

Evolutionary game theory gives a general framework to study the effect of agent-agent interactions on the behavior of multi-agent systems [9, 17]. In these systems each agent's income comes from a one-shot bi-matrix game [25] played with his/her neighbors, defined by a connectivity matrix. In contrast to traditional


game theory [25], here the agents' intelligence is reduced. Namely, the agents are not capable of determining their own optimum decision (strategy). Instead, they imitate those neighbors who have higher scores. This idea is adopted from the field of biology, where this rule describes the effect of Darwinian selection [8]. In biological systems each species corresponds to a strategy, and their interactions influence their fitness (characterizing their capability to create new offspring), which is quantified in a way similar to the payoff of games. Within the field of evolutionary games the Prisoner's Dilemma has attracted progressively growing activity, because these models make it possible to study the emergence and maintenance of cooperative behavior among selfish individuals. In the subsequent sections we introduce a general formalism and briefly discuss several mechanisms supporting cooperative behavior.

2 Evolutionary Prisoner's Dilemma Games on Graphs

For a generalized version of the two-strategy evolutionary Prisoner's Dilemma (PD) games, each player is located on a site x of a lattice (or of an interaction graph G with edges connecting neighbors playing the game with each other). The players are equivalent and use one of two strategies, namely, unconditional cooperation C and defection D, that is,

sx = D = (1, 0)ᵀ or C = (0, 1)ᵀ.  (1)

We assume that the total payoff Ux of player x comes from 2 × 2 symmetric bi-matrix games with the neighbors (defined by the edges of the interaction graph G) y ∈ Ωx, and can be expressed as a sum of matrix products,

Ux = ∑_{y∈Ωx} s⁺x · A · s_y,  (2)

where s⁺x denotes the transpose of the state vector sx, and the summation runs over the sites of the neighborhood Ωx (self-interaction is excluded). Henceforth our analysis will be restricted to the weak PD games, where the elements of the payoff matrix are given as

A = [ 0  b ; c  1 ],  1 < b < 2 − c,  c < 0,  (3)

in the limit c → −0 [11]. In this game the highest total payoff is shared equally if both players choose C. For mutual defection both players receive 0. Despite this, intelligent players are forced to choose the dominant strategy D, which provides the higher score independent of the co-player's choice. In evolutionary games the intelligence of a player is reduced. Here the players, instead of searching for their optimum choices, wish to maximize their income by


adopting (imitating) one of the neighboring strategies if the given neighbor has a higher score. In the last decades many dynamical rules have been introduced [9, 17]. Here we describe only one rule, called random sequential pairwise comparison. In this case the following elementary steps are repeated: (1) we choose two neighboring players (x and y) at random; (2) player x adopts the neighboring strategy s_y with a probability depending on the payoff difference,

W[sx → s_y] = w_xy / (1 + exp[(Ux − Uy)/K]),  (4)

where K characterizes the magnitude of noise in this stochastic rule, allowing irrational decisions; that is, the players can adopt the less successful strategies with a low probability. The multiplicative factor w_xy defines the strength of strategy transfer from y to x. If the strategy transfers are equivalent along the edges of the so-called replacement graph G′, then w_xy can be considered as the adjacency matrix of G′. The graphs G and G′ can differ in their edges [12, 27], although for most of the early investigations equivalence (G = G′) is assumed. In the last years cases with weighted edges of G′ have also been investigated [29, 28, 22]. In the Monte Carlo (MC) simulations the system is started from a random initial distribution of C and D strategies. Due to the stochastic evolutionary rules the system evolves towards a state which is characterized by the average frequency ρ of cooperators. We emphasize that the average payoff of the whole community is strongly related to ρ (the maximum is reached at ρ = 1). Before considering the numerical results for different connectivity structures we briefly survey the general features of these systems. As the above strategy adoption (dynamical) rules do not change the homogeneous states (e.g., sx = D), whenever one of the homogeneous states is reached the system remains there forever. During Monte Carlo simulations this phenomenon can be observed frequently for a finite number N of players, after a transient time whose average value increases very fast with N [1]. For sufficiently large systems the coexistence of C and D strategies can be observed during the whole period of simulation. Analytical results (surveyed in [9, 17]) are achieved if G and G′ are complete graphs with w_xy = 1 for any pair of players, in the limit N → ∞. In this case the defectors always receive higher income than the cooperators. Consequently, the frequency of cooperators vanishes in the final stationary states for most of the dynamical rules (including random sequential pairwise comparison) if b > 1. Similar behavior is predicted if we assume a well-mixed population with a finite number of co-players, that is, when each player's income comes from |Ωx| = z ≪ N co-players chosen at random before the pairwise comparison. Significantly different behavior was reported by Nowak and May [11], who considered deterministic evolutionary PD games with synchronized strategy update on a square lattice with first and second neighbor interactions. In these cellular automaton type models the solitary defectors receive the highest score. Despite this, the defection cannot spread away because in the next generation the central D and her offspring mutually decrease each other's income. On the other hand, the cooperators forming

Mechanisms Supporting Cooperation

27

rectangular blocks can invade the territory of defectors along horizontal and vertical fronts. In the final state cooperators and defectors coexist (even for b > 3/2) and their proportion depends on the payoff (b). For stochastic evolutionary rules the formation of rectangular C blocks is practically suppressed and the effect of the above described network reciprocity [10] is weakened. As a result, the average frequency ρ of cooperators is reduced drastically in comparison to the case when the evolution is based on synchronized deterministic rules. Systematic investigations have indicated that ρ depends on the connectivity structure, the payoffs, and the dynamical rules (for the pairwise comparison this means the noise magnitude K). For example, when increasing the value of b on a square lattice with nearest neighbor interactions (|Ωx| = 4), ρ drops from 1 to 0 at b = 1 if K = 0 or K → ∞ (similar behavior is found for the well-mixed population discussed above). For a finite K, however, the cooperators and defectors coexist within a region of b and ρ decreases monotonically, as illustrated in Figure 1, where the plotted data are obtained for the K (more precisely, K = 0.4) providing the widest coexistence region of b.
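The model just described is straightforward to simulate. The sketch below (ours) implements the weak PD payoffs (3) in the limit c → 0 and the pairwise comparison rule (4) with w_xy = 1 on a small square lattice with periodic boundaries; the lattice size, b, K and the number of sweeps are arbitrary choices for illustration.

```python
import math, random

L, b, K, sweeps = 50, 1.05, 0.4, 500        # illustrative parameters
lat = [[random.randint(0, 1) for _ in range(L)] for _ in range(L)]  # 1 = C, 0 = D

def neighbours(i, j):
    return [((i + 1) % L, j), ((i - 1) % L, j), (i, (j + 1) % L), (i, (j - 1) % L)]

def payoff(i, j):
    # weak PD payoffs (3) with c -> 0: C-C pays 1, D-C pays b, everything else 0
    s, u = lat[i][j], 0.0
    for x, y in neighbours(i, j):
        o = lat[x][y]
        u += 1.0 if (s, o) == (1, 1) else (b if (s, o) == (0, 1) else 0.0)
    return u

for _ in range(sweeps * L * L):
    i, j = random.randrange(L), random.randrange(L)
    x, y = random.choice(neighbours(i, j))
    if lat[i][j] != lat[x][y]:
        w = 1.0 / (1.0 + math.exp((payoff(i, j) - payoff(x, y)) / K))
        if random.random() < w:             # rule (4) with w_xy = 1
            lat[i][j] = lat[x][y]

print(sum(map(sum, lat)) / (L * L))         # average frequency rho of cooperators
```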

Fig. 1 Monte Carlo results for the non-vanishing average frequency of cooperators as a function of b on the square lattice (closed squares) at K = 0.4, the kagome lattice (triangles) at K = 0, and the Barabási-Albert scale-free network (diamonds) at a low noise level

In order to illustrate the relevant effect of the connectivity structure on the frequency of cooperators, we compare three ρ(b) functions in Figure 1. For all three connectivity structures the average number of neighbors is 4. Notice that the cooperators are present in the system for b < 3/2 on the kagome lattice in the limit K → 0. The width of the coexistence region of b tends monotonically to 0 as K goes to infinity [18]. The striking difference in the frequency of cooperators between the square and kagome lattices is related to a mechanism supporting the spreading of cooperators through the overlapping triangles in the connectivity structure [26]. A more relevant enhancement in the frequency of cooperators (see Figure 1), however, was reported by Santos et al. [15, 16], who considered a similar model on the Barabási-Albert (BA) scale-free networks, where the degree distribution follows a power law [2]. For the latter connectivity structures a small portion of sites (called hubs) have a large neighborhood with |Ωx| ≫ 4, and the connected hubs play a determinant role in the evolution of cooperation. The income of players located on hubs significantly exceeds the income of those located in their neighborhood,


and therefore the pairwise comparison rule favors strategy adoption from the hubs to their neighborhood rather than backward. As a result, after a suitable transient time most of the hubs' neighbors will follow the strategy of the central player. This process is beneficial for the cooperative central (influential) players, who become the best example to be followed by others. Consequently, the cooperative strategy can also be transferred, albeit rarely, to the (connected) defective central players, who will then convince their neighbors to choose the C strategy. Finally most of the players cooperate within the whole payoff (b) region of the PD, as shown in Figure 1. For the BA connectivity structures the enhanced influence (convincing capability) of the players with a large neighborhood comes from their large income, proportional to |Ωx|. A similar difference in individual influences, however, can be introduced artificially as a personal feature in human societies. In that case one can observe a relevant increase of ρ even for regular (|Ωx| = constant) connectivity structures, as is briefly surveyed in the next section.

3 Numerical Results for Heterogeneous Influential Effects

In the last years several models with different choices of w_xy have already been investigated assuming that G = G′ [28, 29, 22]. It turned out that the most relevant increase of ρ occurs if the value of w_xy depends only on the personal features of player y, from whom the strategy is adopted [22]. In the simplest case we distinguish only two types of players (e.g., nx = A or B) and assume that

w_xy = 1 if n_y = A, and w_xy = w if n_y = B, with 0 < w < 1.

$i(s′1, . . . , si, . . . , s′N) > $i(s′1, . . . , s′i, . . . , s′N),  (2)

then

$j(s′1, . . . , si, . . . , s′N) < $j(s′1, . . . , s′j, . . . , s′N).  (3)

Thus a Pareto optimum cannot be improved upon without hurting at least one person's pay-off. In the past, quantum entanglement has been incorporated in classical two-party games such as the prisoner's dilemma by Eisert et al. [4], the battle of the sexes by Marinatto and Weber [5], etc. These authors demonstrated how optimal solutions can be achieved using entanglement. The purpose of the present work is to propose many-particle entangled states and show how they can be used to obtain improved/optimal solutions for classical problems requiring coordinated action by many players. We show that, for games involving N players wishing to make N mutually exclusive choices, the entangled state of an integer quantum Hall effect state at filling factor 1 offers the best strategy (i.e., the best Nash equilibrium).

2 Quantum Solutions to Classical Problems

In this section we will pose a couple of classical problems and show that entanglement not only significantly improves the solution but in fact produces the best possible solution.

2.1 Kolkata Paise Restaurant Problem

We will first examine the Kolkata paise restaurant (KPR) problem [6], which is a variant of the Minority game problem [7]. In the KPR problem (in its minimal form)


there are N restaurants (with N → ∞) that can each accommodate only one person, and there are N agents to be accommodated. All the N agents take stochastic strategies that are independent of each other. We assume that, on any day, each of the N agents randomly chooses one of the N restaurants, such that if m (> 1) agents show up at any restaurant, then only one of them (picked randomly) will be served and the remaining m − 1 go without a meal. It is also understood that each agent can choose only one restaurant and no more. Then the probability f that a person gets a meal (or a restaurant gets a customer) on any day is calculated based on the probability P(m) that any restaurant gets chosen by m agents, with

P(m) = [N!/((N − m)! m!)] p^m (1 − p)^{N−m} = exp(−1)/m!,  (4)

where p = 1/N is the probability of choosing any restaurant. Hence, the fraction of restaurants that get chosen on any day is given by

f = 1 − P(0) = 1 − exp(−1) ≈ 0.63.  (5)

Now, we extend the above minimal KPR game to get a more efficient utilization of restaurants by taking advantage of the past experience of the diners. We stipulate that the N F_n diners successful on the nth day will visit the same restaurant on all subsequent days as well, while the remaining N − N F_n unsuccessful agents of the nth day stochastically try any of the remaining N − N F_n restaurants on the next day (i.e., the (n + 1)th day), and so on until all customers find a restaurant. The above procedure can be mathematically modeled to yield the following recurrence relation

F_{n+1} = F_n + f (1 − F_n),  (6)

where F_n is the fraction of restaurants occupied on the nth day, with F_1 = f = 1 − 1/e. Upon making a continuum approximation we get

dF/dn = f (1 − F),  (7)

1 − F = (1 − f )e f (1−n).

(8)

which yields the solution
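A quick check (our sketch) of this relaxation: simulating the rule directly — on each day the still-unserved agents redraw uniformly among the still-empty restaurants — tracks the solution (8) closely. Note that the number of seekers always equals the number of empty restaurants, since each occupied restaurant keeps exactly one diner.

```python
import math, random

N, f = 10000, 1 - math.exp(-1)
free = list(range(N))            # restaurants without a regular customer yet

for day in range(1, 9):
    # each still-unserved agent picks one of the still-empty restaurants
    visited = set(random.choice(free) for _ in range(len(free)))
    free = [r for r in free if r not in visited]   # each visited one keeps a diner
    F_sim = 1 - len(free) / N
    F_th = 1 - (1 - f) * math.exp(f * (1 - day))   # solution (8)
    print(day, round(F_sim, 4), round(F_th, 4))
```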

Thus we see that at least a few iterations (i.e., at least 5 days) are needed to get close to 100% occupation. We will now investigate how superior quantum solutions can be obtained for the KPR problem. We introduce quantum mechanics into the problem by asking the N agents to share an entangled N-particle quantum Hall state at filling factor 1, described in the Appendix (see Eqs. (15) & (16)). We assign to each of the N restaurants a unique angular momentum picked from the set {0, 1, 2, ..., N − 1}. We ask each agent to measure the angular momentum of a randomly chosen particle from the N-particle entangled state. Then, based on the measured angular momentum,


the agent goes to the restaurant that has his/her particular angular momentum assigned to it. In this approach all the agents get to eat in a restaurant and all the restaurants get a customer. Thus we see that the prescribed entangled state always produces restaurant-occupation probability 1 and is thus superior to the classical solution mentioned above! Furthermore, the probability that an agent picks a given restaurant is still p = 1/N, and hence all agents are equally likely to go to any restaurant. Thus, even if there is an accepted-by-all hierarchy amongst the restaurants (in terms of quality of food, with the price of all restaurants being the same), the entangled state produces an equitable (Pareto optimal) solution where all agents have the same probability of going to the best restaurant, or the second-best restaurant, and so on. Quite importantly, it can be shown that the chosen entangled quantum strategy (i.e., the entangled N-particle quantum Hall state at filling factor 1) actually represents the best Nash equilibrium when there is a restaurant ranking [8]!

2.2 Kolkata Stadium Problem We will next analyze a variant of the Minority game problem which we will call as the Kolkata Stadium Problem (KSP). In the KSP, there are NK spectators trapped inside a theater or a stadium that has K exits. There is a panic situation of a fire or a bomb-scare and all the spectators have to get out quickly through the K exits each of which has a capacity of α N with α ≥ 1. We assume that all NK spectators have equal access to all the exits and that each agent has enough time to approach only one exit before being harmed. The probability P(m) that any exit gets chosen by m spectators is given by the binomial distribution P(m) =

(NK)! pm (1 − p)NK−m , (NK − m)!m!

(9)

where p = 1/K is the probability of choosing any gate. For a capacity of αN for each gate, the cumulative probability P = ∑_{m=1}^{αN} P(m) that (on average) a gate is approached by αN or fewer spectators is given in Table 1. Thus we see that if a gate has the optimal capacity of N (i.e., α = 1), then P is close to 0.5 and is not affected by the number of gates K (for small K), with P → 0.5 for N → ∞. However, as α increases even slightly above unity, P increases significantly for fixed values of N and K. Furthermore, for fixed values of α > 1 and K (with α only slightly larger than 1 and K being small), P → 1 as N becomes large. Here it should be mentioned that, even when P → 1 on average, there can be fluctuations in a stampede situation, with more than αN people approaching a gate and thus resulting in fatalities. Here too, in the KSP game, if the NK spectators were to use the entangled NK-particle state given by the quantum Hall effect state at filling factor 1 (see Appendix), then every agent is assured of safe passage. In this situation, since there are NK angular momenta and only K gates, the angular momentum Mi measured by agent


Table 1 The calculated values of the cumulative probability P for a system with NK persons and K gates with a gate capacity of αN

α     N      K    P         α     N      K    P
1     100    10   0.5266    1.05  100    10   0.7221
1     1000   10   0.5084    1.05  1000   10   0.9531
1     10000  10   0.5027    1.05  10000  10   1.0000
1     100    20   0.5266    1.1   100    10   0.8652
1     1000   20   0.5084    1.1   1000   10   0.9995
1     10000  20   0.5027    1.1   10000  10   1.0000

i for his/her particle should be divided by K and the remainder taken to give the appropriate gate number (i.e., gate number = Mi (mod K)). Thus entanglement gives safe exit with probability 1 even when α = 1!
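The entries of Table 1 are easy to reproduce by direct summation of the binomial distribution (9); the short sketch below (ours) checks a few of them, using log-gamma functions for numerical stability.

```python
from math import lgamma, exp, log

def log_pmf(m, n, p):
    # log of C(n, m) p^m (1 - p)^(n - m)
    return (lgamma(n + 1) - lgamma(m + 1) - lgamma(n - m + 1)
            + m * log(p) + (n - m) * log(1 - p))

def P_cum(alpha, N, K):
    # P = sum_{m=1}^{alpha N} P(m) for the binomial (9), n = NK, p = 1/K
    n, p = N * K, 1.0 / K
    return sum(exp(log_pmf(m, n, p)) for m in range(1, int(alpha * N) + 1))

for alpha, N, K in [(1, 100, 10), (1.05, 100, 10), (1.1, 1000, 10)]:
    print(alpha, N, K, round(P_cum(alpha, N, K), 4))   # compare with Table 1
```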

3 Conclusions

In the N-agent KPR game, while the number of satisfactory choices is only N!, in sharp contrast the number of possibilities is N^N when all the restaurants have the same ranking. Thus, in the classical stochastic approach, the probability of getting the best solution, where all the restaurants are occupied by one customer each, is given by the vanishingly small value exp(−N). Even in the KSP case, it can be shown that there is a vanishingly small probability [= √(K/(2πN)^{K−1})] of providing safe passage to all when only N people are allowed to exit from each of the K gates (i.e., when α = 1). On the other hand, in this work we showed how quantum entanglement can produce coordinated action amongst all the agents, leading to the best possible solution with probability 1! Thus quantum entanglement produces a much more desirable scenario compared to a classical approach, at least for the KPR game. As a candidate for entanglement, one can also consider N identical qudits, each with N possible states. By producing an antisymmetric entangled state from these N qudits, one can get better results than with classical approaches. However, physically realizing a qudit with a large number of states is a challenging task [9]. Lastly, although it has not been shown that our many-particle entangled state (i.e., the quantum Hall effect state at filling factor 1) will have long-distance and also long-term correlations, we are hopeful of such a demonstration in the future.

Acknowledgements The author would like to thank Bikas K. Chakrabarti, K. Sengupta, Diptiman Sen, and R. Shankar for useful discussions. Furthermore, discussions with R. K. Monu on the literature are also gratefully acknowledged.


Appendix: Many-Particle Entangled State

We consider the case of the integer quantum Hall state at filling factor 1, which is a degenerate filled band state. Such a state is a Slater determinant of all N allowed single-particle eigenstates of the filled band, i.e., it is an antisymmetric linear superposition of N!-many N-particle eigenstates. In our quantum Hall state, electrons are chosen to be confined to the xy-plane and subjected to a perpendicular magnetic field. On choosing a symmetric gauge vector potential, A = 0.5B(y x̂ − x ŷ), the degenerate single-particle wavefunctions for the lowest Landau level (LLL) are given by:

φm(z) ≡ |m⟩ = [1/(2π l0² 2^m m!)]^{1/2} (z/l0)^m e^{−|z|²/4l0²},  (10)

where z = x − iy is the electron position in the complex plane, m is the orbital angular momentum, and l0 ≡ √(ħc/eB) is the magnetic length. The area occupied by the electron in state |m⟩ is

⟨m|π r²|m⟩ = 2(m + 1)π l0².  (11)

Thus the LLL can accommodate only Ne electrons, given by

Ne = (M + 1) = A/(2π l0²),  (12)

where A is the area of the system and M is the largest allowed angular momentum for area A. The many-electron system is described by the Hamiltonian

H = ∑_j (1/2m_e) [−iħ∇_j − (e/c) A_j]² + ∑_{j<k} e²/|r_j − r_k|.

For ρ > ρc, the saddle Eq. (8) no longer has any solution zs < 1. Obviously, for a solution zs > 1, the partition function diverges. Then, we should bear in mind that the term ρ_{ε=0} in (6), which was omitted before the condensation, becomes an O(1) object, and the saddle point equation we should deal with is not (8) but (6). As a result, Eq. (6) has a solution zs = 1 even for ρ > ρc, and the number of balls k∗ in the condensation state increases linearly in ρ as k∗ = N(ρ − ρc), whereas the number of balls in excited states reaches k̂ ≡ N ρc. Thus, we have obtained the saddle point zs for a given density and inverse temperature. We found that the condensation is specified by the solution zs = 1. We next investigate the density dependence of the occupation probability through the saddle point. For the solution zs of the saddle point equation, the occupation probability at inverse temperature β is evaluated as follows:

P(k) = [ε0 Γ(3/2)/β^{3/2}] z_s^k k^{−3/2} − [ε0 Γ(3/2)/β^{3/2}] z_s^{k+1} (k + 1)^{−3/2}.  (10)

The above occupation probability is valid for an arbitrary integer value k ≥ 1. We plot the behavior of the occupation probability P(k) in the finite-k regime in Figure 1. In this plot, we set ε0 = 1 and zs = 0.1, 0.8, 1.0, and the inverse temperature is β = 1. In the inset of the same figure, we also show the same data on a log-log scale for the asymptotic behavior of the probability P(k) for several values of zs, namely, zs = 0.1, 0.5, 0.7, 0.99 and 1. From this figure, we find that the power law k^{−5/2} emerges when the condensation takes place for ρ > ρc. The numerical behavior of the occupation probability (10) in the limit k → ∞ is easily confirmed by asymptotic analysis of Eq. (10). We easily find that the asymptotic form of the wealth distribution P(k) behaves as

(11)


Fig. 1 The behavior of the occupation probability (10) in the non-asymptotic regime. We set ε0 = 1 and zs = 0.1, 0.8, 1.0, and the inverse temperature is β = 1. The inset of the figure shows the asymptotic behavior of the occupation probability P(k) as log-log plots for the cases zs = 0.1, 0.5, 0.7, 0.99 and 1 (For the coloured version of this figure, contact the author)

We should also notice that a macroscopic number of balls k∗ gathers in a specific urn with energy level ε = 0 when the condensation occurs. As a result, a term of the form ∼ (1/N)δ(k − k∗) should be added to the occupation probability. Let us summarize the results as follows:

P(k) = [ε0 Γ(3/2)(1 − z_s)/β^{3/2}] k^{−3/2} exp[−k log(1/z_s)]  (ρ < ρc: z_s < 1),
P(k) = [3 ε0 Γ(3/2)/(2β^{3/2})] k^{−5/2} + (1/N) δ(k − k∗)  (ρ ≥ ρc: z_s = 1).  (12)

The scenario of the condensation is as follows. For a given ρ < ρc, one can find a solution zs < 1 of the saddle point Eq. (6), and hence the second term of Eq. (6) is zero in the thermodynamic limit. Then, for the non-condensed Nρ balls, the occupation probability follows a ∼ k^{−3/2} e^{−k} law. Namely, urns possessing a large number of balls do not appear, due to the repulsive force E = ε n. When ρ > ρc, the saddle point zs is fixed at zs = 1; if zs > 1, the first term of the saddle point Eq. (6) has a singularity. Therefore, in order to avoid the singularity, the second term of the saddle point Eq. (6) goes from zero to a finite value. As a result, the occupation probability is described by the k^{−5/2} law, with a delta peak corresponding to an urn with ε = 0 gathering the condensed N(ρ − ρc) balls. This corresponds to the condensation phenomenon in the disordered urn model. In particular, the occurrence of the condensation in the disordered urn model treated in the present paper is characterized by the transition from the exponential law to the heavy-tailed power law. We also mention the effect of disorder on the power-law behavior of the occupation probability. We easily find that the power-law behavior disappears when one


cancels the disorder of the system by choosing the density of the energy as D(ε) = δ(ε − ε̂) (ε̂ is a constant). This fact means that the disorder appearing in the system plays a central role in making the occupation probability exhibit power-law behavior.
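Before turning to the economic interpretation, note that the crossover summarized in (12) can be read off numerically from the exact expression (10). In this sketch (ours, with ε0 = β = 1 as in Figure 1), the log-log slope of P(k) is close to −5/2 at zs = 1, while for zs < 1 the exponential factor dominates and the effective slope is far steeper.

```python
import math

G = math.gamma(1.5)                 # Gamma(3/2); we set eps0 = beta = 1

def P(k, zs):                       # occupation probability (10)
    return G * (zs**k * k**-1.5 - zs**(k + 1) * (k + 1)**-1.5)

for zs in (0.8, 1.0):
    k1, k2 = 100, 1000
    slope = (math.log(P(k2, zs)) - math.log(P(k1, zs))) / math.log(k2 / k1)
    print(zs, round(slope, 2))      # zs = 1 gives about -2.5; zs < 1 is far steeper
```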

4 Interpretation From a Viewpoint of Macroeconomics

In this section, we reconsider the results obtained in the previous sections from a viewpoint of macroeconomics. It is easy for us to regard the occupation probability as a wealth distribution when we notice the correspondences balls ↔ money and urns ↔ people in a society. In the following, we attempt to find an interpretation of the condensation and of the emergence of the Pareto law [23] in terms of wealth differentials [24, 25, 26, 27, 28, 29, 30, 31, 32]. It is quite important for us to consider the whole range of the wealth. As reported in [31], the wealth distribution in the small-income regime follows the Gibbs/log-normal law, and a kind of transition to the Pareto-law phase is observed. For the whole-range distribution of the wealth, the so-called Lorentz curve [33, 34, 35] is obtained. The Lorentz curve is given in terms of the relation between the cumulative distribution of wealth and the fraction of the total wealth. Then the so-called Gini index [33, 34, 35, 36, 37], which is a traditional, popular and one of the most basic measures of wealth differentials, can be calculated. The index ranges from 0 (no differentials) to 1 (the largest differentials). For the energy function (4) of the previous section, we derived the wealth distribution for the whole range of incomes k. In this section, we evaluate the Gini index analytically. As we mentioned above, the Lorentz curve is determined by the relation between the cumulative distribution of wealth X(t) = ∫_{t_min}^{t} P(k) dk and the fraction of the total wealth Y(t) = ∫_{t_min}^{t} k P(k) dk / ∫_{t_min}^{∞} k P(k) dk for a given wealth distribution P(k). For instance, the Lorentz curve for the exponential distribution P(k) = γ e^{−γk} is given by Y = X + (1 − X) log(1 − X). We should notice that the Lorentz curve for the exponential distribution is independent of γ. For the power-law distribution P(k) = (γ − 1) k^{−γ} (γ > 1), we have Y = 1 − (1 − X)^{(γ−2)/(γ−1)} as the Lorentz curve. Then, the Gini index G is defined through the area between the perfect equality line Y = X and the Lorentz curve. This quantity explicitly reads

1 0

(X − Y )dX = 2



tmin

(X(t) − Y (t)) ·

dX dt dt

(13)

and we have G = 1/2 [34, 35] for the exponential distribution and G = 1/(2γ − 3) for the power-law distribution. As the occupation probability distribution (10) is defined for k > 1, one can evaluate the Gini index as a function of the saddle point zs . In Figure 2, we plot the Lorentz curve (left) for several values of zs . In the right panel, the Gini index G(zs ) is shown. We find that the index approaches to 1/2 as zs → 1.


Fig. 2 The left panel: the Lorentz curve for (10). The right panel shows the Gini index for several values of zs (For the coloured version of this figure, contact the author)

From the argument in the previous section, we easily find that the occupation distribution for N ρc non-condensed balls beyond the critical point is modified such as ∼ k−(α +2) by choosing the density of the energy D(ε ) = ε0 ε α . Namely, for the Pareto-law distribution ∼ k−(α +2) , the Gini index leads to G = 1/(2α + 1). Therefore, the condensation is specified by the change of the Gini index from G = 1/2 to 1/(2α + 1).

5 Conclusion In this paper, we investigated equilibrium properties of disordered urn model and discuss the condition on which the heavy tailed power-law appears in the occupation probability by using statistical physics of disordered spin systems. For the choice of the energy function as E(ε , n) = ε n with density of state D(ε ) = ε0 ε α for the Monkey class urn model, we found that above the critical density ρ > ρc for a temperature, the condensation phenomenon is taken place, and most of the balls falls in an urn with the lowest energy level. As the result, the occupation probability changes its scaling behavior from the exponential k−(α +1) e−k -law to the k−(α +2) power-law in large k regime. We also provided a possible link between our results and macro economy, in particular, wealth differentials. We hope that various versions and extensions of the disordered urn model, including Backgammon model [21, 19], the model describing resource utilization such as the Kolkata Paise Restaurant Problem [38] could be applied to research area beyond conventional statistical physics. Acknowledgements We acknowledge the organizers of Econophysics-Kolkata IV. One of the authors (J.I.) was financially supported by Grant-in-Aid Scientific Research on Priority Areas “Deepening and Expansion of Statistical Mechanical Informatics (DEX-SMI)” of The Ministry of Edu-

60

Jun-ichi Inoue and Jun Ohkubo

cation, Culture, Sports, Science and Technology (MEXT) No. 18079001. He also thanks Professor Robin Stinchcombe for fruitful discussion.

References 1. Sherrington D and Kirkpatrick S 1975 Phys. Rev. Lett. 32 1792 2. Nishimori H 2001 Statistical Physics of Spin Glasses and Information Processing: An Introduction (Oxford: Oxford University Press) 3. Coolen A.C.C 2006 The Mathematical Theory Of Minority Games: Statistical Mechanics Of Interacting Agents (Oxford Finance) (Oxford: Oxford University Press) 4. Mezard M, Parisi G and Virasoro M A 1987 Spin Glass Theory and Beyond (Singapore: World Scientific) 5. Ehrenfest P and Ehrenfest T 1907 Phys. Zeit. 8 311 6. Kac M 1959 Probability and Related Topics in Physical Science, (London: Interscience Publishers) 7. Huberman B A and Adamic L A 1999 Nature 401 131 8. Barab´asi A-L and Albert R 1999 Science 286 509 9. Mantegna R N and Stanley H E 2000 An Introduction to Econophysics: Correlations and Complexity in Finance (Cambridge: Cambridge University Press) 10. Bouchaud J-P and Potters M 2000 Theory of Financial Risk and Derivative Pricing (Cambridge: Cambridge University Press) 11. Scalas E, Martin E and Germano G 2007 Phys. Rev. E 76 011104 12. Godreche C and Luck J M 2001 Eur. Phys. J. B. 23 473 13. Evans M R and Hanney T 2005 J. Phys. A: Math. Gen. 38 R195 14. Ohkubo J, Yasuda M and Tanaka K 2006 Phys. Rev. E 72 065104 (R) 15. Ohkubo J, Yasuda M and Tanaka K 2006 J. Phys. Soc. Jpn. 75 074802; Erratum: 2007 ibid. 76 048001 16. Ohkubo J 2007 J. Phys. Soc. Jpn. 76 095002 17. Ohkubo J 2007 Phys. Rev. E. 76 051108 18. Bialas P, Burda Z and Johnston D 1997 Nucl. Phys. B 493 505 19. Leuzzi L and Ritort F 2002 Phys. Rev. E 65 056125 20. Inoue J and Ohkubo J 2008 J. Phys. A: Math. Theor. 41 324020 21. Ritort F 1995 Phys. Rev. Lett. 75 1190 22. Morse P M and Feshbach H 1953 Models of theoretical physics (New York: McGraw-Hill, New York) 23. Pareto V 1897 Cours d’ Economie Politique Vol. 2 ed Pichou F (Lausanne: University of Lausanne Press) 24. Angle J 1986 Social Forces 65 293 25. M. L´evy and S. Solomon, Int. J. Mod. Phys. C 7, 65 (1996). 26. Ispolatov S, Krapivsky P L and Redner S 1998 Eur. Phys. J. B 2 267 27. Bouchaud J-P and Mezard M 2000 Physica A 282 536 28. Dr˘agulescu A and Yakovenko V M 2000 Eur. Phys. J. B 17 723 29. Chatterjee A, Chakrabarti B K and Manna S S 2003 Physica Scripta 106 36 30. Fujiwara Y, DiGuilmi C, Aoyama H, Gallegati M and Souma W 2004 Physica A 335 197 31. Chatterjee A, Yarlagadda S and Chakrabarti B K (Eds.) 2005 Econophysics of Wealth Distributions (New Economic Window) (Berlin: Springer) 32. Burda Z, Johnston D, Jurkiewicz J, Kaminski M, Nowak M A, Papp G and Zahed I 2002 Phys. Rev. E 65 026102 33. Kakwani N 1980 Income Inequality and Poverty (Oxford: Oxford University Press) 34. Dr˘agulescu A and Yakovenko V M 2001 Eur. Phys. J. B 20 585 35. Silva A C and Yakovenko V M 2005 Europhys. Lett. 69 304 36. Sazuka N and Inoue J 2007 Physica A 383 49 37. Sazuka N, Inoue J and Scalas E 2009 Physica A 388 2839 38. Chakrabarti A S, Chakrabarti B K, Chatterjee A and Mitra M 2009 Physica A 388 2420

Economic Interactions and the Distribution of Wealth

Davide Fiaschi and Matteo Marsili

Abstract This paper analyzes the equilibrium distribution of wealth in an economy where firms' productivities are subject to idiosyncratic shocks, returns on factors are determined in competitive markets, dynasties have linear consumption functions, and the government imposes taxes on capital and labour incomes and redistributes the collected resources equally to dynasties. The equilibrium distribution of wealth is explicitly calculated and its shape crucially depends on market incompleteness. In particular, a Paretian law in the top tail arises only if capital markets are incomplete. The Pareto exponent depends on the saving rate, on the net return on capital, on the growth rate of the population and on portfolio diversification. On the contrary, the characteristics of the labour market mostly affect the bottom tail of the distribution of wealth. The analysis also suggests a positive relationship between growth and wealth inequality.

1 Introduction

The statistical regularities in the distribution of wealth have attracted considerable interest since the pioneering work of Pareto [9] (see Atkinson and Harrison [1] and Davies and Shorrocks [4] for a review). The efforts of economists have focused primarily on understanding the micro-economic causes of inequality. A more recent trend, reviewed in Chatterjee et al. [3], has instead focused on mechanistic models of wealth exchange with the aim of reproducing the observed empirical distribution.

Davide Fiaschi
Dipartimento di Scienze Economiche, University of Pisa, Via Ridolfi 10, 56124 Pisa, Italy. e-mail: [email protected]

Matteo Marsili
The Abdus Salam International Centre for Theoretical Physics, Strada Costiera 11, 34014 Trieste, Italy. e-mail: [email protected]


A general conclusion is that the Pareto distribution arises from the combination of a multiplicative accumulation process and an additive term. This paper attempts to establish a link between these two literatures, by showing that the same mathematical structure emerges in a model which explicitly takes into account the complexity of market interactions in a large economy. In brief, the model describes how idiosyncratic shocks in the production of firms, propagating through the financial and the labor markets, shape the distribution of wealth. Market networks, i.e. who works for and who invests in each firm, play a crucial role in determining the outcome. As suggested in Aiyagari [2], the shape of the equilibrium distribution crucially depends on market incompleteness, i.e. on the fact that individuals do not invest in all firms. With complete markets, the equilibrium distribution of wealth is determined solely by shocks transmitted through the labor market, and it takes a Gaussian shape, a result at odds with empirical evidence (see, e.g., Klass et al. [8]). Only when frictions and transaction costs impede full diversification of dynasties' portfolios does the top tail of the distribution follow a Paretian law. The explicitly computed Pareto exponent allows us to identify the effects which different parameters have on wealth inequality. We find that an increase in the taxation of capital income or in the diversification of dynasties' portfolios increases the Pareto exponent, whereas changes in the saving rate or in the growth rate of the population impact inequality in different ways, depending on technological parameters, due to indirect effects on the return on capital. The bottom tail of the equilibrium distribution of wealth is instead crucially affected by the characteristics of the labour market. When the labour market is completely decentralized, so that individual wages immediately respond to idiosyncratic shocks to firms, the support of the equilibrium distribution of wealth includes negative values; on the contrary, if all workers receive the same wage, i.e. bargaining in the labour market is completely centralized, shocks are transmitted only through the return on capital and the distribution of wealth is bounded away from zero. Finally, we show that, if the growth rate of the economy is endogenous, there is a negative relationship between the growth rate and the Pareto exponent, i.e. a positive relationship between growth and wealth inequality.

2 The Model

We model a competitive economy in which F firms demand capital and labour. We assume all wealth is owned by N households (assumed to be infinitely lived), who supply capital and labour and decide which amount of their disposable income is saved. Wages and returns on capital adjust to clear the labour and capital markets, respectively. We derive continuous-time stochastic equations for the evolution of the distribution of wealth, specifying the dynamics over a time interval [t, t+dt) and then letting dt → 0. We refer the interested reader to Fiaschi and Marsili [6] for details, and report the dynamical equations directly. The wealth p_i of household i obeys the


following stochastic differential equation:

$$\frac{dp_i}{dt} = s\left[(1-\tau_k)\rho\, p_i + (1-\tau_l)\omega\, l_i + \tau_k \rho \bar p + \tau_l \omega \bar l\right] - \chi - \nu p_i + \eta_i, \qquad (1)$$

where η_i is a white noise term with E[η_i(t)] = 0 and covariance

$$E\left[\eta_i(t)\,\eta_{i'}(t')\right] = \delta(t-t')\, H_{i,i'}[p]. \qquad (2)$$
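To fix ideas, the dynamics (1)-(2) can be integrated numerically. The sketch below uses a plain Euler-Maruyama step with ρ and ω held fixed and with independent multiplicative Gaussian shocks standing in for the structured noise η_i (whose full covariance H_{i,i'}[p] is specified below); all parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumptions, not calibrated to the paper)
N, T, dt = 1000, 200.0, 0.01
s, tau_k, tau_l = 0.25, 0.2, 0.2   # saving rate, capital and labour tax rates
rho, omega = 0.05, 1.0             # return on capital and wage rate (held fixed here)
chi, nu = 0.1, 0.03                # minimal consumption, consumption rate of wealth
sigma = 0.15                       # crude stand-in for the noise strength

p = np.ones(N)                     # initial wealth
l = np.ones(N)                     # labour endowments l_i = 1

for _ in range(int(T / dt)):
    p_bar = p.mean()
    # deterministic part of Eq. (1): saving out of disposable income,
    # redistribution of collected taxes, minimal consumption, consumption of wealth
    drift = (s * ((1 - tau_k) * rho * p + (1 - tau_l) * omega * l
                  + tau_k * rho * p_bar + tau_l * omega * l.mean())
             - chi - nu * p)
    # simplified i.i.d. multiplicative noise in place of eta_i
    noise = sigma * s * (1 - tau_k) * rho * p * rng.normal(size=N)
    p += drift * dt + noise * np.sqrt(dt)

print(f"mean wealth {p.mean():.3f}, "
      f"top 1% share {np.sort(p)[-N // 100:].sum() / p.sum():.3f}")
```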

The first three terms in the r.h.s. of Eq. (1) detail a simple behavioral model of how the consumption of household i depends on her income and wealth. The term in square brackets represents the disposable income of household i, which arises i) from the return on investment, at an interest rate ρ, taxed by the government at a flat rate τk, and ii) from labor income, which is taxed at a rate τl. Here ω is the wage rate and l_i is the labor endowment of household i. The last two terms in the square brackets denote the equal redistribution of the taxes collected on capital and labor, respectively, where p̄ and l̄ are the average wealth and labor endowment. A fraction s of income is saved, i.e. s is the saving rate on income. The term χ represents minimal consumption, i.e. the rate at which a household would consume in the absence of wealth and income, whereas ν is the rate of consumption of wealth. This simple consumption model finds solid empirical support, as discussed in Fiaschi and Marsili [6]. The return on capital ρ and the wage rate ω are fixed by the equilibrium conditions of the economy. In brief, each firm j buys capital k_j and labor l_j from households in the capital and labor markets, i.e.:

$$k_j = \sum_{i=1}^N \theta_{i,j}\, p_i, \qquad l_j = \sum_{i=1}^N \phi_{i,j}, \qquad j = 1,\dots,F,$$

where θ_{i,j} (φ_{i,j}) is the fraction of i's wealth (labor) invested in firm j. These are used as inputs in the production of firm j, and produce an amount dy_j = q(k_j, l_j) dA_j of output in the time interval dt. Here q(k,l) is the production function of firms, whereas dA_j(t) is an idiosyncratic shock, which is modeled as a random variable with mean E[dA_j] = a dt and variance a²Δ dt. Under the standard assumption that q(k,l) = l g(k/l) is a homogeneous function of degree one, when the capital and labor markets clear we find that i) each firm has the same capital-to-labor ratio k_j/l_j = λ, ii) the return on capital is given by ρ = ag′(λ) and iii) the wage rate is ω = a[g(λ) − λg′(λ)]. Since labor and capital are provided by households, and because of i), the constant λ also equals household wealth per unit labor. Setting l_i = 1 for all i, the constant λ then equals the average wealth p̄ of households. The covariance of the stochastic noise in Eq. (1) is given by:
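The two marginal conditions ρ = ag′(λ) and ω = a[g(λ) − λg′(λ)] are easy to evaluate once a technology is fixed. A minimal sketch for the CES technology g(λ) = [ελ^γ + 1 − ε]^{1/γ} (the form used for Figure 1 later in the paper, with ε = 0.2 and γ = 0.7); the grid of λ values below is an arbitrary illustration:

```python
import numpy as np

def g(lam, eps=0.2, gam=0.7):
    """CES technology g(lambda) = [eps*lambda^gam + 1 - eps]^(1/gam)."""
    return (eps * lam**gam + 1 - eps) ** (1 / gam)

def g_prime(lam, eps=0.2, gam=0.7):
    """Closed-form derivative of the CES technology."""
    return eps * lam**(gam - 1) * (eps * lam**gam + 1 - eps) ** (1 / gam - 1)

def factor_prices(lam, a=1.0):
    """Equilibrium return on capital and wage rate at capital/labour ratio lam."""
    rho = a * g_prime(lam)
    omega = a * (g(lam) - lam * g_prime(lam))
    return rho, omega

for lam in (0.5, 1.0, 2.0, 5.0):
    rho, omega = factor_prices(lam)
    print(f"lambda = {lam:4.1f}:  rho = {rho:.4f},  omega = {omega:.4f}")
```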


$$H_{i,i'}[p] = \Delta s^2 \Big\{ (1-\tau_k)^2 \rho^2 p_i p_{i'} \Theta_{i,i'} + (1-\tau_l)^2 \omega^2 l_i l_{i'} \Phi_{i,i'} + (1-\tau_k)(1-\tau_l)\rho\omega \left[ p_i l_{i'} \Omega_{i,i'} + l_i p_{i'} \Omega_{i',i} \right] + \frac{\tau_k\rho + \tau_l\omega/\lambda}{N} \left[ (1-\tau_k)\rho\,(p_i\vartheta_i + p_{i'}\vartheta_{i'}) + (1-\tau_l)\omega\,(l_i\varphi_i + l_{i'}\varphi_{i'}) \right] + \frac{\left[\tau_k\rho + \tau_l\omega/\lambda\right]^2}{N^2} \sum_{j=1}^F k_j^2 \Big\}, \qquad (3)$$

where

$$\vartheta_i = \sum_{i'=1}^N \Theta_{i,i'}\, p_{i'}, \qquad \varphi_i = \sum_{i'=1}^N \Omega_{i,i'}\, p_{i'},$$

and

$$\Theta_{i,i'} = \sum_{j=1}^F \theta_{i,j}\theta_{i',j}, \qquad \Omega_{i,i'} = \sum_{j=1}^F \theta_{i,j}\phi_{i',j} \qquad \text{and} \qquad \Phi_{i,i'} = \sum_{j=1}^F \phi_{i,j}\phi_{i',j}. \qquad (4)$$

The parameters in Eq. (3) characterize the degree of intertwinement of economic interactions, i.e. how random shocks propagate throughout the economy. For example, Θ_{i,i'} is a scalar which represents the overlap between the investments of dynasty i and those of dynasty i'.
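In matrix form the overlaps (4) are Gram matrices of the portfolio and employment weights: Θ = θθᵀ, Ω = θφᵀ and Φ = φφᵀ. A small sketch with randomly generated weights (purely illustrative; normalizing each row to one is our assumption):

```python
import numpy as np

rng = np.random.default_rng(1)
N, F = 6, 4  # tiny example: 6 households, 4 firms

# theta[i, j]: fraction of household i's wealth invested in firm j (rows sum to 1)
theta = rng.random((N, F))
theta /= theta.sum(axis=1, keepdims=True)

# phi[i, j]: fraction of household i's labour supplied to firm j (rows sum to 1)
phi = rng.random((N, F))
phi /= phi.sum(axis=1, keepdims=True)

Theta = theta @ theta.T   # Theta_{ii'} = sum_j theta_{ij} theta_{i'j}
Omega = theta @ phi.T     # Omega_{ii'} = sum_j theta_{ij} phi_{i'j}
Phi   = phi @ phi.T       # Phi_{ii'}   = sum_j phi_{ij} phi_{i'j}

# Diagonal entries measure concentration: Theta_ii = 1 iff all wealth is in one
# firm, while Theta_ii = 1/F under maximal diversification.
print("diag Theta:", np.round(np.diag(Theta), 3))
print("1/F =", 1 / F)
```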

3 Infinite Economy

We analyze the properties of the stochastic evolution of wealth discussed in the previous paragraph in the case of an infinite economy, that is, of an economy where N, F → ∞. In particular, we assume that F = fN, where f is a positive constant. This assumption is not a relevant limitation of the analysis because in a real economy N and F may be of the order of some millions. We make the further simplifying assumption that households do not differ among themselves in their endowment of labour l_i, in the diversification of their portfolios Θ_{i,i}, in the allocation of their wealth among the firms where they are working Ω_{i,i}, and in the number of firms where they are working Φ_{i,i}, i.e. we assume that: l_i = l̄ = 1, Θ_{i,i} = Θ̄, Ω_{i,i} = Ω̄ and Φ_{i,i} = Φ̄ ∀i. For example, Θ̄ = 1 implies no diversification of the dynasties' portfolios (i.e. all wealth is invested in the same firm), whereas Θ̄ = 1/F (i.e. Θ̄ → 0 for F → ∞) corresponds to maximal diversification of portfolios; similarly, Φ̄ = 1 means that each dynasty is working in just one firm. In the limit N, F → ∞, the per capita wealth p̄ follows a deterministic dynamics given by

$$\frac{d\bar p}{dt} = s\left(\rho \bar p + \omega\right) - \chi - \nu \bar p. \qquad (5)$$


Besides a technical condition,¹ this result requires that the average wealth satisfies the Law of Large Numbers, i.e. that the wealth distribution f(p) has a finite first moment. Two different regimes are possible: i) the stationary economy, where wealth is constant in equilibrium; and ii) the endogenous growth economy, where wealth grows at a constant rate in equilibrium.

¹ The technical condition ∑_{i=1}^N θ_{i,j} ≤ θ̄ ∀ j, N is needed to show this result. For the proof see Fiaschi and Marsili [6].

3.1 Stationary Economy

If the growth rate of per capita wealth becomes negative for large values of p̄, i.e. if

$$\lim_{\bar p \to \infty} g'(\bar p) < \frac{\nu}{s a}, \qquad (6)$$

the economy converges to a stationary state: setting dp̄/dt = 0 in Eq. (5), the equilibrium per capita wealth p̄* solves

$$s\left(\rho^* \bar p^* + \omega^*\right) = \chi + \nu \bar p^*, \qquad (7)$$

with ρ* = ag′(p̄*) and ω* = a[g(p̄*) − p̄*g′(p̄*)] the corresponding return on capital and wage rate. The shape of the equilibrium distribution then depends on the structure of markets:

• In the complete market case Θ̄ = Ω̄ = 0, so that a₁ = a₂ = 0 and a₀ > 0, the equilibrium distribution of wealth attains a Gaussian shape,

$$f(p_i) = \mathcal{N}\, e^{-\frac{(z_0 - z_1 p_i)^2}{z_1 a_0}}, \qquad (8)$$

with mean z₀/z₁ = p̄ and variance a₀/(2z₁) (these parameters are defined below in Eq. (9)).

• In the more realistic incomplete market case, i.e. Θ̄, Ω̄, Φ̄ > 0, when full diversification is not possible both in the capital and in the labor market (incomplete markets), then:

$$f(p_i) = \frac{\mathcal{N}}{\left(a_0 + a_1 p_i + a_2 p_i^2\right)^{1 + z_1/a_2}}\, \exp\!\left[\frac{4\left(z_0 + z_1 a_1/(2 a_2)\right)}{\sqrt{4 a_0 a_2 - a_1^2}}\, \arctan\!\left(\frac{a_1 + 2 a_2 p_i}{\sqrt{4 a_0 a_2 - a_1^2}}\right)\right], \qquad (9)$$


where

$$z_0 = s\left[\omega^* + \tau_k \rho^* \bar p\right] - \chi; \qquad z_1 = \nu - s(1-\tau_k)\rho^*;$$
$$a_0 = \Delta s^2 (1-\tau_l)^2 \omega^{*2} \bar\Phi; \qquad a_1 = 2\Delta s^2 (1-\tau_k)(1-\tau_l)\rho^*\omega^* \bar\Omega \qquad \text{and} \qquad a_2 = \Delta s^2 (1-\tau_k)^2 \rho^{*2} \bar\Theta,$$

and where $\mathcal{N}$ is a constant defined by the normalization condition $\int_{-\infty}^{\infty} f(p_i)\, dp_i = 1$. For large $p_i$, $f(p_i) \sim p_i^{-\alpha-1}$ follows a Pareto distribution whose exponent is given by:

$$\alpha = 1 + \frac{2 z_1}{a_2} = 1 + 2\, \frac{\nu - s(1-\tau_k)\rho^*}{\Delta s^2 (1-\tau_k)^2 \rho^{*2} \bar\Theta}. \qquad (10)$$

We observe that z₁, a₂ > 0 (see Eq. (6)) and hence α > 1: this ensures that the first moment of the wealth distribution is indeed finite.

• The case Θ̄ > 0 and Φ̄ = Ω̄ = 0 corresponds to the rather unrealistic situation where households distribute their labor over all firms. It turns out, however, that the resulting distribution of wealth is exactly the same as that of an economy in which trade unions have very strong market power, so that bargaining in the labour market is completely centralized. Hence wages are fixed (staggered wages) in the short run and productivity shocks are absorbed by the returns on capital. Mathematically this corresponds exactly to the case Φ̄ = Ω̄ = 0, for which the distribution of wealth reads

$$f(p_i) = \frac{\mathcal{N}}{(a_2 p_i)^{2(1 + z_1/a_2)}}\, e^{-\frac{2 z_0}{a_2 p_i}}, \qquad (11)$$

where $\mathcal{N}$ is a normalization constant, and z₁ and a₂ are the same as above. The results above indicate that while the bottom of the wealth distribution is determined by the labor market, the top tail only depends on the working of capital markets. If wages respond to productivity shocks and households are not able to fully diversify their employment (as is typically the case), then the distribution extends to negative values of wealth. If, instead, staggered wages are imposed by centralized bargaining in the labor market, then inequality in the bottom tail is greatly reduced. With respect to the upper tail, we observe that the assumption Θ_{i,i} = Θ̄ ∀i eliminates cross-household heterogeneity in Eq. (9). However, it is worth noting that if dynasties were heterogeneous in their portfolio diversification, i.e. Θ_{i,i} ≠ Θ_{i',i'}, then the top tail of the distribution would be populated by the dynasties with the highest Θ_{i,i}, that is, by those dynasties with the least diversified portfolios. This finding agrees with the empirical evidence on the low diversification of the portfolios of wealthy households discussed in Guiso et al. [7, Chapter 10].


The (inverse of the) exponent α provides a measure of inequality. Our results show that inequality increases with the volatility Δ of productivity shocks and with the concentration Θ̄ of household portfolios, and it decreases with capital taxation τk. Changes in s and ν have, on the contrary, an ambiguous effect on the size of the top tail of the distribution of wealth. More precisely, an increase in the gross return on capital ρ* amplifies inequality (i.e. ∂α/∂ρ* < 0). When s increases, a direct effect tends to decrease α, while an induced effect tends to increase α, because it causes an increase in the equilibrium per capita wealth p̄* and hence a decrease in the return on capital ρ*. When ν increases the contrary happens. Without specifying the technology g(λ) it is not possible to determine which effect prevails (see Fiaschi and Marsili [6] for some examples).
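These comparative statics can be checked directly from Eq. (10). The sketch below evaluates α at illustrative parameter values (chosen only so that z₁ > 0; they are not calibrated), holding ρ* fixed and therefore ignoring the indirect general-equilibrium effects just discussed:

```python
def pareto_exponent(nu, s, tau_k, rho_star, Delta, Theta_bar):
    """Stationary-economy Pareto exponent, Eq. (10)."""
    z1 = nu - s * (1 - tau_k) * rho_star
    a2 = Delta * s**2 * (1 - tau_k)**2 * rho_star**2 * Theta_bar
    return 1 + 2 * z1 / a2

# Illustrative baseline (nu > s(1 - tau_k) rho*, so z1 > 0 and alpha > 1)
base = dict(nu=0.05, s=0.2, tau_k=0.2, rho_star=0.25, Delta=2.0, Theta_bar=0.5)
print("baseline alpha      :", round(pareto_exponent(**base), 3))

# Direct effect of capital taxation: higher tau_k -> larger alpha
print("alpha at tau_k=0.30 :", round(pareto_exponent(**{**base, "tau_k": 0.30}), 3))

# More diversified portfolios (smaller Theta_bar) -> larger alpha
print("alpha at Theta=0.25 :", round(pareto_exponent(**{**base, "Theta_bar": 0.25}), 3))

# Larger gross return on capital -> smaller alpha (more inequality)
print("alpha at rho*=0.30  :", round(pareto_exponent(**{**base, "rho_star": 0.30}), 3))
```

Consistently with the discussion above, raising τk or lowering Θ̄ raises α (a thinner top tail), while a larger ρ* lowers it.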

3.2 Endogenous Growth Economy

If the dynamics of per capita wealth obeys Eq. (5) and

$$\lim_{\bar p \to \infty} g'(\bar p) > \frac{\nu}{s a}, \qquad (12)$$

then, in the long run, the returns on factors are given by:

$$\rho^* = \lim_{\bar p \to \infty} a g'(\bar p) \qquad (13)$$

and

$$\omega^* = 0, \qquad (14)$$

and per capita wealth grows at the rate³

$$\psi^{EG} = \lim_{\bar p \to \infty} s a g'(\bar p) - \nu = s\rho^* - \nu. \qquad (15)$$

Notice that ψ^{EG} is independent of the flat tax rate on capital⁴ τk and of the diversification of dynasty i's portfolio Θ̄; however, ψ^{EG} increases with the saving rate s and with the return on capital ρ*, and it decreases with ν; changes in technology which increase the return on capital therefore also cause an increase in ψ^{EG}. The distribution of wealth is best described in terms of the relative per capita wealth of households, u_i = p_i/p̄. In the long run household i's relative wealth obeys the following stochastic differential equation:

³ If g(0) > χ/(sa), this result holds independently of the initial level of per capita wealth; otherwise endogenous growth sets in only if the initial per capita wealth is sufficiently high (see Fiaschi and Marsili [6]).
⁴ This is due to the assumption of a constant saving rate s. Generally, s increases with the net return on capital (1 − τk)ρ*, hence s decreases with τk. This suggests that the growth rate ψ^{EG} decreases with capital taxation τk.


$$\lim_{t\to\infty} \frac{d u_i}{d t} = s\rho^*\tau_k\,(1-u_i) + \tilde\eta_i, \qquad (16)$$

where η̃_i = η_i/p̄ is a white noise term with E[η̃_i(t)] = 0 and covariance

$$E\left[\tilde\eta_i(t)\,\tilde\eta_{i'}(t')\right] = \delta(t-t')\, H_{i,i'}[u], \qquad (17)$$

where

$$\lim_{t\to\infty}\lim_{N\to\infty} H_{i,i'}[u] = \Delta s^2 (1-\tau_k)^2 \rho^{*2}\, \Theta_{i,i'}\, u_i u_{i'}.$$

In the limit p̄ → ∞ the equilibrium wage rate converges to 0 and therefore wages do not play any role in the dynamics of the relative per capita wealth of dynasty i, as stated above. In the long run, the equilibrium distribution of the relative per capita wealth u_i, in the non-trivial (and realistic) case of incomplete markets Θ̄ > 0, is given by

$$f^{EG}(u_i) = \frac{\mathcal{N}^{EG}}{u_i^{\alpha^{EG}+1}}\, e^{-(\alpha^{EG}-1)/u_i}, \qquad (18)$$

where $\mathcal{N}^{EG}$ is a normalization constant, and

$$\alpha^{EG} = 1 + \frac{2\tau_k}{\Delta s (1-\tau_k)^2 \rho^* \bar\Theta} \qquad (19)$$

is the Pareto exponent. We remark that while capital taxation τk has no direct effect on growth, it has a direct effect on inequality.⁵ Hence capital taxes do not (directly) affect growth, but have a crucial redistributive function: wealth is redistributed away from wealthy to poor dynasties by an amount proportional to aggregate wealth, preventing possible ever-spreading wealth levels and stabilizing the equilibrium distribution of relative wealth. Finally, the Pareto exponent is continuous across the transition from a stationary to an endogenously growing economy, i.e.

$$\lim_{s\rho^*-\nu\to 0^-} \alpha = \lim_{s\rho^*-\nu\to 0^+} \alpha^{EG},$$

though it has a singular behavior in the first derivative (with respect to ν or s). We remark that the Pareto exponent α^{EG} decreases with the saving rate s, the return on capital ρ* and the diversification of portfolios Θ̄, and it increases with τk; α^{EG} is, on the contrary, independent of ν. Interestingly, since ψ^{EG} increases with s and ρ*, we find an inverse relationship between growth and the Pareto exponent, i.e. a positive relationship between growth and wealth inequality. Indeed the Pareto exponent α^{EG} and the growth rate ψ^{EG} move in opposite directions under changes in the saving rate s and/or the return on capital ρ*. For example, an economy increasing its saving rate s (or its return on capital ρ*) should move to an equilibrium where both its growth rate and its wealth inequality (in the top tail of the distribution of wealth) are larger than before. The behavior of the Pareto exponent and of the growth rate is illustrated in Figure 1 for a particular choice of the production function.

⁵ The results above, in the limit τk → 0, do not reproduce the behavior of the economy with τk = 0: indeed, Eq. (16), with τk = 0 and H_{i,i'} = 0 for i ≠ i', describes independent log-normal processes u_i(t).

Fig. 1 Behavior of the Pareto exponent as a function of the parameter ν for an economy where g(λ) = [ελ^γ + 1 − ε]^{1/γ} (constant elasticity of substitution technology) with ε = 0.2 and γ = 0.7. The other parameters take values: a = 1.0, s = 0.2, τk = 0.2 and ΔΘ̄ = 300
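The joint movement of ψ^{EG} and α^{EG} under changes in the saving rate follows directly from Eqs. (15) and (19). The sketch below traces it for illustrative parameter values (ρ* is held fixed, so the indirect effect of s on ρ* is ignored):

```python
def growth_rate(s, rho_star, nu):
    """Endogenous growth rate, Eq. (15): psi_EG = s*rho* - nu."""
    return s * rho_star - nu

def pareto_exponent_eg(tau_k, Delta, s, rho_star, Theta_bar):
    """Endogenous-growth Pareto exponent, Eq. (19)."""
    return 1 + 2 * tau_k / (Delta * s * (1 - tau_k)**2 * rho_star * Theta_bar)

# Illustrative values; s*rho* > nu throughout, so the economy grows endogenously
nu, rho_star, tau_k, Delta, Theta_bar = 0.03, 0.25, 0.2, 2.0, 0.5

for s in (0.15, 0.20, 0.25, 0.30):
    psi = growth_rate(s, rho_star, nu)
    alpha = pareto_exponent_eg(tau_k, Delta, s, rho_star, Theta_bar)
    print(f"s = {s:.2f}:  psi_EG = {psi:+.4f},  alpha_EG = {alpha:.3f}")

# As s rises the growth rate increases while alpha_EG falls: faster growth
# goes together with a fatter top tail, i.e. more wealth inequality.
```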

4 Conclusions and Future Research

This paper discusses how the equilibrium distribution of wealth can be derived from the equilibrium of an economy with a large number of firms and households, who interact through the capital and the labour markets. Under incomplete markets, the top tail of the equilibrium distribution of wealth is well represented by a Pareto distribution, whose exponent depends on the saving rate, on the net return on capital, on the growth rate of the population, on the tax on capital income and on the degree of diversification of portfolios. On the other hand, the bottom tail of the distribution mostly depends on the working of the labour market: a labour market with centralized bargaining, where workers do not bear any risk, leads to lower wealth inequality. Our framework neglects important factors which have been shown to have a relevant impact on the distribution of wealth (see Davies and Shorrocks [4]). Moreover, our analysis concerns the equilibrium distribution of wealth and neglects out-of-equilibrium behavior and issues related to the speed of convergence. The relationship between the distribution of wealth and the distribution of income, as well as its relation with the distribution of firm sizes, is a further interesting extension of our analysis. An additional interesting aspect is that of finite-size effects in aggregate fluctuations. This issue has recently been addressed by Gabaix [5] in an economy in which aggregate wealth exhibits a stochastic behavior. In the light of our findings, the latter behavior can arise because of correlations in productivity shocks, which were neglected here, because dynasties concentrate their investments in few firms/assets, or because the number of firms/assets is much smaller than the number of dynasties. This extension would draw a theoretical link between the dynamics of the distribution of wealth, the distribution of firm size and the business cycle.

Acknowledgements We thank the seminar participants in Bologna, Pisa and Trento, and Anthony Atkinson and Vincenzo Denicolò for useful comments. Usual disclaimers apply.

References
1. Atkinson A.B. and Harrison A.J. (1978), Distribution of Personal Wealth in Britain, Chap. 3, Cambridge: Cambridge University Press
2. Aiyagari R. (1994), Uninsured Idiosyncratic Risk and Aggregate Saving, Quarterly Journal of Economics, 109, 659-684
3. Chatterjee A., Yarlagadda S. and Chakrabarti B.K. (eds) (2005), Econophysics of Wealth Distributions, Berlin: Springer
4. Davies J.B. and Shorrocks A.F. (1999), The Distribution of Wealth, in Handbook of Income Distribution, A.B. Atkinson and F. Bourguignon (eds), Amsterdam: Elsevier
5. Gabaix X. (2008), The Granular Origins of Aggregate Fluctuations, SSRN working paper: http://ssrn.com/abstract=1111765
6. Fiaschi D. and Marsili M. (2009), Distribution of Wealth and Incomplete Markets: Theory and Empirical Evidence, DSE Discussion Paper 2009/83, University of Pisa, Italy (available at http://ideas.repec.org/p/pie/dsedps/2009-83.html)
7. Guiso L., Haliassos M. and Jappelli T. (eds) (2001), Household Portfolios, Cambridge: MIT Press
8. Klass O., Biham O., Levy M., Malcai O. and Solomon S. (2006), The Forbes 400 and the Pareto wealth distribution, Economics Letters, 90, 290-295
9. Pareto V. (1897), Corso di Economia Politica, Busino G., Palomba G. edn (1988), Torino: UTET

Wealth Redistribution in Boltzmann-like Models of Conservative Economies

Giuseppe Toscani and Carlo Brugna

Abstract One of the goals of Boltzmann-like models for wealth distribution in conservative economies is to predict the stationary distribution of wealth in terms of the microscopic trade interactions. In a recent paper [1], a kinetic model for wealth distribution able to reproduce the salient features of this stationary curve by including taxation and redistribution has been introduced and discussed. This continuous model represents the natural extension of several recent studies [11, 12, 15, 18], in which discrete simplified models for the exploitation of finite resources by interacting agents, where each agent receives a random fraction of the available resources, have been considered. Here we show that a simple modification of the kinetic model introduced in [1] can be studied numerically to quantify the effect of various taxation regimes.

Giuseppe Toscani
Department of Mathematics "F. Casorati", via Ferrata 1, 27100 Pavia, Italy. e-mail: [email protected]

Carlo Brugna
Department of Mathematics "F. Enriques", via Saldini 50, 20133 Milano, Italy. e-mail: [email protected]

1 Introduction

The statistical mechanics approach to multi-agent economies has become popular in recent years, due to its flexibility in modelling wealth exchange processes which produce a distribution of wealth similar to that observed in real economies. Most results [2, 3, 6, 7, 8, 13, 14, 16, 19, 20] deal with microscopic models of markets where the economic activity is considered as a scattering process, and the evolution of wealth obeys a kinetic equation of Boltzmann type. The features typically incorporated in kinetic trade models are saving effects and randomness. Saving means that each agent is guaranteed to retain at least a certain minimal fraction of his initial wealth at the end of the trade. This concept has been introduced in [3], where a fixed saving rate for all agents has been proposed, and generalized in [4] by introducing an individual saving rate. Randomness means that the amount of money changing hands is non-deterministic. Among others, this idea has been developed in [7] to include the effects of a risky market. Numerous numerical simulations for models of the prescribed type have been carried out, with different mechanisms for saving and varying degrees of randomness (see the recent book [5] for an overview of recent results). In most of the models introduced so far, the microscopic wealth exchange process leaves the total mean wealth unchanged. A substantial difference in the final behavior of the model (presence or absence of Pareto-tailed steady states) can then be observed depending on whether binary trades are pointwise conservative or conservative in the mean [9, 17]. In all cases, however, the asymptotic distribution of wealth depends completely on the microscopic structure of binary trades. Almost all the microscopic models of markets introduced so far are fully based on scattering processes (trades), while other realistic and essential events, like taxation, are not taken into account. The few exceptions are represented by the contributions [11, 12, 15, 18], which succeeded in introducing discrete markets in which a redistribution mechanism is present. A continuous model in the spirit of collisional kinetic theory has recently been developed by Bisi and coworkers [1]. The model considered in [1] is based on a kinetic equation of Boltzmann type, similar to the ones introduced in [17]. The novelty was to introduce a simple taxation mechanism at the level of the single trade, putting aside a portion of the mean wealth of the society which is subsequently redistributed to agents to keep the total wealth constant. The mechanism of redistribution has been chosen sufficiently flexible to redistribute to agents a constant amount of wealth independently of the wealth itself, and to mimic a further global taxation that redistributes proportionally (or inversely proportionally) to their wealth. In this paper we consider a variation of the collision mechanism introduced in [1] to produce a taxation mechanism, leaving the redistribution operator unchanged. This new kinetic model is described in Section 2. The analysis of moment evolution then clarifies the role of the redistribution and taxation mechanism. A simple way to obtain the Pareto index in this situation is briefly presented in Section 3. Numerical results on the solution of the kinetic equation are subsequently illustrated in Section 4, where the formation of bimodal distributions is shown in the presence of a very high taxation parameter.

2 A Kinetic Model with Redistribution

A systematic study of the time-evolution of the wealth distribution among individuals in a simple economy, together with a reasonable explanation of the formation of tails in this distribution, has recently been achieved by means of kinetic collision-like models in [17] (see also [9, 10]). In this picture, the time variation of the wealth distribution f(v,t), where v ∈ R₊ represents the wealth variable, is assumed to be a consequence of binary collision-like trade events. In a suitable scaling, the effect of these collisions is quantitatively described by a Boltzmann-like equation

$$\frac{\partial f}{\partial t} = Q(f,f). \qquad (1)$$

The bilinear operator Q describes the change of f due to trades among agents. The binary trade is determined by the linear exchange rules

$$v^* = p_1 v + q_1 w; \qquad w^* = p_2 v + q_2 w. \qquad (2)$$

Here (v, w) denote the (positive) money of two arbitrary individuals before the trade, and (v*, w*) the money after the trade. In general, the transaction coefficients p_i, q_i, i = 1, 2 can be either given constants or random quantities, with the obvious constraint of being nonnegative. Following [1], the random distribution (or collision kernel, in the kinetic language) is assumed to be independent of the wealth variables v and w, and independent of time. Moreover, only conservative models, characterized by the further property

$$\langle p_1 + p_2 \rangle = 1, \qquad \langle q_1 + q_2 \rangle = 1, \qquad (3)$$

will be considered. In (3), ⟨·⟩ denotes the expectation value. A number of different trade models fit into this class. The first, by Chakraborti and colleagues [3, 2], conserves money during the exchange and allows savings that can be a fixed and equal percentage of the initial money held by each agent. Allowing the saving percentage to take on a random character [4] then introduces a power-law character to the distribution for high incomes, which can be shown to allow the existence of power moments only up to exactly order one. The presence of random terms in the trade, introduced by Cordier, Pareschi and one of the authors [7], which destroy the pointwise conservation of wealth, was subsequently shown to be responsible for a robust convergence to a steady distribution with tails [17]. The homogeneous Boltzmann Eq. (1) can easily be written in weak form. This corresponds to saying that the solution to (1) satisfies, for all smooth functions φ(v),

$$\frac{d}{dt}\int_{\mathbb R_+} f(v)\varphi(v)\, dv = \frac12 \int_{\mathbb R_+^2} \big\langle \varphi(v^*) + \varphi(w^*) - \varphi(v) - \varphi(w) \big\rangle f(v) f(w)\, dv\, dw. \qquad (4)$$

Note that (4) implies that f(v,t) remains a probability density if it is so initially,

$$\int_{\mathbb R_+} f(v,t)\, dv = \int_{\mathbb R_+} f_0(v)\, dv = 1. \qquad (5)$$

Moreover, owing to (3), the total mean wealth is also preserved in time,

$$m(t) = \int_{\mathbb R_+} v f(v,t)\, dv = \int_{\mathbb R_+} v f_0(v)\, dv = m(0). \qquad (6)$$


In [1], by assuming the transaction coefficients p_i, q_i, i = 1, 2 bounded from below,

$$\min_{i=1,2}\{p_i, q_i\} > \delta, \qquad (7)$$

for a given small constant δ > 0, the trade

$$v_\varepsilon^* = (p_1 - \varepsilon)v + q_1 w; \qquad w_\varepsilon^* = p_2 v + (q_2 - \varepsilon)w \qquad (8)$$

has been considered. Since ε ≤ δ, both v*_ε and w*_ε are non-negative, but conservation of wealth is lost:

$$\langle v_\varepsilon^* + w_\varepsilon^* \rangle = (1-\varepsilon)(v+w). \qquad (9)$$

In trade (8) a percentage of the total wealth involved in the trade is not returned to the agents. This small fraction that does not return can be considered as a tax levied on the trade. The weak point of trade (8) is that the resulting post-trade wealths are non-negative only if ε ≤ δ, while from a numerical point of view it would certainly be interesting to check the effects of a taxation of any size. To this extent, we modify the post-trade wealths by assuming, for 0 < ε < 1,

$$v_\varepsilon^* = (1-\varepsilon)(p_1 v + q_1 w); \qquad w_\varepsilon^* = (1-\varepsilon)(p_2 v + q_2 w). \qquad (10)$$

This new collision rule satisfies (9). Let Q_ε(f,f) denote the collision operator governing the non-conservative process which corresponds to trade (10). In weak form,

$$\frac{d}{dt}\int_{\mathbb R_+} f(v)\varphi(v)\, dv = \frac12 \int_{\mathbb R_+^2} \big\langle \varphi(v_\varepsilon^*) + \varphi(w_\varepsilon^*) - \varphi(v) - \varphi(w) \big\rangle f(v) f(w)\, dv\, dw. \qquad (11)$$

Note that, due to (9), the total mean wealth, on account of (3), decays exponentially in time,

$$m(t) = \int_{\mathbb R_+} v f(v,t)\, dv = m(0)\exp\{-\varepsilon t\}. \qquad (12)$$

The percentage of mean wealth collected through taxation can be returned to the agents in such a way that the total wealth is left unchanged. In [1] this has been done by resorting to the redistribution operator

$$R_\chi^\varepsilon(f)(v,t) = \varepsilon \frac{\partial}{\partial v}\Big[\big(\chi v - (\chi+1)m(t)\big)\, f(v,t)\Big], \qquad (13)$$

where χ is a given constant. The presence of m(t) makes the operator R^ε_χ nonlinear. The choice of a linear weight factor multiplying the distribution function inside the square brackets in (13) involves in the mechanism only the most meaningful moments, those of order zero and one. Such a weight function contains only one disposable real parameter χ, a constant which characterizes the type of redistribution and determines the slope of the straight line, as well as the value of v, whether physical or non-physical, at which the weight itself vanishes. The redistribution operator preserves the number of agents and actually redistributes the total amount of money that is collected by taxation. In fact, note that we have, whatever the constant χ,

$$\int_{\mathbb R_+} v\, R_\chi^\varepsilon(f)(v,t)\, dv = \varepsilon m(t), \qquad (14)$$

and, provided f(v,t) satisfies in addition the "boundary" condition f(0,t) = 0, also

$$\int_{\mathbb R_+} R_\chi^\varepsilon(f)(v,t)\, dv = 0. \qquad (15)$$

The operator R^ε_χ can be seen as the sum of two different contributions, R^ε_χ = T^ε + D^ε_χ, where

$$T^\varepsilon f(v,t) = -\varepsilon m(t)\frac{\partial}{\partial v} f(v,t), \qquad (16)$$

and

$$D_\chi^\varepsilon(f)(v,t) = \varepsilon\chi \frac{\partial}{\partial v}\Big[(v - m(t))\, f(v,t)\Big]. \qquad (17)$$

The operator T^ε is clearly a transport operator, and its effect is to shift the underlying distribution function uniformly to the right, in such a way that the mean wealth lost in the taxation of trades is completely returned. Its effect is therefore a uniform redistribution among agents. The second operator is a drift operator, which corresponds to a selective redistribution and may correspond to some partition strategy. From the properties of the drift operator, one can deduce that for positive values of the parameter χ, money is redistributed to agents with little wealth, whereas agents of large wealth are taxed once more. When χ < −1, one has the opposite situation, in which the poorest part of the population supplies additional resources to the richest part; for this reason one usually excludes this range of parameter values from the analysis. In all cases, however, the drift operator D^ε_χ(f)(v,t) does not modify the mean wealth of the population. The time-evolution of the wealth distribution in the presence of taxation/redistribution is given by the solution to the kinetic equation

$$\frac{\partial f(v,t)}{\partial t} = Q_\varepsilon(f,f)(v,t) + R_\chi^\varepsilon(f)(v,t), \qquad (18)$$

where the bilinear operator Q_ε accounts for taxation in trades, while the differential operator R^ε_χ accounts for redistribution and (possibly) additional taxation. Sufficient conditions on the initial density which guarantee that the process (18), governed by the full operator Q_ε + R^ε_χ, not only preserves the number of agents but is also globally conservative in the mean, namely m(t) = m(0) = m, have been quantified in [1].


3 Pareto Tails and Taxation

As usual in Boltzmann's framework, information on the stationary equilibrium distribution may be obtained, for general values of the parameters, from the evolution of moments, which is governed by the weak form of the kinetic equation with the test function φ(v) = v^n:

$$\frac{d}{dt}\int_{\mathbb R_+} v^n f(v)\, dv - \varepsilon \int_{\mathbb R_+} v^n \frac{\partial}{\partial v}\Big[\big(\chi v - (\chi+1) m\big) f\Big]\, dv = \frac12 \int_{\mathbb R_+^2} \big\langle (v_\varepsilon^*)^n + (w_\varepsilon^*)^n - v^n - w^n \big\rangle f(v) f(w)\, dv\, dw. \qquad (19)$$

Using formulas (10), the integral on the right-hand side may be recast as

$$\frac12 \int_{\mathbb R_+^2} \big\langle (v_\varepsilon^*)^n + (w_\varepsilon^*)^n - v^n - w^n \big\rangle f(v) f(w)\, dv\, dw = \frac12 (1-\varepsilon)^n \sum_{k=1}^{n-1} \binom{n}{k}\big\langle p_1^{n-k} q_1^k + p_2^{n-k} q_2^k \big\rangle M_{n-k} M_k - c_n M_n, \qquad (20)$$

where

$$c_n = 1 - \frac12 (1-\varepsilon)^n \big\langle p_1^n + q_1^n + p_2^n + q_2^n \big\rangle. \qquad (21)$$

In (20),

$$M_j = \int_{\mathbb R_+} v^j f(v)\, dv.$$

The remaining contribution due to R^ε_χ is handled simply by integration by parts, to yield finally

$$\frac{dM_n}{dt} + n\varepsilon\big[\chi M_n - (\chi+1)m\, M_{n-1}\big] = \frac12 (1-\varepsilon)^n \sum_{k=1}^{n-1} \binom{n}{k}\big\langle p_1^{n-k} q_1^k + p_2^{n-k} q_2^k \big\rangle M_{n-k} M_k - c_n M_n. \qquad (22)$$

Therefore, the moments of the steady state are recursively obtained from the formula

$$\Big[1 - \frac12(1-\varepsilon)^n\big\langle p_1^n + q_1^n + p_2^n + q_2^n \big\rangle + \varepsilon\chi n\Big] M_n^\infty = \varepsilon(\chi+1)m\, n\, M_{n-1}^\infty + \frac12(1-\varepsilon)^n \sum_{k=1}^{n-1} \binom{n}{k}\big\langle p_1^{n-k} q_1^k + p_2^{n-k} q_2^k \big\rangle M_{n-k}^\infty M_k^\infty. \qquad (23)$$

Let us denote by S_ε(n) the coefficient of the moment M_n of f,

$$S_\varepsilon(n) = 1 - \frac12(1-\varepsilon)^n\big\langle p_1^n + q_1^n + p_2^n + q_2^n \big\rangle + \varepsilon\chi n. \qquad (24)$$


As long as S_ε(n) is strictly positive, formula (23) allows one to compute the moment of order n in terms of all lower-order moments, starting from M_0^∞ = 1 and M_1^∞ = m, even in the absence of an explicit knowledge of the steady distribution. If S_ε(n) is strictly positive for all n > 0, then all moments of the stationary wealth distribution are well defined, and one has slim tails. On the other hand, if for some n = n̄ the coefficient of M_n^∞ becomes negative, a breakdown in the procedure appears (all moments are bound to be positive), meaning a lack of higher-order moments, and thus implying a fat Pareto-like tail. The example in Section C of [9] can be rephrased in the presence of taxes, to show that Pareto tails in zone III of Figure 1 still remain in a regime of low taxation. We will go back to this example in the next section. In all cases, the key function which gives the exact number of finite moments is given by

$$S_\varepsilon(s) = 1 - \frac12(1-\varepsilon)^s\big\langle p_1^s + q_1^s + p_2^s + q_2^s \big\rangle + \varepsilon\chi s. \qquad (25)$$

This function is convex in s > 0, with S_ε(0) = 1 and S_ε(1) = 0, for all values of the parameters ε and χ, as soon as ε < 1. The results from [17, 9] can be generalized to the present situation to give the following: unless S_ε(s) ≥ 0 for all s > 0, any solution f(t;w) to the Boltzmann Eq. (18) tends to a steady wealth distribution P∞(w) = f∞(w), which depends on the initial wealth distribution P(0;w) only through the conserved mean wealth M > 0 and the parameters ε and χ. Moreover, exactly one of the following is true:

(Pareto Tails) if S_ε(s) = 0 for some s > 1, then P∞(w) has a Pareto tail of index s;
(Slim Tails) if S_ε(s) < 0 for all s > 1, then P∞(w) has a slim tail;
(Concentration) if S_ε(s) = 0 for some 0 < s < 1, then P∞(w) = δ₀(w), a Dirac delta at w = 0.

We remark that the presence of ε and χ in the expression of S_ε always has the effect of destroying the Pareto tail or at least increasing the value of the Pareto index.
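For the CPT coefficients (26) introduced in the next section, with η, η_* taking the values ±σ with equal probability, the expectations in (25) have a closed form and the Pareto index is the root s > 1 of S_ε(s) = 0. A minimal root-finding sketch with illustrative parameter values:

```python
def S(s, lam=0.5, sigma=0.6, eps=0.01, chi=0.01):
    """S_eps(s) of Eq. (25) for the CPT coefficients (26), eta = +/- sigma."""
    a, b = (1 + lam) / 2, (1 - lam) / 2
    # <p1^s> = <q2^s> = ((a+sigma)^s + (a-sigma)^s)/2; q1 = p2 = b are deterministic
    mean = (a + sigma)**s + (a - sigma)**s + 2 * b**s
    return 1 - 0.5 * (1 - eps)**s * mean + eps * chi * s

# Bisection for the nontrivial root s > 1, if S changes sign on the bracket
lo, hi = 1.001, 40.0
if S(lo) * S(hi) < 0:
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if S(lo) * S(mid) > 0 else (lo, mid)
    print(f"Pareto index s = {0.5 * (lo + hi):.3f}")
else:
    print("no sign change on bracket: slim tails (or concentration) here")
```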

4 Numerical Results

To illustrate the relaxation behavior and to study the influence of the different model parameters, we have performed a series of kinetic Monte Carlo simulations for the Boltzmann model presented in the previous section. The trade coefficients in the simulations are those of the CPT model introduced in [7], which we write in the form

$$p_1 = \frac{1+\lambda}{2} + \eta, \qquad q_1 = \frac{1-\lambda}{2}, \qquad p_2 = \frac{1-\lambda}{2}, \qquad q_2 = \frac{1+\lambda}{2} + \eta_*, \qquad (26)$$

where the positive constant λ < 1 measures, via its complement to unity, the common saving propensity of the transacting agents, and η, η_* are random variables representing, according to an idea of [7], the market returns of an open economy, both taking values in the open interval $\left(-\frac{1+\lambda}{2}, \frac{1+\lambda}{2}\right)$, with zero mean and equal variance σ², namely

$$\langle\eta\rangle = \langle\eta_*\rangle = 0, \qquad \langle\eta^2\rangle = \langle\eta_*^2\rangle = \sigma^2. \qquad (27)$$

In this way the transaction operation, free from taxation and redistribution, is conservative in the mean, but not necessarily pointwise conservative (at the microscopic scale). Generally, in this kind of simulation, known as Direct Simulation Monte Carlo (DSMC) or Bird's scheme, pairs of agents are randomly and non-exclusively selected for binary trades, and exchange wealth according to the trading rule under consideration. To extend this procedure to the situation in which the redistribution operator is present, we pursue the following approach, which is composed of various steps. Let us denote by 2N the number of traders in our simulation. One time step in our simulation corresponds to N interactions. In the first stage, we randomly select two agents, with wealths (v_i, w_j). Once the agents are selected, the trade takes place and wealth is exchanged according to the trading rule (26). In our experiments, the random variables η, η_* are simply represented by independent coins, both taking the values σ and −σ with equal probabilities. Assuming η = σ (a winning trade),

$$v_i^* = \left(\frac{1+\lambda}{2} + \sigma\right) v_i + \frac{1-\lambda}{2}\, w_j > v_i.$$

Therefore, in the absence of taxation, by winning the agent increases his wealth after the trade. Since it is reasonable that the same could happen in the presence of taxation (which corresponds to the realistic assumption that an agent plays only if there is a return), we assume that the rate ε satisfies a smallness assumption, given by (1 − ε)v_i^* > v_i.

For values of ε larger than about 0.3, while Pareto tails are lost and an exponential decay at infinity appears (in agreement with the theoretical prediction of formula (25)), the density starts to develop a bimodal profile, independently of the action of the parameter χ. The steady distribution in this range of the parameter is presented in Figure 4.
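A compressed sketch of such a DSMC scheme: at each step, random pairs trade according to (26) combined with the taxation rule (10), after which the collected wealth is redistributed uniformly over all agents, i.e. the pure transport part T^ε of the redistribution operator (the case χ = 0). Parameter values are illustrative, and the implementation details (pair selection, step counts) are our own simplifications:

```python
import numpy as np

rng = np.random.default_rng(3)

n_agents, steps = 5_000, 2_000
lam, sigma, eps = 0.5, 0.6, 0.05      # saving parameter, coin size, taxation rate

w = np.ones(n_agents)                 # unit initial wealth

for _ in range(steps):
    i = rng.integers(0, n_agents, n_agents // 2)
    j = rng.integers(0, n_agents, n_agents // 2)
    eta  = rng.choice([-sigma, sigma], size=i.size)
    etas = rng.choice([-sigma, sigma], size=i.size)
    vi, wj = w[i].copy(), w[j].copy()
    # taxed CPT trade: rule (26) combined with rule (10)
    w[i] = (1 - eps) * (((1 + lam) / 2 + eta) * vi + (1 - lam) / 2 * wj)
    w[j] = (1 - eps) * ((1 - lam) / 2 * vi + ((1 + lam) / 2 + etas) * wj)
    # uniform redistribution (chi = 0): the taxed share is returned equally,
    # so total wealth is conserved in the statistical mean
    w += eps * (vi + wj).sum() / n_agents

print(f"mean wealth: {w.mean():.3f}")
print(f"largest wealth: {w.max():.1f}")
```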

[Figure: left panel, the height max(P(w)) of the steady profile as a function of ε for χ = ε, χ = 0.1ε and χ = 10ε; right panel, the steady profile P(w) as a function of w for ε = 0.4, χ = 0.04.]

Fig. 4 Evolution of the peak of the steady profile in terms of ε (left); a bimodal distribution for large values of ε (right). (For the coloured version of this figure, contact the author)

Acknowledgements All authors acknowledge support from the Italian MIUR, project "Kinetic and hydrodynamic equations of complex collisional systems". GT acknowledges partial support of the Acc. Integ. program HI2006-0111.

References
1. Bisi M., Spiga G., Toscani G.: Kinetic models of conservative economies with wealth redistribution. (preprint) (2009)
2. Chakraborti A.: Distributions of money in models of market economy. Int. J. Modern Phys. C 13, 1315-1321 (2002)
3. Chakraborti A., Chakrabarti B.K.: Statistical Mechanics of Money: Effects of Saving Propensity. Eur. Phys. J. B 17, 167-170 (2000)
4. Chatterjee A., Chakrabarti B.K., Manna S.S.: Pareto Law in a Kinetic Model of Market with Random Saving Propensity. Physica A 335, 155-163 (2004)
5. Chatterjee A., Yarlagadda S.V., Chakrabarti B.K. (Eds.): Econophysics of Wealth Distributions. New Economic Window Series, Springer-Verlag (Italy) (2005)
6. Chatterjee A., Chakrabarti B.K., Stinchcombe R.B.: Master equation for a kinetic model of trading market and its analytic solution. Phys. Rev. E 72, 026126 (2005)
7. Cordier S., Pareschi L., Toscani G.: On a kinetic model for a simple market economy. J. Stat. Phys. 120, 253-277 (2005)
8. Drăgulescu A., Yakovenko V.M.: Statistical mechanics of money. Eur. Phys. Jour. B 17, 723-729 (2000)
9. Düring B., Matthes D., Toscani G.: Kinetic Equations modelling Wealth Redistribution: A comparison of Approaches. Phys. Rev. E 78, 056103 (2008)
10. Düring B., Matthes D., Toscani G.: A Boltzmann type approach to the formation of wealth distribution curves. Riv. Mat. Univ. Parma, in press (2009)
11. Garibaldi U., Scalas E., Viarengo P.: Statistical equilibrium in simple exchange games II. The redistribution game. Eur. Phys. Jour. B 60(2), 241-246 (2007)
12. Guala S.: Taxes in a simple wealth distribution model by inelastically scattering particles. (preprint) arXiv:0807.4484v1
13. Hayes B.: Follow the money. American Scientist 90, 400-405 (2002)
14. Ispolatov S., Krapivsky P.L., Redner S.: Wealth distributions in asset exchange models. Eur. Phys. Jour. B 2, 267-276 (1998)
15. Iglesias J.R., Gonçalves S., Pianegonda S., Vega J.L., Abramson G.: Wealth redistribution in our small world. Physica A 327, 1217 (2003)
16. Malcai O., Biham O., Richmond P., Solomon S.: Theoretical analysis and simulations of the generalized Lotka-Volterra model. Phys. Rev. E 66, 031102 (2002)
17. Matthes D., Toscani G.: On steady distributions of kinetic models of conservative economies. J. Stat. Phys. 130, 1087-1117 (2008)
18. Pianegonda S., Iglesias J.R., Abramson G., Vega J.L.: Wealth redistribution with finite resources. Physica A 322, 667-675 (2003)
19. Slanina F.: Inelastically scattering particles and wealth distribution in an open economy. Phys. Rev. E 69, 046102 (2004)
20. Solomon S., Richmond P.: Stable power laws in variable economies; Lotka-Volterra implies Pareto-Zipf. Eur. Phys. J. B 27, 257-262 (2002)

Multi-species Models in Econo- and Sociophysics

Bertram Düring

Abstract In econo- and sociophysical modeling of heterogeneous problems it is often natural to study the time-evolution of distribution functions of different, interacting species. Such models can be seen as the analogue of the physical problem of a mixture of gases, where the molecules of the different gases exchange momentum during collisions. We give two examples of problems where models with multiple, interacting species arise naturally. One is concerned with the formation of bimodal wealth or income distributions in a society; the other considers the process of opinion formation in a heterogeneous society which consists of two groups, one group of ordinary people and one group of so-called strong leaders.

1 Introduction

Various kinetic models to describe economic and sociologic phenomena have been proposed in recent years. Such models successfully use methods from statistical mechanics to describe the behavior of a large number of interacting individuals or agents in an economy or individuals in a society. This leads to generalizations of the classical Boltzmann equation for gas dynamics. Typical applications are the evolution of the distribution of wealth in an economy and the process of opinion formation in a homogeneous society. The classical theory for homogeneous gases is adapted to the economic (or sociologic, respectively) framework in the following way: molecules and their velocities are replaced by agents (individuals) and their wealth (opinion), and instead of binary collisions, one considers trades (information exchange) between two agents (individuals). A variety of models has been proposed and studied in view of the relation between parameters in the microscopic rules and the resulting macroscopic statistics.

Bertram Düring
Institut für Analysis und Scientific Computing, Technische Universität Wien, Wiedner Hauptstraße 8-10, 1040 Wien, Austria. e-mail: [email protected]


In the prevalent models, the situation is typically homogeneous. To model certain realistic situations one needs to consider inhomogeneous models. One solution is to study stratified models where the distribution function depends on an additional variable, as e.g. in [3]. This leads to inhomogeneous Boltzmann equations for the distribution function f = f(x,w,t) which are of the following form:

$$\frac{\partial f}{\partial t} + \Phi(x,w)\cdot\nabla_w f = \frac{1}{\tau} Q(f,f).$$

Clearly, the choice of the field Φ(x,w) which describes the stratification trajectories plays a crucial role. It may not be easy to determine a suitable field from the economic or sociologic problem, in contrast to the physical situation where the law of motion yields the right choice. Another way, which arises naturally in certain situations, is to consider the time-evolution of distribution functions of different, interacting species. To some extent this can be seen as the analogue of the physical problem of a mixture of gases, where the molecules of the different gases exchange momentum during collisions [1]. This leads to systems of Boltzmann-like equations of the form

$$\frac{\partial f_i(w,t)}{\partial t} = \sum_{j=1}^n \frac{1}{\tau_{ij}}\, Q(f_i, f_j)(w), \qquad i = 1,\dots,n. \qquad (1)$$

To model exchange of individuals (mass) between different species, additional collision operators can be present on the right-hand side of (1) which are reminiscent of chemical reactions in the physical situation. In the following sections, we will give two examples of problems in econo- and sociophysics where models with multiple species arise naturally. One is concerned with the formation of bimodal distributions in a society; the other considers the process of opinion formation in a heterogeneous society which consists of two groups, one group of ordinary people and one group of so-called strong leaders. Both lead to a system of Boltzmann-type equations of the form (1).

2 Multi-modal Wealth Distributions

A kinetic model for wealth distribution, where agents from n different countries or social groups trade with each other, has been introduced in [4]. It is a generalization of the Cordier-Pareschi-Toscani (CPT) model presented in [2]. When two agents, one from country i (i = 1, 2, ..., n) with pre-trade wealth v and the other from country j (j = 1, 2, ..., n) with pre-trade wealth w, interact, their post-trade wealths v* and w* are given by

$$v^* = (1-\gamma_i\gamma)v + \gamma_j\gamma w + \eta_{ij} v, \qquad (2)$$
$$w^* = (1-\gamma_j\gamma)w + \gamma_i\gamma v + \eta_{ji} w. \qquad (3)$$


In (2), (3), the trade depends on the transaction parameters γ and γ_i (i = 1, ..., n), while the risks of the market are described by η_{ij} (i, j = 1, ..., n), which are equally distributed random variables with zero mean and variance σ²_{ij} = λ_{ij}γ. The different variances for domestic trades in each country and for international trades reflect the different risk structures of these trades. The trading rule (2), (3) preserves, as in the original CPT model, the total wealth in the statistical mean,

$$\langle v^* + w^* \rangle = \big(1 + \langle\eta_{ij}\rangle\big)v + \big(1 + \langle\eta_{ji}\rangle\big)w = v + w. \qquad (4)$$
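Conservation in the mean, Eq. (4), is immediate to verify by direct sampling of the trading rule (2), (3). A minimal check, assuming symmetric two-point distributions η_{ij} = ±μ (the choice also used in the simulations below) and arbitrary pre-trade wealths:

```python
import numpy as np

rng = np.random.default_rng(4)

gamma, gamma_i, gamma_j = 1.0, 0.125, 0.01   # transaction parameters
mu = 0.15                                    # eta_ij takes values +/- mu

n = 1_000_000
v = rng.exponential(1.0, n)                  # arbitrary pre-trade wealths
w = rng.exponential(1.0, n)
eta_ij = rng.choice([-mu, mu], n)
eta_ji = rng.choice([-mu, mu], n)

# trade rule (2), (3)
v_star = (1 - gamma_i * gamma) * v + gamma_j * gamma * w + eta_ij * v
w_star = (1 - gamma_j * gamma) * w + gamma_i * gamma * v + eta_ji * w

print("mean pre-trade wealth :", (v + w).mean())
print("mean post-trade wealth:", (v_star + w_star).mean())
# The two agree up to sampling noise, since <eta_ij> = <eta_ji> = 0.
```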

In this setting, in analogy with the classical kinetic theory of mixtures of rarefied gases, we study the evolution of the distribution function of each country as a function of the wealth w ∈ R₊ and time t ∈ R₊, f_i = f_i(w,t). These distribution functions obey a system of n Boltzmann-like equations, given by

$$\frac{\partial f_i(w,t)}{\partial t} = \sum_{j=1}^n \frac{1}{\tau_{ij}}\, Q(f_i, f_j)(w), \qquad i = 1,\dots,n. \qquad (5)$$

Herein, τ_{ij} are suitable relaxation times, which depend on the velocity of money circulation [10]. The Boltzmann-like collision operators read

$$Q(f_i, f_j)(w) = \Big\langle \int_{\mathbb R_+} \Big[\frac{1}{J_{ij}}\, f_i(v^*) f_j(w^*) - f_i(v) f_j(w)\Big]\, dv \Big\rangle. \qquad (6)$$

In (6), (v*, w*) denote the pre-trade pair that produces the post-trade pair (v, w), following rules like (2) and (3), while J_{ij} denotes the Jacobian of the transformation of (v, w) into (v*, w*). Finally, ⟨·⟩ denotes the operation of mean with respect to the random quantities η_{ij}. A useful way of writing the collision operator (6), which allows one to avoid the Jacobian, is the so-called weak form. It corresponds to considering, for all smooth functions φ(w),

$$\int_{\mathbb R_+} Q(f_i, f_j)(w)\varphi(w)\, dw = \Big\langle \int_{\mathbb R_+^2} \big[\varphi(v^*) - \varphi(v)\big] f_i(v) f_j(w)\, dv\, dw \Big\rangle. \qquad (7)$$

In the continuous trading limit (γ, σ_{ij} → 0 with fixed quotient σ²_{ij}/γ = λ_{ij}) one obtains [4] a system of Fokker-Planck equations

$$\frac{\partial g_i}{\partial \tau} = \sum_{j=1}^n \left[\frac{\lambda_{ij}}{2\tau_{ij}}\, \frac{\partial^2}{\partial v^2}\big(\rho_j v^2 g_i\big) + \frac{1}{\tau_{ij}}\, \frac{\partial}{\partial v}\big(\rho_j(\gamma_i v - \gamma_j m_j)\, g_i\big)\right], \qquad i = 1,\dots,n, \qquad (8)$$

for the scaled densities gi (w, τ ) = fi (w,t) with τ = γ t. To illustrate the relaxation behavior and to study the influence of the different model parameters, we have performed a series of kinetic Monte Carlo simulations.


We will focus on the situation of two countries, i.e. n = 2. Hence, let us consider

$$\frac{\partial f_1(w,t)}{\partial t} = \frac{1}{\tau_{11}} Q(f_1,f_1)(w) + \frac{1}{\tau_{12}} Q(f_1,f_2)(w),$$
$$\frac{\partial f_2(w,t)}{\partial t} = \frac{1}{\tau_{22}} Q(f_2,f_2)(w) + \frac{1}{\tau_{21}} Q(f_2,f_1)(w).$$

Herein, Q(f₁, f₁) and Q(f₂, f₂) represent the collision operators which describe the change of density due to binary domestic trades, while Q(f₁, f₂), Q(f₂, f₁) are the collision operators which describe the change of density due to binary international trades. In this kind of simulation, known as Direct Simulation Monte Carlo (DSMC) or Bird's scheme, pairs of agents are randomly and non-exclusively selected for binary collisions, and exchange wealth according to the trading rule under consideration. A detailed description of this procedure is given in [4]. In all our experiments, every agent possesses unit wealth initially. The relaxation in the CPT model occurs exponentially fast [5]. Hence, to compute a good approximation of the steady state it suffices to carry out the simulation for about 10⁴ time steps, and then average the wealth distribution over another 1000 time steps. In every experiment, we average over M = 100 such simulation runs. We consider two groups with N₁ = N₂ = 5000 agents. We investigate the relaxation behavior when the random variables η_{ij}, i, j ∈ {1, 2}, attain the values ±μ with probability 1/2 each. We set the coefficient γ = 1. Let μ = 0.15 and τ_{ij} = 1 for i, j ∈ {1, 2}. We choose γ₁ = 0.125 and γ₂ = 0.01. The histogram and the cumulative probability density are plotted in Figure 1. We observe a bimodal distribution in the histogram and a Pareto tail in the cumulative probability distribution. Such bimodal distributions (and polymodal distributions, in general) have been reported using real data for the income distributions in developing countries [7, 8]. The cumulative distribution function is dominated by

[Figure: left panel, probability vs. wealth w; right panel, cumulative probability vs. wealth w; both on logarithmic wealth scales.]

Fig. 1 Histogram of steady state distribution (left) and cumulative wealth distribution function (right) for γ1 = 0.125 and γ2 = 0.01

[Figure: two panels, probability vs. wealth w on logarithmic wealth scales.]

Fig. 2 Influence of ηi j : Wealth distribution with γ1 = 0.125, γ2 = 0.01 with η12 = η21 = 0.075 (left) and η12 = η21 = 0.225 (right) and η11 = η22 = 0.15 in both cases

the tail behavior of the second group with smaller γ, and shows a Pareto tail of the respective index. Comparative simulations show that the distance between the two peaks in the distribution decreases with decreasing difference between γ₁ and γ₂. To illustrate the influence of the risk parameter η_{ij}, we perform simulations with increased and decreased risk for international trades, i.e. we choose η₁₂ = η₂₁ = 0.075 and η₁₂ = η₂₁ = 0.225, respectively, while keeping the other parameters unchanged. The wealth distributions are shown in Figure 2. For η₁₂ = η₂₁ = 0.075, the bimodal profile is more pronounced, while the additional diffusion in the case η₁₂ = η₂₁ = 0.225 tends to blur the bimodal shape. For more details and analytical as well as numerical results, we refer to [4].
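A compressed reimplementation of this two-country experiment is sketched below. It follows the parameter choices quoted above (γ = 1, μ = 0.15, γ₁ = 0.125, γ₂ = 0.01, N₁ = N₂ = 5000) but uses far fewer time steps than the paper's 10⁴, and is a simplified stand-in for the authors' scheme, not their code:

```python
import numpy as np

rng = np.random.default_rng(5)

N1 = N2 = 5000
gamma, mu = 1.0, 0.15
g = np.array([0.125, 0.01])            # gamma_1, gamma_2
w = np.ones(N1 + N2)                   # unit initial wealth
country = np.concatenate([np.zeros(N1, int), np.ones(N2, int)])

def trade(i, j):
    """One binary trade between agents i and j, rule (2)-(3)."""
    gi, gj = g[country[i]], g[country[j]]
    e_ij = rng.choice((-mu, mu))       # eta_ij coin
    e_ji = rng.choice((-mu, mu))       # eta_ji coin
    vi, wj = w[i], w[j]
    w[i] = (1 - gi * gamma) * vi + gj * gamma * wj + e_ij * vi
    w[j] = (1 - gj * gamma) * wj + gi * gamma * vi + e_ji * wj

steps = 200   # far fewer than the 10^4 steps used in the paper, to keep it fast
for _ in range(steps):
    for _ in range((N1 + N2) // 2):    # one time step = N trades
        i, j = rng.integers(0, N1 + N2, 2)
        trade(i, j)

for c in (0, 1):
    wc = w[country == c]
    print(f"country {c + 1}: mean wealth {wc.mean():.2f}, max {wc.max():.1f}")
```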

3 Opinion Formation with Strong Leaders

In this section we present a kinetic approach to study the time-evolution of an opinion distribution in a heterogeneous society, which consists of a large group of ordinary people and a smaller group of strong opinion leaders. Opinion is represented as a continuous quantity w ∈ I with I = (−1, 1), where ±1 represent the extreme opinions. The model is a generalization of a homogeneous model for opinion formation developed in [9]. The group of strong leaders is supposed to have a stronger influence on public opinion through their strong personalities, financial means, control of media, etc. In the kinetic model, this sociophysical phenomenon is represented by the fact that leaders' opinions are not changed through interactions with ordinary society members. The leaders can, however, influence each other. Hence, if one individual from the group of ordinary people with opinion v meets a strong leader with opinion w, their post-interaction opinions v*, w* are given by


$$v^* = v - \gamma P_3(|v-w|)(v-w) + \eta_1 D_1(|v|), \qquad (10a)$$
$$w^* = w. \qquad (10b)$$

If two individuals from the same group meet, the interaction shall, as in [9], be given by

$$v^* = v - \gamma P_{1,2}(|v-w|)(v-w) + \eta_1 D_{1,2}(|v|), \qquad (11a)$$
$$w^* = w - \gamma P_{1,2}(|w-v|)(w-v) + \eta_2 D_{1,2}(|w|). \qquad (11b)$$

Herein, γ ∈ (0, 1/2) is the constant compromise parameter. We assume for simplicity that all individuals in the society share a common compromise parameter. This assumption can be relaxed further by choosing the compromise parameter as a random quantity with a certain statistical mean. The quantities η₁ and η₂ are random variables with mean zero and variance σ². They model the self-thinking that each individual performs, in a random diffusion fashion, through an exogenous, global access to information, e.g. through the press, television or the internet. The functions P_i(·) (i = 1, 2, 3) and D_j(·) (j = 1, 2) model the local relevance of compromise and self-thinking for a given opinion. The random variables and the functions D_j(·) are characteristic of the respective class of individuals, and are the same in both types of interaction, while the compromise function P_i(·) can be different in the three types of interaction. Additional assumptions need to be made on the random variables and on the functions D_j(·) to ensure that opinions remain inside the interval I. We consider the distribution function f_i = f_i(w,t) (i = 1, 2) of each group as a function depending on the opinion w ∈ I and time t ∈ R₊. In analogy with the classical kinetic theory of mixtures of rarefied gases, the time-evolution of the distributions will obey a system of two Boltzmann-like equations, given by

$$\frac{\partial f_1(w,t)}{\partial t} = \frac{1}{\tau_{11}} Q_{11}(f_1,f_1)(w) + \frac{1}{\tau_{12}} Q_{12}(f_1,f_2)(w),$$
$$\frac{\partial f_2(w,t)}{\partial t} = \frac{1}{\tau_{22}} Q_{22}(f_2,f_2)(w).$$

Herein, τ_{ij} are suitable relaxation times. The Boltzmann-like collision operators are derived by standard methods of kinetic theory, considering that the change in time of f_i(w,t) due to binary interactions depends on a balance between the gain and loss of individuals with opinion w. The operators Q₁₁ and Q₂₂ relate to the microscopic interaction (11), whereas Q₁₂ relates to (10). A detailed study of this model as well as numerical results are forthcoming in [6].

Acknowledgements The author acknowledges support by the Deutsche Forschungsgemeinschaft, grant JU 359/6 (Forschergruppe 518).


References
1. A.V. Bobylev and I.M. Gamba, Boltzmann equations for mixtures of Maxwell gases: exact solutions and power like tails, J. Stat. Phys., 124, 497-516, 2006
2. S. Cordier, L. Pareschi, and G. Toscani, On a kinetic model for a simple market economy, J. Stat. Phys., 120, 253-277, 2005
3. B. Düring and G. Toscani, Hydrodynamics from kinetic models of conservative economies, Physica A, 384(2), 493-506, 2007
4. B. Düring and G. Toscani, International and domestic trading and wealth distribution, Comm. Math. Sci., 6(4), 1043-1058, 2008
5. B. Düring, D. Matthes, and G. Toscani, Kinetic equations modelling wealth redistribution: a comparison of approaches, Phys. Rev. E, 78(5), 056103, 2008
6. B. Düring, P.A. Markowich, J.-F. Pietschmann, and M.-T. Wolfram, Opinion formation with strong leaders, in preparation, 2009
7. J.C. Ferrero, The monomodal, polymodal, equilibrium and nonequilibrium distribution of money, in: Econophysics of Wealth Distributions, A. Chatterjee, S. Yarlagadda, and B.K. Chakrabarti (eds.), Springer (Italy), 2005
8. K. Gupta, Money exchange model and a general outlook, Physica A, 359, 634-640, 2006
9. G. Toscani, Kinetic models of opinion formation, Commun. Math. Sci., 4(3), 481-496, 2006
10. Y. Wang, N. Ding, and L. Zhang, The circulation of money and holding time distribution, Physica A, 324(3-4), 665-677, 2003

The Morphology of Urban Agglomerations for Developing Countries: A Case Study with China Kausik Gangopadhyay and Banasri Basu

Abstract In this article, the relationship between two well-accepted empirical propositions regarding the distribution of population in cities, namely Gibrat's law and Zipf's law, is rigorously examined using the Chinese census data. Our findings are quite in contrast with most of the previous studies, which were performed exclusively for developed countries. This motivates us to build a general environment to explain the morphology of urban agglomerations in both developed and developing countries. A dynamic process of job creation generates a particular distribution for the urban agglomerations, and the introduction of Special Economic Zones (SEZ) in this abstract environment shows that the empirical observations are in good agreement with the proposed model.

1 Introduction Social phenomena are a pertinent topic of discussion among economists and econophysicists, partly because human behavior can be explained in terms of economic motives as well as a manifestation of a complex natural system. One of the interesting observations is the distribution of dwellers in different urban agglomerations. A simple empirical law, namely Zipf's law [16], is often successful in describing the distribution of populations for various cities1 in a nation.

Kausik Gangopadhyay Economic Research Unit, Indian Statistical Institute, Kolkata-700108, India. e-mail: [email protected] Banasri Basu Physics and Applied Mathematics Unit, Indian Statistical Institute, Kolkata-700108, India. e-mail: [email protected] 1

In this article, "Urban Agglomeration" and "City" have throughout been used interchangeably. The literature starting from Zipf's law has historically looked into the population distribution of the urban areas formally denoted as "Cities"; however, the more general notion of urban agglomerations has been used in the relatively recent literature.


Fig. 1 Chinese Cities: 1990-2000. (a) Census year 1990: rank of a city plotted against its size. (b) Census year 2000: rank of a city plotted against its size. (c) Scatter plot of city growth against city size (1990-2000). [Axes in (a), (b): complementary CDF P^C(x) against city-size x, log-log scale; in (c): growth rate against city-size (ln scale)]

In Economics, there is a body of literature devoted to explaining the morphology of cities; the survey paper by Gabaix and Ioannides [7] lists most of it. Krugman [9] looked at the top 135 U.S. cities and found that the log-rank of a city bears a linear relation to the log-size of the same in a significant way. The slope of the linear relation is also found to be quite close to one, as expected from Zipf's law. Gabaix [6] investigates the growth of cities and their adherence to Zipf's law, because Zipf's law is not a static phenomenon but the outcome of a dynamic process. Different cities presumably have different growth processes. We can express the expected growth rate of a city with population S as a random variable, µ(S). The standard deviation of the growth rate of cities with population S is denoted by σ(S). If either µ(S) or σ(S) is a non-trivial function of S, at least in the upper tail of the distribution of S, there would be violations of Zipf's law, since Zipf's law is a consequence of Gibrat's law being followed in the upper tail of the city distribution. Gibrat's law proposes that the growth rate process of a city is independent of the size of the city; therefore the mean growth rate and the standard deviation of the growth rate for a city are independent of its size. It must be clarified that Gibrat's law does not say that the growth rate of every city follows the same stochastic process; it only says that there is no relation between the growth rate of a city and its size. Gibrat's law and its relation to Zipf's law are particularly pertinent for a nation experiencing growth in urban inhabitants. A developing country is very different from its developed counterparts in terms of economic and social structures. Therefore, the interrelationship between these two empirical conjectures might be particularly interesting. A pertinent case study is the People's Republic of China, where urbanization is taking place at a fast pace. We investigate the occurrence of these laws in the case of China. The next section discusses our empirical analysis with the findings. A model is proposed in Section 3 along with an appropriate simulation study. The concluding remarks are noted in Section 4.
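To make the Gibrat-Zipf connection concrete, the following toy simulation (our illustration, not taken from the paper) grows city sizes multiplicatively with size-independent growth rates and a lower reflecting barrier, the mechanism behind Gabaix's explanation [6]; all parameter values are arbitrary choices for the sketch.

```python
import numpy as np

# Toy Gibrat -> Zipf illustration (our sketch, following the
# reflecting-barrier argument of Gabaix [6]); parameters are arbitrary.
rng = np.random.default_rng(0)
n_cities, n_steps, sigma = 10_000, 2_000, 0.05

# Log-drift -sigma^2/2 keeps the expected size constant and yields a
# stationary Pareto tail with exponent 2|drift|/sigma^2 = 1 (Zipf).
S = np.ones(n_cities)
for _ in range(n_steps):
    growth = rng.lognormal(mean=-sigma**2 / 2, sigma=sigma, size=n_cities)
    S = np.maximum(S * growth, 1.0)          # lower reflecting barrier

# Tail exponent by MLE (see footnote 2 in Sec. 2.1) on the top decile
tail = np.sort(S)[-n_cities // 10:]
alpha_hat = 1.0 + len(tail) / np.log(tail / tail.min()).sum()
print(f"estimated Pareto exponent of the upper tail: {alpha_hat:.2f}")
```

With these choices the estimated exponent comes out near 1; making the growth rate depend on size, as in the Chinese data below, breaks this agreement.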



Table 1 Data Description: Values of city-population are reported in units of thousands. The left truncation of the data is determined through the value of x_min. The numbers in parentheses represent the standard errors for the estimates. Source: [17]

Census Year | n    | Min Value | Max Value |  Mean  | Median | First Quartile | Third Quartile | α (Linear Fit)  | α (MLE)
2000        | 1462 |   50.08   | 14230.99  | 298.27 | 136.63 |     80.86      |     265.42     | 1.7544 (0.0018) | 2.2975 (0.0572)
1990        | 1345 |   25.02   |  7821.79  | 156.33 |  68.71 |     44.23      |     128.96     | 1.7701 (0.0032) | 2.2308 (0.0736)

2 Data Treatment The People's Republic of China conducted censuses in 1953, 1964, and 1982. At the 2000 census, the total population stood at approximately 1.29533 billion, which is about 22% of the total population of the world. In 2000, 36% of the Chinese population resided in urban agglomerations. We use the data [17] from the 1990 and 2000 censuses (plotted in Figure 1).

2.1 Verification of Zipf's Law Let p(·) be a probability density function of the city-size distribution. The corresponding Cumulative Distribution Function (CDF) and the Complementary Cumulative Distribution Function (CCDF) are given by P(·) and P^C(·), respectively. By definition,

P(x) = \int_0^x p(x') \, dx'; \qquad P^C(x) = 1 - P(x).

In the case of a city-size distribution following Zipf's law,

p_\alpha(x) = C x^{-\alpha} \quad \text{and} \quad P^C_\alpha(x) = \frac{C}{\alpha - 1} x^{-(\alpha - 1)}, \qquad (1)

where α and C are constants; α is called the exponent of the power law. This family of power law distributions for α > 1 is known as the Pareto distribution. From Eq. (1), it is obvious that p_α(x) diverges to infinity for any value of α > 1 as x → 0. Therefore, some minimum value, x_min, is usually considered for the support of the Pareto distribution. The slope of the plot, in which the log of the rank of a city, log(R_x), is plotted against the log of its population, log(x), has been used to estimate the exponent of the power law in almost all the previous studies. It has been shown [3, 6] that this produces a biased estimate of the power law exponent. Alternatively, the Maximum Likelihood


Fig. 2 Kernel estimates of population growth against city-size (dotted line represents the 95% bootstrapped confidence interval). (a) Epanechnikov Kernel. (b) Gaussian Kernel. [Axes: E(Growth | City Size) against City Size (log scale)]

Estimator2 (MLE) produces the most efficient estimate. We find the estimate of α to be significantly bigger than 2, a departure from Zipf's law (see Table 1).3
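The MLE formula of footnote 2 below is straightforward to implement. A minimal sketch (function and variable names are ours); the standard-error expression (α̂ − 1)/√n is the large-sample formula from [3]:

```python
import numpy as np

def pareto_alpha_mle(x, x_min):
    """MLE of the Pareto exponent on the left-truncated sample
    (footnote 2): alpha_hat = 1 + n / sum_i log(x_i / x_min)."""
    x = np.asarray(x, dtype=float)
    tail = x[x >= x_min]                       # keep sizes above x_min
    n = tail.size
    alpha_hat = 1.0 + n / np.log(tail / x_min).sum()
    std_err = (alpha_hat - 1.0) / np.sqrt(n)   # large-sample s.e. [3]
    return alpha_hat, std_err
```

Applied to the truncated 1990 and 2000 census samples, this is the estimator behind the MLE column of Table 1.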

2.2 Verification of Gibrat's Law The cities in the upper tail of the size distribution follow a constant rate of growth for various developed countries [5]. It is interesting to repeat this exercise for a developing nation, where urbanization is happening fast, to look for any discrepancy in growth among cities of different sizes. We perform various non-parametric as well as parametric exercises on the data to find out the relationship between the size of a city and its growth rate. We plot the growth rate of population in all available urban agglomerations for the period of 1990-2000 against the population of the corresponding urban agglomeration in 1990. The standard non-parametric measure is to use kernel estimates of the local mean. Suppose the growth rate of a city, g_i, bears some relation with the size of the city, S_i, modeled as: g_i = m(S_i) + ε_i for all i = 1, 2, ..., n, n being the total number of cities with available data. The objective is to find a smooth estimate of local means of growth rate over size and to verify whether there is any visible relationship between growth and size based on this estimate m(·). g_i is the growth rate of the ith city over 1990-2000. We perform a Kernel

2 The MLE is given by the expression \hat{\alpha}_{MLE} = 1 + n \left[ \sum_{i=1}^{n} \log \frac{x_i}{x_{\min}} \right]^{-1}.
3 The detailed procedure regarding our empirical analysis is discussed in [8].


density regression in the support of S_i.4 The local average smooths around the point s, and the smoothing is done using a kernel, i.e. a continuous weight function symmetric around s. The bandwidth h of a kernel determines the scale of smoothing. The Nadaraya-Watson estimate [11] of m(·) is given by the following expression,

\hat{m}(s) = \frac{n^{-1} \sum_{i=1}^{n} K_h(s - S_i) \, g_i}{n^{-1} \sum_{i=1}^{n} K_h(s - S_i)}.

We use the two most popular kernels, Gaussian and Epanechnikov. For the Gaussian kernel, K(ψ) = (2π)^{-1/2} \exp(-ψ^2/2), and for the Epanechnikov kernel, K(ψ) = \frac{3}{4}(1 - ψ^2) \cdot 1_{|ψ| \le 1}. For both kernels, we find that m(·) does depend on the size. The visual observation is verified through the following regression, where the growth rate of a city is regressed on the size of the city;5 we find a significant6 negative coefficient for the variable of city-size:

g_i = 2.635 - 4.681 \times 10^{-7} \cdot (S_{90} + S_{00})/2,
     (0.039)    (8.982 \times 10^{-8})

We conclude that there is a definite variation among cities in terms of the growth process, and the overall evidence indicates that the growth process is negatively biased against cities of larger size, at least in the upper tail of the distribution.
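A minimal sketch of the Nadaraya-Watson estimate used above (our implementation; names are illustrative):

```python
import numpy as np

def nadaraya_watson(s_grid, S, g, h, kernel="gaussian"):
    """Kernel estimate m_hat(s) of E[growth | size = s] (Sec. 2.2);
    S: city sizes, g: growth rates, h: bandwidth."""
    def K(psi):
        if kernel == "gaussian":
            return np.exp(-0.5 * psi**2) / np.sqrt(2.0 * np.pi)
        # Epanechnikov kernel: (3/4)(1 - psi^2) on |psi| <= 1
        return 0.75 * np.clip(1.0 - psi**2, 0.0, None)

    m_hat = np.empty(len(s_grid))
    for j, s in enumerate(s_grid):
        w = K((s - S) / h)                 # kernel weights around s
        m_hat[j] = np.dot(w, g) / w.sum()  # local weighted average
    return m_hat
```

The 1/h factor of K_h cancels between numerator and denominator, so it is omitted; bootstrapped confidence bands as in Fig. 2 would be obtained by resampling the (S_i, g_i) pairs.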

3 A Migration Based Model To illustrate the empirical anomalies found in the context of the distribution of urban agglomerations in China, we motivate our findings with a mathematical model of city formation. There are several recent attempts [1, 2, 12] to model urban growth using the idea that the growth of cities resembles that of two-dimensional aggregates of particles. There are results in the statistical physics of clusters regarding the growth of such two-dimensional aggregates, and these results are applied in the context of modeling the population distribution of urban agglomerations. In particular, the model of Diffusion Limited Aggregation (DLA) predicted the existence of only one large fractal cluster that is almost perfectly screened from incoming development units, so that almost all the cluster growth occurs in the extreme peripheral tips. The morphology of cities is also explained using a percolation model [10], where the scaling of the urban perimeter of individual cities and the distribution of the system of cities are tested. The intermittency mechanism [15] is used to model [14] large scale city formation and understand the universal properties of

4 We have chosen the interval [1.1 · min_i S_i, 0.9 · max_i S_i] to exclude the effect of the boundaries to some extent.
5 We choose the average of the populations of a city in 1990 and 2000, respectively denoted as S_{90} and S_{00}, as the population of the city. We could instead choose the population of the city in 1990 as the regressor; the result is quite similar even quantitatively.
6 The standard errors of the estimates are shown in the parentheses below.

Fig. 3 Simulation study for the model. (a) Before introduction of SEZs, city sizes plotted against corresponding ranks. (b) After introduction of SEZs, city sizes plotted against corresponding ranks. [Axes: complementary CDF P^C(x) (log scale) against city-size (log scale)]

the social phenomenon of city formation and global demographic development. In a different approach [13], the laws of population growth are explained using the City Clustering Algorithm (CCA). The CCA is used to examine Gibrat's law of proportional growth and finds that the mean growth rate of a cluster exhibits deviations from Gibrat's law. For China, we need a model that is consistent with the empirical phenomena observed and yet models the violations of the power law as found in the data. However, it must be taken into account that in the developed countries these empirical observations are often reversed, as we have found from the literature. We introduce the aspect of Special Economic Zones in our model and explain the empirical anomalies, in contrast to the developed countries, in terms of Special Economic Zones. We construct a baseline environment without any Special Economic Zones. Then we add Special Economic Zones to that environment to observe any effect due to the introduction of SEZs.7 There are k locations in a country. Jobs are spawned one at a time. The probability of a job being spawned in a location is a function of the number of already existing jobs in that location. More particularly, the probability of an additional job being created at the ith location is proportional to n_i^γ, where n_i is the number of already existing jobs at the ith location. We let jobs spawn at different locations until the total number of jobs becomes N. The parameter γ is an important parameter of scale. If γ is 1, the growth rate of a city is independent of its size. On the other hand, if γ is less than unity, larger cities are discriminated against regarding growth. A value of γ being more

7 A Special Economic Zone (SEZ) is a geographical region that has economic laws that are more liberal than a country's typical economic laws. The category 'SEZ' covers a broad range of more specific zone types, including Free Trade Zones (FTZ), Export Processing Zones (EPZ), Free Zones (FZ), Industrial Estates (IE), Free Ports, Urban Enterprise Zones and others. Usually the goal of such a structure is to increase foreign investment. One of the earliest and most famous Special Economic Zones was founded by the government of the People's Republic of China under Deng Xiaoping in the early 1980s. The most successful Special Economic Zone in China, Shenzhen, has developed from a small village into a city with a population over ten million within 20 years.


than one means that the growth process favours the large cities against the smaller cities. We introduce migration-based Special Economic Zones in this model. The government introduces the feature of Special Economic Zones by giving special privileges to some cities. The privileged urban agglomerations are chosen in such a way that they are not from the most populous cities. A number of new jobs are created in the locations of the SEZs. These new jobs require higher skill levels compared to the previously existing jobs. A worker matched with one of these jobs leaves the old location of work and moves to the new location. Also, higher skilled workers come primarily from the top ranking cities.

3.1 A Simulation Study To evaluate the performance of our economically tenable model, we resort to the widely used technique of simulation. We choose 3,000 locations (k) and one million agents (N). Jobs are spawned randomly in various locations, as defined in our framework, until the total number of spawned jobs is equal to the total number of agents. We choose the value of γ to be 0.9 so that there is a negative bias towards the growth of top ranking cities, as observed in the data. We consider the top 2,500 locations and estimate the power law coefficient using the maximum likelihood method. We find α̂_MLE to be 1.0419 with the standard error of the estimate being 0.0208. This baseline study is devoid of any SEZ and is quite in accordance with Zipf's law. To introduce SEZs in this model, we randomly select 270 locations outside the top 300 locations and introduce a number of new jobs in those locations equaling 20% of the already existing jobs in the economy.8 Workers from the top 300 locations are randomly matched with the newly created jobs and, once matched, they migrate to the locations of their new jobs. We compute α̂_MLE in the same way, considering the top ranking 2,500 locations, and find it to be 1.2667 with 0.0259 as the standard error of the estimate. This is demonstrative of the high value of α estimated using the data for China. Moreover, α estimated for the census year 2000 is higher than that for the census year 1990; this is associated with the rising importance of SEZs in the Chinese economy. A minimal sketch of the simulation is given below.
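The following sketch implements our reading of the procedure above. The initial seeding (one job per location), the proportional draining of migrants from the top 300 locations, and the even split of new jobs across the SEZ locations are our assumptions where the text leaves details open; batching the job spawns only speeds up the one-at-a-time process.

```python
import numpy as np

rng = np.random.default_rng(1)
k, N, gamma = 3000, 1_000_000, 0.9

n = np.ones(k)               # one seed job per location (our assumption)
while n.sum() < N:
    p = n**gamma
    p /= p.sum()
    batch = int(min(10_000, N - n.sum()))     # batched spawning for speed
    idx, cnt = np.unique(rng.choice(k, size=batch, p=p), return_counts=True)
    n[idx] += cnt

def alpha_mle(sizes, top=2500):  # MLE on the top-ranking locations
    tail = np.sort(sizes)[-top:]
    return 1.0 + len(tail) / np.log(tail / tail.min()).sum()

print("baseline alpha:", alpha_mle(n))        # the text reports ~1.04

# SEZ stage: 270 locations outside the top 300 receive new jobs equal
# to 20% of all existing jobs; the workers migrate from the top 300.
order = np.argsort(n)[::-1]
top300 = order[:300]
sez = rng.choice(order[300:], size=270, replace=False)
new_jobs = int(0.2 * n.sum())
loss = rng.multinomial(new_jobs, n[top300] / n[top300].sum())
n[top300] = np.maximum(n[top300] - loss, 1.0)
n[sez] += new_jobs // 270
print("post-SEZ alpha:", alpha_mle(n))        # the text reports ~1.27
```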

4 Discussion Economists often surmise [7] that Zipf's law is the consequence of Gibrat's law as far as the city-size distribution is concerned, so a simultaneous violation of both is natural. However, Gibrat's law is associated with the free market economy [4]. A breach in

8 There is nothing special about the numbers used in this constructed model. A numerical experiment with different values for the parameters would qualitatively yield the same response.


Gibrat's law implies a wedge in the free market. A possible source of this wedge is debatable; we focus on the government's intervention in the natural process of the morphology of cities. The cities under SEZ status are subject to very different economic regulations compared to their counterparts in the rest of the country. This is analogous to a wedge in a perfectly competitive economic system. It has been pointed out [5] that the Zipf's exponent does depend on the cut-off in the upper tail of the city size distribution. The difference in socio-economic structure may give rise to different values of the Zipf's exponent with the same minimum cut-off. It is observed that in the case of China, the exponent of Zipf's law is larger for the year 2000 than for the year 1990, while the numbers of locations above the minimum cut-off are quite close (see Table 1). This phenomenon cannot be explained by a static process as modeled in [5]. Nevertheless, our model reconciles this empirical scenario with the gradual importance of SEZs in China.

References
1. M. Batty and P. Longley, Fractal Cities, Academic Press, San Diego, 1994
2. L. Benguigui, A new aggregation model. Application to town growth, Physica A, 219, 13, 1995
3. A. Clauset, C.R. Shalizi, and M.E.J. Newman, Power-law distributions in empirical data, arXiv:0706.1062v1
4. J.C. Cordoba, On the Distribution of City Sizes, Journal of Urban Economics, January 2008
5. J. Eeckhout, Gibrat's Law for (all) Cities, American Economic Review, 94(5), 1429-1451, 2004
6. X. Gabaix, Zipf's Law for Cities: An Explanation, Quarterly Journal of Economics, 114(3), 739-767, August 1999
7. X. Gabaix and Y. Ioannides, The Evolution of City Size Distributions, in: Handbook of Regional and Urban Economics 4, V. Henderson and J.-F. Thisse (eds.), North-Holland, 2341-2378, 2004
8. K. Gangopadhyay and B. Basu, City Size Distribution for India and China, Physica A, 2009
9. P. Krugman, The Self-Organizing Economy, Blackwell Publishers, Oxford, UK and Cambridge, MA, 1996
10. H.A. Makse et al., Modeling Urban Growth Patterns with Correlated Percolation, arXiv:cond-mat/9809431v1 [cond-mat.dis-nn], Phys. Rev. E (1 December 1998); Nature, 377, 608, 1995
11. A. Pagan and A. Ullah, Nonparametric Econometrics, Cambridge University Press, 1999
12. T.A. Witten and L.M. Sander, Diffusion-Limited Aggregation, a Kinetic Critical Phenomenon, Phys. Rev. Lett., 47, 1400, 1981
13. H.D. Rozenfeld et al., Laws of population growth, arXiv:0808.2202
14. D.H. Zanette and S.C. Manrubia, Role of Intermittency in Urban Development: A Model of Large-Scale City Formation, Phys. Rev. Lett., 79, 523, 1997
15. Y.B. Zeldovich et al., The Almighty Chance, World Scientific, Singapore, 1990
16. G.K. Zipf, Human Behavior and the Principle of Least Effort, Addison-Wesley, Cambridge, MA, 1949
17. Chinese Census data uploaded at http://www.citypopulation.de/China.html

A Mean-Field Model of Financial Markets: Reproducing Long Tailed Distributions and Volatility Correlations Vikram S. V. and Sitabhra Sinha

Abstract A model for financial market activity should reproduce the several stylized facts that have been observed to be invariant across different markets and periods. Here we present a mean-field model of agents trading a particular asset, where their decisions (to buy or to sell or to hold) are based exclusively on the price history. As there are no direct interactions between agents, the price (computed as a function of the net demand, i.e., the difference between the numbers of buyers and sellers at a given time) is the sole mediating signal driving market activity. We observe that this simple model reproduces the long-tailed distributions of price fluctuations (measured by logarithmic returns) and trading volume (measured in terms of the number of agents trading at a given instant) that have been seen in most markets across the world. By using a quenched random distribution of a model parameter that governs the probability of an agent to trade, we obtain quantitatively accurate exponents for the two distributions. In addition, the model exhibits volatility clustering, i.e., correlation between periods with large fluctuations, remarkably similar to that seen in reality. To the best of our knowledge, this is the simplest model that gives a quantitatively accurate description of financial market behavior.

1 Introduction Financial markets are examples of complex systems having many interacting agents, whose collective behavior has emergent features that are often seen to be invariant across individual realizations of such systems [1]. The analysis of stock price data from various markets around the world has brought to light certain universal properties that are together referred to as stylized facts [2]. One of the most remarkable of Vikram S. V. Department of Physics, Indian Institute of Technology Madras, Chennai - 600036, India. Sitabhra Sinha The Institute of Mathematical Sciences, C. I. T. Campus, Taramani, Chennai - 600 113, India. e-mail: [email protected]


these is the long-tailed nature of the distribution of fluctuations in individual stock price or market index. When the fluctuation is measured in terms of the logarithmic return r_t (i.e., the logarithm of the ratio of prices at two successive time intervals), the corresponding cumulative distribution has the form:

P(|r_t| > x) \sim x^{-\alpha}, \qquad (1)

where the exponent α ∼ 3. This "inverse cubic law" [3, 4] has been observed to be remarkably robust, seen in both long-established markets such as the New York Stock Exchange (NYSE) [5] as well as in emerging ones, like the National Stock Exchange of India [6, 7]. There are other variables associated with trading in the market whose distributions have been reported to follow a power law tail, but their universality is still under debate. An example is the cumulative distribution of the total volume of shares traded in a given interval of time, V_t, which has been reported to have the form:

P(|V_t| > x) \sim x^{-\zeta_V}, \qquad (2)

where the exponent ζ_V ∼ 1.5 [8]. Another well-known stylized fact is that, while the auto-correlation of the return decays very fast, as expected from the Efficient Market Hypothesis, that for the absolute return shows a relatively slow decay [9]. Thus, although the stock price movement does not show any predictable trend, there are long-memory effects seen for volatility, which measures the degree of market fluctuations within a given period. It implies that periods with high volatility tend to follow each other, the corresponding time series showing intermittent bursts of large-amplitude swings in both positive and negative directions. There have been several attempts at modeling the dynamics of markets which reproduce at least a few of the above stylized facts. Many of them assume that the price fluctuations are driven by endogenous interactions rather than exogenous factors such as the arrival of news affecting the market and variations in macroeconomic indicators. A widely used approach for such modeling is to consider the market movement to be governed by explicit interactions between agents who are buying and selling assets [10, 11, 12, 13, 14]. While this is appealing from the point of view of statistical physics, resembling as it does interactions between spins arranged over a specified network, it is possible that in the market the mediation between agents is done by means of a globally accessible signal, namely the asset price. This is analogous to a mean-field like simplification of the agent-based model of the market, where each agent takes decisions based on a common indicator variable. Here, we propose a model of market dynamics where the agents do not interact directly with each other, but respond to a global variable defined as price. The price in turn is determined by the relative demand and supply of the underlying asset that is being traded, which is governed by the aggregate behavior of the agents (each of whom can buy, sell or hold at any given time). In the next section, we describe our model, where the trading occurs in a two-step process, with each agent first deciding whether to trade or not at that given instant based on the deviation of the current


price from an agent’s notion of the “true” price (given by a long-time moving average). This is followed by the agents who have decided to trade choosing to either buy or sell based on the prevalent demand-supply ratio measured by the logarithmic return. In the following section we describe our results, focusing in turn on each of the stylized facts reproduced by our model, viz., the long-tailed distributions of returns and trading volumes, as well as the volatility clustering. Finally we indicate how using a random distribution of a parameter over the agents reproduces quantitatively the exponents for the two distributions. We conclude with a brief discussion of the robustness of the model.

2 The Model A simplified view of a financial market is that it consists of a large number of agents (say, N) trading in a single asset. During each instant the market is open, a trader may decide to either buy, sell or hold (i.e., remain inactive) based on its information about the market. Thus, considering time to evolve in discrete units, we can represent the state of each trader i by the variable S_i(t) (i = 1, . . . , N) at a given time instant t. It can take values +1, −1 or 0 depending on whether the agent buys or sells a unit quantity of the asset or decides not to trade at time t, respectively. We assume that the evolution of price in a free market is governed only by the relative supply and demand for the asset. Thus, the price of the asset at any time t, P(t), will rise if the number of agents wishing to buy it (i.e., the demand) exceeds the number wishing to sell it (i.e., supply). Conversely, it will fall when supply outstrips demand. Therefore, the relation between prices at two successive time instants can be expressed as:

P_{t+1} = \frac{1 + M_t}{1 - M_t} P_t, \qquad (3)

where M_t = \sum_i S_i(t)/N is the net demand for the asset; the state of agents who do not trade is represented by 0, so they do not contribute to the sum. This functional form of the time-dependence of price has the following desirable feature: when everyone wants to sell the asset (M_t = −1), its price goes to zero, whereas if everyone wants to buy it (M_t = 1), the price diverges. When the demand equals supply, the price remains unchanged from its preceding value, indicating an equilibrium situation. The multiplicative form of the function not only ensures that price can never be negative, but also captures the empirical feature of the magnitude of stock price fluctuations in actual markets being proportional to the price. Note that, if the ratio of demand to supply is an uncorrelated stochastic process, price will follow a geometric random walk, as originally suggested by Bachelier [15]. The exact form of the price function (see Eq. (3)) does not critically affect our results, as we shall discuss later. Having determined how the price of the asset is set based on the activity of traders, we now look at how individual agents make their decisions to buy, sell or hold. As mentioned earlier, we do not assume direct interactions between agents, nor do we consider information external to the market to be affecting agent behavior. Thus, the only factor governing the decisions made by the agents at a given time is


the asset price (the current value as well as its history up to that time). First, we consider the condition that prompts an agent to trade at a particular time (i.e., S_i = ±1), rather than hold (S_i = 0). The fundamental assumption that we shall use here is that this decision is based on the deviation of the current price at which the asset is being traded from an individual agent's notion of the "true" value of the asset. Observation of order book dynamics in markets has shown that the life-time of a limit order is longer, the farther it is from the current bid-ask [16]. In analogy to this we can say that the probability of an agent to trade at a particular price will decrease with the distance of that price from the "true" value of the asset. This notion of the "true" asset price is itself based on information about the price history (as the agents do not have access to any external knowledge related to the value of an asset) and thus can vary with time. The simplest proxy for estimating the "true" value is a long-time moving average of the price time-series, \langle P_t \rangle_\tau, with the averaging window size, τ, being a parameter of the model. Our use of the moving average is supported by previous studies that have found the long-time moving average of prices to define an effective potential that is seen to be the determining factor for empirical market dynamics [17]. In light of the above discussion, a simple formulation for the probability of an agent i to trade at time t is

P(S_i(t) \neq 0) = \exp\left( -\mu \left| \log \frac{P_t}{\langle P_t \rangle_\tau} \right| \right), \qquad (4)

where µ is a parameter that controls the sensitivity of the agent to the magnitude (i.e., absolute value) of the deviation from the "true" value. This deviation is expressed in terms of a ratio so that there is no dependence on the scale of measurement. For the limiting case of µ = 0, we get a binary-state model, where each agent trades at every instant. Once an agent decides to trade based on the above dynamics, it has to choose between buying and selling a unit quantity of the asset. We assume that this process is fully dictated by the principle of supply and demand, with agents selling (buying) if there is an excess of demand (supply) resulting in an increase (decrease) of the price in the previous instant. Using the logarithmic return as the measure for price movement, we can use the following simple form for calculating the probability that an agent will sell at a given time t:

P(S_i(t) = -1) = \frac{1}{1 + \exp\left( -\beta \log \frac{P_t}{P_{t-1}} \right)}. \qquad (5)

The form of the probability function is adopted from that of the Fermi function used in statistical physics, e.g., for describing the transition probability of spin states in a system at thermal equilibrium. The parameter β , corresponding to “inverse temperature” in the context of Fermi function, is a measure of how strongly the information about price variation influences the decision of a trader. It controls the slope of the function at the transition region where it increases from 0 to 1, with the transition

Fig. 1 (Left) Price time series, along with the moving average of price calculated over a window of size τ, and (right) the corresponding logarithmic returns, normalized by subtracting the mean and dividing by the standard deviation, for a system of N = 20,000 agents. The model is simulated with parameter values µ = 100, β = 0 and averaging window size, τ = 10^4 iterations (For the coloured version of this figure, contact the author)

getting sharper as β increases. In the limit β → ∞, the function is step-like, such that every trading agent sells (buys) if the price has risen (fallen) in the previous instant. In the other limiting case of β = 0, the trader buys or sells with equal probability, indicating an insensitivity to the current trend in price movement. The results of the model are robust with respect to variation in β and for the remainder of the article we have considered only the limiting case of β = 0.
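Taken together, Eqs. (3)-(5) fully specify the dynamics. A minimal simulation sketch (our implementation, with β = 0 so that each trader buys or sells with equal probability; variable names are ours, and the warm-up phase follows the description in Sec. 3):

```python
import numpy as np

rng = np.random.default_rng(2)
N, mu, tau, T = 20_000, 100.0, 10_000, 200_000

prices = [1.0]
returns, volumes = [], []
for t in range(tau + T):
    P = prices[-1]
    # "true" value: moving average over the last tau prices
    # (whole history during the warm-up; a running sum would be faster)
    P_true = np.mean(prices[-tau:])
    p_trade = np.exp(-mu * abs(np.log(P / P_true)))   # Eq. (4)
    V = rng.binomial(N, p_trade)          # number of trading agents
    buyers = rng.binomial(V, 0.5)         # Eq. (5) with beta = 0
    M = (2 * buyers - V) / N              # net demand
    P_new = (1 + M) / (1 - M) * P         # Eq. (3)
    if t >= tau:                          # record after warm-up only
        returns.append(np.log(P_new / P))
        volumes.append(V)
    prices.append(P_new)
```

The tails of the returns and volumes collected this way are what Figs. 2 and 3 characterize; the heterogeneous-agent case of Sec. 4 would replace the single µ by one value per agent, so that V becomes a sum of agent-specific Bernoulli trials rather than a single binomial draw.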

3 Results We now report the results of numerical simulations of the model discussed above, reproducing the different stylized facts mentioned earlier. For all runs, the price is assumed to be 1 at the initial time (t = 0). The state of every agent is updated at a single time-step or iteration. To obtain the “true” value of the asset at t = 0, the simulation is initially run for τ iterations during which the averaging window corresponds to the entire price history. At the end of this step, the actual simulation is started, with the averaging being done over a moving window of fixed size τ .

3.1 Long Tailed Nature of the Return Distribution The variation of the asset price as a result of the model dynamics is shown in Figure 1 (left), which looks qualitatively similar to price (or index) time-series for real markets. The moving average of the price, that is considered to be the notional “true” price for agents in the model, is seen to track a smoothed pattern of price variations, coarse-grained at the time-scale of the averaging window, τ . The price fluctuations, as measured by the normalized logarithmic returns (Figure 1, right),


Fig. 2 Cumulative distributions of (left) normalized returns and (right) trading volume measured in terms of the number of traders at a given time, for a system of N = 20,000 agents. The model is simulated for T = 200,000 iterations, with parameter values µ = 100, β = 0 and averaging window size, τ = 10^4 iterations. Each distribution is obtained by averaging over 10 realizations (For the coloured version of this figure, contact the author)

show large deviations that are significantly greater than that expected from a Gaussian distribution. We now examine the nature of the distribution of price fluctuations by focusing on the cumulative distribution of returns, i.e., P(rt > x), shown in Figure 2 (left). We observe that it follows a power law having exponent α ≃ 2 over an intermediate range with an exponential cut-off. The quantitative value of the exponent is seen to be unchanged over a large range of variation in the parameter µ and does not depend at all on β . At lower values of µ (viz., µ < 10) the return distribution becomes exponential. The dynamics leading to an agent choosing whether to trade or not is the crucial component of the model that is necessary for generating the non-Gaussian fluctuation distribution. This can be explicitly shown by considering the special case when µ = 0, where, as already mentioned, the number of traders at any given time is always equal to the total number of agents. Thus, the model is only governed by Eq. (5), so that the overall dynamics is described by a difference equation or map with a single variable, the net demand (Mt ). Analyzing the map, we find that the system exhibits two classes of equilibria, with the transition occurring at the critical value of β = 1. For β < 1, the mean value of M is 0, and the price fluctuations follow a Gaussian distribution. When β exceeds 1, the net demand goes to 1 implying that price diverges. This prompts every agent to sell at the next instant, pushing the price to zero, analogous to a market crash. It is a stable equilibrium of the system, corresponding to market failure. This underlines the importance of the dynamics described by Eq. (4) in reproducing the stylized facts.
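The µ = 0 analysis can be checked numerically. Combining the sell probability (5) with the return implied by (3) gives, for the expected net demand (our derivation, neglecting the binomial noise), the one-variable map M' = (1 − x^β)/(1 + x^β) with x = (1 + M)/(1 − M); at β = 1 it reduces to M' = −M, and for small M it behaves as M' ≈ −βM, which is the transition described above:

```python
def mean_map(M, beta):
    """Expected next net demand for mu = 0 (all agents trade):
    sell prob. from Eq. (5) applied to the return implied by Eq. (3).
    Our derivation; binomial fluctuations are neglected."""
    x = (1 + M) / (1 - M)        # price ratio P_t / P_{t-1}
    return (1 - x**beta) / (1 + x**beta)

for beta in (0.5, 1.5):
    M = 0.01
    for _ in range(10):
        M = mean_map(M, beta)
    print(f"beta = {beta}: |M| after 10 steps = {abs(M):.4f}")
# beta < 1: M relaxes to 0 (Gaussian fluctuations);
# beta > 1: |M| grows towards 1, the market-crash equilibrium.
```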

Fig. 3 (Left) Finite size scaling of the distribution of normalized returns for systems varying between N = 5000 and 40,000 agents, and (right) estimation of the corresponding power law exponent by the Hill estimator method for a system of N = 20,000 agents. The model is simulated with parameter values µ = 100, β = 0 and averaging window size, τ = 10^4 iterations (For the coloured version of this figure, contact the author)

3.2 Long-tailed Nature of the Distribution of Traders As each trader can buy/sell only a unit quantity of the asset at a time in the model, the number of trading agents at time t, V_t = \sum_i |S_i(t)|, is equivalent to the trading volume at that instant. The cumulative distribution of this variable, shown in Figure 2 (right), has a power law decay which is terminated by an exponential cut-off due to the finite number of agents in the system. The exponent of the power law, ζ_V, is close to 1, indicating a Zipf's law [18] distribution for the number of agents who are trading at a given instant. As in the case of the return distribution exponent, the quantitative value of the exponent ζ_V is seen to be unchanged over a large range of variation in the parameter µ. The power law nature of this distribution is more robust, as at lower values of µ (viz., µ < 10), when the return distribution shows exponential behavior, the volume distribution still exhibits a prominent power law tail.

3.3 Verifying the Power Law Nature of the Return Distribution It is well-known that for many systems, their finite size can affect the nature of distributions of the observed variables. In particular, we note that the two distributions considered above have exponential cut-offs that are indicative of the finite number N of agents in the system. In order to explore the role of system size in our results, we perform finite size scaling of the return distribution to verify the robustness of the power law behavior. This is done by carrying out the simulation at different values of N and trying to see whether the resulting distributions collapse onto a single curve when they are scaled properly. Figure 3 (left) shows that for systems between N = 5000 and 40,000 agents, the returns fall on the same curve, when the abscissa and ordinate are scaled by the system size, raised to an appropriate power.


Fig. 4 (Left) The time evolution of volatility (measured as the standard deviation of returns over a moving window of size 100 time steps) and (right) the auto-correlation of returns (triangles) and absolute returns (circles) for a system of N = 20,000 agents. The absolute return is an alternative measure of volatility and shows a long memory, as compared to the returns whose correlations reach the noise level within a few time steps. The model is simulated with parameter values µ = 100, β = 0 and averaging window size, τ = 10^4 iterations (For the coloured version of this figure, contact the author)

This implies that the power law is not an artifact of finite size systems, but should persist as larger and larger numbers of agents are considered. To get a quantitatively accurate estimate of the return exponent, we use the method described by Hill [19]. This involves obtaining the Hill estimator, γ_{k,n}, from a set of finite samples of a time series showing power law distributed fluctuations, with n indicating the number of samples. Using the original time series {x_i}, we create a new series by arranging the entries in decreasing order of their magnitude and labeling each entry with the order statistic k, such that the magnitude of x_k is larger than that of x_{k+1}. The Hill estimator is calculated as

\gamma_{k,n} = \frac{1}{k} \sum_{i=1}^{k} \log \frac{x_i}{x_{k+1}}, \qquad (6)

where k = 1, . . . , n − 1. It approaches the inverse of the true value of the power law exponent as k → ∞ and k/n → 0. Figure 3 (right) shows the estimated value of the return distribution exponent, α (i.e., γ_{k,n}^{-1}), calculated for returns obtained for a system of size N = 20,000 agents. This confirms our previous estimate of α ≃ 2 based on least square fitting of the data.
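A vectorized sketch of Eq. (6) (our implementation; the exponent is read off where the curve of 1/γ_{k,n} flattens, as in Fig. 3 (right)):

```python
import numpy as np

def hill_alpha(x):
    """Hill estimates of the power-law exponent, Eq. (6):
    alpha_k = 1 / gamma_{k,n} for k = 1, ..., n-1."""
    mags = np.sort(np.abs(np.asarray(x, dtype=float)))[::-1]  # decreasing
    logs = np.log(mags)
    k = np.arange(1, mags.size)
    # gamma_k = (1/k) * sum_{i<=k} log x_i  -  log x_{k+1}
    gamma = np.cumsum(logs)[:-1] / k - logs[1:]
    return 1.0 / gamma
```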

3.4 Volatility Clustering As mentioned earlier, volatility is a measure of risk or unpredictable change over time in the value of a financial instrument. In the stock market, this is typically measured by computing the standard deviation of the logarithmic returns for an asset over a specified time window. Figure 4 (left) shows the time evolution of volatility


calculated from the model. We observe intermittent bursts of high volatility events which mimic the pattern of volatility time series for real markets. Typically, periods with high volatility tend to follow each other in time, a phenomenon known as volatility clustering [2]. A signature of this stylized fact is the behavior of the auto-correlation for absolute returns, which is an alternative measure of volatility. As seen from Figure 4 (right), the correlation for absolute returns decays very slowly, especially when compared with that for returns. The latter is expected from the Efficient Market Hypothesis [20], according to which markets are efficient at disseminating information such that the price at any given time reflects the entire knowledge available about the market at that time. Therefore, the returns should be uncorrelated, as otherwise this indicates the existence of information that has so far not yet been factored into the asset price. The volatility, on the other hand, is seen to exhibit positive auto-correlation over several days across different markets, which indicates the existence of long-term memory effects governing market activity. Our model captures this aspect of real world markets, where the absolute return auto-correlation shows a power law decay while that for the return quickly drops to the noise level [9].
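This diagnostic is easy to compute on the simulated series; a sketch (here r denotes the return series collected in the simulation sketch of Sec. 2, an assumption of this illustration):

```python
import numpy as np

def autocorr(x, max_lag):
    """Sample autocorrelation of x up to max_lag."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    var = np.dot(x, x) / len(x)
    return np.array([np.dot(x[:-lag], x[lag:]) / (len(x) * var)
                     for lag in range(1, max_lag + 1)])

# acf_r   = autocorr(r, 100)          # drops to noise within a few steps
# acf_abs = autocorr(np.abs(r), 100)  # decays slowly: volatility clustering
```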

4 Reproducing the Inverse Cubic Law In the previous section, we have reported the simulation results for our model when all the parameter values are constant and uniform for all agents. However, in the real world, agents tend to differ from one another in terms of their response to the same market signal, e.g., in their decision to trade in a high risk situation. We capture this heterogeneity in agent behavior by using a random distribution of the parameter µ that controls the probability that an agent will trade when the price differs from the

Fig. 5 Distribution of (left) the normalized returns and (right) the number of trading agents, for a system of N = 10,000 agents when the parameter µ is randomly selected for each agent from a uniform distribution between [10, 200]. The exponents for the power law seen in both curves agree with the corresponding values seen in actual markets. The model is simulated with parameter values β = 0 and averaging window size, τ = 10^4 iterations (For the coloured version of this figure, contact the author)


"true" value of the asset. A low value of the parameter represents an agent who is relatively indifferent to this deviation. On the other hand, an agent who is extremely sensitive to this difference and refuses to trade when the price goes outside a certain range around the "true" value is a relatively conservative market player with a higher value of µ. Figure 5 shows the distributions for the return and number of traders when µ for the agents is distributed uniformly over the interval [10, 200] (we have verified that small variations in the bounds of this interval do not change the results). While the power law nature is similar to that for the constant parameter case seen earlier, we note that the exponent values are now different and quantitatively match those seen in real markets. In particular, the return distribution reproduces the inverse cubic law, which has been found to be almost universally valid. Surprisingly, we find that the same set of parameters which yield this return exponent also result in a cumulative distribution for the trading volume (i.e., number of traders) with a power law exponent ζ_V ≃ 1.5 that is identical to that reported for several markets [8]. Thus, our model suggests that heterogeneity in agent behavior is the key factor behind the observed distributions. It predicts that when the behavior of market players becomes more homogeneous, as for example during a market crash event, the return exponent will tend to decrease. Indeed, earlier work [21] has found that during crashes, the exponent for the power law tail of the distribution of relative prices has a significantly different value from that seen at other times. From the results of our simulations, we predict that for real markets, the return distribution exponent α during a crash will be close to 2, the value obtained in our model when every agent behaves identically.

5 Discussion Having seen that our proposed model accurately reproduces several stylized features of a financial market, we now discuss how sensitive the results are to the specific forms of the dynamics we have used. For example, we have tested the robustness of our results with respect to the way asset price is defined in the model. We have considered several variations of Eq. (3), including a quadratic function, viz.,

P_{t+1} = \left( \frac{1 + M_t}{1 - M_t} \right)^2 P_t, \qquad (7)

and find the resulting nature of the distributions and the volatility clustering property to be unchanged. The space of parameter values has also been explored for checking the general validity of our results. As already mentioned, the parameter β does not seem to affect the nature, or even the quantitative value of the exponents, of the distributions. We have also verified the robustness of the results with respect to the averaging window size, τ. We find the numerical values of the exponents to be unchanged over the range τ = 10^4 − 10^6.


It may be pertinent here to discuss the relevance of our observation of an exponential return distribution in the model at lower values of the parameter µ . Although the inverse cubic law is seen to be valid for most markets, it turns out that there are a few cases, such as the Korean market index KOSPI, for which the return distribution is reported to have an exponential form [22]. We suggest that these deviations from the universal behavior can be due to the existence of a high proportion of traders in these markets who are relatively indifferent to large deviations of the price from its “true” value. In other words, the presence of a large number of risk takers in the market can cause the return distribution to have exponentially decaying tails. The fact that for the same set of parameter values, the cumulative distribution of number of traders still shows a power law decay with exponent 1, prompts us to further predict that, despite deviating from the universal form of the return distribution, the trading volume distribution of these markets will follow a power law form with ζV close to 1.

6 Conclusions In this study we have presented a model of a financial market that is capable of reproducing several stylized facts without explicit agent-agent interactions or prior assumptions about individual strategies, such as that of chartists or fundamentalists. Apart from replicating observed features such as the power-law nature of distributions for price fluctuations (measured in terms of logarithmic returns) and trading volume (measured in terms of the number of trading agents at a given time), our model also shows that the return distribution tail exponent tends to a lower value (around 2) when the behavior of market agents becomes homogeneous, e.g., during a market crash. On the other hand, by introducing heterogeneity in agent behavior, we observe exponents that are quantitatively identical to the ones measured in real markets (e.g., the "inverse cubic law"). In addition, the model shows volatility clustering, reflected in the slow decay of autocorrelations in absolute returns. These properties are an outcome of the endogenous dynamics of the model, as we do not consider the arrival of information external to the market (e.g., news breaks). The crucial feature of the model that gives rise to non-Gaussian behavior is the choice dynamics of agents deciding whether to trade or not. The model is seen to be robust to structural variations such as alternative choices of functional forms.

References
1. Farmer J D, Shubik M, Smith E (2005) Is economics the next physical science? Physics Today 58(9): 37-42
2. Cont R (2001) Empirical properties of asset returns: stylized facts and statistical issues, Quant. Fin. 1: 223-236
3. Lux T (1996) The stable Paretian hypothesis and the frequency of large returns: An examination of major German stocks, Appl. Fin. Econ. 6: 463-475


4. Gopikrishnan P, Meyer M, Amaral L A N, Stanley H E (1998) Inverse cubic law for the probability distribution of stock price variations, Eur. Phys. J. B 3: 139-140
5. Gopikrishnan P, Plerou V, Amaral L A N, Meyer M, Stanley H E (1999) Scaling of fluctuations of financial market indices, Phys. Rev. E 60: 5305-5316
6. Pan R K, Sinha S (2007) Self-organization of price fluctuation distribution in evolving markets, Europhys. Lett. 77: 58004
7. Pan R K, Sinha S (2008) Inverse-cubic law of index fluctuation distribution in Indian markets, Physica A 387: 2055-2065
8. Gopikrishnan P, Plerou V, Gabaix X, Stanley H E (2000) Statistical properties of share volume traded in financial markets, Phys. Rev. E 62: R4493-R4496
9. Liu Y, Gopikrishnan P, Cizeau P, Meyer M, Peng C-K, Stanley H E (1999) The statistical properties of the volatility of price fluctuations, Phys. Rev. E 60: 1390-1400
10. Lux T, Marchesi M (1999) Scaling and criticality in a stochastic multi-agent model of a financial market, Nature 397: 498-500
11. Chowdhury D, Stauffer D (1999) A generalised spin model of financial markets, Eur. Phys. J. B 8: 477-482
12. Cont R, Bouchaud J-P (2000) Herd behavior and aggregate fluctuations in financial markets, Macroecon. Dyn. 4: 170-196
13. Bornholdt S (2001) Expectation bubbles in a spin model of markets: Intermittency from frustration across scales, Int. J. Mod. Phys. C 12: 667-674
14. Iori G (2002) A microsimulation of traders activity in the stock market: the role of heterogeneity, agents' interaction and trade frictions, J. Econ. Behav. Organ. 49: 269-285
15. Bachelier L (1900) Théorie de la spéculation, Ann. Sci. École Norm. Sup. Sér 3 17: 21-86
16. Potters M, Bouchaud J-P (2003) More statistical properties of order books and price impact, Physica A 324: 133-140
17. Alfi V, Coccetti F, Marotta M, Pietronero L, Takayasu M (2006) Hidden forces and fluctuations from moving averages: A test study, Physica A 370: 30-37
18. Newman M E J (2005) Power laws, Pareto distributions and Zipf's law, Contemp. Phys. 46: 323-351
19. Hill B M (1975) A simple approach to inference about the tail of a distribution, Ann. Stat. 3: 1163-1174
20. Fama E (1970) Efficient capital markets: A review of theory and empirical work, J. Finance 25: 383-417
21. Kaizoji T (2006) A precursor of market crashes: Empirical laws of Japan's internet bubble, Eur. Phys. J. B 50: 123-127
22. Yang J S, Chae S, Jung W S, Moon H T (2006) Microscopic spin model for the dynamics of the return distribution of the Korean stock market index, Physica A 363: 377-382

Statistical Properties of Fluctuations: A Method to Check Market Behavior Prasanta K. Panigrahi, Sayantan Ghosh, P. Manimaran, and Dilip P. Ahalpara

Abstract We analyze the Bombay Stock Exchange (BSE) price index over the period of the last 12 years. Keeping in mind the large fluctuations in the last few years, we carefully find out the transient, non-statistical and locally structured variations. For that purpose, we make use of the Daubechies wavelet and characterize the fractal behavior of the returns using a recently developed wavelet based fluctuation analysis method. The returns show a fat-tailed distribution as also weak non-statistical behavior. We have also carried out continuous wavelet as well as Fourier power spectral analysis to characterize the periodic nature and correlation properties of the time series.

1 Introduction Financial markets are known to show different behavior at different time scales and under different socio-economic conditions. The random behavior of fluctuations in Prasanta K. Panigrahi Indian Institute of Science Education and Research (Kolkata), Salt Lake City, Kolkata 700 106, India and Physical Research Laboratory, Navrangpura, Ahmedabad 380 009, India. e-mail: [email protected] Sayantan Ghosh The Insitute of Mathematical Sciences, C.I.T. Campus, Taramani, Chennai 600 113, India. e-mail: [email protected] P. Manimaran Centre for Mathematical Sciences, C. R. Rao Advanced Institute of Mathematics, Statistics and Computer Science, HCU Campus, Hyderabad 500 046, India. e-mail: [email protected] Dilip P. Ahalpara The Institute for Plasma Research, Bhat, Gandhinagar, 382 428, India. e-mail: [email protected]


the smaller time scales and the manifestation of structured behavior at intermediate and long time scales have been well studied [1]-[13]. Many of the stock markets have shown large scale fluctuations during the past three years. Here we concentrate on the behavior of the fluctuations of the Bombay Stock Exchange high price values in daily trading. The point that makes the analysis of the BSE price index interesting is the fact that it has significant fluctuations on a shorter time scale while growing tremendously over a longer time period. The statistical properties of the fluctuations and the behavior of the returns of such a growing market are of particular interest. Wavelet transform [14]-[16] based multi-resolution analysis [17, 18] has been successfully used earlier to analyze time series from various areas [19]-[21]. In this work, we analyze the BSE high price index value using both Continuous and Discrete Wavelet Transforms and Multifractal Detrended Fluctuation Analysis (MF-DFA) [22]-[32]. We use the Continuous Wavelet Transform (CWT) to analyze the behavior of the time series at different frequencies and extract the periodic nature of the series, if existent. The Discrete Wavelet Transform based method is used to find the multifractal nature of the time series. For the purpose of comparison, the MF-DFA method is used for characterization of the time series. It has been observed in [20] that BSE returns showed a Gaussian random behavior and certain non-statistical features. The present work is organized as follows. Section 2 contains a brief description and applications of the continuous and discrete wavelet transforms. The discrete wavelet based method [21] to analyze fluctuations is reviewed in Section 3. In Section 4, the data is analyzed through the wavelet based method, MF-DFA and Fourier analysis. We conclude in Section 5 with results and a brief discussion. The BSE index [33] dates from July 01, 1997 to March 31, 2009. The data spans 2903 points and is shown in Figure 1(a). As is evident from the data, the first half does not show much activity but the second half shows significant variations. Figure 1(b) depicts the logarithmic returns calculated from (6) and Figure 1(c) depicts the shuffled returns, which reveal some differences with the returns.
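Eq. (6) itself appears in a later section of the paper; the standard definition assumed in this sketch is the logarithmic return r_t = ln(P_{t+1}/P_t), and shuffling is a random permutation that destroys temporal correlations while preserving the distribution (as in Fig. 1(c)):

```python
import numpy as np

def log_returns(p):
    """Logarithmic returns r_t = ln(p_{t+1} / p_t) of a price series."""
    p = np.asarray(p, dtype=float)
    return np.diff(np.log(p))

def shuffled_returns(r, seed=0):
    """Random permutation of the returns: same distribution,
    temporal correlations destroyed (Fig. 1(c))."""
    rng = np.random.default_rng(seed)
    return rng.permutation(r)
```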

2 Continuous Wavelet Analysis through the Morlet Wavelet

The Continuous Wavelet Transform (see [34] and [41] for an excellent introduction to the topic) has been used in recent times to analyze financial time series, to study self-organized criticality [36, 37], correlations [35, 38] and commodity prices [39], to name a few. Recently, in [40], an effort towards the characterization of cyclic behavior in financial markets has been made through the multi-resolution analysis of wavelet transforms. Here, the CWT of the BSE data has been carried out using the Morlet wavelet, given by [42]

ψ0(n) = π^{−1/4} e^{iω0 n} e^{−n²/2}     (1)


Fig. 1 (a) BSE high price index value in daily trading over a period of 2903 days, (b) Logarithmic returns estimated from Eq. (6). (c) Shuffled returns

where n is a localized time index, ω0 = 6 for zero mean and localization in both time and frequency space (admissibility conditions for a wavelet) [34]. The Morlet wavelet has a Fourier wavelength λ [41] given by

λ = 4πs / (ω0 + √(2 + ω0²)) ≈ 1.03 s     (2)

which means that here the scale and the Fourier wavelength are approximately equal. The wavelet coefficients are calculated [41] by the convolution of a discrete sequence x_n with the scaled and translated ψ0(n),

W_n(s) = Σ_{n′=0}^{N−1} x_{n′} ψ*((n − n′)/s)     (3)

where s is the scale. The wavelet coefficients for the BSE data are given as a scalogram in Figure 2(a), as a function of scale and time. The periodicity of the coefficients over the scales is calculated as

P_n = Σ_s W_n(s)     (4)

and is given in Figure 2(b). To analyze the periodicity of the data at different frequencies,

s ∝ ν^{−1},     (5)

where ν is the frequency, we have shown W_n(s) at different scales in Figure 3. One observes significant fluctuations at different scales in the second half of the


Fig. 2 (a) Scalogram of the wavelet coefficients computed from scale 1 to 1024; the x-axis is the time n and the y-axis is the scale s. (b) Periodogram plotted on a semilog scale as Pn vs n. One observes a period of approximately 250 trading days (For the coloured version of this figure, contact the author)


Fig. 3 Wavelet coefficients at scales 8, 16, 32, 64, 128, 256 and 512


Fig. 4 (a) Discrete Wavelet Transform (DWT) of the data through the Haar wavelet. (b) DWT of the data through the Daubechies-4 (Db-4) wavelet. Akin to the CWT case, the fluctuations show self-similar behavior

data. It is evident that the fluctuations have a self-similar character. We depict the fluctuations at smaller scales as well as the dominant periodic variations at different scales; one does not see significant transient fluctuations in the variations. The fluctuations extracted at different levels through the DWT are shown in Figures 4(a) and 4(b). Having seen the periodic behavior of the data, and having extracted the fluctuations through the DWT, in the next section we discuss the wavelet-based method for analyzing fluctuations to identify their fractal behavior.
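As a concrete illustration of Eqs. (1)-(4), the sketch below computes Morlet wavelet coefficients by direct convolution in Python (NumPy assumed). It is a minimal stand-in for the analysis described above, not the authors' code; the synthetic random-walk input, the unit-energy 1/√s normalisation and the scale grid are illustrative assumptions.

```python
import numpy as np

def morlet(n, s, omega0=6.0):
    """Morlet wavelet of Eq. (1) at integer lag n, dilated to scale s."""
    eta = n / s
    return np.pi ** -0.25 * np.exp(1j * omega0 * eta - eta ** 2 / 2.0)

def cwt(x, scales, omega0=6.0):
    """Wavelet coefficients W_n(s) of Eq. (3): convolve the series with
    the scaled, translated (conjugated) Morlet wavelet at each scale."""
    N = len(x)
    lags = np.arange(N) - N // 2                 # the offsets (n - n')
    W = np.empty((len(scales), N), dtype=complex)
    for k, s in enumerate(scales):
        # 1/sqrt(s) normalisation is an assumption (unit-energy convention)
        psi = np.conj(morlet(lags, s, omega0)) / np.sqrt(s)
        W[k] = np.convolve(x, psi, mode="same")
    return W

# illustration on a synthetic random walk standing in for the BSE series
x = np.cumsum(np.random.randn(2903))
scales = 2.0 ** np.arange(3, 10)                 # s = 8, 16, ..., 512 (Fig. 3)
W = cwt(x - x.mean(), scales)
P = np.abs(W).sum(axis=0)                        # scale-summed P_n of Eq. (4)
```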

3 Discrete Wavelet Based Method for Characterizing Multifractal Behavior

We have observed earlier the self-similar nature of the fluctuations in the wavelet domain. In the following we describe the procedure of the wavelet based method. From the financial (BSE stock index) time series x(t), the scaled logarithmic returns G(t) are defined as

G(t) ≡ (1/σ)[log(x(t + 1)) − log(x(t))],  t = 1, 2, ..., (N − 1);     (6)

here σ is the standard deviation of x(t). The profile of the time series is obtained from the cumulative sum,


Y(i) = Σ_{t=1}^{i} G(t),  i = 1, ..., N − 1.     (7)

Next, the wavelet transform is applied to the time series profile Y(i) to separate the fluctuations from the trend. The trend is extracted by discarding the high-pass coefficients and reconstructing only with the low-pass coefficients through the inverse wavelet transform. The fluctuations are then extracted at each level by subtracting the trend from the original time series. This procedure is followed to extract fluctuations at different levels. Here the wavelet window size at each level of decomposition is taken as the scale s. We have made use of Daubechies (Db) wavelets for the extraction of the desired polynomial trend. Although the Daubechies wavelets extract the fluctuations effectively, their asymmetric nature and the wrap-around problem affect the precision of the values. We therefore apply the wavelet transform to the reversed profile to extract a new set of fluctuations; these fluctuations are then reversed and averaged with the fluctuations obtained earlier. The extracted fluctuations are subdivided into M_s = int(N/s) non-overlapping segments, where N is the length of the fluctuation series and s is the scale. The qth-order fluctuation function F_q(s) is then obtained by squaring and averaging the fluctuations over all segments:

F_q(s) ≡ { (1/(2M_s)) Σ_{b=1}^{2M_s} [F²(b, s)]^{q/2} }^{1/q}.     (8)

Here q is the order of the moment. The above procedure is repeated for different scale sizes and for different values of q (except q = 0). The power-law scaling behavior is obtained from the fluctuation function,

F_q(s) ∼ s^{h(q)},     (9)

on a logarithmic scale for each value of q. If the order q = 0, direct evaluation leads to divergence of the scaling exponent. In that case, logarithmic averaging has to be employed to find the fluctuation function:

F_0(s) ≡ exp{ (1/(4M_s)) Σ_{b=1}^{2M_s} ln[F²(b, s)] }.     (10)

For a monofractal time series, the h(q) values are independent of q, while for a multifractal time series the h(q) values depend on q. h(q = 2) = H, the Hurst scaling exponent, is a measure of the fractal nature, with 0 < H < 1. H < 0.5 and H > 0.5 reveal, respectively, the anti-persistent and persistent nature of the time series, whereas H = 0.5 corresponds to a random time series.
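The following Python sketch (PyWavelets assumed) outlines one possible implementation of the procedure of this section: wavelet detrending of the profile, segment-wise fluctuation functions per Eq. (8), and h(q) as the log-log slope of Eq. (9). The function names, the choice of decomposition levels as scales and the omission of the reverse-profile averaging step are simplifying assumptions, not the authors' code.

```python
import numpy as np
import pywt

def fluctuations(Y, level, wavelet="db6"):
    """Trend = inverse transform of the low-pass coefficients only; the
    fluctuation at this level is the profile minus that trend."""
    coeffs = pywt.wavedec(Y, wavelet, level=level)
    for i in range(1, len(coeffs)):          # discard high-pass (detail) parts
        coeffs[i] = np.zeros_like(coeffs[i])
    trend = pywt.waverec(coeffs, wavelet)[: len(Y)]
    return Y - trend

def h_of_q(x, qs, levels=(4, 5, 6, 7, 8), wavelet="db6"):
    G = np.diff(np.log(x))
    G = G / G.std()                          # scaled returns, Eq. (6); the
                                             # normalisation does not affect h(q)
    Y = np.cumsum(G)                         # profile, Eq. (7)
    log_s, log_F = [], {q: [] for q in qs}
    for level in levels:
        s = 2 ** level                       # wavelet window size as scale
        F = fluctuations(Y, level, wavelet)
        Ms = len(Y) // s
        F2 = (F[: Ms * s].reshape(Ms, s) ** 2).mean(axis=1)   # F^2(b, s)
        log_s.append(np.log(s))
        for q in qs:                         # Eq. (8), q != 0; assumes F2 > 0
            log_F[q].append(np.log(np.mean(F2 ** (q / 2.0)) ** (1.0 / q)))
    # h(q) is the slope of log Fq(s) against log s, Eq. (9)
    return {q: np.polyfit(log_s, log_F[q], 1)[0] for q in qs}

# e.g. h = h_of_q(bse_high_prices, qs=[-4, -2, 2, 4]); h[2] estimates H
```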


4 Data Analysis and Observations

The Wavelet Based Fluctuation Analysis (WBFA) used here was carried out on the time series profile obtained from the returns and the shuffled returns. The analysis with the Db-6 wavelet reveals a multifractal nature with long-range correlation behavior, shown in Figure 5(a) (top panel). For the sake of comparison, the MF-DFA method with a quadratic polynomial fit is also used, and it complements the wavelet-based method (see Figure 5(a), bottom panel). The Hurst scaling exponent reveals that the time series possesses persistent behavior, as shown in Table 1. The semi-log plot of the distribution of the logarithmic returns of the BSE index and of Gaussian white noise is shown in Figure 5(b). The fat tails for large fluctuations and the sharper behavior near the origin for small fluctuations are clearly seen.


Fig. 5 (a) h(q) values of the BSE Sensex price index using (top panel) WBFA (Db-6) and (bottom panel) MF-DFA (quadratic) analysis; (b) semi-log plot of the distribution of BSE Sensex index returns and of Gaussian white noise

Table 1 Hurst scaling exponents, h(q = 2), from the WBFA (Db-6) and MF-DFA (quadratic) analyses of the returns and the shuffled returns

                         WBFA     WBFA (shuffled)   MF-DFA   MF-DFA (shuffled)
Hurst Scaling Exponent   0.5486   0.5218            0.5590   0.5420

We have also analyzed the scaling behavior through Fourier power spectral analysis,

P(s) = | ∫ Y(t) e^{−2πist} dt |².     (11)


Fig. 6 (a) Fourier power spectral analysis of the BSE price index; (b) continuous wavelet analysis shows two dominant periodic modulations, at scales 119 and 194; (c) the CWT coefficients at the above scales (119 and 194) as a function of time (For the coloured version of this figure, contact the author)

Here Y(t) is the accumulated fluctuation after subtracting the mean Ȳ. It is well known that P(s) ∼ s^{−α}. For the BSE price index time series the scaling exponent is α = 2.11, which reveals long-range correlated behavior, as shown in Figure 6(a). The scaling exponent α can be compared with the Hurst exponent through the relation α = 2H + 1. The wavelet-based method and the FFT give comparable results.
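A minimal sketch of the check just described, assuming NumPy: estimate α as the negative slope of the log power spectrum of the profile, per Eq. (11), and compare with 2H + 1. The single global fit over all frequencies is a simplification.

```python
import numpy as np

def spectral_alpha(Y):
    """Exponent alpha in P(s) ~ s^(-alpha), Eq. (11), from a log-log fit."""
    Y = Y - Y.mean()
    P = np.abs(np.fft.rfft(Y)) ** 2          # power spectrum
    f = np.fft.rfftfreq(len(Y))
    keep = f > 0                             # drop the zero frequency
    slope, _ = np.polyfit(np.log(f[keep]), np.log(P[keep]), 1)
    return -slope                            # compare with 2H + 1

# a Brownian-like profile should give alpha close to 2 (H close to 0.5)
Y = np.cumsum(np.random.randn(4096))
print(spectral_alpha(Y))
```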

5 Conclusion

We have analyzed the BSE high price index values in daily trading. A detailed study reveals multifractal behavior and a non-statistical distribution of the returns. The distribution function of the returns also shows fat-tail behavior. The analyses through the wavelet-based method, the MF-DFA method and the Fourier power spectrum all reveal a persistent nature, as well as the multifractal behavior, of the BSE price index values. The multifractal nature of the time series may arise from herding behavior, the intrinsic nonlinear character of the market and other control mechanisms. We intend to study the fluctuations in price indices of different countries for this time period. This may reveal the physical origin of the observed time periods as well as of the multifractal character.

Acknowledgements This paper is dedicated to the memory of Prof. J. C. Parikh, who was one of the founding fathers of econophysics in India. PM, one of the authors, would like to thank the Department of Science and Technology for financial support (DST-CMS GoI Project No. SR/S4/MS:516/07 dated 21.04.2008).


References

1. Plerou V et al (2000) Physica A 279:443
2. Bachelier L (1900) Ann. Sci. École Norm. Sup. 3:21
3. Pareto V (1897) Cours d'Économie Politique, Lausanne, Paris
4. Lévy P (1937) Théorie de l'Addition des Variables Aléatoires, Gauthier-Villars, Paris
5. Mandelbrot B B (1963) J. Bus. 36:394-419
6. Mantegna R N, Stanley H E (2000) Introduction to Econophysics: Correlations and Complexity in Finance, Cambridge University Press, Cambridge
7. Bouchaud J P, Potters M (2000) Theory of Financial Risk, Cambridge University Press, Cambridge
8. Farmer J D (1999) Comput. Sci. Eng. 1:26
9. Kondor I, Kértesz J (eds.) (2000) Econophysics: An Emerging Science, Kluwer, Dordrecht
10. Mantegna R N (ed.) (1999) Proceedings of the International Workshop on Econophysics and Statistical Finance, Physica A (special issue) 269:1
11. Bouchaud J P, Alström P, Lauritsen K B (eds.) (2000) Application of Physics in Financial Analysis, Int. J. Theor. Appl. Finance (special issue) 3
12. Takayasu H (ed.) (2002) The Application of Econophysics: Proceedings of the Second Nikkei Econophysics Symposium, Springer
13. Mandelbrot B B (1999) The Fractal Geometry of Nature, Freeman, San Francisco
14. Daubechies I (1992) Ten Lectures on Wavelets, SIAM, Philadelphia
15. Mallat S (1999) A Wavelet Tour of Signal Processing, Academic Press
16. Burrus C S, Gopinath R A, Guo H (1998) Introduction to Wavelets and Wavelet Transforms, Prentice Hall, New Jersey
17. Manimaran P, Panigrahi P K, Parikh J C (2005) Phys. Rev. E 72:046120
18. Manimaran P, Lakshmi P A, Panigrahi P K (2006) J. Phys. A 39:L599
19. Oświęcimka P, Kwapień J, Drożdż S (2006) Phys. Rev. E 74:016103
20. Manimaran P, Panigrahi P K, Parikh J C (2008) Physica A 387:5810
21. Manimaran P, Panigrahi P K, Parikh J C (2009) Physica A 388:2306
22. Hu K, Ivanov P Ch, Chen Z, Carpena P, Stanley H E (2001) Phys. Rev. E 64:011114
23. Gopikrishnan P et al (1999) Phys. Rev. E 60:5305
24. Plerou V et al (1999) Phys. Rev. E 60:6519
25. Chen Z, Ivanov P Ch, Hu K, Stanley H E (2002) Phys. Rev. E 65:041107
26. Matia K, Ashkenazy Y, Stanley H E (2003) Europhys. Lett. 61:422
27. Hwa R C et al (2005) Phys. Rev. E 72:066308
28. Ohashi K, Amaral L A N, Natelson B H, Yamamoto Y (2003) Phys. Rev. E 68:065204
29. Xu L et al (2005) Phys. Rev. E 71:051101
30. Brodu N, eprint nlin.CD/0511041
31. Gu G F, Zhou W X (2006) Phys. Rev. E 74:061104
32. Mohaved M S, Hermanis E (2008) Physica A 387:915
33. Obtained from http://in.finance.yahoo.com
34. Farge M (1992) Annu. Rev. Fluid Mech. 24:395
35. Struzik Z R (2001) Physica A 296:307
36. Bartolozzi M et al (2005) Physica A 350:451
37. Bartolozzi M (2007) Eur. Phys. J. B 57:337
38. Simonsen I, Hansen A, Nes O-M (1998) Phys. Rev. E 58:2779
39. Connor J, Rossiter R (2005) Studies in Nonlinear Dynamics and Econometrics 9:1
40. Ahalpara D P et al (2008) Pramana - J. Phys. 71:459
41. Torrence C, Compo G P (1998) Bull. Amer. Meteorol. Soc. 79:61
42. Goupillaud P, Grossman A, Morlet J (1984) Geoexploration 23:85-102
43. Hurst H E (1951) Trans. Am. Soc. Civ. Eng. 116:770
44. Feder J (1988) Fractals, Plenum Press, New York
45. Arneodo A et al (1988) Phys. Rev. Lett. 61:2284; Muzy J F et al (1993) Phys. Rev. E 47:875
46. Peng C K et al (1994) Phys. Rev. E 49:1685
47. Kantelhardt J W et al (2003) Physica A 330:240

Modeling Saturation in Industrial Growth

Arnab K. Ray

Abstract Long-time saturation in industrial growth has been modeled by a logistic equation of arbitrary degree of nonlinearity. Equipartition between nonlinearity and exponential growth in the integral solution of this logistic equation gives a nonlinear time scale for the onset of saturation. Predictions can be made about the limiting values of the annual revenue and the human resource content that an industrial organization may attain. These variables have also been modeled to set up an autonomous first-order dynamical system, whose equilibrium condition forms a stable node (an attractor state) in a related phase portrait. The theoretical model has received strong support from all relevant data pertaining to the well-known global company, IBM.

Arnab K. Ray
Homi Bhabha Centre for Science Education, TIFR, V. N. Purav Marg, Mankhurd, Mumbai 400088, India. e-mail: [email protected]

1 Introduction

The present global economic recession has made it imperative to devise mathematical models of high quantitative accuracy for understanding economic stagnation in industries [1]. The health of a company can be judged from the revenue that it generates and the human resource that it employs in achieving its objectives. Precise numerical measures of these variables can be made, affording a clear understanding of industrial growth patterns and, consequently, allowing a mathematical model to be framed for them. Even when an industrial organization displays noticeable growth in its early stages, there is a saturation of this growth towards a terminal end after the elapse of a certain scale of time [2]. As the system size begins to grow through time, a self-regulatory mechanism drives the system towards a terminal state. Therefore, the effectiveness of any mathematical model that purports to explain saturation in industrial growth lies in studying the global growth behavior of a company, whose


operating space is on the largest available scale, and, as an additional advantage of operating on these scales, whose overall growth pattern becomes free of local inhomogeneities. To this end the growth trends of the annual revenue and the human resource strength of the multi-national company, IBM, have been analyzed here. Data about its annual revenue generation, the net annual earnings and the cumulative human resource strength, dating from the year 1914, have been published on the company website.1 Both the capacity for revenue generation and the human resource content of IBM, over a period of more than ninety years of the existence of the company, show an initial phase of exponential growth, to be followed later by a slow drive towards saturation. An understanding of the general nature of this saturation, and other adversities lying ahead, can enable a company to apply corrective measures at the right juncture. This ought to be the guiding principle behind a feasible management strategy for long-term growth, especially in the case of organizations that are still in their early stages. As a result there will be a more effective formulation and implementation of innovative growth strategies, like the “Blue Ocean” strategy [3].

2 A Nonlinear Model for Growth

Regarding the study of growth from industrial data, a preceding work [4] has pedagogically underlined the relevance of various model differential equations of increasing complexity. Saturation in growth can be described by a logistic differential equation, as is done in studies of population growth [5, 6, 7]. Along the same lines, a generalization of the logistic prescription to an arbitrary degree of nonlinearity is posited here to follow industrial growth through time, t. Such a logistic equation will read as

φ̇(t) = λφ(1 − ηφ^α),     (1)

where φ can be any relevant variable to gauge the health of a firm, like its annual revenue (or cumulative revenue growth) and human resource strength. The parameters α and η are, respectively, the nonlinear saturation exponent and the "tuning" parameter for nonlinearity. A primary factor contributing to growth saturation is the space within which an organization can thrive. If this space is constrained to be of finite size (practically it has to be so), then terminal behavior becomes a certainty. This brings growth to a slow halt. Indeed, saturation in growth due to finite-size effects is understood well by now in other situations of economic interest where physical models can be applied [8]. The adverse conditions against growth can be further aggravated by the presence of rival organizations competing for the same space. Integration of Eq. (1), which is a nonlinear differential equation, yields the general integral solution (for α ≠ 0),

1 http://www-03.ibm.com/ibm/history.



(a) The lower curve gives the model fit for the annual revenue generated by IBM. The fit by the theoretical model agrees well on nonlinear time scales, for α = 1, λ = 0.145 and η = 10^{-5}. The cumulative growth of the annual revenue generated by IBM is fitted by the upper curve. This fit is given by η = 4 × 10^{-7}


(b) The growth of the human resource strength against time is fitted globally by the theoretical model for α = 1, λ = 0.09 and η = 2 × 10^{-6}. There has been a noticeable depletion of human resource on the same nonlinear saturation time scale as for revenue growth, i.e. 75–80 years

Fig. 1 Fit of the logistic equation with revenue and human resources growth



φ(t) = [η + c^{−α} exp(−αλt)]^{−1/α},     (2)

in which c is an integration constant. The fit of the foregoing integral solution to the data is shown in Figure 1, whose left panel gives a log-log plot of the annual revenue, R, that IBM has generated over time, t. Here the annual revenue is measured in millions of dollars, and time is scaled in years. The data and the theoretical model given by Eq. (2) agree well with each other, especially on mature time scales, when nonlinear saturation is conspicuous. The left panel of Figure 1 also shows the plot of the cumulative growth of the IBM revenue. While the early stages of growth are exponential, the later stages shift into a saturation mode. The limiting value of this saturated state is given by φ̇ = 0 in Eq. (1), leading to φ_sat = η^{−1/α}. Using the values of α and η by which the saturation properties of the plot in Figure 1 have been fitted, a prediction can be made that the maximum possible annual revenue that IBM can generate will be about 100 billion dollars. Similar claims can be made about the limiting value of the human resources of IBM, which is another important indicator of the prevailing state of a company. The data for the human resource of IBM are plotted in the right panel of Figure 1. Going by the values of α and η needed for the model fit here, the maximum possible human resource that IBM can viably employ is predicted to be about 500,000 strong. A point of great interest here is that the growth data have been fitted well by the simplest possible case of nonlinearity, given by α = 1. This places the present mathematical problem in the same class as the logistic differential equation devised by Verhulst to study population dynamics [5, 6, 7]. This equation has also been applied satisfactorily to a wide range of other cases involving growth [5, 6, 7].



(a) The net annual earnings (in millions of dollars) made by IBM have shown steady growth, except for the early years of the 1990s, about 80 years into the existence of the company. Around this time the company suffered major losses in its net earnings, and this time scale corresponds very closely to the time scale for the onset of nonlinear saturation in revenue growth


(b) The straight-line fit validates the logistic equation model. The slope of the straight line is given as 1.4, and it closely matches the value of 1.6 that can be obtained from the parameter fitting in Figure 1. The cusp at the bottom left is due to the loss of human resource. Growth, corresponding to the positive slope in this plot, can be modeled well by the logistic equation

Fig. 2 The two plots here support modeling by the logistic equation

The time scale for the onset of nonlinearity can be fixed by requiring the two terms on the right-hand side of Eq. (2) to be in rough equipartition with each other. This yields the nonlinear time scale as t_nl ∼ −(αλ)^{−1} ln|ηc^α|, from which, making use of the values of α, η and c needed to calibrate the IBM revenue data (both the annual revenue and the cumulative revenue), one gets t_nl ∼ 75–80 years. An indirect confirmation of the relevance of this time scale comes from the plot in the left panel of Figure 2, which shows the growth of the net annual earnings of IBM (labelled P, the profit, scaled in millions of dollars) against time, t (in years). The company suffered major reverses in its net earnings around 1991-1993 (up to 8 billion dollars in 1993), which was indeed very close to 80 years after the company's inception in 1914.
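The quantities just discussed follow directly from Eqs. (1) and (2). Below is a minimal Python sketch, with the integration constant c set to an illustrative value (in a real fit c is fixed by the initial condition φ(0), which is not quoted here); the parameter values are those stated for the annual-revenue fit.

```python
import numpy as np

def phi(t, alpha, lam, eta, c):
    """Integral solution, Eq. (2)."""
    return (eta + c ** -alpha * np.exp(-alpha * lam * t)) ** (-1.0 / alpha)

def phi_sat(alpha, eta):
    """Terminal value, from phi-dot = 0 in Eq. (1): phi_sat = eta^(-1/alpha)."""
    return eta ** (-1.0 / alpha)

def t_nl(alpha, lam, eta, c):
    """Equipartition of the two terms in Eq. (2): t_nl ~ -ln|eta c^alpha|/(alpha lam)."""
    return -np.log(abs(eta * c ** alpha)) / (alpha * lam)

alpha, lam, eta = 1.0, 0.145, 1e-5   # values quoted for the annual-revenue fit
print(phi_sat(alpha, eta))           # 1e5 million dollars, i.e. ~100 billion
c = 1.0                              # illustrative choice; c follows from phi(0)
print(t_nl(alpha, lam, eta, c))      # ~79 years, of the order of 75-80 years
```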

3 A Dynamical Systems Modeling

It is clear that there is a strong correlation among the variables by which the state of an industrial organization is monitored. The concept of the "Balanced Scorecard" is somewhat related to this principle [9]. The growth rate of any relevant variable will have a correlated functional dependence on the current state of all the other variables. If an industrial organization generates enough revenue, it becomes financially viable for it to maintain a sizeable human resource pool, while the human resource strength will translate into a greater ability to generate revenue. In this


manner both the revenue and the human resource content of an organization will sustain the growth of each other. Considering a general revenue variable, R (which can be either the annual revenue or the cumulative revenue), its coupled dynamic growth along with the human resource, H, can be stated mathematically by the relations Ṙ = ρ(R, H) and Ḣ = σ(R, H). This coupled set of autonomous first-order differential equations forms a two-dimensional dynamical system, whose equilibrium condition is obtained when Ṙ = Ḣ = 0. The corresponding coordinates for this condition in the H-R plane may be labelled (H₀, R₀). Since the terminal state implies the cessation of all growth in time, it is now possible to argue that the equilibrium state in the H-R plane actually represents a terminal state in real-time growth. One might describe the individual growth patterns of R and H by simply using an uncoupled logistic equation for either variable. This will go as Ṙ(t) = λ_r R(1 − η_r R^{α_r}) and Ḣ(t) = λ_h H(1 − η_h H^{α_h}), with the subscripts r and h in the parameters α, λ and η indicating that R and H will each, in general, have its own set of parameter values. The integral solution in the H-R plane can be transformed into a compact power-law form, v = κu^β, under the definitions v = R^{−α_r} − η_r, u = H^{−α_h} − η_h, and β = (α_r/α_h) × (λ_r/λ_h), with κ an integration constant. The power-law behavior implies that a log-log plot of v against u will be a straight line with slope β. This is shown in the right panel of Figure 2. In this plot, v has been defined in terms of the cumulative revenue, and the slope of the resulting straight line is β ≃ 1.4, which is quite close to the value of β ≃ 1.6 found simply by taking the ratio of the respective theoretical values of λ chosen to fit the empirical data in Figure 1. The cusp in the bottom left corner of the plot has arisen because of an irregular depletion of human resource in IBM in the early 1990s. However, the lower arm of the cusp has nearly the same positive slope as the straight-line fit. This shows that intermittent deviations do not affect the overall course of the evolutionary growth process [6]. By use of the logistic equation, the limiting state for industrial growth is represented by a stable node in the phase portrait of an autonomous first-order dynamical system [5]. Extending this argument, the limiting state can be perceived to be an attractor state, towards which there will be an asymptotic approach through an infinite passage of time [5].

Acknowledgements The author thanks A. Basu, J. K. Bhattacharjee, S. Bhattacharya, B. K. Chakrabarti, I. Dutta, T. Ghose, W. C. Kim, A. Kumar, A. Marjit, S. Marjit, S. Roy Chowdhury, H. Singharay, J. Spohrer and V. M. Yakovenko for useful comments and suggestions. A. Varkey helped in collecting data.

References

1. Bouchaud J-P (2008) Economics needs a scientific revolution, Nature 455:1181
2. Aghion P, Howitt P (1998) Endogenous Growth Theory, The MIT Press, Cambridge, Massachusetts
3. Kim WC, Mauborgne R (2005) Blue Ocean Strategy, Harvard Business School Press, Boston
4. Marjit A, Marjit S, Ray AK (2007) Analytical modelling of terminal properties in industrial growth, arXiv:0708.3467
5. Strogatz SH (1994) Nonlinear Dynamics and Chaos, Addison-Wesley Publishing Company, Reading, MA
6. Montroll EW (1978) Social dynamics and the quantifying of social forces, Proceedings of the National Academy of Sciences of the USA 75:4633
7. Modis T (2002) Predictions 10 Years Later, Growth Dynamics, Geneva
8. Mantegna RN, Stanley HE (2000) An Introduction to Econophysics, Cambridge University Press, Cambridge
9. Kaplan RS, Norton DP (1996) The Balanced Scorecard, Harvard Business School Press, Boston

The Kuznets Curve and the Inequality Process

John Angle, François Nielsen, and Enrico Scalas

John Angle
Inequality Process Institute, Post Office Box 215, Lafayette Hill, Pennsylvania 19444-0215, USA. e-mail: [email protected]

François Nielsen
Department of Sociology, University of North Carolina, Chapel Hill, North Carolina 27514, USA. e-mail: [email protected]

Enrico Scalas
Department of Advanced Sciences and Technology, Laboratory on Complex Systems, East Piedmont University, Via Michel 11, I-15100 Alessandria, Italy. e-mail: [email protected]

Abstract Four economists, Mauro Gallegati, Steven Keen, Thomas Lux, and Paul Ormerod, published a paper after the 2005 Econophysics Colloquium condemning conservative particle systems as models of income and wealth distribution. Their critique made science news: coverage in a feature article in Nature. A particle system model of income distribution is a hypothesized universal statistical law of income distribution. Gallegati et al. [1] claim that the Kuznets Curve, well known to economists, shows that a universal statistical law of income distribution is unlikely and that a conservative particle system is inadequate to account for income distribution dynamics. The Kuznets Curve is the graph of income inequality (ordinate) against the movement of workers from rural subsistence agriculture into more modern sectors of the economy (abscissa). The Gini concentration ratio is the preferred measure of income inequality in economics. The Kuznets Curve has an initial uptick from the Gini concentration ratio of the earned income of a poorly educated agrarian labor force. Then the curve falls in near linear fashion toward the Gini concentration ratio of the earned incomes of a modern, educated labor force as the modern labor force grows. The Kuznets Curve is concave down and skewed to the right. This paper shows that the iconic Kuznets Curve can be derived from the Inequality Process (IP), a conservative particle system, presenting a counter-example to Gallegati et al.'s claim. The IP reproduces the Kuznets Curve as the Gini ratio of a mixture of two IP stationary distributions, one characteristic of the wage income


distribution of poorly educated workers in rural areas, the other of workers with an education adequate for industrial work, as the mixing weight of the latter increases and that of the former decreases. The greater purchasing power of money in rural areas is taken into account.

1 Introduction

Four economists, Mauro Gallegati, Steven Keen, Thomas Lux, and Paul Ormerod attended the 2005 Econophysics Colloquium and published a paper in its proceedings condemning conservative particle systems as models of income distribution [1]. Their critique made science news: a feature news article in an issue of Nature. Their paper did a service to research on conservative particle systems as models of income distributions by raising its visibility and encouraging discussion. We agree with Gallegati et al. that a conservative particle system model of income distribution is a hypothesized universal statistical law. Gallegati et al. assert that the economics literature on the Kuznets Curve shows the unlikelihood that such a universal law exists. They write that in economics relationships between phenomena can change, and they claim a conservative particle system cannot account for such change. They give the Kuznets Curve as an example of change that they do not think a conservative particle system can explain [2, 3]. Simon Kuznets won the third Nobel Prize in economics for, inter alia, finding the Kuznets Curve. There is a literature in economics on the Kuznets Curve which continues today. Neither we nor Gallegati et al. [1] have seen in this literature a conservative particle system used to explain the Kuznets Curve. See Nielsen [4] for a review of Kuznets Curve studies in economics and sociology. Gallegati et al. view explaining the Kuznets Curve as an open problem. Kuznets [2, 3] observed that, during the industrialization of an agrarian economy, income inequality first rises and then falls. Gallegati et al. [1] write that there are "good reasons" for the Kuznets Curve. One reason they cite is the rising proportion of human capital in the labor force. Another is the shift of the labor force out of subsistence agriculture into the modern sector of manufacturing and services. We provide a counter-example to Gallegati et al.'s [1] claim that a conservative particle system cannot account for the Kuznets Curve. Gallegati et al. have no mathematical model behind the assertion of "good reasons" for the curve; they cite none.

2 The Kuznets Curve

The oldest and best known statistical law of income distribution is the Pareto Law, a broad statement of which is that all size distributions of personal income (in large populations defined geographically) are right skewed with gently tapering right tails, power series tails. In 1954, Simon Kuznets [3] announced a statistical law of personal

Fig. 1 An iconic Kuznets curve: Gini concentration ratio against proportion of population in the modern sector

earned income, now called the Kuznets Curve. By volume of literature generated, the Kuznets Curve approaches the fame of the Pareto Law. We examine the Kuznets Curve as the graph of the Gini concentration ratio of personal earned income (or a related income concept such as household income) against the movement of workers from low-skilled, poorly paid work in subsistence agriculture requiring little education into more productive modern sectors of the economy, requiring at least a secondary education and offering higher pay. Social scientists use the word 'inequality' casually to name any of several statistics of income when they find the values of these statistics disagreeable. Besides the Gini concentration ratio and the Lorentz Curve of which it is a summary statistic, measures such as %poor, %poor and %rich (with various income cut points for these categories), and dispersion (e.g., variance, interquartile range) have been used as indicators of inequality. These statistics do not necessarily covary [5]; Wolfson [5] terms the Gini concentration ratio the "gold standard" of income inequality statistics [5, p. 353]. See Kleiber and Kotz [6, p. 20-29, 164] for a discussion of the Gini concentration ratio and the Lorentz Curve. The iconic shape of the Kuznets Curve is an initial uptick in the Gini concentration ratio from that of the earned income of a poorly educated 100% agrarian labor force, a Gini higher than that characteristic of a modern economy, followed by a long, nearly linear decline to a modern Gini as the labor force shifts into the modern sector. The iconic Kuznets Curve is concave down, often called an "inverted U" although skewed to the right, with its right endpoint lower than its left endpoint. See, for an empirical example, Nielsen [4, p. 667], a graph of the Gini concentration of income as a function of the percent of a birth cohort that eventually enrolls in


secondary school in 56 countries circa 1970. Figure 1 is a stylized iconic Kuznets Curve. The present paper shows how a particular conservative particle system model of income distribution gives rise to the iconic Kuznets Curve as the Gini concentration ratio of the mixture of a model agrarian distribution of earned income and a model modern distribution, as the mixing weight goes from 100% agrarian to 100% modern. The particle system generating the model agrarian and model modern earned income distributions is the Inequality Process [7, 8, 9, 10], a conservative particle system. We use Gallegati et al.'s measure of the transition of a labor force from agrarian to modern: the acquisition of human capital as workers move from subsistence agriculture in rural areas to employment in the modern sector in cities. We take into account the greater purchasing power of a unit of currency in rural than in urban areas. Kuznets [3] argued that the shift of the labor force from the agrarian sector with low average income to the modern sector with higher average income produces a trajectory of the inequality of income in both labor forces combined that rises, levels off, and declines during the transition. Using point estimates of the agrarian and modern wage, the result follows for the Gini concentration ratio from its definition in the case of discrete observations [6, p. 164]:

Δ_n ≡ (1/(n(n − 1))) Σ_{i=1}^{n} Σ_{j=1}^{n} |x_i − x_j|     (1)

where Δ_n is Gini's mean difference and x_i is the income of the i-th recipient in a population of n recipients. The Gini concentration ratio, G_n, is

G_n ≡ Δ_n / (2µ),     (2)

where the mean income of the population is µ. If all agrarian workers earn an income of x_a and all modern workers earn an income of x_m, with x_m > x_a, then the number of nonzero terms contributing to Δ_n and G_n, i.e., |x_a − x_m| and |x_m − x_a|, is proportional to pq, where p is the proportion of agrarian workers, q the proportion of modern workers, and p + q = 1. Hence the concave-down curve of G_n plotted against q. Since µ increases as the proportion, q, of modern workers rises, the concave-down graph of the Gini concentration ratio, G, against q is skewed to the right. However, this result is not a satisfactory account of the empirical Kuznets Curve, since at the start and end points of the transition, i.e., q equal to 0.0 or 1.0, the Gini concentration ratio, (2), equals 0.0, a value never seen or approached empirically. At least two more considerations have to be taken into account to generate an empirically relevant Kuznets Curve.
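A short numerical illustration of Eqs. (1) and (2), assuming NumPy: with a two-point wage distribution, the Gini ratio is proportional to pq and vanishes at both endpoints, as argued above. The wage values and the population size are illustrative choices.

```python
import numpy as np

def gini(x):
    """Gini concentration ratio of Eqs. (1)-(2) for discrete observations."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    delta = np.abs(x[:, None] - x[None, :]).sum() / (n * (n - 1))   # Eq. (1)
    return delta / (2.0 * x.mean())                                 # Eq. (2)

# point-estimate wages: q modern workers at x_m = 2, the rest agrarian at x_a = 1
n = 1000
for q in (0.0, 0.25, 0.5, 0.75, 1.0):
    m = int(n * q)
    wages = np.concatenate([np.ones(n - m), 2.0 * np.ones(m)])
    print(q, round(gini(wages), 3))   # concave down in q; 0.0 at both endpoints
```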


3 Explaining the Empirical Kuznets Curve

The two considerations needed to account for the empirical Kuznets Curve are: (a) the difference in the purchasing power of money, or, equivalently, the difference in the cost of living, between the agrarian and modern sectors, and (b) the difference between the earned income distribution of the poorly educated agrarian labor force and that of more educated workers in the modern sector.

(a) The Kuznets Curve and the Metro-Nonmetro Gap in the Cost of Living in the U.S. Gallegati et al. [1] define the transition from agrarian to modern sectors of employment in terms of the education level of the labor force and the migration of labor from rural to urban areas. Nord [11] estimated the difference in the cost of living between the metro and nonmetro¹ U.S. in the 1990's. Joliffe [12] also estimated this difference. Nord estimated that the cost of living in the nonmetro U.S. was about 84% that of the metro; Joliffe estimated it at 79%. Taking the mean of these two estimates, 81.5%, implies that $1 of earned income in the nonmetro U.S. has the purchasing power of approximately $1.23 in the metro U.S. A similar difference in the purchasing power of currency exists between urban and rural areas worldwide. This difference is likely much greater in economies whose labor forces are transitioning out of subsistence agriculture to employment in the modern sector, a transition made in the U.S. in the 19th and early 20th centuries. The U.S. metro and nonmetro labor forces are similar, although nonmetro wages are lower, partly due to a lower cost of living in the nonmetro U.S. and partly due to the somewhat lower level of education of the nonmetro labor force.

(b) Modern and Agrarian Income Distributions and the Kuznets Curve. The 'metro' and 'nonmetro' concepts are the nearest approximation to the concepts 'modern' and 'agrarian' within the U.S. national statistical system. Besides the effect of the cost of living difference between metro and nonmetro areas on wage incomes, there is also the effect of the difference in the distribution of education in the metro and nonmetro labor forces.

1 The term 'rural' has a specific meaning in the U.S. Federal statistical system, a meaning farther from what the expression 'rural' in, for example, 'rural America' means than does the term 'nonmetropolitan' (nonmetro). A nonmetro county is a county not in a Metropolitan Statistical Area (MSA) as defined by the Office of Management and Budget (OMB), the regulator of the U.S. Federal statistical system. MSA's include core counties containing a city of 50,000 or more people or having an urbanized area of 50,000 or more and total area population of at least 100,000. Additional contiguous counties are included in the MSA if they are economically integrated with the core county or counties. The metropolitan status of every county in the U.S. is re-evaluated following the Decennial Census. While there has been a net decline in counties classified as nonmetro since 1961, the definition of nonmetro has remained roughly constant. A nonmetro wage income is defined here as the annual wage and salary income of an earner whose principal place of residence is in a nonmetro county. The percentage of the U.S. labor force thus classified has declined in the data on which Figures 2 and 3 are based from about 31 to 18 percent from 1961 to 2003.

Fig. 2 The U.S. metro distribution of annual wage and salary income conditioned on education (averaged by five-year intervals, 1961-2001; wage incomes from $1 to $60,000 in 2003 dollars; partial distributions of the more educated closer to viewer; relative frequencies from 0 to .4)

Fig. 3 The U.S. nonmetro distribution of annual wage and salary income conditioned on education (averaged by five-year intervals, 1961-2001; wage incomes from $1 to $60,000 in 2003 dollars; partial distributions of the more educated closer to viewer; relative frequencies from 0 to .5)

The distributions of annual wage income conditioned on education in the metro and nonmetro U.S. are similar. See Figures 2 and 3. The two parameter gamma pdf offers a good fit to the distribution of annual income in the U.S. conditioned on education in the period 1961-2003 [13, 9]. The mixture of the partial distributions of this


conditional distribution (each partial distribution weighted by its share of the labor force) has a right tail heavy enough to account for the National Income and Product Account estimates of aggregate wage income in the U.S., an approximately Pareto right tail [14, 15]. The shape parameters of the gamma pdfs fitted to partial distributions of the distribution of annual wage income conditioned on education scale from low to high with worker education in the whole U.S. [13, 9] (see Table 1). The two parameter gamma pdf is:

f(x) ≡ (λ^α / Γ(α)) x^{α−1} e^{−λx}     (3)

where x > 0, x is interpreted as earned income, α is the shape parameter, λ is the scale parameter, and (3) is referred to as GAM(α, λ). In terms of a gamma pdf model of the earned income distribution of the whole U.S. labor force, a mixture of the metro (m) and nonmetro (nm) distributions, the Kuznets Curve is the graph of G, the Gini concentration ratio of h(x), plotted against q, the proportion metro, where h(x) is:

h(x) ≡ p (λ_nm^{α_nm} / Γ(α_nm)) x^{α_nm − 1} e^{−λ_nm x} + q (λ_m^{α_m} / Γ(α_m)) x^{α_m − 1} e^{−λ_m x}
     ≡ p GAM(α_nm, λ_nm) + q GAM(α_m, λ_m)     (4)

and p + q = 1. Here α_nm is the shape parameter of the gamma pdf model of the nonmetro wage income distribution and λ_m is the scale parameter of the gamma pdf model of the metro wage income distribution. The two parameter gamma pdf is not in general closed under mixture, i.e., h(x) is not itself a two parameter gamma pdf unless either p or q equals 0.

Highest Level of Education Estimate of shape parameter, ai , of ith partial distribution Eighth Grade or Less Some High School High School Graduate Some College College Graduate Post Graduate Education

1.2194 1.4972 1.8134 2.0718 2.8771 3.7329

132

John Angle, Franc¸ois Nielsen, and Enrico Scalas

Most of the workers in the lowest level of education in Table 1 were close to the upper limit of that category. A fully agrarian labor force, in the sense of a labor force uninvolved with an industrial economy, would be largely illiterate and, extrapolating from Table 1, would have a shape parameter fitted to their earned income distribution distinctly smaller than 1.2. To extrapolate conservatively, we specify the shape parameter of the gamma pdf of an agrarian distribution of earned income as 1.0. For much of the 20th century in the U.S. a high school diploma (completion of secondary education) was the standard qualification for industrial, ”blue collar” labor. We take the gamma shape parameter of U.S. high school graduates, 1.8, as the model of earned income distribution of the modern sector of an economy. The Gini Concentration Ratio of a Gamma PDF and a Mixture of Two Gamma PDF’s McDonald and Jensen [16] give the Gini concentration ratio, GΓ , of a two parameter gamma pdf (3) as:

Γ (α + 12 ) GΓ = √ . πΓ (α + 1)

(5)

GΓ is a monotonically decreasing function of a. GΓ = .5 when a = 1.0. The G of a mixture of gamma pdfs cannot be expressed, in general, as a linear function of the GΓ ’s of the gamma pdf summands. The GΓ of a gamma pdf is a function of its shape parameter alone. The G of a mixture of two gamma pdfs is, in general, a function of all four gamma parameters. There is no simple expression for the Gini concentration ratio of a mixture of two gamma pdfs with distinct shape and scale parameters. However, the G of h(x), (4), can be found by numerically integrating the Lorentz Curve of h(x) and subtracting that integral from the integral of the Lorentz Curve of perfect equality. The Gini concentration ratio of h(x) is twice that difference. See Kleiber and Kotz [6] for a discussion of the Gini concentration ratio as a summary statistic of the Lorentz Curve. Does the Greater Purchasing Power of Money in the Agrarian Sector Account for the Kuznets Curve? If a unit of currency has greater purchasing power for the agrarian labor force than the modern labor force, an agrarian wage income with purchasing power equal to that in the modern sector is smaller. Assuming that education levels in both the rural and urban labor force were equal, gamma models of the wage income in both sectors will differ only in their scale parameters, i.e., GAM(aM , λM ) is the model of the distribution of the modern sector, GAM(aA , λA ) the model of the agrarian sector, aM = aA and λM < λA . Suppose the purchasing power of a unit of currency in the agrarian sector is twice that of the modern sector, i.e., λA = 2.0λM . Since the mean of the two parameter gamma pdf model is a/λ , mean wage income in the modern sector is twice that of the agrarian sector. Figure 4 graphs the Gini concentration ratio of the mixture of the two gamma pdfs, h(x) = p GAM(aA = 1.0, λA = 2.0) + q GAM(aM = 1.0, λM = 1.0), as q, the proportion in the modern sector, goes from 0.0 to 1.0. Figure 4 shows that when the purchasing power of a unit of currency in the agrarian sector is twice that in the modern sector, that difference alone cannot

133

0.50

Gini concentration ratio 0.51 0.52

0.53

The Kuznets Curve and the Inequality Process

0.05

0.20 0.35 0.50 0.65 0.80 0.95 Proportion of Population in Modern Sector

Fig. 4 Gini concentration ratio of a mixture plotted against proportion of population in modern sector

produce the iconic Kuznets Curve of Figure 1. Figure 4 shows 1) the Gini concentration ratios of the 100% agrarian and the 100% modern labor forces as equal, and 2) the Kuznets Curve as nearly symmetric. Thus, Figure 4’s hypothesis is not empirically relevant. Does the Rise of the Educational Level of the Labor Force during the Agrarian to Modern Transition Account for the Kuznets Curve? Suppose that there is no difference in the purchasing power of a unit of currency received by a worker in the agrarian sector and a worker in the modern sector (λA = λM = 1.0), but rather there is a substantial difference in education and a concomitant difference in the shape parameters of the gamma pdfs fitting the distributions of earned income in each sector. Let the shape parameter of the gamma pdf model of wage income distribution in the agrarian sector be aA = 1.0, i.e., somewhat smaller than the shape parameter of the gamma pdf fitted to the wage income distribution of U.S. workers with eight years or less of elementary schooling. Let the shape parameter of the gamma pdf model of wage income distribution in the modern sector be aA = 1.8, i.e., the estimate of the shape parameter of the gamma pdf fitted to the wage income distribution of U.S. workers who completed high school (secondary education). Figure 5 shows the Gini concentration ratio of the mixture, h(x) = p GAM(aA = 1.0, λA = 1.0) + q GAM(aM = 1.8, λM = 1.0), as the mixing weight, q, the proportion in the modern sector, goes from 0.0 to 1.0. Figure 5 demonstrates that a rise in the education level of the labor force in its transition from the agrarian to the modern sectors accounts for the decrease in the Gini concentration ratio of earned income but not for the initial uptick of the curve.

John Angle, Franc¸ois Nielsen, and Enrico Scalas

0.38 0.40

Gini concentration ratio 0.42 0.44 0.46 0.48

0.50

134

0.05

0.20 0.35 0.50 0.65 0.80 0.95 Proportion of Population in Modern Sector

Fig. 5 Gini concentration ratio of a mixture plotted against proportion of population in modern sector

Does the Joint Effect of Greater Purchasing Power in the Agrarian Sector and A Rise in Education Level in the Agrarian to Modern Transition Account for the Kuznets Curve? The greater purchasing power of a unit of currency accounts for the upward movement of the Kuznets Curve over its left side, i.e., as the fraction of the labor force in the modern sector moves up from 0. The rise in education level of the labor force accounts for the fall in the Kuznets Curve. Suppose the cost of living in the agrarian sector is 81.5% of that of the modern sector. The greater purchasing power of a unit of currency in the agrarian sector would be 1.23 that of the modern sector, using estimates of the greater purchasing power of a U.S. dollar in the nonmetro U.S. than the metro U.S. in the 1990’s. Suppose the education level of the modern sector results in a wage income distribution that is fitted by a gamma pdf with the same shape parameter as that fitted to the wage income distribution of high school graduates (secondary school completion) in the U.S., a gamma shape parameter of 1.8. The graph of the Gini concentration ratio of h(x) = p GAM(aA = 1.0, λA = 1.23) + q GAM(aM = 1.8, λM = 1.0) is shown in Figure 6. Figure 6 contains both defining features of the iconic Kuznets Curve, the initial uptick in the Gini concentration ratio over a small proportion of the labor force in the modern sector followed by a long, nearly linear decline to the lower Gini of the modern sector as the proportion of the labor force in the modern sector rises. If the cost of living in the agrarian sector is somewhat lower than 81.5% that of the modern sector - say 2/3, and if there is the difference in education levels of Figures 5 and 6, then Figure 1 results. Figure 1 is the iconic Kuznets Curve of the introduction to this paper. So, if a conservative particle system can account for a) the approxi-

135

0.38

Gini concentration ratio 0.42 0.46

0.50

The Kuznets Curve and the Inequality Process

0.05

0.20 0.35 0.50 0.65 0.80 0.95 Proportion of Population in Modern Sector

Fig. 6 Gini concentration ratio of a mixture plotted against proportion of population in modern sector

mately gamma distribution of wage income, and b) the shape of this distribution by level of education, we have a counter-example to Gallegati et al.’s proposition that a conservative particle system cannot account for the Kuznets Curve. The difference in cost of living by sector is an adjustment that is easily made.

4 The Inequality Process and The Kuznets Curve The earliest article we have found that develops a statistical mechanical theory of wealth distribution is Harro Bernadelli’s 1943 article in Sankhya, ¯ “The Stability of the Income Distribution”, a paper that recognizes that stable features of this distribution indicate its generation by a statistical law [17]. The Inequality Process is a candidate model of that law similar to the Kinetic Theory of Gases particle system model of statistical mechanics [18]. The Inequality Process [7, 8, 9, 10] randomly matches pairs of particles for competition for each other’s “wealth”, a positive quantity that is neither created nor destroyed in the particle encounter. The Inequality Process is thus a conservative particle system model, i.e., in the class of model criticized by Gallegati et al. The transition equations of the Inequality Process are: xit = xi(t−1) + di ωθ j x j(t−1) − (1 − dt ) ωψ t xi(t−1) x jt = x j(t−1) + di ωθ j x j(t−1) − (1 − dt ) ωψ t xi(t−1)

(6)

where xit is the wealth of particle i at time step t; ωθ j ∈ (0, 1) is the fraction lost in loss by particle j; ωψ i ∈ (0, 1) is the fraction lost in loss by particle i; and dt is a

136

John Angle, Franc¸ois Nielsen, and Enrico Scalas

sequence of dichotomous independent random variables equal to 1 with probability 1/2 and to 0 with probability 1/2. The provenance of the Inequality Process is a verbal theory of social science [8, 10] that identifies competition as the generator of income distributions. In particular, this source of the Inequality Process asserts that more skilled and productive workers are more sheltered in this competition, i.e., a particle with smaller ωψ represents a more productive worker. Consequently, the Inequality Process must show that particles more sheltered from competition have a distribution of wealth that fits the empirical distribution of earned income of more productive workers. We agree with Gallegati et al. that worker education is a measure of worker productivity. The Inequality Process must account for the distribution of earned income conditioned on education. The test of whether it does so is performed by equating an ωψ equivalence class of particles with observations on workers who report a given level of education and then by fitting the stationary distribution of particle wealth in the ωψ equivalence class to the income distribution of workers at that level of education. The Inequality Process passes this test [10]. Gallegati et al. are concerned about testing a model of the stock form of wealth against data on its flow form, income. Capitalizing aggregate earned income shows that most of the stock of wealth of an industrial economy is in human capital, largely the educations, of its workers. Earned income is the annuitization of human capital. Earned income is closely correlated with human capital. The substitution of one variable for another one that is closely correlated is well established in economics. See Friedman [19]. The Inequality Process’ stationary distribution of wealth in the ωψ equivalence class is approximately a gamma pdf [7, 8, 9, 10] for ωψ ’s estimated from earned income distributions conditioned on education: α

$$f(x) \equiv \frac{\lambda_{\psi t}^{\alpha_\psi}}{\Gamma(\alpha_\psi)}\, x^{\alpha_\psi - 1}\, e^{-\lambda_{\psi t}\, x} \qquad (7)$$

where x > 0 represents wealth (income) in the ω_ψ equivalence class; the shape parameter is α_ψ ≈ (1 − ω_ψ)/ω_ψ; the scale parameter is λ_ψt ≈ (1 − ω_ψ)/(ω̃_t μ_t), with ω̃_t the harmonic mean of the ω_ψ's and μ_t the unconditional mean of x at time t. μ_ψt is the mean of x in the ω_ψ equivalence class; μ_ψt ≈ α_ψ/λ_ψt = (ω̃_t μ_t)/ω_ψ. The Macro Model of the Inequality Process, (7), [20] represents the agrarian distribution of earned income as a gamma pdf with a larger ω_ψ (smaller α_ψ) and smaller μ_t (larger λ_ψt) than that of the modern distribution, i.e., it is able to reproduce Figure 1, the iconic Kuznets Curve, as the Gini concentration ratio of the mixture of the two pdf's, h(x): h(x) = p GAM(α_A, λ_A) + q GAM(α_M, λ_M), where the subscript A indicates the agrarian distribution and the subscript M the modern distribution. Reproduction of the iconic Kuznets Curve requires both a


lower cost of living in the agrarian sector and a higher education level of the labor force in the modern sector.²
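As a check on this account, the transition equations (6) are easy to simulate. The following is a minimal sketch, assuming random pairwise matching and two illustrative ω_ψ equivalence classes standing in for the lower- and higher-productivity workers of the text; all parameter values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def inequality_process(omega, encounters=500_000):
    """Minimal sketch of the transition equations (6): randomly matched
    pairs compete for wealth; the coin toss d decides the winner, and
    the loser gives up its fraction omega of wealth. Total wealth is
    conserved in every encounter (conservative particle system)."""
    x = np.ones(len(omega))
    for _ in range(encounters):
        i, j = rng.choice(len(omega), size=2, replace=False)
        d = rng.integers(0, 2)                     # d = 1: i wins, d = 0: j wins
        transfer = d * omega[j] * x[j] - (1 - d) * omega[i] * x[i]
        x[i] += transfer
        x[j] -= transfer
    return x

# Two hypothetical omega_psi equivalence classes:
omega = np.where(rng.random(1000) < 0.5, 0.6, 0.3)
x = inequality_process(omega)
for w in (0.6, 0.3):
    # Each class mean should approach (harmonic mean of omega * overall mean)/omega,
    # and each class distribution is approximately gamma, as in Eq. (7).
    print(w, x[omega == w].mean())
```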

5 Conclusions

The Inequality Process, a conservative particle system, implies its macro model, a model of its stationary distribution in each equivalence class of its particle parameter. The macro model of the Inequality Process (7) provides a counter-example to Gallegati et al.'s claim that the iconic Kuznets Curve is a dynamic empirical income phenomenon that a conservative particle system cannot explain. Kuznets (1965) thought the curve named after him resulted from the transition of a labor force from employment in the agrarian sector to employment in the modern sector. The macro model of the Inequality Process explains the Kuznets Curve the same way. Its explanation is more satisfactory because more relevant information is included and more of the features of the iconic Kuznets Curve are reproduced. We thank Gallegati et al. (2006) for stimulating discussion and research into particle system models of economic phenomena, and indeed for encouraging the present paper. Their paper succeeded in directing attention to the subject of particle system models of income distributions more effectively than reports of the wide empirical relevance of such models. Economists need not fear this class of model. We expect that this line of research will put many of the verbal tenets and basic insights of the paradigm of economics on a firm scientific footing for the first time. We welcome Gallegati et al. as collaborators in this enterprise.

References
1. Gallegati, Mauro, Steven Keen, Thomas Lux, and Paul Ormerod. 2006. "Worrying Trends in Econophysics". Physica A 370:1-6
2. Kuznets, Simon. 1955. "Economic Growth and Income Inequality". American Economic Review 45:1-28
3. Kuznets, Simon. 1965. Economic Growth and Structure. New York: Norton
4. Nielsen, François. 1994. "Income Inequality and Development". American Sociological Review 59:654-677
5. Wolfson, Michael. 1994. "When Inequalities Diverge". American Economic Review 84(2):353-358
6. Kleiber, Christian and Samuel Kotz. 2003. Statistical Size Distributions in Economics and Actuarial Sciences. New York: Wiley
7. Angle, John. 1983. "The surplus theory of social stratification and the size distribution of personal wealth". 1983 Proceedings of the American Statistical Association, Social Statistics Section. Pp. 395-400. Alexandria, VA: American Statistical Association

² We have not excluded the possibility that other conservative particle system models of income distribution might do the same. Nor do we assert that we have identified all factors that might give rise to an iconic Kuznets Curve. Indeed we think that the left censoring of the income distribution (exclusion of small incomes from tabulation) in countries with small GDP per capita is also involved.


8. Angle, John. 1986. "The surplus theory of social stratification and the size distribution of personal wealth". Social Forces 65:293-326
9. Angle, John. 2002. "The statistical signature of pervasive competition on wages and salaries". Journal of Mathematical Sociology 26:217-270
10. Angle, John. 2006 (received 8/05; electronic publication 12/05; hardcopy publication 7/06). "The Inequality Process as a wealth maximizing algorithm". Physica A: Statistical Mechanics and Its Applications 367:388-414 (DOI: 10.1016/j.physa.2005.11.017)
11. Nord, Mark. 2000. "Does it cost less to live in rural areas? Evidence from new data on food security and hunger". Rural Sociology 65(1):104-125
12. Joliffe, Dean. 2006. "The cost of living and the geographic distribution of poverty". Economic Research Report ERR-26 [http://www.ers.usda.gov/publications/err26]. Washington, DC: Economic Research Service, U.S. Department of Agriculture
13. Angle, John. 1996. "How the gamma law of income distribution appears invariant under aggregation". Journal of Mathematical Sociology 21:325-358
14. Angle, John. 2001. "Modeling the right tail of the nonmetro distribution of wage and salary income". 2001 Proceedings of the American Statistical Association, Social Statistics Section [CD-ROM]. Alexandria, VA: American Statistical Association
15. Angle, John. 2003. "Imitating the salamander: a model of the right tail of the wage distribution truncated by topcoding". November 2003 Conference of the Federal Committee on Statistical Methodology [http://www.fcsm.gov/events/papers2003.html]
16. McDonald, James and B. Jensen. 1979. "An analysis of some properties of alternative measures of income inequality based on the gamma distribution function". Journal of the American Statistical Association 74:856-860
17. Bernadelli, Harro. 1942-44. "The stability of the income distribution". Sankhyā 6:351-362
18. Angle, John. 1990. "A stochastic interacting particle system model of the size distribution of wealth and income". 1990 Proceedings of the American Statistical Association, Social Statistics Section. Pp. 279-284. Alexandria, VA: American Statistical Association
19. Friedman, Milton. 1970 [1953]. "The methodology of positive economics". Pp. 3-43 in Essays in Positive Economics. Chicago: University of Chicago Press
20. Angle, John. 2007. "The Macro Model of the Inequality Process and The Surging Relative Frequency of Large Wage Incomes". Pp. 171-196 in A. Chatterjee and B.K. Chakrabarti (eds.), The Econophysics of Markets and Networks (Proceedings of the Econophys-Kolkata III Conference, March 2007). Milan: Springer

Monitoring the Teaching - Learning Process via an Entropy Based Index Vijay A. Singh, Praveen Pathak, and Pratyush Pandey

Abstract The concept of entropy is central to thermodynamics, statistical mechanics and information theory. Inspired by Shannon's information theory we define an entropy based performance index (S_p) for monitoring the teaching-learning process. Our index is based on item response theory, which is commonly employed in psychometrics and currently in physics education research. We propose a parametrization scheme for the distractor curve. We have carried out a number of surveys to see the dependence of S_p on student ability, peer instruction and collaborative learning. Our surveys indicate that S_p plays a role analogous to entropy in statistical mechanics, with student ability being akin to inverse temperature, peer instruction to an ordering (magnetic) field and collaborative learning to interaction.

1 Introduction There is a deep and useful connection between information and entropy. This is evident from Szilard’s analysis of Maxwell’s Demon [1]. Shannon’s seminal work on communication theory made this connection clearer. Shannon’s information definition takes a probabilistic view and is defined for an ensemble of messages [2]. It is the same as the usual definition of entropy in statistical mechanics, the later being defined for an ensemble of microstates. Inspired by these works Jaynes [3] and Brillouin [4] placed statistical mechanics on an information-theoretic foundation. Vijay A. Singh Homi Bhabha Centre for Science Education (TIFR), V. N. Purav Marg, Mankhurd, Mumbai 400088, India. e-mail: [email protected] Praveen Pathak Homi Bhabha Centre for Science Education (TIFR), V. N. Purav Marg, Mankhurd, Mumbai 400088, India. e-mail: [email protected] Pratyush Pandey Department of Electrical Engineering, IIT - Kanpur, U.P. - 208016, India.


At another level there is a similarity between an interacting many-body system in statistical mechanics and a group of interacting students involved in the learning process. Entropy is a measure of disorder for the former. We can in a similar fashion define an index akin to (Shannon) entropy which will act as a monitor for the efficiency of the Teaching - Learning (TL) process. We propose such an index in this paper. We review the notion of entropy in information theory. We then introduce our definition of the entropy based performance index (S_p). We compute this index from Item Response Curves (IRCs), which we briefly explain with a simple example. We present the results for S_p. We have tested S_p under two field conditions. Our first sample consisted of 101 students from Sagar in central India. Our second sample comprised 92 students from Kanpur in north India. We first show how teaching lowers S_p. This is analogous to magnetic domains being ordered by an external magnetic field. Next we investigate the role of collaborative learning on S_p. We find that collaborative learning lowers S_p. Once again this has a resonance with statistical mechanics, where it is found that interaction lowers entropy.

2 The Learning Index

Our proposed learning index is based on Item Response Theory, which is now increasingly used in Physics Education Research (PER) [5]. Its use in the form of the Item Response Curve (IRC) in psychometrics is wide ranging. We briefly review the IRC. In an IRC we display the fraction of students P(θ) who have selected a particular answer choice against the ability θ of the students. To illustrate it, we take a simple example. Consider an item in an inventory with four alternative choices. To fix our ideas, let this item be a question as follows: If 6x = 18, then x = (a) 12 (b) 24 (c) 3 (d) 108. The hypothetical item response curves for this are shown in Figure 1. Let us for the sake of argument assume that the ability θ has been measured by an elaborate test on arithmetic conducted prior to this question. We normalize θ such that the average ability of a student is indicated by θ = 0 and θ = −3 (3) indicates very low (high) ability students.¹ Figure 1 shows the IRCs for all four choices, where 'c' is the correct choice. The IRC for the ideal correct choice is often parameterized by a three-parameter logistic type response function [6]:

$$P_c(\theta) = s_1 + \frac{1 - s_1}{1 + \exp[-(\theta - m_1)/w_1]} \qquad (1)$$

Here θ is the ability and P_c(θ) is the performance of the student in the test. The subscript c is used to depict the correct choice. Here s_1 is the probability that a candidate

¹ Let the marks of the N students be m_i (i = 1, ..., N) and let m̄ and Δm be the average marks and standard deviation of the students respectively. Then the normalized ability is θ_i = (m_i − m̄)/Δm.


with very low ability will respond correctly to the item, and w_1 is the item discrimination parameter. When w_1 is small the curve is steep and almost step-like. The item difficulty m_1 is the ability level at the point of inflection of the logistic function. These aspects are well-known [5] and we have summarized them here for the sake of completeness. We can see that Eq. (1) can describe choice 'c' in Figure 1. However, it is equally important to study the distractors (incorrect choices) in the IR analysis. A distractor may take various forms. For example it may exhibit a behavior complementary to Eq. (1). But the IRC for a good distractor has a peak at some medium ability level (see Figure 1). Thus the distractor IRC contains vital information about medium ability level students. This particular behavior cannot be captured with the complementary behavior of the logistic response function. Hence we propose the following parameterization scheme based on our study of the various shapes of the distractor:

$$P_d(\theta) = s + \frac{e - s}{2}\left[1 + \tanh\left(\frac{\theta - m}{w}\right)\right] + p\,\exp\left[-\frac{(\theta - m)^2}{2 w^2}\right] \qquad (2)$$

Parameter e is the probability that a high ability student will respond incorrectly to the item. The distractor amplitude parameter is p, and this determines the popularity of this choice with a candidate of ability m. Parameter m can be seen as the ability level where the distractor curve starts to change its behavior. A distractor may be deemed "good" if both the distractor level m and the amplitude parameter p are moderately large. This indicates that a fair fraction of above average ability students are attracted to this incorrect choice. This may thus aid in identifying an important misconception. A large value of w broadens the peak. Thus a large (small) spread parameter w indicates that a large (small) fraction of students around ability level m are choosing this particular distractor. Our proposed scheme is "universal" in a sense: it captures a variety of distractor behaviors. For example, different values of the parameters will capture the shapes of the distractor IRCs (choices 'a' and 'b') shown in Figure 1.
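For concreteness, both response functions are easy to code. The following is a minimal sketch of Eqs. (1) and (2); the parameter values are illustrative only, not fitted to any survey:

```python
import numpy as np

def correct_irc(theta, s1=0.10, m1=0.0, w1=0.5):
    """Three-parameter logistic IRC for the correct choice, Eq. (1)."""
    return s1 + (1.0 - s1) / (1.0 + np.exp(-(theta - m1) / w1))

def distractor_irc(theta, s=0.30, e=0.05, m=-0.5, w=0.7, p=0.25):
    """Distractor IRC of Eq. (2): a smooth tanh step from s (low ability)
    to e (high ability) plus a Gaussian bump of amplitude p centred at
    the distractor level m, with spread w."""
    step = s + 0.5 * (e - s) * (1.0 + np.tanh((theta - m) / w))
    bump = p * np.exp(-((theta - m) ** 2) / (2.0 * w ** 2))
    return step + bump

theta = np.linspace(-3.0, 3.0, 7)
print(correct_irc(theta))      # rises from ~s1 toward 1 around m1
print(distractor_irc(theta))   # peaks near theta = m, decays toward e
```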

We associate entropy with randomness [7]. Our definition of an entropy based performance index attempts to capture this concept. In order to do this, recall Shannon's definition of information, which is used in communication channels [2]. Suppose that there are N possible messages labeled s_i, i = 0, 1, ..., N − 1, that can be sent, and that the probability that s_i is sent is P_i. Then the information content I per message transmitted is

$$I = -\sum_{i=0}^{N-1} P_i \log_N P_i \qquad (3)$$

Here I is the Shannon information. This definition is similar to the standard definition of entropy in statistical mechanics except for the Boltzmann constant k_B [8]. We suggest that the IRCs can be used to assign values to P_i and N of Eq. (3), so that an appropriate definition of learning efficiency may be constructed. We associate N with the number of choices, which is 4. Further, we associate the fraction P(θ) with the probability P_i. The performance of students at ability level θ = −0.5, i.e. just below the average, is indicated in Figure 1. Thus our entropy based performance index is

$$S_p(\theta) = -\sum_{i=a}^{d} P_i(\theta) \log_4 P_i(\theta) \qquad (4)$$

We repeat that P_i(θ) is the fraction of students at ability θ who selected choice i. Note that we have made a conceptual leap in associating the fraction P(θ) with a probability. As Figure 1 indicates, for θ = −0.5, P_a = 0.45, P_b = 0.30, P_c = 0.20, P_d = 0.05. Hence S_p for this ability level is

$$S_p(\theta = -0.5) = -\left[0.45 \log_4 0.45 + 0.30 \log_4 0.30 + 0.20 \log_4 0.20 + 0.05 \log_4 0.05\right] = 0.86.$$

If we perform this calculation for each ability level then we obtain S_p over the entire ability range. This is also plotted in Figure 1.
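The index itself reduces to a few lines of code; here is a minimal sketch that reproduces the worked example above:

```python
import numpy as np

def entropy_index(fractions):
    """Entropy based performance index S_p of Eq. (4). `fractions` holds
    P_i(theta), the fraction of students at one ability level selecting
    each of the N choices; the log base is N (here 4), and terms with
    P_i = 0 contribute nothing since p*log(p) -> 0 as p -> 0."""
    p = np.asarray(fractions, dtype=float)
    n = p.size
    p = p[p > 0.0]
    return float(-np.sum(p * np.log(p) / np.log(n)))

print(entropy_index([0.45, 0.30, 0.20, 0.05]))  # ~0.86, as in the text
print(entropy_index([0.25, 0.25, 0.25, 0.25]))  # 1.0: pure guessing
print(entropy_index([1.00, 0.00, 0.00, 0.00]))  # 0.0: full consensus
```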

If we perform this calculation for each ability level then we would obtain S p over the entire ability range. This is also plotted in Figure 1. x 1

1 x

0.75 x

c

0.5

a

* * *

+b

Sp

P (θ)

x x

* +

+

*

+

* 0.25 +x* x x + o x x x x x x x o o

d 0

−3

o o o

o

−0.5

x

+ x *x +

+

* + *

o o

0

o

o

+ + o*+ 0

3

θ Fig. 1 Hypothetical IRCs and our proposed index. Lines with ‘*’, ‘+’, ‘x’ and ‘o’ depict the IRCs while the dark line shows the curve of our entropy index (S p ). At ability level θ = −0.5, Pa = 0.45, Pb = 0.30, Pc = 0.20, and Pd = 0.05. Value of S p at this ability level is 0.86

Our entropy index (S_p) has an appealing quality. It is normally large for low ability and small for high ability. Recall that in IRT there is an explicit guessing parameter associated with low ability students [9]. Thus for low ability students the P(θ) values will be close to 0.25, or in other words, P_a = P_b = P_c = P_d = 0.25. Hence S_p will be the highest (i.e. 1) for this ability level. On the other hand, for the maximum ability level S_p will go to zero provided the correct selection is made by all the students at this level, e.g. P_a = 1, P_b = P_c = P_d = 0.² We also note that it is possible that students around a given ability level, say θ_m, are convinced about

² Note that log_4 0.25 = −1 and log_4 1 = 0. In the limiting case when P_i → 0, P_i log_4 P_i → 0.


the correctness of a particular distractor. In that case S_p(θ_m) will once again be low. Thus small entropy may not necessarily be confined to the high ability region alone.

3 Results

Fig. 2 Entropy curve at different stages of teaching. Uppermost solid line depicts the pre-test entropy curve. Next three curves in succession are based on surveys done after 2 weeks, 4 weeks (course completion) and 6 weeks (post revision). See text for more details

We have tested the entropy index (S_p) under field conditions with students at the higher secondary school level. We had earlier carried out a large scale survey of misconceptions regarding the role of friction in rolling bodies [10]. We repeated this exercise taking a smaller sample of 101 students in Sagar, a region in central India. An external magnetic field aligns the magnetic moments of a paramagnetic system and lowers its entropy. Our field study in Sagar (101 students) suggests that teaching plays a role akin to this ordering process in statistical mechanics. Students were given rigorous teaching in rotational dynamics. We administered the friction-in-rolling-bodies inventory to this sample. Students were administered equivalent tests at different stages of the teaching process (see Figure 2): (i) Stage I was conducted in the beginning, at t = t_1 (week 0). This is the pre-test (solid line). (ii) Stage II was conducted at t = t_2 (2 weeks), when approximately half the material had been taught (dashed curve). (iii) Stage III was conducted at t = t_3 (4 weeks), when all the material had been taught (dotted curve). (iv) Stage IV was conducted at t = t_4 (6 weeks), after the material had been revised and summarized (dot-dashed curve). Results are presented in Figure 2. There is a clear lowering of S_p with teaching. The pre-test S_p (solid line) is higher at all ability levels. This


indicates that students are guessing randomly. The post-test S_p (dotted curve) shows an unmistakable monotonically decreasing behavior with ability. However, Stage III and IV results are not different for low and high ability students. There is a lowering of S_p for the medium ability students. Thus the revision process, at least in our case of rotational dynamics, is found not to benefit the lowest and highest ability students, just as an external magnetic field cannot further order already aligned domain moments or those which are sluggish. Overall, TL imparts a monotonic quality to S_p. It can be shown in statistical mechanics that the entropy of two non-interacting systems is additive. Interaction lowers the entropy. A similar result may hold in our case. Monte-Carlo based simulations of the TL process as well as extensive field studies [11, 12, 13, 14] have documented that teaching is more effective if it is accompanied by collaborative learning between students. We have carried out a study of 92 students in Kanpur, a place in north central India, to see if our index reflects this observation. Figure 3 shows the results for this exercise. We divided the students into two groups, I (48 students) and II (44 students). Both groups were taught the basics of an advanced concept, namely precession, theoretically as well as with a laboratory example of a flywheel. Both groups of students (I and II) separately attended lectures by the same teacher. In addition, Group II students engaged in collaborative study and discussions, forming small fixed groups of three or four. The pre- and post-test results of both groups are depicted. The concept of precession is difficult, so the pre-test results show the entropy to be uniformly high for both groups. However, the post-test results do show a distinct dip in the entropy for the average and high ability students of the interacting Group II. We stress that this is an empirical result and, unlike statistical mechanics, we cannot prove it rigorously. But the fact that the result conforms to what is well known in educational research and socio-dynamics, as well as to the behavior of interacting systems in statistical mechanics, is heartening.

4 Discussion

Our definition of entropy is based on the IRC. We note that this does have limitations, such as distortion of scale (or slope) parameters and errors in the determination of ability, among others [9]. We can also define a performance entropy for the cumulative performance of an item. If N is the total number of students and the number of students opting for choice i is M_i, then we can define P_i = M_i/N. One can thus define an entropy analogous to Eq. (4). This would yield only a cumulative index, whereas Eq. (4) maps the entropy over the entire ability landscape. We note that the analogy with entropy can be carried further. Ability plays the role of inverse temperature. High ability implies low entropy and low information content. The entropy index (S_p) can be successfully used to improve the TL process. Consider the less likely hypothetical case where student groups at some median ability level (say −0.5 ≤ θ ≤ 0.5) have low S_p. This immediately flags a situation for investigation. Perhaps this is due to a compelling distractor. Thus one must modify the

Fig. 3 Pre-test and post-test entropy curves for two groups. Solid line depicts the entropy for Group I, which received peer instruction alone. The dashed line is for Group II, which involved peer instruction and collaborative learning. The upper pair of curves represents pre-test data, whereas the lower pair represents the post-test data. See text for detailed discussion

teaching practice to remedy this situation. This will improve the teaching process in addition to monitoring it. Other, and perhaps better, monitors of the TL process and item difficulty exist. We have proposed one which has a resonance with the fields of physics and information science and is worthy of further exploration. We stress that our aim is not to ferret out the "fraction of correct answers" (which in any case our IRC does). Further, the focus is not on the question but on the students as a group.

Acknowledgements This work was supported by the Science Olympiad Programme and the National Initiative on Undergraduate Science (NIUS) undertaken by the Homi Bhabha Centre for Science Education - Tata Institute of Fundamental Research (HBCSE-TIFR), Mumbai, India. We thank Dr. Manish Kapoor of Christ Church College, Kanpur and Dr. Ramkumar Nagarch of Govt. Girls Degree College, Sagar for assisting us in carrying out the surveys.

References
1. L. Szilard, Z. Phys. 53, 840-856 (1929). Translation by A. Rapoport and M. Knoller, reprinted in Quantum Theory and Measurement, edited by J. A. Wheeler and W. H. Zurek, Princeton U. P. (1983)
2. C. E. Shannon and W. Weaver, The Mathematical Theory of Communication, University of Illinois Press, Urbana (1949)
3. E. T. Jaynes, Phys. Rev. 106, 620-630 (1957)
4. L. Brillouin, Science and Information Theory, Academic, New York (1956)
5. Gary A. Morris, Lee Branum-Martin, Nathan Harshman, Stephen D. Baker, Eric Mazur, Suvendra Dutta, Taha Mzoughi, and Veronica McCauley, Am. J. Phys. 74(5) (2006)


6. Herbert J. Walberg and Geneva D. Haertel, The International Encyclopedia of Educational Evaluation, Pergamon Press, Oxford, U.K. (1990)
7. J. Machta, Am. J. Phys. 67(12) (1999)
8. R. P. Feynman, Statistical Mechanics, W. A. Benjamin Inc., Reading, Massachusetts (1972)
9. R. K. Hambleton, H. Swaminathan, and H. J. Rogers, Fundamentals of Item Response Theory, Sage Publications, Thousand Oaks, CA (1991)
10. Vijay A. Singh and Praveen Pathak, in Proceedings of the International Conference to Review Research in Science, Technology and Mathematics Education, Mumbai, Feb. 12-15, 2007
11. C. M. Bordogna and E. V. Albano, The European Physical Journal B 87(3), 391-396 (2002)
12. N. Webb, K. Nemer, C. Chizhik, and B. Sugrue, Am. Educ. Res. J. 35, 607 (1998)
13. R. Hake, Am. J. Phys. 66, 64 (1998)
14. R. Mislovaty, E. Klein, I. Kanter, and W. Kinzel, Phys. Rev. Lett. 91, 118701 (2003)

Technology Level in the Industrial Supply Chain: Thermodynamic Concept Jisnu Basu, Bijan Sarkar, and Ardhendu Bhattacharya

Abstract The functioning of an industrial supply chain can be viewed as a series of Carnot cycles operating between a number of temperature pairs. The economic temperature of vendors and receiving firms is determined by the technology level of their production process. Instead of the Cobb-Douglas equation of production, the level of technology can also be calculated from the second law of thermodynamics. Technology is the integrating factor of the non-total differential form of production, as it directs the actual process variables of the production chain. System entropy changes with the collection and distribution of goods and money. Advantages and limitations of vendor development can also be explained. Case studies from the Indian automobile industry and some industrial clusters are presented in this article as empirical evidence for this hypothesis. For large manufacturing organizations, the technology level can be estimated from the annual balance sheet, but it is difficult to determine in the unorganized portion of the supply chain and in industrial clusters. The average income of workers and the production rate per person are fair indicators of the technology level. A correlation analysis of these two indicators with the technology level has also been included in this study.

1 Introduction The supply chain paradigm has changed over the years with the development of organization theory and technological upgradation of the production process. Supply Chain Management (SCM) in an automobile industry is a major control parameter Jisnu Basu Saha Institute of Nuclear Physics, Kolkata, India. e-mail: [email protected] Bijan Sarkar Production Engineering Department, Jadavpur University, Kolkata-700032, India. Ardhendu Bhattacharya Mechanical Engineering Department, Jadavpur University, India.


for success, where one car needs more than 4000 components of about a million design varieties. To make a car, the manufacturer has to arrange on average 100 key components from 300 suppliers [1]. After the introduction of Japanese management in organization theory, SCM also gradually changed to the Just-In-Time (JIT) system and later from JIT to lean production. The concept of Kaizen (continuous development) dominated the supply chain systems of Japanese automobile giants like Toyota, Suzuki and Kawasaki during the last two decades. Presently the idea of Covisint, the super supplier, has crept into the global automobile market. This type of business-to-business (B2B) relationship in Covisint has not only challenged the Japanese model but also questioned the vertical integration of the supply chain. Thus the evolution of the supply chain is closely related to the upgradation of the technology level and the change in culture of industrial organizations. Chinho et al. [2] opine that the traditional focus on business functionalities like purchasing, inventory control, scheduling and transportation has shifted to technology level selection, quality management and support logistic operations of the supply chain. Along with other infrastructure factors, technology level plays a major role in supply chain management. Purchasing consortia are a subject of increasing interest in Asian developing countries. Consortia are considered as short-length homogeneous supply chains. But Eija et al. [3] have shown that for the effective functioning of consortia a delicate balance between social understanding and information sharing is essential. Is there any relation between the technology level of production units and social understanding or information sharing? This type of multithread relationship leads to a complex multivariate analysis to find the optimal operational strategies of organizations for using a supply chain or taking part in it. Researchers have suggested structural equation models for supply chain quality management to improve organizational performance. Hau et al. [4] suggest methods for lowering the cost of higher supply chain security by Total Quality Management. But in most cases the suggestions are qualitative comparators. The present study aims to find a general and flexible analytical tool for evaluating the efficiency of a supply chain.

2 Technology Level: Thermodynamic Concept

In the last decade of the last century some scientists introduced thermodynamic models to explain economic realities. Recently Mimkes [5] has developed a simple but effective model describing different functions of production economics through thermodynamics. Valery Chalidze [6] shows that some important problems connected with the evaluation of goods produced in an economic system come from the energy landscape and entropic components of the economy. Samuel et al. [7] have explained Latin American and Caribbean income mobility using maximum entropy econometrics. Gohin, with his positive mathematical programming, used a maximum entropy analogy for applied production analysis [8]. Thermodynamic relations are statistical theories for large atomic systems under constraints of energy.


The above-mentioned studies have shown that economic activity in a many-particle system of society behaves in a manner similar to stochastic thermodynamic cycles. The present paper proposes thermodynamic concepts as decision-making tools to design effective supply chains for an industrial organization. Mimkes suggests that there is a close analogy between the heat described in the first law of thermodynamics and the role of profit in an economic cycle [9]. The first law of thermodynamics states that heat is a non-total differential: the closed integral is not zero, and the value of Q depends on the path of integration. Production is also a non-total differential, the value of which depends on the system of investment. A supply chain

Fig. 1 Thermodynamic model of the Supply Chain - Surgical Instrument Manufacturing Cluster

is the flow of goods and money. Goods produced by one firm and used by another firm constitute two reservoirs of goods. Components produced by a sub-vendor A with a goods reservoir R_A are supplied to a user B. B keeps these raw materials or semi-finished goods in its own reservoir R_B. Actually, apart from the very few exceptions of absolute monopoly, there is more than one firm at the end of A, and many firms are also supposed to be present at the end of B as users. Similar products produced by a number of companies form a market of the product, or a virtual reservoir. At the other end of the supply chain, the inventories of the user firms also constitute a bank or virtual reservoir of goods. The total values of the components of R_A and R_B are different. The difference in the value of the reservoirs is not only because of the volume of goods, but also because of their marginal utilities, expected opportunity cost, value addition due to quality screening, etc. Reservoir R_A is full of products produced by supplier firms. As per the Cobb-Douglas production function, production P can be expressed as P = T K^α1 L^α2, where K and L are the amounts of Capital and Labor respectively. Here α1 and α2 are production indexes, which determine the convexity of the production curve. The constant T is the technology used in the pro-


duction process. The 2nd law of thermodynamics is very similar to the Cobb-Douglas production function. Production can be expressed as work done, δW = dH − T ds, with entropy

$$s = \ln P = -(x \ln x + y \ln y + z \ln z + \cdots) \qquad (1)$$

where x, y, z, ... are the proportional values of the factors of production. The term Technology (T) is the integrating factor of the non-total differential form of production. Its physical significance is that, among the different possible ways of production, technology directs the actual process variables. Thus technology governs the production cycle, for which the closed integral of δW is not zero, and as an integrating factor makes the cycle integral of δW/T well defined (≥ 0). From Eq. (1) the definition of technology (T) can be derived. Technology is the maximum amount of profit generated by the use of 1 unit of each factor of production. For a particular process combination it is supposed to be a constant.
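Section 4 below computes T by dividing each year's production value by the entropy of Eq. (1). A minimal sketch of that calculation follows; all balance sheet line items here are hypothetical:

```python
import math

def technology_level(production, factor_costs):
    """Technology level T per Eqs. (1)-(2): production divided by the
    entropy s = -sum(x_k ln x_k) of the proportional factor shares
    (material, labor, capital, other expenses)."""
    total = sum(factor_costs)
    shares = [c / total for c in factor_costs]
    s = -sum(x * math.log(x) for x in shares if x > 0.0)
    return production / s

# Hypothetical annual-report line items (arbitrary currency units):
#                            material labor  capital other
T = technology_level(1200.0, [500.0,  200.0, 150.0,  150.0])
print(round(T, 1))
```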

3 The Thermodynamic Model of a Supply Chain

To explain the thermodynamic concept of a supply chain, let us consider an example of a simple supply chain (Figure 1). The surgical instrument clusters of the Indo-Pak subcontinent are a simple but appropriate example of a multi-layer supply chain. Stainless steel blanks are forged either by blacksmiths in a manual forging process or by power forging. These forged blanks are purchased by fitting shops. In the fitting shops there are grinding, drilling and polishing machines, which are electrically power driven. They use a moderate technology level, which is higher than that of the blacksmiths. After the fitting operations, semi-finished instruments are supplied to finishing units. These units have precision measuring instruments, advanced machine tools and computers; the average income of a worker is also higher than at the earlier levels. The level of technology T for hand forging shops, fitting shops or finishing units can be measured, and obviously they are different. This system is a three-tier supply chain: the first link from forging to fitting units and the second from fitting to finishing units. There is also a parallel chain from chemical treatment shops to the finishing units. Reservoirs R_1, R_2, R_3 and R_4 are the reservoirs of produced items (semi-finished or finished) in every tier. In a particular reservoir the level of technology T is constant and analogous to the mean kinetic energy or temperature of a heat reservoir. The supply chain between two reservoirs (two groups of firms) functions like a Carnot machine, i.e. a combination of a heat engine and a heat pump. Funds flow from the purchaser to the supplier. This process is equivalent to the combination of an isentropic heat transfer and an isothermal expansion. Entropy increases with the distribution. If the outgoing fund from the purchaser is Q_H and the fund used by the supplier is Q_L, then the work done is W = Q_H − Q_L. In the reverse cycle, the heat pump collects the goods of value Q_L and, after value addition in the supply chain, pumps the goods costing Q_H. The work done is again W = Q_H − Q_L. In developing countries, components do not change their value during the production process, so the dH component of the equation can be ignored. Then

$$\text{production } (W) = -T\,\Delta s \qquad (2)$$

where T Δs is the product of the technology level and the change of entropy in the supply chain. In


econometrics the entropy is closely connected to the capital distribution in an economic system like a market. Mimkes has shown that during economic distributions like fund flow or goods allocation, entropy changes in a manner similar to that of atoms in the gas phase: s = ln P, where P = N!/(N_1! N_2! N_3! ... N_K!)/K^N.
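The multiplicity expression quoted above can be evaluated without computing huge factorials by using the log-gamma function. A minimal sketch, with hypothetical group sizes:

```python
from math import lgamma, log

def distribution_entropy(counts):
    """s = ln P with P = N!/(N_1! N_2! ... N_K!)/K^N, as quoted in the
    text; lgamma(n + 1) = ln n! keeps the arithmetic stable."""
    K = len(counts)
    N = sum(counts)
    return lgamma(N + 1) - sum(lgamma(n + 1) for n in counts) - N * log(K)

# N = 100 goods allocated over K = 4 firms (hypothetical):
print(distribution_entropy([25, 25, 25, 25]))  # even allocation: largest s
print(distribution_entropy([97, 1, 1, 1]))     # concentrated allocation: much smaller s
```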

4 Empirical Study

The Indian automobile manufacturing industry is an example of a diversified potential landscape. The Society of Indian Automobile Manufacturers (SIAM) has 38 member companies (www.siamindia.com). The number of employees of Tata Motors Limited is 22,254 and that of Skoda Auto India Limited is 123. All these small and big organizations are supplied by 1st, 2nd or 3rd tier suppliers, from both India and abroad. The Auto Components Manufacturing Association (ACMA), the official representative of this multilayer supply base, has 479 member companies (www.acmainfo.com). This number excludes those tiny manufacturing units not registered in the record of the government duty payers. So the industry is multilayered and extremely heterogeneous. The average income of the employees in an industrial house or an industrial cluster is a parameter closely related to the technology level. Indian automobile component suppliers with and without Foreign Direct Investment (FDI) have differences in the technology level of their production units. In general, component suppliers with FDI are more exposed to modern technology. This disparity is reflected in the average Annual Earning per Employee (AEPE). A study shows that the AEPE for auto-component suppliers with and without FDI is Rs. 51,000 and Rs. 36,138 respectively [10]. Maruti Udyog Ltd., Bajaj Auto Ltd., Ashok Leyland Ltd. and Mahindra & Mahindra Ltd. are four leading automotive industries of India. The technology level as per Eq. (2) has been calculated from their annual reports of seven successive years, 1998 to 2004. Direct material and inward excise duties have been considered as material. Personnel expenses have been taken as labor. The sum of interest, depreciation, etc. has been considered as capital. The sum of all other expenses towards production has been computed as the fourth factor of production. The production values of every year have been divided by the calculated entropy as per Eq. (1). The values of the technology level (T), Average Earning per Employee (AEPE) and Vehicles Produced per Employee (VPPE) for all four firms have also been calculated. Correlations of AEPE and VPPE with the calculated values of the technology level (T) are furnished below.

5 Conclusion

The thermodynamic model may be used as a decision-making tool by industries for vendor development. A participating member of a supply chain can also assess his expected gain from it. Researchers may analyze the sustainability of an


Table 1 Correlation with Technology Level (Based on Annual Reports 1998-2004)

         Ashok Leyland Ltd.   Maruti Udyog Ltd.   Bajaj Auto Ltd.   Mahindra & Mahindra Ltd.
AEPE     0.876                0.53                0.540             0.394
VPPE     0.950                0.433               0.566             0.526

AEPE: Average earning per employee; VPPE: Vehicle produced per employee
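The correlations in Table 1 are presumably plain Pearson coefficients over the seven annual observations per firm. A minimal sketch with hypothetical yearly series (the real inputs come from the firms' balance sheets):

```python
import numpy as np

# Hypothetical 1998-2004 series for one firm (illustration only):
T    = np.array([310.0, 325.0, 331.0, 356.0, 370.0, 384.0, 401.0])  # technology level, Eq. (2)
AEPE = np.array([0.92, 0.95, 0.97, 1.05, 1.08, 1.14, 1.20])         # avg earning per employee
VPPE = np.array([7.1, 7.3, 7.2, 7.8, 8.1, 8.4, 8.9])                # vehicles per employee

print(np.corrcoef(T, AEPE)[0, 1])  # correlation of T with AEPE
print(np.corrcoef(T, VPPE)[0, 1])  # correlation of T with VPPE
```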

existing cluster by this method. In this article the calculation of entropy at different stages of a supply chain is discussed, but the technology level has not yet been standardized. From the empirical study it appears that there is a strong positive correlation between the technology level and the Average Earning per Employee. It also has a strong correlation with the parameter Production per Employee. These two parameters are easily available even from unorganized industries. Suitable conversion factors should be developed to derive a dimensionless value of the technology level (T) from these variables. This would be an interesting subject for further studies.

References
1. MacDuffie John Paul, The global supply chain in the World Auto Industries: Role of the new Mega-Suppliers, International Motor Vehicle Program, M.I.T. (2001)
2. Chinho Lin, Wing S. Chow, Christian N. Madu, Chu-Hua Kim and Pei Pei Yu, A structural equation model of supply chain quality management and organizational performance, International Journal of Production Economics (2005) 96(3), 355-365
3. Eija Tella, Veli-Matti Virolainen, Motives behind purchasing consortia, International Journal of Production Economics, 93-94 (2005), 161-168
4. Hau L. Lee and Seungjin Whang, Higher supply chain security with lower cost: Lessons from Total Quality Management, International Journal of Production Economics (2005) 96(3), 289-300
5. Mimkes Jürgen, Concepts of thermodynamics in Economic Systems I: Lagrange principle and Boltzmann Distribution of Wealth, Econophysics of Wealth Distributions, Eds. A. Chatterjee, Y. Sudhakar and B. K. Chakrabarti, New Economic Windows Series, Springer-Verlag Italia, Milan (2005)
6. Valery Chalidze, Entropy Demystified: Potential Order, Life and Money, Universal Publishers, USA (2000)
7. Samuel Morley, Sherman Robinson, Rebecca Harris, Estimating income mobility in Colombia using maximum entropy econometrics, International Journal of Production Economics, 93-94 (2005), 161-168
8. Gohin Alexander, Positive Mathematical Programming and Maximum Entropy: Economic tools for applied production analysis, INRA Seminar on Production Economics, Paris (November 2000)
9. Mimkes Jürgen, Concepts of Thermodynamics in Economics Systems, 1. Economic growth, Physics Department, University of Paderborn, Germany (2004)
10. Okada Aya, Globalization and Jobs in the Automotive Industry, A Research Funded by the Alfred P. Sloan Foundation, Research Note-3, Department of Urban Studies and Planning, Massachusetts Institute of Technology (October 1998)

Technology Level in the Industrial Supply Chain

153

11. Istvan Jenei, Krisztina Demeter, Andrea Gele, The effect of strategy on supply chain configuration and management practices on the basis of two supply chains in the Hungarian automotive industry, International Journal of Production Economics (2006) vol. 104, issue 2, pages 555-570

Discussions and Comments in Econophys Kolkata IV Abhirup Sarkar, Sitabhra Sinha, Bikas K. Chakrabarti, A.M. Tishin, and V.I. Zverev

Abstract Abhirup Sarkar and Sitabhra Sinha discuss the historic relationship between the disciplines of Economics and Physics in their articles. They focus their discussions on further avenues through which physics can shed light on Economics. Bikas K. Chakrabarti proposes a modified version of the Fisher equation and conjectures how the recent economic meltdown could be accommodated in this modified framework. A. M. Tishin and V. I. Zverev outline a quantum theoretic model of economics. An appendix containing criticism of that article from the participants of Econophys-Kolkata IV is attached to it.

Abhirup Sarkar Economic Research Unit, Indian Statistical Institute, Kolkata 700108, India. e-mail: [email protected] Sitabhra Sinha The Institute of Mathematical Sciences, C. I. T. Campus, Taramani, Chennai - 600 113, India. e-mail: [email protected] Bikas K. Chakrabarti Centre for Applied Mathematics & Computational Science, Saha Institute of Nuclear Physics, Kolkata 70064, India, and Economic Research Unit, Indian Statistical Institute, Kolkata 700108, India. e-mail: [email protected] A.M. Tishin Physics Department of M. V. Lomonosov Moscow State University, Leninskie Gory, Moscow 119992, Russia. e-mail: [email protected] V.I. Zverev Physics Department of M. V. Lomonosov Moscow State University, Leninskie Gory, Moscow 119992, Russia. e-mail: [email protected]


1 Economics and Physics
Abhirup Sarkar

There was a time when economics had a close association with physics. The concepts of equilibrium, elasticity, stability and many others were borrowed from physics and introduced into economics. The association probably reached its peak in the late forties when Professor Paul Samuelson's seminal book 'Foundations of Economic Analysis' was published. Any careful reader could see the structural similarity between the Foundations and classical mechanics. During the fifties, however, economics started drifting away from physics, trying to find a haven in the rigor of formal mathematics. Grand and elegant general equilibrium models were written by the best minds in the discipline, and beautiful and deep theorems were proved with utmost sophistication, though the more mathematically abstract the research became, the more it seemed to get divorced from reality. However, in spite of all its abstraction and alleged diversion from reality, research in mathematical economics played a very important role in the development of mainstream neo-classical economics during the nineteen fifties and sixties. This was the time when socialism had not yet been discredited by the performance of the Soviet bloc countries, the cold war was at its zenith and the third world economies were still contemplating the leftist path as a possible alternative to economic development. The elegant and beautiful theorems of general equilibrium, developed by Kenneth Arrow, Gerard Debreu, Lionel McKenzie, David Gale and others, proved that under certain conditions a freely competitive market economy left on its own is not only capable of reaching an equilibrium, but the equilibrium it reaches satisfies certain important optimality properties. In particular, two theorems, subsequently known as the first fundamental theorem and the second fundamental theorem of welfare economics, were proved, which demonstrated that a competitive economy is capable of implementing any desirable allocation through the market mechanism, provided the initial endowments are right. These celebrated theorems laid the philosophical foundations of a free market economy and encouraged the cerebrally inclined to consider the capitalist market economy as a viable and perhaps better alternative to socialism. But apart from a broad endorsement of free capitalism, very little insight of practical significance could be obtained from the grand general equilibrium model. To get results with specific policy implications one had to simplify the structure, drastically reducing the number of goods by assuming that most other markets do not have any significant effect on the working of the market under consideration. This somewhat diluted the grandness and generality of the competitive equilibrium models. Further, two very important developments took place. First, the frictionless competitive model was extended to accommodate imperfect competition, externalities and increasing returns, and more importantly, information economics. Secondly, to get a firmer grip on the various aspects of interaction between a small group of agents, non-cooperative game theory was developed and widely used.


Models of imperfect information set up in the structure of a game gave one important message. These models tended to imply that outcomes of games are often structure specific, that is, dependent on the way the game is set up and on the order in which players make their moves. Sometimes a minor change in the rules of the game could completely change the outcome. In short, the new partial equilibrium models of games and imperfect competition suggested that there is no universal truth or reality, as the grand general equilibrium models would like to have us believe. Economic truth is context specific, norm dependent and sensitive to even small happenings in history. This perception still persists. One serious difficulty with this particular view of economics is that prediction becomes unreliable in models that are not robust to minor perturbations. In other words, from a theoretical point of view, minor changes in the behavioral assumptions of the model can often produce drastically different results, and hence the predictive content of most of these models becomes suspect. Therefore, the gravest allegation against these game theoretic models is that they are models with little predictive content. But this is not the only allegation. These models treat the 'rationality' of agents as an infallible axiom and stretch the assumption of rationality to its logical extreme. As a result, these games, which experts take months to figure out, are assumed to be solved instantaneously by super-rational human agents. There are, of course, models of 'bounded rationality', but they have not made sufficient progress so far. Therefore, the question as to how people actually behave remains an open one, and here experimental methods come in handy. Recently, a substantial and promising body of literature has developed dealing with experimental economics, including experimental games. Perhaps one day the results from these experiments can be consolidated to form a basis of economic behavior of humans. This would roughly conform to the methods used in Physics, where an empirical law, obtained through careful experiments, is taken for granted and premises are built upon it. Perhaps one day economics might once again come closer to Physics, as it used to be in the past, by learning to ignore unbounded rationality and the related logical optimisation exercises which normal human beings are unable to undertake in any case, and by basing its analysis on meaningful human behavior as obtained from careful experiments.

2 Why Econophysics?
Sitabhra Sinha

"[Economics should be] concerned with the derivation of operationally meaningful theorems [. . . ]. [Such a theorem is] simply a hypothesis about empirical data which could conceivably be refuted, if only under ideal conditions." – Paul A. Samuelson (1947) [1]

"I suspect that the attempt to construct economics as an axiomatically based hard science is doomed to fail." – Robert Solow (1985) [2]


"It was the best of times, it was the worst of times" could be an apt phrase for describing the circumstances in which the fourth of the Econophys-Kolkata series of meetings is taking place. On the one hand, the ongoing economic and financial crisis has been declared by many to be one of the worst that the world has faced up to now, probably as bad as (if not worse than) the Great Depression of the 1930s. On the other hand, this has led to widespread discontent with the state of the academic discipline of economics. The latter development is welcome news for econophysics, the subject to which this meeting is devoted. Indeed, several scientists who have been associated with the econophysics movement for a long time have written articles in widely circulated journals arguing that a "revolution" is needed in the way economic phenomena are investigated [3, 4]. They have pointed out that academic economics, which could neither anticipate the current worldwide crisis nor gauge its seriousness once it started, is in need of a complete overhaul, as this is a systemic failure of the discipline. The roots of this failure have been traced to the dogmatic adherence to deriving elegant theorems from "reasonable" axioms, with complete disregard for empirical data. While it is perhaps not surprising that physicists working on social and economic phenomena should be so critical of mainstream economics and suggest econophysics as a possible alternative theoretical framework, it is heartening to see that even traditional economists have started to acknowledge that not everything is well in their ivory tower [5]. It will of course take more than simple acknowledgement of the serious deficiencies of economics as it is currently practised to turn the attention of economists towards econophysics. As Everett Rogers [6] has pointed out, the adoption of any new innovation diffuses slowly through society (Figure 1, left), starting with a few pioneering innovators and early adopters before eventually building up a majority community of converts. Econophysics, which started out in the early 1990s (the term itself was coined in 1995 at a conference in Kolkata), has already gone through the early phases and is now presumably poised to become a major intellectual force in the world of academic economics. This is indicated by the fact that even prior to the current economic crisis, the economics community had been grudgingly coming to recognize that econophysics could not be ignored, and entries on "Econophysics" as well as on "Economy as a Complex System" had appeared in the New Palgrave Dictionary of Economics published in 2008. Whether econophysics manages to successfully make the transition to become the dominant research methodology for studying economic phenomena, or whether it is beaten to this position by one of the several other competing disciplines (such as behavioral economics), shall become evident within the next few years. However, the impact that physicists working on problems arising in the economic and financial arena have made on the research methodology for investigating social phenomena will have a lasting legacy. Indeed, the association between physics and economics is hardly new. As pointed out by Mirowski [7], the pioneers of neoclassical economics had borrowed almost term by term the theoretical framework of classical physics in the 1870s to build the foundation of their discipline.
One can see traces of this origin in the fixation of economic theory with describing equilibrium situations, as is clear from the following statement of Vilfredo Pareto in his textbook on

158

Abhirup Sarkar, Sitabhra Sinha, Bikas K. Chakrabarti, A.M. Tishin, and V.I. Zverev

economics: “The principal subject of our study is economic equilibrium. [. . . ] this equilibrium results from the opposition between men’s tastes and the obstacles to satisfying them. Our study includes, then, three distinct parts: 1. the study of tastes; 2. the study of obstacles; 3. the study of the way in which these two elements combine to reach equilibrium.” [8]. Another outcome of the historical contingency of neoclassical economics being influenced by late 19th century physics is the obsession of economics with the concept of maximization of individual utilities. This is easy to understand once we remember that classical physics of that time was principally based on minimization principles, such as the Principle of Least Action. We now know that, even systems for which the energy function cannot be written can be rigorously analyzed, e.g., by using the techniques of nonlinear dynamics. However, academic disciplines are often driven into paths constrained by the availability of investigative techniques, and economics has not been an exception. There are also several instances where investigations into economic phenomena have led to developments which have been followed up in physics only much later. For example, Louis Bachelier had developed the mathematical theory of random walks in his 1900 thesis on the analysis of stock price movements, that was to be independently discovered five years later by Einstein to explain Brownian motion [9]. This pioneering work had been challenged by several noted mathematicians, on the grounds that the Gaussian distribution for stock price returns as predicted by Bachelier’s theory is not the only possible stable distribution that is consistent with the assumptions of the model. This foreshadowed the work on Benoit Mandelbrot in the 1960s on using Levy-stable distributions to explain commodity price movements. However, recent work by H. E. Stanley and others have shown that Bachelier was right after all: stock price returns over very short times do follow a distribution with a long tail, the so-called “inverse cubic law”, but being unstable, it converges to a Gaussian distribution at longer time scales (e.g., for returns calculated over a day or longer). Another example of how economists have anticipated developments in physics is the discovery of power laws of income distribution by Pareto in the 1890s, long before such long-tailed distributions became interesting to physicists in the 1960s and 1970s in the context of critical phenomena. With such a rich history of exchange of ideas between the two disciplines, it is probably not surprising that Paul Samuelson tried to turn economics into a natural science in the 1940s, in particular, to base it on “operationally meaningful theorems”’ subject to empirical verification (see the opening quote of this article). But in the 1950s, economics took a very different turn. Modeling itself more on mathematics, it put stress on axiomatic foundations, rather than on how well the resulting theorems matched reality. The focus shifted completely towards derivation of elegant propositions untroubled by empirical observations. The divorce between theory and reality became complete when the analysis of economic data became a completely separate subject called econometrics. The separation is now so complete that even attempts from within mainstream economics to turn the attention back to explaining real phenomena (for example, by Steven Levitt) has met with tremendous resistance. 
On hindsight, the seismic shift in the nature of economics in the 1950s was probably not an accident. Physics of the the first half of 20th century had moved so far


[Figure 1 panels: left, percentage of adopters of econophysics vs. time, with Rogers' categories Innovators, Early adopters (1995), Early majority (2009?), Late majority and Laggards; right, agent complexity (zero-intelligence to hyper-rationality) vs. spatial or interaction complexity, spanning 2-person game theory, input-output systems, coordination behavior on lattice systems, games on complex networks and spin-spin ordering on complex networks.]

Fig. 1 (left) A schematic projection of how econophysics may slowly be accepted by the economics community, based on Rogers’ model of adoption of innovations in a society. (right) The wide spectrum of theories proposed for explaining the behavior of economic agents, arranged according to agent complexity (abscissa) and interaction or spatial complexity (ordinate). Traditional physics-based approaches stress interaction complexity, while conventional game theory focusses on describing agent complexity

away from the observable world that by this time it did not really have anything significant to contribute, in terms of techniques, to the field of economics. The quantum-mechanics-dominated physics of those times would have seemed completely alien to anyone interested in explaining economic phenomena. All the developments in physics that have contributed to the birth of econophysics, such as nonlinear dynamics or non-equilibrium statistical mechanics, would flower much later, in the 1970s and the 1980s. Some economists have said that the turn towards game theory in the 1950s and 1960s allowed their field to describe human motivations and strategies in terms of mathematical models. This was truly something new, as the traditional physicist’s view of economic agents was completely mechanical: almost like classical particles whose motions are determined by external forces. However, this movement soon came to make a fetish of “individual rationality” by overestimating the role of the “free will” of agents in making economic choices, something that ultra-conservative economists with a right-wing political agenda probably deliberately promoted. In fact, it can be argued that the game-theoretic turn of economics led to an equally mechanical description of human beings, as agents whose only purpose is to devise strategies to maximize their utilities. An economist has said that this approach views all economic transactions as akin to a chess match between Kenneth Arrow and Paul Samuelson, the two most notable American economists of the post-WWII period. Surely, we do not solve complicated optimization problems in our head when we shop at the corner store. The rise of bounded rationality and computable economics reflects the emerging understanding that human beings behave quite differently from the rational agents of game theory, in that they are bound by constraints in terms of space, time or computational resources. Maybe it is time again for economics to look at physics, as the developments in physics during the intervening period, such as non-equilibrium statistical mechanics,
the theory of collective phenomena, nonlinear dynamics and complex systems theory, along with the theories developed for describing biological phenomena, provide an alternative set of tools for analyzing (and a new language for describing) economic phenomena. I believe that econophysics has shown how a balanced marriage of economics and physics can work successfully in discovering new insights. An example of how it can go beyond the limitations of the two disciplines out of which it is created is provided by the recent spurt of work on using game theory in complex networks (Figure 1, right). While economists had been concerned exclusively with the rationality of individual agents (cf. the agent complexity axis in Figure 1, right), physicists have been more concerned with the spatial (or interaction) complexity of agents having limited or zero intelligence. Such emphasis on only interaction-level complexity has been the motivating force of the field of complex networks that has developed over the last decade. However, in the past few years, there has been a sequence of well-received papers on games on complex networks. There is hope that by understanding such systems, we will get an understanding of how social networks develop, how real hierarchies emerge and how inter-personal trust leading to societies and trade can emerge. Possible prospects of further developments in econophysics include looking at avenues for sustainable growth. Professor Yakovenko’s demonstration at this conference of the Lorenz curve for energy consumption of various countries suggests that the theories of economic inequality previously used to explain differences in personal wealth (or income) can be extended to explain inequalities between nations in terms of energy consumption. We need to understand whether such gross inequalities are historical contingencies or something that is part of the system, if we are to take steps to rectify them. Another promising avenue is to try to explain business cycles as developing endogenously out of inherent delays in the system. Given that such oscillations arise naturally in many nonlinear systems through delay in communication between various components, it is possible that econophysicists would be able to come up with a theory that explains both the oscillations of business cycles, as well as the exponential curve representing overall economic growth on which they are superposed. However, whichever path it takes, econophysics should guard against making the same mistakes as contemporary mainstream economics: that of getting trapped in mathematically elegant but ultimately sterile blind alleys. We have already seen a few such cases in the field (such as “quantum finance”), which are being pursued just because they employ mathematically sophisticated techniques that are well-known to physicists. Just because a problem requires advanced mathematical analysis does not necessarily mean that it is worth pursuing or that it is meaningful for understanding the real world. The true test of econophysics should be to make empirically verifiable statements about observed economic phenomena. To avoid the fate of academic economics (or econo-mathematics, as it should properly be called), we should keep in mind the cautionary words of Solow, spoken with respect to economics but applying equally well to econophysics: “[...] the true functions of analytical economics are [...]
to organize our necessarily incomplete perceptions about the economy, to see connections that the untutored eye would miss, to tell
plausible - sometimes even convincing - causal stories with the help of a few central principles and to make rough quantitative judgements about the consequences of economic policy and other exogenous events.” [2]

Acknowledgements I would like to take this opportunity to thank the organizers of the Econophys-Kolkata IV Workshop, Banasri Basu, Kausik Gangopadhyay and Bikas K. Chakrabarti.

References
1. Samuelson P A (1947) Foundations of Economic Analysis. Harvard University Press, Cambridge, Mass
2. Solow R M (1985) Economic History and Economics, Am. Econ. Rev. 75: 328-331
3. Bouchaud J-P (2008) Economics needs a scientific revolution, Nature 455: 1181
4. Lux T, Westerhoff F (2009) Economics crisis, Nature Physics 5: 2-3
5. Sen A (2009) Capitalism beyond the crisis, New York Review of Books 56(5), available at http://www.nybooks.com/articles/22490
6. Rogers E M (1962) Diffusion of Innovations. Free Press, New York
7. Mirowski P (1989) More Heat Than Light: Economics as Social Physics, Physics as Nature’s Economics. Cambridge University Press, Cambridge
8. Pareto V (1906) Manual of Political Economy (trans. A S Schwier, 1971). Macmillan, London
9. Bernstein J (2005) Bachelier, Am. J. Phys. 73: 395-398

3 Subprime Crisis and Fisher Equation

Bikas K. Chakrabarti

The recent economic meltdown, starting from the so-called subprime mortgage crisis (apparent since 2007) [1], has raised, among others, several questions regarding the foundations of mainstream economic theories, which failed to anticipate any such major crisis. Faced with this, some economists already suggest that a new foundation of economic theory along the lines of econophysics might rectify these kinds of failures in the future [2]. The subprime crisis initially began with the careless offer of (housing) loans, in the expectation (and indeed with some immediate results) of increased activity in the financial market. It eventually ended, of course, in market collapse and huge losses in the production or economic output of several major economies of the world. In this context, I would like to point out that a naive modification of the Fisher equation [3], MV = PQ, connecting the money flow and the general production level, might give us some insight. Here M is the amount of cash flowing with an average velocity V, P denotes the average price level and Q the real output of the economy (usually measured by GDP). The minimal modification of the above equation proposed here is: (M − M0)V = PQ, where M0 is the ‘condensed’ part of
the money (coming from investments in nonperforming assets) which drops out of circulation and hence out of the equation. This part may arise and grow due to cuts in the rates of bank interest, and may indeed grow with the circulation velocity V, which clearly increases with ‘easy’ loans, as occurred during the subprime crisis. In fact, in its Declaration of the Summit on Financial Markets and the World Economy, dated 15 November 2008, leaders of the Group of 20 cited [4] the following causes: “During a period of strong global growth, growing capital flows, and prolonged stability earlier this decade, market participants sought higher yields without an adequate appreciation of the risks and failed to exercise proper due diligence. At the same time, weak underwriting standards, unsound risk management practices, increasingly complex and opaque financial products, and consequent excessive leverage combined to create vulnerabilities in the system.” This increased ‘capital flow’ and ‘leverage’ must have necessitated an increased value of M0 (usually dormant, and of vanishing magnitude) in the above modified Fisher equation. Hence, no matter how much the velocity V increases, as soon as M0 (necessarily ≤ M) grows enough to approach M, the output of the economy Q drops sharply, as the general price level remains constant over short time scales. Thus a simple modification of the Fisher equation, with the circulating amount of money reduced by its condensed part (arising from the liquidity trap during the early part of the crisis), may explain a general failure of the entire economy leading to a serious drop in its output. It may be noted that the above-mentioned correction is only natural when one considers the econophysical identification of money as energy in a thermodynamic system (see e.g., refs. [5-7] for some recent reviews of the literature establishing this identification in econophysics). Following such identifications, one necessarily has to extract the potential energy part (here the condensed part of the money, arising from spending on nonperforming assets) from the total energy or money, so that only the effective kinetic energy equivalent enters Fisher’s (kinetic only) equation. It is only natural that in conserved systems, as in a pendulum, the kinetic and potential energy parts may dominate at different phases of the motion (keeping the total energy conserved): the kinetic energy of the pendulum becomes zero at the extreme end points of the oscillation, when the pendulum changes its direction of motion, while it is maximum at the mean position of the pendulum’s motion. An equation like the Fisher equation, which necessarily equates the kinetic part of the money (or energy) with the net production worth, should incorporate this correction in the amount of money available for circulation or kinetics. Additionally, a further correction in the above equation might be necessary. In fact, the two sides of the Fisher equation involve aggregates of quantities which are dispersed over various time scales: MV ≡ ∑i=1..r MiVi and PQ ≡ ∑j=1..s PjQj. The identities of the bundles of output commodities j (up to a maximum of s of them) are quite different from those of the money bunches i (up to a maximum of r) having uniform flow velocities Vi. Obviously the time scales involved for the different elements of the bundles on the two sides of the Fisher equation are different.
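A toy numerical illustration of the modified equation may help fix ideas. The following sketch is an addition to the text, not part of the original argument; all numbers in it are hypothetical, and it simply evaluates Q = (M − M0)V/P as the condensed part M0 grows.

import numpy as np  # not strictly needed; kept for easy extension

# Hypothetical illustration of the modified Fisher equation
# (M - M0) * V = P * Q: as the 'condensed' money M0 grows towards M,
# the implied real output Q collapses even with V and P unchanged.
M = 100.0   # total money stock (arbitrary units)
V = 5.0     # average circulation velocity
P = 2.0     # price level, taken as constant over short time scales

for M0 in [0.0, 20.0, 50.0, 80.0, 95.0]:
    Q = (M - M0) * V / P        # output implied by the identity
    print(f"M0 = {M0:5.1f}  ->  Q = {Q:6.1f}")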
For very short time scales (say, over months), the right side of the Fisher equation may not show significant change. This also naturally suggests that if the flow (average) velocity
changes more rapidly (say, over weeks; keeping the money supply M the same), the only way to maintain the equality in the equation is to introduce a flexible (velocity-dependent) condensed money part M0 in M. This again suggests the same modification of the Fisher equation as mentioned above. But there may be more to it! On such short time scales (where we suppose that the right-hand side of the Fisher equation remains practically constant), the left-hand side of the original equation suggests a simple dispersion M ∼ 1/V, or perhaps Mi ∼ 1/Vi. With the identification of energy E with money M and flow velocity V with momentum K, as discussed above, we get the dispersion relation E(K) ∼ K^−1. If one identifies this energy dispersion with the Kolmogorov dispersion [8] E(K) ∼ K^−5/3 of kinetic energy in the various momentum modes of a turbulence, one should equivalently get the final modified form of the Fisher equation: (M − M0)V^5/3 = PQ. The comparison of the dispersion of money in a healthy economy with that of energy in turbulence may appear surprising at first look, but it is indeed very natural: unlike in the streamline motion of a fluid (where mixing does not occur at all length scales), the Kolmogorov dispersion in a turbulent motion assures mixing at all length scales (as when we stir violently or turbulently with a teaspoon to mix the sugar in a teacup). For a healthy economy, if the benefit (or the consequent disturbance) of an investment in any sector has to spread (mix) over the other sectors of the entire economy, and not remain confined to that sector only (as in the streamline motion of a fluid), one necessarily has to have turbulent-flow-like mixing. If the Fisher equation corresponds to this type of money flow through the various sectors of an economy, the appropriate dispersion to compare with would be that of Kolmogorov, as mentioned before. That immediately suggests the above modified form of the Fisher equation: (M − M0)V^5/3 = PQ. Here also, an uncontrolled growth in M0 would similarly lead to a collapse of the economy, as discussed above.

Acknowledgements I am grateful to Anindya Sundar Chakrabarti, Kausik Gangopadhyay, Pradip Maity and Manipushpak Mitra for many useful discussions, criticisms and suggestions.

References
1. See e.g., http://en.wikipedia.org/wiki/Subprime_mortgage_crisis
2. T. Lux and F. Westerhoff, Economics crisis, Nature Physics 5 (2009) 2-3
3. See e.g., http://en.wikipedia.org/wiki/Irving_Fisher
4. Declaration of G20, 2009-02-27
5. Econophysics of Wealth Distributions, Eds. A. Chatterjee, Y. Sudhakar and B. K. Chakrabarti, New Economic Windows Series, Springer-Verlag Italia, Milan (2005)
6. A. Chatterjee and B. K. Chakrabarti, Kinetic Exchange Models for Income and Wealth Distributions, Eur. Phys. J. B 60 (2007) 135-149
7. V. Yakovenko and J. Barkley Rosser, Statistical Mechanics of Money, Wealth and Income, Rev. Mod. Phys. (in press); http://arxiv.org/abs/0905.1518
8. See e.g., http://en.wikipedia.org/wiki/Turbulence


4 Quantum Theory of Economics V.I. Zverev and A.M. Tishin

Introduction

Two seemingly unconnected factors motivated this research. The first is that, according to leading macroeconomists, there is at present no economic theory able to describe all the processes at work in the world economy [1]. The second is that the limits of applicability of quantum theory have still not been established. We therefore try to answer the following questions. Is it possible to introduce an analogue of the uncertainty relation for the separate elements of macro systems? How far is it justified to extend the strict laws of quantum theory to the financial activity of business structures?

Quantum Mechanics and Companies

The main aim of quantum mechanics is to find the probability of obtaining a given result of measurement in a concrete experiment. Quantum theory cannot tell us what precise value we will get, but it can tell us with what probability we will get that value [2]. At the very least, we can speak with confidence of indeterminacy in the behavior of a company, or in the market behavior of its share price, as the result of accidental or planned events in economics, politics and society.

Typical Properties of Quantum Objects and Companies

We note at the outset that, in pursuing our aim, we content ourselves with only those properties of quantum objects for which analogues can be found in business companies. Let us enumerate the properties by which an object can be judged, with assurance, to be quantum: typical masses, typical times, tracks of decay and interconversion, ground and excited states of the quantum system, spin, and the fundamental interactions. The analysis of the typical properties of quantum objects and of the behavior of business structures supports the supposition that there are no limitations on using the quantum approach for the description of business activity.

Uncertainty Relation

The actual uncertainty of experimental results gives rise to a number of conceptual problems in quantum theory. The first is the explanation of the physical reason for the plurality of experimental results. The answer is the Heisenberg uncertainty relation, obtained
by him in 1927. The uncertainty relation is a quantitative formulation of the peculiar features of quantum objects. With the help of the mathematical apparatus of quantum mechanics we obtain a generalized uncertainty relation for coordinates and impulses (momenta):

δx · δp ≥ h̄gen/2.    (1)

Here h̄gen (generalized) is the analogue of Planck’s constant in the quantum theory of economics. The search for its numerical value is the task of a separate study. What does (1) mean from the economic point of view? For clarity, let us introduce a business space of dimension N in which the company exists. Business activity depends on a huge number of factors (variables). Let us try to define precisely the coordinate of the company in the business space at a concrete moment of time, taking this coordinate to be the volume of money which the company possesses at that moment. But, as follows from common sense, such information can be known only if we stop all economic activity of the company at that moment (block its bank accounts). In other words, knowing the precise value of the company’s coordinate would put a stop, for that moment, to all the activities of the employees and clients who form the effective mass of the company. As we understand, this is absolutely impossible. The converse is also true: suppose that we know precisely the impulse of the company at a given moment, i.e., the precise number of employees and clients and the number of deals at that moment; this defines the effective velocity. But in this case we will not be able to find the coordinate precisely, i.e., the amount of money, because business activity has not stopped and the value of the coordinate changes at every moment. So we have shown that, because of the generalized uncertainty relation, we cannot know with absolute precision, and simultaneously, the values of the coordinate and the impulse of the company in the business space. For a quasi-stationary case we have the uncertainty relation for energy and time:

ΔE · Δt ≈ h̄gen.    (2)

We understand ΔE as a quantitative measure of the different forms of motion of quantum macro-objects and of all types of interaction, and Δt as the time necessary for the measurement. The economic meaning of relation (2) is then the following: if the energy of the quantum system is measured to within ΔE, then the time corresponding to this measurement has the minimal uncertainty given by (2).

Consequences of the Generalized Uncertainty Relation

In this section let us consider (2) for the description of business activity in more detail. For the sake of simplicity, and without loss of generality, let us consider the evolution of a company in a two-dimensional space. In the light of (2), let us choose ordinary time as the time variable (it is quite natural to observe any evolution in time). For the energy variable we choose a variable that can be measured in money.


Fig. 2 Dependence of the electron’s coordinate on time (measurement interval 10^−15 s). Source: http://www.1580.ru/album/2001/26-01-01/index.html

Functions of money, from the economic point of view, correspond quite well to the energy of the company expressed in money equivalent. It can be said that business is the exchange of the energy of employees for money. Moreover, the choice of money as the energy variable is determined by the following fundamental fact. As is known, classical theory deals with continuously changing quantities, whereas in quantum theory we have to do with discrete processes. A quantum is an indivisible portion of energy, and nobody has managed to carry out an experiment that discovers a fraction of a quantum. It was thus shown that the process of energy transfer has a discrete character. The same situation holds in economics: the natural indivisible and minimal portion of energy is the cent in the USA, the kopeck in Russia, the eurocent in the EU, etc. Thus it is quite reasonable to picture the process of energy transfer as the receipt or return of some amount of money which is, moreover, divisible by this minimal portion of energy. The quarterly and annual net profit of the company was chosen as a concrete energy characteristic of the business activity. The different definitions of measurement of an electron’s coordinates play the most important role in quantum theory. Let us measure the successive values of the electron’s coordinates at equal intervals. A more or less smooth ‘trajectory’ can be produced if we measure the coordinates with a low degree of accuracy, e.g., by the condensation of drops of steam in a cloud chamber. The word ‘trajectory’ is used here to underline the point that, when we speak about the trajectory of a quantum micro-particle, it is associated with low precision. If we make the intervals shorter and leave the degree of accuracy unchanged, neighboring measurements will of course give close values, but the results of successive measurements will not correspond to any smooth curve and will be scattered at random, without any visible pattern. For later use, we note that the motion of a company in the business space can be considered as free only with great reservation, because of competitors and antimonopoly law. All these factors limit the business activity of the company. That is why it is reasonable to examine the coordinates of an electron in a potential well (with a


Fig. 3 Dependence of the coordinate of Microsoft on time (measurement interval one quarter). Source: http://www.microsoft.com

restricted motion) instead of a free electron. For a quantitative comparison between a quantum micro-object and a company, let us look at Figures 2 and 3. Leaving the degree of accuracy unchanged and ignoring higher-order terms, we find results, for both the electron and the company, that do not lie on any smooth curve (a curve with a tendency to grow or decrease). The situation is quite the reverse of the one with long measurement intervals. For a more accurate investigation it would be necessary to define more precisely the values of the effective masses of the given companies and the space intervals of the events involved.

Discussion of Results

So it has been shown that quantum micro-objects and business structures have much in common: in particular, the absence of a trajectory, and the complexity of forecasting, which is again connected with this absence of a trajectory and with the discrete character of the energy transfer process. We should say that there is probably a simple way of finding h̄gen, associated with the so-called modified uncertainty relation for strongly correlated systems. We should add the reservation that the construction of a quantum theory of economics is impossible without using the principles of classical economics as instruments of measurement. As in quantum mechanics, the task of the quantum theory of economics would be the determination of the probability of the ‘measurement’ result. Some basic conclusions of our work may be rather important to economists. First of all, do not make vast plans many years ahead. Second, it is impossible to determine simultaneously the coordinates of the company in the business space of dimension N and its direction. Third, always try to stay in a quantum state. The fourth may seem rather contradictory to all the points mentioned above: try to organize your business so as to have the minimum value of the correlation radius r. In this
case the dispersion in the impulses and coordinates of the company will be much smaller.

Conclusion

In the present work it has been shown that quantum micro-objects and business companies with small effective masses have the following common features: inseparability; the absence of a trajectory and, as a result, great difficulty in forecasting their behavior; small effective masses; and a discrete process of interactions and energy transfer. From the point of view discussed above, the generalized uncertainty relation can be used not only in the case of economic processes but also for the description of the behavior of other macro systems. The main reason for this conclusion is the quantum nature of the surrounding space. However, finding the numerical value of h̄gen remains the central problem for the future creation of a quantum theory of economics. Finding this value will allow us to introduce into our theory the analogue of the Schrödinger equation and the elements of the Wentzel–Kramers–Brillouin (WKB) quasi-classical approximation.

Acknowledgements The authors would like to thank Dr. Y.I. Spichkin for useful discussions and K.V. Nechaev.

References
1. http://worldcrisis.ru/crisis/87897
2. Landau L D, Lifshitz E M (2001) Quantum Mechanics: Non-Relativistic Theory (Course of Theoretical Physics, vol. 3)


4.1 Appendix

The article has received the following comment from some participant-reviewers of the Econophys-Kolkata IV workshop.

This is a highly speculative article that makes several unfounded analogies between the probabilistic behavior of organizations and quantum mechanics, with no mathematical foundation. Here is a brief list of some doubtful arguments that the authors make:

1. The authors’ statement about not being able to measure companies in principle is doubtful. Their argument is more like the (classical) thermodynamics of a thermometer changing the temperature of what it measures – because both the thermometer and the system change to reach equilibrium. But in principle this effect can be made arbitrarily small by using a thermometer of small mass. There is nothing quantum mechanical about this.

2. The authors’ analogy between the uncertainty in the state of a company (a completely classical probabilistic uncertainty) and the uncertainty in the energy of an electron in a pure coherent superposition state (which cannot be explained by classical physics) is incorrect. For example, an electron can exist in a coherent superposition of two (or more) energy eigenstates — a phenomenon that is not permitted by classical physics. Consider an electron in an equal superposition of two energy eigenstates with energies E0 and E1, i.e., the state of the electron is the pure state |ψ⟩ = (|E0⟩ + |E1⟩)/√2. Measuring the energy of the electron in the state |ψ⟩ yields one of the two energy values E0 or E1 as the measurement result, with probability half each, and the post-measurement state of the electron collapses into one of the two energy eigenstates |E0⟩ and |E1⟩, depending upon which result was obtained. In contrast to the above, if the electron is in a mixed state ρ = (1/2)|E0⟩⟨E0| + (1/2)|E1⟩⟨E1|, this is a classical probabilistic mixture of the two pure states |E0⟩ and |E1⟩. An energy measurement on ρ will have the same measurement statistics as an energy measurement on |ψ⟩. But ρ represents (in a sense) our lack of complete knowledge of the energy of the system, which is E0 with probability half and E1 with probability half. This is analogous to the uncertainty in the state of a company, for instance. Whereas the state vector |ψ⟩ represents complete knowledge of the pure state of the electron, |ψ⟩ and ρ are different states and behave completely differently under a variety of transformations and measurements. A full description of the basics of quantum mechanics is beyond the scope of this review, and the authors are referred to [1] and [2].

3. The electron in the pure state |ψ⟩ indeed does not have a deterministic value of energy, in principle. Two electrons prepared in the same pure state |ψ⟩ are identical physical objects. Anyone who does not believe the statement made above does not believe in (and understand) quantum mechanics. Superposition states are very fragile, and have not yet been demonstrated to occur in macroscopic
objects, apart from very recent experiments using nanomechanical resonators. Putting a company into a quantum-mechanical superposition is a lot harder than preparing a cat in a superposition of being dead and alive. A company always has a definite state – even though we may have (classical) uncertainty in knowing what it is. To elaborate further: even though, at a given time, the price of the stock of a company may be indeterminate, it does have a value at each moment in time! We may not know how to predict its value at a future time, because we do not know the very complex probabilistic correlations that the stock price may have with a variety of other economic and political factors, but at a time t = t0 the price of the stock c(t = t0) is well defined. Can you ever imagine a stock price being in a coherent superposition of $2, $3 and $4, and only on observing its value collapsing to one of those three states with certain probabilities? Of course not!

4. Coherent superpositions of pure states allow a new kind of multipartite state in quantum mechanics, known as entangled states. Two electrons can be in a pure entangled state given by |ψ⟩(1,2) = (|E0⟩(1)|E0⟩(2) + |E1⟩(1)|E1⟩(2))/√2.

Again, before any measurement is made, two electrons in the joint state |ψ⟩(1,2) do not have any deterministic energy, in principle. An energy measurement on the first electron will yield the answer E0 or E1 with probability half each. After this measurement, the joint state of the two electrons will collapse to one of the two pure states |E0⟩(1)|E0⟩(2) or |E1⟩(1)|E1⟩(2), depending upon which measurement outcome was obtained. Hence, an energy measurement on the second electron now would yield exactly the same answer as was obtained in the first measurement. It is impossible to put two companies in an entangled state. Stock prices, the net assets of a company, the number of employees in a firm, etc. are time-varying quantities which are extremely hard to model accurately in probabilistic terms, just because of our lack of knowledge of all the factors that might affect them – in exactly the same way as it is hard to predict precisely whether or not it is going to rain in Boston on a particular day next month. But these quantities can all be technically measured without disturbing them. They can all be modeled completely by classical physics. No quantum mechanics is necessary. At least, no one has proved that there is any need for quantum mechanics in explaining the behavior and evolution of these quantities. No one has ever shown that the observation of the first person in Boston to wake up on a particular morning has any effect on whether the rain clouds decide to rain that day or not! If you have the bank maintain a precise record of the company’s net assets at every moment during the entire week, and also maintain a perfect record of all financial transactions as a function of time, and also maintain a precise record of all the employees and clients and their activities, you can (even though it might be practically very hard to do!) maintain perfect records of both (what the authors call) the ‘coordinate’ and the ‘impulse’ of the company. There is no intrinsic theoretical uncertainty involved.
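The reviewers’ point about pure versus mixed states can be checked numerically. The following sketch is an editorial addition (not part of the original comment); it uses only standard quantum formalism to show that the energy-measurement statistics of |ψ⟩ and ρ coincide, even though the two states differ.

import numpy as np

# Energy eigenstates |E0> and |E1> as basis vectors
E0 = np.array([1.0, 0.0])
E1 = np.array([0.0, 1.0])

# Pure superposition |psi> = (|E0> + |E1>)/sqrt(2)
psi = (E0 + E1) / np.sqrt(2)
p_pure = np.array([abs(E0 @ psi)**2, abs(E1 @ psi)**2])   # Born rule

# Mixed state rho = (1/2)|E0><E0| + (1/2)|E1><E1|
rho = 0.5 * np.outer(E0, E0) + 0.5 * np.outer(E1, E1)
p_mixed = np.array([E0 @ rho @ E0, E1 @ rho @ E1])

print(p_pure, p_mixed)    # both give [0.5, 0.5]

# The states are nevertheless different: the pure state's density
# matrix carries off-diagonal coherences that rho lacks.
print(np.outer(psi, psi) - rho)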


In conclusion, we believe that trying to understand macroeconomic phenomena with the aid of physical models is a worthwhile pursuit. The same has been done by researchers trying to model complex communication networks, by borrowing intuition from models in statistical physics. There is just not enough evidence yet that the behavior of macroscopic objects such as companies has any quantum-mechanical aspect to it.

References
1. Griffiths, D. J., Introduction to Quantum Mechanics, Prentice Hall; United States edition (1994). ISBN 0-13-124405-1
2. Sakurai, J. J., Modern Quantum Mechanics, Addison Wesley; 2nd edition (1993). ISBN 0-201-53929-2

A section of the participants of the Econophys-Kolkata IV Workshop

Part II

Contributions to Quantitative Economics

On Multi-Utility Representation of Equitable Intergenerational Preferences Kuntal Banerjee and Ram Sewak Dubey

Abstract We investigate the possibility of representing ethical intergenerational preferences using more than one utility function. It is shown that the impossibility of representing intergenerational preferences equitably persists in the multi-utility framework under some reasonable restrictions on the cardinality of the set of utilities.

1 Introduction

In ranking infinite utility streams we seek to satisfy two basic principles: the equal treatment of all generations, and the sensitivity of the ranking to the utility of every generation in the Pareto sense. The former is captured in the axiom of anonymity, while the latter axiom is called strong Pareto. We will call a social evaluation satisfying these two conditions ethical. The theory of intergenerational social choice explores the possibility of obtaining ethical social evaluation criteria. We will not attempt to summarize the vast literature on intergenerational social choice; interested readers are referred to Basu and Mitra (2007) and the references therein. Diamond (1965) established the impossibility of ranking infinite utility streams satisfying anonymity, strong Pareto and continuity of the Social Welfare Relation (SWR, a reflexive and transitive binary relation). Svensson (1980) showed that Diamond’s impossibility result could be avoided by weakening the continuity requirement on the ethical Social Welfare Relation.

Kuntal Banerjee Barry Kaye College of Business, Department of Economics, Florida Atlantic University, Boca Raton, USA. e-mail: [email protected]
Ram Sewak Dubey Department of Economics, Cornell University, Ithaca, New York, USA. e-mail: [email protected]


While much of this literature concerned itself with the existence of ethical Social Welfare Orders (SWOs), Basu and Mitra (2003) proved that there is no ethical social welfare function. In view of these impossibility results, subsequent analysis concentrated on defining ethical SWRs and exploring some of their important properties.1 Our concern in this paper is to investigate whether we can avoid the impossibility result of Basu and Mitra (2003) using some weaker requirement of representability. Two directions are pursued. For an ethical SWR we ask whether there is a Richter-Peleg Representation of the partial order. It follows in a straightforward way from the analysis in Basu and Mitra (2003) that no such ethical SWR exists. Following some recent developments in the theory of representable partial orders, we ask whether one can define an ethical SWR that can be represented not by just a single utility function but possibly by many utility functions. This approach is called the multi-utility representation.2 As is argued by Ok (2002), in the special case of a multi-utility representation using a finite set of utility functions one might even be able to use the theory of vector optimization (multi-objective programming) in determining best alternatives over a constrained set, as is often the primary goal of most economic actors endowed with preferences. This feature makes the approach particularly appealing. The literature on multi-utility representation of binary relations has received significant attention in the works of Ok (2002) and Ok and Evren (2007). Unfortunately, both of the alternative approaches fail to yield a positive resolution to the Basu-Mitra impossibility result. Preliminaries are provided in Section 2. In Section 3 the main results are stated and proofs are provided.

2 Preliminaries

The space of utility profiles (we will also call them utility streams) is the infinite Cartesian product of the [0, 1] interval, denoted by X.3 Denoting by N the set of all natural numbers, we can write X as [0, 1]^N. A partial order on any set is a binary relation ≽ that is reflexive and transitive. The term partial order is used interchangeably with social welfare relation. The asymmetric (“strictly better than”) and the symmetric (“indifferent to”) relations associated with ≽ will be denoted by ≻ and ∼ respectively. We will be concerned with the representation of social welfare relations that satisfy the following axioms. A SWR ≽ defined on X satisfies Anonymity: For all x, y ∈ X, if there exist i, j ∈ N such that xi = yj, xj = yi and xk = yk for all k ≠ i, j, then x ∼ y.

1 Asheim and Tungodden (2004), Banerjee (2006), Basu and Mitra (2007) and Bossert, Sprumont and Suzumura (2007) are some of the representative papers in this area.
2 A precise definition of each approach is provided in Section 2.
3 We will write a vector x in X or R∞ as (x1, x2, . . . , xi, . . .). The following vector inequalities are maintained throughout this paper: x > y iff xi ≥ yi for all i and xj > yj for some j; x ≥ y iff xi ≥ yi for all i. So, x > y iff x ≥ y and x ≠ y.


Strong Pareto: For x, y ∈ X, if x > y, then x ≻ y. Social Welfare Relations that satisfy the axioms of anonymity and strong Pareto will be called ethical. To ease the writing, for any two sets A, B, let us denote by A^B the class of functions with domain A and range in B. Let us recall the standard notion of representing binary relations that are complete. Given ≽, a SWO on a set X, we say that u ∈ X^R represents ≽ if x ≽ y iff u(x) ≥ u(y). In this case, the order is said to have a standard representation. A SWR on X is said to have a Richter-Peleg Representation if there exists some u ∈ X^R such that x ≻ y implies u(x) > u(y). It is easily seen that if u is a Richter-Peleg Representation of a partial order, then if u(x) > u(y) holds for the pair x, y we know that y ≻ x is not true, but we cannot conclude whether x ≻ y is true or false. So there is no way to recover the binary relation ≻ using the information in the Richter-Peleg Representation. This point is made in Majumdar and Sen (1976). A SWR ≽ on X is said to have a multi-utility representation if there is some class U ⊂ X^R such that

x ≽ y iff u(x) ≥ u(y) for all u ∈ U.    (1)

Obviously, if a SWO has a standard representation, then it must have a multi-utility representation, but the converse is not true. The multi-utility representation approach in utility theory has received significant attention through the works of Ok (2002), Ok and Evren (2007) and Mandler (2006). We use the term multi-utility representation to refer to this representation approach following Ok (2002). Notably, Mandler (2006) calls the class U a psychology.
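For intuition, here is a finite-dimensional illustration of condition (1), added editorially and not taken from the paper: with a two-element utility set, x is weakly preferred to y exactly when both utility functions agree, and disagreement produces non-comparability.

import numpy as np

# Hypothetical two-utility representation on finite streams:
# x is weakly preferred to y iff u(x) >= u(y) for every u in U.
u1 = lambda x: float(np.sum(x))   # total utility
u2 = lambda x: float(np.min(x))   # worst-off generation
U = [u1, u2]

def weakly_preferred(x, y):
    return all(u(x) >= u(y) for u in U)

x = np.array([0.5, 0.5, 0.5])
y = np.array([0.4, 0.1, 0.8])
z = np.array([0.9, 0.1, 0.9])
print(weakly_preferred(x, y), weakly_preferred(y, x))  # True False
print(weakly_preferred(x, z), weakly_preferred(z, x))  # False False
# The pair (x, z) is non-comparable: this is how incompleteness
# arises naturally in a multi-utility representation.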

3 Results

It is now well known, from the result in Basu and Mitra (2003), that ethical SWOs cannot have a standard representation. In this section, we will consider the representation of ethical SWRs on X under the Richter-Peleg criterion and the multi-utility criterion. For ready reference, let us state Basu and Mitra (2003, Theorem 1).

Theorem 1 (Basu-Mitra Impossibility Theorem). There does not exist an ethical SWO on X that has a standard representation.

As an easy consequence of this theorem, it follows that an ethical SWR cannot have a Richter-Peleg Representation (RPR).

Proposition 1. There does not exist an ethical SWR that has a Richter-Peleg Representation.

Proof: Let ≽ be an ethical SWR with its asymmetric and symmetric parts ≻ and ∼ respectively. Using the ethical SWR ≽ and its RPR u ∈ X^R we can construct the following SWO: for x, y ∈ X, we define ≽′ by declaring x ≽′ y iff u(x) ≥ u(y). We will show that ≽′ is an ethical SWO. For any x, y ∈ X satisfying x > y we would have, from strong Pareto, x ≻ y. By the RPR of the SWR, we must have u(x) > u(y),
and this implies, from the definition of ≽′, that x ≻′ y. Similarly, for any x, y ∈ X with xi = yj, xj = yi and xk = yk for all k ≠ i, j, we must have x ∼ y. This means both x ≻ y and y ≻ x must be false, implying from the definition of the RPR that u(x) = u(y), and hence x ∼′ y. This establishes that ≽′ is an ethical SWO and that u is a standard representation of ≽′, thereby contradicting Theorem 1.

We now turn our attention to the multi-utility representation of ethical preferences. Suppose ≽ is a SWR satisfying anonymity and strong Pareto. Assume that ≽ has a multi-utility representation using a class of utility functions U. Let x > y; then x ≻ y, as ≽ satisfies strong Pareto. This implies that

for all u ∈ U, u(x) ≥ u(y) and for some u ∈ U, u(x) > u(y).    (2)

If, for x, y ∈ X, there exist i, j ∈ N such that xi = yj, xj = yi and xk = yk for all k ≠ i, j, then x ∼ y. This implies

u(x) = u(y) for all u ∈ U.    (3)

Observe that Proposition 1 in Ok and Evren (2007) shows that any partial order has a multi-utility representation (without restricting the cardinality of the set of utility functions U). However, for the resulting representation to be tractable and useful, we would prefer the utility set U to be of minimal cardinality. In that regard, given an ethical SWR ≽ on X, we ask whether there is a multi-utility representation with the set of utilities U having finite cardinality. The answer to that is: no! Suppose there is a multi-utility representation of ≽ with the set of utilities U having cardinality 2. Write U = (u1, u2). Consider the function u ∈ X^R defined by u(x) = u1(x) + u2(x) and define a SWO ≽∗ by x ≽∗ y iff u(x) ≥ u(y). It is easily checked that ≽∗ satisfies the axioms of anonymity and strong Pareto. The function u is also a standard representation of ≽∗. This contradicts the conclusion of Theorem 1, and the contradiction establishes that no ethical SWR can have a multi-utility representation where the cardinality of the set of utility functions is 2. The idea of the proof readily extends to the case where the set U is allowed arbitrary finite cardinality. We can in fact show a stronger result. In the next theorem, it is shown that there is no ethical SWR that has a multi-utility representation using a set of utility functions that is countably infinite.

Theorem 2. There does not exist an ethical SWR that has a multi-utility representation with the set of utilities being countably infinite.

Proof: By way of contradiction, assume that there exists a SWR ≽ that has a multi-utility representation with the cardinality of the set of utilities U being countably infinite. This is equivalent to saying that there exists a u ∈ X^R∞ such that x ≽ y iff u(x) ≥ u(y). Let I denote the interval [−1, 1]. Let g : R∞ → I∞ be defined as follows:

gi(a) = ai/(1 + ai) if ai ≥ 0, and gi(a) = ai/(1 − ai) if ai < 0, for all i ∈ N and all a ∈ R∞,    (4)
and g(a) = (g1(a), g2(a), . . .) ∈ I∞. Observe the following facts about the function g: (a) gi(a) = 0 iff ai = 0; (b) ai/(1 + ai) is a strictly increasing function for all ai ≥ 0; and (c) ai/(1 − ai) is a strictly increasing function for all ai < 0. Define the vector α = (1/2, 1/2^2, . . .) and a function V : X → R as follows:

V(x) = α · g(u(x)) = ∑i∈N (1/2)^i gi(u(x)).    (5)

Let us now define a SWO ≽′ as follows: for all x, y ∈ X,

x ≽′ y iff V(x) ≥ V(y).    (6)

We will now show that ≽′ satisfies the axioms of anonymity and strong Pareto. To check the anonymity of ≽′, let x ∈ X and let xπ be a profile with the utilities of the ith and jth generations in x swapped. By (3), u(x) = u(xπ) and consequently g(u(x)) = g(u(xπ)). Hence V(x) = V(xπ), so x ∼′ xπ. To check strong Pareto, let x, y ∈ X be such that x > y. We will show that V(x) > V(y). By (2), ui(x) ≥ ui(y) for all i ∈ N and, for at least some j ∈ N, uj(x) > uj(y). Three cases are possible: (i) ui(x) ≥ ui(y) ≥ 0; (ii) ui(x) ≥ 0 ≥ ui(y); and (iii) 0 ≥ ui(x) ≥ ui(y). In case (i), gi(ui(x)) ≥ gi(ui(y)) ≥ 0 follows from (4) and the fact that gi is strictly increasing in ui for ui ≥ 0. In case (ii), gi(ui(x)) ≥ 0 ≥ gi(ui(y)) follows from the definition of g. In case (iii), 0 ≥ gi(ui(x)) ≥ gi(ui(y)) follows from (4) and the fact that gi is strictly increasing in ui for ui < 0. Since each component function in the definition of gi is strictly increasing, gj(uj(x)) > gj(uj(y)). Thus, in all three cases, gi(ui(x)) ≥ gi(ui(y)), and gj(uj(x)) > gj(uj(y)). From the definition of V it now follows that V(x) > V(y). This implies x ≻′ y. So ≽′ is a SWO that has a standard representation satisfying anonymity and strong Pareto. This violates Theorem 1.

Acknowledgements We thank Tapan Mitra for a helpful conversation.

References
1. Asheim G.B, Tungodden B. Resolving distributional conflicts between generations, Econ. Theory 24 (2004), 221-230
2. Banerjee K. On the extension of utilitarian and Suppes-Sen social welfare relations to infinite utility streams, Soc. Choice Welfare 27 (2006), 327-339
3. Basu K, Mitra T. Aggregating infinite utility streams with intergenerational equity: the impossibility of being Paretian, Econometrica 71 (2003), 1557-1563
4. Basu K, Mitra T. Utilitarianism for infinite utility streams: a new welfare criterion and its axiomatic characterization, J. Econ. Theory 133 (2007), 350-373
5. Bossert W, Sprumont Y, Suzumura K. Ordering infinite utility streams. J. Econ. Theory 135 (2007), 579-589
6. Diamond P.A. The evaluation of infinite utility streams. Econometrica 33 (1965), 170-177
7. Majumdar M, Sen A. A note on representation of partial orderings, Rev. Econ. Studies 43 (1976), 543-545
8. Mandler M. Cardinality versus ordinality: a suggested compromise, Amer. Econ. Review 96 (2006), 1114-1136
9. Ok E. Utility representation of an incomplete preference relation, J. Econ. Theory 104 (2002), 429-449
10. Ok E, Evren O. On the multi-utility representation of preference relations, mimeo, New York University (2007)
11. Svensson L.G. Equity among generations. Econometrica 48 (1980), 1251-1256

Variable Populations and Inequality-Sensitive Ethical Judgments S. Subramanian

Abstract This note makes the very simple point that apparently unexceptionable axioms of variable population inequality comparisons, such as the replication invariance property, can militate against other basic and intuitively plausible desiderata. This has obvious, and complicating, implications for the measurement of inequality which, for the most part, has been routinely guided by a belief in the unproblematic nature of population-neutrality principles.

1 Introduction

Inequality comparisons are greatly facilitated when they are guided by axiom systems. (This is true also of welfare and poverty comparisons.) For the most part, the tradition has been to postulate axioms that are valid for fixed population comparisons. The bridge between fixed and variable population contexts has, almost entirely, been constituted by the so-called replication invariance axiom. Taking income, for specificity, to be the space in which inequality is appraised, replication invariance requires that how one assesses inequality should be invariant with respect to a k-fold replication of an income distribution, where k is any positive integer. The axiom, on the face of it, is unexceptionable, and is routinely treated as being innocuous: for example, Shorrocks (1988; p.433) refers to it as “perhaps the least controversial of the ‘subsidiary’ properties [of inequality indices]”. Recent work in poverty measurement - see Chakravarty, Kanbur and Mukherjee (2006) - however suggests that replication invariance may not be quite so uncontentious as it seems. As they put it (p.479): “Population replication axioms are now so much a part of the axiology of poverty measurement that economists take them on board without much thought.” The present article is concerned with making a similar point about the axiology of inequality comparisons.

S. Subramanian Madras Institute of Development Studies, Chennai, India. e-mail: [email protected]


Replication invariance is concerned with inequality comparisons which have a focus on the proportions of a population commanding different levels of income. However, another sort of criterion by which inequality in a society can be assessed would relate to the absolute numbers of the population that command, or fail to command, a preponderance of its income. Such a criterion is operationalized, in the present note, through the postulation of a pair of properties called, respectively, ‘Upper Pole Monotonicity’ and ‘Normalization’. The first of these properties requires that if all the income of a society is concentrated in the ownership of a single person, then an addition to the population of a person with identical income should result in a dilution of inequality. The second property is asymmetric in relation to the first: it requires that, given the regime of income-concentration just described, an addition to the population of a person with zero income should not be construed as worsening inequality - on the ground that, with the initial concentration of all income in a single person’s ownership, inequality is already as bad as it could possibly get. It is not hard to see that the intuition underlying a property like replication invariance could be at odds with the intuition underlying properties like Upper Pole Monotonicity and Normalization. The tension has to do with pitting considerations of relative population proportions against considerations of absolute population size - a conflict, in effect, of fractions versus whole numbers. The problem is elaborated on in the rest of the paper.

2 Preliminary Concepts

For specificity, inequality in this note will be assessed in the space of incomes. R is the set of real numbers and M is the set of positive integers. Xn is the set of all non-negative n-vectors, where n is a positive integer. A typical element of Xn is x = (x1, . . . , xi, . . . , xn), where xi (≥ 0) is the income of the ith individual. Define X ≡ ∪nε M Xn. For every xε X, n(x) is the dimensionality of x, that is, n(x) ≡ #N(x), where N(x) is the set of people whose incomes are represented in x. Define X∗ ≡ {xε X | xi = 0 ∀ iε N(x)\{j} & ∃ jε N(x) : xj > 0}. X∗, then, is the set of income distributions which are extremal, in the sense of having only two types of individuals - the ‘haves’, constituted by a single individual in whose ownership the entire income of the society is concentrated, and the ‘have-nots’, with no income at all, who constitute the rest of the society. Let R be a binary relation of ‘inequality-sensitive’ comparison defined on X. For all x, yε X, we shall write xRy to signify that “x reflects at most as much inequality as y”. P and I are the asymmetric and symmetric parts respectively of R. For all x, yε X, xPy will signify that “x reflects less inequality than y”, and xIy will signify that “x reflects exactly as much inequality as y”. We shall take it that R is reflexive (for all xε X : xRx) and transitive (for all distinct x, y, zε X : xRy & yRz → xRz), but not necessarily complete (that is, for all distinct x, yε X, it is not necessarily true that either xRy or yRx must hold). That is, R will be taken to be a quasi-order. Further, the binary relation R will be presumed to be anonymous, that is, for all x, yε X, if
y is derived from x by a permutation of incomes across individuals, then x and y will be held to reflect the same extent of inequality. The assumption of anonymity ensures that all populations of different sizes and containing different people can be compared as though one population was derived from the other through a population increment or decrement. We let ℜ stand for the set of all quasi-orders on X.

3 Some Axioms For Variable Population Inequality Comparisons

In what follows, we seek to impose more structure on the binary relation R by restricting it with a set of properties that may be regarded as desirable for an inequality judgment to possess. The most widely invoked restriction on inequality comparisons in a variable populations context is the property of replication invariance, which - as we have seen - requires inequality judgments to be invariant with respect to population size replications. In this view, two income distributions should be treated as being identically unequal if their relative frequency distributions are identical. Formally, we have:

Replication Invariance (Axiom RI). A binary relation Rε ℜ satisfies Axiom RI if and only if, for all xε X and kε M, if y = (x, . . . , x) and n(y) = kn(x), then xIy.

We next propose a simple criterion for inequality judgments relating to extremal income distributions. Consider an extremal distribution x of dimensionality n, such that (n − 1) persons have an income of zero each and 1 person has the entire income, say x, of the society. Suppose y is derived from x through the addition of a single person with income x. For a given number (n − 1) of ‘have-nots’ in x, the number of ‘haves’ has risen from 1 to 2 in y: this increase can naturally be associated with a dilution in the extent to which the society is polarized, and be taken to signify a reduction in inequality. This property of an inequality comparison will be called Upper Pole Monotonicity, to signify that inequality will decline monotonically with an increase in the population of the upper end of an extremal distribution:

Upper Pole Monotonicity (Axiom UPM). A binary relation Rε ℜ satisfies Axiom UPM if and only if, for all x, yε X, if xε X∗ and y is derived from x by the addition of a single person with the same income as that of the richest person in x, then yPx.

Considerations of symmetry with Axiom UPM may suggest a routine endorsement of a property such as the following one. Imagine an extremal distribution x of dimensionality n, such that (n − 1) persons have an income of zero each and 1 person has a positive income of x. Suppose v is derived from x through the addition of a single person with income 0. For a given number (one, as it happens) of ‘haves’ in x, the number of ‘have-nots’ has risen from (n − 1) to n in v: should not this increase be naturally associated with an increase in the extent to which the society is polarized, and be taken to signify an increase in inequality? A mechanical rehearsal of the reasoning underlying Axiom UPM would suggest an answer in the affirmative. However, there is a possible complication which may inhibit such a mechanical rehearsal, and this is considered in what follows.


The difference between the distributions x and v conceals a certain crucial similarity between them, which is that both are extremal distributions. This, indeed, is their distinctive feature. In each distribution, income is divided as unequally as it possibly could be: there is then no reason to rank the one distribution above the other in terms of inequality. In particular, it is not clear why the size of the population should enter into an assessment of the extent of inequality when, given the population size, inequality cannot get any worse. Yet, virtually all real-valued indices of inequality incorporate this irrelevant item of information. An inequality index is a mapping D : X → R, such that, for every xε X, D(x) specifies a unique real number which is intended to signify the extent of inequality in x. Consider, for instance, the squared coefficient of variation (C2): For all xε X,

C2(x) = [1/(n(x)µ(x)²)] ∑iε N(x) xi² − 1,    (1)

where µ(x) is the mean of the distribution x. If one person appropriates the entire income, the value of C₂ is (n − 1). Thus, for an extremal distribution of a hundred persons, the value of C₂ is 99, while for an extremal distribution of two hundred persons, the value of C₂ is 199: it is not clear why the extent of inequality in the second case should be judged to be over twice as high as in the first case when in both cases inequality is as high as it could be.

Suppose a nation starts out with an income distribution in which a single person owns all the income, and that this feature of the distribution is preserved over a period of time during which the population grows. Then it would appear to be reasonable to suggest that inequality has remained unchanged in the society, and odd to assert that inequality has increased over time. There is a piquant passage in Carroll's Through the Looking Glass which has relevance for this view:

"I like the Walrus best", said Alice: "because he was a little sorry for the poor oysters". "He ate more than the Carpenter, though", said Tweedledee. "You see he held his handkerchief in front, so that the Carpenter couldn't count how many he took: contrariwise". "That was mean!" Alice said indignantly. "Then I like the Carpenter best - if he didn't eat so many as the Walrus". "But he ate as many as he could get", said Tweedledum. This was a puzzler.

Without intending to be frivolous, one could invite the reader to think of the distribution x as the Carpenter and the distribution v as the Walrus: what we then encounter is a version of Carroll's "puzzler". Briefly, if all extremal distributions are treated as being indistinguishable in terms of the extent of inequality, then this would be a case against postulating a property - call it 'Lower Pole Monotonicity' - which is derived as a mirror image of 'Upper Pole Monotonicity'. Rather, the case would be in favour of a sort of 'Weak Normalization' which asserts that additions of zero-income individuals to an extremal distribution should not be construed as worsening inequality:


Weak Normalization (Axiom WN). A binary relation R ∈ ℜ satisfies Axiom WN if and only if, for all x, y ∈ X, if x ∈ X∗ and y is derived from x by the addition of a single person with zero income, then ¬(xPy).

Axiom WN demands only that an extremal distribution of smaller population should not be declared to be inequality-wise preferred to an extremal distribution of larger population. A stronger condition - call it Normalization - might call for declaring the two distributions to be inequality-wise indifferent:

Normalization (Axiom N). A binary relation R ∈ ℜ satisfies Axiom N if and only if, for all x, y ∈ X, if x ∈ X∗ and y is derived from x by the addition of a single person with zero income, then xIy.

For future reference, I present a normalized version of the squared coefficient of variation, C₂*: For all x ∈ X:

C₂*(x) = [1/(n(x) − 1)] [ (1/(n(x)µ²(x))) ∑_{i∈N(x)} x_i² − 1 ] = [1/(n(x) − 1)] C₂(x).   (2)
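
To see the two indices side by side, here is a minimal Python sketch (the helper names c2 and c2_star are ours, purely for illustration, not the author's) that evaluates Eqs. (1) and (2) on the extremal distributions of one hundred and two hundred persons discussed earlier:

```python
import numpy as np

def c2(x):
    """Squared coefficient of variation, Eq. (1)."""
    x = np.asarray(x, dtype=float)
    n, mu = x.size, x.mean()
    return (x ** 2).sum() / (n * mu ** 2) - 1.0

def c2_star(x):
    """Normalized squared coefficient of variation, Eq. (2): C2(x)/(n(x) - 1)."""
    return c2(x) / (len(x) - 1)

# Extremal distributions: all income held by a single person.
e100 = np.zeros(100); e100[-1] = 100.0   # n = 100
e200 = np.zeros(200); e200[-1] = 100.0   # n = 200

print(c2(e100), c2(e200))            # 99.0 199.0 -- C2 rises with population size
print(c2_star(e100), c2_star(e200))  # 1.0 1.0    -- C2* pegs all extremal cases at unity
```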

In what follows, we examine the sorts of inequality judgments that are possible when they are required to satisfy certain combinations of the axioms we have discussed.

4 On The Possibility Of Consistent Inequality Comparisons

The following proposition is true.

Proposition. (i) There exists a binary relation R ∈ ℜ which satisfies Upper Pole Monotonicity and Normalization; and (ii) there exists a binary relation R ∈ ℜ which satisfies Upper Pole Monotonicity and Replication Invariance; but (iii) there exists no binary relation R ∈ ℜ which satisfies Replication Invariance, Upper Pole Monotonicity and Weak Normalization.

Proof: ((i) & (ii)) It can be straightforwardly shown (a) that the requirements stated in part (i) of the Proposition are satisfied by the binary relation R∗, defined as follows: ∀x, y ∈ X, xR∗y if and only if C₂*(x) ≤ C₂*(y), where the index C₂* is as defined in Eq. (2); and (b) that the requirements stated in part (ii) of the Proposition are satisfied by the binary relation R̂, defined as follows: ∀x, y ∈ X, xR̂y if and only if C₂(x) ≤ C₂(y), where the index C₂ is as defined in Eq. (1). The demonstration is trivial, and therefore omitted.

(iii) A simple counter-example suffices to prove part (iii) of the Proposition. Let x̄ be any positive real number, and consider the following distributions x, y and z belonging to X: x = (0, 0, x̄, x̄), y = (0, 0, x̄), and z = (0, x̄). Then: By the upper pole monotonicity axiom, (P1) xPy. By the axiom of replication invariance, (P2) zIx. By virtue of transitivity of R, and given (P1) and (P2), (P3) zPy. However, by the Weak Normalization Axiom, (P4) ¬(zPy). From (P3) and (P4) we have a contradiction. This completes the proof of the Proposition.
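
The counter-example is also easy to check numerically. Reusing the hypothetical c2 and c2_star helpers from the earlier sketch, and taking x̄ = 1:

```python
x = [0, 0, 1, 1]   # a two-fold replication of z
y = [0, 0, 1]      # z plus one person with zero income
z = [0, 1]

print(c2(x), c2(y), c2(z))                 # 1.0 2.0 1.0
# Under C2: x I z (replication invariance) and x P y (upper pole monotonicity),
# but then z P y, which violates Weak Normalization -- the contradiction in (iii).
print(c2_star(x), c2_star(y), c2_star(z))  # 0.333... 1.0 1.0
# Under C2*: z I y (Normalization) and x P y hold, but x P z, so replication
# invariance fails -- exactly as parts (i) and (iii) together require.
```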

The proof of part (i) of the Proposition above revolves around the construction of an inequality index - C₂* - which must be judged to be a peculiar index in terms of common convention: it violates replication invariance, which is not a feature of any commonly employed inequality index. The proof of part (ii), however, revolves around an inequality index - C₂ - which violates Normalization, and this is a feature of all commonly employed inequality indices. The point about replication invariance and normalization is at the heart of the (impossibility) result contained in part (iii) of the Proposition.

This leads naturally to a consideration of the real-valued representation of inequality under alternative specifications of the axiom system by which the aggregation procedure is constrained. In particular, it is of interest to examine some consequences of measuring inequality when the inequality index is required to satisfy (a) Replication Invariance at the expense of Normalization, and (b) Normalization at the expense of Replication Invariance. One aspect of this problem is discussed in what follows.

5 Aggregation: Normalization Versus Replication Invariance?

When inequality is measured in terms of a real-valued index, the conflict between Replication Invariance and Normalization is reflected sharply in one particular interpretation of the inequality index. This interpretation revolves around establishing a correspondence between the value of an inequality measure for an n-person distribution and the shares in which a cake of given size is split between two persons. If such an equivalence can be demonstrated, then this would be a very useful outcome, because, in many ways, our intuitive grasp of inequality is clearest and sharpest in the context of a two-person cake-sharing exercise. As it happens, the correspondence in question can, indeed, be effected, with or without qualification.

Two alternative approaches to the problem are available in Shorrocks (2005) and Subramanian (1995, 2002). The difference in the results obtained by the two authors resides in the fact that Shorrocks considers inequality measures which satisfy Replication Invariance, while Subramanian considers inequality measures which satisfy Normalization. As discussed below, Normalization at the expense of Replication Invariance affords a rather more unambiguous link between n-person and two-person distributions than does Replication Invariance at the cost of Normalization.

Shorrocks (2005) demonstrates that the Gini coefficient (which, of course, is a replication-invariant inequality measure) can be interpreted as the 'excess share' of the richer of two persons when a cake of given size is split between two individuals.


The 'fair share' in a two-person situation is one-half; and a Gini coefficient (G) of 0.4, as it turns out, can be interpreted as the share in excess of 0.5 going to the richer person: in this interpretation, a Gini coefficient of 0.4 for an n-person distribution is equivalent to the richer of two persons, in a two-way division of a cake, receiving 90 per cent of the cake. When, however, G for an n-person distribution exceeds 0.5, the corresponding share of the poorer of the two persons in an 'equivalent' cake-sharing setting would have to be negative - and negative shares are not easy to get an intuitive handle on. Shorrocks shows that the 'excess share' interpretation is valid not only for a two-person split but for a general, n-person split, in terms of which G is the excess of the richest person's share, call it r, over his fair share, which is just 1/n, so that r = G + 1/n. Shorrocks uses the term 'modulo 2' to indicate excess shares in the context of a 2-person split, and the term 'modulo 10' to indicate excess shares in the context of a 10-person split. Thus, while a G-value of, say, 0.7 would be equivalent to a hard-to-interpret 120 per cent share for the richer person in a cake split two ways, it would also be equivalent to a readily comprehensible share of 80 per cent for the richest person in a cake split 10 ways, with the poorest 9 individuals sharing the balance 20 per cent equally among themselves. Notice that G can be as high as 0.9 before r begins to exceed 100 per cent of the cake in a 10-way split. Values of G in excess of 0.9 are not generally encountered in actual empirical distributions, and therefore, in practice, the 'modulo 10' interpretation of the Gini should not pose the problem of having to interpret negative shares or shares in excess of unity.

Shorrocks indicates that the 'excess share' interpretation can be applied to a range of inequality measures, including the class of Generalized Entropy measures and the family of 'ethical' indices due to Atkinson (1970). As he puts it (Shorrocks 2005, p. 4): 'If one [. . .] considers a distribution consisting of one rich person with the income share r and (n − 1) poorer people each with income share (1 − r)/(n − 1), the values of each of these indices may be written as increasing functions of the excess share of the richest person, r − 1/n. However, the relationship [between] the inequality value and the excess share is more complex and, as a consequence, the interpretation is less immediate'. (I have taken some minor notational liberties in reproducing the quote.) While the 'excess share' interpretation of an inequality index is of very considerable interpretive value, the keen edge of immediate intuitive clarity does get blunted when one departs from the 'modulo 2' interpretation.
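
The 'modulo n' arithmetic is easy to reproduce. The following fragment (a sketch with a hypothetical helper name, not Shorrocks's own code) computes the richest person's share r = G + 1/n and the common share of each of the remaining (n − 1) persons:

```python
def excess_share_split(G, n):
    """Shorrocks's 'excess share' reading of the Gini coefficient: in an
    n-way split the richest person receives r = G + 1/n, and the other
    n - 1 persons share the remainder equally."""
    r = G + 1.0 / n
    return r, (1.0 - r) / (n - 1)

print(excess_share_split(0.4, 2))   # (0.9, 0.1): the 90 per cent case in the text
print(excess_share_split(0.7, 2))   # (1.2, -0.2): over 100 per cent, hard to interpret
print(excess_share_split(0.7, 10))  # (0.8, 0.0222...): the 'modulo 10' reading
```
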
Subramanian (1995, 2002) indicates that if an inequality index is required to satisfy Normalization at the expense of Replication Invariance, then it is possible to preserve the '2-person split' interpretation of the inequality value, without risking shares in excess of 100 per cent for the richer person and negative shares for the poorer person. This can be shown to hold for normalized versions of the Gini coefficient (Subramanian 2002) and the Atkinson class of indices (Subramanian 1995). The relevant results are briefly reviewed in what follows.

It may be recalled that Atkinson (1970) sought to relate the extent of inequality in any income distribution to the loss in welfare caused by the presence of inequality. To operationalize this approach requires, first, the specification of some appropriate ('equity-sensitive') welfare function defined on an income distribution. Thus, given any income vector x, with mean µ(x), let W(x) be the welfare level associated with the given distribution of incomes. The equally distributed equivalent income is that level of income, call it x_c, such that its equal distribution leads to a level of welfare which is the same as the welfare level associated with the distribution x under review. Given any x, and a welfare function W defined on it, a measure of inequality D for the distribution can be obtained as the proportionate difference between the mean of the distribution and the equally distributed equivalent income:

D(x) = [µ(x) − x_c(x)] / µ(x).   (3)

A normalized version of D, call it D*, can be obtained as the ratio of the difference between µ and x_c and the difference between µ and the lowest value x_c0 which x_c can attain, namely its value when the distribution is maximally concentrated (with the richest person appropriating the entire income):

D*(x) = [µ(x) − x_c(x)] / [µ(x) − x_c0(x)].   (4)

It is now possible to obtain a result on the relationship between the value of D* and the share of the poorer of two individuals when a cake is split two ways. To see what is involved, given any ordered n-vector of incomes x, define a dichotomously allocated equivalent distribution (daed) as a non-decreasingly ordered 2-vector x* ≡ (x*_1, x*_2) such that x* has the same mean µ(x) as x and the same normalized D-value D*(x) as x. One then obtains a pair of simultaneous equations in x*_1 and x*_2; solving for these, and letting σ stand for the income share x*_1/(2µ(x)) of the poorer individual in the daed x*, one can proceed - in a general way - to obtain a relationship between D* and σ. In the context of the Gini coefficient, which has its origin in a 'Borda' welfare function (wherein aggregate welfare is a rank-order weighted sum of individual incomes, on which see Sen 1973), it can be verified - Subramanian 2002 provides the details - that the relationship between the normalized Gini G* and the income share σ_G of the poorer individual in the daed x* is given, very simply, by:

σ_G = (1 − G*)/2.   (5)

Thus, if G* for some n-person distribution should be 0.4, this is equivalent to a situation in which the poorer of two persons gets a 30 per cent share of a cake that is split two ways. The significance of the normalized Gini can always be understood in terms of this helpful '2-person split'.

The Atkinson class of inequality indices can be similarly interpreted, as is discussed below in the light of Subramanian (1995). Atkinson employs a utilitarian social welfare function which is a sum of identical individual utility functions that are symmetric, increasing and strictly concave, and specialized to the 'constant elasticity-of-marginal-utility' form. The utility function, for each person i, is given by

u(x_i) = (1/λ) x_i^λ,  λ ∈ (0, 1).   (6)


(Non-positive values of λ are not considered, because of the problems occasioned by these in the presence of zero incomes in a distribution, for a discussion of which see Anand 1983, pp. 84-86.) Given any income vector x, the Atkinson welfare function on x can be written as:

W(x) = ∑_{i∈N(x)} u(x_i) = (1/λ) ∑_{i∈N(x)} x_i^λ,  λ ∈ (0, 1).   (7)

It is easy to check that the equally-distributed equivalent income for the Atkinson social evaluation function is

x_cA(x) = [ (1/n) ∑_{i∈N(x)} x_i^λ ]^{1/λ},  λ ∈ (0, 1).   (8)

Atkinson's inequality index is then given by:

A(x) [= {µ(x) − x_cA(x)} / µ(x)] = 1 − [ (1/(nµ^λ(x))) ∑_{i∈N(x)} x_i^λ ]^{1/λ},  λ ∈ (0, 1).   (9)

If x_cA0 is the minimum value which x_cA can attain (corresponding to a situation in which all income is concentrated in a single person's hands), then it is easy to check that

x_cA0(x) = n^{(λ−1)/λ} µ(x),  λ ∈ (0, 1).   (10)

A normalized version of the Atkinson inequality index A can be obtained as the ratio of the difference between µ and x_cA and the difference between µ and x_cA0: this index - call it A* - always attains a value of unity, irrespective of the dimensionality of the distribution, when the latter is an extremal one. Given (8) and (10), it can be verified that

A*(x) = [1/(1 − n^{(λ−1)/λ})] [ 1 − (1/µ(x)) ( (1/n) ∑_{i∈N} x_i^λ )^{1/λ} ],  λ ∈ (0, 1).   (11)

Providing a simple interpretation for the index A*, to recall, is the motivation with which we started out. Given an n-person distribution and some value for the 'inequality-aversion' parameter λ, it is not the easiest of things to conceptualize precisely what a particular value of the inequality measure A*(x) 'really means' in terms of categories of inequality that we may be familiar with at a more 'primitive' level. The notion of a 'dichotomously allocated equivalent distribution' (or daed), as we have seen, is of help in this context. Given x, the corresponding daed is the ordered 2-vector x* = (x*_1, x*_2) such that x* and x have both the same means and the same values of the (normalized) Atkinson inequality index. With some routine manipulation, and employing σ_A to designate the income share x*_1/(2µ(x)) of the poorer of the two individuals in the daed x*, one can verify that


σ_A^λ + (1 − σ_A)^λ = 2^{1−λ} [1 − A*(x)(1 − 2^{(λ−1)/λ})]^λ.   (12)

Using (12), we can solve for σ_A, given any value of A* (though a closed-form solution expressing σ_A as a function of A* is not available). Notice from (12) that when A*(x) = 0 (no inequality), σ_A = 1/2 (equal share), and when A*(x) = 1 (perfect concentration), σ_A = 0 (the poorer person receives nothing). In general, given the value of the index A* for any n-person distribution, one can transform it into an 'equivalent' value of σ_A - the share of the poorer person in a 2-person distribution - which affords an immediate and vivid picture of the extent of inequality that A* 'signifies'. Importing the Normalization Axiom at the expense of the Replication Invariance axiom into the aggregation exercise facilitates this clear and unqualified equivalence result.
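
Since no closed form exists, σ_A has to be recovered numerically. The sketch below (our own illustration; sigma_A is a hypothetical helper) solves Eq. (12) by bisection on [0, 1/2], on which the left-hand side is strictly increasing:

```python
def sigma_A(A_star, lam, tol=1e-12):
    """Solve Eq. (12) for the poorer person's share sigma_A in the daed,
    given the normalized Atkinson value A* and lambda in (0, 1)."""
    rhs = 2 ** (1 - lam) * (1 - A_star * (1 - 2 ** ((lam - 1) / lam))) ** lam
    lo, hi = 0.0, 0.5
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mid ** lam + (1 - mid) ** lam < rhs:
            lo = mid   # left-hand side too small: move right
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(round(sigma_A(0.0, 0.5), 4))  # 0.5: no inequality, equal split
print(round(sigma_A(1.0, 0.5), 4))  # 0.0: perfect concentration
print(round(sigma_A(0.4, 0.5), 4))  # 0.1: at lambda = 0.5, A* = 0.4 maps to a 10 per cent share
```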

6 Concluding Observations

Derek Parfit (1984) has alerted us to the serious possibility that variable population situations can present a major challenge to one's moral intuition. This paper is a specific, and very simple, example of this proposition, as applied to the exercise of effecting inequality-sensitive ethical comparisons. In assessing the import of the Proposition presented in Section 4, it seems to be hard to quarrel with the Upper Pole Monotonicity Axiom. The real problem is the conflict between Weak Normalization and Replication Invariance. An attempt has been made in this note to rationalize Weak Normalization. If there is something to be said for the rationale provided, then this should cast doubt on the readiness with which Replication Invariance has been routinely accepted in much of the literature on inequality measurement. This also has implications for the aggregation problem in inequality measurement - the problem of constructing 'satisfactory' real-valued inequality indices - which must be informed by a deliberate judgment on whether to sacrifice Replication Invariance in the cause of Normalization or the other way around. The compulsion for conscious choice is precipitated by the fact that individually attractive principles of inequality comparison may not always be collectively coherent.

Acknowledgements This paper is based on an earlier, shorter version that appeared under the title 'Inequality Comparisons Across Variable Populations' in Contemporary Issues and Ideas in Social Sciences (2006; Volume 2, Issue 2). The author is grateful for discussions with Satya Chakravarty, Rafael de Hoyos, Mark McGillivray, and Tony Shorrocks; and for valuable direction from Satish Jain and Sanjay Reddy. The usual caveat applies.


References

1. Anand, S. (1983). Inequality and Poverty in Malaysia: Measurement and Decomposition. New York: Oxford University Press
2. Atkinson, A.B. (1970). On the Measurement of Inequality, Journal of Economic Theory, 2: 244-263
3. Chakravarty, S., S.R. Kanbur and D. Mukherjee (2006). Population Growth and Poverty Measurement, Social Choice and Welfare, 26(3): 471-483
4. Parfit, D. (1984). Reasons and Persons. Oxford: Clarendon Press
5. Sen, A.K. (1973). On Economic Inequality. Oxford: Clarendon Press
6. Shorrocks, A.F. (1988). Aggregation Issues in Inequality Measurement, in W. Eichhorn (ed.) Measurement in Economics: Theory and Applications in Economic Indices, Heidelberg: Physica-Verlag
7. Shorrocks, A.F. (2005). Inequality Values and Unequal Shares, UNU-WIDER, Helsinki. Available at http://www.wider.unu.edu/conference/conference-2005-5/conference2005.5.htm
8. Subramanian, S. (1995). Toward a Simple Interpretation of the Atkinson Class of Inequality Indices, MIDS Working Paper No. 133, Madras Institute of Development Studies: Chennai (mimeo)
9. Subramanian, S. (2002). An Elementary Interpretation of the Gini Inequality Index, Theory and Decision, 52(4): 375-379

A Model of Income Distribution

Satya R. Chakravarty and Subir Ghosh

Satya R. Chakravarty, Economic Research Unit, Indian Statistical Institute, 203 B.T. Road, Kolkata, India. e-mail: satya@isical.ac.in
Subir Ghosh, Department of Statistics, University of California, Riverside, USA. e-mail: subir.ghosh@ucr.edu

Abstract This paper determines the distribution of income that maximizes aggregate saving when the economy meets the restrictions that the mean income and the level of social welfare are given. Presuming that aggregate demand in the economy consists of the sectoral demand components, consumption and investment, the distribution determined is the one, of a given total, that maximizes the funds that can be generated for investment without any loss of welfare. The saving function is assumed to be of the Keynesian type: the marginal propensity to save is less than unity and the average propensity to save is increasing with income. The social welfare function we employ here is the single-parameter Gini social welfare function introduced by Donaldson and Weymark (1983). If social welfare is assumed to be measured by the Gini social welfare function, then for a simple saving function, the resulting distribution turns out to be the Pareto. We also present an alternative unrestricted maximization of the aggregate saving function and look for the underlying income density function. We finally demonstrate that the Pareto income distribution is completely identifiable for the prespecified levels of welfare and the mean of the income distribution.

1 Introduction

Many authors have attempted to explain the generation of income and its distribution by stochastic processes. The approach adopted by Champernowne (1953) is an application of the Markov process. According to this model, there exist probability distributions for individual incomes in the current period, given incomes in the previous period. Income is measured in intervals and the model specifies a set of transition probabilities, each showing the probability that an income in some interval in the current period will be in a different interval in the next period. Given these transition probabilities, under certain fairly general assumptions, the income distribution will converge to a unique equilibrium distribution independent of any initial distribution. In different models of this type, the equilibrium distributions have been shown to be some variants of the Pareto distribution (see, for example, Wold and Whittle, 1957). However, the stochastic nature of a model of this type is subject to much criticism because of the lack of economic content in it. It is alleged that in such a model the element of chance has taken the place of economic theory. In the words of Lydall (1968, p. 21), "[...] too much reliance is placed on the laws of chance and too little on specific factors which are known to influence the distribution". However, some authors, including Chipman (1974), believe that economic factors have no significant influence on the distribution of income (see also Shorrocks, 1975).

In this paper we consider an economic approach to derive the size distribution of income. We begin by assuming that the aggregate demand in an economy consists of two sectoral demand components: consumer demand and investment demand. The investment is equal to the funds that can be raised through saving. We then look for the distribution of income that maximizes aggregate saving when the economy meets the following restrictions: (i) the mean income and (ii) social welfare are given a priori. The latter restriction is imposed to have information on distributional equity. Thus, our purpose is to ascertain the distribution of income of a given total, on a specific indifference surface, that will maximize total saving. Here (forced) saving can be viewed as an instrument available in the hands of a social planner to effect a more equitable distribution of income.

The social welfare function we employ in this paper is the Donaldson and Weymark (1983) single-parameter Gini (S-Gini) social welfare function, which contains the Gini welfare function as a special case. The saving function is assumed to be of the Keynesian type: the (positive) marginal propensity to save is less than unity and the average propensity to save is increasing with income. To illustrate the general formula we assume that the marginal propensity to save as a function of income is of the power function type. Given that social welfare is measured by the Gini social welfare function, we demonstrate that the income distribution which maximizes the aggregate saving has the Pareto density function. Thus we have a model with economic assumptions, instead of only assumptions of a stochastic nature, that leads to the Pareto law of income. It may be mentioned that economic factors, in alternative formulations, have already been incorporated successfully to explain the distribution of income (see, for example, Becker and Tomes, 1979; Loury, 1981; Eckstein, Eichenbaum and Peled, 1985; Yoshino, 1993 and Jerison, 1994). Therefore the emergence of the Pareto law as the saving maximizing distribution of income can be regarded as a contribution to this literature.
We then demonstrate that the aggregate saving function attains its maximum when the income density function is proportional to the second derivative of the saving function. For the Pareto income distribution, we obtain a power function type saving function. Emergence of the Pareto income distribution as the saving maximizing income distribution in an alternative framework is also examined. We observe that the parameters of the Pareto income distribution are completely determined by the prespecified levels of welfare and the mean of the income distribution.

2 The Aggregate Saving Maximizing Income Distribution

Let F be the cumulative distribution function on R¹₊\[0, m], where m ≥ 0 is the threshold income and R¹ is the real line. Then F(x) gives the cumulative proportion of persons with income less than or equal to x, F(m) = 0, F(∞) = 1, and F is increasing. Assume that F is continuously differentiable. Then the Donaldson-Weymark (1983) S-Gini social welfare function E_δ(F) is given by

E_δ(F) = ∫_m^∞ δx(1 − F(x))^{δ−1} f(x) dx,

where f(x) = F′(x) is the income density function¹. The higher the value of the parameter δ > 1, the higher is the concern for the welfare of the poor. The S-Gini inequality index I_δ(F) = 1 − E_δ(F)/ν and E_δ(F) become respectively the Gini inequality index and the Gini welfare function for δ = 2, where ν is the mean income (see Chakravarty, 2009).

¹ Throughout the paper the first and second derivatives of any twice differentiable function p : R¹ → R¹ will be denoted by p′ and p″, respectively.

In the simple model of income determination, saving is defined as the difference between income and consumption expenditures: s(x) = x − c(x), where c(x) stands for consumption expenditures. This definition of saving relies on the Keynesian argument that consumption generally has the first claim on a person's income, and saving is simply the residual that materializes between income and consumption. According to Keynes, the marginal propensity to consume c′(x) ∈ [0, 1] and the average propensity to consume c(x)/x is a decreasing function of x. Since any change in income must be associated with changes in consumption and saving, this in turn implies that the marginal propensity to save s′(x) satisfies 0 ≤ s′(x) ≤ 1 and the average propensity to save s(x)/x is an increasing function of x. In the simple Keynesian model, what individuals choose not to spend on consumption goods is equal to the level of investment expenditures desired by the business sector. The aggregate saving can now be written as

S(F) = ∫_m^∞ s(x) f(x) dx.   (1)

Denoting the prespecified level of welfare by µ, the restrictions imposed by the economy can be written as

∫_m^∞ x f(x) dx = ν,   (2)

and

E_δ(F) = ∫_m^∞ δx(1 − F(x))^{δ−1} f(x) dx = µ,   (3)

where ν is assumed to be given.

Definition 1. An income distribution will be called the Aggregate Saving Maximizing (ASM) income distribution if it maximizes the objective function (1) subject to the constraints in (2) and (3).

Theorem 1. Assume that the saving function s is continuously twice differentiable. Then the density function of the ASM income distribution is given by

f(x) = aδ^{−1/(δ−1)} s″(x) [a(b − s′(x))]^{(2−δ)/(δ−1)},   (4)

where s″(x) < 0 is a necessary condition for the requirement that f(x) > 0 for every x ∈ [m, ∞), and a < 0 and b are constants such that ∫_m^∞ f(x) dx = 1.

Proof: We use the Euler-Lagrange technique to prove the theorem. Let

L(F) = ∫_m^∞ s(x) f(x) dx − λ₁ [∫_m^∞ x f(x) dx − ν] − λ₂ [∫_m^∞ δx(1 − F(x))^{δ−1} f(x) dx − µ],   (5)

where λ₁ and λ₂ are Lagrange multipliers. The L(F) in (5) can be rewritten as

L(F) = ∫_m^∞ s(x) f(x) dx − λ₁ [∫_m^∞ x f(x) dx − ν] − λ₂ [∫_m^∞ δx (∫_x^∞ f(z) dz)^{δ−1} f(x) dx − µ].   (6)

Let h : R¹₊\[0, m] → R¹ be any arbitrary continuous function such that ∫_m^∞ h(x) dx = 0. For any arbitrary real ε, we denote L(F + εh) by g(ε). If L(F) attains the maximum for some f, then g(ε) attains the maximum when ε = 0. Now

g(ε) = ∫_m^∞ s(x)(f(x) + εh(x)) dx − λ₁ [∫_m^∞ x(f(x) + εh(x)) dx − ν] − λ₂ [∫_m^∞ δx (∫_x^∞ (f(z) + εh(z)) dz)^{δ−1} (f(x) + εh(x)) dx − µ].


Then

g′(ε) = ∫_m^∞ s(x)h(x) dx − λ₁ ∫_m^∞ xh(x) dx − λ₂δ(δ − 1) ∫_m^∞ x (∫_x^∞ (f(z) + εh(z)) dz)^{δ−2} (∫_x^∞ h(z) dz)(f(x) + εh(x)) dx − λ₂δ ∫_m^∞ x (∫_x^∞ (f(z) + εh(z)) dz)^{δ−1} h(x) dx.

Since g′(0) = 0, we have

∫_m^∞ s(x)h(x) dx − λ₁ ∫_m^∞ xh(x) dx − λ₂δ(δ − 1) ∫_m^∞ x (∫_x^∞ f(z) dz)^{δ−2} (∫_x^∞ h(z) dz) f(x) dx − λ₂δ ∫_m^∞ x (∫_x^∞ f(z) dz)^{δ−1} h(x) dx = 0.   (7)

We rewrite (7) as

∫_m^∞ s(x)h(x) dx − λ₁ ∫_m^∞ xh(x) dx − λ₂δ(δ − 1) ∫_m^∞ x(1 − F(x))^{δ−2} (∫_x^∞ h(z) dz) f(x) dx − λ₂δ ∫_m^∞ x(1 − F(x))^{δ−1} h(x) dx = 0.   (8)

Changing the order of integration in (8), we get

∫_m^∞ s(x)h(x) dx − λ₁ ∫_m^∞ xh(x) dx − λ₂δ(δ − 1) ∫_m^∞ (∫_m^x z(1 − F(z))^{δ−2} f(z) dz) h(x) dx − λ₂δ ∫_m^∞ x(1 − F(x))^{δ−1} h(x) dx = 0.   (9)

Eq. (9), on rearrangement, gives

∫_m^∞ [s(x) − λ₁x − λ₂δ(δ − 1) ∫_m^x z(1 − F(z))^{δ−2} f(z) dz − λ₂δ x(1 − F(x))^{δ−1}] h(x) dx = 0.   (10)

Since (10) holds for all continuous functions h : R¹₊\[0, m] → R¹ such that ∫_m^∞ h(x) dx = 0, we have, from a Lemma of Courant and Hilbert (1953, page 201),

s(x) − λ₁x − λ₂δ(δ − 1) ∫_m^x z(1 − F(z))^{δ−2} f(z) dz − λ₂δ x(1 − F(x))^{δ−1} = λ₃,   (11)


for all x ≥ m, where λ₃ is a constant not dependent on x. We now differentiate both sides of (11) with respect to x to get

s′(x) − λ₁ − λ₂δ(δ − 1)x(1 − F(x))^{δ−2} f(x) + λ₂δ(δ − 1)x(1 − F(x))^{δ−2} f(x) − λ₂δ(1 − F(x))^{δ−1} = 0.

Thus we have

s′(x) − λ₁ − λ₂δ(1 − F(x))^{δ−1} = 0.   (12)

Denoting −1/λ₂ by a and λ₁ by b, we rewrite (12) as

(1 − F(x))^{δ−1} = δ^{−1} a(b − s′(x)).   (13)

From (13) we have

f(x) = aδ^{−1/(δ−1)} s″(x) [a(b − s′(x))]^{(2−δ)/(δ−1)}.   (14)

Since f is a density function, the constants a and b must satisfy the condition ∫_m^∞ f(x) dx = 1. To ensure that S(F) attains the maximum for f given by (14), we must verify the Legendre condition g″(0) < 0. We have

g″(ε) = −λ₂δ(δ − 1)(δ − 2) ∫_m^∞ x (∫_x^∞ (f(z) + εh(z)) dz)^{δ−3} (∫_x^∞ h(z) dz)² (f(x) + εh(x)) dx − 2λ₂δ(δ − 1) ∫_m^∞ x (∫_x^∞ (f(z) + εh(z)) dz)^{δ−2} (∫_x^∞ h(z) dz) h(x) dx.

Then

g″(0) = −λ₂δ(δ − 1)(δ − 2) ∫_m^∞ x(F₁(x))^{δ−3} H₁²(x) f(x) dx − 2λ₂δ(δ − 1) ∫_m^∞ x(F₁(x))^{δ−2} H₁(x) h(x) dx,   (15)



where F₁(x) = ∫_x^∞ f(z) dz and H₁(x) = ∫_x^∞ h(z) dz. Since

d/dx ((F₁(x))^{δ−2} H₁²(x)) = −[(δ − 2)(F₁(x))^{δ−3} H₁²(x) f(x) + 2(F₁(x))^{δ−2} H₁(x) h(x)]

and −1/λ₂ = a, we express g″(0) in (15) as

g″(0) = −a^{−1}δ(δ − 1) ∫_m^∞ x d((F₁(x))^{δ−2} H₁²(x)).   (16)


For any arbitrary T ≥ m, we obtain by using integration by parts

∫_m^T x d((F₁(x))^{δ−2} H₁²(x)) = [x(F₁(x))^{δ−2} H₁²(x)]_m^T − ∫_m^T (F₁(x))^{δ−2} H₁²(x) dx
= T(F₁(T))^{δ−2} H₁²(T) − m(F₁(m))^{δ−2} H₁²(m) − ∫_m^T (F₁(x))^{δ−2} H₁²(x) dx
= T(F₁(T))^{δ−2} H₁²(T) − ∫_m^T (F₁(x))^{δ−2} H₁²(x) dx,   (17)

because H₁(m) = 0. Since F₁(∞) = H₁(∞) = 0, we get when T → ∞

g″(0) = a^{−1}δ(δ − 1) ∫_m^∞ (F₁(x))^{δ−2} H₁²(x) dx.   (18)

We know that (F₁(x))^{δ−2} H₁²(x) is positive within the range of the integral. Since δ > 1 is also given, g″(0) is negative if and only if a < 0. That is, negativity of a becomes a necessary and sufficient condition for aggregate saving to fulfill the second order condition to achieve a maximum.

From (13), we have a(b − s′(x)) > 0 because 1 − F(x) > 0 for all x ∈ [m, ∞). This in turn shows that [a(b − s′(x))]^{(2−δ)/(δ−1)} > 0 for all x ∈ [m, ∞). Since a < 0 and δ > 1 are constants, f(x) > 0 for every x ∈ [m, ∞) holds only when s″(x) < 0 for all x ∈ [m, ∞). This shows that strict concavity of the saving function turns out to be a necessary condition for positivity of the ASM income density function. This completes the proof of the theorem.

Theorem 1 states that, given the restriction on mean income together with the restriction that social welfare is given a priori, if we want to maintain the suppositions, which are also observed empirically, that the average propensity to save is increasing with income and that the (positive) marginal propensity to save is less than unity, then the general formula (4) corresponds to the distribution of income that maximizes aggregate saving². That is, given a constant level of social welfare, the density function in (4) represents the distribution of income of a given total that will maximize the funds which can be generated for investment (through saving).

To illustrate the general formula in (4) we now assume that the saving function is of the form

s(x) = α + βx + (1 − β)m^r x^{1−r}/(1 − r),  x ≥ m > 0,   (19)

where α, α ≤ 0, and β, 1/2 < β < 1, are constants. Clearly, s(x) in (19) is strictly concave, s′(x) ∈ [0, 1] and s(x)/x is increasing over the domain of s. If the saving function is given by (19), then the income density function in (4) for δ = 2 becomes

f(x) = −(a(1 − β)/2) r m^r x^{−r−1},  x ≥ m > 0.   (20)

² We may note here that strict concavity of a function does not necessarily imply that it cannot have an increasing average. For example, the function y(1 − e^{−y}), where y ≥ 2 is a scalar, is strictly concave and has an increasing average.

The condition ∫_m^∞ f(x) dx = 1 implies that −a(1 − β)/2 = 1, and consequently from (20),

f(x) = r m^r x^{−r−1},  x ≥ m > 0,   (21)

which is the Pareto density function. The parameter r here becomes the Pareto inequality parameter. The restriction r > 1 ensures that the distribution has a finite mean. In fact, observed values of r fluctuate around the critical value r = 2. We thus have

Corollary 1. Suppose that an economy meets the following restrictions: (i) the mean income and (ii) social welfare are given, where it is assumed that welfare is measured by the Gini social welfare function. Then the income distribution that maximizes aggregate saving, where the saving function is of the form given by (19), has the Pareto density function.

We now carry out a comparative static analysis of the particular formula in (21). The aggregate amount of saving in this case for r > 1 is

S(α, β, m, r) = ∫_m^∞ [α + βx + (1 − β)m^r x^{1−r}/(1 − r)] r m^r x^{−r−1} dx = α + m r(2βr − 1)/((r − 1)(2r − 1)).   (22)

Clearly, for given values of α, β, and m, an increase in r is associated with a smaller level of aggregate saving S. This property seems intuitively reasonable, since for a fixed x > m, the marginal propensity to save s′(x) is a decreasing function of r. Consequently, with a given threshold income m, out of every increment in income a smaller amount of funds will be saved if the saving function has a higher value of r. Aggregating the funds saved at all income levels by individual saving schedules shows that the saving function with a smaller r will generate a higher total. Now, the decreasingness of the power function type marginal propensity to save s′(x) = β + (1 − β)m^r x^{−r} with r is equivalent to the statement that an increase in r makes the saving function more concave. Consequently, there must be a tradeoff between the size of the maximal aggregate saving and the concavity of the saving schedule.
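
As a numerical check on (21) and (22), the following Python sketch (our own illustration; the parameter values, and the choice r > 2 so that the Monte Carlo average is well behaved, are illustrative assumptions) compares the closed form with a simulation under the Pareto density, and displays the fall in S as r rises:

```python
import numpy as np

def s(x, alpha, beta, m, r):
    """Saving function of Eq. (19)."""
    return alpha + beta * x + (1 - beta) * m**r * x**(1 - r) / (1 - r)

def S_closed(alpha, beta, m, r):
    """Aggregate saving, Eq. (22)."""
    return alpha + m * r * (2 * beta * r - 1) / ((r - 1) * (2 * r - 1))

def S_monte_carlo(alpha, beta, m, r, n=10**6, seed=0):
    """E[s(X)] under the Pareto density (21), sampling X = m * U**(-1/r)."""
    u = np.random.default_rng(seed).random(n)
    return s(m * u ** (-1.0 / r), alpha, beta, m, r).mean()

for r in (2.2, 3.0, 4.0):  # aggregate saving falls as r rises
    print(r, round(S_closed(-1.0, 0.75, 1.0, r), 4),
          round(S_monte_carlo(-1.0, 0.75, 1.0, r), 4))
```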

3 An Alternative Formulation of Aggregate Saving Maximization and the Pareto Income Distribution

We now consider an alternative unrestricted maximization of the aggregate saving function S(F) defined in (1). We observe that S(F) attains its maximum when the income density function f(x) is proportional to the second derivative of the saving function s(x), that is, the density function f(x) in (4) for δ = 2.

Theorem 2. For S(F) in (1) and a < 0, we have




(−a)S(F) ≤ (−a)s(∞) + as′(m) ∫_m^∞ F(x) dx + (1/2) ∫_m^∞ (F(x))² dx + (1/2) ∫_m^∞ (a(s′(x) − s′(m)))² dx.   (23)

The equality holds when F(x) = a[s′(x) − s′(m)].

Proof: The proof follows from the facts that F(m) = 0, F(∞) = 1, F′(x) = f(x) and Properties 1-12 given in the Appendix.

It follows from Theorem 2 that the maximum value of (−a)S(F) is

(−a)S(F) = (−a)s(m) + (−a)s′(m) ∫_m^∞ (1 − F(x)) dx − ∫_m^∞ F(x)(1 − F(x)) dx.   (24)

Theorem 3. If f(x) = as″(x), x ≥ m > 0, a < 0, then
(i) F(x) = a[s′(x) − s′(m)],
(ii) s″(x) ≤ 0 and hence s(x) is concave,
(iii) 1/a + s′(m) ≤ s′(x) ≤ s′(m), for all x ≥ m > 0, a < 0,
(iv) s(x) = s(m) + s′(m)(x − m) + (1/a) ∫_m^x F(u) du,
(v) s′(x) is a decreasing function of x.

Proof: The proof of (i) follows from the fact that F(x) = ∫_m^x f(u) du. The proof of s″(x) ≤ 0 follows from the property that f(x) ≥ 0 and a < 0. The condition s″(x) ≤ 0 implies that s(x) is concave. The proof of (iii) follows from 0 ≤ F(x) ≤ 1. The proof of (iv) can be seen from

(1/a) ∫_m^x F(u) du = ∫_m^x [s′(u) − s′(m)] du = s(x) − s(m) − (x − m)s′(m).

The property that F(x) is increasing in x implies (v). This completes the proof.

We note that f(x) = as″(x) if and only if F(x) = a[s′(x) − s′(m)]. When f(x) is the Pareto density function, we obtain below the saving function s(x).

Theorem 4. If f(x) = as″(x) = r m^r x^{−r−1}, x ≥ m > 0, r > 0, then

(i) F(x) = 1 − (m/x)^r = a[s′(x) − s′(m)],   (25)

(ii) s(x) = s(m) + s′(m)(x − m) + (1/a)[(x − m) + (m/(1 − r))(1 − (x/m)^{1−r})]
= s(m) − ms′(m) + rm/(a(1 − r)) + x(s′(m) + 1/a) − m^r x^{1−r}/(a(1 − r)).   (26)

Proof: The proof of (i) is easy. The proof of (ii) follows from part (iv) of Theorem 3 by using the result below:

(1/a) ∫_m^x F(u) du = (1/a) ∫_m^x [1 − (m/u)^r] du = (1/a)[x − m − (m^r/(1 − r))(x^{1−r} − m^{1−r})]
= (1/a)[(x − m) + (m/(1 − r))(1 − (x/m)^{1−r})] = (1/a)[(x − m) + m/(1 − r) − m^r x^{1−r}/(1 − r)].

This completes the proof.

For s(x) to be a saving function, we need the conditions that, in (26), 0 ≤ s′(x) ≤ 1 and s(x)/x is an increasing function of x. The saving function s(x) in Theorem 4 is of the form

s(x) = α + βx + γ m^r x^{1−r}/(1 − r),   (27)

where α = s(m) − ms′(m) + rm/(a(1 − r)), β = s′(m) + 1/a, and γ = −1/a.

Theorem 5. If F(x) = 1 − (m/x)^r = 1 − (x/m)^{−r}, x ≥ m > 0, r > 0, and f(x) = F′(x) satisfy Eqs. (2) and (3), then

r = (δν − µ)/(δ(ν − µ)),  m = (δ − 1)µν/(δν − µ).   (28)

Proof: From (2) we have ν = rm/(r − 1), and from (3) we have µ = δrm/(δr − 1). The rest can be checked. This completes the proof.

We can determine completely the parameters r and m of the Pareto distribution function from (28) for given values of δ, µ, and ν. We therefore have the striking observation that the prespecified levels of welfare and the mean of the income distribution identify the Pareto income distribution function completely.
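
Theorem 5 makes the identification fully constructive. A small sketch (with an illustrative choice of δ, µ and ν; pareto_params is our own helper name) inverts (28) and confirms the mean and welfare restrictions (2) and (3):

```python
def pareto_params(delta, mu, nu):
    """Invert Eq. (28): recover the Pareto parameters (r, m) from the S-Gini
    parameter delta, the prespecified welfare level mu and the mean income nu."""
    r = (delta * nu - mu) / (delta * (nu - mu))
    m = (delta - 1) * mu * nu / (delta * nu - mu)
    return r, m

delta, mu, nu = 2.0, 1.5, 2.0           # illustrative targets (note mu < nu)
r, m = pareto_params(delta, mu, nu)
print(r, m)                             # 2.5 1.2
print(r * m / (r - 1))                  # 2.0 = nu, the mean restriction (2)
print(delta * r * m / (delta * r - 1))  # 1.5 = mu, the welfare restriction (3)
```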

4 Conclusions

The characterizations of alternative income distributions that have been developed in the literature have rarely used concepts from economic theory. Assuming that the total income and aggregate welfare are given, in this paper we determine the distribution of income that maximizes aggregate saving, where welfare is assumed to be measured by the Donaldson-Weymark (1983) S-Gini social welfare function. All the saving functions considered here are of the Keynesian type. In this particular framework we have a formalization of the Pareto law of income. We also present an alternative unrestricted maximization of the aggregate saving function and demonstrate that it attains its maximum when the income density function is proportional to the second derivative of the saving function.


References

1. Becker, G. and Tomes, N. (1979). An Equilibrium Theory of the Distribution of Income and Intergenerational Mobility. Journal of Political Economy 87, 1153-1189
2. Chakravarty, S.R. (2009). Inequality, Polarization and Poverty: Advances in Distributional Analysis. Springer, New York
3. Champernowne, D.G. (1953). A Model of Income Distribution. Economic Journal 63, 318-351
4. Chipman, J.S. (1974). The Welfare Ranking of Pareto Distributions. Journal of Economic Theory 9, 275-282
5. Courant, R. and Hilbert, D. (1953). Methods of Mathematical Physics, Vol. 1. Wiley InterScience, New York
6. Donaldson, D. and Weymark, J.A. (1983). Ethically Flexible Gini Indices of Inequality for Income Distributions in the Continuum. Journal of Economic Theory 29, 353-358
7. Eckstein, Z., Eichenbaum, M.S. and Peled, D. (1985). The Distribution of Wealth and Welfare in the Presence of Incomplete Annuity Markets. Quarterly Journal of Economics 99, 789-806
8. Jerison, M. (1994). Optimal Income Distribution Rules and Representative Consumers. Review of Economic Studies 61, 739-771
9. Loury, G.C. (1981). Intergenerational Transfers and the Distribution of Earnings. Econometrica 49, 843-867
10. Shorrocks, A.F. (1975). On the Stochastic Models of Size Distributions. Review of Economic Studies 42, 631-641
11. Wold, H.O.A. and Whittle, P. (1957). A Model Explaining the Pareto Distribution of Wealth. Econometrica 25, 591-595
12. Yoshino, O. (1993). Size Distribution of Workers' Household Income and Macroeconomic Activities in Japan: 1963-88. Review of Income and Wealth 39, 387-402

Appendix

Properties

1. ∫_m^∞ s(x)F′(x) dx + ∫_m^∞ F(x)s′(x) dx = [s(x)F(x)]_m^∞ = s(∞). Note that the left-hand side depends on F but the right-hand side does not depend on F.

2. 2 ∫_m^∞ F(x)as′(x) dx ≤ ∫_m^∞ (F(x))² dx + ∫_m^∞ (as′(x))² dx. The equality holds when F(x) = as′(x).

3. ∫_m^∞ s(x)F′(x) dx = ∫_m^∞ s(x) dF(x) = ∫_0^1 (sF⁻¹)(w) dw = ∫_0^1 G(w) dw = H(1) − H(0), where H′(w) = G(w) = (sF⁻¹)(w).

4. ∫_m^∞ x(1 − F(x))F′(x) dx + ∫_m^∞ F(x)[x(1 − F(x))]′ dx = [x(1 − F(x))F(x)]_m^∞ = 0.

5. ∫_m^∞ [s(x) + x(1 − F(x))]F′(x) dx + ∫_m^∞ F(x)[s(x) + x(1 − F(x))]′ dx = [(s(x) + x(1 − F(x)))F(x)]_m^∞ = s(∞).

6. ∫_m^∞ [s(x) + x(1 − F(x))]F′(x) dx + ∫_m^∞ F(x)[s(x) + x(1 − F(x))]′ dx = ∫_m^∞ s(x)F′(x) dx + ∫_m^∞ F(x)s′(x) dx = s(∞).

7. 2a ∫_m^∞ F(x)(s′(x) − s′(m)) dx = −∫_m^∞ (F(x) − a(s′(x) − s′(m)))² dx + ∫_m^∞ (F(x))² dx + ∫_m^∞ (a(s′(x) − s′(m)))² dx.

8. ∫_m^∞ s(x)f(x) dx + ∫_m^∞ F(x)s′(x) dx = [s(x)F(x)]_m^∞ = s(∞).

9. S(F) = s(∞) − ∫_m^∞ F(x)s′(x) dx, so that aS(F) = as(∞) − a ∫_m^∞ F(x)s′(x) dx.

10. −a ∫_m^∞ F(x)(s′(x) − s′(m)) dx = (1/2) ∫_m^∞ (F(x) − a(s′(x) − s′(m)))² dx − (1/2) ∫_m^∞ (F(x))² dx − (1/2) ∫_m^∞ (a(s′(x) − s′(m)))² dx.

11. (−a)S(F) = (−a)s(∞) + a ∫_m^∞ F(x)s′(x) dx = (−a)s(∞) + as′(m) ∫_m^∞ F(x) dx + a ∫_m^∞ F(x)(s′(x) − s′(m)) dx = (−a)s(∞) + as′(m) ∫_m^∞ F(x) dx − (1/2) ∫_m^∞ (F(x) − a(s′(x) − s′(m)))² dx + (1/2) ∫_m^∞ (F(x))² dx + (1/2) ∫_m^∞ (a(s′(x) − s′(m)))² dx. Thus, (−a)S(F) ≤ (−a)s(∞) + as′(m) ∫_m^∞ F(x) dx + (1/2) ∫_m^∞ (F(x))² dx + (1/2) ∫_m^∞ (a(s′(x) − s′(m)))² dx. The equality holds when F(x) = a(s′(x) − s′(m)).

12. (−a) ∫_m^∞ s′(x) dx = (−a)[s(∞) − s(m)], and hence (−a)s(∞) = (−a) ∫_m^∞ s′(x) dx + (−a)s(m).

Statistical Database of the Indian Economy: Need for New Directions

Dilip Mookherjee

Abstract Data is an important ingredient for economic analysis. This paper discusses the various sources of data in the context of India and argues for reform of the Indian statistical system. In particular, its focus is on the maintenance of panel micro data, helpful in conducting economic research.

1 Introduction

Closely following the Independence of the country in 1947, the Indian Statistical Institute and its founder, P.C. Mahalanobis, played a critical pioneering role in the development of the Indian statistical system devoted to careful measurement of India's demographics and economic performance¹. At this time the National Sample Survey (NSS) was created, and methods of sampling came into use in living standards surveys. Apart from its well-known consumption and household asset surveys, the NSS prepares periodic surveys of housing conditions; employment and unemployment; migration; health, nutrition and schooling; unorganized manufacturing; the informal sector; common property resources; energy; and the public distribution system. The Central Statistical Organization (CSO) was also reorganized in the late 1940s, and assigned a central coordinating role in developing a country-wide statistical system, apart from its role in the population censuses, Annual Surveys of Industries (ASI) and national income accounts. This enabled the newly independent country to develop a reliable nationwide statistical system for assessing changes in income, consumption, agricultural and industrial production. By the standards of most developing countries at that time, this was a tremendous achievement.

Dilip Mookherjee, Department of Economics, Boston University, 270 Bay State Road, Boston MA 02215, USA. e-mail: dilipm@bu.edu

¹ See Rudra (1996, Chapter 10) for a detailed account.


The system created in the late 1940s continues to form the backbone of the current database of the Indian economy, more than half a century later. In this article, I argue the need for a fresh reassessment of the system, with a view to proposing necessary changes and amendments required to consolidate it in line with the needs of development in the 21st century. Such a reassessment is urgently required, for at least two important reasons. First, as Mahalanobis repeatedly emphasized, statistics is a tool for economic development. The Indian economy, as well as the discipline of development economics, has changed considerably over the past half-century. There are urgent new areas of economic policy that require better information, and many new theories and empirical research methodologies that require surveys to be designed and implemented in different ways. Second, there are problems with coordination across different sources of data, and with regard to under-utilization of already existing information. The administration of agencies such as the NSS and CSO is ultimately in the hands of a bureaucracy that reports to various wings of the government, receiving very little input from those outside the government who use these data for purposes of research and policy evaluation.

In this article I will explain what I see as the major needs for reform, from my own perspective as an academic economist. I hope this will initiate a national dialogue from various points of view regarding directions for long-term changes that are needed. In the next section I will explain some of the changes in the nature of development economics over the past half century that necessitate changes in the statistical system. In the concluding section, I will turn to problems of utilization, coordination and dissemination.

2 New Questions and Issues

Around the late 1940s and early 1950s, development economics as a discipline was in every way in its infancy. The earliest important papers on the subject date back to the early 1940s with the work of Rosenstein-Rodan and Nurkse, with important contributions by Leibenstein, Lewis, Scitovsky, Sen and others in the 1950s². Development 'planning' was the dominant paradigm among both academics and policy-makers. Mahalanobis himself played a key role by developing a model analogous to that developed by Feldman in the 1920s, which formed the basis of the strategy of industrialization focusing especially on heavy industry. Neoclassical and Keynesian economists in the West were developing new models of growth, based on theories of Harrod, Domar and Solow. In all these models, the main focus was on capital accumulation as a source of growth in living standards, occurring more or less as a result of technological relationships between capital and income or output, as represented by a production function. Other important parameters that affected growth, such as savings rates and population growth rates, were treated as constants specific to a given country. Accordingly, development planning was treated as setting targets for capital accumulation, based on estimates of the production function, assumptions regarding demographics and savings rates, and targeted growth rates. Later developments in the theory of planning and growth in the 1960s produced more sophisticated versions of this, with extensions to multisectoral Leontief input-output models, and exercises in optimizing savings and growth rates.

This view of the development process accordingly required detailed estimates of technological relationships between inputs and outputs, or the production function. This oriented the statistical system towards measuring technological relationships between inputs and outputs in agriculture and industry, as well as measures of national income and living standards that gauged the success of development policies. Today the shortcomings of this paradigm are all too apparent, for its neglect of behavioural and motivational factors of households and firms on the one hand, as well as of markets, contracts and related institutional constraints on the growth process. The economic liberalization process under way since the early 1990s in India and many other developing countries has been motivated by the need to impose macroeconomic discipline, free trade and industry from cumbersome regulations, lower and rationalize tax rates, enhance competitiveness, and integrate India more closely with the world economy. Well into the 1970s, there was relatively little focus on issues of employment and income distribution, and even less so on important dimensions of human development: health, nutrition and education. These were consigned by Mahalanobis to areas requiring ad hoc studies³. The need to ensure equitable distribution of the gains from growth has motivated numerous policy initiatives in the areas of health, education and infrastructure in the past two decades. Increasing scope for participative decision-making at grassroots levels, and decreasing centralization of decision-making, have motivated increasing devolution of economic power to local governments (panchayats) and state governments. Hence, the development paradigm has decisively shifted to embrace factors involving incentive and institutional reforms of the economy, in contrast to the single-minded focus on technology and capital accumulation a half-century ago.

These changes create the need for better understanding of key behavioural, motivational and institutional aspects of economic performance, and at a more microeconomic level. It is no longer enough to monitor the outcomes of development efforts, by measuring changes in living standards or production levels. One needs to go into understanding the way that the economy functions, why and how certain outcomes have or have not been achieved, and what this tells us about future policies needed. This requires us to formulate and test hypotheses about the decisions that millions of active economic agents take about labor supply, production, consumption, savings, fertility, education, nutrition and so on, and how these are coordinated by market forces and government programs to generate macroeconomic outcomes.

² Of course, there were some earlier important contributions in the 1920s, such as the Russian economists Chayanov, Feldman and Preobrazhensky, as well as Allyn Young's work on increasing returns.

³ See Extracts From a Note Submitted by P.C. Mahalanobis to the Government of India, 20 February 1952, Appendix C to Chapter 10 of Rudra (1996). See also Chapter 11, Professor Mahalanobis and Economics, by T.N. Srinivasan.


The academic discipline of development economics has undergone a similar change, with greater emphasis on market and contractual imperfections. It is well-recognized that various imperfections pervade key factor markets – land, labour, credit, livestock, irrigation – owing to a combination of problems of asymmetric information, enforcement, and poorly defined property rights. These help explain why productivity of agricultural farms or firms in the industrial sector is not just a technological datum to be estimated mechanically: they depend on household demographics, assets, motivations, contractual relationships, and exposure to competitive pressures. In turn many of these depend on legal and political aspects of the environment in which farms and firms function. Accordingly, there is the need to incorporate and estimate the importance of these factors, and separate their role from the purely technological factors. Modern models of agricultural or industrial production integrate information about the assets and liabilities of key actors, their contractual relationships, access to credit and insurance, sources of information, and exposure to personal and environmental shocks, in addition to more conventional sets of variables representing market prices and technology parameters. Accordingly, data covering a larger and more comprehensive set of variables is needed.

Second, the range of areas now considered central to the development process is much broader – in particular, human development, environmental sustainability, and the goals of improving governance, local participation, and empowerment of minorities and women occupy centre stage, along with capital formation in agriculture and industry. No longer is development visualized in terms of growth and distribution of per capita income alone, based on the assumption that everything else will automatically follow. The range of issues that need to be understood and evaluated is now correspondingly larger. Linkages between health, nutrition, schooling and intrahousehold allocations on the one hand, and household assets and production on the other, are sought to be studied. These necessitate integration of information about household demographics, assets, and intrahousehold allocation of labor with their schooling, health and nutrition.

Third, advances in econometric methodology have brought to the fore various statistical problems inherent in simple-minded cross-sectional regression relationships: confounding of correlations with causation, problems of identification (or endogeneity of regressors), omitted variable bias and measurement error. The last two decades have witnessed remarkable advances in methods for overcoming these problems, which require different kinds of datasets. Of particular importance is the role of longitudinal rather than cross-sectional surveys, the use of instrumental variables for overcoming problems of endogeneity and measurement error, and increasing reliance on natural experiments and randomization of policy experiments.

The existing sources of data concerning the Indian economy fall short on many of these dimensions. First, most of the surveys are cross-sectional, with no effort made to extend these to longitudinal studies of households and firms. This is akin to looking at a sequence of snapshots of different sets of people, rather than a sequence of moving images of the same set of people over time. It is not possible to trace the dynamic impact of various policy or environmental variations. Even in terms of estimating trends in inequality of incomes or consumption, cross-sectional inequality includes the effects of purely transitory shocks. Of greater importance is the extent of inequality in long run or permanent income, and the extent of intergenerational mobility with regard to incomes, education or occupations. These can be measured only with longitudinal studies. We need to understand the extent to which current poverty is transient or chronic: whether the economy is providing the children of households of low current socio-economic status with opportunities to move upwards and participate in the growth process. Nor can sources of unobserved cross-sectional variations be controlled for in the absence of longitudinal datasets. An illustration of some of the problems is in the investigation of the hypothesized inverse relationship between farm size and productivity, or of sharecropping distortions, which underlies most arguments for land reform. It has been argued that the observed cross-sectional inverse relation between size and productivity may simply be reflecting unobserved heterogeneity of quality of soils between large and small farms, rather than any causal effect of size on productivity. Or that lower yields in sharecropped farms may reflect lower quality of soils or farmer technical knowledge rather than the causal impact of sharecropping contracts. One way of checking whether this is indeed the case is to use a longitudinal panel of yields achieved by the same plots and farmers over time as the overall size of the farm or its tenancy status vary. Such forms of unobserved heterogeneity cannot be controlled for when using cross-sectional datasets.

Second, there is considerable fragmentation of information concerning different sets of variables that need to be combined in the analysis. For instance, estimation of models of peasant households needs to integrate information about the demographics and assets of these households with details of their production, contractual arrangements, and market access, as well as education and health status. A single survey needs to integrate all this information for the same set of households. It does not help if there is one survey of demographics, assets and living standards of one household sample at one point of time; another of credit market conditions for another sample in another year; and a different survey of the distribution of landholdings for yet another sample for another year. In addition, information about independent sources of variation of environmental, institutional or market parameters is needed as instruments to identify causal relationships. A wide range of village, weather and market variables need to be linked with household-level information.

Third, the evaluation of specific policy interventions is made substantially more precise if these interventions are designed to be randomly assigned to different jurisdictions, or phased in according to a randomized design. Otherwise, when assignments are determined by the discretion of bureaucrats and politicians, combined with the enterprise and efforts of NGOs and local agents, there is always a possibility of confounding the effects of these policies with unobserved underlying jurisdiction-specific factors that affect both policy choice and outcomes of interest. Other developing countries now understand the importance of designing statistical systems and policy interventions in a way that maximizes the information that can be extracted about key underlying behavioural and institutional relationships.



The PROGRESA interventions in Mexico represent a good example of this.4 This program involved provision of subsidies to poor households in Mexico, conditional on their children attending school and receiving checkups in health clinics. The interventions were phased in with randomization of the villages selected for treatment, with observationally similar villages used as controls. Data concerning a wide range of variables covering demographics, assets, labor supply, education and health were collected for a longitudinal sample of households, spanning several years. Added to this were variables representing environmental conditions, weather and local markets. The data has been used as a rich source of information about the effects of these policies on health and education, living standards, and exposure to risk, generating numerous research studies that have greatly enriched our understanding of poverty in Mexico and how it can be affected, both in the short and long term, by specific targeted policies.5

The ICRISAT data compiled for villages in Maharashtra and Andhra Pradesh for a ten-year period between the mid-1970s and mid-1980s also represented a detailed longitudinal survey of households' consumption, assets, savings, agricultural productivity and insurance, forming the basis of many important research studies that have enriched the understanding of key behavioural and institutional relationships, in a way that NSS cross-sectional studies could not have.6 For instance, ICRISAT data has been used in important studies by Shaban (1987) and Braido (2008) of the extent to which differences in productivity between sharecropped plots and other plots reflect possible differences in the skills of the farmers in question, or in soil quality, rather than the sharecropping contract per se. Shaban finds these differences are not explained by the former, while Braido finds that they are significantly explained by the latter. The latter paper thus casts doubt on the significance of sharecropping distortions for farm productivity. Braido's analysis reveals that observed differences in yields between sharecropped plots and others are accounted for by the tendency of landowners to lease out inferior quality plots to their tenants, while retaining better ones for self-cultivation. Hence regulation of tenancy contracts with regard to the shares accruing to tenants is unlikely to have a significant impact on farm productivity. These are the kinds of insights into the effectiveness of government policies that require comprehensive, longitudinal surveys.

Let me now turn to another shortcoming: there are many important new areas of policy concern in India that cannot be studied empirically owing to the absence of well-designed surveys. Let me mention three such areas. One of the most critical issues in current industrial growth and employment concerns the interaction between the formal and informal manufacturing sectors. The ASI provides data concerning the former, while the NSS provides surveys concerning the latter. But there are no available surveys that enable the links between the two sectors to be studied and understood. Of crucial importance is the nature of contractual arrangements between the two sectors, and comparisons of productivity and

4 See Levy (2006) for a concise and comprehensive account.
5 Levy (2006) provides a useful summary of these various studies.
6 See Walker and Ryan (1990).



employment generation. Are labour market regulations hurting the competitiveness of Indian manufacturing? This is one of the critical questions concerning industrial policy today. Some argue that the restrictions on the ability of employers to dismiss workers imposed by the Industrial Disputes Act have a serious adverse impact on investment and employment in the formal manufacturing sector. Others who disagree argue that these restrictions can be circumvented by the formal sector by subcontracting with the informal sector. To assess the overall impact of these regulations on productivity and employment generation, we need to understand these linkages across the two sectors, and how the sectors compare with regard to productivity and employment generation.

Second, the actual devolution of economic and political powers to panchayats varies considerably across Indian states, as the implementation of the provisions of the 73rd and 74th Constitutional amendments has been left to the corresponding state governments.7 While the progress of these initiatives in certain states has been studied by some researchers, an all-India comparative perspective is required on the extent to which they have been implemented de facto, and on their impact on the delivery of key public services and development programs. No surveys are currently available for this purpose.

Third, the future of India's growth trajectory relies quite intrinsically on its success in the knowledge sector, which depends in turn on the higher education sector. Since the 1980s this sector seems to have been considerably strained by lack of finances, lack of quality manpower, and lack of merit-oriented considerations in the selection and appointment of teachers and students.8 Whether growth based on the success of the knowledge sector spreads widely or is restricted to a few elite groups depends on access to higher education across different socio-economic groups. To evaluate these questions, one needs data concerning the access to and value of higher education for diverse groups in the country, in a form that is not currently available from any single source.

3 Problems of Coordination, Utilization and Dissemination

I now turn to problems concerning the administration of existing statistical systems. In many surveys, important information that is actually collected is not disclosed. A leading example is the ASI data made available by the CSO, on the basis of a census of manufacturing establishments above a certain size. The data is released in the form of annual cross-sections, though it could also be released in the form of a longitudinal data set at the level of plants or firms above the minimum size. There are ways of releasing the data that conceal the identity of firms while still allowing researchers to link the data for the same firm across different years.

Another important instance of this concerns the level of aggregation of released data. Landholding or cost of cultivation surveys are conducted at the farm level, but

7 See Chaudhuri (2006).
8 See Kapur and Mehta (2007).



these are aggregated up to the district level, and thereafter at the state level. The state Agricultural Censuses report these at the district and state levels; those interested in using them for research purposes, however, need the data at a far more disaggregated level: individual villages and farms.

Some sources of data collected by various regulatory divisions of state and central governments are not released at all for purposes of research. An example is the extremely detailed firm-level data concerning economic, financial and technological variables for all major industries, collected by the Bureau of Industrial Costs and Prices (BICP) for the purpose of estimating the costs of major industrial products. This data could form the basis of an incomparable longitudinal firm-level panel of most major industries, if systematically compiled, documented and stored. Yet after the main bureaucratic purposes have been served and the data has been aggregated up and reported to the concerned ministry officials, it gathers dust in BICP offices. I have made numerous efforts spanning several years with various BICP senior officials to have this data computerized and used for research purposes, but have never succeeded. It is not clear whether the Right to Information Act would make any difference to this, as the systematic storage and codification of all this data would require a major administrative effort on the part of the government. It is beyond the capacity of any single external researcher to do this, even if they had the right to access the information available to the BICP.

Finally, there have been shortcomings in the process of dissemination of data, though there have been significant advances in this respect in recent years in the context of NSS reports and survey data, which are now either downloadable from the World Wide Web or available for a nominal fee to all researchers. This should become the norm for all data collected by government agencies pertaining to the Indian economy.

In conclusion, I would argue that the solid foundations of the Indian statistical system initiated by P.C. Mahalanobis half a century ago require further consolidation, integration and modernization, to meet the needs of contemporary academic research and policy. This would not require a radical overhaul of the NSS or the CSO; instead they need to be provided fresh directions for further development. The advances in information technology now make it feasible to expand the scale and scope of stored data, and the methods of disseminating them. Given the crucial importance of better information concerning economic policy, it would be a very worthwhile venture for the Indian government to consider seriously.

The Indian Statistical Institute played a central role in assisting the development of the statistical system of the Indian economy half a century ago. On the occasion of its Platinum Jubilee, I hope it will be able to do so again by providing useful feedback and advice to the Indian government on how to update and reinvigorate the system. There is no better way to celebrate its legacy or that of its founder.



References

1. Braido, L. (2008), 'Evidence on the Incentive Properties of Share Contracts,' Journal of Law and Economics, 51(2), 327-349
2. Chaudhuri, S. (2006), 'What Difference Does a Constitutional Amendment Make? The 1994 Panchayati Raj Act and the Attempt to Revitalize Rural Local Government in India,' in P. Bardhan and D. Mookherjee (eds.), Decentralization and Local Governance in Developing Countries: A Comparative Perspective, MIT Press, Cambridge, Massachusetts
3. Kapur, D. and Mehta, P. (2007), 'Indian Higher Education Reform: From Half-Baked Socialism to Half-Baked Capitalism,' India Policy Forum 2007, National Council of Applied Economic Research, New Delhi and Brookings Institution, Washington D.C.
4. Levy, S. (2006), Progress Against Poverty, Brookings Institution, Washington D.C.
5. Rudra, A. (1996), Prasanta Chandra Mahalanobis: A Biography, Oxford University Press, New Delhi
6. Shaban, R. (1987), 'Testing Between Competing Models of Sharecropping,' Journal of Political Economy, 95(5), 893-920
7. Walker, T.S. and Ryan, J.G. (1990), Village and Household Economies in India's Semi-Arid Tropics, Johns Hopkins University Press, Baltimore and London

Does Parental Education Protect Child Health? Some Evidence from Rural Udaipur

Sudeshna Maitra

Department of Economics, York University, 1038 Vari Hall, 4700 Keele Street, Toronto, Canada. e-mail: [email protected]

Abstract The role of parental education in influencing child health outcomes has received much attention in the development literature. In this paper, I ask if parental education is protective of child health, as measured by seven different health outcomes, in a recent survey conducted in rural Udaipur. This study differs from most previous research in that it offers insight on the impact of parental education on the health of older children (aged 0-13) instead of infants alone, and in that it explores the relationship for multiple, diverse measures of child health instead of only one or two. I show that the overall effect of parental education on child health is weak and that this finding could, in part, be driven by a failure of better parental health behaviors to lead to better child health outcomes, even though parental education is strongly associated with these better behaviors.

1 Introduction

The relationship between parental education and child health has evoked substantial interest in the economic development literature (Currie (2008), Maitra et al (2006), Thomas, Strauss and Henrique (1991), Chou et al (2007), Bicego and Boerma (1993), Desai and Alva (1998), Caldwell (1979)). Child health is important for its own sake; it also has significant implications for both market and non-market outcomes such as adult labour productivity, adult health and learning. Moreover, a recent strand of literature argues that the origins of the much-documented positive association (the gradient) between adult health and Socioeconomic Status (SES) may lie in childhood health differences by SES (Case, Lubotsky and Paxson (2002), Case, Fertig and Paxson (2005)). Children from low-SES - viz. poorer or less educated - households are often found to suffer from worse health, which not only




persists in adulthood but also impedes their educational attainment and income-earning capabilities as adults. The adverse health effect could, therefore, perpetuate inter-generationally as children of future generations are affected in turn by their low-SES environments. Understanding the relationship between parental education and child health is, therefore, key to understanding the long-run intergenerational impact of schooling on health outcomes, and is crucial for framing appropriate policy for achieving improvements in such outcomes.

In this paper, I ask the fundamental question: does parental education have a protective effect on child health? I attempt to answer this question by focusing on the simple association between parental education and various child health outcomes in a cross-sectional survey conducted recently in rural Udaipur. I show that the overall effect of parental education on child health is weak and that this effect could, in part, be driven by a failure of better health behaviors to lead to better child health outcomes, even though parental education is strongly associated with these better behaviors.

Several existing studies have attempted to address the issue of whether parental education protects child health (see Currie (2008) for a review; see also Maitra et al (2006)). However, most analyses focus on one or two aspects of health alone, which makes it hard to ascertain the generality of the results - an issue which, given the inherent multi-dimensionality of health, plagues most of the literature in the area. Also, studies on developing countries have largely tended to address infant health measures such as infant mortality, birth weight or height for age. The contribution of the current study is that it offers insight on the impact of parental education on the health of older children (aged 0-13) and also explores the relationship for multiple measures of child health, both subjective (e.g. adult-reported overall child health status on a scale of 1 to 10) and objective (e.g. interviewer-measured peak flow meter readings and hemo cue readings). An analysis of the impact of parental education on child survival (till ages one and five) is also made possible by the availability of detailed survey information on each birth experienced by adult women.

The main finding of this study is an overall weak impact of parental education on various measures of child health, with the effects often of the wrong sign.1 Parental education has some (jointly significant) protective effect on only two out of seven child health outcomes, viz. peak flow meter readings and the time taken by the child to squat and stand up five times. Maternal education is significantly associated with an improvement in the first of these measures while paternal education improves the second measure.

What could be driving the weak impact of parental education on child health obtained herein? One explanation could be that parental education is of poor quality, thereby preventing the desirable health behaviors that education is expected to

1 I test if this weak relationship is driven by survival bias, viz. that frailer children are more likely to survive when parental education is higher, thereby weakening the association between parental education and child health for surviving children. I show that survival bias is not likely to be important since, conditional upon birth, parental education is not significantly correlated with child survival up to age 1 or age 5.



foster, and which lead to better child health. However, I find that desirable health behaviors - such as immunization practices and the choice of healthcare at the time of child delivery - are indeed positively and significantly correlated with higher parental (especially paternal) education. Another explanation could be the absence of complementary inputs in the health generation process - such as adequate and effective health infrastructure - which prevent the translation of good health behaviors into good health. Indeed, I find the association between desirable parental health behaviors and child health outcomes to be generally weak and often of the wrong sign. In fact, the only two outcomes for which health behaviors are found to be (jointly) significantly associated with child health are the ones for which parental education is found to have some protective effect, viz. peak flow meter reading and time taken to squat and stand five times. The overall weak protective effect of parental education on child health can therefore be traced, at least in part, to a breakdown of the channel by which better health behaviors are translated into better child health outcomes.2

It is plausible that poor health infrastructure is responsible for the breakdown. This hypothesis would be consistent with Banerjee, Deaton and Duflo's (2004a, b) finding of high absenteeism and poor quality of service in healthcare facilities in the survey region. If true, the hypothesis would also indicate that education policy could well be inefficacious in promoting child health if unaccompanied by appropriate health policy. Statistical issues such as self-selection - for instance, if sicker children are more likely to have completed immunization routines - could also be responsible for the weak link obtained between health inputs and child health. Limitations of the current dataset prevent a conclusive identification of the factors that could be responsible for the breakdown in the relationship between health behaviors and child health generation. However, the findings obtained herein point to the need for future research to further explore this channel and identify its source. The results of such research will have crucial implications for policy.

The rest of the paper is organized as follows. The data is described in Section 2. Section 3 investigates if parental education has a protective impact on different measures of child health. Section 4 examines the role of health behaviors in explaining the relationship obtained in Section 3. Section 5 summarizes and concludes the paper.


2 There are multiple alternative channels that could potentially drive the association between parental education and child health. For example, the relationship could be driven by parental income, if higher parental education leads to higher income which can be used to buy better child health. Alternatively, there could be third factors that explain the relationship, such as inherent parental health (which is correlated with the health of their offspring and which also determines how much education parents managed to attain in the first place) or family wealth (correlated with higher education of parents and also with better health of their children). It is beyond the scope of the current study to disentangle the effects of such channels, due to the unavailability of appropriate instruments and the cross-sectional nature of the data.



2 Data

The data were collected in 2002-03 from 100 hamlets in Udaipur district in Rajasthan, India. The sample is stratified by access to road, and hamlets within each stratum are selected randomly, with the probability of selection being proportional to the population of the hamlet. The survey has multiple components - a facility survey describing all public facilities serving the sample villages; a weekly visit to all public facilities checking for absenteeism of health care providers; and a household and individual survey of 5759 individuals (including adults and children aged 0-13) in 1024 households.3

For most of the analysis reported herein, I have used the data on children from the individual survey and merged these with parental characteristics from the household and adult surveys. Household characteristics (such as caste and religion) are merged from the household survey. The size of this sample (sample 1) is 2263 children. To look at the effect of parental education on child survival to ages one and five, I use information available from the adult survey on all live births to mothers. The size of this sample (sample 2) is about 8100 live births.

Seven measures of child health have been used in the study. Two of these are reported on behalf of the child by an adult respondent. The first is a report of the child's general health - on a scale of 1 to 10, with higher numbers denoting better health - reported by an adult respondent. An adult respondent is also asked if the child has experienced a series of conditions in the last 30 days; these include cold, cough, hot fever, diarrhoea, weakness, vomiting, worms in stool, trouble breathing, abdominal pain, skin problems and 'other problems'. The second measure of adult-reported child health used here is the total number of these conditions that is reported for each child. The remaining five health measures are objective indicators obtained from health measurements taken by the interviewer. These are the ratio of weight to height, an indicator for a low hemo cue reading (below 11 g/dl), an indicator for high temperature (above 37.7 C or 100 F), the peak flow meter reading and the time taken by the child to squat and stand up five times.4

What do the above health indicators measure? Adult-reported child health measures the general physical well-being of the child (subject, of course, to the adult's understanding of the same). The total number of conditions experienced in the last 30 days also measures the general health of the child (once again depending on how aware the adult respondent is of the child's ailments), since the conditions asked about are indicative of the level of immunity in the child, her exposure and response to infections, and general nutritional status. The weight-to-height ratio measures the nutritional status of children. The hemo cue reading measures the hemoglobin content of blood. A low reading (below 11 g/dl) indicates the presence of anemia in children and is related to nutritional deficiencies (Agarwal (2006)) and concomitant exposure to infestations such as hookworm. High temperature is indicative of exposure and reaction to infections. The peak flow meter reading is an indicator of lung capacity

3 See Banerjee, Deaton and Duflo (2004a, b) for further details of the design of the survey.
4 The peak flow reading used is the mean of three readings recorded by the interviewer.



and measures respiratory health, including possible exposure to infections such as tuberculosis. The time taken to squat and rise five times is an indicator of the physical fitness and well-being (also related to the nutritional status) of the child.

Summary statistics of all variables are listed in Table 1. Sample 1 is the sample of all living children aged 0-13 who are asked about in the child survey. The summary statistics for this sample (Table 1(a)) reveal two interesting features which highlight the contribution of the present paper to the existing literature on parental education and child health. The first feature to note is the clearly diverse aspects of health that the seven health outcomes represent (Table 1(a)). The mean values of overall child health (7 out of 10), number of conditions (1 out of 11) and the proportion of children who have high temperature (1%) seem to indicate fairly good child health at the average. However, the high proportion of children with low hemo cue readings (50%) and the relatively low mean weight-to-height ratio5 (0.15) appear to indicate poor child health at the average. Examining the effects of parental education on each of the measures would therefore provide a good summary of the overall health benefits of parental schooling through its impact on these diverse measures. Second, the general level of parental education in the sample is very low. About 94% of children in the survey have mothers who are illiterate or did not pass class 1, while 58% of children have fathers who are illiterate or did not pass class 1. 3% of children have mothers that have been to primary school (classes 1-5).
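As a rough illustration of how the analysis dataset described in this section is assembled, the following sketch merges child records with household and parental characteristics. This is an assumption-laden sketch: all file names, key names and column names (child_survey.csv, hh_id, mother_id, person_id, and so on) are hypothetical placeholders, not the survey's actual variable names.

    # A minimal sketch of the merge described in Section 2, under hypothetical
    # file and column names.
    import pandas as pd

    children = pd.read_csv("child_survey.csv")        # one row per child aged 0-13
    adults = pd.read_csv("adult_survey.csv")          # education, age, health
    households = pd.read_csv("household_survey.csv")  # caste, religion

    # Attach household characteristics, then each parent's characteristics,
    # prefixing columns so mother and father variables stay distinct.
    mothers = adults.add_prefix("mother_")
    fathers = adults.add_prefix("father_")
    sample1 = (children
               .merge(households, on="hh_id", how="left")
               .merge(mothers, left_on="mother_id",
                      right_on="mother_person_id", how="left")
               .merge(fathers, left_on="father_id",
                      right_on="father_person_id", how="left"))
    print(len(sample1))  # roughly 2263 children in the paper's sample 1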

3 Does Parental Education Protect Child Health?

The simple OLS model described below is used to measure the association between parental education and child health outcomes, controlling for some basic demographic variables (e.g. child age, gender, caste, religion and parental age).6

Hi = Xi β + Zi γ + ε ,    (1)


where Hi is the health outcome of child i, Xi are parental education dummies and Zi are controls. The coefficient of interest, β, measures the impact of parental education on child health. Depending on what aspect of health Hi denotes - i.e. whether higher Hi indicates better or worse health - a positive or negative β will indicate if parental education is protective of child health. For instance, if higher Hi denotes better health,

5 A weight-to-height ratio of 0.15 is at the lower end of the normal range of weight to height for a 6-year-old child, the mean age in sample 1.
6 Using non-linear models such as ordered probits and probits yields the same qualitative results, but such models are often plagued by convergence issues. Also, all regressions reported in the paper include the most basic set of controls; increasing the number of controls results in a loss of observations with little change in qualitative results.



then a positive β would indicate that parental education has a positive impact and is hence protective of child health. Likewise, if higher Hi denotes worse health, a negative β would point to a protective impact. Note that β represents a simple association between parental education and child health and hence cannot be given a causal interpretation. The sign (and statistical significance) of the estimated value of β will indicate if parental education is an important marker for child health and can predict the latter. The objective of the current study is to investigate if this is indeed the case. The results are presented in Tables 2(a)-(b) and discussed below.

Table 2(a) presents the effects of parental education on different measures of child health. The reported regressions include both maternal and paternal education as explanatory variables. However, results are very similar when maternal or paternal education is included individually. The impact of parental education on child health is weak. Parental education has some (jointly significant) protective effect on only two out of seven child health outcomes, viz. peak flow meter readings and the time taken by the child to squat and rise five times. Maternal education is significantly associated with an improvement in the first of these measures, but only one education group has a significant independent effect: children whose mothers have completed middle school (standards 6, 7 or 8) have a significantly higher peak flow reading - denoting better respiratory health - than children whose mothers are illiterate. The effect of paternal education on the second measure - time to squat - appears to be strong, as it is significant for three of the six paternal education dummies: children whose fathers have primary education (standards 1-5), secondary education (standards 9, 10) and higher secondary education (standards 11, 12) take less time to squat and stand five times than those whose fathers are illiterate. Moreover, the effects are larger when fathers have secondary and higher secondary education than when they have primary education. Only one paternal education dummy is significant for weight for height: children whose fathers have completed higher secondary education (standards 11 or 12) have a significantly higher weight-for-height ratio than children with illiterate fathers.7 However, parental education is not jointly significant in predicting the weight for height of children.

The overall weak effect of parental education on child health could be driven, at least in part, by a bias in the health of surviving children induced by parents' education. Recall that the sample in use is that of living children. If the offspring of more educated parents are more likely to survive, then the surviving children of these parents could well be frailer than those of their less educated counterparts. This would diminish the measured impact of parental education on the health of surviving children even if there were a clear protective effect of parental education on child health, when the latter is measured by the probability of child survival.

7 Note the significant but adverse effect of paternal education on the number of conditions experienced by children in the last 30 days. Children whose fathers have higher secondary education experience significantly more conditions than those whose fathers are illiterate. The effect could be driven by a greater awareness of children's conditions by adults in such households (Murray and Chen (1992), Sen (2002)).
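A minimal sketch of how equation (1) might be estimated is given below. It assumes the merged sample1 frame from the sketch in Section 2; all variable names are hypothetical, and the control set simply follows the notes to Table 2(a). It is an illustration of the specification, not the paper's exact code.

    # Equation (1) as an OLS regression: education enters as dummies, with
    # 'illiterate' as the omitted category, matching the tables.
    import statsmodels.formula.api as smf

    eq1 = smf.ols(
        "peak_flow ~ C(mother_ed, Treatment('illiterate'))"
        " + C(father_ed, Treatment('illiterate'))"
        " + C(age_group) + female + C(caste) + C(religion)"
        " + n_elder_brothers + n_elder_sisters"
        " + mother_age + I(mother_age ** 2) + father_age + I(father_age ** 2)",
        data=sample1,
    ).fit()
    # Joint significance of each group of terms, analogous to the F-stat
    # rows for the parental education variables in Table 2(a).
    print(eq1.wald_test_terms())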



To test the importance of child survival in driving the results in Table 2(a), I turn to the birth history of all women in the adult sample, and examine if higher parental education increases the probability that each born child (sample 2) survives to age 1, to age 5 and to the present day (conditional upon present age). The results are presented in Table 2(b). Once again, although the reported regressions control for both maternal and paternal education, the results are very similar when these are included individually. There appears to be no protective effect of maternal or paternal education on child survival. Parental education fails to be jointly significant in any of these regressions and the individual coefficients are often small and of the wrong sign. The results in Table 2(b) suggest that survival bias may not be an important factor in driving the weak relationship obtained between parental education and the health outcomes of surviving children (Table 2(a)). They also provide support for the earlier finding that parental education has a weak protective effect on overall child health in the current survey.
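The survival-bias check can be sketched the same way, as a linear probability model on the birth-history sample. Again, the file and column names are hypothetical placeholders.

    # A minimal sketch of the Table 2(b)-style check: survival to age 1,
    # regressed on the same parental-education dummies.
    import pandas as pd
    import statsmodels.formula.api as smf

    sample2 = pd.read_csv("birth_history.csv")  # one row per live birth (hypothetical)

    lpm = smf.ols(
        "survived_to_1 ~ C(mother_ed, Treatment('illiterate'))"
        " + C(father_ed, Treatment('illiterate'))"
        " + C(age_group) + female + twin + C(caste) + C(religion)",
        data=sample2,
    ).fit()
    # If the education terms are jointly insignificant here, survival bias
    # is unlikely to explain the weak results for surviving children.
    print(lpm.wald_test_terms())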

4 The Role of Parental Health Behaviors

The literature has often cited parental health behaviors as an important reason for the protective effect of education on child health (Currie (2008), Maitra (2004)). Higher parental education is expected to lead to a choice of better health inputs - such as regular immunization practices or the choice of better healthcare at the time of child delivery - which are, in turn, expected to have a positive impact on child health. A weak relationship between parental education and child health - as obtained above - could therefore be driven by a breakdown in the link between parental education and health behaviors and/or the link between health behaviors and child health. For example, it is possible that higher parental education is not associated with better health behaviors, as would be the case if education is of poor quality. Alternatively, it could be the case that better health behaviors are not translated into better child health, possibly due to missing complementary inputs in the health generation process, such as an effective healthcare system. Here I show that while the link between parental education and health behaviors is valid and in the expected direction, the link between health behaviors and child health is weak. This suggests that the absence of complementary inputs in the health generation process may be the source of the obtained weak relationship between parental education and child health.

To test the link between health behaviors, represented by covariates Wi , and parental education, I run the following OLS regression:

Wi = Xi η + Zi µ + ν ,    (2)




where Wi denotes parental health behaviors (immunization practices, healthcare utilization at the time of childbirth), Xi parental education dummies and Zi controls. The sign of η will indicate if parental education (Xi) has the expected effect on Wi . For example, if Wi represents completion of immunization routines, a positive η would indicate that higher parental education is associated with better health behaviors. Finally, to test if health behaviors Wi have the expected (improving) impact on child health, I use the following OLS regression:

Hi = Wi π + Zi ρ + ζ .    (3)

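The two links can be tested with a pair of regressions mirroring equations (2) and (3). The sketch below uses hypothetical variable names and the sample1 frame from the earlier sketch; it illustrates the testing strategy, not the paper's exact specification.

    # Link 1, equation (2): education -> behavior (e.g. completed BCG routine).
    # Link 2, equation (3): behaviors -> outcome (e.g. peak flow reading).
    import statsmodels.formula.api as smf

    controls = "C(age_group) + female + C(caste) + C(religion)"

    eq2 = smf.ols("bcg_complete ~ C(mother_ed) + C(father_ed) + " + controls,
                  data=sample1).fit()
    eq3 = smf.ols("peak_flow ~ bcg_complete + dpt_complete + opv_complete + "
                  + controls, data=sample1).fit()

    print(eq2.wald_test_terms())  # the paper finds this first link strong
    print(eq3.wald_test_terms())  # and this second link generally weak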

The sign of π will indicate if the link between Wi and child health Hi is valid and in the expected direction.

The results of running (2) and (3) are presented in Tables 3(a)-(b) and Table 4, respectively, and discussed below. Table 3(a) presents the effects of parental education on immunization behaviors, viz. whether the child has an immunization card, and whether the child has completed the BCG, DPT, OPV, measles and pulse polio immunization routines, respectively.8 Maternal education is significantly positively associated with (and jointly significant for) each of these behaviors except pulse polio immunization, although the independent effects of maternal education are reduced and insignificant when paternal education is also controlled for. However, paternal education has a very strong impact on each type of immunization behavior and these effects remain individually significant even after maternal education is controlled for. It seems, therefore, that parental education - especially paternal education - plays a very important role in the adoption of better immunization practices.

Table 3(b) presents the effects of parental education on the venue of childbirth and the qualifications of attendant caregivers at the time of childbirth (for each child). The place of childbirth is defined as 'formal' if it is a community health center/sub center, a government hospital or a private hospital. 'Informal' arrangements include birth at home or at a daima's (a traditional birth attendant's) home. Birth attendants are classified as 'formal' if they include a government doctor, a private doctor, an ANM (auxiliary nurse/midwife), a nurse/compounder or a trained representative from an NGO. 'Informal' attendants include a family member, an untrained daima or a bhopa (a traditional healer). Clearly, the adoption of 'formal' healthcare arrangements counts as desirable health behavior, as 'informal' arrangements are characterized by a lack of medical facilities or of appropriately trained staff.

8 The BCG vaccine protects against tuberculosis, DPT against diphtheria, pertussis (whooping cough) and tetanus and OPV (Oral Polio Vaccination) against poliomyelitis. The survey has two questions on polio vaccination: one on whether the child has completed the OPV routine and another on whether the child has completed the pulse polio routine. Each is included separately in the analysis.
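The 'formal'/'informal' classifications described above amount to binary indicators constructed from categorical survey responses. A minimal sketch, with hypothetical category labels standing in for the actual survey codes:

    # Hypothetical category labels; the real survey codes may differ.
    FORMAL_PLACES = {"community health center", "sub center",
                     "government hospital", "private hospital"}
    FORMAL_ATTENDANTS = {"government doctor", "private doctor", "ANM",
                         "nurse/compounder", "trained NGO representative"}

    # 1 if the recorded place/attendant falls in the 'formal' set, else 0.
    sample1["formal_birth_place"] = (
        sample1["birth_place"].isin(FORMAL_PLACES).astype(int))
    sample1["formal_attendant"] = (
        sample1["birth_attendant"].isin(FORMAL_ATTENDANTS).astype(int))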



Once again, parental education is found to be significantly associated with better health behaviors. The effects are jointly significant in each of the regressions reported in Table 3(b). Maternal education is significantly correlated with choosing a formal location for childbirth, though the individual effects become mostly insignificant after controlling for paternal education. Paternal education is also significantly associated with choosing a formal location for childbirth; moreover, the individual effects remain highly significant even after controlling for maternal education. Regarding the qualifications of birth attendants, maternal education is significantly associated with choosing formal (trained) care but the individual effects are reduced (though not completely) when paternal education is controlled for. Paternal education, however, is significantly associated with choosing formal care, with the individual effects remaining significant even after maternal education is included in the regression.

In conclusion, the link between parental education and desirable health behaviors appears to be valid and operating in the expected direction. Maternal education is important for choosing desirable health behaviors but the effect is often driven by paternal education. The effect of fathers' education on desirable health behaviors is strong and persists even after controlling for mother's education.

Table 4 presents evidence on the link between immunization behaviors and child health outcomes. This link is generally weak, with most coefficients in Table 4 being insignificant and often of the wrong sign. Health behaviors have a jointly significant impact on only two health outcomes - peak flow meter reading and time taken to squat and rise five times. Notice that while the impact of having completed the Oral Polio Vaccination (OPV) routine on peak flow readings is positive (albeit insignificant), the effect of having completed the pulse polio immunization routine is negative and significant. This effect could be driven by the emphasis placed by the pulse polio programme on areas and households where polio is more likely to occur, viz. those with innately sicker children.9

In summary, therefore, the results reported in Tables 3 and 4 indicate that although there are strong links in the expected direction between parental education and health behaviors, the link between health behaviors and health outcomes is quite weak. This suggests, in turn, that the breakdown of the second link could be responsible for the weak protective effect of parental education on child health outcomes obtained in Section 3.

Interestingly, the two health outcomes for which parental education was found to have some protective effect in Section 3 - time to squat and peak flow readings - are the ones for which both links are operative. That is, for these two outcomes, parental education is found to be significantly (jointly) associated with health behaviors, which are in turn significantly (jointly) associated with time to squat and

9 Note also that the overall weak association between health behaviors and child health outcomes could be driven, at least in part, by a self-selection effect, viz. that sicker children are more likely to receive better health inputs. I try to test (albeit crudely) if this is likely by controlling for mother's health measures - as proxies for innate child health - but this does not appear to strengthen the relationship between parental health behaviors and child health (regressions not reported).



peak flow readings. This too suggests that the link between health behaviors and child health outcomes could be an important channel for the realization of parental education benefits.

What factor(s) could be responsible for the failure of better parental health behaviors to translate into better health outcomes? It could be argued that an inability of parents to correctly execute the better health behaviors - a potential consequence of poor education quality - could be responsible for this finding. While this may be true of many complex health behaviors, those considered here are simple enough - viz. whether immunization routines for different diseases have been completed, and the choice of childbirth venue and trained birth attendants - that it seems unlikely that they could be performed inadequately.

It seems likely that there may be missing inputs in the health generation process which work in conjunction with parental health behaviors to produce better child health. Examples of such inputs would be the presence of adequate and effective healthcare facilities, or parental access to a minimal quantity of economic resources and well-being. The evidence provided in Banerjee, Deaton and Duflo (2004a, b) of the high absenteeism and poor quality of health services in facilities in the region is consistent with the first hypothesis. The general poverty of households in the region (Banerjee et al. (2004a, b)) is consistent also with the second hypothesis. It is also possible that the weak link between health inputs and child health is driven by statistical issues such as self-selection, for instance, if sicker children (born of sicker mothers) are more likely to have completed the immunization routines (or to have been born under 'formal' medical supervision).

The survey used herein does not allow an attempt to conclusively identify which factors are responsible for the breakdown of the link between parental education and child health - the absence of specific missing inputs in the health generation process or statistical issues such as self-selection. But a deeper understanding of these factors is essential for an effective framing of education, health and income distribution policies, with an appreciation of the inter-linkages between them. Future research must therefore apply itself to the task and attempt to address the issue appropriately.

5 Summary and Conclusion

The role of parental education in influencing child health outcomes has received much attention in the development literature. In this paper, I ask if parental education is protective of child health, as measured by seven different health outcomes. While existing studies have addressed the question of whether parental education impacts child health, each of these has focused on at most one or two health measures. Moreover, studies based in developing countries have used mainly infant health outcomes to answer this and related questions. The main contribution of the current study is that it offers insight on the impact of parental education on the health



of older children (aged 0-13) and also explores the relationship for multiple - viz. seven - measures of child health, both subjective and objective.

I find that parental education has an overall weak impact on child health outcomes. Only two of the seven health outcomes - viz. peak flow meter readings and time to squat and rise five times - are (jointly significantly) protected by parental education. I then show that the generally weak impact of parental education can be traced, at least in part, to a failure of better health behaviors to lead to better child health outcomes, even though parental education is strongly associated with these better behaviors.

It is important to identify conclusively the reasons behind the inefficacy of better health behaviors in generating better child health outcomes. The behaviors used herein are simple enough that an inappropriate execution of the same is unlikely to be the reason behind their measured inefficacy. It is likely that there are missing complementary inputs in the health generation process that prevent the translation of better parental health behaviors into better child health outcomes. Two inputs which could be important in this respect are an effective healthcare system and parental access to a minimal level of economic resources. There is evidence in the literature (Banerjee et al. (2004a, b)) of the poor quality of healthcare facilities in the survey region and also of the general poverty of households in the region. Statistical issues such as self-selection could also play a role in driving, in part or in entirety, the weak relationship between health inputs and child health obtained herein.

In order to appreciate the inter-linkages between education, health and income redistribution schemes and to frame effective policy, it is essential to understand very precisely the role that these or other factors might play in the health generation process. It is beyond the scope of the current study to conclusively address this issue, but the results obtained herein point to the need for future research to apply itself to the task.

References

1. Agarwal, K.N. (2006). "Indicators for Assessment of Anemia and Iron Deficiency in the Community". Available at: http://www.pitt.edu/~super1/lecture/lec24831/article.doc
2. Banerjee, Abhijit, Deaton, Angus and Duflo, Esther (2004a). "Health Care Delivery in Rural Rajasthan", Economic and Political Weekly, February 28, 2004, pp. 944-949
3. Banerjee, Abhijit, Deaton, Angus and Duflo, Esther (2004b). "Wealth, Health and Health Services in Rural Rajasthan", American Economic Review, 94(2), pp. 326-330
4. Bicego, George T. and Boerma, J. Ties (1993). "Maternal Education and Child Survival: A Comparative Study of Survey Data from 17 Countries", Social Science and Medicine, 36(9), pp. 1207-1227
5. Caldwell, J.C. (1979). "Education as a Factor in Mortality Decline: An Examination of Nigerian Data", Population Studies, 33(3), pp. 395-413
6. Case, Anne, Fertig, Angela and Paxson, Christina (2005). "The Lasting Impact of Childhood Health and Circumstance", Journal of Health Economics, 24, pp. 365-389
7. Case, Anne, Lubotsky, Darren and Paxson, Christina (2002). "Economic Status and Health in Childhood: The Origins of the Gradient", American Economic Review, 92(5), pp. 1308-1334



8. Chou, Shin-Yi, Liu, Jin-Tan, Grossman, Michael and Joyce, Theodore J. (2007). "Parental Education and Child Health: Evidence from a Natural Experiment in Taiwan", NBER Working Paper No. 13466
9. Currie, Janet (2008). "Healthy, Wealthy and Wise: Socioeconomic Status, Poor Health in Childhood, and Human Capital Development", NBER Working Paper No. 13987
10. Desai, Sonalde and Alva, Soumya (1998). "Maternal Education and Child Health: Is There a Strong Causal Relationship?", Demography, 35(1), pp. 71-81
11. Lam, David and Duryea, Suzanne (1999). "Effects of Schooling on Fertility, Labor Supply and Investments in Children, with Evidence from Brazil", Journal of Human Resources, 34(1), pp. 160-192
12. Maitra, Pushkar (2004). "Parental Bargaining, Health Inputs and Child Mortality in India", Journal of Health Economics, 23, pp. 259-291
13. Maitra, Pushkar, Peng, Xiujian and Zhuang, Yaer (2006). "Parental Education and Child Health: Evidence from China", Asian Economic Journal, 20(1), pp. 47-74
14. Murray, Christopher J.L. and Chen, Lincoln C. (1992). "Understanding Morbidity Change", Population and Development Review, 18(3), pp. 481-503
15. Psacharopoulos, George (1988). "Education and Development: A Review", World Bank Research Observer, 3(1), pp. 99-116
16. Sen, Amartya K. (2002). "Health: Perception versus Observation", British Medical Journal, 324, pp. 860-861
17. Thomas, Strauss and Henrique (1991). "How Does Mother's Education Affect Child Height?", Journal of Human Resources, 26(2), pp. 183-211



Table 1 (a) Summary Statistics for Sample 1: Living Children Aged 0-13 (Mean Age 6.34 years)

Variable | Obs | Mean | Std. Dev. | Min | Max
Age: 0-2 yrs | 2263 | 0.194 | 0.396 | 0 | 1
Age: 3-5 yrs | 2263 | 0.247 | 0.431 | 0 | 1
Age: 6-9 yrs | 2263 | 0.308 | 0.462 | 0 | 1
Age: 10-13 yrs | 2263 | 0.251 | 0.433 | 0 | 1
Female | 2263 | 0.490 | 0.500 | 0 | 1
No. of Elder Male Siblings | 2263 | 1.315 | 1.127 | 0 | 7
No. of Elder Female Siblings | 2263 | 0.517 | 0.846 | 0 | 5
Caste: Scheduled Tribe | 2219 | 0.817 | 0.387 | 0 | 1
Caste: Scheduled Caste | 2219 | 0.036 | 0.186 | 0 | 1
Caste: Other Backward Class | 2219 | 0.097 | 0.296 | 0 | 1
Caste: Minority | 2219 | 0.002 | 0.042 | 0 | 1
Caste: Other | 2219 | 0.048 | 0.214 | 0 | 1
Religion: Adivasi | 2211 | 0.221 | 0.415 | 0 | 1
Religion: Hinduism | 2211 | 0.788 | 0.409 | 0 | 1
Religion: Islam | 2211 | 0.001 | 0.030 | 0 | 1
Religion: Christianity | 2211 | 0.006 | 0.076 | 0 | 1
Religion: Jainism | 2211 | 0.000 | 0.000 | 0 | 0
Overall Child Health Reported by Adult Respondent (1-10) | 1742 | 6.780 | 1.968 | 1 | 10
No. of Conditions (out of 11) | 2232 | 1.220 | 1.710 | 0 | 9
Weight to Height Ratio (kg/cm) | 1872 | 0.150 | 0.060 | 0.043 | 1.394
If Low Hemo Cue Reading (<11 g/dl) | 1072 | 0.497 | 0.500 | 0 | 1
If High Temperature (>37.7 C) | 1722 | 0.012 | 0.110 | 0 | 1
Peak Flow Meter Reading (L/min) | 1218 | 162.039 | 61.872 | 20 | 413.333
Time Taken to Squat & Rise 5 Times (seconds) | 1048 | 6.198 | 1.710 | 3 | 21.070
If Child Has Immunization Card | 2261 | 0.097 | 0.296 | 0 | 1
If Child Has Completed BCG Immunization | 2217 | 0.382 | 0.486 | 0 | 1
If Child Has Completed DPT Immunization | 2217 | 0.354 | 0.478 | 0 | 1
If Child Has Completed OPV Immunization | 2219 | 0.338 | 0.473 | 0 | 1
If Child Has Completed Measles Immunization | 2217 | 0.327 | 0.469 | 0 | 1
If Child Has Completed Pulse Polio Immunization | 2246 | 0.713 | 0.453 | 0 | 1
If Child Was Born in a 'Formal' Location | 2263 | 0.078 | 0.268 | 0 | 1
If Formally Trained Attendant Present at Child Delivery | 2262 | 0.052 | 0.222 | 0 | 1
If Untrained ('Informal') Attendant Present at Child Delivery | 2262 | 0.893 | 0.310 | 0 | 1
Mother's Age (Years) | 2263 | 32.794 | 7.467 | 16 | 60
Mother's Age Squared | 2263 | 1131.151 | 529.359 | 256 | 3600
Father's Age (Years) | 2263 | 35.819 | 8.134 | 18 | 65
Father's Age Squared | 2263 | 1349.124 | 633.018 | 324 | 4225
Mother's Education: Illiterate/Did not complete class 1 | 2263 | 0.935 | 0.246 | 0 | 1
Mother's Education: Classes 1-5 | 2263 | 0.031 | 0.173 | 0 | 1
Mother's Education: Classes 6-8 | 2263 | 0.017 | 0.129 | 0 | 1
Mother's Education: Classes 9-10 | 2263 | 0.011 | 0.105 | 0 | 1
Mother's Education: Classes 11-12 | 2263 | 0.003 | 0.051 | 0 | 1
Mother's Education: College or Higher | 2263 | 0.003 | 0.056 | 0 | 1
Father's Education: Illiterate/Did not complete class 1 | 2263 | 0.582 | 0.493 | 0 | 1
Father's Education: Classes 1-5 | 2263 | 0.196 | 0.397 | 0 | 1
Father's Education: Classes 6-8 | 2263 | 0.114 | 0.318 | 0 | 1
Father's Education: Classes 9-10 | 2263 | 0.068 | 0.251 | 0 | 1
Father's Education: Classes 11-12 | 2263 | 0.021 | 0.143 | 0 | 1
Father's Education: College or Higher | 2263 | 0.013 | 0.114 | 0 | 1



Table 1 (b) Summary Statistics for Sample 2: All Live Births to Mothers (Mean Age: 3.23 years)a

Variable | Obs | Mean | Std. Dev. | Min | Max
Age: 0-2 yrs | 8171 | 0.712 | 0.453 | 0 | 1
Age: 3-5 yrs | 8171 | 0.074 | 0.262 | 0 | 1
Age: 6-9 yrs | 8171 | 0.069 | 0.254 | 0 | 1
Age: 10-13 yrs | 8171 | 0.057 | 0.232 | 0 | 1
Age: 14+ yrs | 8171 | 0.093 | 0.290 | 0 | 1
Female | 8256 | 0.155 | 0.362 | 0 | 1
If One of a Twin | 8256 | 0.322 | 0.467 | 0 | 1
Caste: Scheduled Tribe | 8124 | 0.725 | 0.446 | 0 | 1
Caste: Scheduled Caste | 8124 | 0.041 | 0.199 | 0 | 1
Caste: Other Backward Class | 8124 | 0.140 | 0.347 | 0 | 1
Caste: Minority | 8124 | 0.003 | 0.054 | 0 | 1
Caste: Other | 8124 | 0.090 | 0.286 | 0 | 1
Religion: Adivasi | 8100 | 0.178 | 0.382 | 0 | 1
Religion: Hinduism | 8100 | 0.825 | 0.380 | 0 | 1
Religion: Islam | 8100 | 0.001 | 0.038 | 0 | 1
Religion: Christianity | 8100 | 0.004 | 0.067 | 0 | 1
Religion: Jainism | 8100 | 0.000 | 0.000 | 0 | 0
If Child Survived a Year After Birth | 2565 | 0.888 | 0.315 | 0 | 1
If Child Survived 5 Years After Birth | 2086 | 0.828 | 0.378 | 0 | 1
If Child Survived Up to Present | 8256 | 0.952 | 0.214 | 0 | 1
Father's Age (Years) | 8256 | 35.026 | 9.951 | 5 | 90
Father's Age Squared | 8256 | 1325.837 | 783.636 | 25 | 8100
Mother's Age (Years) | 8256 | 31.584 | 8.628 | 15 | 63
Father's Education: Illiterate/Did not complete class 1 | 8256 | 0.478 | 0.500 | 0 | 1
Father's Education: Classes 1-5 | 8256 | 0.234 | 0.423 | 0 | 1
Father's Education: Classes 6-8 | 8256 | 0.145 | 0.352 | 0 | 1
Father's Education: Classes 9-10 | 8256 | 0.081 | 0.273 | 0 | 1
Father's Education: Classes 11-12 | 8256 | 0.026 | 0.160 | 0 | 1
Father's Education: College or Higher | 8256 | 0.032 | 0.176 | 0 | 1
Mother's Education: Illiterate/Did not complete class 1 | 8256 | 0.904 | 0.295 | 0 | 1
Mother's Education: Classes 1-5 | 8256 | 0.036 | 0.187 | 0 | 1
Mother's Education: Classes 6-8 | 8256 | 0.036 | 0.187 | 0 | 1
Mother's Education: Classes 9-10 | 8256 | 0.016 | 0.125 | 0 | 1
Mother's Education: Classes 11-12 | 8256 | 0.004 | 0.066 | 0 | 1
Mother's Education: College or Higher | 8256 | 0.003 | 0.054 | 0 | 1
Measures of Mother's Health (Proxies for Child Frailty at Birth):
Self-Reported Health Status | 6648 | 5.675 | 2.045 | 1 | 10
Proportion of Live Births That Have Died | 7620 | 0.132 | 0.201 | 0 | 1
If Stillbirths Between Current & Previous Child | 8256 | 0.005 | 0.069 | 0 | 1
If Spontaneous Abortions Between Current & Previous Child | 8256 | 0.006 | 0.078 | 0 | 1
If Induced Abortions Between Current & Previous Child | 8256 | 0.001 | 0.029 | 0 | 1
Mother's Age at Birth of This Child | 8171 | 28.267 | 8.279 | 9 | 63
Mother's Age at Birth of This Child, Squared | 8171 | 867.572 | 527.426 | 81 | 3969

————————– a The age of a child is her current age if she is alive, or the age she would have been had she been alive.



Table 2 (a) Parental Education and Child Health Outcomes (OLS Regressions using Sample 1a)

Dependent Variable: (1) Overall Child Health; (2) No. of Conditions; (3) Weight by Height; (4) Low Hemo Reading (<11 g/dl); (5) High Temperature (>37.7 C); (6) Peak Flow Meter Reading; (7) Time Taken to Squat & Rise 5 Times

Mother's Education Dummiesb | (1) | (2) | (3) | (4) | (5) | (6) | (7)
Classes 1-5 | -0.127 (0.258) | 0.012 (0.210) | -0.001 (0.008) | -0.082 (0.088) | -0.009 (0.016) | 11.186 (7.349) | -0.280 (0.297)
Classes 6-8 | 0.507 (0.330) | -0.396 (0.286) | -0.003 (0.010) | 0.210 (0.151) | -0.018 (0.021) | 27.409* (13.763) | -0.703 (0.533)
Classes 9-10 | 0.153 (0.450) | -0.295 (0.375) | 0.001 (0.013) | 0.361* (0.177) | 0.000 (0.029) | 16.413 (15.714) | -0.723 (0.649)
Classes 11-12 | -1.249 (0.811) | -0.045 (0.700) | -0.010 (0.023) | -0.110 (0.315) | -0.006 (0.049) | 28.653 (24.864) | -0.724 (0.955)
College or Higher | 0.343 (1.328) | 0.050 (1.147) | 0.000 (0.000) | -0.312 (0.293) | 0.000 (0.000) | 43.787 (23.444) | 0.188 (0.917)
Father's Education Dummiesb
Classes 1-5 | 0.031 (0.127) | 0.001 (0.098) | 0.000 (0.003) | 0.021 (0.041) | -0.013 (0.007) | 4.266 (3.467) | -0.378* (0.148)
Classes 6-8 | 0.053 (0.153) | 0.089 (0.123) | 0.004 (0.004) | -0.016 (0.051) | 0.014 (0.009) | 8.283 (4.460) | -0.222 (0.181)
Classes 9-10 | 0.244 (0.192) | -0.141 (0.152) | 0.001 (0.005) | 0.034 (0.062) | -0.010 (0.011) | 1.776 (5.356) | -0.697** (0.219)
Classes 11-12 | 0.357 (0.322) | 0.868** (0.250) | 0.027** (0.009) | -0.144 (0.090) | -0.017 (0.018) | 6.278 (7.966) | -0.713* (0.316)
College or Higher | 0.425 (0.428) | -0.251 (0.357) | 0.012 (0.013) | -0.063 (0.181) | -0.004 (0.030) | 18.957 (19.603) | -0.376 (0.761)
Observations | 1674 | 2145 | 1788 | 1043 | 1655 | 1175 | 1009
R-squared | 0.050 | 0.060 | 0.270 | 0.130 | 0.020 | 0.530 | 0.090
F-Stat (Parental Education Variables) | 1.01 | 1.74 | 1.32 | 1.10 | 1.03 | 1.92* | 2.61**
Prob. > F | 0.436 | 0.066 | 0.219 | 0.361 | 0.411 | 0.039 | 0.004

aged 0-13 (summary statistics are provided in Table 1(a)). All regressions reported here are weighted. Other controls include child age dummies, gender, caste & religion dummies, number of elder male & female siblings (proxies for birth order), mother’s age (& squared) and father’s age (& squared).

b Omitted

Category: Illiterate/Did not complete CLASS 1.

228

Sudeshna Maitra Table 2 (b) Parental Education and Child Survival (OLS Regressions using Sample 2a ) Survival to Survival to Survival 1 Year 5 Years To Present (1) (2) (3) Mother’s Education Dummies (Omitted: Illiterate/ Did not complete class 1) Classes 1-5 Classes 6-8 Classes 9-10 Classes 11-12 College or Higher

-0.003 (0.033) 0.004 (0.051) 0.044 (0.075) 0.033 (0.112) 0.059 (0.276)

0.021 (0.040) 0.076 (0.087) 0.031 (0. 108) 0.100 (0.156) 0.000 (0.000)

-0.008 (0.013) -0.006 (0.014) -0.014 (0.022) 0.005 (0.039) -0.026 (0.056)

-0.008 (0.016) -0.006 (0.020) 0.016 (0.027) -0.010 (0.045) -0.003 (0.056)

-0.011 (0.020) -0.037 (0.026) -0.004 (0.035) -0.027 (0.057) -0.061 (0.075)

-0.005 (0.006) -0.007 (0.008) -0.003 (0.010) -0.008 (0.017) 0.005 (0.018)

1876 0.220 0.15 0.999

1531 0.310 0.42 0.922

5888 0.190 0.26 0.990

Father’s Education Dummies (Omitted: Illiterate/Did not complete class 1) Classes 1-5 Classes 6-8 Classes 9-10 Classes 11-12 College or Higher

Observations R-squared F-Stat (Parental Education Variables) Prof > F

————————– Standard errors in parentheses. *significant at 5% **significant at 1%. a Sample

2 includes all live births to mothers (summary statistics are provided in Table I(b)). Other controls in the reported regressions are age dummies for the child (proxies for year of birth), gender, if one of a twin, caste and religion dummies, age of father (& squared), age at childbirth of mother (& squared), sampling stratum (measures distance to a road) and some maternal characteristics that could affect the inherent frailty of the child (proportion of children born to the mother that have died, separate dummies for if the mother had stillbirths. spontaneous abortions or induced abortion before the previous child and the current child and the mother’s self-reported health status).

Does Parental Education Protect Child Health?

229

Table 3 (a) Parental Education and Health Behaviors - Immunization Practices (OLS Regressions using Sample 1a ) (contd. on next page) Dependent Variable:

Child Has an Immunization Card (1) (2) (3)

Child Has Completed BCG Immunization (4) (5) (6)

Child Has Completed DPT Immunization (7) (8) (9)

Mother’s Education Dummiesb Classes 1-5 Classes 6-8 Classes 9-10 Classes 11-12 College or Higher

0.058 (0.035) 0.058 (0.048) 0.090 (0.058) 0.255* (0.120) 0.176 (0.155)

0.054 (0.035) 0.006 (0.048) -0.021 (0.064) 0.229 (0.120) 0.067 (0.164)

0.010 (0.057) 0.141 (0.077) 0.383** (0.091) 0.458* (0.188) 0.482* (0.244)

-0.034 (0.057) 0.063 (0.076) 0.108 (0.101) 0.311 (0.188) 0.100 (0.257)

0.003 (0.056) 0.100 (0.076) 0.381** (0.090) 0.483** (0.186) 0.519* (0.242)

-0.040 (0.056) 0.020 (0.075) 0.122 (0.100) 0.330 (0.186) 0.177 (0.255)

Father’s Education Dummiesb Classes 1-5

0.000 (0.017) 0.067** (0.021) 0.071** (0.025) 0.323** (0.042) 0.153** (0.051)

Classes 6-8 Classes 9-10 Classes 11-12 College or Higher Observations R-squared F-Stat (Parental Education Variables) Prob. > F

-0.003 (0.017) 0.067** (0.021) 0.054* (0.026) 0.312** (0.042) 0.144* (0.061)

0.064* (0.027) 0.115** (0.033) 0.237** (0.040) 0.422** (0.066) 0.538** (0.081)

0.064* (0.026) 0.111** (0.033) 0.223** (0.041) 0.392** (0.067) 0.469** (0.096)

0.057* (0.026) 0.124** (0.032) 0.244** (0.039) 0.402** (0.066) 0.512** (0.080)

0.057* (0.026) 0.123** (0.033) 0.229** (0041) 0.374** (0.066) 0.431** (0.095)

2170 2170 2170 2128 2128 2128 2128 2128 2128 0.077 0.102 0.109 0.132 0.157 0.168 0.125 0.151 0.160 2.30* 15.97** 8.07** 5.84** 22.04** 10.79** 5.90** 21.64** 10.66** 0.043

0.000

0.000

0.000

0.000

0.000

0.000

0.000

0.000

————————– Standard errors in parentheses. ∗significant at 5%; ∗∗ significant at 1%. a

Sample 1 includes all living children aged 0-13 (summary statistics are provided in Table 1(a)). All regressions reported here are weighted. Other controls are age dummies for the child, gender, dummies for caste and religion and the age of the parent’s (& squared) whose education features in the regression.

b

Omitted category: Illiterate/Did not complete class 1.

230

Sudeshna Maitra

Table 3 (a) Parental Education and Health Behaviors - Immunization Practices (OLS Regressions using Sample 1a ) (contd. from previous page) Dependent Variable:

Child Has Completed OPV Immunization (10) (11) (12)

Child Has Completed Measles Immunization (13) (14) (15)

Child Has Completed Pulse Polio Immunization (16) (17) (18)

Mother’s Education Dummiesb Classes 1-5 Classes 6-8 Classes 9-10 Classes 11- 12 College or Higher

0.005 (0.055) 0.092 (0.075) 0.407** (0.089) 0.491** (0.184) 0.537* (0.239)

-0.027 (0.056) 0.015 (0.075) 0.152 (0.099) 0.356 (0.184) 0.187 (0.252)

0.068 (0.055) 0.058 (0.075) 0.388** (0.090) 0.502** (0.183) 0.604* (0.291)

0.024 (0.056) -0.016 (0.074) 0.122 (0.099) 0.341 (0.184) 0.196 (0.301)

-0.022 (0.053) 0.032 (0.072) 0.103 (0.084) 0.086 (0.175) 0.159 (0.227)

-0.056 (0.054) 0.003 (0.072) -0.026 (0.095) -0.055 (0.177) -0.002 (0.243)

Father’s Education Dummiesb Classes 1-5

0.050 (0.026) 0.116** (0.032) 0.224** (0.039) 0.360** (0.065) 0.531** (0.079)

Classes 6-8 Classes 9-10 Classes 11-12 College or Higher Observations R-squared F-Stat (Parental Education Variables) Prob. > F

0.050 (0.026) 0.115** (0032) 0.206** (0.040) 0.327** (0.065) 0.437** (0.094)

0.053* (0.026) 0.103** (0.032) 0.250** (.0.039) 0.280** (0.065) 0.557** (0.081)

0.051* (0.026) 0.104** (0.032) 0.228** (0.040) 0.253** (0065) 0.484** (0.094)

0.004 (0.025) 0.015 (0.031) 0.159** (0.037) 0.085 (0.062) 0.199** (0.076)

0.004 (0.025) 0.014 (0.031) 0.165** (0038) 0.074 (0.063) 0.196* (0.090)

2130 2130 2130 2128 2128 2128 2155 0.127 0.148 0.161 0.123 0.145 0.154 0.108 6.59** 20.08** 9.99** 6.21** 19.66** 9.83** 0.51

2155 2155 0.116 0.121 5.18** 2.51**

0.000

0.000

0.000

0.000

0.000

0.000

0.000

0.772

0.005

————————– Standard errors in parentheses. *significant at 5%, ** significant at 1%. a

Sample 1 includes all living children aged 0-13 (summary statistics are provided in Table 1(a)). All regressions reported here are weighted. Other controls are age dummies for the child, gender, dummies for caste and religion and the age of the parent’s (& squared) whose education features in the regression.

b

Omitted category. Illiterate/Did not complete class 1.

Does Parental Education Protect Child Health?

231

Table 3 (b) Parental Education and Health Behaviors - Choice of Healthcare at Child Delivery (OLS Regressions using Sample 1a ) Dependent Variableb

Child Born in ‘Formal’ Location ‘Formal’ Staff Present at Child Delivery ‘Informal’ Staff Present at Child Delivery (1) (2) (3) (4) (5) (6) (7) (8) (9)

Mother’s Education Dummiesc Classes 1-5 Classes 6-8 Classes 9-10 Classes 11-12 College or Higher

0.079* (0.032) 0.032 (0.043) 0.233** (0.052) 0.198 (0.108) 0.538** (0.140)

0.062 (0.032) -0.007 (0.044) 0.084 (0.058) 0.140 (0.109) 0.308* (0.149)

0073** (0.026) -0.045 (0.036) 0.168** (0.043) -0.086 (0.090) -0.083 (0.116)

0.069** (0.027) -0.082* (0.036) 0.043 (0.048) -0.106 (0.090) -0.278* (0.123)

-0.122** (0.036) -0.103* (0.050) -0.208** (0.060) -0.407** (0.124) -0.472** (0.161)

-0.111** (0.037) -0.044 (0.050) -0.002 (0.067) -0.372** (0.124) -0.141 (0.170)

Father’s Education Dummiesc Classes 1-5

0.023 (0.015) 0.040* (0.019) 0.100** (0.023) 0.165** (0.038) 0.338** (0.047)

0.019 (0.015) 0.041* (0.019) 0.083** (0.024) 0.151** (0.038) 0.260** (0.055)

0.001 (0.013) 0.033* (0.016) 0.043* (0.019) 0.169** (0.032) 0.203** (0.039)

-0.003 (0.013) 0.037* (0.016) 0.037 (0.020) 0.164** (0.032) 0.214** (0.046)

Observations 2172 2172 R-squared 0.095 0.107 F-Stat (Parental 8.50** 16.60** Education Variables) Prob. > F 0.000 0.000

2172 0.115 8.66**

2171 2171 0.076 0.089 5.26** 12.13**

2171 0.099 7.77**

0.000

0.000

0.000

Classes 6-8 Classes 9-10 Classes 11-12 College or Higher

0.000

-0.005 (0.017) -0.061** (0.022) -0.090** (0.026) -0.265** (0.044) -0.393** (0.053)

0.001 (0.017) -0.060** (0.022) -0.059* (0.027) -0.251** (0.044) -0.359** (0.063)

2171 0.108 8.64**

2171 0.124 19.98**

2171 0.136 11.14**

0.000

0.000

0.000

————————– Standard errors in parentheses. * significant at 5%: ** significant at 1%. a

Sample 1 includes all living children aged 0-13 (summary statistics are provided in Table 1(a)). Regressions are weighted with the same controls as in Table 3(a).

b

Definitions of ‘Formal’ and ‘Informal’ healthcare are provided in the main text.

c

Omitted category: Illiterate/Did not complete class 1.

232

Sudeshna Maitra Table 4 Health Behaviors and Health Outcomes (OLS Regressions using Sample 1a )

Dependent Variable

Low Hemo High Peak Flow Time Taken Overall No. of Weight by Reading Temperature Meter to Squat & Child Health conditions Height (< 11 g/dl) (> 37.7 C) Reading Rise 5 Times (1) (2) (3) (4) (5) (6) (7)

If Formal Location of Birth

-0.004 (0.279)

.-0.031 (0.230)

0.010 (0.008)

-0.003 (0.095)

-0.021 (0.018)

11.353 (8.590)

0.264 (0.342)

If Formally Trained Attendant at Child Delivery

-0.247 (0.277)

0.255 (0.227)

-0.009 (0.008)

0.000 (0.096)

-0.009 (0.017)

-0.212 (8.473)

-0.690* (0.339)

If Untrained Attendant (‘Informal’) at Child Delivery

0.075 (0.232)

-0.070 (0.190)

-0.009 (0.007)

0.035 (0.080)

-0.013 (0.014)

-0.702 (7.104)

0.083 (0.280)

Child Has Immunization Card

-0.025 (0.160)

0.025 (0.136)

0.004 (0.005)

-0.052 (0.055)

0.010 (0.010)

-9.361 (5.043)

-0.085 (0.204)

Child Has Completed BCG Immunization

-0.244 (0.242)

0.513** (0.183)

-0.006 (0.007)

-0.047 (0.087)

0.006 (0.015)

-5.498 (7.635)

0.403 (0.318)

Child Has Completed DPT Immunization

0.858* (0.363)

-0.011 (0.290)

-0.009 (0.011)

0.080 (0.137)

-0.043 (0.023)

-1.667 (13.270)

0.569 (0.518)

Child Has Completed OPV Immunization

-0.242 (0.372)

-0.166 (0.300)

0.010 (0.011)

-0.003 (0.146)

0.052* (0.024)

27.346 (14.193)

-0.396 (0.569)

Child Has Completed Measles Immunization

-0.044 (0.278)

-0.244 (0.216)

0.001 (0.008)

0.015 (0.105)

-0.017 (0.017)

-18.360 (9.649)

-0.717 (0.384)

Child Has Completed Pulse Polio Immunization

-0.147 (0.129)

-0.057 (0.094)

0.000 (0.003)

-0.057 (0.040)

-0.010 (0.007)

-17.671** (3.531)

-0.438** (0.142)

Observations R-squared

1632 0.040

2090 0.050

1751 0.260

1022 0.120

1622 0.010

1151 0.500

987 0.090

F-Stat (All Health Behavior Variables) Prob. > F

1.59

1.74

1.51

0.44

1.22

4.60**

3.58**

0.111

0.075

0.138

0.912

0.278

0.000

0.000

————————– Standard errors in parentheses. * significant at 5% : ** significant at 1%. a Sample

1 includes all living children aged 0-13 (summary statistics are provided in Table 1 (a)). All regressions reported here are weighted Other controls include child age dummies, gender, caste & religion dummies and number of elder male and female siblings (proxies for birth order).

Food Security and Crop Diversification: Can West Bengal Achieve Both? V. K. Ramachandran, Madhura Swaminathan, and Aparajita Bakshi

Abstract The paper tries to estimate the cereal requirement of the population of West Bengal at the end of the 11th Plan period and make State- and district-level estimates of the levels of grain production and yield that are required to permit alternative levels of diversification by releasing alternative amounts of land for noncereal crop production. The paper concludes that the required yield levels are well within the capabilities of regular green revolution technology. Such yields have been achieved regularly in leading rice-growing regions of the State in the past. In order to achieve the yields necessary to ensure food security and release a significant extent of land for diversification, however, growth rates of the rice-yield in West Bengal must rise well above the record of the 1990s and 2000s.

1 Introduction This paper deals with the prospects for cereal crop production and crop diversification in contemporary West Bengal1 . Current agricultural policy in West Bengal can be said to have four inter-related objectives:

V. K. Ramachandran Sociological Research Unit, Indian Statistical Institute, 203 B.T. Road, Kolkata, India. e-mail: [email protected] Madhura Swaminathan Sociological Research Unit, Indian Statistical Institute, 203 B.T. Road, Kolkata, India. e-mail: [email protected] Aparajita Bakshi Sociological Research Unit, Indian Statistical Institute, 203 B.T. Road, Kolkata, India. e-mail: [email protected] 1

The paper draws extensively on a section of Rawal, Swaminathan and Ramachandran (2003).

233

234

V. K. Ramachandran, Madhura Swaminathan, and Aparajita Bakshi

• To protect and extend the achievements of the State with regard to rice production, thereby protecting and extending the basis for self-sufficiency in food production and for food security. • To improve yields in rice production, thus releasing a significant proportion of cropped area in the State for the diversification of crop production, and, in particular, the production of oilseeds, pulses, fruit, vegetables and flowers and other non-food crops. • To protect bio-diversity in West Bengal and develop agriculture and related activities - and, in general, plan land use for agriculture and non-agricultural purposes - in an ecologically sustainable way. • To ensure that the development of agriculture and related activities is an instrument of employment-generation, income-enhancement and, in general, qualitative improvement in the living standards of the working people of the countryside. This paper attempts to assess whether it is indeed possible to achieve simultaneously the objectives of food security in rice production and large-scale diversification in crop production. The paper is based on State- and district-level data on area, production and yields of rice for the time period 1980-81 to 2006-07. For most of the analysis, districts that have recently been bifurcated have been combined, since separate data on the districts created newly are available only from 2000 onwards.

2 Context West Bengal’s rural economy was characterised by rapid growth in the 1980s and early 1990s. The major features of growth, which was particularly marked in the rice economy of the State, were rapid growth in aggregate production; growth in yields per hectare, particularly in the boro (or rabi) season, but also in the aman (or kharif) season; and an overall narrowing of the gap between districts with respect to production and yield performance. The West Bengal path to agricultural growth has been unique in post-Independence India2 . In those parts of the rest of India that saw a rapid and substantial growth in agricultural incomes, the major sources of surplus accumulation were capitalist landlords, rich peasants, and, in general, the rural rich. In West Bengal, by contrast, the moving force of agricultural change and of the dynamism of the rural economy in the 1980s and 1990s were small cultivators. Agricultural growth in West Bengal was made possible because of the removal, by means of land reform and the establishment of panchayati raj, of institutional fetters to growth. It has been pointed out that “the West Bengal example, where value added has grown faster than gross 2

Abhijit Sen has noted that ”West Bengal, with a growth rate of over 7 per cent per annum in agricultural value added – more than two-and-a-half times the national average – can be described as the agricultural success story of the eighties” (Sen, 1992).

Food Security and Crop Diversification: Can West Bengal Achieve Both?

235

Table 1 Exponential trend growth rates of area, production and yield of rice in West Bengal Period

Years

Area

Production

Yield

1980s 1990s 2000s Last 10 years Last 15 years Full period

1980-81 to 1989-90 1990-91 to 1999-2000 2000-2001 to 2006-07 1997-98 to 2006-07 1992-93 to 2006-07 1980-81 to 2006-07

1.4 0.37 1.64* -0.28 -1.14* 0.6

7.32 2.08 1.27 1.7 1.98 3.48

5.98 1.71 1.64 1.98 2.11 2.9

Notes: *Not significant at 10 per cent level of confidence. Estimated using three year moving averages. Source: Computed from Government of West Bengal, Economic Review (various issues), Government of West Bengal, Statistical Abstract (various issues).

output, contrary to the trends elsewhere, suggests that greater efficiency in input use is possible through reform and devolution” (Sen 1992). In 2005-06, with a production of 14.5 million tonnes, West Bengal was the largest producer of rice in the country, followed by Andhra Pradesh and Uttar Pradesh. West Bengal accounted for 15.8 per cent of all-India rice production in 2005-06.

Table 1 shows that, while over a 26-year period, rice production in the State grew at a remarkable 3.5 per cent per annum, the growth spurt of the 1980s has petered out. The growth rate over the last decade was only 1.7 per cent. The rate of growth of production of rice in West Bengal continues to be greater than the rate of growth of population. Nevertheless, with population growing at 1.04 per cent in this decade (2001 to 2006), the slowdown is a matter of serious concern3. The slowdown in production growth is primarily on account of a slowdown in the growth of yields. Yields have grown at less than 2 per cent per annum over the last ten years. It is of note that the average yield of rice in West Bengal in 2005-06 was 2509 kg/hectare. Although this is above the all-India average of 1984 kg/hectare, it is below the yields reported for Andhra Pradesh (2939 kg/hectare), Punjab (3858 kg/hectare), Haryana (3051 kg/hectare), Karnataka (3868 kg/hectare) and Tamil Nadu (2546 kg/hectare) (Government of India 2007). Rice yields in West Bengal are below the averages reported for various countries in Asia including Vietnam (3260 kg/hectare), China (4180 kg/hectare) and Japan (4230 kg/hectare) (IRRI 2008). There is clearly scope for increasing rice yields in West Bengal, in relation to the actual yields obtained in other parts of the country, in relation to yields obtained in other rice growing regions and countries and in relation to potential yields obtained in field trials. 3

District-wise growth rates are reported in the Appendix.

236

V. K. Ramachandran, Madhura Swaminathan, and Aparajita Bakshi Table 2 Districts grouped by rice yields, West Bengal, 2006-07

Yield rate (tonnes per hectare) 1.5 to 2 2 to 2.5 2.5 to 3 Above 3 Highest Lowest

Districts

Jalpaiguri, Koch Bihar, Darjiling Haora, South 24 Parganas, Uttar Dinajpur, Dakshin Dinajpur, Purba Medinipur Paschim Medinipur, Purulia, Murshidabad, North 24 Parganas, Nadia, Bankura, Hugli Malda, Barddhaman, Birbhum Birbhum (3.13 tonnes per ha) Jalpaiguri (1.82 tonnes per ha)

Share in total area (per cent)

Share in total production (per cent)

8.79 24.68

6.26 21.95

45.76

47.13

20.76 6.74 4.04

24.66 8.13 2.84

Source: Government of West Bengal, Economic Review, 2007-08.

In terms of absolute levels of rice yields in 2006-07, the districts of West Bengal can be categorised into four groups (Table 2)4 .

3 Prospects for Crop Diversification We first estimate the cereal requirement of the population of West Bengal at the end of the 11th Plan period. We assume this requirement to chiefly be met by the production of rice, which accounts at present for 93 per cent of total cereal production in the State. We then estimate the levels of grain production and yield that are required to permit alternative levels of diversification by releasing alternative amounts of land for non-cereal crop production.

3.1 Requirements of Cereals for Food Security Projection of Cereals Requirements Based on FAO Norms In 2006-07, for a population of 85.53 million persons, the requirement of cereals in West Bengal was 15.22 millions tonnes. The actual production of cereals was 15.8 million tonnes in 2006-07 (of which rice amounted to 14.7 million tonnes), an amount sufficient to meet our current requirement. Using Food and Agriculture Organization (FAO) norms, the requirement of cereals for the projected population of West Bengal in 2011 is 15.98 million tonnes.

4

The spelling of district names follows the Census of India 2001.

Food Security and Crop Diversification: Can West Bengal Achieve Both?

237

3.2 Prospects for Diversification State-level Prospects In 2006-07, total area under rice cultivation in West Bengal was 5.69 million hectares and yield of rice was 2.59 tonnes per hectare. We now create four alternative prospects (or scenarios) for crop diversification - or, more specifically, for the release of land for non-cereal production - in 2011. In each case below, the State meets the required rice production of 16 million tonnes. 1. If 1.25 million hectares of land on which rice is now grown were to be released for non-cereal production in 2011, an average yield of 3.61 tonnes per hectare is required to maintain food security. Rice yields must grow at 6.82 per cent per annum to achieve this yield. 2. If one million hectares of land on which rice is now grown were to be released for non-cereal production in 2011, an average yield of 3.41 tonnes per hectare is required to maintain food security. Rice yields must grow at 5.65 per cent per annum to achieve this yield. 3. If 500,000 hectares of land on which rice is now grown were to be released for non-cereal production in 2011, an average yield of 3.08 tonnes per hectare is required to maintain food security. Rice yields must grow at 3.53 per cent per annum to achieve this yield. 4. If only 250,000 hectares of land on which rice is now grown were to be released for non-cereal production in 2011, an average yield of 2.94 tonnes per hectare is required to maintain food security. Rice yields must grow at 2.56 per cent per annum to achieve this yield. Two major conclusions emerge. First, the required yield levels are well within the capabilities of regular green revolution technology. Such yields have been achieved regularly in leading rice growing regions of the State in the past, and within the yield levels established through recent field trials5 . Secondly, in order to achieve the yields necessary to ensure food security and release a significant extent of land for diversification, growth rates of the rice-yield in West Bengal must rise well above the record of the 1990s and 2000s. Even to release 250,000 hectares of land from rice production, the required growth rate of rice yields is 2.56 per cent per annum, while actual growth rates in the 1990s and 2000s were 1.71 per cent and 1.64 per cent respectively (Table 1). A return to the growth surge of the 1980s, when the rate of growth of rice yields was 5.98 per cent per annum, will, of course, permit the release of more than one million hectares for alternative crops by 2011.

5 In Tamil Nadu, ICAR field trials conducted by the All India Coordinated Rice Improvement Project on irrigated plots reported yields per hectare of rice of 5.46 tonnes for high yielding varieties and 7.01 tonnes for hybrid rice varieties (www.ppi-for.org/ppiweb).

238

V. K. Ramachandran, Madhura Swaminathan, and Aparajita Bakshi

3.3 District-level Projections We can also create alternative district-wise scenarios. Here is an exercise in which, making certain assumptions based on current performance, one million hectares are released from rice production and an aggregate output of 16.1 million tonnes of rice is achieved. We assume that rice yields of the four districts with highest yield in 2006-07 (Birbhum, Barddhaman, Malda and Hugli) will reach 3.8 tonnes per hectare in 201112 (that is, a level equivalent to average yields in Punjab and Karnataka), rice yields in Bankura, Nadia, North 24 Parganas, Murshidabad, Purulia, West Medinipur, East Medinipur and Dakshin Dinajpur will reach 3.5 tonnes per hectare, rice yields in Uttar Dinajpur, South 24 Parganas and Haora will reach 3 million tonnes per hectare and rice yields in the remaining districts will reach 2.5 million tonnes per hectare. If 10 per cent of the total area under rice is released from the four districts with the highest yields, and 20 per cent of the area under rice is released from the remaining districts, a total of 1 million hectares of land can be diverted from rice to other crops. The total production of rice will be 16.1 million tonnes, an amount sufficient to meet the demand for rice in 2011-12 (Table 3).

4 Concluding Notes The answer to the question in the title of this paper is a ”Yes, but only if [. . .]” We have shown that, by the end of the 11th Plan period, if West Bengal is to maintain rice self-sufficiency and release even 250,000 hectares of land currently under rice cultivation, it must achieve an average rice yield of 2.94 tonnes per hectare, a target well within the capabilities of the rice technologies available in the State. In order to do so, however, the rate of growth of rice yields must be well above the rates of growth achieved after 1990. The yield levels required to release more than one million hectares of land for non-rice cultivation are also well within the capabilities of the technology currently available; indeed these are yields that have been achieved in West Bengal and elsewhere. To achieve such average yield levels for the State as a whole by 2011 requires, however, that the State recapture an earlier experience - that it achieve once again growth rates similar to those of the 1980s, the surge period in West Bengal agriculture.

References 1. Boyce, James K (1987), Agrarian Impasse in Bengal: Agricultural Growth in Bangladesh and West Bengal, 1949-1980, Oxford University Press, New York 2. Government of West Bengal (2001), Tenth Plan Document: Agriculture, Department of Agriculture, Kolkata

Food Security and Crop Diversification: Can West Bengal Achieve Both?

239

3. Census of India, Projected Population based on 2001 Census (2001 to 2026) available at ¡http://www.indiastat.com¿ 4. Government of India (2007), Agricultural Statistics at a Glance 2006-07, Department of Agriculture and Cooperation, available at ¡http://dacnet.nic.in/eands/agStat06-07.htm¿ 5. Government of West Bengal (2008), Economic Review 2007-08, Kolkata 6. Government of West Bengal (various issues), Economic Review, Kolkata 7. Government of West Bengal (various issues), Statistical Abstract, Bureau of Applied Economics and Statistics, Kolkata 8. IRRI (International Rice Research Institute) (2008), World Rice Statistics 2008, available at ¡http://www.irri.org/statistics¿ 9. Rawal, Vikas, Madhura Swaminathan and V. K. Ramachandran (2003), ”Agriculture in West Bengal: Current Trends and Directions for Future Growth”, Chapter prepared for the West Bengal State Development Report, State Planning Board, Kolkata 10. Sen, Abhijit (1992), ”Economic Liberalisation and Agriculture in India,” Social Scientist, 20 (11), November

Table 3 District-wise targeted yield, area and production of rice in West Bengal, 2011-12 Districts

Jalpaiguri Koch Bihar Darjiling Haora South 24 Parganas Uttar Dinajpur Dakshin Dinajpur Purba Medinipur Paschim Medinipur Purulia Murshidabad North 24 Parganas Nadia Bankura Hugli Malda Barddhaman Birbhum West Bengal

Area 2006 Yield 2006-07 Yield target Proposed area Reduced area Projected proreduction duction (million ha) (tonnes/ha) (tonnes/ha) (million ha) (million tonnes) 0.23 0.24 0.03 0.12 0.42 0.26 0.19 0.43 0.69 0.28 0.40 0.28 0.25 0.41 0.30 0.15 0.64 0.38 5.69

1.82 1.86 1.87 2.09 2.19 2.30 2.41 2.43 2.60 2.61 2.61 2.61 2.71 2.80 2.83 3.05 3.06 3.13 2.59

2.5 2.5 2.5 3 3 3 3.5 3.5 3.5 3.5 3.5 3.5 3.5 3.5 3.8 3.8 3.8 3.8 -

20 per cent 20 per cent 20 per cent 20 per cent 20 per cent 20 per cent 20 per cent 20 per cent 20 per cent 20 per cent 20 per cent 20 per cent 20 per cent 20 per cent 10 per cent 10 per cent 10 per cent 10 per cent -

0.18 0.19 0.03 0.09 0.33 0.21 0.15 0.34 0.55 0.22 0.32 0.22 0.20 0.33 0.27 0.14 0.58 0.35 4.70

0.46 0.48 0.06 0.28 1.00 0.62 0.52 1.20 1.94 0.79 1.11 0.78 0.70 1.14 1.02 0.53 2.20 1.31 16.13

Source: Computed from Government of West Bengal, Economic Review, 2007-08. Note: ”Medinipur” includes Purba and Paschim Medinipur; ”Dinajpur” includes Uttar and Dakshin Dinajpur.

240

V. K. Ramachandran, Madhura Swaminathan, and Aparajita Bakshi

Appendix Table District-wise exponential trend growth rates of area, production and yield of rice in West Bengal District Area Bankura Birbhum Barddhaman Koch Bihar Darjiling Dinajpur Hugli Haora Jalpaiguri Malda Medinipur Murshidabad Nadia Purulia 24 Parganas West Bengal

1.66 0.85 1.56 1.47 3.78 0.43 1.4 3.71 0.18* 2.52 0.93 1.93 3.56 2.97 0.84 1.4

1980-81 to 1989-90 Production Yield 8.69 7.86 7.21 5.12 3.21 6.07 5.04 8.74 2.4 6.24 8.25 8.45 10.59 7.47 7.15 7.32

7.22 7.16 5.75 3.65 -0.57* 5.64 3.72 5.09 2.32 3.69 7.39 6.64 7.08 4.5* 6.31 5.98

1990-91 to 1999-2000 Area Production Yield -0.32* 0.97 2.33 -1.21 -4.84 0.42 0.54* -0.83 -0.78 -1.09 0.6 -0.3* 0.99 -0.36* 0.77 0.37

2.03 3.85 3.58 0.09* -6.49 3.38 1.45 -1.49* 0.15* 1.57 2.06 1.58 1.66 2.09 1.38 2.08

2.31 2.83 1.23 1.29 -1.73* 2.97 0.94 -0.68* 0.87* 2.68 1.45 1.89 0.68 2.3 0.61* 1.71

Area

2000-01 to 2006-07 Production Yield

-2.25* 0.55* 0.1* -1.51 -0.87* -0.63* 2.45 1.01* -1.5 -1.77 -0.17* 4.13 -2.16* 0.32* -2.13 -0.34*

-1.61* 2.33 1.84 0.95* -0.11* 3.16 4.06 1.84* -0.52* 3.14 0.57* 6.72 -2.71* 1.83 -1.49 1.27

0.66* 1.92 1.77 2.44 0.84* 3.77 1.68 0.85* 1.04 4.9 0.72 2.71 -0.59* 1.58* 0.68 1.64

Note: * estimates are not significant at even 10 per cent level of confidence; all other estimates are highly significant. Estimates are based on three year moving averages.

Estimating Equivalence Scales Through Engel Curve Analysis Amita Majumder and Manisha Chakrabarty

Abstract This paper proposes a simple two-step estimation procedure for Equivalence scales using Engel curve analysis based on a single cross section data on household level consumer expenditure. It uses Quadratic Logarithmic (QL) preferences with the maintained hypothesis of Generalized Equivalence Scale Exactness (GESE) (Donaldson and Pendakur, 2004). The novelty of the proposed procedure is that it neither requires any assumption on the form in which demographic attributes enter into the system of demands, nor any algebraic specification of the underlying cost/utility functions. More importantly, it does not require a computationally heavy estimation of complete demand systems. As an illustrative exercise the methodology is applied to Indian consumer expenditure data.

1 Introduction Equivalence scale is defined as the relative cost of maintaining the same level of utility under different demographic regimes. There are several theoretical and structural problems in the calculation and interpretation of Equivalence scales and it is well known that Equivalence scales are identifiable only under explicit assumptions. Muellbauer (1974) noted that welfare comparison across households require unconditional Equivalence scales which is based on utility derived from both goods and household’s fertility-decision on having children. But traditional budget data allow us to calculate only the conditional Equivalence scales, where different Equivalence scales can be consistent with the same preferences. The issue of identifiability Amita Majumder Economic Research Unit, Indian Statistical Institute, 203 B. T. Road, Kolkata - 700108, India. e-mail: [email protected] Manisha Chakrabarty Indian Institute of Management,, Kolkata, India. e-mail: [email protected] and e-mail: [email protected]

241

242

Amita Majumder and Manisha Chakrabarty

of household Equivalence scale has been discussed in many studies, which include Pollak and Wales (1979, 1981, 1992), Deaton and Muellbauer (1986), Fisher (1987), Lewbel (1989), Deaton, Ruiz-Castillo and Thomas (1989), Blundell and Lewbel (1991, 1994), Dickens, Fry and Pashardes (1993), Blackorby and Donaldson (1994), Lewbel (1997), Pendakur (1999) and Lewbel and Pendakur (2006). Functional specification of the demographic vector augmented demand system plays an important role in the identification issue. Equivalence scales cannot be recovered from demand behavior in a single cross-section study (where there is no price variation) in case of a rank-two demand system with budget shares linear in logarithm of expenditure. Examples are PIGLOG1 systems such as the Almost Ideal Demand System or the Translog demand system [Muellbauer (1974), Blackorby and Donaldson (1994), Pashardes (1995), Phipps (1998)]. Introduction of price variation also cannot solve this problem due to limited covariance between prices and demographic characteristics (Dickens et al. (1993), Ray (1983)). Deaton, Castillo and Thomas (1989) suggested parameter restriction such as Demographic Separability2 (DS) as a remedial measure which imposes zero demographic substitution effect; but this restriction can yield biased estimates of Equivalence scales. On the other hand, a rank-three demand system or a rank-two model that allows for non-linear log-expenditure effects on the budget share enables estimation of identifiable scales, where scales are invariant to the utility level at which the welfare comparisons are made, without the restriction of DS (Pashardes (1995)). The property of invariance of Equivalence scales to the utility level has been termed Independent of Base (IB) by Lewbel (1989) and Equivalence Scale Exactness (ESE) by Blackorby and Donaldson (1994). Formally, Equivalence scales satisfy IB/ESE if and only if the cost function is separable in the utility level and the household attributes (Lewbel, 1989; Blackorby and Donaldson, 1993), implying that the Equivalence scales depend only on prices and demographic composition. Although this property is frequently used in the literature, there is no rationale for assuming that the norms of comparison should be the same for rich and poor households (Szulc, 2003; Donaldson and Pendakur, 2004)3. Donaldson and Pendakur argue that there may be two reasons why Equivalence scales should depend on total expenditure. First, because economies of household formation are associated with sharable commodities such as housing whose expenditure share decreases as total expenditure rises, it is reasonable to expect expenditure-dependent Equivalence scales for multi-person households to increase with expenditure. Second, because the consumption of many luxuries, such as eating in good restaurants or attending 1

The Price Independent Generalized Log-Linear (PIGLOG) systems are characterized by the cost function of the form c(u, p) = {b(p)}n a(p) where p is the price vector, b(p) is homogeneous of degree zero and a(p) is linear homogeneous in prices. 2 An item-group is said to be demographically separable from a demographic group, if changes in the demographic structure within the demographic group exert only income-like effects on the goods in the item-group. 3 In a cross-country study of Equivalence scales by Lancaster, Ray and Valenzuela (1999) wide variation in Equivalence scales across countries that span a wide range of per capita GNP has been observed.

Estimating Equivalence Scales Through Engel Curve Analysis

243

the theatre, are more enjoyable when done in groups, we may expect Equivalence scales for households with more than one member to decrease with expenditure4. They propose a generalization of ESE, which they call Generalised Equivalence Scale Exactness (GESE) that allows the scales to be different for rich and poor. They also show that if GESE is a maintained hypothesis, and the reference expenditure function is not PIGLOG, the equivalent expenditure function5 can be identified from demand behavior. In this paper we propose an estimation procedure for Equivalence scales using Engel curve analysis, based on a single cross section data on household level consumer expenditure, the underlying system being Quadratic Logarithmic (QL) (Lewbel, 1990) in a GESE set up. The novelty of our procedure is that it does not require any assumption on the form in which demographic attributes enter the system. Briefly, the estimation involves two steps. In the first step, the set of itemspecific Engel curves relating budget shares to the logarithm of income is estimated for different demographic groups in a single equation framework using household level consumer expenditure data.6 In the second step the Equivalence scale for each demographic group is estimated using the coefficients of the item-specific Engel curves, estimated in the first step, based on a pooled regression taking demographic groups and commodities as observations. The validity of ESE assumption is then tested under this general set up.7 The paper is organized as follows: Section 2 sets out the estimation procedure for the Equivalence scales; Section 3 describes the data used for the illustrative exercises done and presents the results; and finally, Section 4 concludes the paper.

2 The Proposed Procedure The general cost function underlying the Quadratic Almost Ideal Demand System (QUAIDS) of Banks, Blundell and Lewbel (1997) and the Generalized Almost Ideal 4 Recent works of Koulovatianos et al. (2005a, 2005b) based on survey data also report evidence that Equivalence scales are decreasing in income. 5 Equivalent expenditure for a household is the expenditure level which would make each member of the household as well off as the single adult reference household. Thus, Equivalence scale is actual expenditure divided by equivalent expenditure. 6 In fact, the proposed method does not require a computationally heavy estimation of complete demand systems. Equivalence scales in a system framework have been estimated by Pashardes (1995), Lancaster and Ray (1998), Szulc (2003), Majumder and Chakrabarty (2003), Lyssiotou and Pashardes (2004) and Donaldson and Pendakur (2006). 7 It may be pointed out that as per the existing literature the test of the ESE property is conclusive only in case of rejection as suggested by Blundell and Lewbel (1991), Blackorby and Donaldson (1993, 1994). Murthi (1994) tested the restriction implied by exactness in the context of different parametric forms of engel curves on Sri Lankan data and in most of the cases exactness was not rejected. Pashardes (1995), on the other hand, found rejection of the hypothesis on UK data for the model he proposed. Gozalo (1997) and Pendakur (1994) proposed different nonparametric tests of the IB restriction on engel curves. Gozalo statistically rejected IB while Pendakur did not reject.

244

Amita Majumder and Manisha Chakrabarty

Demand System (GAIDS) of Lancaster and Ray (1998), is of the form   b(p) . C(u, p) = a(p) exp (1/ ln u) − λ (p)

(1)

where a(p) is homogeneous of degree one in prices, b(p) and λ (p) are homogeneous of degree zero in prices and u is the level of utility. From (1), the demographic vector augmented Quadratic Logarithmic (QL) Indirect Utility Function can be written as: V (p, y, z) =



ln y − lna(p, z) b(p, z)

−1

− λ (p, z)

−1

,

(2)

where y is income and z is the vector of demographic characteristics. Donaldson and Pendakur (2004) showed that GESE with QL preference implies the following relations: ln a(p, z) = K(p, z) ln a0 (p) + ln G(p, z),

(3)

b(p.z) = K(p, z)b0 (p),

(4)

0

λ (p, z) = λ (p),

(5)

where a(.) is homogeneous of degree one in p, b(.) and λ (.) are homogeneous of degree zero in p, a0 (p) = a(p, z0 ), b0 (p) = b(p, z0 ), and λ 0 (p) = λ (p, z0 ), 0 being the reference household. It is evident from the above relationships that K(p, z) is homogeneous of degree zero in prices. The logarithm of Equivalence scale under GESE is given by: ln S(p, y, z) =

(K(p, z) − 1) ln y + ln G(p, z) . K(p, z)

(6)

S(.) is increasing (decreasing) in y if K(p, z) > 1(K(p, z) < 1)8 . ESE implies K(p, z) = 1, so that Equivalence scale is independent of income, and in that case ln S(p, z) = ln a(p, z) − lna0 (p).

(7)

Now, applying Roy’s identity to (2), the budget share equations are given by wi = αi (p, z) + βi (p, z) ln



   2 λi (p, z) y y + ln a(p, z) b(p, z) a(p, z)

(8)

a(p,z) δ ln b(p,z) δ λ (p,z) where αi (p, z) = δ ln δ ln pi , βi (p, z) = δ ln pi , λi (p, z) = δ ln pi . Now, given household level consumer expenditure data, one can define specific demographic groups and classify each household as a member of certain

8

A possible practical problem could be that for K(p, z) < 1, the Equivalence scale S(p,z) may turn out to be less than 1 for high values of y when lnG(p,z) is small.

Estimating Equivalence Scales Through Engel Curve Analysis

245

demographic group. Thus, for commodity group i and demographic group j the household-level budget share Eqs. (8) can be written as:

j wih

j

j

= αi (p, z ) + βi(p, z ) ln

!

j

yh a(p, z j )

"

λi (p, z j ) + b(p, z j )

! ! ln

j

yh a(p, z j )

""2

;

(9)

i = 1, 2, . . . , n; j = 0, 1, 2, . . . , J; h = 1, 2, . . . , H j ; where z j is the demographic vector and H j is the number of households in group j, respectively. Rearranging the terms, Eq. (9) can be written as wihj = [αi (p, z j ) − βi (p, z j ) ln a(p, z j ) + +[βi (p, z j ) − 2

λi (p, z j ) (ln a(p, z j ))2 ] b(p, z j )

λi (p, z j ) λi (p, z j ) ∗ j2 (ln a(p, z j )]y∗h j + y , j b(p, z ) b(p, z j ) h

(10)

where y∗h j = ln(yhj ). Note that, for a single cross section data prices may be assumed fixed. Hence, Eq. (10) can be written as 2

wihj = [αij − βij π j + λi∗ j π 2j ] + [βij − 2λi∗ j π j ]y∗h j + λi∗ j y∗h j , j

∗j

j

where π j = ln a(p, z j ),αi = αi (p, z j ), βi = βi (p, z j ) and λi = Equivalently, j j j ∗j j ∗ j2 wih = ai + bi yh + ci yh , where

(11)

λi (p,z j ) . b(p,z j )

(12)

ai = αi − βi π j + λi π 2j ,

∗j

(13)

bij = βij − 2λi∗ j π j

, (14)

j

j

j

cij = λi∗ j .

(15)

Thus, using cross-section data, the following budget share equation for item i and demographic group j can be estimated (first stage estimation) taking households belonging to the demographic group as observations: 2

wihj = aij + bij y∗h j + cij y∗h j + εihj .

(16)

In order to estimate the Equivalence scales from (6) we need to have estimates of K(p, z) and ln G(p, z), which can be obtained from the parameter estimates of Eq. (16) and Eqs. (3)-(5) as follows. Note from Eqs. (8), (11) and (15) that cij = λi∗ j =

δ λ (p, z j ) 1 , δ ln pi b(p, z j )

246

Amita Majumder and Manisha Chakrabarty

or, cij = Again,

δ λ 0 (p) 1 δ ln pi b(p,z j ) ,

since by GESE λ (p, z j ) = λ 0 (p) (from Eq. (5)).

c0i = λi∗0 = Hence, or,

c0i j

ci

=

b(p,z j ) b0 (p)

δ λ 0 (p) 1 . δ ln pi b0 (p)

= K(p, z j ) by Eq. (4) j

c0i = K j ci .

(17) j

The estimates of K j can be obtained by regressing cˆ0i on cˆi (without intercept) for each j, taking items as observations, i = 1, 2, . . . , n.9 Hence ESE can be tested by testing the hypothesis that the slope coefficient =1 in this regression. We now propose a simple method for estimating ln G j (= π j − K j π0 ) under the additional assumption that βij = βi + γ j , say. Now note from (14) and (15) that bij − b0i = (γ j − γ0 ) − 2cij π j + 2c0i π0 .

(18)

Given the estimates bˆ ij , cˆij , Eq. (18) is written as bˆ ij − bˆ 0i = γ ∗j + 2cˆij (K j π0 − π j ) + eij , i = 1, 2, . . . , n; j = 1, 2, . . . , J

(19)

j

using Eq. (17), where γ ∗j = γ j − γ0 and ei is a composite error term. Here again, it may be pointed out that although the relationship in (18) is exact, replacement of the variables by their estimated values yields a regression set-up. eij is a linear combination of the individual errors of estimation of bij , cij , b0i , c0i . Using arguments similar to those used in estimation of K j , estimates of ln G j (= π j − K j π0 ) and γ ∗j are obtained from a pooled regression of demographic groups and commodities. Finally, given income y˜ and demographic group j, Equivalence scale under GESE and a given price level, can be estimated using the following expression: ln S(y, ˜ z j) =

ˆ j (Kˆ j − 1) ln y˜ + lnG , j Kˆ

ˆ j are the estimated values obtained from (17) and (19). where Kˆ j and lnG To obtain the standard error of this generalized expenditure dependent Equivalence scale, for which the analytical expression is not possible to derive, we use bootstrap method to obtain the approximate standard errors. For estimation of Equivalence scales under ESE, we proceed as follows. Note that under ESE, c0i = cij ∀ j. After having obtained cˆ0i from estimation of Eq. (16) for the reference group, the budget shares for the other demographic groups are now estimated by putting in this restriction for each j. This yields estimates of aˆij s and bˆ ij s under ESE. Estimates of (π j − π0 ) are then obtained from the following regression 9

See Appendix for an explanation for a regression set-up although the relationship in (17) is exact.

Estimating Equivalence Scales Through Engel Curve Analysis

247

equation10 bˆ ij − bˆ 0i = 2cˆ0i (π0 − π j ) + eij , i = 1, 2, . . . , n; j = 1, 2, . . . , J.

(20)

3 Data and Results The data for the present analysis have been taken from the data collected by the National Sample Survey Organization (NSSO), India, in its 61st round enquiry on Employment-Unemployment during July, 2004 - June, 2005. The data provide information on household characteristics, demographic particulars and employment status at the individual level within each household surveyed. In addition, this survey also provides data on consumption expenditure on several detailed items and total expenditure. Since the estimation is based on a single cross section data, prices are assumed fixed. We also assume that all demographic groups face the same price. Data for only the urban sector have been used to illustrate the estimation procedure described in Section 2. The all India urban data we consider here consist of 5959 households comprising only three types of households based on demographic composition11. The reference households are taken to be those consisting of 2 adults only. The two other household-types are households consisting of (i) 2 adults plus 1 male child (0-17 years) and (ii) 2 adults plus 1 female child (0-17 years). These groups consist of 3321, 1513, 1125 households, respectively. We consider 10 commodity groups, namely, (i) Cereals and cereals substitutes, (ii) Milk and milk products, (iii) Edible oils, (iv) Meat, fish & egg, (v) Sugar & salt, (vi) Other food, (vii) Pan, tobacco & intoxicants, (viii) Clothing & footwear, (ix) Services and (x) Other non-food12. The estimation procedure involves first estimating Eq. (16) for 10 commodity groups (i = 1, 2, . . . , 10) and for the three household types ( j = 0, 1, 2) mentioned earlier13 . It is observed that except in cases of Other food, Pan, tobacco & intoxicants and Clothing & footwear, for all other items most of the coefficients turn out to be significant. The estimated values of K for two household types, viz., households with 2 adults plus 1 male child and households with 2 adults plus 1 female child, taking the twoadult household as numeriare, are reported in Table 1. The results of the test for ESE 10

There will be no constant term here in view of the fact that under ESE b(p, z) = b0 (p), which implies βi j = βi for all j. 11 Here All-India refers to 15 major states, viz., Andhra Pradesh, Assam, Bihar, Gujarat, Harayana, Punjab, Karnataka, Kerala, Madhya Pradesh, Maharashtra, Orissa, Rajasthan, Tamil Nadu, Uttar Pradesh and West Bengal. 12 “Other food” includes beverages, processed foods, vegetables and fruits. “Other non food” includes fuel and light, entertainment, education, medical, transport, rent & tax, personal care, toilet article, sundry article.These items have been merged to avoid too many zero observations. 13 Owing to shortage of space the parameter estimates have not been presented here. The estimates may be made available to interested readers.

248

Amita Majumder and Manisha Chakrabarty

are also presented. It is evident from the results that ESE is rejected at 5% level of significance for this data set. Our next step of estimation involves estimating Eq. (19), from which estimate of log G j = −(K j π0 − π j ) can be obtained directly as the coefficient of 2cˆij . The estimated values of logG j turn out to be 2.946 and 2.876 for demographic groups 1 and 2, respectively. Finally, we calculate log Equivalence scale for demographic group j at income j ˜ Gj level y˜ from the expression ln S(y, ˜ z j ) = (K −1)Klnjy+ln by using the estimated values j j of both K and log G . The Equivalence scales at different levels of income for the two household types are presented in Table 2. The minimum value for income has been chosen to be a value close to the sample minimum. We report Equivalence scale up to the income level of Rs.10,000/- basically for two reasons. First, only 2% of the sample households fall beyond this level; and second, the value of the Equivalence scale starts to become implausible (less than one) at this level. However, as pointed out in footnote 8, this could be due to a problem with the GESE set up itself. Note that given K j < 1, the Equivalence scale is a decreasing function of income by construction as observed in Table 2. This corroborates the findings from the studies of Donaldson and Pendakur (2004, 2006) using Canadian data. Similar results have been obtained through a subjective (survey) method for evaluating Equivalence scales using data from Germany and France (Koulovatianos et al., 2005a) as well as from Cyprus (Koulovatianos et al., 2005b). The result implies that the cost of raising a child relative to the income level is much higher for a poorer household than for a richer household, a scenario that fits well into the Indian context. The fact that a child is indeed a burden for a poor household in India, is reflected through the high values of Equivalence scales at the lower end of the income distribution. The bootstrapped estimates of standard errors (from 2000 re-samples) reveal that except for the lowest income group, almost all values are significant. For comparability with other Indian studies, the ESE Equivalence scales are also presented in Table 2. The values 0.319 for boys and 0.376 for girls indicate that boys cost less than girls in an overall sense. This observation is in line with the finding by Lancaster, Ray and Valenzuela (1999) who obtain Equivalence scales (averaged over three children groups, viz., 0-4 years, 5-14 years, 15-17 years) to be 0.171 for boys and 0.192 for girls under a Rank 3 demand system for India. Similar pattern has also been noted by Chakrabarty (2000) for the state of Maharashtra (India). Here the Engel Equivalence scale for a boy (0-14 years) turns out to be 0.502 and that for a girl (0-14 years) turns out to be 0.569, and the corresponding Rothbarth scales turn out to be 0.047 and 0.069 under a QL budget share curve.

4 Conclusion In this paper we have proposed a simple estimation procedure for Equivalence scales in a GESE set up, using Engel curve analysis based on a single cross section data on household level consumer expenditure, where the budget shares are Quadratic Log-

Estimating Equivalence Scales Through Engel Curve Analysis

249

arithmic (QL) in income. The novelty of our procedure is that no explicit algebraic form for the coefficients of the Engel curves (which are functions of demographic variables14) is required. In other words, the proposed method, which is a two-step procedure for estimating Equivalence scales, does not require any assumption on the form in which demographic attributes enter the system of demands. More importantly, the proposed method does not require a computationally heavy estimation of complete demand systems. As an illustrative exercise the methodology is applied to a limited number of demographic groups where children of 0-17 years of age have been clubbed into one group. The procedure is, however, extendable to any number of groups, subject to availability of data in each demographic group. From the test of validity of ESE assumption it emerges that ESE is rejected on Indian data and the generalized Equivalence scale is found to be inversely related to income, a result that corroborates the findings of other studies on developed and underdeveloped countries. It is also observed that boys cost less than girls. Acknowledgements The authors thank Professor Dipankor Coondoo of the Indian Statistical Institute for helpful suggestions. The authors would also like to thank Professor Gauthier Lanot of Keele University, UK for his immense help and constructive suggestions. The usual disclaimer applies.

References 1. Banks, J., R. Blundell and A. Lewbel (1997): Quadratic Engel Curves and Consumer Demand, Review of Economics and Statistics, 79, 527-539 2. Blackorby, C. and D. Donaldson (1993): Adult-equivalence Scales and the Economic Implementation of Interpersonal Comparisons of Well-being, Social Choice and Welfare, 10, 335-361 3. Blackorby,C. and D. Donaldson (1994): Measuring the Cost of Children: A Theoretical Framework in Blundell, R., I. Preston and I. Walker (eds.) The Measurement of Household Welfare, Cambridge University Press, Cambridge, 51-69 4. Blundell, R. and A. Lewbel (1991): The Information Content of Equivalence Scales, Journal of Econometrics, 50, 49-68 5. Chakrabarty, M. (2000): Gender-Bias in Children in Rural Maharashtra - An Equivalence Scale Approach, Journal of Quantitative Economics, 16(1), 51-65 6. Deaton, A. and J. Muellbauer (1986): On Measuring Child cost: With Application to Poor Countries, Journal of Political Economy, 79, 481-507 7. Deaton, A., J. Ruiz-Castillo and D. Thomas (1989): The Influence of Household Composition on Household Expenditure Patterns: Theory and Spanish Evidence, Journal of Political Economy, 97, 179-200 8. Dickens, R., V. Fry and P. Pashardes (1993): Nonlinearities, Aggregation and Equivalence Scales, Economic Journal, 103, 359-368 9. Donaldson, D. and K. Pendakur (2004): Equivalent-expenditure Functions and Expenditure Dependent Equivalence Scales, Journal of Public Economics, 88, 175-208 10. Donaldson, D. and K. Pendakur (2006): The Identification of Fixed Costs from Consumer Behaviour, Journal of Business and Economic Statistics, 24, 255-265 14

As we are dealing with cross section data, prices are assumed fixed.

250

Amita Majumder and Manisha Chakrabarty

11. Fisher, F. M. (1987): Household Equivalence Scales and Interpersonal Comparisons, Review of Economic Studies, 54, 519-524 12. Gozalo, P. (1997): Nonparametric Bootstrap Analysis with Applications to Demographic Effects in Demand Functions, Journal of Econometrics, 81, 357-393 13. Koulovatianos, C., C. and U. Schmidt (2005a): On the Income Dependence of Equivalence Scales, Journal of Public Economics, 89, 967-996 14. Koulovatianos, C., C. and U. Schmidt (2005b): Properties of Equivalence Scales in Different Countries, Journal of Economics, 86,19-27 15. Lancaster, G. and R. Ray (1998): Comparison of Alternative Models of Household Equivalence Scales: The Australian Evidence on Unit Record Data, The Economic Record, 74, 1-14 16. Lancaster, G., R. Ray and M. R. Valenzuela (1999): A Cross-Country Study of Equivalence Scales and Expenditure Inequality on Unit Record Household Budget Data, Review of Income and Wealth, 45, 455-482 17. Lewbel, A. (1989): Household Equivalence Scales and Welfare Comparisons, Journal of Public Economics, 39, 377-391 18. Lewbel, A. (1990): Full Rank Demand Systems, International Economic Review, 31, 289-300 19. Lewbel, A. (1997): Consumer Demand Systems and Household Equivalence Scales in Pesaran, M.H. and M.R. Wickens (eds.) Handbook of Applied Econometrics, Vol. II (Microeconomics), 167-201 20. Lewbel, A. and K. Pendakur (2006): Equivalence Scales: Entry for The New Palgrave Dictionary of Economics, 2nd Edition 21. Lyssiotou, P. and P. Pashardes (2004): Comparing the True Cost of Living Indices of Demographically Different Households, Bulletin of Economic Research, 56, 21- 39 22. Majumder, A. and M. Chakrabarty (2003): Relative Cost of Children: The Case of Rural Maharashtra, India, Journal of Policy Modeling, 25, 61-76 23. Muellbauer, J. (1974): Household Composition, Engel Curves and Welfare Comparisons Between Households: A Duality Approach, European Economic Review, 5, 103-122 24. Murthi, M. (1994): Engel Equivalence Scales in Sri Lanka: Exactness, Specification, Measurement Error in Blundell, R., I. Preston and I. Walker (eds.) The Measurement of Household Welfare, Cambridge University Press, Cambridge 25. Pashardes, P. (1995): Equivalence Scales in a Rank-3 Demand System, Journal of Public Economics, 58, 143-158 26. Pendakur, K. (1999): Estimates and Tests of Base-independent Equivalence Scales, Journal of Econometrics, 88, 1-40 27. Phipps, S. (1998): What Is the Income Cost of Child? Exact Equivalence Scales for Canadian Two-Parent Families, Review of Economics and Statistics, 80, 157-164 28. Pollak, R. and T. Wales (1979): Welfare Comparisons and Equivalence Scales, American Economic Review, 69, 216-221 29. Pollak, R. and T. Wales (1981): Demographic Variables in Demand Analysis, Econometrica, 49, 1533-1551 30. Pollak, R. and T. Wales (1992): Demand System Specification and Estimation, Oxford University Press, London 31. Ray, R. (1983): Measuring the Costs of Children: An Alternative Approach, Journal of Public Economics, 22, 89-102 32. Szulc, A. (2003): Is It Possible to Estimate Reliable Household Equivalence Scales?, Statistics in Transition, 6, 589-611

Estimating Equivalence Scales Through Engel Curve Analysis

251

Table 1 Estimated values of K for two household types ( j = 1, 2) Household type 1 Household type 2 (2 adults + 1 male child (0-17 years)) (2 adults+1 female child (0-17 years)) 0.6605 0.6842 (Standard error = 0.1150) R2 =0.778 (Standard error = 0.0809) R2 =0.884 H0 : K = 1 H0 : K = 1 |t9 | = 2.95 , p-value: 0.016 |t9 | = 3.91 , p-value: 0.004

Table 2 Equivalence scales Income Level (Rs.) 1000 1500 2500 5000 10000 ESE scale

GESE Equivalence Scales Household type 1 Household type 2 (2 adults + 1 male child (0-17 years)) (2 adults + 1 female child (0-17 years)) 2.504 2.696 (4.314) (2.534) 2.034 2.230** (1.314) (0.942) 1.566*** 1.757 *** (0 .349) (0.286) 1.098*** 1.271 *** (0.261) (0. .211) 0.770** 0.919*** (0.299) (0. 264) 1.319 1.376

Note: A two-adult household has a value 1. Bootstrapped standard errors are in parentheses. *, ** and *** indicate significance at 10%, 5% and 1% levels, respectively.

Appendix From Eq. (17) we have c0i = K j cij . To estimate K j we replace cij s by their estimated values. Let cˆij = cij + δij , where δij s are the errors. Then, cˆ0i − δi0 = K j (cˆij − δij ), Or, cˆ0i = K j cˆij + δi0 − δij K j , Or, cˆ0i = K j cˆij + δij∗ , say.

(∗)

Note that the regression error is assumed to be present only because of estimation errors in the first stage. Since the first stage estimates are unbiased and consistent, asymptotically Eq. (*) would hold exactly. Now, as the observations here are over items, the itemwise errors can be assumed to be uncorrelated with the regressor, as the estimation errors originate from estimation of itemwise budget shares separately.

Testing for Absolute Convergence: A Panel Data Approach Samarjit Das and Manisha Chakrabarty

Abstract This paper develops a new test for absolute convergence under cross sectional dependence. A detailed Monte Carlo study is then carried out to evaluate the performance of this test in terms of size and power. From our Monte Carlo simulations it turns out that the test performs well with respect to size and power. The proposed test is then applied to find whether there is absolute convergence in terms of real per capita income across various countries in OECDs. Using our test which is robust to cross sectional dependence, it is found that various countries in OECD are absolutely convergent.

1 Introduction The debate on whether low income countries tend to catch up with high income countries, commonly termed “convergence”, is one of the crucial issues of recent empirical growth literature. The empirical literature uses single equation regressions to study economic convergence across countries and regions, based on the popular notions of β and σ - convergence (Barro and Sala-i-Martin, 1992). These methods fail to allow for unobserved (and persistent) differences across countries, and are susceptible to endogeneity bias and spatial autocorrelation (Temple, 1999). Subsequent research, based either on long-run behavior of output differences across countries (Bernard and Durlauf (1995, 1996)) or on panel unit root tests (Evans and Karras (1996a, b)), have motivated a new generation of convergence tests which address some of these serious econometric issues. Samarjit Das Economic Research Unit, Indian Statistical Institute, 203 B.T. Road, Kolkata-700108, India. e-mail: [email protected] Manisha Chakrabarty Indian Institute of Management, Kolkata, India. e-mail: [email protected]

252

Testing for Absolute Convergence: A Panel Data Approach

253

The present paper attempts to examine convergence hypothesis by using panel unit root tests in the framework of Evans and Karras (1996 a, b). However, most of these panel unit root literature including Evans and Karras assume that the individual time series in the panel are cross-sectionally independent. Common sense suggests that due to common trade policies, near free factor movements, regional proximity, and common currency, countries may be interdependent. Recent studies by O’Connell (1998) and Breitung and Das (2005) have highlighted that, in the presence of contemporaneous correlation, standard panel unit root tests suffer from severe oversize; more frequently such tests will suggest acceptance of convergence hypothesis. This problem has recently given major impetus to the research on panel unit root tests allowing for cross sectional dependence (see Breitung and Pesaran (2007) for recent development). In this paper, we adopt a two-step testing procedures to examine the nature of convergence as in Evans and Karras (1996 a). In the first step we apply panel unit root tests which are robust to cross sectional dependence to test for conditional convergence. If the conditional convergence hypothesis is accepted, then in the second step, absolute convergence hypothesis is examined. To the best of our knowledge, there is no test for absolute convergence which incorporate cross sectional dependence. We show that, the F test as proposed by Evans and Karras (1996 a) is severely biased and reject the null hypothesis of absolute convergence too frequently when data are cross sectionally dependent. This paper develops a new test for absolute convergence under cross sectional dependence. The proposed test is then applied to find whether there is absolute convergence in terms of real per capita income across various countries in OECDs. The paper has been organized as follows. The proposed test of absolute convergence is discussed in Section 2. In Section 3, the findings of the Monte Carlo study are summarized. The findings of the empirical example are discussed in Section 4. The paper concludes in Section 5.

2 Tests for Convergence in Panel Framework with Cross Sectional Dependence: Conditional and Absolute Let zit be the logarithm of output per worker for economy i during period t. A group of countries 1,2 .....N+1 is said to converge if and only if zit − z jt is stationary for every pair, (i, j). Convergence is said to be absolute if and only if the unconditional mean of zit − z jt is zero for every pair, (i, j).

254

Samarjit Das and Manisha Chakrabarty

2.1 Tests for Conditional Convergence under Cross Sectional Dependence In this section we briefly discuss various panel unit root tests which are robust to cross sectional dependence. Consider the collection of time series {yi0 , . . . , yiT }i=1,...,N that is generated by a simple AR(pi )1 pi

Δ yit = µi + φi yi,t−1 + ∑ νi j Δ yi,t− j + εit ,

(1)

j=1

t = 1, 2, . . . , T, where the starting values yi0 . . . yi,−pi are set equal to zero. To take i care of autocorrelations, Eq. (1) includes the term ∑ pj=1 νi j Δ yi,t− j . Individual specific intercepts µi have also been included because series means are generally not zeroes. We make the following assumption: Assumption 1. The error vector εt = [ε1t , . . . , εNt ]′ is i.i.d. with E(εt ) = 0 and E(εt εt′ ) = Ω , where Ω is a positive definite matrix with eigenvalues λ1 ≥ · · · ≥ λN and λ1 < c < ∞. Furthermore, E(εit4 ) < ∞ for all i and t. The null hypothesis of unit root is H0 : φi = 0 ,

(2)

for all i, that is, all time series are independent random walks with non-zero drifts. Against the null hypothesis of ‘no convergence’, two different kind of alternatives may be considered. H1a : φ1 = φ2 = . . . = φN = φ < 0

(3)

H1b : φ1 < 0, φ2 < 0, . . . , φN0 < 0,

(4)

or

with N0 ≤ N.

2.2 Tests for Cross-Sectional Dependence As there is no apriori knowledge of spatial or weighting matrix, the lagrange multiplier (LM) kind of test as proposed by Breusch and Pagan (1980) may be more appropriate in the panel unit root context. LM test is used to test for cross-sectional 1

yit = zit − zN+1,t . zN+1,t is taken as numeraire. Any individual series can be taken as numeraire. However, it is always better to consider the richest or poorest country from the group as the numeraire to have power gain.

Testing for Absolute Convergence: A Panel Data Approach

255

dependence in regression framework where the number of equations (N) is finite but time dimension (T ) is infinite. However, simple modification2 of the original LM test provides normal distribution under very large N as opposed to chi-square distribution of LM test for finite N.

2.3 Tests for Absolute Convergence under Cross-Sectional Dependence In this section, we assume that panel unit root tests reject the null hypothesis of unit roots in yit . That implies that the convergence holds and the convergence is conditional one. Hence all the N series, yit , are stationary with possibly non-zero means. As in Evans and Karras (1996 a, b), test for unconditional convergence is essentially now a joint test for zero mean for all the underlying series. The null hypothesis is H0 : µi = 0 , (5) for all i, that is, all time series have mean zero. Against the above null hypothesis of mean zero, two different kinds of alternatives may be considered. H1a : µ1 = µ2 = . . . , = µN = µ = 0,

(6)

H1b : µ1 = 0, and/or, µ2 = 0, and/or, . . . , and/or, µN = 0.

(7)

or

To develop our test, first construct the pre-whitened series as pi

xit = Δ yit − φˆi yi,t−1 − ∑ νˆi j (Δ yi,t− j ).

(8)

j=1

The parameter estimates, φˆi and νˆi j are OLS estimates obtained by running N separate regressions as in (1). We have the following result under Assumption 1. Theorem 1. Let yt be generated as in (1) with φi < 0. If T → ∞, xit = µi + εit + o p(1).

2

See Pesaran (2004) for more discussion and for small sample performance.

(9)

256

Samarjit Das and Manisha Chakrabarty

Proof: pi

xit = Δ yit − φˆi yi,t−1 − ∑ νˆi j (Δ yi,t− j ) j=1

pi

= µi + (φi − φˆi )yi,t−1 + ∑ (νi j − νˆi j )Δ yi,t− j + εit j=1

= µi + εit + o p (1). In vector form we can express the above as xt = µ + εt + o p(1), where xt = (x1t , x2t , . . . , xNt )′ and µ = (µ1 , µ2 , . . . , µN )′ .



Note that, since all the parameter estimates are consistent,(φi − φˆi ) and (νi j − νˆi j ) are all O p (T −1/2 ). With this backdrop we need to develop tests which are robust to cross sectional dependence. We propose the following test statistic as T

∑ 1′ xt

t=1 trob = ( ,  1) T (1′ Ω

(10)

where xt = (x1t , x2t , . . . , xNt )′ , 1′ = (1, 1, . . . , 1), and = 1 Ω T

T

∑ xt xt′ .

t=1

In the following Theorem it is shown that trob has a standard normal limiting distribution under H0 . Theorem 2. Let yt be generated as in (1) with Ω = E(εt εt′ ). If T → ∞ is followed by N → ∞, then trob is asymptotically distributed as N(0, 1). Proof: First, we decompose the covariance matrix as Ω = V Λ V ′ , where Λ is a diagonal matrix on the leading diagonal and V is the matrix of eigenvectors. Let zt = [z1t , . . . , zNt ]′ = Λ −1/2V ′ xt such that zt is a vector of mutually uncorrelated random variables with unit variances and T

d

T −1/2 ∑ zit → N(0, 1), t=1

T

cNT = N −1/2 T −1/2 ∑ 1′ xt . t=1

Testing for Absolute Convergence: A Panel Data Approach

257

If T → ∞ is followed by N → ∞ we have d

cNT → N(0, λ 2 , ) where λ 2 = lim N −1 ∑ λi2 δi2 . As T → ∞ it follows N

 1 = N −1 1′ Ω 1 + o p(1) = N −1 ∑ λ 2 δ 2 + o p(1). dNT = N −1 1′ Ω i i i=1

√ p If T → ∞ is followed by N → ∞ we have dNT → λ 2 . It follows that trob = cNT / dNT has a standard normal limiting distribution.  The following theorem presents the asymptotic distribution of the tests under the alternative hypotheses. Theorem 3. Let yt be generated as in (1) with Ω = E(εt εt′ ). Assume that µ = lim N −1/2 1′ µ < ∞. If T → ∞ is followed by N → ∞, then trob is asymptotically distributed as N(d, g). Proof: T

T

√ ′ N −1/2 T −1/2 ∑ 1′ εt T1 µ t=1 t=1  =  trob = ( . + −1 ′ ′  1 ′  1) N 1Ω 1 Ω1 T (1 Ω ∑ 1′ xt

Now as in Theorem 2,

T

cNT = N −1/2 T −1/2 ∑ 1′ εt . t=1

If T → ∞ is followed by N → ∞ we have d

cNT → N(0, λ 2 ), where λ 2 = lim N −1 ∑ λi2 δi2 . As T → ∞ it follows that N

 1 = N −1 ∑ λ 2 δ 2 + N −1 (1′ µ )2 + o p(1). dNT = N −1 1′ Ω i i i=1

p

If T → ∞ is followed by N → ∞ we have dNT → λ 2 + µ 2 , where

258

Samarjit Das and Manisha Chakrabarty

µ 2 = lim N −1 (1′ µ )2 . √



Hence trob is asymptotically distributed as N(d, g), where d = lim √T 1 µ √

= lim √T N As d

−1/2 1′ µ

= lim

1 N −1 1′ Ω → ∞, the trob

√ Tµ λ 2 +µ 2

→ ∞, and g =

λ2 . λ 2 +µ 2

1 1′ Ω



will actually diverge giving power advantage.

3 Small Sample Performance In this section, we present details of a simulation experiment to investigate the finite sample performance of the proposed test statistic. We compare our test with that of Evans and Karras (1996 a, b) For the Monte Carlo simulations, we consider the following data generating process: DGP : yit = µi + αi yi,t−1 + uit , where the starting values yi0 and yi,−1 are set equal to zero. The parameter αi are drawn from a uniform [0.2, 0.7] distribution. Such choice of short run dynamics parameter introduces heterogeneity in the data under both null and alternative hypotheses. The error vector ut = [u1t , u2t , . . . uNt ]′ is drawn from iid N(0, Σ ). The parameters of the N × N matrix Σ is also drawn randomly using uniform [0, 1] distribution. As there is no natural model to generate cross sectional dependence, the parameters of the N × N covariance matrix Σ is generated randomly by using Σ = SS′ , where the elements of S are drawn randomly from a uniform [0, 1] distribution. Such data generating processes ensure various aspect of heterogeneity in the data. For the purpose of power calculation, µi is assumed to follow a U(0, 0.25) distribution under the alternative hypothesis.3 For all tests, data have been generated by 10,000 replications of the model. We consider combinations of the sample dimensions N and T that are generally available in practice. Table 1 reports the empirical sizes for our test, trob and F-test, whereas Table 2 presents empirical powers. From Table 1, it is evident that the robust test performs quite well. It achieves nominal sizes. However, the F-test suffer from severe oversize distortions. From Table 2 it is quite evident that robust test also performs quite well in terms of power; power increases as T increases.

3

We also conducted a small simulation study with µi ∼ U(−0.25, 0.50). The test was found to have reasonable power in all such cases.

Testing for Absolute Convergence: A Panel Data Approach

259

4 Empirical Findings In this section we first attempt to test for conditional convergence using panel unit root tests which are cross sectionally dependent. If the hypothesis of conditional convergence is supported by evidence, we will test for unconditional convergence by using our proposed test. To examine the growth convergence, per capita GDP for the time period 1950-2001 have been considered.4 All the GDP value calculated in 1990 US dollar as base. We take the 30 OECD countries.5 United States has been considered as numeraire. Table 3 summarizes the results of panel unit roots tests under cross-sectional dependence. These tests implicitly assume that cross-sectional dependence is of arbitrary form and ‘weak’ in nature. For all these tests, individual specific intercepts are incorporated. For bootstrap based tests, we have considered 5000 bootstrap replication. All tests suggest that the per capita income (relative to a common numeraire, USA) are jointly stationary, implying conditional convergence. This convergences may be termed as conditional convergence as non-zeroes intercepts have been allowed. The possible presence of common factors may provide a different conclusion. Therefore, we need to decompose the data into factors and idiosyncratic components. We have considered all six criteria of Bai and Ng (2002) to select optimal number of factors, searching for over 5 possible factors. Interestingly all six criteria uniformly suggest presence of only one factor. We present Moon-Perron (2003) (MP) test and Direct Dickey-Fuller (DDF) test on the estimated factor and the robust test as developed by Breitung and Das (2008) on the series as a whole. Table 4 summarizes panel unit roots tests under common factor structure. All three tests uniformly suggest convergence across member countries in OECDs when the series are decomposed into common factors and idiosyncratic components. After being confirmed about conditional convergence, we now turn into examining whether convergence is indeed an absolute one or not. To this end, we apply our robust test. The valus of the test statistic turns out to be -0.90. This finding provides strong evidence in favour of absolute convergence for OECD countries.

5 Conclusions In this paper, we have developed a test for absolute convergence which is robust to cross sectional dependence. A detailed Monte Carlo study has been carried out to evaluate the performance of the proposed test in terms of size and power. From 4

Virtually all the data are derived from The World Economy: Historical Statistics, OECD Development Centre, Paris 2003, which contains detailed source notes. See also The World Economy: A Millennial Perspective, OECD Development Centre, Paris 2001. 5 Australia, Austria, Belgium, Canada, Czech Republic, Denmark, Finland, France, Germany, Greece, Hungary, Iceland, Ireland, Italy, Japan, Korea, Luxembourg, Mexico, Netherlands, New Zealand, Norway, Poland, Portugal, Slovak Republic, Spain, Sweden, Switzerland, Turkey, United Kingdom, United States.

260

Samarjit Das and Manisha Chakrabarty

our Monte Carlo simulations it has been evidenced that the test performs well with respect to size and power. Based on the Monte Carlo study it has been found that the F test as proposed by Evans and Karras (1996 a) is severely biased and it rejects the null hypothesis of absolute convergence too frequently when data are cross sectionally dependent. A two-step testing procedures to examine the nature of convergence as in Evans and Karras (1996 a) has been adopted. We have suggested application of panel unit root tests which are robust to cross sectional dependence to test for conditional convergence. If the conditional convergence hypothesis is accepted, one might then conduct a test to examine whether the conditional convergence is indeed an absolute convergence or not. As an illustration we have then applied the test to a data set comprising 30 countries from OECDs. Various tests of cross sectional dependence have shown that these countries are strongly cross sectionally dependent. This has led us to apply tests which are robust to cross sectional dependence. In this testing procedure, we have first applied various panel unit root tests which are robust to cross sectional dependence in order to examine whether there have been convergence (conditional) in terms of real per capita income across various countries in OECDs. We have found that uniformly all tests evidence in favour of conditional convergence. Once conditional convergence has been established, we have applied our proposed test to examine whether the convergence is absolute or not. Applying our test which has been shown to be robust to cross sectional dependence, we have found that various countries in OECDs are in the mode of absolute convergence.

References 1. Bai, J. and S Ng (2002): Determining the Number of Factors in Approximate Factor Models, Econometrica, 70, 191-221 2. Bai, J. and S Ng (2004): A Panic Attack on Unit Roots and Cointegration, Econometrica, 72, 1127-1177 3. Barro, R.J. and X. Sala-i-Martin, 1992, Convergence, Journal of Political Economy, 100, 223251 4. Bernard, A.B. and S.N, Durlauf, 1995, Convergence of international output, Journal of Applied Econometrics, 10, 97-108 5. Bernard, A.B. and Durlauf, S.N. (1996): Interpreting Tests of Convergence Hypothesis, Journal of Econometrics, 71, 161-173 6. Breitung, J. and S. Das (2005): Panel Unit Root Tests Under Cross Sectional Dependence, Statistica Neerlandica, 59, 1-20 7. Breitung, J. and S. Das (2008): Testing for Unit Roots in Panels with a Factor Structure, Econometric Theory, vol-24, 88-108 8. Breitung, J. and H. Pesaran (2008): Unit Roots and Cointegration in Panels, in: L. Matyas and P. Sevestre (eds), The Econometrics of Panel Data: Fundamentals and Recent Developments in Theory and Practice, Kluwer Academic Publishers, Chap. 9, p. 279-322 9. Breusch, T. S. and A. R. Pagan (1980): The Lagrange Multiplier Test and Its Application to Model Specification in Econometrics, Review of Economic Studies, 47, 239-253 10. Chang, Y. (2004): Bootstrap Unit Root Tests in Panels with Cross-Sectional Dependency, Journal of Econometrics, 120, 263-293 11. Chang, Y. (2002): ”Nonlinear IV Unit Root Tests in Panels with Cross-Sectional Dependency”, Journal of Econometrics, 110, 261-292

Testing for Absolute Convergence: A Panel Data Approach

261

12. Evans, P. and G. Karras (1996a): Convergence Revisited, Journal of Monetary Economics, 37, 249-265 13. Evans, P. and G. Karras (1996b): Do Economies Converge? Evidence From a Panel of U.S. States, Review of Economics and Statistics, 78, 384-388 14. Islam, N., (1995), Growth Empirics : A Panel Data Approach, Quarterly Journal of Economics, 110, 1127-1170 15. Moon, R.,and B. Perron (2004): Testing for Unit Root in Panels with Dynamic Factors, Journal of Econometrics, 122, 81-126 16. O’Connell, P. (1998): ”The overvaluation of Purchasing Power Parity”, Journal of International Economics, 44, 1-19 17. Pesaran H., (2004): General Diagnostic Tests for Cross Section Dependence in Panels, CESifo Working Papers, No. 1229, June 2004 18. Temple, J. (1999): The New Growth Evidence, Journal of Economic Literature, 37, 112-156

Table 1 Empirical sizes: Robust Test vs F Test

N T 20 100 125 150 175 30 100 125 150 175 50 100 125 150 175 75 100 125 150 175

trob 4.7 4.1 5.2 4.3 4.6 4.7 4.3 4.8 5.2 4.7 5.0 5.1 4.8 5.5 5.3 4.9

Note: The nominal size for all tests is 0.05.

F 16.4 15.8 17.7 16.1 16.2 16.9 16.6 15.2 18.0 18.1 18.3 18.9 18.8 18.4 17.3 18.5

262

Samarjit Das and Manisha Chakrabarty Table 2 Empirical Powers: Robust Test vs F Test

N T 20 100 125 150 175 30 100 125 150 175 50 100 125 150 175 75 100 125 150 175

trob 74.2 75.7 77.4 79.0 76.1 77.8 79.2 80.2 75.2 76.7 77.5 78.1 76.8 77.5 78.7 80.5

F 92.6 95.8 97.8 99.4 94.2 98.4 99.5 100.0 98.0 98.6 99.2 100.0 98.8 99.8 100.0 100.0

Note: The nominal size for all tests is 0.05.

Table 3 Panel Unit Root Tests and Cross-Sectional Dependence Tests

OECD Countries (N=29)

∗ tols

∗ tgls

trob

tiv

LM

MLM

-3.87 -4.21 -4.09 -3.38 2393.71 (-1.08) (-2.01) (-1.65) (-1.65) (238.52)

46.10 (1.96)

∗ , t , t ∗ and t denote the t-statistics corresponding to bootstrap-OLS, Note: (i) tols iv rob gls robust-OLS, bootstrap-GLS and Chang’s (2002) instrumental variable method respectively. The LM and MLM statistics are presented for testing cross sectional dependence; 5% critical values are given in parenthesis below the test statistic values.

Table 4 Results of Various Panel Unit Root Tests under Common Factor

OECD Countries DDF trob N=29 -3.12 -8.87 Critical value -1.945 -1.945

MP -8.42 -1.645

Note: (i) DDF, trob and MP denote the t-statistics corresponding to direct Dickey Fuller tests based on estimated principal component, Robust-OLS and Moon-Perron (2002) methods respectively. For all three tests, the nominal size is 0.05.

Goodwin’s Growth Cycles: A Reconsideration Soumya Datta and Anjan Mukherji

Abstract The paper reconsiders the Goodwin’s growth model and demonstrates the extent to which questions relating to robustness of its results can be answered. The paper also provides a method of tackling the boundary problems, in case the solutions encounter them.

1 Introduction In one of the earliest and most well known economic applications of the LotkaVolterra system of equations (originally developed by [10] and [16] in a biochemical and ecological application respectively), [6] modeled the contradictions of a class struggle between labor and capital as a predator-prey relationship leading to growth cycles. The cyclical conclusions of the model, however, are subject to two major criticisms: 1. It can be shown that the cyclical conclusions disappear on introduction of small perturbations, for instance, when ‘social phenomenon’1 is introduced in the system. 2. Unlike the original Lotka-Volterra model, the variables of Goodwin’s model, namely, the share of wages in national income (u) and the rate of employment (v), being pure ratios, are subject to additional restrictions, and must lie in the [0, 1] interval. In other words, the solution to Goodwin’s model must lie within a compact unit box. [6] himself noted this problem but did not offer a method to Soumya Datta Department of Economics, Shyamlal College (Evening), University of Delhi, Delhi 110032, India. e-mail: [email protected] Anjan Mukherji Centre for Economic Studies and Planning, Jawaharlal Nehru University, New Delhi, India. e-mail: [email protected] 1

A term due to [7, Section 1, Page 257].

263

264

Soumya Datta and Anjan Mukherji

prevent the trajectory represented by Lotka-Volterra system of equations from escaping the unit box. It would be clear that we need to address these two concerns if we want to rehabilitate the cyclical conclusions of Goodwin’s model. This paper is an attempt in this direction.

2 Lotka-Volterra System An interesting dynamic story may be built on a simultaneous system of equations known as the Lotka-Volterra or the Predator-Prey Model. Consider an environment made up of two species of life-forms, one of which (the predator) preys on the other (the prey). Let the population of the prey be designated by x and that of the predator by y. The basic assumption is that in the absence of the predator, the population of the prey grows at a constant proportional rate a; on the other hand, in the absence of the prey, the population of the predator decays at a constant proportional rate b (here both a and b are assumed positive). In the presence of both the prey and predator, adjustments to this basic story have to be made and we have x˙ = x (a − α y) y˙ = y (β x − b)

(1)

where α , β are also assumed to be positive and are to be interpreted as the effect of the presence of one population on the other. There are two equilibria for the above system of equations: (x = 0, y = 0)   b a Non-Trivial Equilibrium (NTE): x = , y = . β α Trivial Equilibrium (TE):

We are interested in what happens to the solution, z (t) = (x (t) , y (t)) to the dynamical system represented by (1) beginning from an initial configuration z◦ = (x◦ , y◦ ); we shall represent this solution by z (t, z◦ ). We note first of all, the following local stability properties of the equilibria mentioned above: Lemma 1. For the dynamical system represented by (1), TE is a saddle point while NTE is a center. Proof: We note that the Jacobian of the right hand side of (1) is given by   a − α y −α x . yβ β x − b At TE the characteristic roots are: (a, −b);

Goodwin’s Growth Cycles: A Reconsideration

265

while at NTE, the characteristic roots are purely imaginary: √ √ (ı a.b, −ı a.b). 

Hence, Lemma 1 follows. Next, we note the following global results:

Lemma 2. With any z◦ = (x◦ , y◦ ) ∈ intℜ2++ as initial point, the solution to the dynamical system represented by (1) is a closed orbit around NTE. Proof: We define a scalar function, V (x (t) , y (t)) = {β x (t) − b ln x (t)} + {α y (t) − a ln y (t)} so that at NTE,

∂V ∂V = = 0, ∂x ∂y

∂ 2V > 0, ∂ x2

∂ 2V > 0, ∂ y2

∂ 2V =0 ∂ x∂ y

i.e. V attains its minimum value, Vmin at NTE.2 It might be noted that along any solution to (1), x˙ y˙ V˙ = (β x − b) + ((α y − a) = 0; x y i.e. V remains constant; the value of this constant is defined by the initial point z◦ = (x◦ , y◦ ). This also defines the level curve along which the solution moves.3 ⊓ ⊔

3 Goodwin’s Growth Model We next consider one of the earliest and most well-known economic applications of the Lotka-Volterra system of equations. [6] modeled the contradictions of a class struggle between labor and capital, a idea originally put forward in [11, Chapter XXV], as a predator-prey relationship leading to growth cycles. We begin by listing the assumptions of this model: (i) Steady disembodied technical progress; (ii) Steady growth in labor force; (iii) Only two factors of production, ‘labor’ and ‘capital’; (iv) All quantities are real and net; (v) All wages are consumed whereas all profits are saved and automatically invested; (vi) A constant capital output ratio; (vii) A real wage rate which rises in the neighborhood of full employment. 2

We may interpret V (x (t) , y (t)) as a measure of the distance of an arbitrary point in the interior of the positive orthant of x-y plane from NTE. 3 An analytical proof is provided in [7, page 262, Theorem 1].

266

Soumya Datta and Anjan Mukherji

The notation is as follows: q is output; k is capital; w is wage rate; a = ao eα t is labor productivity growth, where α is a constant; σ is the constant capital-output ratio; u = w/a is the share of workers in total product; hence, the share of the capitalists is 1 − w/a; k˙ = (1 − w/a)q is the investment; ℓ, the employment is then equal to q/a; labor, n at time t is given by no eβ t where β is constant. The employment ratio is given by v = ℓ/n. Finally (vii) is captured by the equation: w˙ = f (v) = −γ + ρ v. w

(2)

From above, it follows that

so that

ℓ˙ # w$ 1 = 1− −α ℓ a σ

v˙ 1 − u = − (α + β ). v σ Further, using the assumption contained in (2) we have: u˙ = − (γ + α ) + ρ v. u

(3)

(4)

It should be clear that the system of equations made up of (3) and (4) constitutes a Lotka-Volterra system of the type we analyzed in section 2. Consequently, for ◦ ) ∈ ℜ2 , the above system of equations generate any arbitrary initial point (v◦ , u# ++ $ α +γ , the particular orbit being , 1 − ( α + β ) σ closed orbits around the NTE ρ determined by the initial point (See Figure 2). We recall that the share of workers w is given by u = so that the share of capitalists is given by 1 − u; hence, the rate a 1−u of profit is given by , which, given assumptions (v) and (vi) made above, also σ ˙ represents the rate of growth, Y Y . As would be evident from the above discussion and Figure 2, the variables u and v will fluctuate within the range, say, [umin , umax ] and [vmin , vmax ] respectively. According to [6]: “[. . . ] when profit is at its greatest (u = umin ), employment is average (vmin < v < vmax ) and the high growth rate pushes employment up to its maximum (v = vmax ), which squeezes the profits to its average level (umin < u < umax ). The deceleration of growth lowers employment [. . . ] The improved profitability carries the seed of its own destruction by engendering a too vigorous expansion of output and employment [. . . ]” [6, pp. 57-8]

[6] argued that this captures what [11] called the “contradiction of class struggle between labor and capital under capitalism”. While this is an interesting account, let us examine whether the model constructed above tells this story. We can easily identify two problems:

Goodwin’s Growth Cycles: A Reconsideration

267

1. It is known that the closed orbits of the Lotka-Volterra system collapse, and the NTE becomes globally stable under some conditions, for example, when ‘social phenomenon’ is introduced. In other words, if the system of Eqs. (1) is replaced by: x˙ = x (a + γ x − α y) and y˙ = y (β x − b). (5)

γ = 0 may be shown to be a point of Hopf bifurcation for the system (5)4. Consequently, the specification of Eq. (2) in the Phillip’s curve argument is crucial to obtaining cyclical behavior in Goodwin’s model. 2. The Goodwin application involves two ratios u, v, which are subjected to definitional restrictions (v, u) ∈ Z (6)

where Z = {(v, u) : 0 ≤ v ≤ 1 & 0 ≤ u ≤ 1} ⊂ ℜ2++ ; there is nothing in the formulation, however, which ensures that along the solution this indeed is continually met.

It is clear that, to maintain the cyclical conclusions of Goodwin’s model, we need to handle these two problems. We shall turn to them in detail in the following sections. Before passing on to these considerations, however, we should note that the equations in the Goodwin’s model, viz., (4) and (3), originate from the specification of assumption (vii) and from other assumptions and definitions respectively. In other words, while we have some flexibility in choosing the former, we have little flexibility in choosing Eq. (3), without drastically changing the specifications. Any effort to address the problems listed above must take this fact into account.

4 A General Lotka-Volterra Model In this section, we attempt to provide a general treatment of the Predator-Prey class of models and identify the conditions for periodic behavior and convergence. A general form was first considered by [9]; [5, Chapter 5] contains a more recent update, in English, of these considerations. Some of the results discussed below are derived in [13] and [8]. Consider the following general predator-prey model: x˙ = xM (x, y) y˙ = yN (x, y)

(7)

where M, N : ℜ2++ → ℜ are continuously differentiable functions that satisfy the following conditions: . P1 : M (0, 0) > 0, 0 ≥ N (0, 0) ; My (x, y) < 0, Nx (x, y) > 0 ∀ (x, y) ∈ ℜ2++ P2 : Mx (x, y) ≤ 0, Ny (x, y) ≤ 0 4

See, for instance [12].

268

Soumya Datta and Anjan Mukherji

where subscripts denote partial derivatives.5 It should be pointed out that P1 and P2 constitute a weaker set of restrictions than the ones found in either [9] or [5]. The following results have been shown in this setup in [13]: 1. There are three types of equilibria: No Species: (0, 0); No Predator: (x, ˆ 0) where xˆ > 0; and, Both Species: (x, ˆ y) ˆ where x, ˆ yˆ > 0. No species equilibrium always exists. 2. In case the Both Species equilibrium does not exist, there can be no limit cycle, and the solution approaches the No Predator equilibrium. 3. If the inequalities in P2 are strict (or alternatively, not identically zero), there can be no cycles. 4. If any solution is bounded, and if Both Species equilibrium exist, and if xM ˆ x (x, ˆ y) ˆ + y( ˆ x, ˆ y) ˆ ≥ 0, then there exists a cyclical orbit around the No Species equilibrium. We also note, from Poincar´e-Bendixson Theorem6, that for motion on the plane, if the solution is bounded then it either approaches an equilibrium or there is a limit cycle. We next analyze the nature of the cyclical behavior in the Goodwin’s growth model in light of the above results.

5 Cyclical Behavior in Goodwin’s Growth Model We note that persistent cyclical behavior maybe obtained under certain special circumstances. From the discussion in the previous section, whenever the derivatives Mx and Ny , the so-called social phenomenon, have a fixed and same sign pattern over the domain without being identically zero, the cyclical behavior is destroyed. Thus, for the cycles to be maintained, it is necessary to have the expression xMx (x, y) + yNy (x, y) change sign. Often, the linear assumptions about the rates x/x ˙ and y/y ˙ are thought to be responsible for cyclical behavior. We show, by a routine exercise, that it is not so. Consider the Goodwin’s growth model represented by the system of Eqs. (3) and (4); we now replace Eq. (4) by the following equation: u˙ = f (v) − α . u

(8)

In the above, the Phillips curve formulation has been kept in the original Goodwin form, without linearizing, and in consonance with Goodwin’s formulation, we maintain f ′ (v) > 0. We note that at the NTE, the Jacobian is given by:   0 u f p (v ) −v /σ 0 5

If the partial derivatives in P2 are not identically zero, we have the existence of ‘social phenomenon’. 6 See [7, pp. 248-9].

Goodwin’s Growth Cycles: A Reconsideration

269

where we designate the components of the NTE by (u , v ). Since the characteristic roots are purely imaginary, the NTE is locally a center. We next turn our attention f (v) to global stability. Consider a function θ (v) = , and define for ε > 0 v

φ (v) =

v ε

θ (s)ds;

so that φ ′ (v) = θ (v). Now consider the function: 

.  u 1 − − (α + β ) ln u . V (t) = {φ (v) − α ln v} + σ σ We note that V˙ (t) = 0 along the solution to the system of equations represented by (3) and (8), so that the earlier conclusions about closed orbits around NTE are obtained once more.

If, however, instead of f (v) we have some term such as G (v, u),7 the cycles or the closed orbits collapse and the interesting conclusions disappear. Thus the only way to reintroduce cycles in this system would be to have a function such as f (v), ruling out social phenomenon of any kind. The lack of robustness may thus be attributed to only the independence of w/w ˙ from the variable u. An example of a set of equations which might exhibit robust cyclical behavior in this sense maybe found in [13].

6 Boundary Conditions in Goodwin’s Growth Model We next turn our attention to the problem of boundary conditions in Goodwin’s model.

6.1 Original Goodwin’s Model (LVG1) Consider the original Goodwin’s model described in Section 3 (LVG1 from now on): .   1 1 − (α + β ) − u v v˙ = (9) σ σ u˙ = {− (α + γ ) + ρ v} u

7

See, for instance, [4] or [13] for details.

270

Soumya Datta and Anjan Mukherji

which we rewrite as

v˙ = (a − bu) v u˙ = (cv − d) u

(10)

where a = 1/σ − (α + β ), b = 1/σ , c = ρ and d = α + γ . Let the non-trivial  equi librium, (d/c, a/b) be referred to as NTE. Given any (v◦ , u◦ ) ∈ int ℜ2++ as the initial point, let the solution be Ψ1 (t) = (v1 (t) , u1 (t) ; v◦ , u◦ ). We define a scalar function, V (v, u) = cv − d lnv + bu − a ln u, so that at NTE, ∂ V /∂ v = ∂ V /∂ u = 0, ∂ 2V /∂ v2 > 0, ∂ 2V /∂ u2 > 0 and ∂ 2V /∂ v∂ u = 0, i.e. V attains its minimum value, Vmin , at NTE. It might be noted that along any solution to LVG1, V˙ = 0, i.e. V is a constant. Hence, Ψ1 (t) = (v1 (t) , u1 (t) ; v◦ , u◦ ) represent closed orbits at a constant distance (measured by V ) from NTE (see Figure 2), so that V (v1 (t) , u1 (t)) = V (v◦ , u◦ ) ∀ t.

(11)

[15] showed that the period of the closed orbit is finite if |V − Vmin| < 2π min [a, d] .

(12)

It might be recalled that the variables v and u are meaningful only if they lie in a compact subset Z of the positive orthant, i.e. (v, u) ∈ Z

(13)

where Z = {(v, u) : 0 ≤ v ≤ vmax & 0 ≤ u ≤ umax } ⊂ ℜ2++ .8 Let us assume that NTE and the initial point are both economically meaningful, i.e. NTE, (v◦ , u◦ ) ∈ Z, and that v◦ , u◦ > 0. Let C1 and C2 represent the closed orbits passing through (vmax , a/b) and (d/c, umax ) respectively. The corresponding values of V for the closed orbits C1 and C2 are V (vmax , a/b) and V (d/c, umax ) respectively. Let C denote the smaller of the two orbits, with a value of V equal to min [V (vmax , a/b) ,V (d/c, umax)] . We further define a subset Z of Z as follows:

Z = {(v, u) : V (v, u) ≤ min [V (vmax , a/b) ,V (d/c, umax )]} ⊂ Z.

 the closed orbits will completely lie Lemma 3. For any initial point (v◦ , u◦ ) ∈ Z, inside Z and, hence, will always obey condition (13).

Proof: For (v◦ , u◦ ) ∈ Z ⇔ V (v◦ , u◦ ) ≤ min [V (vmax , a/b) ,V (d/c, umax)], hence, from (11), V (v1 (t) , u1 (t)) ≤ min [V (vmax , a/b) ,V (d/c, umax )], hence these orbits can never cross either vmax or umax , so that (v1 (t) , u1 (t)) ∈ Z ∀ t (see Figure 2). 

Lemma 4. For any initial point (v◦ , u◦ ) ∈ Z − Z (with v◦ , u◦ > 0), at least a part of the closed orbit will lie outside Z, and hence, will violate condition (13). 8

For Goodwin’s model, both vmax and umax are 1.

Goodwin’s Growth Cycles: A Reconsideration

271

Proof: For (v◦ , u◦ ) ∈ Z − Z ⇔ V (v◦ , u◦ ) > min [V (vmax , a/b) ,V (d/c, umax )], hence, from (11), V (v1 (t) , u1 (t)) > min [V (vmax , a/b) ,V (d/c, umax )]. Hence these orbits must cross either vmax or umax or both, so that (v1 (t) , u1 (t)) ∈ / Z for some t, taking us to regions which are economically not meaningful.  Consider, for instance, an initial point (v◦ , u◦ ) such that the closed orbit first crosses vmax at point A (vmax , uˆ1 ). Applying (11) on this orbit, we have cvmax − d ln vmax + buˆ1 − a ln uˆ1 = cv◦ − d lnv◦ + bu◦ − a ln u◦ , which, on solving for uˆ1 yields # $ ⎡ ⎤ 1 /a) a ω − b exp(−K + K 1 a ⎦ uˆ1 = exp ⎣− (14) a

where K1 = c (v◦ − vmax ) − d ln(v◦ /vmax ) + bu◦ − a ln u◦ , and ω (·) refers to the Lambert’s ω function9. Thus, A can be uniquely determined from (14). Similarly, for an initial point (v◦ , u◦ ) such that the closed orbit first crosses umax at point B (vˆ1 , umax ), we repeat the above procedure to get # $ ⎡ ⎤ 2 /a) d ω − c exp(−K + K2 d ⎦ vˆ1 = exp ⎣− (15) d where K2 = cv◦ − d ln v◦ + b (u◦ − umax ) − a ln (u◦ /umax ). Thus, B can be uniquely determined from (15).10 From (12), for |V | < 2π min [a, d] + Vmin , the points A and B will be attained in finite time. In other words, as long as  #   d a$ min V vmax , , umax ,V < 2π min [a, d] + Vmin b c there will always be a non-empty set of initial points for which the solutions will violate condition (13) in finite time. It is this problem which concerns us. Hence, we shall confine our attention to only those cases where the period of the closed orbit of LVG1 is finite.

6.2 Modified Goodwin’s Model (LVG2) We next consider a modified Goodwin’s model (LVG2 from now on), by adding boundary restrictions on LVG1:

9

Lambert’s ω is the inverse function of f (ω ) = ω exp (ω ), i.e. ω (x) exp (ω (x)) = x (Refer to [2] for a more detailed discussion). 10 Eqs. 14 and 15 solved using Matlab (Version 7.0.0.19920, Release 14).

272

Soumya Datta and Anjan Mukherji

Fig. 1 Simple Goodwin’s Model (LVG1)

(a − bu) v if min [0, (a − bu) v ] max if

(cv − d) u if u˙ = min [0, (cv − d) umax ] if

v˙ =

(a, d = 1, b, c = 3)

v < vmax v = vmax u < umax u = umax

(16)

Let the initial point (v◦ , u◦ ) ∈ Z and v◦ , u◦ > 0. Define

Ψ2 (t) = (v2 (t) , u2 (t) ; v◦ , u◦ ) as follows: ⎧ ⎨ (vmax , uˆ1 exp ((cvmax − d)t)) if v1 (t) = vmax & u1 (t) < a/b Ψ2 (t) = (vˆ1 exp ((a − bumax)t) , umax ) if u1 (t) = umax & v1 (t) > d/c ⎩ Ψ1 (t) otherwise

(17)

where Ψ1 (t) = (v1 (t) , u1 (t) ; v◦ , u◦ ) represents solution to LVG1, vˆ1 and uˆ1 as defined in (15) and (14) respectively. Definition 1. A function Ψ : [0, +∞] → Z : t → Ψ (t) is said to be a solution to (16) iff (a) Ψ (t) is absolutely continuous on [0, T ] ∀ T ∈ [0, +∞] ; (b) Ψ (0) = Ψ ◦ , where Ψ ◦ is the initial point ; (c) (d/dt) Ψ (t) = f (Ψ (t)) for almost every t ∈ [0, +∞], where f (·) represents the right hand side of (16); and (d) (d + dt) Ψ (t) = f (Ψ (t)) ∀ t ∈ [0, +∞], where (d + /dt) denotes derivative on the right. (see [1, Section 5]) Lemma 5. Ψ2 (t) = (v2 (t) , u2 (t) ; v◦ , u◦ ) as defined in (17) is a unique solution in the sense defined above to LVG2 represented by (16).

Goodwin’s Growth Cycles: A Reconsideration

Fig. 2 Modified Goodwin’s Model (LVG2)

273

(a, d = 1, b, c = 3)

 condition (13) is always satisfied, reducing LVG2 repreProof: For (v◦ , u◦ ) ∈ Z, sented by (16) to LVG1 represented by (10). Hence Ψ2 (t) = Ψ1 (t) is a solution to LVG2. For (v◦ , u◦ ) ∈ Z − Z (with v◦ , u◦ > 0) we have the following possibilities: 1. v1 (t) < vmax , u1 (t) < umax : LVG2 represented by (16) is reduced to LVG1 represented by (10). Hence Ψ2 (t) = Ψ1 (t) is a solution to LVG2. 2. v1 (t) = vmax & u1 (t) > a/b : In this case (a − bu1 (t)) vmax < 0, so from (16), v˙ = min [0, (a − bu1 (t)) vmax ] = (a − bu1 (t)) vmax , once again reducing LGV2 to LGV1. Hence, Ψ2 (t) = Ψ1 (t) is a solution. 3. v1 (t) = vmax & u1 (t) < a/b : In this case (a − bu1 (t)) vmax > 0, so from (16), v˙ = min [0, (a − bu1 (t)) vmax ] = 0, i.e. v2 (t) = vmax . Now u˙ = (cvmax − d) u. Taking point A, where the system switches to case (3), as the initial point, we get u2 (t) = uˆ1 exp((cvmax − d)t). In other words, Ψ2 (t) is a trajectory that moves upward along the vertical line vmax from point A till u2 (t) = a/b, where the system reverts to case (2). 4. u1 (t) = umax & v1 (t) < a/b : In this case (cv1 (t) − d) umax < 0, so from (16), u˙ = min [0, (cv1 (t) − d) umax ] = (cv1 (t) − d) umax , once again reducing LGV2 to LGV1. Hence, Ψ2 (t) = Ψ1 (t) is a solution. 5. u1 (t) = umax & v1 (t) > a/b : In this case (cv1 (t) − d) umax > 0, so from (16), u˙ = min [0, (cv1 (t) − d) umax ] = 0, i.e. u2 (t) = umax . Now v˙ = (a − bumax) v. Taking point B, where the system switches to case (5), as the initial point, we get v2 (t) = vˆ1 exp((a − bumax)t). In other words, Ψ2 (t) is a trajectory that moves leftward along the horizontal line umax from point B till v2 (t) = d/c, where the system reverts to case (4).

274

Soumya Datta and Anjan Mukherji

It would be evident from above that Ψ2 (t) keeps switching between various cases described above, till it attains a small enough orbit contained completely inside Z. We recall from (17) that at any t, either Ψ2 (t) = Ψ1 (t), or Ψ2 (t) = (vmax , uˆ1 exp ((cvmax − d)t)) or Ψ2 (t) = (vˆ1 exp ((a − bumax)t) , umax ), and that by construction at any t, Ψ2 (t) is a continuous function; from above, in each of the phases, except at the switchpoints such as A or B, Ψ2 (t) is continuously differentiable as well; at the switchpoints (there can only be a finite number of such points) the right hand derivative, (d + /dt)Ψ2 (t) is given by (0, uˆ1 (cvmax − d) exp((cvmax − d)t)) or by (vˆ1 (a − bumax) exp ((a − bumax)t) , 0) both of which are continuous; thus, on any compact set [0, T ], these derivatives have a bound given by (M, N) say. In other words, |∇Ψ2 (t)| ≡ (|Ψ2v | , |Ψ2u |) ≤ (M, N) if it exists, otherwise  +      ∇ Ψ2 (t) ≡ Ψ +  , Ψ +  ≤ (M, N) 2v 2u

where

Ψ2v+ = lim

h→0+

Ψ2 (v + h, u) − Ψ2 (v, u) Ψ2 (v, u + h) − Ψ2 (v, u) + , Ψ2u . = lim h→0+ h h

This demonstrates the absolute continuity of Ψ2 (t) on [0, T ] for any finite T .11 Hence, Ψ2 (t) is a solution to (16) in the sense defined above.12 We further note that Ψ2 (t) is the unique solution.  ◦ ◦ ◦ ◦ Theorem 1. Given |V − Vmin| < 2π min [a, d], for any (v , u ) ∈ Z − Z (with v , u > 0), there exists finite T such that Ψ2 (t) = (v2 (t) , u2 (t) ; v◦ , u◦ ) ∈ C ∀ t > T . Proof: For definiteness, let C1 be the smaller orbit, i.e.

V (vmax , a/b) < V (d/c, umax ) . Consider a trajectory starting from (v◦ , u◦ ) ∈ Z − Z where v◦ , u◦ > 0 (see Figure 1). Recall uˆ1 defined in (14), then from (17), such a trajectory will go through three phases: (i) from (v◦ , u◦ ) to A (vmax , uˆ1 ), v1 (t) < vmax and u1 (t) < umax , i.e. Ψ2 (t) = Ψ1 (t) ⇒ V˙ = 0 ; (ii) from A (vmax , uˆ1 ) to (vmax , a/b), v1 (t) = vmax and u1 (t) < a/b, 11

See, for instance, [14, page 108, problem 16a]; the complication created by the existence of a finite number of switchpoints does not appear to pose any additional problems in applying this result. 12 Note that at points such as A or B, (d + /dt) Ψ (t) satisfies right hand side of (16).

Goodwin’s Growth Cycles: A Reconsideration

275

hence, Ψ2 (t) = (vmax , uˆ1 exp ((cvmax − d)t)) and V˙ = (∂ V /∂ u) u˙ < 0; and (iii) from (vmax , a/b) onwards, Ψ2 (t) = Ψ1 (t) and V (v (t) , u (t)) = V (vmax , a/b), hence, from this stage onwards, Ψ2 (t) coincides with the closed orbit C. Notice that V˙ ≤ 0 in all three stages, consequently V (t) must converge to V (vmax , a/b). Thus, for any (v◦ , u◦ ) ∈ Z − Z (with v◦ , u◦ > 0), there exists finite T such that Ψ2 ∈ C ∀ t > T.  Corollary 1. Given |V − Vmin | < 2π min [a, d], for any (v◦ , u◦ ) ∈ Z (with v◦ , u◦ > 0), there exists finite T such that Ψ2 (t) = (v2 (t) , u2 (t) ; v◦ , u◦ ) ∈ Z ∀ t > T .

Proof: For (v◦ , u◦ ) ∈ Z ⇔ Ψ2 (t) = Ψ1 (t), it follows that V (v2 (t) , u2 (t)) = V (v◦ , u◦ ) ≤ min [V (vmax , a/b) ,V (d/c, umax )]  For (v◦ , u◦ ) ∈ Z − Z (with v◦ , u◦ > 0) we have Ψ2 (t) ∈ Z by The⇒ Ψ2 (t) ∈ Z. orem 1. In other words, irrespective of the position of the initial point, the solution will finally stay within a compact subset Z of the feasible set Z. 

7 Conclusion We have thus shown that the interesting Goodwin conclusions follow whenever the equation determining the rate of change of the real wages, w/w ˙ depend only on the employment rate, v; almost any form of this dependence will imply the Goodwin conclusions. However, whenever one admits the share of wages, u, into this equation, the Goodwin cycles disappear. Moreover, the upper bounds of unity, required by the definition of the variables u and v, are not a problem and maybe accommodated by a slight modification in the dynamical system.13 It might be mentioned here that an alternative method of dealing with this problem was proposed by [3]. This essentially involves modifying the investment function and the Philips curve in the original Goodwin’s model, so that the following system of equations replaces (9): v˙ = [−λ ln (1 − u) ¯ − (α + β )] + λ ln (u¯ − u) v  ′  u˙ = − γ + α + ρ ′ (1 − v)−δ . u

(18)

Using a simple simulation exercise, [3] show that for certain set of initial points, trajectories for the original Goodwin’s model represented by (9) escape the feasible region, whereas for the modified system represented by (18) they stay within the feasible region. We, however, note that the method proposed by [3] involves making fundamental changes to Goodwin’s model, affecting all trajectories including the ones that do not actually encounter the upper bounds. The modification made in our method, on the other hand, is effective only if required and causes no change to the basic formulation of Goodwin’s model. We feel that this is a strong point in favor of such an exercise. 13

The lower bounds are not a problem here, since the axes are trajectories and cannot be crossed.

276

Soumya Datta and Anjan Mukherji

Finally, we should also point out that the method of handling the upper bounds in the more general case, for instance, the general Lotka-Volterra system represented by (7), or the modified Goodwin’s model, consisting of (3) and (8), remains the same. Thus, in any other similar situations the method maybe used – it does not depend on any of the special features of the Lotka-Volterra model.

References 1. Paul Champsaur and Jacques H. Dr`eze and Claude Henry, Stability Theorems with Economic Applications, Econometrica,1977, March, 45, 2, 273-294 2. R.M. Corless and G.H. Gonnet and D.E.G. Hare and D.J. Jeffrey and D.E. Knuth, On Lambert W function, Advances in Computational Mathematics, 1996, 5, 329-359 3. Meghnad Desai and Brian Henry and Alexander Mosley and Malcolm Pemberton, A Clarification of the Goodwin Model of the Growth Cycle, Journal of Economic Dynamics & Control, 2006, 30, 2661-2670 4. Peter Flaschel, Some stability properties of Goodwin’s growth cycle, a critical evaluation, Zeitschrift f¨ur National¨okonomie, 1984, 44, 63-69 5. H.I. Freedman, Deterministic Mathematical Models in Population Ecology, Marcel Dekker Inc., New York, 1980 6. R.M. Goodwin, A Growth Cycle, C.H. Feinstein, Socialism, Capitalism and Economic Growth: Essays Presented to Maurice Dobb, Cambridge University Press, London, 1967, 54-58. Revised version in: Hunt, E.K., Schwartz, J. (Eds.), A Critique of Economic Theory. Harmondsworth, UK: Penguin, 1972, pp. 442-449 7. Morris W. Hirsch and Stephen Smale, Differential Equations, Dynamical Systems, and Linear Algebra, Academic Press, Inc, New York, 1974 8. Xun C. Huang and L. Zhu, Limit Cycles in a General Kolmogorov Model, Nonlinear Analysis, 2005, 60, 8, 1393-1414 9. N. Kolmogorov, Sulla Teoria di Volterra della Lotta per l’esistenza, Giornelle dell’Istituto Italiano degli Attuari, 1936, 7, 74-80 10. A.J. Lotka, Elements of Physical Biology, 1925, Williams and Wilkins, New York 11. Karl Marx, Capital, I, Frederick Engels, Progress Publishers, Moscow, 1971, Originally written in German in 1867 12. Anjan Mukherji, Robustness of Closed Orbits in a Class of Dynamic Economic Models, Sajal Lahiri and Pradeep Maiti, Economic Theory in a Changing World, Policy Modeling for Growth, Oxford University Press, Delhi, 2005 13. Anjan Mukherji, The Possibility of Cyclical Behavior in a Class of Dynamic Models, American Journal of Applied Sciences, 2005, Special Issue, 27-38 14. H.L. Royden, Real Analysis, 1968, Collier-Macmillan Canada, Ltd., Toronto, 2 15. Shagi-Di Shih and Shue-Sum Chow, A Power Series in Small Energy for the Period of the Lotka-Volterra System, Taiwanese Journal of Mathematics, 2004, December, 8, 4, 569-591 16. V. Volterra, Variazioni e Fluttuazioni del Numero d’individui in Specie Animali Conviventi, Memorie del R. Comitato Talassografico Italiano, Memoria CXXXI, 1927, Translated in: Applicable Mathematics of Non-Physical Phenomena, Oliveira-Pinto, F., Conolly, B.W., John Wiley & Sons, New York, 1982, pp. 23-115

Human Capital Accumulation, Economic Growth and Educational Subsidy Policy in a Dual Economy Bidisha Chakraborty and Manash Ranjan Gupta

Abstract This paper develops an endogenous growth model with dualism in human capital accumulation of two types of individuals. The government imposes a proportional redistributive tax on the resources of rich individuals to finance the educational subsidy given to poor individuals. We find out the properties of the optimal tax financed educational subsidy policy using the technique of Stackelberg differential game.

1 Introduction Human capital accumulation with its role on economic growth is a major area of research in macroeconomics. The literature starts with the seminal paper of Lucas (1988) that shows the growth rate of per capita income to depend on the rate of human capital accumulation determined by the labour time allocation of individuals for acquiring skill. The Lucas (1988) model is extended and reanalyzed by various authors in various directions. A subset of that literature1 is concerned with the effects of taxation on the long-term growth rate in these Lucas-type models. However, the model builders do not adopt the framework of Stackelberg differential games. These Stackelberg differential games are nowadays widely used to study the dynamic interaction between the government and the private agents. Here, the government natBidisha Chakraborty Bijoygarh Jotish Roy College, Kolkata, India. e-mail: [email protected] Manash Ranjan Gupta Indian Statistical Institute, 203 B.T. Road, Kolkata, India. e-mail: [email protected] 1

See the works of Jones, Manuelli and Rossi (1993), Stokey and Rebelo (1995), Chamley (1992), Mino (1996), Uhlig and Yanagawa (1996), Ortigueira (1998), Alonso Carrera and Freire Seren (2004), De Hek (2005) etc.

277

278

Bidisha Chakraborty and Manash Ranjan Gupta

urally plays the role of a leader setting the fiscal policies; and the private agents act as followers determining their levels of consumption, investment, labour supply and so on. The government then takes the private agents’ best response into account and designs the optimal policy. Few models are developed using this framework to analyze the optimal fiscal policies2 . However, in none of the Lucas-type models, tax revenue is used to subsidize the human capital accumulation sector. Lucas (1990) has already drawn our attention to the role of “increased subsidies to education on the long term growth rate of an economy. Many authors have analyzed the issue of education subsidy in recent years. The set of literature includes the works of Zhang (2003), Blankenau and Simpson (2004), Bovenberg and Jacobs (2003, 2005), Boskin (1975), Blankenau (2005), Brett and Weymark (2003) and of many others. Most of them deal with the effects of subsidies and of public educational expenditures on economic growth. However, none of these papers analyzes the optimality of educational subsidy policy using the framework of Stackelberg differential game. The present paper develops a growth model of an economy in which human capital accumulation is viewed as the source of economic growth and in which dualism exists in the mechanism of human capital accumulation of the two types of individuals – the rich and the poor. There exists a substantial theoretical literature dealing with the structural dualism and income inequalities in less developed countries3. However, none of the existing dual economy models has focused on the dualism in the mechanism of human capital formation of two different groups of individuals. In a less developed economy, the stock of human capital of the poor individual is far lower than that of the rich individual. Also there exists a difference in the mechanism of human capital accumulation between a rich individual and a poor individual. On the one hand, there are rich families who can spend a lot of resources for schooling of their children. On the other hand, there are poor families who have neither leisure time nor resources to spend for education of their children. The opportunity cost of schooling of children of the poors is very high because they can alternatively be employed as child labour. However, they receive support from exogenous sources. Government sets up free public schools and introduces various schemes of paying book grants and scholarships to the meritorious students coming from poor families. Government meets the cost of public education program through taxes imposed on rich individuals. In India, the government gives special emphasis on the subsidized education programme for the people belonging to scheduled castes and scheduled tribes who are economically backward; and backwardness in education is considered as one of the important causes of their economic backwardness. So the efficiency enhancement mechanisms for rich individuals and poor individuals are different. While rich individuals can build up their human capital on their own, poor individuals need the support from exogenous sources in accumulating their human capital. 2

See the works of Judd (1985, 1997), Chamley (1986), Lansing (1999), Guo and Lansing (1999), Mino (2001), Park and Philippopoulos (2003, 2004), Ben Gad (2003) etc. 3 This includes the works of e.g Lewis (1954), Ranis and Fei (1961), Sen (1966), Dixit (1969), Todaro (1969), Benabou (1994, 1996a, 1996b) etc.

Human Capital Accumulation, Economic Growth and Educational Subsidy Policy

279

In the present model, we assume that the representative rich individual has a high initial level human capital endowment and an efficient human capital accumulation technology4. The representative poor individual lags behind both in terms of initial human capital endowment and in terms of the productivity of human capital accumulation technology. We call them rich and poor because human capital is an important determinant of income 5 . However, poor individuals are benefitted by the sacrifices of rich individuals in this model; and redistributive taxes are imposed on rich individuals to finance the educational subsidy given to poor individuals. The government taxes a fraction of the resources or of the income of the representative rich individual and spends it to meet the cost of training given to the representative poor individual. Neither Lucas (1988) himself nor any extension of the Lucas (1988) model has considered this dualism in human capital accumulation. Our objective is to analyze the properties of the optimum tax financed educational subsidy policy for the poors in the long run equilibrium of the model. We do this adopting a framework of Stackelberg differential game. We derive some interesting results from this model. It appears to be optimal to adopt a tax financed educational subsidy policy for poor individuals in the long run equilibrium of the model. This optimal tax financed educational subsidy rate varies positively with the relative weight given to consumption of the poor individual in the social welfare function and with the learning ability of that individual. However, this tax rate varies negatively with the learning ability of the rich individual who is the tax payer. The optimal policy also implies an interesting trade off between growth and inequality. The rest of the paper is organized as follows. Section 2 presents the basic model. Section 3 presents the properties of the optimal policies in the long run equilibrium when the government can not internalize the externalities. Concluding remarks are made in Section 4.

2 The Basic Dual Economy Model

We consider an economy with two types of individuals – rich individuals and poor individuals. Human capital accumulation is a non-market activity like that in Lucas (1988). However, the mechanisms of human capital accumulation are different for the two types of individuals. There is no external effect of human capital on production.

4 It means that the rich individual has a higher ability of learning and a larger stock of secondary inputs of human capital accumulation.
5 The empirical works on the skilled-unskilled wage inequality in different countries, e.g., the works of Robbins (1994a, 1994b), Lachler (2001), Beyer, Rojas and Vergara (1999), Marjit and Acharyya (2003), Wood (1997) etc., have a debate over this hypothesis. Beyer, Rojas and Vergara (1999) have shown that the extent of wage inequality and the proportion of the labour force with college degrees in the post-liberalization period in Chile were negatively related. According to the World Development Report (1995), increased educational opportunities exerted downward pressures on wage inequality in Colombia and Costa Rica. Many other works have shown the opposite empirical picture in many other countries.


Population size of either type of individuals is normalised to unity. All individuals belonging to each of the two groups are identical. There is full employment of both types of labour; and the labour market is competitive. The government deducts a (1 − x) fraction of the labour time of the representative rich individual to finance the training of the poor individual. Labour endowment is the only resource of the individual6. Out of the remaining x fraction of labour time, the rich individual allocates an 'a' fraction to production and a (1 − a) fraction to his own human capital accumulation. The poor individual spends a u fraction of non-leisure time on production. Let H_R and H_P be the skill levels of the representative rich individual and of the poor individual respectively. We assume that H_R(0) > H_P(0). This means that the representative poor individual lags behind the rich individual in terms of initial human capital endowment. Both the rich individual and the poor individual consume whatever they earn and hence they do not save (or invest). So there is no accumulation of physical capital in this model; and hence physical capital does not enter as an input in the production function7. Each of the two types of individuals produces the product using its labour as the only input; and this labour input is expressed in efficiency (human capital) units. The production functions of the rich worker (individual) and of the poor worker (individual) take the following forms respectively:

C_R = Y_R = A_R a x H_R \bar{H}_R^{ε_R};  (1)

and

C_P = Y_P = A_P u H_P \bar{H}_P^{ε_P}.  (2)

Here 0 ≤ x ≤ 1; and \bar{H}_R and \bar{H}_P represent the average levels of human capital of all rich individuals and of all poor individuals respectively. ε_R > 0 and ε_P > 0 are the magnitudes of their community specific external effects of human capital on production. The production function of each of the two types satisfies CRS in terms of private inputs but shows social IRS when the external effect is taken into consideration. Y_R and Y_P stand for the levels of production of the representative rich individual and of the representative poor individual respectively; and C_R and C_P denote their levels of consumption. The representative individual of either type maximizes his discounted present value of instantaneous utility over the infinite time horizon with respect to the labour time allocation variable. The instantaneous utility function of the ith type of individual is given by

U(C_i) = ln C_i  (3)

6 Park and Philippopoulos (2004) and Benhabib and Rustichini (1997) consider proportional taxation on the stock of physical capital. Generally taxes are imposed on income. However, taxes are also imposed on land and many other properties in less developed countries like India.
7 Though it is assumed for simplicity, it is a serious limitation of the exercise. However, the model becomes highly complicated when physical capital accumulation is introduced. It may be a weak excuse that many other models in the existing literature are subject to this limitation. The set includes the works of Mino (1998), Pecorino (1992), Rosendahl (1996), Lucas (2004), Driskill and Horowitz (2002) etc.


for i = R, P. The human capital accumulation mechanism of the representative rich individual is assumed to be similar to that in Lucas (1988). Hence

\dot{H}_R = m_R (1 − a) x H_R.  (4)

Here 0 ≤ a ≤ 1; and m_R is a positive constant representing the productivity parameter of the human capital formation function of the rich individual. However, the mechanisms of human capital formation for the two classes of individuals are different. The skill formation of a poor individual takes place through the training programme conducted by the government. The government taxes a (1 − x) fraction of the available labour endowment of the rich individual and spends this on the training programme. The poor individual devotes a (1 − u) fraction of non-leisure time to learning. The human capital accumulation function of the representative poor individual is assumed to take the following form:

\dot{H}_P = m_P (1 − u) H_P [q(\bar{H}_R/\bar{H}_P − 1)^γ (1 − x) + 1]^δ.  (5)

Here 0 < δ < 1, γ > 0, q > 0 and m_P > 0. The knowledge accumulation technology is such that knowledge needs to trickle down from the more knowledgeable persons to the inferiors. q(\bar{H}_R/\bar{H}_P − 1)^γ can be interpreted as the degree of effectiveness of the teaching programme. So the higher the extent of the knowledge gap between the rich individual and the poor individual, the more effective will be the teaching programme and the tax cum educational subsidy. Here the cost of teaching is met by the educational subsidy, and these are measured in terms of labour time. We also assume m_R > m_P. This means that the human capital accumulation technology of the rich individual is more productive than that of the poor individual in the absence of teaching, i.e., in the absence of a tax financed education subsidy policy, which implies x = 1. In the models of Tamura (1991), Eaton and Eckstein (1997), Lucas (2004) etc., the human capital accumulation technology is subject to external effects. In the models of Eaton and Eckstein (1997) and Tamura (1991), the average human capital stock of the society exerts an external effect on the human capital accumulation of every individual. However, in the model of Lucas (2004), the human capital stock of the leader exerts an external effect on the human capital accumulation of all other individuals. The leader is the individual whose human capital endowment is at the highest level. In our model, the representative rich individual has already attained a higher level of human capital and the representative poor individual is lagging behind. Rich individuals and poor individuals are assumed to be identical within their respective groups. So the representative rich individual may be treated as the leader, and the average human capital of rich individuals relative to that of poor individuals should have a positive external effect on the poor individual's human capital accumulation technology.
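The dual accumulation mechanism can be made concrete with a small simulation of Eqs. (4) and (5). The sketch below integrates the two laws of motion by the Euler method for fixed, purely hypothetical values of the policy and allocation variables (x, a, u) and of the parameters m_R, m_P, q, γ, δ; within-group identical individuals are assumed, so that \bar{H}_R = H_R and \bar{H}_P = H_P.

```python
# A minimal simulation sketch of Eqs. (4) and (5); all parameter values are
# hypothetical and chosen only to illustrate the mechanism.
m_R, m_P = 0.10, 0.05     # accumulation productivities, with m_R > m_P
q, gamma, delta = 1.0, 1.0, 0.5
x, a, u = 0.8, 0.4, 0.5   # tax policy and time allocations, held fixed here
H_R, H_P = 2.0, 1.0       # initial endowments, H_R(0) > H_P(0)
dt = 0.05

for _ in range(100000):
    # Eq. (4): the rich individual's human capital accumulation
    dH_R = m_R * (1 - a) * x * H_R
    # Eq. (5): the poor individual's accumulation; the teaching term rises
    # with the knowledge gap H_R/H_P and with the tax rate (1 - x)
    teach = (q * (H_R / H_P - 1) ** gamma * (1 - x) + 1) ** delta
    dH_P = m_P * (1 - u) * H_P * teach
    H_R += dH_R * dt
    H_P += dH_P * dt

print(H_R / H_P)  # the inequality ratio z settles down to a constant level
```

With constant controls the ratio z = H_R/H_P converges to the stationary value at which the two growth rates coincide, previewing the semi stationary equilibrium of Section 3.4.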


3 Optimum Growth Path

We consider an open loop Stackelberg differential game with private individuals being followers and the government being the leader.

3.1 The Optimization Problem of the Rich Individual

The objective functional of the rich individual is given by J_R = ∫_0^∞ U(C_R) e^{−ρt} dt. This is to be maximized with respect to the control variable, a, subject to Eqs. (1), (3) and (4) and given the initial value of the state variable, H_R. Here ρ is the constant positive discount rate. Defining the relevant Hamiltonian, solving this optimization problem and using λ_R as the co-state variable, we obtain the following optimality conditions:

\dot{λ}_R/λ_R = ρ − m_R x;  (6)

and

a = 1/(λ_R m_R H_R x).  (7)

3.2 The Optimization Problem of the Poor Individual

The objective functional of the poor individual is given by J_P = ∫_0^∞ U(C_P) e^{−ρt} dt. This is to be maximized with respect to the control variable, u, subject to Eqs. (2), (3) and (5) and given the initial value of the state variable, H_P. Solving this optimization problem and using λ_P as the co-state variable, we obtain the following optimality conditions:

\dot{λ}_P/λ_P = ρ − m_P [q(\bar{H}_R/\bar{H}_P − 1)^γ (1 − x) + 1]^δ;  (8)

and

u = 1/(λ_P m_P H_P [q(\bar{H}_R/\bar{H}_P − 1)^γ (1 − x) + 1]^δ).  (9)

Eqs. (7) and (9) summarize private agents’ decision rules in a decentralized competitive equilibrium.
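As a quick consistency check, Eqs. (6) and (7) can be reproduced symbolically from the rich individual's current-value Hamiltonian, H = ln C_R + λ_R m_R (1 − a) x H_R, holding the external term \bar{H}_R fixed. The sketch below is only a verification aid (the symbol names are ours, not the paper's); the poor individual's conditions (8) and (9) can be checked in the same way.

```python
# Symbolic derivation of Eqs. (6) and (7) with sympy; a verification sketch.
import sympy as sp

a, x, HR, lamR, mR, AR, epsR, HbarR, rho = sp.symbols(
    'a x H_R lambda_R m_R A_R epsilon_R Hbar_R rho', positive=True)

C_R = AR * a * x * HR * HbarR ** epsR           # consumption, Eq. (1)
H = sp.log(C_R) + lamR * mR * (1 - a) * x * HR  # current-value Hamiltonian

# First order condition in the control a  ->  Eq. (7)
a_star = sp.solve(sp.diff(H, a), a)[0]
print(a_star)                                   # 1/(H_R*lambda_R*m_R*x)

# Co-state equation lambda_dot = rho*lambda - dH/dH_R, evaluated at a = a*
# (the external effect Hbar_R is treated as given, as in the text)
lam_dot = rho * lamR - sp.diff(H, HR).subs(a, a_star)
print(sp.simplify(lam_dot / lamR))              # rho - m_R*x, i.e. Eq. (6)
```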


3.3 The Optimization Problem of the Government

The government chooses the tax rate, (1 − x), to maximize the welfare of the society subject to the decentralized competitive equilibrium conditions. Thus, the maximization problem of the government is also constrained by the private agents' optimal decision rules given by Eqs. (6), (7), (8) and (9). The objective of the government is to maximize the discounted present value of instantaneous social welfare over the infinite time horizon. Here the instantaneous social welfare function is defined as follows: W = b ln C_R + (1 − b) ln C_P, where b and (1 − b) are the weights given to the consumption of the rich individual and to the consumption of the poor individual respectively. The objective functional is given by J_G = ∫_0^∞ W e^{−ρt} dt, which is to be maximized with respect to the control variable, x, subject to Eqs. (1), (2), (4), (5), (6), (7), (8) and (9). The current value Hamiltonian is given by

H^g = b ln C_R(t) + (1 − b) ln C_P(t) + ξ_R [λ_R (ρ − m_R x)] + ξ_P [λ_P {ρ − m_P [q(\bar{H}_R/\bar{H}_P − 1)^γ (1 − x) + 1]^δ}] + µ_R [m_R (1 − a) x H_R] + µ_P m_P (1 − u) H_P [q(\bar{H}_R/\bar{H}_P − 1)^γ (1 − x) + 1]^δ;

where ξ_R, ξ_P, µ_R and µ_P are the co-state variables. Using Eqs. (1), (2), (7) and (9), this Hamiltonian expression can be modified as follows:

H^g = b ln[A_R \bar{H}_R^{ε_R}/(λ_R m_R)] + (1 − b) ln[A_P \bar{H}_P^{ε_P}/(λ_P m_P [q(\bar{H}_R/\bar{H}_P − 1)^γ (1 − x) + 1]^δ)] + m_R x (µ_R H_R − ξ_R λ_R) + m_P [q(\bar{H}_R/\bar{H}_P − 1)^γ (1 − x) + 1]^δ (µ_P H_P − ξ_P λ_P) + ρ (ξ_R λ_R + ξ_P λ_P) − (µ_R/λ_R + µ_P/λ_P).

This is not a centralized planned economy. The government can impose taxes and provide subsidies only. So we assume that the government cannot internalize the external effects, i.e., the government takes \bar{H}_R and \bar{H}_P as given in the optimization exercise. The first order optimality condition for this optimization problem with respect to the control variable, x, is given by

∂H^g/∂x = (1 − b) δ q(\bar{H}_R/\bar{H}_P − 1)^γ / {q(\bar{H}_R/\bar{H}_P − 1)^γ (1 − x) + 1} + m_R (µ_R H_R − ξ_R λ_R) + m_P δ q(\bar{H}_R/\bar{H}_P − 1)^γ [q(\bar{H}_R/\bar{H}_P − 1)^γ (1 − x) + 1]^{δ−1} (ξ_P λ_P − µ_P H_P) = 0.  (10)

Time derivatives of the co-state variables should satisfy the following differential equations along the optimum growth path:

\dot{ξ}_R = ρ ξ_R − [−b/λ_R + ξ_R (ρ − m_R x) + µ_R/λ_R²];  (11)


\dot{ξ}_P = ρ ξ_P − [−(1 − b)/λ_P + ξ_P (ρ − m_P {q(\bar{H}_R/\bar{H}_P − 1)^γ (1 − x) + 1}^δ) + µ_P/λ_P²];  (12)

and

\dot{µ}_R = ρ µ_R − µ_R m_R x;  (13)

\dot{µ}_P = ρ µ_P − µ_P m_P {q(\bar{H}_R/\bar{H}_P − 1)^γ (1 − x) + 1}^δ.  (14)

The transversality conditions are given by lim_{t→∞} e^{−ρt} ξ_R(t)λ_R(t) = lim_{t→∞} e^{−ρt} ξ_P(t)λ_P(t) = lim_{t→∞} e^{−ρt} µ_R(t)H_R(t) = lim_{t→∞} e^{−ρt} µ_P(t)H_P(t) = 0.

We define the following: ω_R = ξ_R λ_R, ω_P = ξ_P λ_P, v_R = λ_R H_R, v_P = λ_P H_P, η_P = µ_P H_P, η_R = µ_R H_R and z = H_R/H_P. Using the optimality conditions and the time derivatives of the co-state variables we have

\dot{ω}_R/ω_R = b/ω_R − η_R/(ω_R v_R) + ρ,  (15)

and

\dot{ω}_P/ω_P = (1 − b)/ω_P − η_P/(ω_P v_P) + ρ,  (16)

\dot{v}_R/v_R = ρ − 1/v_R,  (17)

\dot{v}_P/v_P = ρ − 1/v_P,  (18)

\dot{η}_R/η_R = ρ − 1/v_R,  (19)

\dot{η}_P/η_P = ρ − 1/v_P.  (20)

From Eq. (10) we have

∂H^g/∂x = (1 − b) δ q(z − 1)^γ / {q(z − 1)^γ (1 − x) + 1} + m_R (η_R − ω_R) − m_P q(z − 1)^γ δ {q(z − 1)^γ (1 − x) + 1}^{δ−1} (η_P − ω_P) = 0.  (21)

The equations of motion (15) to (20) can be rearranged as follows:

\dot{η}_R − \dot{ω}_R = ρ (η_R − ω_R) − b;  (22)

and

\dot{η}_P − \dot{ω}_P = ρ (η_P − ω_P) − (1 − b).  (23)


Also, using Eqs. (4) and (5), we have

\dot{z}/z = m_R (1 − a) x − m_P (1 − u) [q(z − 1)^γ (1 − x) + 1]^δ.  (24)

Also, in this case,

∂²H^g/∂x² = (1 − b) δ q² (z − 1)^{2γ} / {q(z − 1)^γ (1 − x) + 1}² + m_P δ q² (z − 1)^{2γ} (δ − 1)(η_P − ω_P) [q(z − 1)^γ (1 − x) + 1]^{δ−2};

and this can be simplified into the following:

∂²H^g/∂x² = [(1 − b) δ q² (z − 1)^{2γ} / {q(z − 1)^γ (1 − x) + 1}²] [1 − m_R x(1 − δ)/ρ].

So ∂²H^g/∂x² is negative if x > ρ/(m_R (1 − δ)) ≡ \underline{x}.

3.4 Semi Stationary Equilibrium

The equations of motion given by (17), (18), (22), (23) and (24) describe the dynamics of the system. Along the semi stationary equilibrium growth path \dot{v}_R = \dot{v}_P = \dot{η}_R − \dot{ω}_R = \dot{η}_P − \dot{ω}_P = \dot{z} = 0; and the equilibrium values are denoted by v_R*, v_P*, z*, (η_R* − ω_R*) and (η_P* − ω_P*). Since these values are time independent, Eq. (10) shows that x* is also time independent. Using Eqs. (7) and (9) we derive a* and u*, and they are also time independent. Note that, along the growth path, η_R, ω_R, η_P, ω_P are not individually time independent but their linear combinations are time independent. So the equilibrium is called a semi stationary equilibrium8. A semi stationary equilibrium is an equilibrium where all state and control variables are stationary but their associated shadow prices are not stationary. The equilibrium values of v_R, v_P, z, η_R − ω_R and η_P − ω_P are given by the following:

v_R* = 1/ρ;  (25)

v_P* = 1/ρ;  (26)

z* = 1 + [((m_R x*/m_P)^{1/δ} − 1)/(q(1 − x*))]^{1/γ};  (27)

(η_P − ω_P)* = (1 − b)/ρ;  (28)

and

(η_R − ω_R)* = b/ρ.  (29)

8 See Van Long and Shimomura (2000), according to whom a steady state equilibrium is often unwarranted and a semi stationary equilibrium is the only equilibrium.


Any trajectory converging to this semi stationary equilibrium point should satisfy the following modified transversality conditions: lim_{t→∞} e^{−ρt}(η_R − ω_R) = lim_{t→∞} e^{−ρt}(η_P − ω_P) = 0. Using the above semi stationary equilibrium values of the variables v_R, v_P, z, η_P − ω_P and η_R − ω_R, and using Eqs. (7) and (9), we determine the semi stationary equilibrium values of a and u in terms of x*. These are given by the following:

a* = ρ/(m_R x*),  (30)

and

u* = ρ/(m_P [q(z* − 1)^γ (1 − x*) + 1]^δ).  (31)

Substituting these equilibrium values in Eq. (10) we have

b m_R (1 − x*)/(m_R x* − ρ) = (1 − b) δ {1 − (m_R x*/m_P)^{−1/δ}};  (32)

and this Eq. (32) solves for x*. Note that, if any of the parameters m_P, δ, q, (1 − b) is zero, then, using Eq. (10) and the semi stationary equilibrium values of the variables (η_R − ω_R)* and (η_P − ω_P)*, we find that ∂H^g/∂x > 0, which implies that x* = 1. So we have the following proposition.

Proposition 1. If any one of the parameters m_P, δ, q, (1 − b) takes zero value, then it is not optimal to adopt a tax cum educational subsidy policy in the semi stationary equilibrium.

This can be explained as follows. m_P = 0 implies that the human capital accumulation technology of the poor individual is always unproductive. q = 0 or δ = 0 implies that the teaching programme is unproductive. Lastly, b = 1 implies that the social welfare function of the government does not take care of the interests of the poor individual. So, in all these cases, a policy of subsidization of the education programme for poor individuals cannot be optimal.

The lower limit of x* is max{ρ/m_R, m_P/m_R} because we have assumed z* > 1 and 0 < a* = ρ/(m_R x*) < 1. If m_P < ρ, then the lower limit of x* is ρ/m_R. At x* = ρ/m_R, the LHS of Eq. (32) is infinitely large; and at x* = 1, it is zero. The LHS of Eq. (32) is a decreasing function of x* when x* lies between ρ/m_R and 1. On the other hand, its RHS is an increasing function of x. The LHS and RHS of Eq. (32) are denoted by G(x) and H(x) respectively. At x* = ρ/m_R,

H(x) = (1 − b) δ {1 − (ρ/m_P)^{−1/δ}};

and, at x* = 1, H(x) = (1 − b) δ {1 − (m_R/m_P)^{−1/δ}} > 0. Since m_R > ρ, the value of H(x) at x* = 1 is higher than that at x* = ρ/m_R. If m_P > ρ, then the lower limit of x* is m_P/m_R; and, at x* = m_P/m_R, H(x) = 0 and G(x) = b(m_R − m_P)/(m_P − ρ) > 0. Hence, in both cases, the G(x) curve and the H(x) curve in Figure 1 intersect each other at only one value of x ∈ [ρ/m_R, 1].

Fig. 1 Semi Stationary Equilibrium (the curves G(x) and H(x) plotted against x; they intersect at x* between max{ρ/m_R, m_P/m_R} and 1)

So Eq. (32) solves for a unique x* satisfying ρ/m_R < x* < 1.

Proposition 2. There exists a unique x* satisfying ρ/m_R < x* < 1 if none of the parameters m_P, δ, q, (1 − b) is zero.
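Eq. (32) has no closed form solution for x*, but the intersection of G(x) and H(x) is easy to locate numerically. The following sketch uses a bracketing root finder together with Eqs. (27) and (30); the parameter values are illustrative assumptions only (chosen so that m_R > m_P and m_R > ρ).

```python
# Illustrative numerical solution of Eq. (32); all parameters are assumptions.
from scipy.optimize import brentq

m_R, m_P, rho, b, delta = 0.10, 0.05, 0.04, 0.5, 0.5
q, gamma = 1.0, 1.0

G = lambda x: b * m_R * (1 - x) / (m_R * x - rho)                      # LHS of (32)
H = lambda x: (1 - b) * delta * (1 - (m_R * x / m_P) ** (-1 / delta))  # RHS of (32)

lower = max(rho / m_R, m_P / m_R)     # lower limit of x*, as derived in the text
x_star = brentq(lambda x: G(x) - H(x), lower + 1e-9, 1 - 1e-9)

a_star = rho / (m_R * x_star)                                  # Eq. (30)
z_star = 1 + (((m_R * x_star / m_P) ** (1 / delta) - 1)
              / (q * (1 - x_star))) ** (1 / gamma)             # Eq. (27)
print(x_star, 1 - x_star, a_star, z_star)
```

For these assumed values the routine returns x* ≈ 0.85, i.e., an optimal tax financed subsidy rate 1 − x* of roughly fifteen per cent, with a* ∈ (0, 1) as required.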

So a tax financed educational subsidy policy for the education sector of the poor individuals is optimal. Note that this equilibrium value of x* depends upon the values of the parameters m_R, m_P, δ, ρ and b. Note that ∂G(x)/∂b = m_R (1 − x)/(m_R x − ρ) > 0; and ∂H(x)/∂b = −δ{1 − (m_R x/m_P)^{−1/δ}} < 0. The increase in b causes the G(x) curve in Figure 1 to shift upward and the H(x) curve to shift downward. So the equilibrium value of x* rises in this case. Also we have ∂G(x)/∂m_P = 0; and ∂H(x)/∂m_P = −((1 − b)/m_P)(m_R x/m_P)^{−1/δ} < 0.


If m_P is increased, then the H(x) curve in Figure 1 shifts downward but the G(x) curve does not shift. So the optimum tax rate, 1 − x*, is reduced in this case. Also we have ∂G(x)/∂m_R = −b(1 − x*)ρ/(m_R x* − ρ)² < 0; and ∂H(x)/∂m_R = ((1 − b)/m_R)(m_R x/m_P)^{−1/δ} > 0. So if m_R is increased, then the H(x) curve in Figure 1 shifts upward and the G(x) curve shifts downward. So the optimum tax rate, 1 − x*, is increased. We now summarize these comparative static results in the form of the following proposition.

Proposition 3. The optimal tax financed educational subsidy rate, denoted by (1 − x*), varies negatively with b and m_P and positively with m_R.

We now provide the intuition behind these results. A higher value of b implies a greater (lower) relative weight on the consumption of the rich (poor) individuals in the social welfare function. As the government puts a higher relative weight on the consumption of the rich individual, the optimal tax rate should be lower because the tax is imposed on the resources of the rich individual. A higher value of m_P indicates a greater efficiency of the human capital accumulation technology of the poor individual. So the need for an educational subsidy to poor individuals is reduced when their human capital accumulation technology is more efficient. Hence it is optimum to reduce the tax financed educational subsidy rate in this case. However, the optimum tax financed subsidy rate is independent of the values of the externality parameters here. This is so because we consider logarithmic instantaneous utility functions for each of the two types of individuals. One should remember that this property is also obtained in the Lucas (1988) model when the elasticity of marginal utility of consumption of the representative individual is equal to unity9.

The optimal policy implies an interesting trade-off between growth and inequality. Using Eqs. (4) and (30), in the semi stationary equilibrium, we have

g = \dot{H}_R/H_R = \dot{H}_P/H_P = m_R x* − ρ.

Similarly, from Eq. (27), we have

z* − 1 = [((m_R x*/m_P)^{1/δ} − 1)/(q(1 − x*))]^{1/γ}.

Here z* is a measure of the degree of inequality in human capital between the two groups of individuals. This gives an idea of income inequality too because the human capital of an individual is the only determinant of his income (consumption) in this model. g is the balanced growth rate of human capital of the two groups. It is clear that g as well as z* varies positively with x*. So the higher (lower) the tax rate, i.e., the value of 1 − x*, the lower (higher) are the balanced growth rate and the degree of inequality, i.e., the values of g and z*. So g and z* vary in the same direction with respect to a change in the optimal tax financed educational subsidy policy.

Gomez (2003) and Garcia-Castrillo and Sanso (2000) also find the optimal tax financed education subsidy policy in a Lucas (1988) model. However, we make our analysis using a more general framework, endogenizing the government's optimizing behavior on the one hand and allowing dualism in the human capital accumulation of two groups of individuals on the other hand.

9 If we consider U(C_i) = C_i^{1−σ}/(1 − σ) with σ ≠ 1, then technical complications prevent us from being successful in proving the existence of the long run equilibrium.


Those two authors do not consider redistributive taxes like ours because they assume all individuals to be identical. They do not consider the role of a backward sector in the properties of the optimal policies. There is a literature initiated by Judd (1985), Chamley (1986) etc. dealing with the optimality of redistributive taxes from capitalists to workers. This literature analyzes the validity of the Judd-Chamley proposition, which states that, in the steady-state equilibrium, a pure redistributive tax on capital income is not optimal. In this paper, we consider a different type of redistributive tax: a tax designed to reduce the inequality in the stock of human capital between two groups of individuals. Inequality in the distribution of human capital is an important determinant of income inequality. The optimum tax rate is not necessarily equal to zero in this case. The optimality of such a tax financed educational subsidy policy to the backward sector is justified in a world where the human capital accumulation of the poor individual is benefited by the sacrifice of the rich individual and the government's social welfare function takes care of the interests of the rich as well as of the poor.
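The growth-inequality trade-off derived above can also be traced numerically. Holding the parameters at the same illustrative (assumed) values as before, the sketch below evaluates g = m_R x − ρ and z* of Eq. (27) over a grid of x; both rise with x, i.e., both fall as the tax financed subsidy rate 1 − x rises.

```python
# Tracing the trade-off between the growth rate g and inequality z* as x varies;
# parameter values are hypothetical.
m_R, m_P, rho = 0.10, 0.05, 0.04
q, gamma, delta = 1.0, 1.0, 0.5

for x in (0.55, 0.65, 0.75, 0.85, 0.95):
    g = m_R * x - rho                           # balanced growth rate
    z = 1 + (((m_R * x / m_P) ** (1 / delta) - 1)
             / (q * (1 - x))) ** (1 / gamma)    # human capital inequality, Eq. (27)
    print(f"x = {x:.2f}  tax rate = {1 - x:.2f}  g = {g:.3f}  z* = {z:.2f}")
```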

4 Conclusion

Existing endogenous growth models dealing with human capital accumulation do not consider dualism in human capital formation among different classes of individuals. This paper attempts to develop a theoretical model of endogenous growth involving redistributive taxation and an educational subsidy to build up the human capital of the individuals belonging to the less privileged section of the community. Here we analyze the model of an economy with two different classes of individuals in which dualism exists in the nature of human capital accumulation of those two types of individuals. The government imposes a proportional tax on the resources of rich individuals and uses that tax revenue to finance the educational subsidy given to the poor. The optimal tax financed educational subsidy rate is found by solving a Stackelberg differential game with the government being the leader. We derive some interesting properties of the optimal tax financed educational subsidy policy. It is optimal to adopt such a policy in the semi stationary equilibrium of this model. This optimal tax financed educational subsidy rate varies positively with the relative weight given to the consumption of the poor individual in the social welfare function and with the learning ability of that individual. However, this tax rate varies negatively with the learning ability of the rich individual who is the tax payer. The optimal policy also implies an interesting trade-off between growth and inequality.

The model, in this paper, does not consider many important features of less developed countries. Accumulation of physical capital is ruled out and there is no justification for this exclusion apart from the weak excuse of technical simplicity. The present model does not consider many other problems of a dual economy, e.g. unemployment of labour, market imperfections etc. Technical complications prevent us from analyzing the transitional dynamic properties of this model. Our purpose is


to focus on the role of dualism in human capital accumulation on the growth path of a less developed economy and to analyze the properties of the optimal educational subsidy policy in this context. In order to keep the analysis otherwise simple, we make all kinds of abstractions – a standard practice often followed in the theoretical literature.

References

1. Acemoglu, D. and J. Angrist, 2000. How Large are the Social Returns to Education? Evidence from Compulsory Schooling Laws, NBER Working Paper No. 7444, National Bureau of Economic Research
2. Alonso-Carrera, J. and M. J. Freire-Seren, 2004. Multiple equilibria, fiscal policy and human capital accumulation, Journal of Economic Dynamics and Control, 28, 841-856
3. Ben Gad, M., 2003. Fiscal Policy and Indeterminacy in Models of Endogenous Growth, Journal of Economic Theory, 108, 322-344
4. Benhabib, J. and A. Rustichini, 1997. Optimal Taxes Without Commitment, Journal of Economic Theory, 77(2), 231-259
5. Beyer, H., P. Rojas and R. Vergara, 1999. Trade liberalization and wage inequality, Journal of Development Economics, 59(1), 103-123
6. Blankenau, W., 2005. Public schooling, college subsidies and growth, Journal of Economic Dynamics and Control, 29(3), 487-507
7. Blankenau, W. F. and N. B. Simpson, 2004. Public education expenditures and growth, Journal of Development Economics, 73(2), 583-605
8. Boskin, M., 1975. Notes on the Tax Treatment of Human Capital, NBER Working Paper No. 116, National Bureau of Economic Research
9. Bovenberg, A. L. and B. Jacobs, 2003. On the optimal distribution of education and income, mimeo: University of Amsterdam/Tilburg
10. Bovenberg, A. L. and B. Jacobs, 2005. Redistribution and education subsidies are siamese twins, Journal of Public Economics, 89(11-12), 2005-2035
11. Brett, C. and J. A. Weymark, 2003. Financing education using optimal redistributive taxation, Journal of Public Economics, 87(11), 2549-2569
12. Chamley, C., 1986. Optimal Taxation of Capital Income in a General Equilibrium Model with Infinite Lives, Econometrica, 54(3), 607-622
13. Chamley, C., 1992. The Welfare Cost of Taxation and Endogenous Growth, Boston University, IED Discussion Paper No. 30
14. Ciccone, A. and G. Peri, 2002. Identifying Human Capital Externalities: Theory with an Application to US Cities, IZA Discussion Paper 488, Institute for the Study of Labor (IZA)
15. De Hek, P. A., 2005. On taxation in a two-sector endogenous growth model with endogenous labor supply, Journal of Economic Dynamics and Control, forthcoming
16. Dockner, Engelbert, Steffen Jorgensen, Ngo Van Long and Gerhard Sorger, 1999. Differential Games in Economics and Management Science, Cambridge University Press, Cambridge, U.K.
17. Driskill, R. A. and A. W. Horowitz, 2002. Investment in Hierarchical Human Capital, Review of Development Economics, 6, 48-58
18. Garcia-Castrillo, P. and M. Sanso, 2000. Human capital and optimal policy in a Lucas-type model, Review of Economic Dynamics, 3, 757-770
19. Glaeser, E.L., 1997. Learning in Cities, NBER Working Paper No. 6271, National Bureau of Economic Research
20. Glaeser, E.L. and D.C. Mare, 1994. Cities and skills, NBER Working Paper No. 4728, National Bureau of Economic Research


21. Glomm, G. and B. Ravikumar, 1992. Public versus Private Investment in human capital: Endogenous growth and income inequality, Journal of Political Economy, 100(4), 818-834
22. Gomez, M. A., 2003. Optimal fiscal policy in the Uzawa-Lucas model with externalities, Economic Theory, 22, 917-925
23. Guo, J. and K.J. Lansing, 1999. Optimal taxation of capital income with imperfectly competitive product markets, Journal of Economic Dynamics and Control, 23, 967-995
24. Jones, L.E., R.E. Manuelli and P.E. Rossi, 1993. Optimal Taxation in Models of Endogenous Growth, Journal of Political Economy, 101, 485-517
25. Judd, K.L., 1985. Redistributive taxation in a simple perfect foresight model, Journal of Public Economics, 28, 59-83
26. Judd, K.L., 1997. The Optimal Tax Rate for Capital Income is Negative, NBER Working Paper No. 6004, National Bureau of Economic Research
27. Karp, L. and In Ho Lee, 2003. Time-Consistent Policies, Journal of Economic Theory, 112, 353-364
28. Lachler, U., 2001. Education and earnings inequality in Mexico, World Bank Working Paper
29. Lansing, K.L., 1999. Optimal redistributive capital taxation in a neoclassical growth model, Journal of Public Economics, 73, 423-453
30. Lucas, R. E., 1988. On the Mechanics of Economic Development, Journal of Monetary Economics, 22, 3-42
31. Lucas, R. E., 1990. Supply-Side Economics: An Analytical Review, Oxford Economic Papers, 42(2), 293-316
32. Lucas, R. E., 2004. Life earnings and Rural-Urban Migration, Journal of Political Economy, 112, S29-S59
33. Marjit, S. and R. Acharyya, 2003. International Trade, Wage inequality and the Developing Economy, Physica Verlag, New York
34. Mino, K., 1996. Analysis of a Two-Sector Model of Endogenous Growth with Capital Income Taxation, International Economic Review, 37, 227-251
35. Mino, K., 2001. Optimal taxation in dynamic economies with increasing returns, Japan and the World Economy, 13, 235-253
36. Moretti, E., 2003. Human capital externalities in cities, NBER Working Paper No. 9641, National Bureau of Economic Research
37. Ortigueira, S., 1998. Fiscal policy in an endogenous growth model with human capital accumulation - Transitional dynamics with multiple equilibria, Journal of Monetary Economics, 42, 323-355
38. Park, H. and A. Philippopoulos, 2003. On the dynamics of growth and fiscal policy with redistributive transfers, Journal of Public Economics, 87, 515-538
39. Park, H. and A. Philippopoulos, 2004. Indeterminacy and fiscal policies in a growing economy, Journal of Economic Dynamics and Control, 28, 645-660
40. Pecorino, P., 1992. Rent Seeking and Growth: The Case of Growth through Human Capital Accumulation, Canadian Journal of Economics, 25(4), 944-956
41. Peri, G., 2002. Young workers, learning and agglomeration, Journal of Urban Economics, 52, 582-607
42. Robbins, D., 1994a. Malaysian wage structure and its causes, Harvard Institute for International Development
43. Robbins, D., 1994b. Philippine wage structure and its causes, Harvard Institute for International Development
44. Rosendahl, K.E., 1996. Does Improved Environmental Policy Enhance Economic Growth? Environmental and Resource Economics, 9, 341-364
45. Rudd, J. B., 2000. Empirical Evidence on Human Capital Spillovers, FEDS Discussion Paper No. 2000-46, Board of Governors of the Federal Reserve - Macroeconomic Analysis Section
46. Stokey, N.L. and S. Rebelo, 1995. Growth Effects of Flat-Rate Taxes, Journal of Political Economy, 103, 519-550
47. Uhlig, H. and N. Yanagawa, 1996. Increasing the capital income tax may lead to faster growth, European Economic Review, 40(8), 1521-1540


48. Van Long, N. and K. Shimomura, 2000. Semi stationary equilibrium in leader-follower games, CIRANO Working Paper
49. Wood, A., 1997. Openness and wage inequality in developing countries: The Latin American Challenge to East Asian Conventional Wisdom, World Bank Economic Review, 11(1), 33-57
50. Xie, D., 1997. On time inconsistency: a technical issue in Stackelberg differential games, Journal of Economic Theory, 76, 412-430
51. Zhang, J., 2003. Optimal debt, endogenous fertility, and human capital externalities in a model with altruistic bequests, Journal of Public Economics, 87(7-8), 1825-1835

Arms Trade and Conflict Resolution: A Trade-Theoretic Analysis

Sajal Lahiri

Abstract We construct a trade-theoretic model for three open economies two of which are in conflict with each other and the third exports arms to the two warring countries. War efforts – which involve the use of soldiers and military hardware – and the price of arms are determined endogenously. The purpose of war is the capture of land, but the costs are that lives are lost and production sacrificed. We examine the effect of foreign aid and a tax on arms exports on war efforts. Whereas foreign aid to the warring countries is likely to increase war efforts, a tax on arms exports is likely to have just the opposite effect. The endogeneity of arms price helps to derive the optimal level of such a tax.

1 Introduction

International and regional conflicts are more commonplace than one would like.1 Moreover, modern conflicts are more often than not capital intensive, and therefore there is a thriving international market in military hardware. The size of this market is not well measured. While the legal trade in weapons was estimated to be worth around $50 billion in the mid-1990s, the exact extent of illegal trade (which includes covert sales by governments) is still unknown but thought to be quite substantial (Brzoska, 2001). Thus, the arms market does not appear to be small. Given this, a good understanding of this market is necessary in order to examine policy options that can lead to conflict reduction or resolution. This paper is a small attempt to do so.

Sajal Lahiri
Department of Economics, Southern Illinois University Carbondale, Carbondale, USA. e-mail: [email protected]

1 According to Gleditsch (2004), there were 199 international wars and 251 civil wars between 1816 and 2002.


The literature on conflict has made attempts to examine how arms trade affects conflicts. Anderton (1995) reviewed the economic literature on arms trade and concluded that, as it stands now, it is in fact not very helpful in understanding how arms trade is related to conflict. A small section of the literature has incorporated arms trade in trade-theoretic models (see, for example, Levine and Smith, 1995). However, the focus has been mainly on the behavior of arms suppliers, and not so much on the demand side of the problem. The use of arms in war comes with costs as well as benefits. The benefits come mainly from a gain of resources, and the costs of warfare are of two types. The first is the direct cost of purchase.2,3 The second cost of warfare is the loss of lives. The human cost has not been incorporated into modern analyses of conflict.4 Collier and Hoeffler (2005) estimate the human cost as being equivalent to two years of the initial GDP for a typical developing country engaged in civil war. By failing to incorporate the human cost of warfare into previous analytical frameworks, the literature misses, apart from the humanitarian arguments that have often shaped policies, an important relationship between different inputs in war efforts. For example, modern military hardware can have a protective aspect in the sense that extensive conflicts can involve relatively minimal loss of lives of soldiers. Thus, military hardware can often make it easier for nations to engage in conflicts, as politicians may find it easier to 'sell' war to an otherwise doubtful electorate. As we shall show later, this protective nature of arms will give us a number of interesting and counter-intuitive results.

To explore the aid-arms trade-conflict nexus, we develop a trade-theoretic model. Our framework has three countries. Two of the countries are in conflict with each other and the third exports arms and gives foreign aid to the two warring countries. The war is for the capture of disputed land, and soldiers and imported arms are used to fight the war. The war equilibrium is specified as a Nash one where each warring country decides on its war effort taking as given the war effort of the adversary. The model is closely related to that of Becsi and Lahiri (2006b and 2007b), and we shall draw heavily from those papers. The main addition here is to consider the third

2 The use of soldiers has a cost which is due to foregone production as labor is diverted toward warfare. This cost has been the focus of the recent trade-theoretic literature on conflict (Skaperdas and Syropoulos (2001), Syropoulos (2004), Becsi and Lahiri (2006a, 2007a)).
3 There is now a significant theoretical and empirical literature on the economics of conflict. The theoretical literature follows the seminal work of Hirshleifer (1988) and develops game-theoretic models where two rival groups allocate resources between productive and appropriative activities (see, for example, Brito and Intriligator (1985), Hirshleifer (1995), Grossman and Kim (1996), Skaperdas (1992), Neary (1997), and Skaperdas and Syropoulos (2002)). Recent contributions by Anderton, Anderton, and Carter (1999), Skaperdas and Syropoulos (1996, 2001), Garfinkel, Skaperdas and Syropoulos (2004), and Findlay and Amin (2000) emphasize trade and conflict in two-country frameworks and Becsi and Lahiri (2007a) consider a three-country model. Anderson and Marcouiller (2005) examine the consequences of endogenous transaction costs in the form of predation on international trade.
4 Becsi and Lahiri (2006b and 2007b) are the exceptions.


country explicitly and making the determination of the international price of arms endogenous to the model. The purpose of this paper is to examine the effects of two policy options available to the international community to help resolve conflicts. The policy options we consider here are foreign aid and a tax on the exports of military hardware.5 We find that an increase in aid leads to an increase in the use of soldiers and weapons by the adversaries, and thus ultimately to an increase in conflict intensity, if military hardware has a very significant protective effect on lives. By contrast, a tax on exports has exactly the opposite effect. Although the underlying models are completely different, our current results for aid resemble the findings of Grossman (2002) and Collier and Hoeffler (2002). We also find that the endogeneity of the price of arms mitigates to some extent the conflict-increasing effect of foreign aid. The plan of the paper is as follows. In Section 2 we spell out our model structure. Section 3 performs the analysis of the model. Some concluding remarks are made in Section 4.

2 The Model

We develop a three-region, many-factor model where two of the regions – labeled region a and region b – are engaged in a war or conflict with each other. The third region, c, is a supplier of arms and foreign aid to the warring parties. All product and factor markets are perfectly competitive and the regions act like small open economies in the international markets of all goods except arms. There are many inelastically supplied factors of production; however, two of the factors, namely labor and land, play important roles in our analysis. A part of the labor endowment in regions a and b is used in production and the rest is used to fight the war, and land is what they fight for. Each warring region i (i = a, b) has an amount of land \bar{V}^i that is undisputed, and the war is about a disputed amount of land denoted by X. Without loss of any generality we shall assume that the disputed land is initially in the possession of region b. Regions a and b fight over the disputed land by employing soldiers L_s^a and L_s^b and buying military hardware A^a and A^b from country c. We define f(L_s^a, L_s^b, A^a, A^b)X as the net gain of land by country a from war. The net gain function for country a increases when more fighting forces and military hardware – L_s^a and A^a – are committed to conflict but decreases when the opposition increases its fighting forces L_s^b and hardware A^b. For this net-gain function we make the following assumptions.

5 Hufbauer, Schott, and Elliott (1990) provide a large number of case studies on the effect of external interventions on conflicts.


Assumption 1. The function f(·) satisfies: f_1 > 0, f_2 < 0, f_3 > 0, f_4 < 0, f_{33} < 0, f_{44} > 0, f_{11} < 0, f_{13} > 0, f_{24} > 0, and f_{22} > 0.

The assumption that f_{13} > 0 and f_{24} > 0 implies that soldiers and military hardware complement each other in war. The production side of the economies indexed by i = a, b, c is described by three revenue functions R^a(\bar{L}^a − L_s^a, \bar{V}^a + f(L_s^a, L_s^b, A^a, A^b)X), R^b(\bar{L}^b − L_s^b, \bar{V}^b + (1 − f(L_s^a, L_s^b, A^a, A^b))X), and R^c(p_A − t, \bar{L}^c, \bar{V}^c), where \bar{L}^i and \bar{V}^i are the endowments of labor and undisputed land respectively in country i, p_A is the international price of arms, and t is a tax on the production (exports) of arms in country c.6 We assume that the two factors are complements, i.e., R^i_{12} > 0, i = a, b.

Some of the soldiers die in the course of the war, and the representative consumers in the warring countries suffer some disutility from this. The number of soldiers that die is denoted by D^i and gives a measure of the intensity of conflict. Deaths of soldiers and the utility of the consumer, u^i, in country i (i = a, b) are determined by

D^i = \bar{g}^i(L_s^a, L_s^b, A^a, A^b),  (1)

u^i = \tilde{u}^i − h^i(D^i) = \tilde{u}^i − g^i(L_s^a, L_s^b, A^a, A^b),  i = a, b,  (2)

where \tilde{u}^i is the utility from the consumption of goods and the disutility function g^i is assumed to satisfy

Assumption 2. g^i(L_s^a, L_s^b, A^a, A^b) is additively separable, i.e., g^i(L_s^a, L_s^b, A^a, A^b) = \bar{g}^{ia}(L_s^a, A^a) + \bar{g}^{ib}(L_s^b, A^b), so that g^i_{12} = g^i_{34} = g^i_{14} = g^i_{23} = 0. It is also assumed to satisfy g^i_1 > 0, g^i_2 > 0, g^a_3 < 0, g^a_4 > 0, g^b_3 > 0, g^b_4 < 0, g^i_{11} < 0, g^i_{22} < 0, g^a_{13} < 0, g^a_{33} > 0, g^b_{24} < 0, g^b_{33} < 0, g^a_{44} < 0, g^b_{44} > 0, (i = a, b).

The assumptions that g^a_3, g^b_4, g^a_{13} and g^b_{24} are all negative capture the defensive or protective roles of military hardware. That is, military hardware is assumed to protect the lives of soldiers. This is in contrast to the net gain function f(·), which has an aggressive role in the sense that f^a_3, f^b_4, f^a_{13} and f^b_{24} are all positive.

6 All factors other than land and labor are suppressed in the revenue functions as they do not change in our analysis. Since the three countries are assumed to be small in the goods market, goods prices are exogenous and they are omitted from the revenue functions as well. Since arms is produced only in country c, its price appears inside the revenue function of that country. As is well known, the partial derivative of a revenue function with respect to the price of a good gives the output supply function of that good. Similarly, the partial derivative of a revenue function with respect to a factor endowment gives the price of that factor. The revenue functions are positive semi-definite in prices and negative semi-definite in the endowments of the factors of production. In particular, they satisfy R^i_{jj} ≤ 0, for i = a, b, c and j = 2, 3. For these and other properties of revenue functions see Dixit and Norman (1980).


Given the above utility function, the consumption side of the economies is represented by the expenditure functions E^a(u^a + g^a(·)), E^b(u^b + g^b(·)), and E^c(u^c).7 Normalizing X = 1, the income-expenditure balance equations of the consumers in the three countries are given by:

E^a(u^a + g^a(·)) = R^a(\bar{L}^a − L_s^a, \bar{V}^a + f(·)) + R^a_1 L_s^a + F^a − T^a,  (3)

E^b(u^b + g^b(·)) = R^b(\bar{L}^b − L_s^b, \bar{V}^b + 1 − f(·)) + R^b_1 L_s^b + F^b − T^b,  (4)

E^c(u^c) = R^c(p_A − t, \bar{L}^c, \bar{V}^c) − T^c,  (5)

where F^i is the amount of aid received by country i and disbursed by country c. The second term on the right hand side of (3) and (4) – R^a_1 L_s^a and R^b_1 L_s^b respectively – is the income of the soldiers in the two countries. The terms T^a and T^b are lump-sum taxes on the consumers in the two warring countries. The last term on the right hand side of (5) is the amount of lump-sum tax levied on the representative consumer in country c. We assume that the expenditure on war effort is paid for in the two warring countries by taxation of the consumers. That is, the governments' budget-balance equations are given by

T^i = R^i_1 L_s^i + p_A A^i,  i = a, b,  and  T^c = F^a + F^b − t R^c_1,  (6)

where p_A is the price of military hardware or arms. As for international prices, the warring countries are assumed to be small open economies, so that the goods prices (except that of arms) are exogenous. As for arms, we assume that the three countries are large in the international market for arms; in particular, country c is the sole producer of arms and the demand for arms comes only from the two warring countries a and b, so that we have the following market-clearing condition:

A^a + A^b = R^c_1.  (7)

This completes the description of the basic model except that the conflict equilibrium has not been described yet, and this is what we shall now do. Substituting (6) into (3)-(5) and then differentiating equations (3), (4) and (5), we obtain

7 Once again, goods prices are omitted from the expenditure functions. The partial derivative with respect to the utility level is the reciprocal of the marginal utility of income.


E^a_1 du^a = [f_1 R^a_2 − R^a_1 − E^a_1 g^a_1] dL_s^a + [R^a_2 f_3 − E^a_1 g^a_3 − p_A] dA^a + [R^a_2 f_2 − E^a_1 g^a_2] dL_s^b + [R^a_2 f_4 − E^a_1 g^a_4] dA^b − A^a dp_A + dF^a,  (8)

E^b_1 du^b = [−f_2 R^b_2 − R^b_1 − E^b_1 g^b_2] dL_s^b + [−R^b_2 f_4 − E^b_1 g^b_4 − p_A] dA^b + [−R^b_2 f_1 − E^b_1 g^b_1] dL_s^a + [−R^b_2 f_3 − E^b_1 g^b_3] dA^a − A^b dp_A + dF^b,  (9)

E^c_1 du^c = [R^c_1 − t R^c_{11}] dp_A − t R^c_{11} dt − dF^a − dF^b.  (10)

The first and the second terms in (8) and (9) give the effects of increased use of soldiers and arms by a country on its own welfare. The benefit for the country of using more soldiers is the additional output from appropriated land, while the costs are the loss of output because labor is diverted from the productive sector to the war sector and the increased disutility from the death of soldiers. The third and the fourth terms in (8) and (9) give the international externalities from war effort on the two warring countries. Higher war efforts by one country (either by the use of more soldiers or arms) reduce the adversary's utility by reducing the adversary's endowment of land and by increasing soldier deaths. An increase in the price of imported arms increases the costs of imports, and these effects are captured by the penultimate terms in (8) and (9). The direct effect of foreign aid is to increase welfare in the recipient countries, and these effects are given by the last terms in (8) and (9). The first term on the right hand side of (10) gives the usual terms-of-trade effect: country c is better off if the price of arms, which it exports, increases. The second term is the decrease in tax revenue caused by a tax-induced decrease in arms production. The last two terms in (10) are the increases in the costs of financing foreign aid.

We can now describe how the war equilibrium, or the levels of war efforts in the two warring countries, L_s^a, L_s^b, A^a and A^b, are determined. We assume that each warring country decides on the levels of its own war effort by maximizing its welfare level, taking war efforts in the other country as given. The first order conditions are given by:

E^a_1 ∂u^a/∂L_s^a = f_1 R^a_2 − R^a_1 − E^a_1 g^a_1 = 0,  (11)

E^a_1 ∂u^a/∂A^a = R^a_2 f_3 − E^a_1 g^a_3 − p_A = 0,  (12)

E^b_1 ∂u^b/∂L_s^b = −f_2 R^b_2 − R^b_1 − E^b_1 g^b_2 = 0,  (13)

E^b_1 ∂u^b/∂A^b = −R^b_2 f_4 − E^b_1 g^b_4 − p_A = 0.  (14)

The above conditions are the same as those derived in Becsi and Lahiri (2006b and 2007b). An increase in L_s^i increases income in country i (i = a, b) by increasing the amount of land, but it also has costs in the sense that it reduces the amount of labor that can be used for producing goods and services, and increases the disutility from the death of soldiers. The first term in (11) and (13) is the marginal benefit of warfare, and the second and third terms are the marginal costs. Similarly, an increase in the imports of arms has costs in terms of the direct costs of imports, but it also


benefits the country by increasing the amount of land and by reducing the death of soldiers. The first two terms in (12) and (14) are the marginal benefits, and the third terms are the marginal costs.

Henceforth we assume that the two countries are symmetric so that L_s^a = L_s^b, A^a = A^b, f(·) = 0, f_1 = −f_2, f_3 = −f_4, g_3 = −g_4, f_{12} = f_{14} = f_{32} = f_{34} = 0.8 Suppressing the country-specific superscripts a and b for variables in both warring countries, the first order conditions (11)-(14) can be rewritten as:

f_1 R_2 − R_1 − E_1 g_1 = 0,  (15)

R_2 f_3 − E_1 g_3 − p_A = 0.  (16)

These two equations can now be solved for L_s and A in terms of F and p_A. This completes the description of the model. We have five equations (3), (5), (7), (15) and (16) and five endogenous variables L_s, A, p_A, u(= u^a = u^b), and u^c.
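To illustrate how (15) and (16) jointly pin down the symmetric war equilibrium at a given arms price, the sketch below solves the two first order conditions numerically. Every functional form here is an assumption made for illustration only: a Cobb-Douglas revenue function, a contest-style net gain function of the kind given in footnote 8, and a deaths function in which arms protect soldiers; E_1 is normalized to one and all parameter values are hypothetical.

```python
# A numerical sketch of the symmetric war equilibrium, Eqs. (15)-(16).
# All functional forms and parameter values are illustrative assumptions; E_1 = 1.
from scipy.optimize import fsolve

L_bar, V_bar, X = 10.0, 5.0, 20.0  # labor endowment, undisputed land, disputed land
theta, p_A = 2.0, 2.0              # arms effectiveness in the contest; arms price

def focs(v):
    L_s, A = v
    L = L_bar - L_s                          # labor left for production
    R1 = 0.6 * L ** (-0.4) * V_bar ** 0.4    # marginal product of labor, R_1
    R2 = 0.4 * L ** 0.6 * V_bar ** (-0.6)    # marginal product of land, R_2
    # Contest-style net gain (footnote 8 form): at the symmetric point
    # f_1 = 1/(2D) and f_3 = theta/(2D), where D = 2(L_s + theta*A)
    D = 2 * (L_s + theta * A)
    f1, f3 = 1 / (2 * D), theta / (2 * D)
    # Deaths g = L_s/(1 + A): more soldiers raise deaths, arms protect them
    g1 = 1 / (1 + A)                         # g_1 > 0
    g3 = -L_s / (1 + A) ** 2                 # g_3 < 0: protective role of arms
    return (f1 * R2 * X - R1 - g1,           # Eq. (15)
            f3 * R2 * X - g3 - p_A)          # Eq. (16)

L_s, A = fsolve(focs, x0=(1.0, 1.0))
print(L_s, A)  # an interior symmetric equilibrium in soldiers and arms
```

The fixed point can be re-solved for different values of p_A or of the endowments to explore how the war equilibrium responds.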

3 Foreign Aid, Tax on Arms Exports, and War

In this section we shall examine the effect of an increase in foreign aid F, and of a tax on arms exports t, to both (symmetric) warring countries on the war equilibrium. We first of all consider the effect of foreign aid. We can separate two effects: one direct and one indirect via changes in the price of arms, and the total effects are given by

dL_s/dF = ∂L_s/∂F + (∂L_s/∂p_A)(dp_A/dF),  (17)

dA/dF = ∂A/∂F + (∂A/∂p_A)(dp_A/dF).  (18)

Some of the above terms in (17) and (18), such as ∂L_s/∂F, ∂A/∂F, ∂L_s/∂p_A and ∂A/∂p_A, have been examined in Becsi and Lahiri (2006b and 2007b), and we shall briefly reproduce them here. Differentiating (3), (5), (7), (15) and (16) and after some substitutions, we obtain

α_{11} dL_s + α_{12} dA = −(g_1 E_{11} A/E_1) dp_A + (g_1 E_{11}/E_1) dF,  (19)

α_{21} dL_s + α_{22} dA = (1 − g_3 E_{11} A/E_1) dp_A + (g_3 E_{11}/E_1) dF,  (20)

8 It can be verified that if the functions f and g take the forms f(L_s^a, A^a, L_s^b, A^b) = ((h^a(L_s^a) + k^a(A^a))/(h^a(L_s^a) + h^b(L_s^b) + k^a(A^a) + k^b(A^b)))X and g(L_s^a, A^a, L_s^b, A^b) = (\bar{h}^a(L_s^a) + \bar{k}^a(A^a))/(\bar{h}^a(L_s^a) + \bar{h}^b(L_s^b) + \bar{k}^a(A^a) + \bar{k}^b(A^b)), these restrictions will be satisfied under symmetry.


where

α_{11} = f_{11} R_2 − f_1 R_{21} + R_{11} − g_{11} E_1 − g_1 E_{11}(R_2 f_2 − E_1 g_1)/E_1,
α_{12} = f_{13} R_2 − E_1 g_{13} − g_1 E_{11}(R_2 f_4 − E_1 g_4)/E_1,
α_{21} = R_2 f_{31} − E_1 g_{31} − f_3 R_{21} − g_3 E_{11}(R_2 f_2 − E_1 g_1)/E_1,
α_{22} = R_2 f_{33} − E_1 g_{33} − g_3 E_{11}(R_2 f_4 − E_1 g_4)/E_1.

We note that, from the second order conditions relating to the war equilibrium, we must have:

α_{11} < 0, α_{22} < 0, and Δ = α_{11} α_{22} − α_{12} α_{21} > 0.  (21)

Furthermore, because of Assumptions 1 and 2 it follows that α_{12} > 0. Solving (19) and (20) simultaneously for dL_s and dA we can examine the effects of changes in foreign aid F and the arms price p_A on the levels of war activities, measured by the number of soldiers L_s and the amount of arms imports A, in the two warring countries. Turning to the effect of foreign aid first, from (19) and (20) we get

(E_1 Δ/E_{11}) ∂L_s/∂F = g_1 α_{22} − g_3 α_{12}
  = R_2 [g_1 f_{33} − g_3 f_{13}] + E_1 [g_3 g_{13} − g_1 g_{33}]
  = (g R_2 f_3/(A L_s)) [ε^g_A ε^f_{AL} − ε^g_L ε^f_{AA}] − (g g_3 E_1/(A L_s)) [ε^g_A ε^g_{AL} − ε^g_L ε^g_{AA}],  (22)

(E_1 Δ/E_{11}) ∂A/∂F = g_3 α_{11} − g_1 α_{21}
  = g_3 R_{11} + R_{21} [f_3 g_1 − f_1 g_3] + R_2 [g_3 f_{11} − g_1 f_{31}] + E_1 [g_1 g_{31} − g_3 g_{11}]
  = g_3 R_{11} + R_{21} [f_3 g_1 − f_1 g_3] + (g f_1 R_2/(A L_s)) [ε^g_A ε^f_{LL} − ε^g_L ε^f_{LA}] + E_1 [g_1 g_{31} − g_3 g_{11}],  (23)

where

ε^g_L = (∂g/∂L_s)(L_s/g) = g_1 L_s/g,  ε^g_A = −(∂g/∂A)(A/g) = −g_3 A/g,
ε^f_{AA} = −(∂f_3/∂A)(A/f_3) = −f_{33} A/f_3,  ε^f_{AL} = (∂f_3/∂L_s)(L_s/f_3) = f_{31} L_s/f_3,
ε^f_{LL} = −(∂f_1/∂L_s)(L_s/f_1) = −f_{11} L_s/f_1,  ε^f_{LA} = (∂f_1/∂A)(A/f_1) = f_{13} A/f_1,
ε^g_{AA} = −(∂g_3/∂A)(A/g_3) = −g_{33} A/g_3,  ε^g_{AL} = (∂g_3/∂L_s)(L_s/g_3) = g_{31} L_s/g_3,
ε^g_{LL} = −(∂g_1/∂L_s)(L_s/g_1) = −g_{11} L_s/g_1,  ε^g_{LA} = −(∂g_1/∂A)(A/g_1) = −g_{13} A/g_1.
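The step from the first to the second line of (22), namely the cancellation of the E_{11}(R_2 f_4 − E_1 g_4) terms, can be machine-checked by treating all the derivatives as free symbols, which is all that the step requires. A small symbolic sketch:

```python
# Symbolic check that g1*alpha22 - g3*alpha12 reduces to the second line of (22).
import sympy as sp

g1, g3, g13, g33, f13, f33, f4, g4, R2, E1, E11 = sp.symbols(
    'g1 g3 g13 g33 f13 f33 f4 g4 R2 E1 E11')

alpha12 = f13 * R2 - E1 * g13 - g1 * E11 * (R2 * f4 - E1 * g4) / E1
alpha22 = R2 * f33 - E1 * g33 - g3 * E11 * (R2 * f4 - E1 * g4) / E1

lhs = g1 * alpha22 - g3 * alpha12
rhs = R2 * (g1 * f33 - g3 * f13) + E1 * (g3 * g13 - g1 * g33)
print(sp.simplify(lhs - rhs))   # 0: the E11-terms cancel
```

The analogous cancellation behind the second line of (23) can be verified the same way.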


From (22), we find that an increase in foreign aid will decrease the employment of soldiers if ε^g_A ε^f_{AL} − ε^g_L ε^f_{AA} < 0 and ε^g_A ε^g_{AL} − ε^g_L ε^g_{AA} > 0. Also, an increase in foreign aid will increase the employment of soldiers if ε^g_A ε^f_{AL} − ε^g_L ε^f_{AA} > 0 and ε^g_A ε^g_{AL} − ε^g_L ε^g_{AA} < 0. As for the imports of arms, it is clear that if α_{21} is negative, it follows from the first line of (23) that an increase in foreign aid will increase the amount of arms imports. In general, from the last line of (23) we find that dA/dF > 0 if, for example, ε^g_A ε^f_{LL} >> ε^g_L ε^f_{LA}, so that the effect of the last term in (23) (which is the only negative term) is outweighed. Combining the results, we can state that an increase in foreign aid will raise warfare by increasing both the employment of soldiers and the imports of arms if ε^g_A is sufficiently large. Formally,

∂ Ls = −g1 E11 Aα22 − α12 [E1 − g3 E11 A], ∂ pA ∂A Δ E1 = g1 E11 Aα21 + α11 [E1 − g3E11 A]. ∂ pA Δ E1

(24) (25)

From (25) it is clear that ∂ A/∂ pA is negative if α21 < 0 which would be true if is sufficiently large. That is, an increase in the import price of arms will reduce A the imports of arms. The effect of pA on soldiers is ambiguous. However, Eq. (24) simplifies to

εg

∂ Ls E11 f g ) + g1g3 (1 − εAA )], = [g13 − R2 f31 ] [E1 − E11 Ag3 ] + [−R2 g1 f3 (1 − εAA ∂ pA E1 (26) f < 1 and ε g < 1. That is, an increase from which it follows that ∂ Ls /∂ pA < 0 if εAA AA in pA can reduce both soldiers and arms imports. Thus, we have Δ E1

Proposition 2. [Becsi and Lahiri (2006b)] A tax on the exports of arms to two symmetric warring countries, for a given level of the price of arms, will reduce the employment of soldiers and the imports of arms in both countries, if the effect of

302

Sajal Lahiri

f < 1 and arms on the protection of the lives of soldiers is very significant, and if εAA g εAA < 1.

There are two direct effects of an increase in pA on A: a substitution effect and an income effect. An increase in the price of A reduces the demand for arms in favor of the demand for soldiers, and an increase in pA also reduces income and therefore the demand for A for reasons explained before. These effects then have secondary effects via induced effects on Ls . An increase in demand for soldiers via the substitution effect reduces the demand for the imports of arms even further if α21 is negative. Thus, when α21 < 0, an increase in pA reduces the equilibrium values of arms imports. An increase in pA has a direct income effect which increases the equilibrium values of Ls . It also has a positive substitution effect on Ls : an increase in pA leads to a substitution of A by Ls . The magnitude of this substitution effect f and ε g (which determine the magnitude of changes in depends on the size of εAA AA the marginal benefits of importing A). Finally there is a negative effect via induced changes in A: an increase in Ls reduces A, which in turn reduces LS . The net effect f g turns out to be negative if εAA < 1 and εAA < 1. In the preceding analysis, we obtained partial effects of foreign aid on war efforts in the two symmetric warring countries. We also obtained the effect of an increase in the price of arms on war efforts. We shall first of all examine the effects of foreign aid and a tax on the exports of arms on the price of arms, and look at the total effect of foreign aid on war efforts. Differentiating (7) and using (23), we find

Δ1 d pA = −Rc11 dt − where

Δ1 =

2E11 [g3 α11 − g1α21 ] dF, E1 Δ

(27)

2[g1 E11 Aα21 + α11 {E1 − g3E11 A}] − Rc11 Δ E1

is the slope of the excess demand function for arms, and therefore for Walrasian stability we must have Δ1 < 0. An increase in the tax of arms exports reduces the supply of arms in the world market and this increases its price. An increase in foreign aid, on the other hand, increases the demand of arms and thus the price of arms if the protective nature of arms for soldiers is very strong (see Proposition 1). Combining the above results and assuming the protective nature of arms for soldiers’ lives to be strong, the total effect can be characterized as:

Arms Trade and Conflict Resolution: A Trade-Theoretic Analysis

dLs ∂ Ls ∂ Ls d pA = · , + dF ∂ F ∂ pA dF (+)

(−)

(−)

(28)

(+)

∂A ∂ A d pA dA = . + A· dF ∂ F ∂ p dF (+)

303

(29)

(+)

From (28) and (29), it should be clear that endogenizing the price of arms reduces the adverse effect of aid on war efforts. This is because the initial foreign aid-induced increase in the demand for arms increases the international price of arms and this in turn reduces the use of both arms and soldiers in war efforts. Although the net effect of an increase in foreign aid on war efforts is ambiguous, Eq. (29) can be simplified to obtain an unambiguous effect. Substituting the individual terms in (29), we get: [g3 α11 − g1α21 ]E11 Rc11 dA =− , dF Δ 1 Δ E1

(30)

which is positive under the hypothesis of Proposition 1, i.e., when the protective nature of arms is significant. In other words, when the protective nature of arms is significant, foreign aid as a means to reduce conflict is counterproductive even when the price of arms is endogenous. Turning now to the instrument of a tax on exports of arms, from (27) we know that such a tax will unambiguously increase the international price of arms and this price increase will reduce war efforts in both warring countries if the hypothesis in Proposition 2 (which includes the condition that arms is highly protective of soldiers’ lives) is satisfied. These results are summarized in the following proposition. Proposition 3. When the the international price of arms is endogenous, while an increase in foreign aid to the warring countries is likely to increase war efforts in the two warring countries, an increase in the tax on the exports of arms to the warring countries is likely to have the opposite effect. We conclude the paper by considering the effect of a on tax on the export of arms on the welfare of the exporting country, i.e., country c. From (10) and (27), we find E1c

[Rc + tRc11]Rc11 duc =− 1 − tRc11, dt Δ1

and the optimal level for the tax on exports of arms can be derived as t∗ = −

r1c Δ E1 > 0. 2[g1 E11 Aα21 + α11 {E1 − g3E11 A}]

(31)

That is, it is optimal for the arms exporting country to impose a positive tax on exports. The intuition is similar to the one for optimal tariffs from the trade theory literature: by imposing a tax the exporting country is able to improve its terms of

304

Sajal Lahiri

trade. That is, the endogeneity of arms price pA gives more incentive to the rest of the world to impose a tax on its exports of military hardware.

4 Conclusion Territorial conflicts between nations or regions are unfortunately commonplace. Such conflicts often increases the demand of military hardware in the world market. Military hardwares can often make it easier for nations to engage in conflicts as modern military hardwares can have a protective aspect in the sense that extensive conflicts can involve relatively minimal loss of lives of soldiers. This aspect of the role of military hardware can make it easier for politicians to ‘sell’ war to a otherwise doubtful electorate. In this paper, we contribute to the theoretical literature on conflicts by explicitly considering the price of arms as an endogenous variable. We then examine the effects of two policy instruments for the rest of the world on the levels of warfare in two symmetric warring countries. The protective nature of military hardware, gives rise to a number of interesting results. For example, we find that foreign aid to the warring countries can actually increase both the employment of soldiers and the imports of military hardware in the two warring countries. We also find that an increase in the tax on exports of military hardware can have exactly the opposite effect. Thus, control of arms exports may be a better instrument for conflict resolution than foreign aid. However, we also find that the endogeneity of the international price of military hardware mitigates to some extent the war-increasing effects of foreign aid. The endogeniety of the international price of military hardware also gives more incentive to the rest of the world to impose a tax on the exports of such hardware: the manipulation of the world price of military hardware via taxation is welfare improving for the military hardware exporting country. We obtain an expression for the optimal level of such taxation.

References 1. Anderton C.H., (1995) “Economics of Arms Trade”, In Handbook of Defense Economics, Vol. 1, K. Hartley and T. Sandler (eds.), 523-561, Amsterdam, North-Holland 2. Anderson, James E. and Douglas Marcouiller (2005), “Anarchy and Autarky: Endogenous Predation as a Barrier to Trade,” International Economic Review, 46 (1), 189-213 3. Anderton, Charles H., Roxane A. Anderton and John R. Carter (1999), “Economic Activity in the Shadow of Conflict,” Economic Inquiry, 37 (1), 166-79

Arms Trade and Conflict Resolution: A Trade-Theoretic Analysis

305

4. Becsi, Zsolt and Sajal Lahiri (2006a), “The Relationship Between Resources and Conflict: A Synthesis,” Discussion Paper No. 2006-03, Department of Economics, Southern Illinois University Carbondale 5. Becsi, Zsolt and Sajal Lahiri (2006b), “Conflicts in the presence of arms trade: policy options for the international community,”presented at the Midwest International Economics Group meeting held at Purdue University during 13-15 October, 2006 6. Becsi, Zsolt and Sajal Lahiri (2007a), “Bilateral War in a Multilateral World: Carrots and Sticks for Conflict Resolution”, Canadian Journal of Economics, 40, 1168-1187 7. Becsi, Zsolt and Sajal Lahiri (2007b), “Conflict in the Presence of Arms Trade: Can Foreign Aid Reduce Conflict?” pp. 3-15 in: S. Lahiri (editor), Theory and Practice of Foreign Aid, Elsevier, The Netherlands 8. Brito, Dagobert L. and Michael D. Intriligator (1985), “Conflict, War, and Redistribution,” American Political Science Review, 79 (4), 943-957 9. Brzoska, Michael (2001), “Taxation of the arms trade: An overview of the issues,” Paper prepared for the United Nations ad hoc Expert Group Meeting on Innovations in Mobilizing Global Resources for Development, 25-26 June 2001 10. Collier, Paul and Anke Hoeffler (2002), “Aid, Policy and Peace: Reducing the Risks of Civil Conflict,” Defense and Peace Economics, 13 (6), 435-450 11. Collier Paul and Anke Hoeffler (2005), “Civil War,” Draft Chapter for the Handbook of Defense Economics, University of Oxford mimeo 12. Dixit, Avinash K. and Victor Norman (1980), Theory of International Trade, Cambridge University Press 13. Findlay, Ronald and Mohamed Amin (2000),“National Security and International Trade: A Simple General Equilibrium Model,” Columbia University, Department of Economics 14. Garfinkel, Michelle R., Stergios Skaperdas, and Constantinos Syropoulos (2004), “Globalization and Domestic Conflict,” mimeo 15. Gleditsch, Kristian (2004), “A Revised List of Wars Between and Within Independent States, 1816-2002,” International Interactions, 30 (3), 231-262 16. Grossman, Herschel I. (1992), “Foreign Aid and Insurrection,” Defence Economics, 3(4), 275288 17. Grossman, Herschel I. and Minseong Kim (1996), ”Swords or Plowshares? A Theory of the Security of Claims to Property,” Journal of Political Economy, 103 (6), 1275-1288 18. Hirshleifer, Jack (1988), “The Analytics of Continuing Conflict,” Synthese, 76, 201-233 19. Hirshleifer, Jack (1995), “Anarchy and its Breakdown,” Journal of Political Economy, 103 (1), 26-52 20. Hufbauer, Gary C., Jeffrey J. Schott and Kimberley Ann Elliott (1990), Economic Sanctions Reconsidered: History and Current Policy, Second Edition, Washington, DC: Institute for International Economics 21. Levine, Paul and Ron Smith (1995), “The Arms Trade and Arms Control,” Economic Journal, 105 (2), 471-484 22. Neary, Hugh M. (1997), “A Comparison of Rent-Seeking Models and Economic Models of Conflict,” Public Choice, 93 (3/4), 373-388 23. Skaperdas, Stergios (1992), “Cooperation, Conflict, and Power in the Absence of Property Rights,” American Economic Review, 82 (4), 720-739 24. Skaperdas, Stergios and Constantinos Syropoulos (1996), “Competitive Trade with Conflict,” in Michelle R. Garfinkel and Stergios Skaperdas, ed., The Political Economy of Conflict and Appropriation, Cambridge: Cambridge University Press, 73-96 25. 
Skaperdas, Stergios and Constantinos Syropoulos (2001), “Guns Butter, and Openness: On the Relationship Between Security and Trade,” American Economic Review, Papers and Proceedings, 91 (2), 353-357 26. Skaperdas, Stergios and Constantinos Syropoulos (2002), “Insecure Property and the Efficiency of Exchange,” Economic Journal, 112 (January), 133-46 27. Syropoulos, Constantinos (2004), “Trade Openness and International Conflict,” presented at the conference ‘New Dimensions in International Trade: Outsourcing, Merger, Technology Transfer, and Culture,’ held at Kobe University, Japan during December 11-12, 2004

Trade and Wage Inequality with Endogenous Skill Formation Brati Sankar Chakraborty and Abhirup Sarkar

Abstract The present paper develops a two-sector model with one constant returns sector producing basic goods and another increasing returns to scale sector producing fancy goods. A quasi-linear utility function is used to capture the divide between basic and fancy goods. There are two types of productive factors, skilled and unskilled labour, the former working in the skill using fancy goods sector and the latter in the basic good producing sector. Agents differ in their costs of acquiring skill. The model holds possibilities of multiple equilibria and shows that international trade, in spite of equalizing factor prices, also increases the skill premium in all countries.

1 Introduction The present paper is an attempt to explain theoretically the empirical observation that the relative wage of the skilled to unskilled labour has been increasing almost everywhere in the world over the last few decades. In particular, the paper provides an explanation, in terms of opening up of trade, as to why this skill premium might go up in both skill scarce and skill abundant countries. The Heckscher-OhlinSamuelson (HOS) model, the widely accepted traditional framework of trade theory, is unable to explain this. The Stolper-Samuelson theorem, which is an integral part of this traditional theory, would predict an asymmetric change in the relative wage in the skill scarce and skill abundant countries as trade opens up between them. It would predict an increase in skill premium in the skill abundant country and a fall Brati Sankar Chakraborty Economic Research Unit, Indian Statistical Institute, Kolkata, India. e-mail: [email protected] Abhirup Sarkar Economic Research Unit, Indian Statistical Institute, Kolkata, India. e-mail: [email protected]

306

Trade and Wage Inequality with Endogenous Skill Formation

307

in the skill scarce country as a consequence of trade. Clearly we have to come out of this traditional framework to explain the uniform increase in the relative wage of the skilled labour in both sides of the international boarder. In an earlier work (Chakraborty and Sarkar (2007) we proposed a two-sector model of trade with a Constant Returns (CR) sector and an Increasing Returns to Scale (IRS) sector with goods differing in their income elasticities. We departed from the usual trade theoretic mode of treating skilled and unskilled labour as substitutable factors. In a very stylized way we introduced the notion that skilled labour has a larger spectrum of occupational options than unskilled labour in the sense that skilled labour can work both in skilled and unskilled jobs, deciding what they do by the rewards that are held by the two options, whereas unskilled labour is necessarily tied down to an unskilled job, a presumption that we did not think needed any intellectual labour to defend. We showed that this feature interacting with increasing returns to scale gives rise to myriad possibilities of multiple equilibria, and very naturally renders an avenue through which skill premium can rise in all countries following trade. Notably, we show this in a Factor Price Equalization (FPE) framework. The present paper is an extension of our earlier work. In our earlier model we interpreted skill as an inherent ability an individual is born with. In other words, in our earlier model we did not allow the possibility of endogenous skill formation as a result of conscious decisions by economic agents. The present paper fills this gap. We assume that acquiring skills is costly and this cost varies across individuals. As a result, in equilibrium some individuals will acquire skills and others will not. The main purpose of the paper is to show that as trade opens up it becomes more attractive for each agent to acquire skills. As a result a larger set of agents will acquire skills in equilibrium which, due to the presence of positive externalities, will increase the skilled wage in each trading country. With factor prices fully equalized through trade, this will increase the skill premium all over the trading world. The other departure from the existing literature is that while in the existing literature trade can explain a one shot increase in wage inequality, our model is able to demonstrate that the wage increase would be sustained over a period of time if globalization itself is gradual. In other words, we are able to show that as more and more countries open up to trade, wage inequality in each country will keep on increasing. Other papers trying to explain a symmetric increase in skill premium rely on the breakdown of factor price equalization. Jones (1999) proposes an interesting variant of the HOS model with 3 goods and 2 factors (skilled and unskilled labour) with the goods uniquely ranked in terms of intensities. The good with middle ranked intensity is produced in both the countries and one with the highest and lowest skill intensity are produced in the developed North and the less developed South respectively. The South exports the good with middle intensity. Trade liberalization by North leads to an improvement in terms of trade for the South leading to a rise in demand for skilled labour in the South and in the North, lower import tariff reduces the domestic price of the middle good and which in turn is the unskilled labour intensive good for the North. 
Consequently unskilled wage rate in the North goes down, leading to a rise in the wage gap. This model, thus can account for the symmetric movement in the wage

308

Brati Sankar Chakraborty and Abhirup Sarkar

gap. But also note that if the North were to export the middle intensive good and the South reduced tariff on the middle intensive good, wage inequality would fall in both the countries. Though once again relative wage movements are symmetric, whether inequality rises or falls crucially depends on the trade pattern. Interesting variants on similar theme have been worked out in several other papers. Feenstra and Hanson (1996) in an oft cited paper redesigns the Dornbusch et al.(1980) model by adding a third factor capital. In their model a single manufacturing output is assembled from a continuum of intermediate inputs. Such inputs are produced by skilled labour, unskilled labour, and capital. In equilibrium the South produces and exports a range of inputs and the North does the rest. A rise in the stock of capital in the South shifts the intermediate intensity goods from the developed North to the underdeveloped South raising relative demand for skilled labour in both countries, thus symmetrically increasing the wage gap. What is also crucial to note is that most of these models abandon the FPE framework. Trefler and Zhu (2001) closely builds up on the Feenstra, Hanson insight. In their model similar product shifting from North to South is initiated by technological catch up in the South. These models are essentially stepped in the tradition of standard competitive markets and Constant Returns to Scale (CRS) technology. Returns to scale, it seems, is a natural point to depart from these models. Krugman (1981) has shown that trade under Increasing Returns to Scale (IRS) might lead to co-movements in absolute factor prices, antithetically to the Stolper-Samuelson theorem. But even then in Krugman (1981) the relative factor prices follow the same pattern as HOS model would predict. Similarly in Ethier (1982), for a small open economy Stolper-Samuelson theorem remains valid, even with IRS in production, to the extent the equilibrium is Marshall stable. These models thus cannot account for symmetric movements in the wage gap across countries. In what follows, Section 2 lays down the model, Section 3 solves for the autarky, Section 4 is a discussion on the trade equilibrium and the last section concludes the paper.

2 The Model The Economy: Basic Description The economy is populated with a continuum of agents differing in their abilities to acquire skill. The agents are distributed over the unit interval [0, 1] according to some distribution function F(h), with density function f (h), hε [0, 1]. Each type of agent is initially endowed with one unit of unskilled labour. H is the total amount of unskilled labour available to the economy. An agent of type h has to expend e(h) of this unskilled labour to become skilled. So if this agent decides to acquire skill, after doing so, she is left with [1 − e(h)] amount of skilled labour. Alternatively, we may assume that an h-type agent has to buy and use up e(h) amount of skilled labour to transform her one unit of unskilled labour into one unit of skilled labour. It is assumed that e′ (h) < 0, that is, as we go up along the interval [0, 1] the basic ability

Trade and Wage Inequality with Endogenous Skill Formation

309

level of a worker increases. As we shall see below, the two alternative specifications of skill formation are equivalent in terms of the working of our model. For now, however, it suffices to note that the total amount of skilled and unskilled labour in this model is endogenous because depending upon her type and the relative market wage of skilled and unskilled labour an agent may or may not decide to acquire skill. Production The economy produces two goods: 1 and 2, where X1 and X2 are the outputs respectively. Good 1 is produced using unskilled labour under constant returns to scale technology. We choose units such that one unit of unskilled labour is required to produce one unit of good 1. Thus the price of good 1 equals the unskilled wage rate wu . We also choose good 1 to be the numeraire. This implies that wu is equal to unity. Good 2 on the other hand is produced using differentiated intermediate inputs. The production technology for X2 follows Dixit-Stiglitz (1977) specification and is given by X2 =



n



1/ρ

ρ yi

where 0 < ρ < 1

(1)

i=1

and where yi is the amount of input of intermediate good i. As in Ethier (1982), we assume that all intermediate goods have identical cost functions. The cost of producing the quantity x of a given variety of intermediate input is Cx = (a + bx)wx , where a and b are the fixed and marginal requirements of skilled labour respectively. An individual producer of X2 maximizes profits subject to the production function considering n to be parametrically given. This gives rise to the inverse input demand function for each intermediate input (see Helpman & Krugman (1985)) yi =

(qi )−σ ∑ni=1 qi yi σ ∑ni=1 q1− i

(2)

where qi is the price of the ith intermediate input and σ = 1−1 ρ is the elasticity of substitution between any pair of intermediate inputs. Assuming large number of intermediate good producers, such that strategic behavior is ruled out on their part, it can be easily shown that σ is the elasticity of demand faced by the intermediate producers. Thus each producer of intermediate inputs equate marginal revenue to marginal cost   1 qi 1 − = bws . σ Taking note of the fact that σ =

1 1−ρ ,

we may write

qi =

bws . ρ

(3)

310

Brati Sankar Chakraborty and Abhirup Sarkar

Thus prices of intermediate goods are a constant mark up over the marginal cost. With identical technology all firms charge the same price for intermediate goods i.e. qi = q ∀ i. Free entry in the production of intermediate inputs drives down profits to zero (the Chamberlinian large group case). Thus the operating surplus must be just enough to cover the fixed cost q xi = aws . (4) σ This also implies that output xi is the same for all producers, i.e. xi = x ∀ i. Dividing Eq. (4) by (3) we get, x=

aρ . b(1 − ρ )

(5)

The symmetry in output choice across firms, i.e. xi = x and demand supply equilibrium in intermediate goods market implying yi = x ∀ i , taken together, would allow us to express Eq. (1) as X2 = nα x (6) where α ≡ ρ1 > 1. Further, note that (5) implies output per firm (x) is a constant. Thus Eq. (6) implies that any expansion of X2 would be in terms of increased n, and this, as has already been noted, implies increasing returns to scale at the industry level in X2 production. If in equilibrium n is the effective number of produced varieties, then the total amount of skilled labour used in the production of intermediate goods is given by H = n(a + bx).

(7)

Note that x is a constant, which in turn implies that all changes in n and thereby in the output of X2 are brought about singularly by changes in H. Preferences All agents share the same quasi-linear utility function given by β

W = C1 + C2 where 0 < β < 1.

(8)

First order conditions for utility maximization imply β −1

β C1



1=λp

(9) (10)

where p is the relative price of good 2 and λ is the associated Lagrange multiplier. Eqs. (9) and (10) imply p=

1 1−β . C β 1

(11)

Trade and Wage Inequality with Endogenous Skill Formation

311

3 Autarky The general equilibrium supply curve We now make a distinction between the supply price ps and the demand price pd . An expression of the former is obtained in this sub-section while that for the latter is derived in the next. Noting that zero profit condition prevails in the production of final output X2 we have ps X2 = nqx (12) where the left hand side is the total revenue and the right hand side is the total cost. Substituting from Eq. (6), Eq. (12) boils down to ps = n1−α q.

(13)

Using Eqs. (3), (5) and (7), Eq. (13) can be rewritten as ps = ZH 1−α ws

(14)

 1−α b where Z ≡ 1−a ρ ρ. Eq. (14) is nothing but the price and average cost equality. The general equilibrium Demand Curve On the demand side Eq. (11) can be integrated with the factor market equilibrium to arrive at an expression for the general equilibrium demand. First note that both skilled and unskilled labour must consume the same amount of good 1. This follows from noting Eq. (11). Consumption of good 1 is a function of price alone. All consumers facing the same price will consume the same amount of good 1. Denoting the total amount of unskilled labour available for production by U, we can write the consumption of good 1 by each agent, C1 = U . Where the numerator is the H total production of good 1, and the denominator gives the total number of people consuming good 1. Inserting this expression of C1 in Eq. (11) we arrive at pd =

1 β



U H

1−β

.

(15)

Skill Formation A worker of type h will acquire skill if and only if ws (1 − e(h)) ≥ wu .

(16)

Since type h worker has to spend e(h) of her own raw labour (or alternatively, has to use e(h) units of skilled labour) to become skilled. Hence her net income after becoming skilled is given by the left hand side of (16). If the worker decides to acquire skill, her net income has to be greater than or equal to the income she can earn by remaining unskilled. We assume that e′ (h) < 0, e(0) = 1 and e(1) = 0. Thus the worker at the lowest end of the spectrum, the one with the least ability, has no

312

Brati Sankar Chakraborty and Abhirup Sarkar

incentive to acquire skills while the one with the highest ability will always acquire skills. Let h∗ be the type of worker who is indifferent between remaining unskilled and acquiring skill. Then we have ws =

1 1 − e(h∗)

(17)

where by the choice of numeraire wu = 1. Since e(h) is decreasing in h, all workers with h > h∗ must acquire skill and all workers with h < h∗ must choose to remain unskilled. Therefore, the supply of skilled labour is given by H =H

1 h∗

(1 − e(h)) f (h)dh

(18)

and the supply of unskilled labour is given by U =H

h∗ 0

f (h)dh.

(19)

Plugging the expressions for skilled and unskilled labour in demand and supply Eqs. (14) and (15) and using (17) we obtain supply and demand prices solely as functions of the single variable h∗ : ps (h∗ ) =

 1−α Z H h1∗ (1 − e(h)) f (h)dh

pd (h∗ ) =

1 β



1 − e(h∗) h∗

0

f (h)dh

1−β

.

(20)

(21)

The shape of the pd (h∗ ) function is unambiguous. An increase in h∗ increases the demand price. Also the graph of pd (h∗ ) starts at zero with h∗ = 0 and goes up to a finite number at h∗ = 1. On the other hand, it is clear from (20) that an increase in h∗ has in general an ambiguous effect on the supply price ps . Recalling that the exponent (1− α ) < 0 we find that both the numerator and denominator go up with an increase in h∗ making the overall change in the supply price ambiguous. However, from our assumed boundary conditions e(0) = 1, e(1) = 0 we can easily verify that ps → ∞ as h∗ → 0 or 1. This suggests a possible U-shape of the supply function. This is a mere suggestion though. Indeed, without knowing the exact form of the distribution function and the e(h) function we can not specify the exact nature of the supply function and in particular how many ups and downs the graph of ps (h∗ ) exhibits. For working convenience we shall stick to a U-shaped ps (h∗ ) graph for the moment. In fact the readers will readily recognize that the analysis done with an Ushaped curve immediately extends to the case with a ps curve which might have many ups and downs.

Trade and Wage Inequality with Endogenous Skill Formation

313

Uniform Distribution, Linear Ability Function Suppose the distribution function is uniform, that is, f (h) = 1 and e(h) = 1 − h. Then the supply price is given by

ps (h∗ ) = ZH

1−α

#

1 2

∗2

− h2

$1−α

. (22) h∗ Straightforward calculations show that the function ps (h∗ ) reaches a unique minimum at h∗ = √2α1 −1 . Since α > 1, we may conclude that the graph of the supply price function is U-shaped in the interval 0 < h∗ < 1. Equilibrium We proceed with the assumption that the supply price function is U-shaped. The demand price function, as we have already seen, is upward rising. Putting the two together we find that three types of equilibria are possible. In Figure 1(a) there are three equilibria occurring at points A, B and C. Equilibrium points A and C are Marshallian stable, while in the same sense B is unstable. If we start with an h∗ between A and B, demand price exceeds supply price and hence there is an expansion of output. This expansion entails a rise in skill formation and a consequent fall in h∗ so that we move towards the equilibrium point A. Similarly, when we start to the left of A, supply price exceeds demand price and there is a contraction of output leading to an expansion of h∗ . Again, C is an equilibrium because even though at C supply price exceeds demand price, h∗ has no further possibility of increasing. It can also be seen that it is stable. C is an equilibrium point where there is no skill formation in the economy and no production of good 2. Finally, equilibrium at point B is unstable, the economy will either settle at A or C following a small perturbation from B. In Figure 1(b) the only equilibrium is at C where the economy remains primitive and unskilled. In Figure 1(c) apart from the primitive equilibrium at C there is an equilibrium at B which is stable from the left hand side but unstable from the right. In what follows, we are going to ignore the unstable equilibria and focus only on the stable ones. From Eq. (20) it follows that an increase in the size of the economy, that is, an increase in H shifts the supply price function downwards. This implies that smaller economies are likely to have equilibrium represented by Figure 1(b). In other words, small isolated economies are likely to remain primitive. This is due to economies of scale the advantage of which can not be appropriated by a small isolated economy. For a large economy, the possibility of remaining primitive is still there, as shown by point C in Figure 1(a). However, for these economies the bad equilibrium can materialize only as a result of coordination failure. A consorted effort by the economic agents can bring the economy to the good equilibrium at A. Our findings so far can be summarized in the following proposition: Proposition 1. In our economy two types of equilibrium are possible: a bad equilibrium where no skill is acquired by the workers and a good equilibrium where there is skill formation. For small isolated economies, the bad equilibrium is the only pos-

314

Brati Sankar Chakraborty and Abhirup Sarkar

sible outcome. For large economies, equilibrium can be either good or bad and the bad equilibrium can occur only due to coordination failure. Under a general ability function and a general distribution function, the ps curve might have many ups and downs but with ps → ∞ as h∗ → 0 or 1 as shown in Figure 1(d). This adds no additional insight and the analysis remains the same as in an U-shaped ps curve. Only that in this case, there are many more stable and unstable equilibria.

4 Trade Integrated World Economy and Factor Price Equalization We now introduce international trade. Let two countries having the above structure engage in commodity trade. We call them home and foreign (whenever needed, we denote them by h and f respectively). Countries are similar preference and techh f nology wise but can possibly differ in their quantities of labour endowments H , H . The cost of acquiring skill is also the same in the two countries. We assume all goods (final goods 1 and 2, and also the intermediate goods) to be freely tradable. pd

pd, ps

ps C

pd, ps ps

B

pd

A

C

0

1

0

h∗

(a)

1

h∗

1

h∗

(b)

pd, ps

pd, ps ps ps pd

pd

C

B 0

1

h∗

0

(d)

(c) Fig. 1

Trade and Wage Inequality with Endogenous Skill Formation

315

With commodity trade equalizing good 1 prices in home and foreign, it must be the case that unskilled wages are equalized in both the countries. Now let us focus on the market clearing conditions for intermediate goods. xh = yhh + yhf f

(23a)

f

x f = yh + y f

(23b)

where xh and x f are the supplies of a representative brand of intermediate input of j home and foreign country respectively, and yi denotes the amount of intermediate input produced in the jth country and used by the ith country producers of good 2. Thus the LHS in Eqs. (23a-23b) denotes the total world supply and the RHS denotes the total world demand of intermediate inputs respectively. Now from the demand Eq. (2) and noting that zero profit condition prevails in the production of good 2, and that trade equalizes price of commodity 2 in both the countries, the set of Eqs. (23) reduces to xh =

(qh )−σ pX2f (qh )−σ pX2h + nh (qh )1−σ + n f (q f )1−σ nh (qh )1−σ + n f (q f )1−σ

xf =

(q f )−σ pX2 (q f )−σ pX2h + h h 1−σ h h 1− f f 1− σ σ n (q ) + n (q ) n (q ) + n f (q f )1−σ

(24a)

f

(24b)

where q j now denotes the price of a representative brand of intermediate input produced in the jth country, X2j is the output of good 2 produced in the jth country and n j is the number of varieties produced in the jth country. It is clear from Eq. (5) that with identical technology, xh = x f . Therefore the set of Eqs. (24) imply qh = q f . Now noting that intermediate good prices are set as a constant mark-up over skilled wages (see Eq. (3)), skilled wages are also equalized across countries. Therefore the following proposition is immediate. Proposition 2. Free trade in final and intermediate goods equalizes skilled and unskilled wages in both the countries. With price of intermediate goods produced in both the countries now equal (i.e. qh = q f ), it must be true that a country will be using the same amount of each brand of intermediate input whether produced at home or foreign, in the production of final good 2. Which imply yhh = yhf and y ff = yhf . Now with zero profit condition prevailing in the final good 2 production j

f

pX2 = nh qh yhj + n f q f y j : j = h, f .

(25)

316

Brati Sankar Chakraborty and Abhirup Sarkar

Fig. 2

Recalling that qh = q f and yhj = y fj : j = h, f , and using the production function given in Eq. (1), Eq. (25) reduces to p = (nh + n f )1−α qh

(26) .

This is evidently the trade counterpart of Eq. (13). As in Eq. (14), Eq. (26) reduces to p = Z(H h + H f )1−α ws

(27)

where H h and H f are the amount of skilled labour employed in intermediate goods production in the home and in the foreign country respectively. Since the process of skill formation is the same in the two countries, and in particular the function e(h) is identical, equalization of skilled wage implies that the same h∗ prevails in equilibrium in the two countries (see Eq. (17)). This yields the following supply price function for the integrated world economy h

ps (h∗ ) =

f

Z[H + H )

1

h∗ (1 − e(h)) f (h)dh] 1 − e(h∗)

1−α

.

(28)

Comparing Eqs. (20) and (28) it is clear that the supply price of good 2 is lower, for any given h∗ , in the integrated world economy than in the isolated domestic economies. This is clearly due to an increase in the scale of operation. On the other hand, the demand Eq. (21) remains the same even after trade opens up. Wage Inequality and Gains from Trade It is clear from the above analysis that international trade shifts the graph of the supply price function downwards, keeping that of the demand price function unchanged. This is shown in Figure 2.

Trade and Wage Inequality with Endogenous Skill Formation

317

If the economy was at an interior stable equilibrium (which we have assumed to be the case), like at A (in Figure 2), the new equilibrium shifts to A′ , leading to smaller equilibrium h∗ and thereby higher skill formation. According to Samuelson’s correspondence principle (1947), unstable production equilibria are almost never to be observed in the real world. Even if the initial equilibrium were to be unstable, Samuelson (1971) argues that perverse comparative statics results would never obtain. This is the global correspondence principle. Taking the clue from the global correspondence principle, we can show that even if the initial equilibrium is at B (in Figure 2) which is unstable, opening up to trade will lead to higher skill formation, and the new equilibrium will be arrived at A′ . To see this, note that as the pS curve shifts down, the demand price pd exceeds the supply price pS at initial h∗ (i.e., at B). This then according to the proposed Marshallian adjustment rule, should increase the output of X2 , which implies a higher skill formation and thereby lower h∗ . Argument on a similar line has been used in Ide and Takayama (1991). Also see Wong (1995), (pp. 224) and the reference therein. Barring the case where the world economy is too small to accommodate any skill formation (i.e. barring the case where even in the integrated world economy the only equilibrium occurs at the corner point C with h∗ = 1) there is an increase in skill formation (a decrease in h∗ ) in each country. A decrease in h∗ , in turn, leads to an increase in e(h∗ ) and a consequent rise in the skilled wage in both countries. This is evident from Eq. (17). Thus international trade increases wage inequality in both countries. The careful reader can easily figure out that our analysis of trade equilibrium can be extended without any difficulty to any number of countries. Indeed, the higher is the number of countries participating in free trade, the higher is the level of skill formation and the greater is the extent of wage inequality in each country. Our findings are recorded in the following proposition: Proposition 3. Free international trade increases skill formation, the skilled wage and the skilled-unskilled wage inequality in all countries participating in trade. The higher is the extent of international integration, the higher is the level of skill formation and wage inequality in each country. A corollary of the above analysis is that small, isolated economies might not be able to reap the benefits of increasing returns when they are separated from one another, so much so that in the extreme situation no skill formation may take place in each isolated country. As the world gets more and more integrated, the advantages of scale can be appropriated by each trading partner and skill formation will take place everywhere in the integrated world. Our next observation is about gains from trade. It is straightforward to verify that after trade opens up, in each country unskilled wage increases in terms of good 2 and remains constant in terms of good 1 while skilled wage increases in terms of both goods. Therefore, everyone, skilled or unskilled, gains from trade.

318

Brati Sankar Chakraborty and Abhirup Sarkar

Proposition 4. In spite of increasing wage inequality international integration raises real wages of both skilled and unskilled labour in each country participating in trade. Finally, the careful reader can easily figure out that the two-country analysis developed above can be extended without any difficulty to a multi-country scenario. In w i particular, if there are m countries participating in trade, letting H = ∑m i=1 H where i w H , H are labour endowments in the ith country and in the world respectively, Eq. (28) can be rewritten as

ps (h∗ ) =

 1−α w Z H h1∗ (1 − e(h)) f (h)dh 1 − e(h∗)

.

(29)

Eq. (29) represents the world supply function while the world demand function continues to be represented by (21). Now suppose the world gets more and more globalized over time. This implies new countries joining free trade and a consew quent increase in n and H which, in turn, leads to an increase in the skill premium in all countries participating in free trade. If we accept that globalization is a gradual process, not all countries opening up suddenly at the same point in time, but gradually joining the set of free traders sequentially, then our analysis suggests a sustained increase in the skill premium all over the trading world. All this may be summarized into the following proposition: Proposition 5. If globalization is gradual with countries opening up to trade sequentially, there will be a sustained increase in skill premium in all countries participating in free trade.

5 Conclusion We developed a two-good-two-factor trade model consisting of a basic good produced by unskilled labour using constant returns technology and a fancy good produced by skilled labour using increasing returns to scale technology. The demand pattern is generated out of a quasi-linear utility function. Thus the basic good is so called because it is always consumed in positive quantities for any positive income. The other good may not be consumed at all. An individual acquires skills by incurring a cost and this cost varies across individuals. As a result, in equilibrium only a subset of individuals acquire skills. We show that in such a model international trade increases the return from skill formation and as a consequence, as trade opens up skill formation goes up in all countries participating in trade. Due to positive externalities, this increased skill formation, in turn, increases skilled wage all over the world. We have arrived at the result of symmetric rise in wage inequality remaining within the FPE framework, which most of the models in the relevant literature abandon. Finally, we have shown that if the process of globalization is gradual in the

Trade and Wage Inequality with Endogenous Skill Formation

319

sense that countries open up to trade sequentially, there will be a sustained increase in skill premium in all countries as opposed to a once for all increase as obtained in other trade models explaining wage inequality. In other words, our model can explain a sustained increase in wage inequality, which the other models can not.

References 1. Chakraborty, Brati Sankar, Sarkar Abhirup (2007). Trade, Wage Inequality and the Vent for Surplus. In S. Marjit and Eden yu (Eds): Contemporary and Emerging Issues in Trade Theory and Policy, Emerald Group Publishing Limited, UK 2. Dixit, A., Stiglitz, J.E. (1977). Monopolistic competition and optimum product diversity. American Economic Review, 67, 297-308 3. Dornbusch, R., Fischer, S., Samuelson, P.A. (1980). Heckscher- Ohlin trade theory with a continuum of goods. Quarterly Journal of Economics, 95, 203-224 4. Ethier, W. J. (1982). National and international returns to scale in the modern theory of international trade. American Economic Review, 72, 389-405 5. Feenstra, R., and Hanson G. (1996). Foreign investment, outsourcing and relative wages In: R. Feenstra, G. Grossman and D. Irwin (Eds): Political Economy of Trade Policy: Papers in Honour of Jagadish Bhagawati. Cambridge, Mass.: MIT Press 6. Helpman, E., Krugman, P. (1985). Market Structure and International Trade. Cambridge, MIT Press 7. Ide, T., Takayama, A. (1991). Variable Returns to Scale, Paradoxes and Global Correspondence in the Theory of International Trade. In: A. Takayama, M. Ohyama and H. Ohta (Eds): Trade, Policy and International Adjustments. San Diego: Academic Press. 108-154 8. Jones, R. W. (1999). Heckscher-Ohlin trade models for the new century. Mimeo, University of Rochester 9. Krugman, P. (1981). Intra-industry specialization and the gains from trade. Journal of Political Economy, 89, 959-73 10. Samuelson, P.A. (1947). Foundations of Economic Analysis. Cambridge, MA: Harvard University Press 11. Samuelson, P.A. (1971). On the Trail of Conventional beliefs about the Transfer Problem. In: J.N.Bhagwati, R.W. Jones, R.A.Mundell and J.Vanek (Eds): Trade, Balance of Payments and Growth: Papers in International Economics in Honor of Charles P. Kindelberger. Amsterdam: North Holland, 327-351 12. Trefler, D., Zhu, S.C. (2001). Ginis in general equilibrium: Trade, technology and Southern inequality. NBER Working Paper No. 8446 13. Wong, Kar-yiu (1995). International Trade in Goods and Factor Mobility. Cambridge, Mass.: MIT Press

Dominant Strategy Implementation in Multi-unit Allocation Problems Manipushpak Mitra and Arunava Sen

Abstract In this paper we analyze allocation problems where an efficient rule can be implemented in dominant strategies with balanced transfers. We first prove an impossibility result in the homogenous goods case when preferences over these goods are allowed to be sufficiently diverse. We then consider a package assignment problem where the planner can bundle or package various units of the homogenous goods and wishes to allocate the packages efficiently. We characterize the package schemes for which an efficient rule in the associated package assignment problem can be implemented in dominant strategies with balanced transfers.

1 Introduction In this paper we consider two allocation problems and analyze the possibility of identifying domains of preferences over which efficient outcomes can be implemented in dominant strategies with balanced transfers. Preferences of the players or agents are assumed to be quasi-linear and their valuations for the commodities are assumed to be private information. The objective of the planner is to design a mechanism that attains the following: 1. each agent has dominant strategy incentives to reveal the truth and 2. the outcome in every state of the world is efficient.

Manipushpak Mitra Economic Research Unit, Indian Statistical Institute, Kolkata, India. e-mail: [email protected] Arunava Sen Planning Unit, Indian Statistical Institute, New Delhi, India. e-mail: [email protected]

320

Dominant Strategy Implementation

321

The former requires that truthful reporting is a dominant strategy of all agents under all profiles or states of the world. The latter requires the allocation to maximize the sum of utilities it generates and also for aggregate transfers to be balanced. Although the requirements above are stringent, there are important theoretical reasons for investigating environments and allocation problems where they can be satisfied. Some of these reasons are elucidated in Section 1.1 and a more complete discussion can be found in Mitra and Sen [12]. An example of an allocation problem and an environment where all the objectives can be reconciled is the single machine sequencing problem with linear costs, first analyzed in Suijs [13]. The model and results were generalized in Mitra [10], [11]. Our objective in this paper is to extend this line of research to another familiar class of allocation problems, that of allocating m homogenous indivisible commodities amongst n agents. For instance, the commodities could be identical plots of land and the agents could be farmers. Each farmer can receive and (possibly) has use for more than one plot of land. These valuations, however are private information. Agents can be compensated by money and utility functions are quasi-linear. Efficiency requires the m units to be allocated in a way which maximizes the sum of agent utilities from the commodities. Moreover transfers must be zero in the aggregate. The question we address is the following: does there exist a reasonable restriction on agent valuations so that efficiency can be attained with dominant strategy incentives for agents to reveal their valuations? Our result in this case is negative: we show that these requirements are mutually incompatible on any domain that satisfies a mild richness condition. In view of our negative result, we analyze a variant of the problem above where the planner can bundle or package various units and wishes to allocate these packages in a fully efficient way in dominant strategies. If there are n agents and m units, a package scheme is an n vector (q1 ≤ q2 ≤ . . . ≤ qn ) with q1 + q2 + . . . + qn = m. Note that efficiency in this context is weaker than standard efficiency. For instance, suppose that n = 3, m = 6 and the package scheme is the vector (1, 2, 3). Here the planner is constrained to give 3 units to one agent, 2 to another and 1 to the third while standard efficiency may require all 6 units to be given to one agent. We characterize package schemes which have the property that there exists some admissible, non-trivial domain over which it can be implemented efficiently with balanced transfers.1 We can show that the scheme (1, 2, 3) can be implemented in the sense above in the n = 3, m = 6 case and is indeed, the only one with this property.

1.1 Related Literature In the mechanism design literature an important result is that in the quasi-linear setting, the class of Vickrey-Clarke-Groves (or VCG) mechanisms (Vickrey [15], Clarke [1] and Groves [3]) achieves truth telling in dominant strategies and guaran1

Efficiency here is, of course, with respect to the given scheme.

322

Manipushpak Mitra and Arunava Sen

tees an efficient allocation in every state. Moreover if domain of valuation is convex the the VCG mechanisms are the only ones that have these properties (Holmstr¨om [6]).2 In our problem we assume that our domain is convex which implies smooth connectedness. Hence, in our framework too, VCG mechanisms are the only mechanisms that works. The main difficulty with VCG mechanisms is that in typical domains they are not budget balancing (Groves and Ledyard [4], Green and Laffont [2], Hurwicz [7], Hurwicz and Walker [8] and Walker [16]). The failure to obtain balanced VCG mechanisms is quite serious since under these circumstances, the social optimum in the second-best sense may not require getting the decision on the allocation exactly correct in terms of efficiency. There are a number of papers such as Groves and Loeb [5], Tian [14] and Liu and Tian [9] which have investigated the structure of pure public goods problems where full efficiency can be attained with dominant strategies. Results in the same spirit for sequencing problems have been established by Suijs [13]. There is therefore a compelling reason to investigate domains on which VCG mechanisms “work”.

2 Homogenous Goods Problem: An Impossibility Result We now consider the problem of allocating m identical units of an object amongst n agents. The main result is that there are no “non-trivial” domains over which an efficient rule can be implemented by balanced transfers. Let N = {1, 2, . . . , n} denote the finite set of n agents. Let m denote the number of identical indivisible units of a given commodity to be allocated to these n agents. Let θ j (k) ∈ ℜ+ represent the utility of the jth agent if she receives k units where k ∈ {0, 1, 2, . . ., m}. The vector θ j = (θ j (1), . . . , θ j (m)) ∈ ℜm + represents the type of agent j. We make two basic assumptions regarding types. 1. θ (k + 1) ≥ θ (k) for all k = 1, . . . , m − 1, i.e. receiving more units is no worse than not receiving them. 2. θ j (0) = 0, i.e. the utility of receiving no units is normalized to zero. The domain of type vectors of agent j is denoted by Θ ⊆ ℜm + . A state is a set of n vectors θ = (θ1 , . . . , θn ) ∈ Θ n . An allocation is a vector of non-negative integers x = (x1 , . . . , xn ) such that x j ∈ {0, 1, 2, . . . , m} and ∑i∈N xi = m. Let X denote the set of all possible allocations. Given an allocation x = (x1 , . . . , xn ) ∈ X, the utility of an agent j with type θ j ∈ Θ is U j (x j ,t j ; θ j ) = θ j (x j ) + t j where t j ∈ ℜ is the transfer that she receives. A multi-unit allocation problem Γ is a triple N, m, Θ . Definition 1. An allocation x∗ ∈ X x∗ ∈ argmaxx∈X ∑ j∈N θ j (x j ). 2

is efficient for state θ ∈ Θ n

if

Holmstr¨om [6] showed that if a domain is “smoothly connected” then we have the uniqueness of VCG mechanisms. Since convex domains are smoothly connected, uniqueness of VCG mechanisms also follow when the domain is convex.

Dominant Strategy Implementation

323

An efficient rule (also denoted by x) associates an efficient allocation with every state θ ∈ Θ n . The main objective of the planner is to ensure an efficient allocation in every profile. The difficulty however is that agents have private information about their valuations. The planner therefore has to design a mechanism to induce the agents to reveal their private information. It is well known that by applying the Revelation Principle we can concentrate on direct revelation mechanism where agents report their types and, based on their reports, the planner decides (i) an allocation of the m goods and (ii) a transfer for each agent. Formally, a (direct) mechanism M is a pair x,t, where x ∈ X and t ≡ (t1 , . . . ,tn ) : Θ n → Rn . If M = x,t is the mechanism, then an announcement θˆ = (θˆ1 , . . . , θˆn ) ∈ Θ n , results in agent j of type θ j getting utility U j (x j (θˆ ),t j (θˆ ); θ j ) = θ j (x j (θˆ )) + t j (θˆ ). Definition 2. An efficient rule x∗ : Θ n → X for Γ = N, m, Θ  is implementable, if there exists a mechanism M = x∗ ,t such that, for all j ∈ N, for all θ j , θ j′ ∈ Θ and for all θˆ− j ∈ Θ n−1 , we have U j (x∗j (θ j , θˆ− j ),t j (θ j , θˆ− j ); θ j ) ≥ U j (x∗j (θ j′ , θˆ− j ),t j (θ j′ , θˆ− j ); θ j ). In other words the mechanism induces each agent to reveal their type truthfully independent of what they believe about the announcements and true types of the other agents. It is obvious that when agents are truthful, an efficient allocation is achieved. In addition to the requirements above, we impose budget balancedness. Definition 3. An efficient rule x∗ in Γ = N, m, Θ  is implementable with balanced transfers if there exists a mechanism M = x∗ ,t that implements it and furthermore ∑ j∈N t j (θ ) = 0 for all θ ∈ Θ n . Thus, an efficient rule x∗ is implementable with balanced transfers if it can be implemented in a manner such that aggregate transfers are zero in every state. In such problems, incomplete information does not impose any welfare loss as the transfers are within the agents. Our goal is to identify problems Γ = N, m, Θ  which have the property that there exists an efficient rule which can be implemented by balanced transfers. In order to do so, we introduce a minimal richness requirement on domains. Definition 4. The domain Θ is minimally rich if it satisfies the following conditions: 1. There exists α , β ∈ Θ such that α (k) > β (k) ≥ 0 for all integers k ∈ {1, 2, . . . , m} and α (m) > α (m − r) + r ∑rp=0 β (p) for all r ∈ {1, 2, . . . , m}. 2. Θ is convex, that is if α , β ∈ Θ then λ α + (1 − λ )β ∈ Θ for all λ ∈ [0, 1]. The first part of the minimal richness assumption guarantees the existence of two sufficiently “diverse” type vectors. The vector α must strictly dominate the vector β . Moreover the mth or last component of the α must be strictly greater than the sum of the rth component of α and r times the sum of the first r components of β . Observe that it is satisfied if the vector (0, 0, . . . , 0) and any other strictly positive vector exists in the domain. In fact for any α with distinct components, we can satisfy the

324

Manipushpak Mitra and Arunava Sen

condition if we can pick another feasible type vector which is sufficiently smaller than α componentwise. In this sense we can say that the condition is satisfied if we can pick two type vectors, one of which is sufficiently larger than the other. Why do we impose an assumption such as (1) and why do we think that it is appropriate to refer to it as a requirement of non-triviality? The following example clarifies these issues. Example 1. Let n = m and Θ¯ = {λ α + (1 − λ )β : ∀λ ∈ [0, 1]} where α ≡ (α (1), . . . , α (n)) = (a, . . . , a), β = (β (1), . . . , β (n)) = (b, . . . , b) and a > b > 0. In other words, each agent has zero marginal utility for units in excess of one. The domain fails to satisfy minimal richness because α (n) = a and α (n − 1) + β (1) = a + b implies that α (n) < α (n − 1) + β (1).3 The domain is such that all efficient rules allocate exactly one unit to all agents in every state, that is x∗ (θ ) = (x∗1 (θ ) = 1, . . . , x∗n (θ ) = 1) for all θ ∈ Θ¯ . Clearly there are no incentive problems and the efficient rule can be implemented with no transfers (no announcements are required either). Minimal richness “forces” the efficient rule to have some variation across states. In particular, for every agent j, it guarantees the existence of a state where j receives all m units. The example makes it clear that without an assumption of this sort, implementability with balanced transfers may be satisfied trivially. We can now present our general impossibility theorem. Theorem 1. Let Γ = N, m, Θ  be a multi-unit allocation problem where Θ is minimally admissible. Then Γ cannot be implemented with balanced transfers. Proof: Let Γ = N, m, Θ  be a multi-unit allocation problem where Θ is minimally admissible and let x∗ be an efficient rule in Γ . Since the domain is convex and transfers are balanced, the results of Holmstr¨om [6] and Walker [16] can be applied to infer that the implementing mechanism can be assumed w.l.o.g to be a VCG mechanism and that x∗ must satisfy the following condition: For all pairs of profiles θ , θ ′ ∈ Θ n , we must have

∑ (−1)|S| ∑ θi (x∗i (θ (S))) = 0,

S⊆N

(1)

i∈N

where θ (S) = (θ1 (S), . . . , θn (S)) ∈ Θ n is a state such that θ j (S) = θ j if j ∈ S and θ j (S) = θ j′ if j ∈ S. Let α , β be type vectors which satisfy condition (1) of minimal richness (i.e α (k) > β (k) for all k ∈ M and α (m) > α (m − r) + r ∑rp=0 β (p) for all r ∈ M). Consider a pair of states θ , θ ′ ∈ Θ N where θ = (α , . . . , α ) and θ ′ = (β , . . . , β ). Given any S ⊆ N, θ (S) = (θ1 (S), . . . , θn (S)) ∈ Θ n where θ j (S) = θ j = α if j ∈ S and θ j (S) = θ j′ = β if j ∈ S. Our objective is to calculate the LHS of the expression in (1). The pair θ and θ ′ is selected in such a way that for any S ⊂ N, the efficient allocation x∗ (θ (S)) is one where all the units are allocated to agents in the set 3

This condition is a violation of condition (1) of minimal richness for m = n and r = 1.

Dominant Strategy Implementation

325

N − S, i.e. xi (θ (S)) = 0 for all i ∈ S. To see this consider any S1 ⊂ N such that |S1 | = n − 1. By setting r = m in condition (1) of minimally richness it follows α (m) ≥ m ∑mp=0 β (p). This means that any efficient rule allocates all the m units to {i1 } = N − S1 in state θ (S1 ) and hence θi (x∗i (θ (S1 )) = 0 for all i ∈ S1 . Therefore, ∑i∈N θi (x∗i (θ (S1 ))) = α (m) for all S1 such that |S1 | = n − 1. Consider any S2 ⊂ N such that |S2 | = n − 2. Again, all agents with type β (that is i ∈ S2 ) gets nothing because α (m) ≥ m ∑mp=0 β (p). Hence, θi (x∗i (θ (S2 )) = 0 for all i ∈ S2 . Moreover, since there are exactly two agents with type α in any state θ (S2 ), the allocation for {i1 , i2 } ∈ N − S2 is determined by that k ∈ {0, 1, 2, . . ., m} for which α (m− k)+ α (k) is maximized. Hence, ∑i∈N θi (x∗i (θ (S2 ))) = α (m − k∗ ) + α (k∗ ) ≥ α (m) where k∗ ∈ {0, 1, 2 . . . , m} maximizes α (m − k) + α (k). Thus, ∑i∈N θi (x∗i (θ (S2 ))) = α (m) + ε1 where ε1 = α (m − k∗ ) + α (k∗ ) − α (m) ≥ 0. Continuing this way we obtain: Given any h ∈ {1, . . . , n}, for all Sh ⊂ N such that |Sh | = n − h,

∑ θi (x∗i (θ (Sh))) = α (m) + εh−1,

(2)

i∈N

where εn−1 ≥ . . . ≥ ε2 ≥ ε1 = ε0 = 0. An important observation at this point is that all the ε terms depend only on α . Finally, if S = N (that is, θ (N) = θ ′ ), we get

∑_{i∈N} θ_i(x_i^*(θ(N))) = ∑_{i∈N} β(x_i^*(θ(N))) = ∑_{i∈N} β(x_i^*(θ′)) < α(m).    (3)

Substituting (2) and (3) in the left hand side of (1) and then simplifying, we get

∑_{S⊆N} (−1)^{|S|} ∑_{i∈N} θ_i(x_i^*(θ(S))) = ∑_{p=0}^{n−1} (−1)^p \binom{n}{p} ε_{n−1−p} + (−1)^{n−1} [α(m) − ∑_{i∈N} β(x_i^*(θ′))].    (4)

If the right hand side of (4) is not equal to zero then we already have a violation of (1). However, if the right hand side of (4) is zero then we consider a pair of states θ, θ̃ ∈ Θⁿ such that θ = (θ_1 = α,..., θ_n = α) and θ̃ = (θ̃_1 = β̃,..., θ̃_n = β̃), where β̃ = λα + (1−λ)β and λ ∈ (0,1). Selecting λ > 0 sufficiently close to zero we get α(m) > α(m−r) + r∑_{p=0}^{r} β̃(p) for all r ∈ {1, 2,..., m}. Using the same arguments as before with the pair θ, θ̃ instead of the pair θ, θ′, we get

∑_{S⊆N} (−1)^{|S|} ∑_{i∈N} θ_i(x_i^*(θ(S))) = ∑_{p=0}^{n−1} (−1)^p \binom{n}{p} ε_{n−1−p} + (−1)^{n−1} [α(m) − ∑_{i∈N} β̃(x_i^*(θ̃))].    (5)

Since the ε terms in (5) are the same as those in (4) (because they depend only on α), the only difference between (5) and (4) is the last sum on the right hand side. Given α(k) > β(k) for all k ∈ {1, 2,..., m} and β̃ = λα + (1−λ)β, we get β̃(k) > β(k) for all k ∈ {1, 2,..., m}. Thus ∑_{i∈N} β̃(x_i^*(θ̃)) > ∑_{i∈N} β(x_i^*(θ′)), so that the RHS of (5) is non-zero. Therefore we have a violation of (1), which proves that the efficient rule cannot be implemented with balanced transfers. □
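For a concrete feel for condition (1), the following small brute-force computation evaluates the alternating sum on its left hand side directly. It is illustrative only: the numerical type vectors are our own, chosen merely to satisfy minimal richness for n = m = 2, and the non-zero result is in line with Theorem 1.

```python
from itertools import product

# Illustrative check of condition (1) for n = m = 2 (type vectors are our own).
# theta[x] is the utility of receiving x units.
def W(theta1, theta2, m=2):
    # maximal aggregate utility over all ways of splitting the m units
    return max(theta1[x] + theta2[m - x] for x in range(m + 1))

alpha, beta = (0, 10, 100), (0, 1, 2)   # minimal richness holds: 100 > 10 + 1*(0+1)
lhs = sum((-1) ** (s1 + s2) * W(beta if s1 else alpha, beta if s2 else alpha)
          for s1, s2 in product((0, 1), repeat=2))
print(lhs)   # -98, not 0: condition (1) fails, so balanced VCG transfers are impossible
```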


3 Packaging Problem: Possibility Results

We have seen in the previous section that in the standard multi-unit allocation problem, it is impossible to implement an efficient rule with balanced transfers except in cases where the problem is virtually trivial. In this section we consider a variant of this problem and demonstrate some possibility results. We consider the problem where the planner can bundle or package various units and wishes to allocate these packages efficiently. Observe that packaging creates "partial" heterogeneity in the goods being allocated. We use the qualification "partial" because we allow for cases where some of the packages are of the same size. We address the following question: does there exist a package scheme such that the efficient rule can be implemented with balanced transfers over some non-trivial domain? We show that the answer is affirmative for all package schemes except for some special cases. We also characterize the domain of utilities for which a package scheme is implementable with balanced transfers. We now proceed to details. As before, we let N = {1,...,n} and m denote the set of agents and the number of (identical) goods respectively. A package scheme, or simply a scheme, is a vector of n integers q = (q_1,...,q_n) such that q_1 ≤ q_2 ≤ ... ≤ q_n with ∑_{i=1}^{n} q_i = m. For every scheme q, we let Σ(q) be the set of all possible permutations of the components of the vector q. For any q, an allocation x^q is an element of the set Σ(q). We shall let x_j^q denote the package assigned to agent j under x^q. When the scheme q being referred to is evident from the context, we suppress the superscript in x^q. We illustrate the notation above by reference to an example. Assume that the set of agents is {1, 2, 3} and that m = 6. Suppose that q = (0, 1, 5). An allocation assigns 5 units to one agent, 1 to another, and 0 to the third. Suppose that x^q gives 5 units to agent 2; then x_2^q = 5, and so on. Fix a scheme q. A type for agent j is a vector θ_j = (θ_j(q_1),...,θ_j(q_n)) ∈ ℜ₊ⁿ, where θ_j(q_k) denotes agent j's utility from receiving q_k units. Observe that since the components of the vector q need not be distinct, there may be components of θ_j which are identical to each other. We assume (1) if q_k = 0 for some k, then θ(q_k) = 0, and (2) if q_k = q_{k+1} for some k, then θ(q_k) = θ(q_{k+1}). We shall let Θ^q denote the set of possible type vectors for the scheme q (assumed, once again, to be the same for all agents). For any scheme q, a package allocation problem is a triple Γ^q = ⟨N, m, Θ^q⟩.

Definition 5. Consider a package problem Γ^q = ⟨N, m, Θ^q⟩. An allocation x* ∈ Σ(q) is said to be q-efficient in state θ ∈ [Θ^q]ⁿ if

x* ∈ arg max_{x∈Σ(q)} ∑_{j∈N} θ_j(x_j).

An allocation is q-efficient in a package problem Γ^q if the various packages which constitute q cannot be permuted amongst the agents to increase aggregate utility; a brute-force computation along these lines is sketched below.
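Since Σ(q) is just the set of permutations of q, q-efficiency can be checked by enumeration when n is small. The sketch below is illustrative only; the utility numbers are hypothetical and not from the paper.

```python
from itertools import permutations

# q_efficient: a brute-force sketch of Definition 5. types[j][k] = theta_j(q_k),
# agent j's utility from package q_k. Returns one q-efficient allocation.
def q_efficient(q, types):
    n = len(q)
    best = max(permutations(range(n)),
               key=lambda perm: sum(types[j][perm[j]] for j in range(n)))
    return [q[best[j]] for j in range(n)]

# The scheme q = (0, 1, 5) with three agents, as in the illustration above;
# the utility vectors are hypothetical.
types = [[0, 2, 9], [0, 3, 4], [0, 1, 8]]
print(q_efficient((0, 1, 5), types))   # [5, 1, 0]: agent 1 gets 5 units, agent 2 gets 1
```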


Of course, an allocation which is q-efficient is not necessarily efficient, because the arg max in its definition is only with respect to Σ(q) rather than the union of Σ(q′) over all possible schemes q′. A q-efficient allocation rule is a mapping x*: [Θ^q]ⁿ → Σ(q) which picks an allocation x*(θ) that is q-efficient in state θ, for all θ ∈ [Θ^q]ⁿ. We say that the package problem Γ^q = ⟨N, m, Θ^q⟩ is implementable if there exists a q-efficient rule x* and a mechanism M = ⟨x*, t⟩ which induces each agent to reveal her type truthfully, i.e., for all j ∈ N, for all θ_j, θ_j′ ∈ Θ^q and for all θ̂_{−j} ∈ [Θ^q]^{n−1}, we have

θ_j(x_j^*(θ_j, θ̂_{−j})) + t_j(θ_j, θ̂_{−j}) ≥ θ_j(x_j^*(θ_j′, θ̂_{−j})) + t_j(θ_j′, θ̂_{−j}).

We say that Γ^q = ⟨N, m, Θ^q⟩ is implementable with balanced transfers if there exists a q-efficient rule and a mechanism M = ⟨x*, t⟩ which implements it and, furthermore, ∑_{j∈N} t_j(θ) = 0 for all θ ∈ [Θ^q]ⁿ. We wish to address the following question: do schemes exist which can be implemented with balanced transfers over "non-trivial" domains? We are clearly motivated by the impossibility result of the previous section. Since efficiency with balanced transfers over non-trivial domains cannot be achieved, can the units be packaged according to some scheme q such that a q-efficient rule can then be implemented with balanced transfers? We let Δθ_j = (Δθ_j(q_1),...,Δθ_j(q_{n−1})) represent the vector of first differences generated by the vector θ_j ∈ Θ^q, i.e., Δθ_j(q_k) = θ_j(q_{k+1}) − θ_j(q_k) for all k ∈ {1,...,n−1}. An important observation is that all difference vectors Δθ_j have non-negative components. Moreover, if q_{k+1} = q_k, then Δθ_j(q_k) = 0. For any domain Θ^q, we denote its corresponding first difference domain by ΔΘ^q. Finally, we say that Δθ_j < Δθ_j′ if Δθ_j(q_k) < Δθ_j′(q_k) for all k ∈ {1,...,n−1} such that q_{k+1} > q_k. Note that, unlike in the heterogeneous goods case, we cannot require one difference vector to strictly dominate another. This is because if q_{k+1} = q_k for some k, then all difference vectors have their kth component equal to zero.

Definition 6. The domain ΔΘ^q satisfies regularity if, for all Δγ in the relative interior of ΔΘ^q, there exist Δα, Δβ ∈ ΔΘ^q such that Δα < Δγ < Δβ.

Definition 7. The domain Θ^q is admissible if it is a convex subset of ℜⁿ satisfying regularity.

In the definition of an admissible domain, there is a "natural ordering" with respect to which these differences are computed. This is the ordering {1, 2,..., n}, which arises naturally because the components of the vector q are arranged in ascending order and utilities are increasing in the number of units that an agent receives. We believe that the admissibility requirement is weak: besides convexity, it imposes only regularity restrictions on admissible utility differences. The main result in this section characterizes admissible domains over which a package scheme is implementable by balanced transfers.

Theorem 2. For any scheme q, let Γ^q = ⟨N, m, Θ^q⟩ be a package problem where Θ^q is an admissible domain. Then Γ^q is implementable by balanced transfers if and only


if the associated difference domain is of the form ΔΘ^q = {(1−s)δ + sδ′ | s ∈ I}, where I ⊂ ℜ₊ is an interval. Moreover, if I is non-trivial, then δ, δ′ ∈ ℜ₊^{n−1} are such that (i) δ′ > δ and (ii) ∑_{k=1}^{n−1} (−1)^{k−1} \binom{n−2}{k−1} δ_k = ∑_{k=1}^{n−1} (−1)^{k−1} \binom{n−2}{k−1} δ_k′.

The proof of Theorem 2 is very similar to the proof of the main result in Mitra and Sen [12] and is hence omitted. Theorem 2 states that if Γ^q = ⟨N, m, Θ^q⟩ (where Θ^q is admissible) is implementable by balanced transfers, then the associated difference domain must be a straight line in ℜ₊^{n−1} satisfying certain restrictions. But for an arbitrary q, can one find an admissible Θ^q such that Γ^q = ⟨N, m, Θ^q⟩ is implementable by balanced transfers? The answer is negative, as the following example demonstrates.

Example 2. Let n = 3, m = 4 and q = (1, 1, 2). A typical difference vector is of the form (0, λ), where λ > 0 is a real number. Let δ = (0, λ) and δ′ = (0, λ′) be the two vectors specified in Theorem 2. Then condition (ii) gives λ = −∑_{k=1}^{n−1} (−1)^{k−1} \binom{n−2}{k−1} δ_k = −∑_{k=1}^{n−1} (−1)^{k−1} \binom{n−2}{k−1} δ_k′ = λ′. Therefore λ = λ′, which contradicts the requirement that δ′ > δ.

Below we provide a complete answer to the question of which schemes are implementable with balanced transfers over some admissible domain. For any scheme q, let Δq denote the (n−1)-vector (Δq_1,...,Δq_{n−1}) where Δq_k = q_{k+1} − q_k for k = 1,...,n−1.

Theorem 3. Let q be a scheme. There exists an admissible domain Θ^q such that Γ^q = ⟨N, m, Θ^q⟩ is implementable by balanced transfers if and only if there exist integers r, s ∈ {1,...,n−1} such that Δq_r, Δq_s ≠ 0 and r + s is an odd integer.

Proof: We first prove necessity. Suppose that q is a scheme such that there exists an admissible domain Θ^q and Γ^q = ⟨N, m, Θ^q⟩ is implementable with balanced transfers. According to Theorem 2, there must exist (n−1)-dimensional vectors δ and δ′ such that δ′ > δ and ∑_{k=1}^{n−1} ρ̂_k δ_k = ∑_{k=1}^{n−1} ρ̂_k δ_k′, where ρ̂_k = (−1)^{k−1} \binom{n−2}{k−1}. Therefore ∑_{k=1}^{n−1} ρ̂_k (δ_k′ − δ_k) = 0. Since δ′ > δ, δ_k′ − δ_k ≥ 0 for all k and strictly positive for at least one k. Observe that ρ̂_k is strictly positive for k odd and strictly negative for k even. Note also from the definition of the difference domain that δ_k′ and δ_k can be strictly positive only for those values of k for which Δq_k is strictly positive. Suppose that for all r, s such that Δq_r, Δq_s > 0, we have that r + s is an even integer, i.e., all k such that Δq_k > 0 are even or all are odd. Clearly then ∑_{k=1}^{n−1} ρ̂_k (δ_k′ − δ_k) = 0 cannot hold and we obtain a contradiction to Theorem 2. In order to prove sufficiency, let r and s be integers such that Δq_r, Δq_s > 0 and r + s is an odd integer. Let δ be the (n−1)-dimensional vector (0, 0,..., 0). Pick ε > 0 and real numbers c and d, and let δ′ be an (n−1)-dimensional vector where δ_k′ = 0 if Δq_k = 0, δ_r′ = c, δ_s′ = d and δ_k′ = ε for all other k. Moreover, c and d are

picked to satisfy the equation ρ̂_r c + ρ̂_s d + T = 0, where T = ε ∑_{k∉Q∪{r}∪{s}} ρ̂_k and Q = {k ∈ {1,...,n−1} \ {r, s} | Δq_k = 0}. Since ρ̂_r and ρ̂_s are integers of opposite sign, we can find strictly positive c and d which satisfy the equation for any given T. We now construct a difference domain which is a segment of the


line passing through δ and δ′. It can be easily verified that this domain satisfies the requirements specified in Theorem 2. □

Theorem 3 makes it easy to check whether there exists an admissible domain over which a scheme can be implemented with balanced transfers. For instance, we have an impossibility result for all q if n = 2: in this case Δq is a singleton, so there cannot exist r, s with Δq_r, Δq_s ≠ 0 and r + s odd. On the other hand, if n ≥ 3 and q is a scheme such that all the components of q are distinct (for instance, n = 3, m = 10 and q = (1, 2, 7)), then we have a possibility result. Finally, consider the case where m = kn for some positive integer k and consider the scheme q = (k, k,..., k). Here all agents get k units in every state. It is therefore trivially implementable with balanced transfers over any arbitrary domain, which appears to contradict Theorem 2. However this is not so, because the associated difference domain for any domain consists of the single vector, the origin in ℜ^{n−1}. This is the case where the interval I in Theorem 2 is trivial, i.e., consists of a single point.
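The parity condition of Theorem 3 is mechanical enough to automate; the following sketch (our own illustration, not part of the paper) checks it for the schemes discussed above.

```python
# implementable_scheme: checks the condition of Theorem 3 for a scheme
# q = (q_1, ..., q_n) with q_1 <= ... <= q_n: do there exist r, s with
# Delta q_r, Delta q_s != 0 and r + s odd? (Indices are 1-based as in the paper.)
def implementable_scheme(q):
    dq = [q[k + 1] - q[k] for k in range(len(q) - 1)]
    nonzero = [k + 1 for k, d in enumerate(dq) if d != 0]
    return any((r + s) % 2 == 1 for r in nonzero for s in nonzero)

assert not implementable_scheme((3, 7))        # n = 2: impossibility
assert not implementable_scheme((1, 1, 2))     # Example 2
assert implementable_scheme((1, 2, 7))         # distinct components, n = 3
assert not implementable_scheme((2, 2, 2))     # q = (k,...,k): the trivial case
```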

4 Conclusion

In this paper we have first established an impossibility theorem for a homogeneous goods allocation problem where the domain satisfies a minimal richness requirement. Given this impossibility, we then considered package assignment problems in the homogeneous goods case and obtained a characterization of the package schemes that can be implemented in dominant strategies with balanced transfers. These results suggest that one can find possibility results by introducing appropriate heterogeneity into the homogeneous goods problem.

References

1. Clarke, E.H., 1971. Multi-part pricing of public goods. Public Choice 11, 17-33
2. Green, J., Laffont, J.J., 1979. Incentives in Public Decision Making. North Holland Publication, Amsterdam
3. Groves, T., 1973. Incentives in teams. Econometrica 41, 617-631
4. Groves, T., Ledyard, J.O., 1977. Some limitations of demand revealing processes. Public Choice 29, 107-124
5. Groves, T., Loeb, M., 1975. Incentives and public inputs. Journal of Public Economics 4, 211-226
6. Holmström, B., 1979. Groves schemes on restricted domains. Econometrica 47, 1137-1144
7. Hurwicz, L., 1975. On the existence of allocative systems whose manipulative Nash equilibria are Pareto optimal. Mimeo, University of Minnesota
8. Hurwicz, L., Walker, M., 1990. On the generic non-optimality of dominant strategy allocation mechanisms: A general theorem that includes pure exchange economies. Econometrica 58, 683-704
9. Liu, L., Tian, G., 1999. A characterization of the existence of optimal dominant strategy mechanisms. Review of Economic Design 4, 205-218


10. Mitra, M., 2001. Mechanism Design in Queueing Problems. Economic Theory 17(2), 277-305
11. Mitra, M., 2002. Achieving the First Best in Sequencing Problems. Review of Economic Design 7(1), 75-91
12. Mitra, M., Sen, A., 2008. Efficient Allocation of Heterogeneous Commodities with Balanced Transfers, mimeo
13. Suijs, J., 1996. On incentive compatibility and budget balancedness in public decision making. Economic Design 2, 193-209
14. Tian, G., 1996. On the existence of optimal truth-dominant mechanisms. Economics Letters 53, 17-24
15. Vickrey, W., 1961. Counterspeculation, auctions and competitive sealed tenders. Journal of Finance 16, 8-37
16. Walker, M., 1980. On the non-existence of dominant strategy mechanisms for making optimal public decisions. Econometrica 48, 1521-1540

Allocation through Reduction on Minimum Cost Spanning Tree Games

Anirban Kar
Department of Economics, Delhi School of Economics, University of Delhi, Delhi 110007, India. e-mail: [email protected]

Abstract Bird (1976) introduced an allocation for minimum cost spanning tree games which belongs to the core. However, the Bird allocation fails to satisfy cost monotonicity. Dutta and Kar (2004), by constructing a new allocation, showed that it is possible to achieve core selection and cost monotonicity on minimum cost spanning tree games. This paper proposes a new class of parametric allocations. It shows that these rules are core selections and satisfy many other attractive properties. It also provides a necessary and sufficient condition on the parameter for cost monotonicity. Moreover, it is shown that the Bird allocation and the Dutta-Kar allocation are two extreme points of this family.

1 Introduction

There is a wide range of economic contexts in which aggregate costs have to be allocated amongst individual agents or components who derive the benefits from a common project. A firm has to allocate overhead costs amongst its different divisions. Regulatory authorities have to set taxes or fees on individual users for a variety of services. Partners in a joint venture must share costs (and benefits) of the joint venture. In this paper, I pursue an axiomatic analysis of a specific class of cost allocation problems known as Minimum Cost Spanning Tree games, denoted MCST games. The common feature of these problems is that a group of users has to be connected to a single supplier of some service. For instance, several towns may draw power from a common power plant, and hence have to share the cost of the distribution network. There is a positive cost of connecting each pair of users (towns) as well as a cost of connecting each user (town) to the common supplier (power plant). A cost game arises because cooperation reduces aggregate costs: it


may be cheaper for town A to construct a link to town B, which is nearer to the power plant, rather than build a separate link to the plant. An efficient network must be a tree which connects all users to the common supplier. In this paper, I construct an interesting class of cost allocation rules over the efficient network and discuss their fairness properties. Although earlier works by Kruskal (1956) and Prim (1957) did the spadework of finding an algorithm for the construction of a minimum cost spanning tree, this problem captured economists' attention when Bird (1976) found an allocation which belongs to the core of an associated cost game. In recent years the focus has shifted to issues such as fairness and incentive compatibility of cost allocations. Granot and Huberman (1981), Dutta and Kar (2004), Tijs et al. (2006) and Bergantinos and Vidal-Puga (2007a), among others, have offered allocations that satisfy various compelling features including cost monotonicity, core selection and population monotonicity. Unlike other papers which try to promote only one rule, here I propose a one-parameter family of allocations. I show that the Bird (1976) allocation and the Dutta-Kar (2004) allocation are two extreme points of this family. Although it does not provide an axiomatic characterization, this paper connects the existing results. In the next section I discuss the model and various axioms, which is followed by the construction of the one-parameter family of allocation rules. The last section contains all the results.

2 Minimum Cost Spanning Tree Games

Let 𝒩 = {1, 2,...} be the set of all possible agents. We are interested in networks where the nodes are elements of a set N ∪ {0}, where N ⊂ 𝒩, and 0 is a distinguished node which we will refer to as the source. Henceforth, for any set N ⊂ 𝒩, we will use N⁺ to denote the set N ∪ {0}. A typical graph over N⁺ will be represented by g = {(ij) | i, j ∈ N⁺}. Two nodes i and j ∈ N⁺ are said to be connected in g if there exist (i_1i_2), (i_2i_3),..., (i_{n−1}i_n) such that (i_k i_{k+1}) ∈ g for 1 ≤ k ≤ n−1, with i_1 = i and i_n = j. A graph g is called connected over N⁺ if i, j are connected in g for all i, j ∈ N⁺. The set of connected graphs over N⁺ is denoted by Γ_N. A cost matrix C = (c_{ij}) represents the cost of direct connection between any pair of nodes. That is, c_{ij} is the cost of directly connecting any pair i, j ∈ N⁺. We assume that each c_{ij} > 0 whenever i ≠ j. We also adopt the convention that for each i ∈ N⁺, c_{ii} = 0. So, each cost matrix is nonnegative, symmetric and of order |N| + 1. The set of all cost matrices for N is denoted by C_N. However, we will typically drop the subscript N whenever there is no cause for confusion about the set of nodes. An MCST at C satisfies g_N(C) = arg min_{g∈Γ_N} ∑_{(ij)∈g} c_{ij}. A minimum cost spanning network must be a tree; otherwise, we can delete an extra edge and still obtain a connected graph at a lower cost. However, a cost matrix can have more than one MCST. Here we introduce a few more definitions regarding a tree. The (unique) path from i to j in tree g is a set U(i, j, g) = {i_1, i_2,..., i_K}, where each pair (i_{k−1}i_k) ∈ g, and i_1, i_2,..., i_K are all distinct agents with i_1 = i, i_K = j. The predecessor set of

2 Minimum Cost Spanning Tree Games Let N = {1, 2, . . .} be the set of all possible agents. We are interested in networks where the nodes are elements of a set N ∪ {0}, where N ⊂ N , and 0 is a distinguished node which we will refer to as the source. Henceforth, for any set N ⊂ N , we will use N + to denote the set N ∪ {0}. A typical graph over N + will be represented by g = {(i j)|i, j ∈ N + }. Two nodes i and j ∈ N + are said to be connected in g if ∃(i1 i2 ), (i2 i3 ), . . . , (in−1 in ) such that (ik ik+1 ) ∈ g, 1 ≤ k ≤ n − 1, i1 = i, in = j. A graph g is called connected over N + if i, j are connected in g for all i, j ∈ N + . The set of connected graphs over N + is denoted by ΓN . A cost matrix C = (ci j ) represents the cost of direct connection between any pair of nodes. That is, ci j is the cost of directly connecting any pair i, j ∈ N + . We assume that each ci j > 0 whenever i = j. We also adopt the convention that for each i ∈ N + , cii = 0. So, each cost matrix is nonnegative, symmetric and of order |N| + 1. The set of all cost matrices for N is denoted by CN . However, we will typically drop the subscript N whenever there is no cause for confusion about the set of nodes. An MCST at C satisfies gN (C) = arg ming∈ΓN ∑(i j)∈g ci j . A minimum cost spanning network must be a tree. Otherwise, we can delete an extra edge and still obtain a connected graph at a lower cost. However a cost matrix can have more than one MCST. Here we introduce a few more definitions regarding a tree. The (unique) path from i to j in tree g, is a set U(i, j, g) = {i1 , i2 , . . . , iK }, where each pair (ik−1 ik ) ∈ g, and i1 , i2 , . . . , iK are all distinct agents with i1 = i, iK = j. The predecessor set of


an agent i in a tree g is defined as P(i, g) = {k | k ≠ i, k ∈ U(0, i, g)}; these are the users through whom i connects to the source. The immediate predecessor of agent i, denoted by α(i, g), is the agent who comes immediately before i, that is, α(i, g) ∈ P(i, g) and k ∈ P(i, g) implies either k = α(i, g) or k ∈ P(α(i, g), g).¹ The followers of agent i are those agents who come immediately after i: F(i, g) = {j | α(j, g) = i}. The objective of this paper is to propose 'fair' ways of dividing the cost. An allocation rule is a family of functions {μ^N}_{N⊂𝒩},

μ^N : C_N → ℜ^N, satisfying

∑_{i∈N} μ_i^N(C) = ∑_{(ij)∈g_N(C)} c_{ij}.

We will drop the superscript N whenever there is no confusion about the set of agents. So, given any set of nodes N and any cost matrix C, a cost allocation rule specifies the costs attributed to agents in N. Note that the source 0 is not an 'active' player, and hence does not bear any part of the cost. One condition which must be satisfied by a cost allocation rule is that the total payment by the agents must cover the total cost. It is easy to see that cooperation among players increases the connection possibilities and hence decreases the cost. This suggests that one can also model minimum cost spanning problems as a transferable utility (cost) game. The following game is based upon the classical definition of stand-alone cost. It says that a coalition S ⊆ N cannot link to anyone outside the coalition while connecting to the source. Let C_S be the cost matrix restricted to S⁺. Then the cost of a group is c(S) = ∑_{(ij)∈g_S(C_S)} c_{ij}. We will call (N, c) a minimum cost spanning tree game. Alternative formulations of cost games are possible; see Megiddo (1978) and Bergantinos and Vidal-Puga (2007b). One can define allocation rules based on the solution concepts of transferable utility games. Kar (2002) axiomatized an allocation rule based on the Shapley value (Shapley (1953)) of (N, c). Bergantinos and Vidal-Puga also looked at the Shapley value of a related cost game. It is possible to construct cost allocations without invoking a cost game; see Tijs et al. (2006) and Bergantinos and Kar (2008) for discussions on obligation rules, which divide the cost of a link present in an MCST on a pro-rata basis among a relevant group of users. In this paper, I shall be interested in two particular allocations. The first was introduced by Bird (1976). Here each agent pays the cost of linking to her immediate predecessor in an MCST. The Bird allocation is formally defined as B_i(C, g_N) = c_{iα(i,g_N(C))} for all i ∈ N. Notice that this is not a valid allocation rule when C gives rise to more than one MCST. However, one can still use Bird's method on each MCST derived from C and then take some convex combination of the allocations. The other allocation was proposed by Dutta and Kar (2004). It assigns agents to links in an iterative manner; agents pay the cost of the links they have been assigned to. The details of this procedure will be discussed in section three. For a joint characterization of the Bird

¹ Note that since g is a tree, the immediate predecessor must be unique.


and Dutta-Kar allocations, see Dutta and Kar (2004). For other characterizations of the Bird allocation, see Feltkamp et al. (1994) and Ozsoy (2006). The important issue here is not how an allocation is constructed but the properties it satisfies. The following are the axioms I use in this paper.

Cost monotonicity: Let C, C′ ∈ C_N be such that c_{kl} = c_{kl}′ for all (kl) ≠ (ij) and c_{ij} > c_{ij}′. An allocation rule μ satisfies cost monotonicity if for all m ∈ N ∩ {i, j}, μ_m(C) ≥ μ_m(C′).

Cost monotonicity requires that the cost allocated to agent i does not increase if the cost of a link involving i goes down, nothing else changing. Notice that if a rule does not satisfy cost monotonicity, then it may not provide agents with the appropriate incentives to reduce the costs of constructing links. The Bird allocation, for instance, fails this property; a small illustration follows below.

Core selection: An allocation rule μ is a core selection if for all N ⊆ 𝒩 and for all C ∈ C_N, ∑_{i∈S} μ_i(C) ≤ c(S) for all S ⊂ N.

If an allocation is not in the core, that is, ∑_{i∈S} μ_i(C) > c(S) for some S ⊂ N, then the users in S will form their own network. Both cost monotonicity and core selection are standard properties that have been used in this area of research. In the context of transferable utility games, Young (1994) showed that core selection and a monotonicity property similar to cost monotonicity are not achievable simultaneously. Dutta and Kar (2004) constructed an allocation rule for MCST games which satisfies the above properties. Scale invariance is another appealing property that can be imposed on a cost allocation rule. This axiom says that the unit of measurement should not affect the cost allocation: if the costs of connections are measured in dollars instead of euros, this must not affect the payments made by the agents.

Scale invariance: Let C and C′ be two cost matrices such that C′ = δC + β, where δ, β ∈ ℜ and δ ≥ 0. Then an allocation rule μ satisfies scale invariance if μ(C′) = δμ(C) + β.

Here are two more axioms that I find compelling for MCST problems. We need to impose a domain restriction for defining these properties. Let C_N¹ = {C ∈ C_N | C induces a unique MCST}; that is, C_N¹ is the set of all cost matrices which have a unique minimum cost spanning tree. Take any C ∈ C_N¹; then i ∈ N is called an extreme point of g_N(C) if i has no follower in g_N(C).² Suppose that i is an extreme point of g_N(C). Note that i is of no use to the rest of the network since no node is connected to the source through i. Extreme point monotonicity essentially states that no 'existing' node k will agree to pay a higher cost in order to include i in the network.

Extreme point monotonicity: Let i be an extreme point of C ∈ C_N¹. Let C̄ be the restriction of C over the set N⁺ \ {i}. An allocation rule μ satisfies extreme point monotonicity if μ_k(C̄) ≥ μ_k(C) for all k ∈ N \ {i}.

Tree invariance: Let C, C′ ∈ C_N¹ be such that g_N(C) = g_N(C′) and (ij) ∈ g_N(C) ⇒ c_{ij} = c_{ij}′. An allocation rule μ satisfies tree invariance if μ_k(C) = μ_k(C′) for all k ∈ N.

This axiom states that if two cost matrices have the same minimum cost spanning tree then the cost allocations corresponding to these matrices cannot be different. This property does not have any fairness implication; however, it adds to the computational simplicity of the rule.

² We will often refer to i as an extreme point of C.
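The failure of cost monotonicity for the Bird allocation, noted in the abstract, is easy to reproduce numerically. The sketch below is our own illustration (the two cost matrices are hypothetical), computing the MCST with Prim's algorithm from the source 0.

```python
# bird: each agent pays the cost of the link to its immediate predecessor in
# the MCST, which is grown here by Prim's algorithm starting from the source 0.
def bird(C):
    n, in_tree, parent = len(C), {0}, {}
    while len(in_tree) < n:
        a, b = min(((i, j) for i in in_tree for j in range(n) if j not in in_tree),
                   key=lambda e: C[e[0]][e[1]])
        parent[b] = a
        in_tree.add(b)
    return {i: C[i][parent[i]] for i in parent}

C_before = [[0, 10, 11], [10, 0, 1], [11, 1, 0]]   # MCST {(0,1),(1,2)}
C_after  = [[0, 10,  9], [10, 0, 1], [ 9, 1, 0]]   # only c_{20} lowered: 11 -> 9
print(bird(C_before))   # {1: 10, 2: 1}
print(bird(C_after))    # {2: 9, 1: 1}: agent 2 pays more although its link got cheaper
```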


3 Parametric Family of Allocation Rules

We will use an iterative method to define the allocation rules. These are denoted by Ψ^{N,λ}, where N is the set of agents and λ is a parameter. We will drop the index N whenever there is no confusion about the set over which the allocation is defined. Our rules are defined for all cost matrices in C. However, in order to economise on notation, we describe the class of rules for a smaller domain. Let C² = {C ∈ C¹ | no two edges of the unique MCST g_N(C) have the same cost}. First we construct the rules on C² and then extend them to all cost matrices. For 0 ≤ λ ≤ 1, this class of allocation rules is defined recursively as follows. First, if |N| = 1 then Ψ_1^λ(C) = c_{10}. Suppose we have defined this class of allocation rules for all sets N with cardinality strictly less than m. Now we define it for a set of agents N such that |N| = m. Assume that c_{kl} = min_{i≠j} c_{ij}, where k is the immediate predecessor of l in g_N(C).³ This is unique as C ∈ C². Now we define the reduced cost matrix C^R over N⁺ \ {l} as follows:

c^R_{mn} = c_{mn} if k ∉ {m, n},    (1)
c^R_{jk} = min{c_{jk}, c_{jl}} for all j ≠ k, l.    (2)

Intuitively, the reduction process merges users k and l to form a 'super-node' in the reduced society. For notational convenience, this super-node is still represented by k. Others can link to this new user by connecting to either of the erstwhile users k and l; this gives Eq. (2), because, to minimize cost, in the reduced society each user will choose the cheaper of those two connections. Eq. (1) says that the costs of all other links remain unaffected in the reduced society. We prove a lemma at the end of this section (Lemma 1), which establishes C^R ∈ C². Hence Ψ^λ(C^R) is already defined. Now we can extend this to an allocation at C. It involves dividing the additional cost of the link (kl), which was removed while reducing C to C^R. This is done as follows. The allocations of nodes which are not involved in the reduction remain the same. User l pays the entire cost of (kl) if k is the source. Otherwise k and l divide c_{kl}, along with the cost share of the super-node in C^R, by a fixed factor λ. Formally,

Ψ_l^λ(C) = (1−λ)Ψ_k^λ(C^R) + λc_{kl} if k ≠ 0, and Ψ_l^λ(C) = c_{kl} otherwise,    (3)
Ψ_k^λ(C) = λΨ_k^λ(C^R) + (1−λ)c_{kl} if k ≠ 0,    (4)
Ψ_i^λ(C) = Ψ_i^λ(C^R) for all i ≠ k, l.    (5)

We will study this one-parameter family of allocation rules (with parameter λ) in the rest of this paper. The following numerical example illustrates the algorithm.

³ Note that (kl) ∈ g_N(C) can be proved as follows. Let C ∈ C_N², k ∈ N and c_{kl} = min_{i∈N⁺\{k}} c_{ki}. Suppose (kl) ∉ g_N(C). As g_N(C) is a connected graph over N⁺, there is a unique path between k and l; suppose (kj) ∈ g_N(C) belongs to that path. But {g_N(C) ∪ {(kl)}} \ {(kj)} is still a connected graph and it costs no more than g_N(C), as c_{kl} ≤ c_{kj}. This is not possible as g_N(C) is the only MCST of C.


Example 1.


        ⎡ 0 3 4 6 ⎤
    C = ⎢ 3 0 5 1 ⎥
        ⎢ 4 5 0 2 ⎥
        ⎣ 6 1 2 0 ⎦

Step 1: Here the MCST is g(C) = {(0,1), (1,3), (3,2)} and min_{p≠q} c_{pq} = c_{13} = 1, so 3 will be merged with 1. Let C¹ be the reduced cost matrix on {0, 1, 2}. We get c¹_{12} = min(c_{12}, c_{23}) = c_{23} = 2 and c¹_{10} = min(c_{10}, c_{30}) = c_{10} = 3. Therefore

         ⎡ 0 3 4 ⎤
    C¹ = ⎢ 3 0 2 ⎥
         ⎣ 4 2 0 ⎦

Step 2: The MCST at C¹ is g(C¹) = {(0,1), (1,2)} and min c¹_{pq} = c¹_{12} = 2, so 2 will be merged with 1. The reduced cost matrix of C¹ is denoted by C² (defined on {0, 1}):

    C² = ⎡ 0 3 ⎤
         ⎣ 3 0 ⎦

Step 3: We obtain the following allocation if λ = 0.5.⁴

Ψ_1^λ(C) = λ[λΨ_1^λ(C²) + (1−λ)c¹_{12}] + (1−λ)c_{13} = 1.75,
Ψ_2^λ(C) = (1−λ)Ψ_1^λ(C²) + λc¹_{12} = 2.5,
Ψ_3^λ(C) = (1−λ)[λΨ_1^λ(C²) + (1−λ)c¹_{12}] + λc_{13} = 1.75.

So far, we have defined the allocation for matrices in C². Suppose that C ∉ C². Then there can be more than one MCST corresponding to the cost matrix C. Moreover, an MCST may contain edges which cost the same. Then our algorithm is not well-defined, because at some step there may exist more than one edge which minimises the cost of connections. Even if the minimum cost edge is unique, it will not be possible to assert the predecessor-follower relationship because C might have more than one MCST. But there is an easy way to extend the algorithm to deal with matrices not in C². Let σ be a strict ordering over the set of edges. Then σ can be used as a tie-breaking rule while constructing an MCST by Prim's algorithm [Prim (1957)]. We will also use σ to fix the minimum cost edge in case of a tie. Let the set E be defined as E = {(ij) | c_{ij} = min_{p≠q} c_{pq}}. Then we choose (kl), the σ-maximal cost edge in E. It is immediate that any such tie-breaking rule makes the algorithm well-defined. Now, let Σ be the set of all strict orderings over the set of edges. Then the eventual cost allocation is obtained by taking the simple average of the 'component' cost allocations. That is, for any σ ∈ Σ, let Ψ_σ^λ(C) denote the cost allocation obtained from the algorithm when σ is used as the tie-breaking rule. Then,

⁴ It is easy to check that this allocation is not a simple convex combination of the allocations corresponding to λ = 0 and λ = 1.


Ψ^λ(C) = (1/|Σ|) ∑_{σ∈Σ} Ψ_σ^λ(C).
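For matrices in C², the recursion (1)-(5) is short enough to implement directly. The following sketch is our own, restricted to the generic domain C² (and therefore omitting the tie-breaking average); it reproduces the numbers of Example 1.

```python
def mcst_parents(C):
    # Prim's algorithm from the source 0; returns the immediate predecessor
    # of every other node in the (assumed unique) minimum cost spanning tree.
    n, in_tree, parent = len(C), {0}, {}
    while len(in_tree) < n:
        a, b = min(((i, j) for i in in_tree for j in range(n) if j not in in_tree),
                   key=lambda e: C[e[0]][e[1]])
        parent[b] = a
        in_tree.add(b)
    return parent

def psi(C, lam, labels=None):
    # Psi^lambda on the generic domain C^2: merge the endpoints of the cheapest
    # edge into a super-node (Eqs. (1)-(2)), recurse, then split c_kl (Eqs. (3)-(5)).
    if labels is None:
        labels = list(range(len(C)))
    if len(C) == 2:
        return {labels[1]: C[0][1]}              # a single agent pays c_10
    k, l = min(((i, j) for i in range(len(C)) for j in range(len(C)) if i < j),
               key=lambda e: C[e[0]][e[1]])      # cheapest edge; it lies on the MCST
    if mcst_parents(C).get(k) == l:
        k, l = l, k                              # orient so that k is l's predecessor
    ckl = C[k][l]
    keep = [i for i in range(len(C)) if i != l]  # drop l; k becomes the super-node
    CR = [[0 if i == j else
           (min(C[k][j if i == k else i], C[l][j if i == k else i])
            if k in (i, j) else C[i][j])
           for j in keep] for i in keep]
    alloc = psi(CR, lam, [labels[i] for i in keep])
    out = dict(alloc)
    if k == 0:
        out[labels[l]] = ckl                                        # Eq. (3), source case
    else:
        out[labels[l]] = (1 - lam) * alloc[labels[k]] + lam * ckl   # Eq. (3)
        out[labels[k]] = lam * alloc[labels[k]] + (1 - lam) * ckl   # Eq. (4)
    return out

C = [[0, 3, 4, 6], [3, 0, 5, 1], [4, 5, 0, 2], [6, 1, 2, 0]]        # Example 1
print(psi(C, 0.5))   # {1: 1.75, 2: 2.5, 3: 1.75}, as in Example 1
```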

Here is an example to illustrate this procedure. Example 2.

        ⎡ 0 4 4 5 ⎤
    C = ⎢ 4 0 2 2 ⎥
        ⎢ 4 2 0 5 ⎥
        ⎣ 5 2 5 0 ⎦

The cost matrix C has two MCSTs: g₁ = {(0,1), (1,2), (1,3)} and g₂ = {(0,2), (1,2), (1,3)}. The edges (12) and (13) have the same cost. We choose λ = 0. First, note that g₁ will be the MCST for all orderings σ which rank (1,0) over (2,0); otherwise g₂ is the MCST of C. Consider the orderings for which g₁ is the MCST. Among those orderings, if (1,2) is ranked over (1,3), then the minimum cost edge is (1,2); otherwise (1,3) is the minimum cost edge. Taking each in turn we obtain the allocations x₁ = (2, 2, 4) and x₂ = (2, 4, 2). The weights on these allocations will be one-fourth each. If g₂ is the MCST of C then, irrespective of the choice of minimum cost edge, we get the allocation x₃ = (2, 2, 4); hence the weight attached to x₃ is one-half. Taking the weighted average, we get Ψ⁰(C) = (2, 2.5, 3.5).

We end this section by proving the following result on C^R. This lemma shows that C^R ∈ C²_{n−1} whenever C ∈ C²_n.

Lemma 1. If g_N(C) is the MCST of C ∈ C², then the MCST of C^R will be g(C^R), where g(C^R) = {(pq) ∈ g_N(C) | {p,q} ∩ {k,l} = ∅} ∪ {(tk) ≠ (kl) | (tk) or (tl) ∈ g_N(C)}.

Proof: Let C ∈ C². The MCST of C can be divided into two parts: g_N(C) = g¹(C) ∪ g²(C), where g¹(C) = {(pq) ∈ g_N(C) | {p,q} ∩ {k,l} = ∅} and g²(C) = g_N(C) \ g¹(C). Thus g(C^R) = g¹(C) ∪ {(tk) ≠ (kl) | (tk) or (tl) ∈ g²(C)}. Clearly g(C^R) is a connected graph over N⁺ \ {l}.

∑_{(ij)∈g(C^R)} c^R_{ij} = ∑_{(ij)∈g¹(C)} c^R_{ij} + ∑_{(tk)∈g(C^R)} c^R_{tk} = ∑_{(ij)∈g¹(C)} c_{ij} + ∑_{(tk)∈g(C^R)} min(c_{tk}, c_{tl}).    (6)

Now, (tk) ∈ g(C^R) implies either (tk) or (tl) ∈ g²(C). If (tk) ∈ g²(C) then (tl) ∉ g²(C), as (kl) ∈ g²(C). Hence c_{tk} < c_{tl}, so c^R_{tk} = c_{tk}. Similarly, if (tl) ∈ g²(C) then c^R_{tk} = c_{tl}. Therefore from (6),

∑_{(ij)∈g(C^R)} c^R_{ij} = ∑_{(ij)∈g_N(C)} c_{ij} − c_{kl}.    (7)


Suppose g(C^R) is not the only MCST of C^R. Let g be another MCST corresponding to the cost matrix C^R; hence g is a connected graph over N⁺ \ {l}. We construct ḡ = {(ij) ∈ g | i, j ≠ k} ∪ {(it) | (ik) ∈ g, c^R_{ik} = c_{it}} ∪ {(kl)}, which is a connected graph over N⁺. Using (7) and the fact that g is an MCST corresponding to C^R, we get

∑_{(ij)∈ḡ} c_{ij} = ∑_{(ij)∈g} c^R_{ij} + c_{kl} ≤ ∑_{(ij)∈g(C^R)} c^R_{ij} + c_{kl} = ∑_{(ij)∈g_N(C)} c_{ij}.

This contradicts the fact that gN (C) is the only MCST of C.

4 Results

In this section, we show that for all λ ∈ [0,1], each Ψ^λ is a core selection and satisfies tree invariance, scale invariance and extreme point monotonicity. Moreover, Ψ^λ satisfies cost monotonicity iff λ ∈ [0, 0.5]. For notational simplicity, I assume that C ∈ C² in all the subsequent proofs; however, the results are true in general. Before we introduce the main theorems, we will prove a couple of lemmas which will be used later.

Lemma 2. Let C ∈ C² and i ∈ N. If c_{ik} = min_{l∈N⁺\{i}} c_{il}, then (ik) ∈ g_N(C).

Proof: Suppose (ik) ∉ g_N(C). As g_N(C) is a connected graph over N⁺, there exists j ∈ N⁺ \ {i, k} such that (ij) ∈ g_N(C) and j is on the path between i and k. But {g_N(C) ∪ {(ik)}} \ {(ij)} is still a connected graph which costs no more than g_N(C), as c_{ik} ≤ c_{ij}. This is not possible as g_N(C) is the only MCST of C. □

This lemma says that if C ∈ C², then the minimum cost link of a user must belong to the MCST. The following lemma provides a lower bound on allocations according to Ψ^λ. It says that no agent is subsidized beyond the cost of her cheapest link. This in itself can be considered a desirable property; for instance, the Shapley value of the game (N, c) considered by Kar (2002) does not satisfy it.

Lemma 3. For 0 ≤ λ ≤ 1 and all i ∈ N, Ψ_i^λ(C) ≥ min_{t∈N⁺\{i}} c_{it}.

Proof: This is trivially true for |N| = 1. Let it be true for all cost matrices with number of agents strictly less than m. Take C ∈ C² with |N| = m. Let c_{kl} = min_{i≠j} c_{ij} and k = α(l, g_N(C)). For all i ≠ k, l:

Ψ_i^λ(C) = Ψ_i^λ(C^R) ≥ min_{t∈N⁺\{i,l}} c^R_{it} = min_{t∈N⁺\{i}} c_{it}.

If k ≠ 0, then

Ψ_k^λ(C) = λΨ_k^λ(C^R) + (1−λ)c_{kl} ≥ λ min_{t∈N⁺\{k,l}} c^R_{kt} + (1−λ)c_{kl} ≥ c_{kl} = min_{t∈N⁺\{k}} c_{kt}.


Similarly, Ψ_l^λ(C) ≥ min_{t∈N⁺\{l}} c_{lt}. If k = 0 then Ψ_l^λ(C) = c_{l0} = min_{t∈N⁺\{l}} c_{lt}. Hence the result follows. □

The following theorem shows that Ψ^λ is an interesting class of allocations, which satisfy various compelling properties.

Theorem 1. For all λ ∈ [0,1], each Ψ^λ is a core selection and satisfies tree invariance, scale invariance and extreme point monotonicity. Moreover, Ψ^λ satisfies cost monotonicity iff λ ∈ [0, 0.5].

Proof: We will prove all the results by induction over the cardinality of N. Core selection, tree invariance, scale invariance and cost monotonicity are trivially satisfied for |N| = 1. It can be easily checked that extreme point monotonicity is satisfied for |N| = 2. Assume that the result is true for all N with |N| < m. Now we will prove the result when |N| = m. Take C ∈ C². Let c_{kl} = min_{i≠j} c_{ij}. Without loss of generality, we can assume that k = α(l, g_N(C)), that is, k is the immediate predecessor of l in the MCST.

[Core selection]: Suppose Ψ^λ does not belong to the core of C. Then there is a coalition S ⊂ N which can block Ψ^λ. Let c^R denote the cost game corresponding to the reduced matrix C^R. There are two possible cases: (1) k ≠ 0 and (2) k = 0.

Case 1: Suppose k ≠ 0. We will argue that {k, l} ∩ S ≠ ∅. To the contrary, assume that S contains neither k nor l. Then, using the fact that C and C^R coincide on S⁺, we get

∑_{t∈S} Ψ_t^λ(C^R) = ∑_{t∈S} Ψ_t^λ(C) > c(S) = c^R(S).

So S is also a blocking coalition in C^R, contradicting the induction hypothesis. Therefore {k, l} ∩ S ≠ ∅. We will now show that S̄ = [S ∪ {k}] \ {l} is a blocking coalition in C^R. Consider the following graph:

g̃ = {(pq) ∈ g_S(C_S) | {p,q} ∩ {k,l} = ∅} ∪ {(tk) ≠ (kl) | (tk) or (tl) ∈ g_S(C_S)}.

Clearly, g̃ is a connected graph over S̄⁺ because g_S(C_S) is a connected graph over S⁺. Thus ∑_{(ij)∈g̃} c^R_{ij} ≥ c^R(S̄). Take any (pq) ∈ g̃. If {p,q} ∩ {k,l} = ∅ then c^R_{pq} = c_{pq}, and if (tk) ∈ g̃ then c^R_{tk} = min{c_{tk}, c_{tl}}. Suppose k, l ∈ S. Then ∑_{(ij)∈g̃} c^R_{ij} = c(S) − c_{kl}. Hence

∑_{t∈S̄} Ψ_t^λ(C^R) = ∑_{t∈S} Ψ_t^λ(C) − c_{kl} > c(S) − c_{kl} ≥ c^R(S̄)

and S̄ is a blocking coalition. Otherwise c^R(S̄) ≤ ∑_{(ij)∈g̃} c^R_{ij} ≤ c(S). Using Lemma 3 we get

∑_{t∈S̄} Ψ_t^λ(C^R) = ∑_{t∈S\{k,l}} Ψ_t^λ(C^R) + Ψ_k^λ(C^R) ≥ ∑_{t∈S\{k,l}} Ψ_t^λ(C) + Ψ_j^λ(C) = ∑_{t∈S} Ψ_t^λ(C),


where j is either k or l (whichever belongs to S). Thus

∑_{t∈S̄} Ψ_t^λ(C^R) ≥ ∑_{t∈S} Ψ_t^λ(C) > c(S) ≥ c^R(S̄).

Again S̄ is a blocking coalition. Therefore, if k ≠ 0, it is always possible to obtain a blocking coalition in C^R, which contradicts our induction hypothesis.

Case 2: k = 0. Suppose l does not belong to the blocking coalition S. Then c(S) = ∑_{(ij)∈g_S(C_S)} c_{ij} ≥ ∑_{(ij)∈g_S(C_S)} c^R_{ij} ≥ c^R(S). Now,

∑_{t∈S} Ψ_t^λ(C^R) = ∑_{t∈S} Ψ_t^λ(C) > c(S) ≥ c^R(S).

Hence S is a blocking coalition in C^R, contradicting the induction hypothesis. Therefore it must be the case that l ∈ S. Consider the following graph:

g̃ = {(pq) ∈ g_S(C_S) | {p,q} ∩ {l} = ∅} ∪ {(t0) ≠ (l0) | (t0) or (tl) ∈ g_S(C_S)}.

Since g_S(C_S) is a connected graph over S⁺, it follows that g̃ is a connected graph over [S\{l}]⁺. Take any (pq) ∈ g̃ such that {p,q} ∩ {l} = ∅; then c^R_{pq} = c_{pq}. Also,

c^R_{t0} = min{c_{t0}, c_{tl}} = c_{t0} if (t0) ∈ g_S(C_S), and c_{tl} if (tl) ∈ g_S(C_S).

Since (l0) ∈ g_S(C_S), we cannot have both (t0) ∈ g_S(C_S) and (tl) ∈ g_S(C_S). Thus

c^R(S\{l}) ≤ ∑_{(ij)∈g̃} c^R_{ij} = ∑_{(ij)∈g_S(C_S)} c_{ij} − c_{l0} = c(S) − c_{l0}.

Now,

∑_{t∈[S\{l}]} Ψ_t^λ(C^R) = ∑_{t∈S} Ψ_t^λ(C) − c_{l0} > c(S) − c_{l0} ≥ c^R(S\{l}).

Therefore [S\{l}] is a blocking coalition in C^R, contradicting the induction hypothesis. This completes the proof of core selection.

[Tree invariance]: Consider C̄ ∈ C² such that g_N(C) = g_N(C̄) and (ij) ∈ g_N(C) ⇒ c_{ij} = c̄_{ij}. From Lemma 2, (kl) ∈ g_N(C); hence (kl) ∈ g_N(C̄) and c_{kl} = c̄_{kl}. First we assert that c̄_{kl} = min_{i≠j} c̄_{ij}. Contrary to this, suppose c̄_{mn} = min_{i≠j} c̄_{ij} where (mn) ≠ (kl). Then (mn) ∈ g_N(C̄) from Lemma 2. Therefore (mn) ∈ g_N(C) and c_{mn} = c̄_{mn}. Hence c_{mn} = c̄_{mn} < c̄_{kl} = c_{kl}, which contradicts our assumption. Now, using Lemma 1 we get that g(C^R) = g(C̄^R). It follows from the definition of C^R and C̄^R that c^R_{ij} = c̄^R_{ij} for all (ij) ∈ g(C^R). Therefore C^R and C̄^R are two cost matrices which have the same minimum cost spanning tree. From the induction hypothesis, Ψ^λ(C^R) = Ψ^λ(C̄^R). Using the fact that c_{kl} = c̄_{kl}, we get Ψ^λ(C) = Ψ^λ(C̄).


[Scale invariance]: Let C and D be two cost matrices such that D = δC + β. Note that (kl) is also the minimum cost edge in D and D^R = δC^R + β. By the induction hypothesis, Ψ^λ(D^R) = δΨ^λ(C^R) + β. Thus, for all i ≠ k, l,

Ψ_i^λ(D) = Ψ_i^λ(D^R) = δΨ_i^λ(C^R) + β = δΨ_i^λ(C) + β.

If k ≠ 0 then Ψ_k^λ(D) = λΨ_k^λ(D^R) + (1−λ)d_{kl} = λ[δΨ_k^λ(C^R) + β] + (1−λ)[δc_{kl} + β] = δΨ_k^λ(C) + β. Similarly it can be shown that Ψ_l^λ(D) = δΨ_l^λ(C) + β. If k = 0 then Ψ_l^λ(D) = d_{l0} = δc_{l0} + β = δΨ_l^λ(C) + β.

[Extreme point monotonicity]: Let t be an extreme point of C and let D be the restriction of C over N⁺ \ {t}. First, note that our allocation rule is well defined over D because D ∈ C². There are two possible cases: either t ≠ l or t = l. If t ≠ l then from Lemma 1 it follows that t is also an extreme point of C^R; moreover, D^R is the restriction of C^R over [N⁺ \ {l, t}]. From the induction hypothesis, for all i ∈ [N \ {l, t}], Ψ_i^λ(D^R) ≥ Ψ_i^λ(C^R). From the construction of Ψ^λ it follows that Ψ_i^λ(D) ≥ Ψ_i^λ(C) for all i ∈ N \ {t}. If t = l then from Lemma 1, g_{N\{l}}(C^R) = g_N(C) \ {(kl)}. Since l is an extreme point of C we have g_{N\{l}}(D) = g_N(C) \ {(kl)}. Thus g_{N\{l}}(C^R) = g_{N\{l}}(D). Take any (ij) ∈ g_{N\{l}}(D). We have d_{ij} = c_{ij}, and if i, j ≠ k then c_{ij} = c^R_{ij}. Since l is an extreme point, for all t ≠ k, l, c^R_{kt} = min{c_{kt}, c_{lt}} = c_{kt} = d_{kt}. Therefore (ij) ∈ g_{N\{l}}(D) implies d_{ij} = c^R_{ij}. Hence, from tree invariance, Ψ_i^λ(C^R) = Ψ_i^λ(D) for all i ≠ l. Therefore Ψ_i^λ(C) = Ψ_i^λ(D) for all i ≠ k, l. Also Ψ_k^λ(C) ≤ Ψ_k^λ(C^R) = Ψ_k^λ(D) from Lemma 3.

[Cost monotonicity]: Suppose λ ∈ [0, 0.5]. Take C̄ ∈ C² where c_{pq} = c̄_{pq} for all (pq) ≠ (ij) and c̄_{ij} > c_{ij}. We have to prove that Ψ_t^λ(C̄) ≥ Ψ_t^λ(C) for all t ∈ {i, j} ∩ N. If (ij) ∉ g_N(C) then (ij) ∉ g_N(C̄), and hence by tree invariance Ψ_t^λ(C̄) = Ψ_t^λ(C) for all t ∈ N. Therefore we assume that (ij) ∈ g_N(C) and, without loss of generality, i = α(j, g_N(C)). We prove the result for k ≠ 0; for k = 0 the proof is similar and hence omitted. There are two possible cases.

¯ Similarly it follows that Ψlλ (C) ≤ Ψlλ (C).

342

Anirban Kar

¯ which is only possible if j = k, that is cost of (ik) inOtherwise k = α (l, gN (C)), R creases. Then c¯il = min(c¯ik , c¯il ) ≥ min(cik , cil ) = cRik . Therefore Ψiλ (C) = Ψiλ (CR ) ≤ ¯ and Ψ λ (C¯ R ) ≥ Ψ λ (CR ). Now, Ψiλ (C¯ R ) = Ψiλ (C) l k ¯ − Ψkλ (C) = [(1 − λ )Ψlλ (C¯ R ) + λ c¯kl ] − [λΨkλ (CR ) + (1 − λ )ckl ], Ψkλ (C) = (1 − λ )[Ψlλ (C¯ R ) − ckl ] − λ [Ψkλ (CR ) − c¯kl ] ≥ 0. The last inequality follows from the fact that λ ≤ 0.5, c¯kl = ckl and Ψlλ (C¯ R ) ≥ Ψkλ (CR ). This completes the proof for (i j) = (kl).

¯ Then Case 2: (i j) = (kl). Note that (kl) can still be the minimum cost edge of C. it is immediate that CR = C¯ R . Hence Ψkλ (C) = λΨkλ (CR )+ (1 − λ )ckl < λΨkλ (C¯ R )+ ¯ Similarly Ψ λ (C) < Ψ λ (C). ¯ So, assume that (mn) = (kl) is the (1 − λ )c¯kl = Ψkλ (C). l l ¯ ¯ minimum cost edge in C and m = α (n, gN (C)). It is sufficient to show that cost monotonicity is satisfied when c¯kl = min{p =q|(pq) =(mn)} c¯ pq .5 Note that cmn = c¯mn = ¯ and m = α (n, gN (C)). Let the reduced min{p =q|(pq) =(kl)} c pq . Also k = α (l, gN (C)) R R ¯ From C to DR we first remove cost matrices C and C¯ be represented by D and D. (kl) and then (mn). On the other hand from C¯ to D¯ R we first remove (mn) and then (kl). As (kl) is the only edge which has different cost in C and C¯ we get DR = D¯ R . ¯ We have four subcases to Now we compare the allocations of k, l between C and C. consider. Subcase a: If {k, l} ∩ {m, n} = 0, / then

Ψkλ (C) = λΨkλ (D) + (1 − λ )ckl ,

= λΨkλ (DR ) + (1 − λ )ckl , < λΨkλ (D¯ R ) + (1 − λ )c¯kl , ¯ ¯ = Ψkλ (C). = Ψkλ (D)

¯ Similarly Ψlλ (C) < Ψlλ (C). Subcase b: If l = m, then

Ψkλ (C) = λΨkλ (D) + (1 − λ )ckl ,

= λ [λΨkλ (DR ) + (1 − λ )cln] + (1 − λ )ckl , < λΨkλ (D¯ R ) + (1 − λ )c¯kl , ¯ ¯ = Ψkλ (C). = Ψkλ (D)

The inequality follows from the fact that c¯kl > ckl and from Lemma 3, Ψkλ (D¯ R ) ≥ mini∈N + \{k,l,n} d¯ikR ≥ c¯kl > cln . Similarly, as c¯kl > cln > ckl , we get 5 If c¯ > min ¯ kl {p =q|(pq) =(mn)} c¯ pq then from case (i) cost monotonicity is satisfied between C and an intermediate matrix C′ , where c′kl = min{p =q|(pq) =(mn)} c′pq . Repeated application of this will thus establish the desired conclusion.

Allocation through Reduction on Minimum Cost Spanning Tree Games

343

Ψlλ (C) = (1 − λ )Ψkλ (D) + λ ckl ,

= (1 − λ )[λΨkλ (DR ) + (1 − λ )cln] + λ ckl , < λ [(1 − λ )Ψkλ (D¯ R ) + λ c¯kl ] + (1 − λ )cln, ¯ + (1 − λ )cln, = λΨlλ (D) λ ¯ = Ψl (C).

Subcase c: If k = m, then

Ψ_k^λ(C̄) − Ψ_k^λ(C) = [λΨ_k^λ(D̄) + (1−λ)c_{kn}] − [λΨ_k^λ(D) + (1−λ)c_{kl}]
= {λ[λΨ_k^λ(D̄^R) + (1−λ)c̄_{kl}] + (1−λ)c_{kn}} − {λ[λΨ_k^λ(D^R) + (1−λ)c_{kn}] + (1−λ)c_{kl}}
> (1−λ)²[c_{kn} − c_{kl}]
> 0.

The inequalities are immediate from the fact that c̄_{kl} > c_{kn} > c_{kl}. As Ψ_k^λ(D̄^R) ≥ min_{i∈N⁺\{k,l,n}} d̄^R_{ik} ≥ c_{kn} and c̄_{kl} > c_{kl},

Ψ_l^λ(C) = (1−λ)Ψ_k^λ(D) + λc_{kl}
= (1−λ)[λΨ_k^λ(D^R) + (1−λ)c_{kn}] + λc_{kl}
< (1−λ)Ψ_k^λ(D̄^R) + λc̄_{kl}
= Ψ_l^λ(D̄) = Ψ_l^λ(C̄).

Subcase d: The only remaining case is k = n, because l = n is not possible. The proof is similar to case (c) except the situation where m = 0. Then,

Ψ_k^λ(C) = λΨ_k^λ(D) + (1−λ)c_{kl} = λc_{k0} + (1−λ)c_{kl} < c_{k0} = Ψ_k^λ(C̄),
Ψ_l^λ(C) = (1−λ)c_{k0} + λc_{kl} < (1−λ)c_{k0} + λc̄_{kl} = Ψ_l^λ(C̄).

This completes the proof when (ij) = (kl). Therefore the allocation rules Ψ^λ satisfy cost monotonicity for 0 ≤ λ ≤ 0.5. To complete the proof, we now show that for any value of λ > 0.5 we can construct two cost matrices C and C̄ for which Ψ^λ violates cost monotonicity. Let N = {1, 2}. We choose C and C̄ such that

c_{20} > c_{10} = c̄_{10} > c̄_{20} > c_{12} = c̄_{12}.    (8)


¯ everything else remaining unThus cost of the edge (0, 2) decreases from C to C, λ ¯ That is changed. Cost monotonicity will be violated if Ψ2 (C) < Ψ2λ (C).

λc_{12} + (1−λ)c_{10} < λc̄_{20} + (1−λ)c_{12}  ⇒  c̄_{20} > (1/λ)[(2λ−1)c_{12} + (1−λ)c_{10}].    (9)

Eq. (8) and Eq. (9) will be satisfied if we can choose c̄_{20} such that

c_{10} > c̄_{20} > (1/λ)[(2λ−1)c_{12} + (1−λ)c_{10}] > c_{12}.

The last inequality follows from the fact that c_{10} > c_{12}. Since λ > 0.5, we have c_{10} > (1/λ)[(2λ−1)c_{12} + (1−λ)c_{10}], and hence it is always possible to choose such a c̄_{20}. □

Theorem 1 has so far been proved for C ∈ C². Suppose instead that C ∉ C². Then our proof shows that the outcome of the algorithm is in the core for each σ ∈ Σ. Since the core is a convex set, the average (that is, Ψ^λ) must be in the core if each Ψ_σ^λ is in the core. The outcome of the algorithm for each tie-breaking rule satisfies scale invariance and cost monotonicity for λ ∈ [0,1] and λ ∈ [0, 0.5] respectively; hence the average must also satisfy these properties. For all C ∈ C¹, tree invariance and extreme point monotonicity also follow from similar arguments.

The next theorem connects our one-parameter family to the Bird allocation and the Dutta-Kar allocation. I show that these are the two extreme points of Ψ^λ. Let us first formally introduce the Dutta-Kar allocation. Consider C ∈ C². For any A ⊂ N, define A_c as the complement of A in N⁺, that is, A_c = N⁺ \ A. The algorithm proceeds as follows. Let A⁰ = {0}, g⁰ = ∅, t⁰ = 0.

Step 1: Choose the ordered pair (a¹b¹) such that (a¹b¹) = arg min_{(i,j)∈A⁰×A⁰_c} c_{ij}. Define t¹ = max(t⁰, c_{a¹b¹}), A¹ = A⁰ ∪ {b¹}, g¹ = g⁰ ∪ {(a¹b¹)}.

Step k: Define the ordered pair (a^k b^k) = arg min_{(i,j)∈A^{k−1}×A^{k−1}_c} c_{ij}. Moreover, A^k = A^{k−1} ∪ {b^k}, g^k = g^{k−1} ∪ {(a^k b^k)}, t^k = max(t^{k−1}, c_{a^k b^k}). Also,

DK_{b^{k−1}}(C) = min(t^{k−1}, c_{a^k b^k}).    (10)

The algorithm terminates at step |N| = n. Then,

DK_{b^n}(C) = t^n.    (11)

At any step k, A^{k−1} is the set of nodes which have already been connected to the source 0. A new edge is then constructed at this step by choosing the lowest-cost edge between a node in A^{k−1} and the nodes in A^{k−1}_c. The cost allocation of b^{k−1} is decided at step k. Eq. (10) shows that b^{k−1} pays the minimum of t^{k−1}, which is the maximum cost amongst all edges which have been constructed in previous steps,


and c_{a^k b^k}, the edge being constructed in step k. Finally, Eq. (11) shows that b^n, the last node to be connected, pays the maximum cost.⁶ Now we can present our final result.

Theorem 2. The allocation rule Ψ^λ is equivalent to DK if λ = 0 and to B if λ = 1.

Proof: Once again we will prove this result by induction on the number of agents. First, if |N| = 1 then the result is trivially true. Suppose we have proved this result for all sets N such that |N| < m. Take C ∈ C² where |N| = m. Let c_{kl} = min_{p≠q} c_{pq} and k = α(l, g_N(C)). Let C^R be the usual reduced matrix defined by Eq. (1) and Eq. (2). First we prove that Ψ⁰ = DK. From (3)-(5) we get

Ψ_i⁰(C) = Ψ_i⁰(C^R) for all i ≠ k, l,    (12)
Ψ_k⁰(C) = c_{kl} if k ≠ 0,    (13)
Ψ_l⁰(C) = Ψ_k⁰(C^R) if k ≠ 0, and Ψ_l⁰(C) = c_{l0} otherwise.    (14)

In describing the algorithm which is used in constructing DK, we fixed a specific matrix, and hence did not specify the dependence of A^k, t^k, a^k, b^k etc. on the matrix. But now we need to distinguish between these entities for the two matrices C and C^R. We adopt the following notation in the rest of the proof of the theorem: A^k, t^k, a^k, b^k, g_N etc. refer to the matrix C, while Â^k, t̂^k, â^k, b̂^k, ĝ_N etc. denote the entities corresponding to C^R. Without loss of generality, assume that b^p = k for some p ≥ 0; in this proof, for notational convenience, assume that b⁰ = 0. Also assume that j = α(k, g_N(C)).⁷ Since only edges involving k and l have different costs in C and C^R, and c^R_{kj} = min{c_{kj}, c_{lj}} = c_{kj}, we have b̂^p = k. Therefore DK_i(C) = DK_i(C^R) for all i such that i = b^j = b̂^j with j < p. Thus t^p = t̂^p. Now k ∈ A^p and l ∈ A^p_c. Since c_{kl} is the minimum cost edge, we get (a^{p+1}b^{p+1}) = (kl). If k ≠ 0,

DK_k(C) = min(t^p, c_{a^{p+1}b^{p+1}}) = c_{kl}.    (15)

We also get t^{p+1} = max(t^p, c_{kl}) = t^p = t̂^p. Now, since both k, l ∈ A^{p+1}, it follows from the construction of C^R that A^j = Â^{j−1} ∪ {l} for all j ≥ p+1. Also, if a^{j+1} ≠ l then (â^j b̂^j) = (a^{j+1}b^{j+1}); otherwise (â^j b̂^j) = (k b^{j+1}). That is, c_{â^j b̂^j} = c_{a^{j+1}b^{j+1}} for all j ≥ p+1. Therefore DK_l(C) = min(t^{p+1}, c_{a^{p+2}b^{p+2}}) = min(t̂^p, c_{â^{p+1}b̂^{p+1}}). Hence

DK_l(C) = DK_k(C^R) if k ≠ 0, and DK_l(C) = min(t¹, c_{a²b²}) = min(c_{l0}, c_{a²b²}) = c_{l0} if k = 0.    (16)

From Prim (1957), it follows that gn is also the m.c.s.t. corresponding to C. If k = 0 then no such j exists.

(17)

346

Anirban Kar

Comparing Eqs. (12)-(17) and using induction hypothesis on CR , we get Ψ 0 (C) = DK(C). Now we prove that Ψ 1 = B. Using Eqs. (3), (4), Lemma 1 and induction hypothesis on CR ,

Ψi1 (C) = Ψi1 (CR ) = cRiα (i,gN (CR )) = ciα (i,gN (C)) = Bi (C) for all i = l, which implies, Ψl1 (C) = ckl = Bl (C). Hence the result follows.




References 1. Bergantinos G. and J. J. Vidal-Puga (2007a). A fair rule in minimum cost spanning tree problems; Journal of Economic Theory 137: 326-352 2. Bergantinos G. and J. J. Vidal-Puga (2007b). The optimistic TU game in minimum cost spanning tree problems; International Journal of Game Theory 36: 223-239 3. Bergantinos G. and A. Kar (2008). Obligation rules; Working paper series - Centre for Development Economics, Delhi School of Economics, University of Delhi, India 4. Bird C.G. (1976). On cost allocation for a spanning tree: A game theoretic approach; Networks 6: 335-350 5. Dutta B. and A. Kar (2004). Cost monotonicity and core selection on minimum cost spanning tree games; Games and Economic Behavior 48: 223-248 6. Feltkamp V., S. H. Tijs and S. Muto (1994). Birds tree allocation revisited; CentER, DP 9435, Tilburg University, Tilburg, The Netherlands 7. Granot D. and G. Huberman (1981). Minimum cost spanning tree games; Math Programming 21: 1-18 8. Kar A. (2002). Axiomatization of the Shapley value on minimum cost spanning tree games; Games and Economic Behavior 38: 265-277 9. Kruskal J. (1956). On the shortest spanning subtree of a graph and the traveling salesman problem; Proceedings of the American Mathematical Society 7: 48-50 10. Megiddo N. (1978). Computational complexity of the game theory approach to cost allocation for a tree; Math. Oper. Res. 3: 189-196 11. Shapley L. S. (1953). A value for n-person games; ’Contributions to the Theory of Games II’ (H. Kuhn and A. Tucker Eds.); Princeton University Press, Princeton NJ: 307-317 12. Tijs, S., R. Branzei, S. Moretti and H. Norde (2006). Obligation rules for minimum cost spanning tree situations and their monotonicity properties; European Journal of Operational Research 175: 121-134 13. Ozsoy H. (2006). A characterization of Bird’s rule; Available at: http://www.owlnet.rice.edu/˜ozsoy/ 14. Prim R. C. (1957). Shortest connection network and some generalization; Bell System Tech. Journal 36; 1389-1401 15. Young H. (1994). Cost allocation, ’Handbook of Game Theory with Economic Applications’ (R. Aumann and S. Hart Eds.); North Holland

Unmediated and Mediated Communication Equilibria of Battle of the Sexes with Incomplete Information Chirantan Ganguly and Indrajit Ray

Abstract We consider the Battle of the Sexes game with incomplete information and allow two-sided cheap talk before the game is played. We characterize the set of fully revealing symmetric cheap talk equilibria. The best fully revealing symmetric cheap talk equilibrium, when exists, has a desirable characteristic. When the players’ types are different, it fully coordinates on the ex-post efficient pure Nash equilibrium. We also analyze the mediated communication equilibria of the game. We find the range of the prior for which this desirable equilibrium exists under unmediated and mediated communication processes.

1 Introduction We study the Battle of the Sexes game with private information about each player’s “intensity of preference” for the other player’s favorite outcome. Recall that the complete information Battle of the Sexes game is a coordination game with two pure (and one mixed) strategy Nash equilibria. The players need to coordinate their actions in order to achieve one of these equilibria. If the players use strategies corresponding to different (pure) equilibria, then they both end up in a miscoordinated outcome that is worse than either of the (pure) Nash equilibria. With incomplete information, while coordination is clearly desirable, it is not obvious who should concede and go along to the other’s favorite outcome. Ex post efficiency (based on the unweighted sum of the two players’ ex post utilities) demands that the concession is made by the player who suffers a smaller loss in utility. Chirantan Ganguly Management School, Queen’s University Belfast, 23 University Square, Belfast, UK. e-mail: [email protected] Indrajit Ray Department of Economics, University of Birmingham, Edgbaston, Birmingham, UK. e-mail: [email protected]

347

348

Chirantan Ganguly and Indrajit Ray

We ask if the two players, when faced with a coordination problem of the above kind, can communicate with each other about their intensity of preference for the other’s favorite outcome to achieve coordination and (ex-post) efficiency. We have in mind a cheap talk phase in which the two players make announcements about their respective intensity of preference or a mediated communication in which they report to and get recommendations from a mechanism. We are interested in addressing whether fully revealing cheap talk (i.e., when players simultaneously and truthfully reveal their intensity of preference) and symmetric actions can achieve coordinated, and possibly even ex-post efficient outcomes in some circumstances. To address these questions, we take as our starting point an incomplete information version of the Battle of the Sexes game. This kind of a coordination game is appropriate for modeling issues like market entry (Dixit and Shapiro, 1985), product compatibility (Farrell and Saloner, 1988), networking (Katz and Shapiro, 1985) among other problems (such as public goods, credence goods, R&D problems). We however consider the Battle of the Sexes game with incomplete information in order to assess to which extent fully revealing cheap talk helps in achieving coordinated outcomes. Following the seminal paper by Crawford and Sobel (1982), much of the cheap talk literature has focused on the sender-receiver framework whereby one player has private information but takes no action and the other player is uninformed but is responsible for taking a payoff relevant decision. This framework can be restrictive when we want to model social situations involving multiple players who all have private information and can take actions. For instance, in most standard complete information two-player games that are commonly studied, both players choose strategies from their respective strategy sets. If we want to analyze incomplete information variants of these games, we would naturally keep the structure of the action phase similar to the complete information game. This could potentially give rise to new issues that cannot be dealt with by extrapolating from the sender-receiver framework. Among the many new complications that one can bring to these incomplete information games with two-sided actions, the important ones are one-sided or two-sided private information, one-sided or two-sided cheap talk and simultaneous or sequential cheap talk. Farrell (1987) has considered coordination in an entry-game that is similar to the Battle of the Sexes game with complete information. He has shown that cheap talk communication among the players can reduce the probability of miscoordination. Banks and Calvert (1992) characterized incentive compatible, (ex-ante) efficient mechanisms for a similar game and proved that in a cheap-talk set-up, ex-ante efficiency can be achieved under certain conditions. In related literature, Park (2002) considered a similar entry game and identified conditions for achieving efficiency and coordination. We model the communication between two players as direct cheap talk and then as mediated communication using a mechanism. In the cheap talk protocol, the Battle of the Sexes game with incomplete information is augmented by an initial stage of cheap talk before the action phase. This cheap talk is two-sided, i.e., both players

Communication Equilibria of Battle of the Sexes with Incomplete Information

349

can make announcements simultaneously. The messages are directly related to the incomplete information of each player. We analyze two-sided cheap talk equilibria and characterize the set of fully revealing symmetric cheap talk equilibria. We note that the best (in terms of ex-ante expected payoffs) fully revealing symmetric cheap talk equilibrium, when exists, has a desirable characteristic. When the players’ types are different, it fully coordinates on the ex-post efficient pure Nash equilibrium. In this outcome, when players are of different types, a certain type makes some sort of a compromise or sacrifice by agreeing to coordinate on a less preferred Nash equilibrium of the underlying complete information game. Casual observation of anecdotal evidence suggests that people do exhibit such behavior that apparently contradicts traditional concepts of self-interested rationality. Why do some people make altruistic sacrifices and why are they concerned about fairness? Instead of just assuming that people have concerns about fairness or other peoples’ utilities and looking at the implications of such behavior, we derive this kind of behavior as part of an equilibrium in a game with communication. We also analyze mediated communication equilibria of the game, following Banks and Calvert (1992) who fully characterized the ex-ante efficient incentive compatible mechanism for a similar framework. It is well-known that any unmediated equilibrium can be obtained as a mediated equilibrium. We focus here on our best fully revealing symmetric cheap talk equilibrium and achieve this outcome as a mediated equilibrium. We show that the range of the prior for which this outcome exists as a mediated equilibrium is strictly larger than the range for the cheap talk equilibrium. This paper is organised as follows. In Section 2 we introduce our Battle of the Sexes game with incomplete information, which can be preceded by a communication stage with a single round of simultaneous cheap talk. In Sections 3 and 4 respectively, we report our main results as to what outcomes can be achieved under fully revealing symmetric cheap talk and using symmetric mediated mechanisms. Section 5 offers some remarks on asymmetric fully revealing cheap talk equilibria and compares the payoffs from cheap talk and mediated equilibria, using an example. Section 6 concludes.

2 The Model 2.1 The Game We first consider the standard Battle of the Sexes game with complete information as given below. Each of the two players has two strategies, namely, F (Football) and C (Concert). The payoffs corresponding to the outcomes are as in the following table. We will call this a Battle of the Sexes game with values t1 and t2 , where 0 < t1 , t2 < 1.

350

Chirantan Ganguly and Indrajit Ray

Wife (Player 2) Football Concert Husband (Player 1) Football 1,t2 0, 0 Concert 0, 0 t1 , 1 This game has two pure Nash equilibria: (F, F) and (C,C) and a mixed Nash 1 equilibrium in which player 1 plays F with probability 1+t and player 2 plays C 2 1 with probability 1+t1 . Now consider the Battle of the Sexes game with private information, in which the value of ti is the private information for player i. We assume that ti is a random variable whose realisation is only observed by player i. For i = 1, 2, we henceforth refer to ti as player i’s type. For simplicity, we assume that each ti is a discrete1 random variable that takes only two values L and H (where, 0 < L < H < 1), that is, each player’s type is independently drawn from the set {L, H}, according to a probability distribution with Pr(ti = H) ≡ p ∈ [0, 1].

2.2 Unmediated Cheap Talk We now consider a situation in which the players first have a round of cheap talk before they play the game. We thus study an extended game, in which an unmediated communication stage precedes the actual play of the above Battle of the Sexes game. In the first (cheap talk) stage of this extended game, each player i simultaneously chooses a costless and nonbinding announcement τi from the set {L, H}. Then, given a pair of announcements (τ1 , τ2 ), in the second (action) stage of this extended game, each player i simultaneously chooses an action si from the set {F,C}. The strategies of this extended games are formally described as follows. An announcement strategy in the first stage for player i is a function ai : {L, H} → Δ ({L, H}), ti → ai (ti ), where Δ ({L, H}) is the set of probability distributions over {L, H}. We write ai (H |ti ) for the probability that strategy ai (ti ) of player i with type ti assigns to the announcement H. Thus, player i with type ti ’s announcement τi is a random variable drawn from {L, H} according to a probability distribution with Pr(τi = H) ≡ ai (H |ti ). In the second stage, a strategy for player i is a function σi : {L, H} × {L, H} × {L, H} → Δ ({F,C}), (ti , τ1 , τ2 ) → σi (ti , τ1 , τ2 ), where Δ ({F,C}) is the set of probability distributions over {F,C}. We write σi (F |ti , τ1 , τ2 ) for the probability that strategy σi (ti , τ1 , τ2 ) of player i with type ti assigns to the action F when the first stage announcements are (τ1 , τ2 ). Thus, player i with type ti ’s action choice si is a random variable drawn from {F,C} according to a probability distribution with Pr(si = F) ≡ σi (F |ti , τ1 , τ2 ). Given a pair of action choices (s1 , s2 ) ∈ 1

One may also consider a continuum of types. Ray (2009) indeed analyzes an implementation problem in the spirit of Kar, Ray, Serrano (2009) for this game with continuum of types.

Communication Equilibria of Battle of the Sexes with Incomplete Information

351

{F,C} × {F,C}, the players’ actual payoffs are given by the relevant entry in the above type-specific payoff matrix of the Battle of the Sexes game. We consider a specific class of strategies in this paper. First, we impose the property that the cheap talk announcement should be fully revealing. Definition 1. In the extended game, cheap talk is said to be fully revealing if the announcement strategy ai for each player i = 1, 2 has the following property: if ti = H, we have ai (H |H ) = 1, and if ti = L, we have ai (H |L ) = 0. The above definition simply asserts that, in the communication stage, each player makes an announcement that coincides with that player’s type: τi = ti . Having fixed each player’s announcement strategy ai so that it is fully revealing, we next ask what form the second-stage strategies σ1 and σ2 should take. First note that, under fully revealing announcements, for player i with type ti , a strategy in the second stage can be written as σi (ti ,t j ). We now restrict our attention to symmetric strategies in this stage of the extended game. We formally define symmetry assuming full revelation in the cheap talk stage. Definition 2. Under fully revealing cheap talk, a strategy profile in the action phase is symmetric if ∀t1 ,t2 ∈ {H, L}, σi [F |t1 ,t2 ] = σ j [C |t2 ,t1 ] ∀i, j ∈ {1, 2}. Note that the above definition preserves symmetry for both players and the types for each player. We are interested in (Nash) equilibria2 of the extended game in symmetric fully revealing strategies. Definition 3. A strategy-profile ((a1 , σ1 ), (a2 , σ2 )) is called a fully revealing symmetric cheap talk equilibrium if the announcement strategy ai is fully revealing, the action strategy σi is symmetric for each player i and the profile is a Nash equilibrium of the extended game. We characterize the set of fully revealing symmetric cheap talk equilibria in the next section.

3 Cheap Talk Equilibrium In this section, we analyze the set of equilibria of the extended game in which the players first communicate to each other and then play the Battle of the Sexes game. This two-stage game has possibly many (Nash) equilibria. Rather than attempt to obtain a full characterization of the set of all equilibria, we restrict our attention to 2

We could also consider a perfect Bayesian equilibrium of this two-stage game. One would require beliefs µ1 and µ2 , so as to render a fully revealing symmetric strategy-profile ((a1 , σ1 ), (a2 , σ2 )) a perfect Bayesian equilibrium. A belief µi for player i is a probability distribution over {L, H}, which represents player i’s belief about player j’s type, conditional on player j’s announcement ( j = 1, 2, j = i). It is obvious that the natural set of beliefs that would support a fully revealing symmetric equilibrium is the belief that corresponds with the announced type.

352

Chirantan Ganguly and Indrajit Ray

the set of fully revealing symmetric equilibria. As a first step towards the characterization of this set, we observe the following3. Claim 1: In a fully revealing symmetric cheap talk equilibrium ((a1 , σ1 ), (a2 , σ2 )), the players’ strategies in the action phase must constitute a (pure or mixed) Nash equilibrium of the corresponding Battle of the Sexes game with complete information; that is, (σ1 (t1 ,t2 ), σ2 (t1 ,t2 )) is a (pure or mixed) Nash equilibrium of the Battle of the Sexes game with values t1 and t2 , ∀t1 ,t2 ∈ {H, L}. Claim 2: In a fully revealing symmetric cheap talk equilibrium ((a1 , σ1 ), (a2 , σ2 )), conditional on the announcement profile (H, H) or (L, L), the strategy profile in the action phase must be the mixed strategy Nash equilibrium of the corresponding complete information Battle of the Sexes game; that is, whenever t1 = t2 , (σ1 (t1 ,t2 ), σ2 (t1 ,t2 )) is the mixed Nash equilibrium of the Battle of the Sexes game with values t1 = t2 . Based on the above claims, we can now identify all candidate equilibrium strategy profiles of the extended game that are fully revealing and symmetric. Claim 2 implies that these candidate strategy profiles (σ1 (t1 ,t2 ), σ2 (t1 ,t2 )) in the action stage has the property that σi [F |t,t ] = σ i (tt) with t = H, L, where σ i (tt) is the mixed Nash equilibrium of the complete information Battle of the Sexes game with values t and t, t = H, L. Therefore these profiles are differentiated only by the actions played when t1 = t2 , that is, when the players’ types are (H, L) and (L, H). As the strategies are symmetric, it is sufficient to characterize these candidate profiles only by σ1 [F |H, L ]. By symmetry, one could identify the full profile of actions, based on σ1 [F |H, L ]. From Claim 1, there are only three possible candidates for σ1 [F |H, L ] as the complete information Battle of the Sexes game with values H and L has three (two pure and one mixed) Nash equilibria. They are (i) σ1 [F |H, L ] = σ 1 (HL) where σ 1 (HL) is the probability of playing F in the mixed Nash equilibrium strategy of player 1 of the complete information Battle of the Sexes game with values H and L (ii) σ1 [F |H, L ] = 1 and (iii) σ1 [F |H, L ] = 0. Therefore there are only three fully revealing symmetric strategy profiles that are candidate equilibria of the extended game. In these three candidate equilibria, in the cheap talk phase, players announce their types truthfully, i.e., player i(H-type) announces H and player i(L-type) announces L and then in the action phase, the players’ strategies are one of the following. In the first candidate strategy profile, the players play the mixed Nash equilibrium strategies of the complete information Battle of the Sexes game for all type profiles. We call this profile Sm . In the second candidate strategy profile, the players play the mixed Nash equilibrium strategies of the complete information Battle of the Sexes game when both 3

The proofs of these claims are obvious and hence omitted.

Communication Equilibria of Battle of the Sexes with Incomplete Information

353

players’ types are identical (by Claim 2), and they play (F, F) ((C,C)), when only player 1’s type is H (L). Note that in this profile players fully coordinate on one of the pure Nash equilibrium outcomes when the types are different; however, the outcome they coordinate to generates payoffs (1, L) and thus is not (ex-post) efficient in the corresponding Battle of Sexes game with different types. We call this profile Sine f f . In the third candidate strategy profile, the players play the mixed Nash equilibrium strategies of the complete information Battle of the Sexes game when both players types’ are identical (by Claim 2), and they play (C,C) ((F, F)), when only player 1’s type is H (L). Note that in this profile players fully coordinate on a pure Nash equilibrium when the players’ types are different and that the outcome they coordinate to generates the ex-post efficient payoff of (1, H) in the corresponding Battle of the Sexes game with different types. We call this profile Se f f . Clearly among these three candidates, the third, whenever exists, is the best in terms of payoffs. We now look at cases when these candidates are indeed equilibrium profiles. Lemma 1. Sm is not an equilibrium of the extended Battle of the Sexes game with fully revealing cheap talk. H Proof: Under Sm , H-type player will reveal his type truthfully only if p( 1+H )+ H H H (1 − p)( 1+H ) ≥ p( 1+L ) + (1 − p)( 1+L ) where the LHS is the expected payoff from truthfully announcing H and the RHS is the expected payoff from announcing L and choosing the optimal action in the action phase given the deviation in the cheap talk 1 1 phase. This inequality implies 1+H ≥ 1+L which can never be satisfied as H > L. Therefore, Sm is not an equilibrium of the extended game.

Lemma 2. Sine f f is an equilibrium of the extended incomplete information Battle of 1+L+HL−H 2 1+H the Sexes game with cheap talk only when 1+L+L 2 +L2 H ≤ p ≤ 1+L+HL+H 2 L . H )+ Proof: Under Sine f f , H-type player will reveal his type truthfully only if p( 1+H H (1 − p)(1) ≥ p(H) + (1 − p)( 1+L ) where the LHS is the expected payoff from truthfully announcing H and the RHS is the expected payoff from announcing L and choosing the optimal action in the action phase given the deviation in the cheap talk 1+L+HL−H 2 phase. This inequality implies p ≤ 1+L+HL+H 2 L . Similarly, L-type player will reveal L H his type truthfully only if p(L)+ (1 − p)( 1+L ) ≥ p( 1+H )+ (1 − p)(1) which implies 1+H ≤ p. Hence the proof. 1+L+L2 +L2 H

Lemma 3. Se f f is an equilibrium of the extended incomplete information Battle of L2 +L2 H HL+H 2 L the Sexes game with cheap talk only when 1+L+L 2 +L2 H ≤ p ≤ 1+L+HL+H 2 L . H )+ Proof: Under Se f f , H-type player will reveal his type truthfully only if p( 1+H H (1 − p)(H) ≥ p(1) + (1 − p)( 1+L ) where the LHS is the expected payoff from truthfully announcing H and the RHS is the expected payoff from announcing L and choosing the optimal action in the action phase given the deviation in the cheap talk

354

Chirantan Ganguly and Indrajit Ray 2

HL+H L phase. This inequality implies p ≤ 1+L+HL+H 2 L . Similarly, L-type player will reL H veal his type truthfully only if p(1) + (1 − p)( 1+L ) ≥ p( 1+H ) + (1 − p)(L) which 2

2

L +L H implies 1+L+L 2 +L2 H ≤ p. Hence the proof. Based on the above lemmas, the following theorem now fully characterizes the set of fully revealing symmetric equilibria of the extended Battle of the Sexes game.

Theorem 1. (i) There does not exist any fully revealing symmetric equilibrium L2 +L2 H of the extended Battle of the Sexes game when p < 1+L+L 2 +L2 H and when p > 1+L+HL−H 2 . 1+L+HL+H 2 L

(ii) Se f f is the only fully revealing symmetric equilibrium of the extended Battle 1+H L2 +L2 H of the Sexes game for 1+L+L 2 +L2 H ≤ p ≤ 1+L+L2 +L2 H . (iii) Sine f f and Se f f are the only fully revealing symmetric equilibria of the exHL+H 2 L 1+H tended Battle of the Sexes game for 1+L+L 2 +L2 H ≤ p ≤ 1+L+HL+H 2 L . (iv) Sine f f is the only fully revealing symmetric equilibrium of the extended Battle HL+H 2 L 1+L+HL−H 2 of the Sexes game for 1+L+HL+H 2 L ≤ p ≤ 1+L+HL+H 2 L . 2

2

2

2

L +L H 1+H HL+H L 1+L+HL−H Proof: We observe that 1+L+L 2 +L2 H < 1+L+L2 +L2 H and 1+L+HL+H 2 L < 1+L+HL+H 2 L as both L and H are less than 1. The theorem now follows immediately from the lemmas above. As noted earlier, the equilibrium profile Se f f has a desirable characteristic. When the players’ types are different, it fully coordinates on an outcome which is the expost efficient pure Nash equilibrium outcome. However, this equilibrium exists only HL+H 2 L HL+H 2 L L2 +L2 H for a specific range of, p, 1+L+L 2 +L2 H ≤ p ≤ 1+L+HL+H 2 L . Note that 1+L+HL+H 2 L < 1 2. 2

1+H HL+H L For 1+L+L 2 +L2 H ≤ p ≤ 1+L+HL+H 2 L , Se f f clearly is the best (in terms of payoffs) fully revealing symmetric equilibrium of the extended Battle of the Sexes game.

4 Mediated Equilibrium Having characterized the fully revealing symmetric equilibria of the game with cheap talk, one may also analyze this game with mediated communication. Consider a situation in which players have access to a mediator who, based on the players’ announcements in the communication stage, makes a non-binding recommendation to each player as to which action that player should adopt in the Battle of the Sexes game. Considering such mediated mechanisms is useful because they inform us about the limits to communication possibilities via cheap talk. A mediated mechanism is a probability distribution over the product set of actions {(F, F), (F,C), (C, F), (C,C)} for every profile of types. In a (direct) mediated communication process the players first report their types (H or L) to the mediated mechanism (mediator) and then the mediator picks an action profile according to

Communication Equilibria of Battle of the Sexes with Incomplete Information

355

the given probability distribution and informs the respective action to each player privately. The players then play the game. As in the earlier section, we consider symmetric communication. Definition 4. A symmetric mediated mechanism is a probability distribution over the product set of actions {(F, F), (F,C), (C, F ), (C,C)} of the Battle of the Sexes game for every profile of reported types, as below: F F C

C v7

1−v6 −v7 2

1−v6 −v7 2

v6

F F 1 − v3 − v4 − v5 C v4

HH

F C

F v3 v4

C v5 F 1 − v3 − v4 − v5 C LH

C v5 v3

HL

F 1−v1 −v2 2

v1

C v2

1−v1 −v2 2

LL

where all vi ’s lie in the closed interval [0, 1]. Mediated mechanisms have been studied by Banks and Calvert (1992) in essentially the same setup as the one we consider here. It is easy to see that our version of the Battle of the Sexes with incomplete information can readily be obtained from Banks and Calvert’s (1992) setup through a linear transformation of the players’ payoffs. Our setting however differs in the sense that our mediator makes nonbinding recommendations, and therefore needs to provide incentives for each player to follow that recommendation. Definition 5. A (symmetric) mediated mechanism is called a (symmetric) mediated equilibrium if it provides the players with incentives to truthfully reveal their types to the mediator, and provides the players with incentives to follow the mediator’s recommendations following their type announcements. A (symmetric) mediated equilibrium thus can be characterized by a set of Incentive Compatibility constraints. To be in equilibrium, a symmetric mediated mechanism as above must satisfy the following Incentive Compatibility constraints.4 IC1: Incentive Compatibility for H-type to report H =⇒   v6 +v7 1 2 (1 − p)(1 + H) v1+v 2 − p(1 + H) 2 − (1 − H) v3 − 2 (IC1) −[1 − (1 + H)p](v4 + v5 ) ≥ 0. IC2: Incentive Compatibility for L-type to report L =⇒ 4

These constraints are for the player 1 and by symmetry, the set of constraints for player 2 is mathematically identical.

356

Chirantan Ganguly and Indrajit Ray v1+v2 1 7 (1 + L)p v6 +v 2 − (1 + L)(1 − p) 2 + (1 − L)(v3 − 2 ) +[1 − (1 + L)p](v4 + v5) ≥ 0.

(IC2)

IC3: Incentive Compatibility for H-type to choose F when F has been recommended =⇒ 1 (1 − p)(1 − v3 − v4 − v5 ) + p (1 − v6 − v7 ) − H[(1 − p)v5 + pv7] ≥ 0. 2

(IC3)

IC4: Incentive Compatibility for H-type to choose C when C has been recommended =⇒ 1 H[(1 − p)v3 + p (1 − v6 − v7)] − (1 − p)v4 − pv6 ≥ 0. (IC4) 2 IC5: Incentive Compatibility for L-type to choose F when F has been recommended =⇒ 1 (1 − p) (1 − v1 − v2) + pv3 − L[(1 − p)v2 + pv5 ] ≥ 0. (IC5) 2 IC6: Incentive Compatibility for L-type to choose C when C has been recommended =⇒ 1 L[(1 − p) (1 − v1 − v2 ) + p(1 − v3 − v4 − v5 )] − (1 − p)v1 − pv4 ≥ 0. 2 F F C

C

H 1 (1+H)2 (1+H)2 H2 H (1+H)2 (1+H)2

F

F 0

C 0

C

0

1

HH

HL

F

F 1

C 0

F

C

0

0

C

LH

(IC6)

F

C

L 1 (1+L)2 (1+L)2 L2 L (1+L)2 (1+L)2

LL

Among the class of symmetric mediated equilibria, one could characterize the ex-ante efficient (in terms of ex-ante expected payoffs) symmetric mediated equilibrium in our setup following the results in Banks and Calvert (1992) who indeed have characterized a similar ex ante efficient incentive-compatible mechanism.

Communication Equilibria of Battle of the Sexes with Incomplete Information

357

We however focus on the issue of obtaining any unmediated equilibrium from the previous section as a mediated equilibrium. It is well-known that any unmediated equilibrium can indeed be obtained using a mediated mechanism. Typically, however the mediator can improve upon the set of unmediated equilibria. Here we are interested in achieving Se f f , the efficient fully revealing symmetric unmediated equilibrium as a symmetric mediated equilibrium. Consider the following symmetric mediated mechanism defined by the probabilities over action profiles for each type-profile induced by the strategy profile Se f f . Define the symmetric mediated mechanism equivalent to the above distribution L2 as Me f f . So, Me f f is the symmetric mediated mechanism where v1 = (1+L) 2 , v2 = 1 ,v (1+L)2 3

= 1, v4 = 0, v5 = 0, v6 =

H2 (1+H)2

and v7 =

1 . (1+H)2

We observe the follow-

ing. Proposition 1. Me f f is a (symmetric) mediated equilibrium when 2L2 H+L2 H 2 +L2 −L+H+LH 2 +L2 H+L2 H 2 +H 2 ≤ p ≤ L+H+LH 2 +L2 H+L2 H 2 +L2 +H 2 +1 . L+H+LH 2 +L2 H+L2 H 2 +L2 +H 2 +1 Proof: Substituting the above values of v1 , v2 , v3 , v4 , v5 , v6 and v7 into the six Incentive Compatibility constraints, one can easily check that IC3 and IC6 will be satisfied for all p. Also, IC4 will hold if p ≤ 1 and IC5 will hold if 0 ≤ p. Finally, −L+H+LH 2 +L2 H+L2 H 2 +H 2 note that IC1 will be satisfied if p ≤ L+H+LH 2 +L2 H+L2 H 2 +L2 +H 2 +1 and IC2 will 2

2

2

2

H+L H +L require that L+H+LH2L 2 +L2 H+L2 H 2 +L2 +H 2 +1 ≤ p. Hence the proof. One might be interested in comparing the above range for p for Me f f to be in equilibrium with the range for p that we found for Se f f to be in equilibrium. It can be shown that the first range strictly contains the latter with respect to both 2 H+L2 H 2 +L2 L2 +L2 H the lower and upper bounds, i.e., L+H+LH2L 2 +L2 H+L2 H 2 +L2 +H 2 +1 < 1+L+L2 +L2 H and HL+H 2 L 1+L+HL+H 2 L

2

2

2

2

2

−L+H+LH +L H+L H +H < L+H+LH 2 +L2 H+L2 H 2 +L2 +H 2 +1 . The proposition thus implies that the outcome generated by the profile Se f f , with the desirable characteristic, can be obtained as a mediated equilibrium Me f f for a larger range of p.

5 Remarks 5.1 Asymmetric Equilibria We have characterized the set of fully revealing symmetric equilibria of the cheap talk game. There are of course many fully revealing but asymmetric equilibria of this extended game. Clearly, babbling equilibria exist in which the players ignore the communication and just play one of the Nash equilibria of the complete information Battle of the Sexes game for all type-profiles. There are other asymmetric equilibria. Consider for example the following strategy profile (σ1 (t1 ,t2 ), σ2 (t1 ,t2 )) that the players play in the action stage. (σ1 (H, H),

358

Chirantan Ganguly and Indrajit Ray

σ2 (H, H)) = (σ1 (H, L), σ2 (H, L)) = (C,C), (σ1 (L, H), σ2 (L, H)) = (F, F), and σi (L, L) = σ i (LL), where σ i (LL) is the mixed Nash equilibrium of the complete information Battle of the Sexes game with values L, L. The outcome can be generated by the following distribution (mediated mechanism). FC FC FC F 0 0 F 0 0 F 1 0 F C 0 1 C 0 1 C 0 0 C HH

HL

F

C

L 1 (1+L)2 (1+L)2 L2 L (1+L)2 (1+L)2

LH

LL

Call this strategy profile Sasymm . Proposition 2. Sasymm is a fully revealing equilibrium of the extended incomplete LH information Battle of the Sexes game with cheap talk only when L2 ≤ p ≤ 1−H+L . Proof: Under Sasymm , H-type player will reveal his type truthfully only if p(H) + H (1 − p)(H) ≥ p(1) + (1 − p)( 1+L ) where the LHS is the expected payoff from truthfully announcing H and the RHS is the expected payoff from announcing L and choosing the optimal action in the action phase given the deviation in the cheap talk LH phase. This inequality implies p ≤ 1−H+L . Also, L-type player will reveal his type L truthfully only if p(1) + (1 − p)( 1+L ) ≥ p(L) + (1 − p)(L) which implies L2 ≤ p. Hence the proof.

5.2 Payoffs One may be interested in the payoff generated by the best fully revealing symmetric equilibrium. Note that the ex-ante expected payoff for any player from Se f f and L H Me f f is identical and is given by EU = p2 1+H + p(1 − p)(1 + H) + (1 − p)2 1+L . This EU is concave in p and has an unique interior maximum in [0, 1]. However, ∂ EU HL+H 2 L ∂ p > 0 at p = 1+L+HL+H 2 L (the upper bound for the range of p for which Se f f is an equilibrium). So, EU is increasing over this range of p. Hence, the best achievable payoff from SeHf f is

L EU = p2 1+H + p(1 − p)(1 + H) + (1 − p)2 1+L 2 p= HL+H L 2 1+L+HL+H L   L 2 3 4 2 + H3 + 1 L + H + 2LH + 2LH + LH + LH + 2H = 2 (LH 2 +LH+L+1) Similarly, best achievable payoff from Me f f is

the H L EU = p2 1+H + p(1 − p)(1 + H) + (1 − p)2 1+L

achieved at p =

−L+H+LH 2 +L2 H+L2 H 2 +H 2 . L+H+LH 2 +L2 H+L2 H 2 +L2 +H 2 +1

Communication Equilibria of Battle of the Sexes with Incomplete Information

359

5.3 An Example We may illustrate all our results using an example. Take for example, L = 13 , H = 32 . For these values, the range of the prior p for which the efficient fully revealing symmetric unmediated equilibrium Se f f exists is 0.12 ≤ p ≤ 0.22. On the other hand, the range of the prior p for which Me f f is a mediated equilibrium is given by 0.11 ≤ p ≤ 0.37. The best payoff from Se f f (at p = 0.22) is 0.46 while the corresponding best payoff from Me f f (at p = 0.37) is 0.54.

6 Conclusion In this paper, we consider an incomplete information version of the Battle of the Sexes game. The game has two-sided private information, two-sided cheap talk and of course, two-sided actions. Cheap talk is modeled by adding a stage of announcements by players about their own types before going into the action stage. Strategic information transmission and communication in games has been recognised as an important determinant of outcomes in these games. The seminal work by Crawford and Sobel (1982) first illustrated this point, following which a burgeoning literature has been trying to investigate different aspects of this issue. The sender-receiver framework has been extended in different directions. Extensions include introducing multiple senders (Gilligan and Krehbiel (1989); Austen-Smith (1993); Krishna and Morgan (2001a, 2001b); Battaglini (2002)) and multiple receivers (Farrell and Gibbons (1989)); however, these extensions are not helpful in analyzing two-player games with two-sided actions and two-sided private information because only receivers take actions and only senders have private information. What happens in two-player games where both might have private information and both can indulge in cheap talk and both can take decisions or choose actions? Some authors have pursued these problems in a complete information environment (Rabin (1994); Santos (2000)) as well as in an incomplete information setting (Matthews and Postlewaite (1989); Austen-Smith (1990); Banks and Calvert (1992); Baliga and Morris (2002); Baliga and Sjostrom (2004)). The issue of multiple rounds of cheap talk has also been discussed in the literature (Aumann and Hart (2003); Krishna and Morgan (2004); R. Vijay Krishna (2007)) but only with one-sided private information. A different avenue of research considers what happens when a static cheap talk game is repeated. Repetition gives rise endogenously to reputational concerns and this might impose additional constraints on what can be communicated via cheap talk (Sobel (1985); Benabou and Laroque (1992); Morris (2001); Avery and Meyer (2003); Ottaviani and Sorensen (2006a, 2006b); Olszewski (2004)). Analysing cheap talk in this repeated framework would require us to make assumptions about the nature of these reputational concerns. Does the sender care about appearing to be

360

Chirantan Ganguly and Indrajit Ray

well informed or does he want to be perceived as not having a large conflicting bias? He might have a bigger incentive to mask the truth and create a false perception now because this will affect his future credibility and hence future payoffs. We achieve a desirable (ex-post efficient) outcome as a cheap-talk equilibrium outcome. The desirability criterion is mainly related to altruistic concerns for fairness whereby different players sacrifice or compromise under different states of nature. We should mention here that Borgers and Postl (2009) consider a set-up in which a compromise outcome is chosen. In a follow-up paper, Ganguly and Ray (2009) consider cheating during the announcement phase. When cheap talk fails to achieve the exact desirable outcome, their results indicate how one can use partially revealing cheap talk to approximate the desired outcome and derive how close an achievable outcome can be to the desired outcome. They consider more general strategy profiles which allow for some degree of randomization at the cheap talk stage itself. Clearly, this would lead to outcomes that are somewhat different from the desirable outcomes we want to achieve. Nevertheless, characterising these equilibria will help us analyze how close to the desirable outcome we can get using some form of cheap talk. Under certain conditions, they also show that cheating or randomisation by both types of players during the announcement phase can be welfare improving compared to cheating by just one type. Finally, in this context, one may think of a planner who may be able to help the players coordinate using a social choice function that may be fully implemented. Ray (2009) has illustrated this point by using implementation and correlated equilibrium distributions, in the spirit of Kar, Ray and Serrano (2009). Ray considers any social choice function that chooses one of the two pure Nash equilibria in two different states from the class of all correlated equilibrium distributions and asks whether it can be implemented in Nash equilibrium or not. Acknowledgements We wish to thank all seminar and conference participants at Belfast, Birmingham, Brunel, Exeter, ISI Kolkata, Keele, Newcastle, NUS, Rice, SMU, Texas A&M and Warwick for helpful comments and particularly, Peter Postl for constructive suggestions.

References 1. Aumann, Robert J. and Sergiu Hart: “Long Cheap Talk,” Econometrica, 71, 2003, 1619-1660 2. Austen-Smith, David: “Information Transmission in Debate,” American Journal of Political Science, 34, 1990, 124-152 3. Austen-Smith, David: “Interested Experts and Policy Advice: Multiple Referrals under Open Rule,” Games and Economic Behavior, 5, 1993, 3-43 4. Avery, Christopher and Margaret Meyer: “Designing Hiring and Promotion Procedures When Evaluators are Biased,” Mimeo, 2003 5. Baliga, Sandeep and Stephen Morris: “Coordination, Spillovers and Cheap Talk,” Journal of Economic Theory, 105, 2002, 450-468 6. Baliga, Sandeep and Tomas Sjostrom: “Arms Races and Negotiations,” Review of Economic Studies, 71, 2004, 351-369

Communication Equilibria of Battle of the Sexes with Incomplete Information

361

7. Banks, Jeffrey S. and Randall L. Calvert: “A Battle-of-the-Sexes Game with Incomplete Information,” Games and Economic Behavior, 4, 1992, 347-372 8. Battaglini, Marco: “Multiple Referrals and Multidimensional Cheap Talk,” Econometrica, 70, 2002, 1379-1401 9. Benabou, Roland and Guy Laroque: “Using Privileged Information to Manipulate Markets Insiders, Gurus, and Credibility,” Quarterly Journal of Economics, 107, 1992, 921-958 10. Borgers, Tilman and Peter Postl: “Efficient Compromising,” Journal of Economic Theory, Forthcoming, 2009 11. Crawford Vincent P. and Joel Sobel: “Strategic Information Transmission,” Econometrica, 50, 1982, 1431-1451 12. Dixit, Avinash K. and Carl Shapiro: “Entry Dynamics with Mixed Strategies,” The Economics of Strategic Planning, 1985, L.G. Thomas (ed.), Lexington Books 13. Farrell, Joseph: “Cheap Talk, Coordination, and Entry,” RAND Journal of Economics, 18, 1987, 34-39 14. Farrell, Joseph and Robert Gibbons: “Cheap Talk with Two Audiences,” American Economic Review, 79, 1989, 1214-1223 15. Farrell, Joseph and Garth Saloner: “Coordination Through Committees and Markets,” RAND Journal of Economics, 19, 1988, 235-252 16. Ganguly, Chirantan and Indrajit Ray: “Two-sided Cheap-Talk Equilibria in Battle of the Sexes with Incomplete Information,” Mimeo, 2009 17. Gilligan, Thomas W. and Keith Krehbiel: “Asymmetric Information and Legislative Rules with a Heterogeneous Committee,” American Journal of Political Science, 33, 1989, 459-490 18. Kar, Anirban, Indrajit Ray and Roberto Serrano: “A Difficulty in Implementing Correlated Equilibrium Distributions,” Games and Economic Behavior, Forthcoming, 2009 19. Katz, Michael L. and Carl Shapiro: “Network Externalities, Competition, and Compatibility,” American Economic Review, 75, 1985, 424-440 20. Krishna, R. Vijay: “Communication in Games of Incomplete Information: Two Players,” Journal of Economic Theory, 132, 2007, 584-592 21. Krishna, Vijay and John Morgan: “A Model of Expertise,” Quarterly Journal of Economics, 116, 2001a, 747-775 22. Krishna, Vijay and John Morgan: “Asymmetric Information and Legislative Rules: Some Amendments,” American Political Science Review, 95, 2001b, 435-457 23. Krishna, Vijay and John Morgan: “The Art of Conversation: Eliciting Information from Experts through Multi-Stage Communication,” Journal of Economic Theory, 117, 2004, 147-179 24. Matthews, Steven A. and Andrew Postlewaite: “Pre-Play Communication in Two-Person Sealed-Bid Double Auctions,” Journal of Economic Theory, 48, 1989, 238-263 25. Morris, Stephen: “Political Correctness,” Journal of Political Economy, 109, 2001, 231-265 26. Olszewski, Wojciech: “Informal Communication,” Journal of Economic Theory, 117, 2004, 180-200 27. Ottaviani, Marco and Peter Norman Sorensen: “Professional Advice,” Journal of Economic Theory, 126, 2006a, 120-142 28. Ottaviani, Marco and Peter Norman Sorensen: “Reputational Cheap Talk,” RAND Journal of Economics, 37, 2006b, 155-175 29. Park, In-Uck: “Cheap Talk Coordination of Entry by Privately Informed Firms,” RAND Journal of Economics, 33, 2002, 377-393 30. Rabin, Matthew: “A Model of Pre-game Communication,” Journal of Economic Theory, 63, 1994, 370-391 31. Ray, Indrajit: “Coordination and Implementation,” Mimeo, 2009 32. Santos, Vasco: “Alternating-announcements Cheap Talk,” Journal of Economic Behavior & Organization, 42, 2000, 405-416 33. Sobel, Joel: “A Theory of Credibility,” The Review of Economic Studies, 52, 1985, 557-573

A Characterization Result on the Coincidence of the Prenucleolus and the Shapley Value Anirban Kar, Manipushpak Mitra, and Suresh Mutuswami

Abstract A PS game is a TU game where the sum of a player’s marginal contribution to any coalition and its complement coalition is a player specific constant. For PS games the prenucleolus coincides with the Shapley Value. In this short paper we show that if L is an anonymous linear subspace of TU games such that it has a basis which is a subset of the class of unanimity games, then the prenucleolus coincides with the Shapley value on L if and only if L is a subset of the class of all PS games.

1 Introduction The prenucleolus is a Rawlsian concept used in TU games for allocating resources. It is defined as the efficient profile which lexicographically minimizes the “excess” of all coalitions. Inspite of its obvious fairness justification, it has not been widely applied as a solution simply because of its computational difficulty. A number of papers have appeared in which the prenucleolus is computed by an algorithm in which either one or a huge number of huge linear programs have to be solved. For instance see Derks and Kuipers [4]. An extensive overview of the reserach on nucleolus can be found in Maschler [6]. Arin and Feltkamp [1] also presents an algorithm for computing the nucleolus of a particular class of games. Since the Anirban Kar Department of Economics, Delhi School of Economics, University of Delhi, Delhi 110007, India. e-mail: [email protected] Manipushpak Mitra ERU, Indian Statistical Institute, 203, B. T. Road, Kolkata 700 108, India. e-mail: [email protected] Suresh Mutuswami Department of Economics, University of Leicester, University Road, Leicester, LE1 7RH, UK. e-mail: [email protected],[email protected]

362

The Prenucleolus and the Shapley Value

363

algorithm for calculating the Shapley value is well known, one way of bypassing this computational difficulty is to identify games where the two solutions coincide. It is well known that the prenucleolus coincides with the Shapley value on all two player games and symmetric games.1 In a recent work, Kar et al. [5] identified a class of games, called PS games, where also the two solutions coincide.2 In a PS game, the sum of a player’s marginal contribution to any coalition S and its complement coalition N \ (S ∪ {i}) is a player specific constant. Of course, PS-games are sufficient but not necessary for the coincidence.3 Obtaining a necessary condition is not easy because the set where the two solutions coincide is non-convex.4 In this paper, we take a step towards filling the gap between the sufficient condition for the coincidence of the prenucleolus and the Shapley value (PS games) and a necessary condition by characterizing the linear subspaces where the two solutions coincide provided there is a basis of this subspace containing only unanimity games. Our characterization result says that the two solutions coincide on such a linear subspace if and only if the basis containing only unanimity games contains at most one non-PS game, and if there is a non-PS game, then the rest are dictatorial games. It is known, following Shapley, that the class of all unanimity games is a basis for the linear space of all n-player TU games. However, the same is not true for strict subspaces. For instance, the linear subspace of all n-player symmetric games cannot be decomposed in terms of unanimity games because these games are not symmetric. Hence, our characterization result is a limited one. Notwithstanding such cases, we think our characterization result is still of value and the condition that we obtain indicates that any linear subspace where the two solution concepts coincide must be “close” to the class of PS games.

2 Preliminaries A coalitional form game with transferable utility (or a TU game) is a tuple (N, v) consisting of a finite set N = {1, . . . , n} of players and a function v : 2N → ℜ such that v(0) / = 0. The number v(S) is the total payoff available for division among the members of the coalition S.5 A profile is a vector x ∈ ℜN . A profile x is efficient if ∑i∈N xi = v(N). It is an imputation if it is efficient and xi ≥ v(i) for all i ∈ N. We denote by X the set of all efficient profiles and by I the set of all imputations. We will denote ∑i∈S xi by x(S). Let Δi v(S) = v(S ∪ {i}) − v(S) be the marginal contribution of player i to the coalition S. Definition 1. The Shapley value of a game G = (N, v) is defined by 1

See Winter [12]. See also Chun and Hokari [2], Deng and Papadimitriou [3] and van den Nouweland et al. [8]. 3 See Kar et al. [5]. 4 We show this in Section 3. 5 For notational convenience, we will sometimes write v(i) instead of v({i}) when dealing with singleton coalitions. 2

364

Anirban Kar, Manipushpak Mitra, and Suresh Mutuswami

φi (v) =

1 Δi v(Pi (π )) for all i ∈ N n! π∑ ∈Π

where Π is the set of all orderings of N and Pi (π ) = { j|π ( j) < π (i)}.6 An agent i ∈ N is a dummy if Δi v(S) = 0 for all S ⊆ N \ i. One can easily check that if i is a dummy player, then φi (v) = 0. Let x be an efficient profile of the game (N, v). The excess of a coalition S with respect to x is defined as e(S, x, v) = v(S) − x(S). We will omit the dependence on v and simply write e(S, x) when there is no confusion. The vector θ (x) is constructed by arranging the set of 2n − 2 excesses corresponding to proper non-empty subsets of N in non-increasing order.7 If y and z are two efficient profiles, then y 0 then e(N \ R, z) = rε > 0. Thus, for all ε , θ (φ (v)) ≥L θ (z). Case 2: β < 0. The maximum excess with respect to the Shapley value is (a) 0 if r = 1 and (b) − βr (r − 1) if r > 1. If r = 1 then, applying the same steps as in Case 1, we get θ (φ (v)) ≥L θ (z) for all ε . Consider r > 1. If ε < 0, then for any set R \ {i} with i ∈ R, e(R \ {i}, z) = −(r − 1)( βr + ε ) > − βr (r − 1) and θ (φ (v)) ≥L θ (z). Finally, if ε > 0, then for any i ∈ R, e(N \ {i}, z) = − βr (r − 1) + ε > − βr (r − 1) and θ (φ (v)) ≥L θ (z).  Thus, for r > 1, θ (φ (v)) ≥L θ (z) for all ε .10 Lemma 3. Let US and UR be two unanimity games where S ∪ R ⊆ N and S = R. Let v = α US + β UR . Suppose x is an efficient profile of v such that xk = 0 for all k ∈ S ∪ R. Then e(T, x) = e([T ∩ (S ∪ R)], x) for all T ⊆ N. Proof: Note that x(T ) = x(T ∩ (S ∪ R)) since xk = 0 for k ∈ S ∪ R. Therefore, it suffices to prove that v(T ) = v(T ∩ (S ∪ R)). Consider any T ⊆ N. If S ⊆ T , then S ⊆ [T ∩ (S ∪ R)] which implies that US ([T ∩ (S ∪ R)]) = 1 = US (T ). On the other hand, S ⊆ T implies S \ T is non-empty. In turn, this implies that S \ [T ∩ (S ∪ R)] is non-empty because T ∩ (S ∪ R) ⊆ T . Hence, US ([T ∩ (S ∪ R)]) = 0 and we have US ([T ∩ (S ∪ R)]) = 0 = US (T ). In a similar way, UR (T ) = UR ([T ∩ (S ∪ R)]) and the proof is complete since v = α US + β UR. ⊓ ⊔ 10

This result can be alternatively proved using findings by Arin and Feltkamp [1]. The game β UR is a veto-rich game with non-empty imputation. From Lemma 3.3 and Theorem 3.7 of Arin and Feltkamp it follows that ψi (v) = 0 for all i ∈ N \R. Since each i ∈ N \R is a dummy player, φi (v) = 0 for all i ∈ N \ R. Finally, since players in R are symmetric and since both prenucleolus and Shapley value satisfy symmetry and efficiency, the result follows.

The Prenucleolus and the Shapley Value

367

Now we are ready to prove our characterization result. The main effort is in proving necessity and this is done by contradiction. Basically, we show that if there is a basis of unanimity games containing a non-PS game and a non-dictatorial unanimity game, then it is possible to construct a game (in L ) where the prenucleolus and the Shapley value differ. Theorem 2. Let L be a linear subspace of TU games that has a basis S which is a subset of the class of unanimity games. Then, φ (v) = ψ (v) for all (N, v) ∈ L if and only if US ∈ S \ T ⇒ UR ∈ T1 for all UR ∈ S \ US . Proof: (Sufficiency) If S ⊆ T then the prenucleolus and the Shapley value coincide on L . Consider the unanimity game Ui ∈ T1 , for i ∈ N. Clearly, for this game Δi ui (S) = 1 for all S ⊆ N \{i} and for all j = i, Δ j ui (S) = 0 for all S ⊆ N \{ j}. Therefore, Ui is a PS game with ci = 2 and c j = 0 for all j = i. Now consider UT ∈ T2 where T = {i, j}, i = j. For all S ⊆ N \ {i}, Δi uT (S) + Δi uT (N \ [S ∪ {i}]) = 1 since either j ∈ S or j ∈ N \ [S ∪ {i}]. A similar reasoning holds for player j. For any k ∈ N \ {i, j}, Δk (S) = 0 for all S ⊆ N \ {k}. Hence, UT ∈ T2 is a PS game with ci = c j = 1 and ck = 0 for all k ∈ N \ {i, j}. Finally, if US ∈ S \ T and UR ∈ T1 for all UR ∈ S \ US , then Lemma 1 and Lemma 2 ensure coincidence. (Necessity) Let US ∈ S \ T and UR ∈ S \ (T1 ∪ US ). Define v = α US + β UR for α , β ∈ ℜ. Note that φk (v) = 0 for k ∈ / S ∪ R, because all such agents are dummy players. To complete the proof, we will construct an alternative profile y such that yk = 0 for all k ∈ / S ∪ R and θ (y) 1. The Shapley value of v is ⎧α if k ∈ S \ R, ⎪ ⎨ |S| β if k ∈ R \ S, φk (v) = |R| ⎪ ⎩α β |S| + |R| if k ∈ S ∩ R. Since S = R, we can assume, without loss of generality, that S \ R = 0. / Choose α > 0, β > 0 such that α /|S| < β /|R|. We will show that the coalitions with the maximum excess are all singleton coalitions {i} ⊆ S \ R. Indeed, for any such coalition T , e(T, φ (v)) = −α /|S|.11 Now, let T = {k}, k ∈ S \ R. If S ⊆ T = S ∪ R, then

11

Since |S ∩ R| > 1, it follows that |S| > 1, and hence, v({i}) = 0 for i ∈ S \ R.

368

Anirban Kar, Manipushpak Mitra, and Suresh Mutuswami

α β β α α − ∑ =− ∑ < − |T ∩ R| < − |S| |R| |R| |S| |S| k∈S k∈T ∩R k∈T ∩R

e(T, φ (v)) = α − ∑

since |T ∩ R| ≥ |S ∩ R| > 1. Similarly, if R ⊆ T = S ∪ R, then e(T, φ (v)) = −



k∈S∩T

α α 1. For all other T , v(T ) = 0 and hence, e(T, φ (v)) = −

α β α − ∑ 1 we can choose i ∈ S \ R and j ∈ S ∩ R. Define y by ⎧ ⎨ φk (v) + ε if k = i, yk (v) = φk (v) − ε if k = j, ⎩ otherwise. φk (v)

Since e({i}, φ (v)) > e(T, φ (v)) for all T ∈ {{k}|k ∈ S \ R} we can choose ε > 0 sufficiently small so that e({i}, y) = e({i}, φ (v)) − ε > e(T, y) for all T ∈ {{k}|k ∈ S \ R}. Moreover e({k}, y) = e({k}, φ (v)) for all k ∈ S \ [R ∪ {i}] and e({i}, y) < e({i}, φ (v)). Therefore θ (y) 0. The Shapley value is as follows. ⎧ α − if k ∈ S \ {l}, ⎪ ⎨ |S| β if k ∈ R \ {l}, − φk (v) = |R| ⎪ ⎩ α β − |S| − |R| if k = l. Let A1 be the collection of subsets with the highest excess at φ (v). We show that A1 = {(S ∪ R) \ {s1, r1 } | s1 ∈ S \ {l}, r1 ∈ R \ {l}} ∪ {(S ∪ R) \ {l}}. The sets in A1 are of the following type: (i) sets where one element each from S \ R and R \ S have been removed from S ∪ R, or (ii) where the common element l has been removed from S ∪ R. Define γ = (α + β ) − (α /|S| + β /|R|). If A = {(S ∪ R) \ {s1 , r1 }}, s1 ∈ S \ {l}, r1 ∈ R \ {l} then, e(A, φ (v)) = − ∑k∈A φk (v) = − (v(S ∪ R) − φs1 (v) − φr1 (v)) = γ .

The Prenucleolus and the Shapley Value

369

A similar computation shows that e((S ∪ R) \ {l}, φ (v)) = γ . We will show that the excess of all other subsets is less than γ . There are several possibilities. If S ⊆ T = S ∪ R, then  α β β e(T, φ (v)) = −α − − ∑ =β− ∑ − . ∑ |R| |R| k∈S |S| k∈R\T k∈(T \S)∪{l} Since α > 0, we have e(T, φ (v)) < γ . A similar argument holds if R ⊆ T = S ∪ R. Finally, if T is neither a superset of R nor a superset of S, then e(T, φ (v)) = −

α β α β − ∑ = α +β − ∑ − ∑ . |S| k∈R\T |R| k∈T ∩S |S| k∈T ∩R |R| k∈S\T



Since both S \ T and R \ T are nonempty and T ∈ / A1 , we have e(T, φ (v)) < γ . Using the above procedure to the coalitions in 2S∪R \ A1 , we can identify the coalitions with the next highest excess to be either (i) B1 = {(S ∪ R) \ {l, s1} | s1 ∈ S \ {l}}, (ii) B2 = {(S ∪ R) \ {s1, s2 , r1 } | s1 , s2 ∈ S \ {l} and r1 ∈ R \ {l}} or (iii) B3 = {(S ∪ R) \ {l, r1} | r1 ∈ R \ {l}}, (iv) B4 = {(S ∪ R) \ {s1, r1 , r2 } | s1 ∈ S \ {l} and r1 , r2 ∈ R \ {l}}. For each B ∈ B1 ∪ B2 , the excess can be computed to be   β 2α + = γ1 e(B, φ (v)) = (α + β ) − |S| |R| while the excess for each B ∈ B3 ∪ B4 is



2β α + e(B, φ (v)) = (α + β ) − |S| |R|



= γ2 .

Choose α , β such that β /|R| > α /|S| > 0. This choice will make γ1 > γ2 . Therefore A2 = B1 ∪ B2 is the set of coalitions with the second highest excess at φ (v). Define y by

⎧ ⎪ if k ∈ S \ {l}, ⎨ φk (v) + ε (|S|−1) ε if k ∈ R \ {l}, yk = φk (v) − (|R|−1) ⎪ ⎩ φ (v) if k = l. k

Let us distinguish between two sub-cases. Subcase (i): |S| > |R|. Choose ε > 0. It is easy to verify that the excess of [S ∪ R \ {l}] remains the same under y. For all other A ∈ A1 ,

370

Anirban Kar, Manipushpak Mitra, and Suresh Mutuswami

   |S| − 1 < e(A, φ (v)) e(A, y) = γ + ε 1 − |R| − 1 because |S| > |R|. We can choose ε sufficiently small such that the sets in A1 still have higher excess than the rest. Therefore θ (y) |S|, we choose ε < 0 but the argument remains the same. Subcase (ii): |S| = |R|. Choose ε < 0. Note that the excess of sets in A1 remains unchanged for any choice of ε since |S| = |R|. However, the excess of sets in A2 will now change.12 If B ∈ B1 , then e(B, y) = e(B, φ (v)) + ε < e(B, φ (v)). If B ∈ B2 , then    |S| − 1 = e(B, φ (v)) + ε < e(B, φ (v)). e(B, y) = e(B, φ (v)) + ε 2 − |R| − 1 Once again we can choose ε sufficiently close to zero such that A2 remain the collection of sets with the second highest excess. Hence, θ (y) 0 is a vulnerable coalition. Most voting situations in the real world are weighted systems that can be represented by a sytem of weights and quotas. Consider the Lok Sabha (LS), which is the Lower House of the Parliament of the Republic of India. The voters here are political parties rather than individuals. Each voter typically occupies a certain number of seats in the LS. Therefore, if a political party A decides to vote for a bill, it actually contributes wA ‘yes’ votes, where wA is the number of seats that A has in the LS. A bill presented before the LS is therefore passed if the total number of ‘yes’ votes cast in favor of the bill exceeds a pre-defined quota. Such voting situations are formally described by a weighted majority game, which is defined below. Definition 2. A weighted majority game G is a quadruplet (N;V ; w; q), where w = {w1 , w2 , ..., wn } is the vector of non-negative real weights of the voters in N and q is a non-negative real quota such that 0 < q ≤ ∑ wi , and for any S ∈ P(N) i∈N

V (S) = 1 ⇔ ∑ wi ≥ q and V (S) = 0, otherwise. i∈S

Thus a coalition is winning if and only if the weights of the members of the coalition add to a number that is at least as large as the quota. Next we will define a very important class of simple voting games called swap robust games (Taylor, 1995; Taylor and Zwicker, 1999). Definition 3. A simple voting game G = (N;V ) is said to be swap robust if V ((S\{i}) ∪ { j}) + V ((S′ \{ j}) ∪ {i}) = 0 ∀S, S′ ∈ W (G) and i, j ∈ N such that i ∈ S\S′ while j ∈ S′ \S. In other words, a simple voting game is swap robust if swapping (or interchanging) two voters between two winning coalitions does not turn both the coalitions into losing ones. Swap robust games are particularly significant because weighted majority games are swap robust (Taylor and Zwicker (1999)). Definition 4. A number valued index of power is a real valued function ξ : N → R+ , where R+ is the non-negative part of the real line. Thus, given a simple voting game G = (N;V ), an index of power assigns each voter a non-negative real number. With every such real valued functions, one can associate a complete preordering on N (≥ξ ) which is defined by : i ≥ξ j ⇐⇒ ξi ≥ ξ j. Next we will define the influence relation. The influence relation ranks voters according to how much influential they are in the decision making process without assigning any numbers to them. Formally, Definition 5.1. Let G = (N;V ) be a simple voting game as defined above. Two voters i, j ∈ N are said to be equally influential (or desirable) if ∀S ⊂ N\{i, j},

The Ordinal Equivalence of the Johnston Index and the Established Notions of Power

375

S ∪ {i} ∈ W (G) ⇐⇒ S ∪ { j} ∈ W (G). If i, j ∈ N are equally influential, we denote it by i ∼ j. Definition 5.2. Let G = (N;V ) be a simple voting game. Voter i is said to be more influential than voter j, denoted by i ≻ j, if the following two conditions are fulfilled: 1. ∀S ⊂ N\{i, j}, S ∪ { j} ∈ W (G) =⇒ S ∪ {i} ∈ W (G). 2. ∃ a coalition T ⊂ N\{i, j} such that T ∪ {i} ∈ W (G) but T ∪ { j} ∈ / W (G). Voter i is said to be at least as influential as voter j (i.e., i ! j ) if i ≻ j or i ∼ j. Remark 1: Taylor (1995) has shown that the influence relation generates a complete preordering on N if and only if the simple voting game is swap robust. We will now define the Johnston index of individual voting power. The Johnston score (Js) is a function that assigns to any voter i ∈ N in the game G a real value and is defined as Jsi (G) =

1

∑ |Cr (S; G)| .

S∈W (G):i∈Cr(S;G)

The Johnston index JI for a voter i ∈ N is given by JIi (G) =

Jsi (G) . ∑ Jsk (G) k∈N

Thus for any two voters i, j ∈ N, JIi (G) ≥ JI j (G) ⇐⇒ Jsi (G) ≥ Js j (G). As mentioned above, JI was suggested as a modification of the BC index (for a detailed discussion see Felsenthal and Machover (1998)). The departure between the two indices lies in the definition of the score. While in the BC index, a voter i’s score increases by 1 unit for each coalition that i is critical in, in the JI index, the score 1 increases by |Cr(S;G)| for each coalition S in which i is critical. Thus in the JI index, for each coaltion that i is critical in, he shares the score equally with all the other voters who are also pivotal in the same coalition. After having laid down the preliminaries, we will now state and prove our main results in the next section.

3 The Johnston Preordering and the Influence Relation Let us begin this section by defining the following set: Cr (i; m) = {S ∈ W (G) : V (S) − V (S\{i}) = 1 and |Cr (S; G)| = m} . That is, Cr (i; m) is the set of all winning coalitions in which a voter i ∈ N is a critical defector along with m − 1 other critical defectors. Then we can rewrite the Js score of a voter i ∈ N as follows

376

Sonali Roy n

Jsi (G) =

|Cr (i; m)| . m m=1



(1)

Lemma 1. Let G = (N;V ) be a simple voting game and i, j ∈ N. If i and j are equally influential (i ∼ j), then Jsi (G) = Js j (G). Proof: In proving this lemma we will show that if i ∼ j, then for every integer m, such that 1 ≤ m ≤ n, |Cr (i; m)| = |Cr ( j; m)| . We will do so by constructing a 1-1 onto mapping ϕ : Cr ( j; m) −→ Cr (i; m) . Let S ∈ Cr ( j; m) . Then by definition S ∈ W (G) and S\{ j} ∈ / W (G). The following two cases may arise. Either i ∈ S or i ∈ / S. If i ∈ S, then we must have S\{i} ∈ / W (G). If not, we would have S\{i} = (S\{i, j})∪{ j} ∈ W (G) but S\{ j} = (S\{i, j}) ∪ {i} ∈ / W (G). This would contradict the fact that ∀S ⊂ N\{i, j}, S ∪ {i} ∈ W (G) ⇐⇒ S ∪ { j} ∈ W (G) since i ∼ j. Thus, if i ∈ S, then i must be a critical defector in S. Further, since by assumption |Cr (S; G)| = m, we must have S ∈ Cr (i; m) . On the other hand suppose i ∈ / S. Then the fact that i ∼ j implies that i must be a critical defector in the coalition (S\{ j}) ∪ {i}. However in order to conclude that (S\{ j}) ∪ {i} ∈ Cr (i; m), we need to ascertain that |Cr ((S\{ j}) ∪ {i}; G)| = m. For this we will need the following two claims: Claim 1.1: Let S ∈ Cr ( j; m) and voters i and j be equally influential. Then ∄ a voter k ∈ S\{ j} such that k ∈ Cr (S; G) but k ∈ / Cr ((S\{ j}) ∪ {i}; G). Proof: Let us note at the outset that S\{k, j} ⊂ N\{i, j}. Now contradictory to / the claim, let us assume that ∃ a voter k ∈ S\{ j} such that k ∈ Cr (S; G) but k ∈ Cr ((S\{ j}) ∪ {i}; G). This implies that (S\{k, j}) ∪ { j} ∈ / W (G) but (S\{k, j}) ∪ {i} ∈ W (G). This contradicts that ∀S ⊂ N\{i, j}, S ∪ {i} ∈ W (G) =⇒ S ∪ { j} ∈ W (G) and hence violates the hypothesis that i ∼ j. Claim 1.2: Let S ∈ Cr ( j; m) and voters i and j be equally influential. Then ∄ a voter k ∈ S\{ j} such that k ∈ / Cr (S; G) but k ∈ Cr ((S\{ j}) ∪ {i}; G). Proof: Again let us assume that ∃ a voter k ∈ S\{ j} such that k ∈ / Cr (S; G) but k ∈ Cr ((S\{ j}) ∪ {i}; G). This implies that (S\{k, j}) ∪ { j} ∈ W (G) but (S\{k, j}) ∪ {i} ∈ / W (G). This contradicts that ∀S ⊂ N\{i, j}, S ∪ { j} ∈ W (G) =⇒ S ∪ {i} ∈ W (G) and hence again violates the hypothesis that i ∼ j. Thus claims 1.1 and 1.2 ensure that |Cr ((S\{ j}) ∪ {i}; G)| = m and hence (S\{ j}) ∪ {i} ∈ Cr (i; m) . Now we will construct the mapping ϕ : Cr ( j; m) −→ Cr (i; m) . For S ∈ Cr ( j; m) , let ϕ (S) = S if i ∈ S and ϕ (S) = (S\{ j}) ∪ {i} if i ∈ / S. From the above discussion it is clear that ϕ (S) ∈ Cr (i; m) and ∀S ∈ Cr ( j; m) , ϕ (S) is unique. Thus ϕ is a 1-1 mapping. Also it can easily be shown that i ∼ j implies that there does not exist a ′ set S ∈ Cr (i; m) such that it is not the image of any element in Cr ( j; m) under ϕ . Therefore, ϕ is a 1-1 onto mapping. Thus we have |Cr (i; m)| = |Cr ( j; m)| for every integer m, such that 1 ≤ m ≤ n. Using (1) it easily follows that Jsi (G) = Js j (G).  A direct consequence of Lemma 1 is that if i ∼ j, then JIi (G) = JI j (G). Lemma 2. Let G = (N;V ) be a simple voting game and i, j ∈ N. If i is more influential than j (i ≻ j), then Jsi (G) > Js j (G).

The Ordinal Equivalence of the Johnston Index and the Established Notions of Power

377

Proof: The hypothesis that i is more influential than j implies that ∃ a family of coalitions T ⊂ P(N\{i, j}) such that ∀T ∈ T, T ∉ W(G) and T ∪ {j} ∉ W(G) but T ∪ {i} ∈ W(G). However, ∄ a coalition T′ ⊂ N\{i, j} such that T′ ∉ W(G) and T′ ∪ {j} ∈ W(G) but T′ ∪ {i} ∉ W(G). In order to establish that Jsi(G) > Jsj(G), we will argue that we can arrive at the Johnston score for voter i from the score of voter j by adding positive numbers. Expanding (1) we can write the Johnston score of voter j in the game G as

Jsj(G) = |Cr(j; 1)|/1 + ... + |Cr(j; m − 1)|/(m − 1) + |Cr(j; m)|/m + ... + |Cr(j; n)|/n.   (2)

Next we will construct the sets Cr(i; m) as follows: ∀S ∈ Cr(j; m), put the coalition S in the set Cr(i; m) if i ∈ S; if i ∉ S, put the coalition (S\{j}) ∪ {i} in the set Cr(i; m). We can now write

Jsi(G)0 = |Cr(i; 1)|/1 + ... + |Cr(i; m − 1)|/(m − 1) + |Cr(i; m)|/m + ... + |Cr(i; n)|/n.   (3)

We index Jsi(G) by 0 in order to indicate that it is the zeroth step towards arriving at the true Johnston score for i in the game G. It is obvious that by construction Jsj(G) = Jsi(G)0. Now let S ∈ Cr(j; m). If i ∈ S then by the same reasoning as in the proof of Lemma 1 we must have S ∈ Cr(i; m). Let i ∉ S. Since i ≻ j, we must have i as a critical defector in the coalition (S\{j}) ∪ {i}. Now two cases may arise.

Case 1: ∃ a family of coalitions K ⊂ P(Cr(S; G)\{j}) such that (S\{j})\K ∈ T ∀K ∈ K. Then K ∩ Cr((S\{j}) ∪ {i}; G) = φ. That is, replacing j by the more influential voter i renders some erstwhile critical members of the coalition non-critical. This, together with the fact that ∄ a voter k ∈ S\{j} such that k ∉ Cr(S; G) but k ∈ Cr((S\{j}) ∪ {i}; G) (since that would contradict the hypothesis ∀S ⊂ N\{i, j}, S ∪ {j} ∈ W(G) =⇒ S ∪ {i} ∈ W(G)), implies that |Cr((S\{j}) ∪ {i}; G)| = m1, where m1 < m. That is, (S\{j}) ∪ {i} ∈ Cr(i; m1) instead of Cr(i; m) in the game G. Therefore we can pull out the coalition (S\{j}) ∪ {i} from Cr(i; m) and add it to the set Cr(i; m1) in order to calculate the Johnston score of i in the game G. So we have

Jsi(G)1 = |Cr(i; 1)|/1 + ... + (|Cr(i; m1)| + 1)/m1 + ... + (|Cr(i; m)| − 1)/m + ... + |Cr(i; n)|/n
        = |Cr(i; 1)|/1 + ... + |Cr(i; m1)|/m1 + ... + |Cr(i; m)|/m + ... + |Cr(i; n)|/n + (1/m1 − 1/m)
        > Jsi(G)0 = Jsj(G).

Next, we simply add the coalitions T ∪ {i}, T ∈ T, to the appropriate sets Cr(i; m). In this case |Cr(i; m)| simply would increase for some value of m, 1 ≤ m ≤ n, without any corresponding decrease for some other value of m in (2). Thus we would get Jsi(G) > Jsi(G)0 = Jsj(G).

Case 2: ∄ a coalition S ∈ Cr(j; m) for any m, 1 ≤ m ≤ n, for which ∃ a family of coalitions K ⊂ P(Cr(S; G)\{j}) such that (S\{j})\K ∈ T ∀K ∈ K. In this case we simply add the coalitions T ∪ {i}, T ∈ T, to the appropriate sets Cr(i; m), which leads to an increase in |Cr(i; m)| for some value of m without a decrease for some other value of m in (2). Thus once again we would have Jsi(G) > Jsi(G)0 = Jsj(G). □

Consequently, if i ≻ j, we have JIi(G) > JIj(G). Let us give a numerical example to elucidate the proof.

Consider the weighted majority game G = {9; 4, 3, 2, 2, 1, 1} on the voter set {a, b, c, d, e, f}. Then the winning coalitions are:

{a, b, c}, {a, b, d}, {a, b, e, f}, {a, c, d, e}, {a, c, d, f}, {b, c, d, e, f}, {a, b, c, d}, {a, b, c, e}, {a, b, c, f}, {a, b, c, d, e}, {a, b, c, d, f}, {a, b, c, e, f}, {a, b, c, d, e, f}, {a, b, d, e}, {a, b, d, f}, {a, b, d, e, f}, {a, c, d, e, f}.

In each coalition, the voters who are underlined are critical members. It is easily verifiable that a ≻ b.

Cr(b; 1) = φ,
Cr(b; 2) = {{a, b, c, d}, {a, b, c, e, f}, {a, b, d, e, f}},
Cr(b; 3) = {{a, b, c}, {a, b, d}, {a, b, c, e}, {a, b, c, f}, {a, b, d, e}, {a, b, d, f}},
Cr(b; 4) = {{a, b, e, f}},
Cr(b; 5) = {{b, c, d, e, f}},

Jsb(G) = 0/1 + 3/2 + 6/3 + 1/4 + 1/5 = 3.95.

Now we will construct the sets Cr(a; m) following the rule outlined in the proof of Lemma 2. Since all the coalitions in the sets Cr(b; 2), Cr(b; 3) and Cr(b; 4) contain a, we can quite easily verify that

Cr(a; 2) = {{a, b, c, d}, {a, b, c, e, f}, {a, b, d, e, f}},
Cr(a; 3) = {{a, b, c}, {a, b, d}, {a, b, c, e}, {a, b, c, f}, {a, b, d, e}, {a, b, d, f}},
Cr(a; 4) = {{a, b, e, f}}.

However a ∉ {b, c, d, e, f}, so we set Cr(a; 5) = {{a, c, d, e, f}} and write Jsa(G)0 = 0/1 + 3/2 + 6/3 + 1/4 + 1/5. Now T = {{c, d, e}, {c, d, f}}. Thus {b, c, d, e, f}\{b, f} ∈ T and {b, c, d, e, f}\{b, e} ∈ T, so K = {{f}, {e}}. Thus f and e are not critical in the coalition {a, c, d, e, f}. It is easily verified that |Cr({a, c, d, e, f}; G)| = 3. So we readjust the Johnston score of a by pulling out {a, c, d, e, f} from the set Cr(a; 5) and adding it to Cr(a; 3). So we have

Jsa(G)1 = 0/1 + 3/2 + (6 + 1)/3 + 1/4 + (1 − 1)/5 = 0/1 + 3/2 + 6/3 + 1/4 + 1/5 + (1/3 − 1/5) > Jsa(G)0.

Next we need to add the coalitions {a, c, d, e}, {a, c, d, f} to the appropriate set, namely Cr(a; 4). This increase in |Cr(a; 4)|, however, is unaccompanied by a decrease in the size of any other set Cr(a; m). Hence

Jsa(G)2 = 0/1 + 3/2 + 6/3 + (1 + 2)/4 + 1/5 + (1/3 − 1/5) = 4.58 > Jsa(G)1 > Jsa(G)0 = Jsb(G) = 3.95.

(The construction tracks only the images of b's vulnerable coalitions together with the coalitions T ∪ {a}. A direct enumeration also picks up {a, b, c, d, e} and {a, b, c, d, f}, in which a is the lone critical defector while b is not critical at all, so Jsa(G) itself is larger than 4.58; the inequality Jsa(G) > Jsb(G) is what matters here.)

Lemmas 1 and 2 help us to arrive at Proposition 1.
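The hand computation above is easy to check by brute-force enumeration. The sketch below (plain Python; the function names are ours, not the paper's) scores every winning coalition of the game {9; 4, 3, 2, 2, 1, 1} and accumulates the Johnston scores; it reproduces Jsb(G) = 3.95 and confirms Jsa(G) > Jsb(G).

    from itertools import combinations

    # Weighted majority game {9; 4, 3, 2, 2, 1, 1} from the example.
    weights = {'a': 4, 'b': 3, 'c': 2, 'd': 2, 'e': 1, 'f': 1}
    quota = 9
    voters = sorted(weights)

    def wins(coalition):
        return sum(weights[v] for v in coalition) >= quota

    def critical(coalition):
        # Critical defectors: members whose departure turns a win into a loss.
        return [v for v in coalition if not wins(coalition - {v})]

    johnston = {v: 0.0 for v in voters}
    for r in range(1, len(voters) + 1):
        for S in map(frozenset, combinations(voters, r)):
            if wins(S):
                crit = critical(S)
                for v in crit:  # each critical defector gets 1/|Cr(S)|
                    johnston[v] += 1.0 / len(crit)

    print(johnston)  # b scores 3.95, as computed in the text; a scores higher still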


Proposition 1: The influence relation is a sub-preordering of the Johnston index JI for every simple voting game. That is, given a simple voting game G = (N;V), ∀ i, j ∈ N we have
1. i ∼ j =⇒ JIi(G) = JIj(G),
2. i ≻ j =⇒ JIi(G) > JIj(G).

The following proposition is similar to that in Lambo and Moulen (2002).

Proposition 2.1: Let the simple voting game G = (N;V) be swap robust and i, j ∈ N. Then i ∼ j ⇐⇒ JIi(G) = JIj(G).

Proof: We already know from Proposition 1 that i ∼ j =⇒ JIi(G) = JIj(G). Now suppose that the converse implication JIi(G) = JIj(G) =⇒ i ∼ j is false. Since the game is swap robust, the influence relation induces a complete preordering on N (see Remark 1 above). So if i ∼ j is false, then without loss of generality we have i ≻ j. By Proposition 1, i ≻ j =⇒ JIi(G) > JIj(G), which is a contradiction. Hence the proof. □

Pursuing the same line of reasoning, we can prove the following proposition too.

Proposition 2.2: Let the simple voting game G = (N;V) be swap robust and i, j ∈ N. Then i ≻ j ⇐⇒ JIi(G) > JIj(G).

Thus we can infer from Propositions 2.1 and 2.2 that the preorderings generated by the influence relation and the JI index on the set of voters coincide with each other if and only if the simple voting game is swap robust. We already know that the preorderings generated by the influence relation, the SS and the BC indices coincide with each other if and only if the game is swap robust (Lambo and Moulen (2002)). Thus it follows that if the game under consideration is swap robust, the Johnston index does as good a job as the established indices in ordinally ranking the voters.
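Since weighted majority games are swap robust, Proposition 2.1 can be spot-checked on the example game of the previous section: the Johnston ranking should coincide with the (raw) Banzhaf ranking. A small sketch, continuing the enumeration snippet above (`voters`, `wins`, `critical`, `combinations` and `johnston` are assumed from that snippet):

    banzhaf = {v: 0 for v in voters}  # raw Banzhaf count: number of swings per voter
    for r in range(1, len(voters) + 1):
        for S in map(frozenset, combinations(voters, r)):
            if wins(S):
                for v in critical(S):
                    banzhaf[v] += 1

    rank = lambda scores: sorted(voters, key=lambda v: -scores[v])
    print(rank(johnston) == rank(banzhaf))  # True: the two indices order the voters identically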

4 Conclusion

In this paper we have shown that the influence relation is a sub-preordering of the Johnston index for every simple voting game. That is, voters who are equally influential have the same value of the Johnston index, whereas if a voter i is more influential than voter j, the Johnston index assigns a higher value to i than to j. Furthermore, we also show that the Johnston index ranks the voters in the same order as the Shapley-Shubik and Banzhaf-Coleman indices if and only if the simple voting game is swap robust.


References

1. Banzhaf, J. F., 1965. Weighted voting doesn't work: a mathematical analysis. Rutgers Law Review 19, 317-342
2. Deegan, J., Packel, E. W., 1978. A new index of power for simple n-person games. International Journal of Game Theory 7, 113-123
3. Felsenthal, D. S., Machover, M., 1998. The Measurement of Voting Power: Theory and Practice, Problems and Paradoxes. Edward Elgar, Cheltenham
4. Isbell, J. R., 1958. A class of simple games. Duke Mathematical Journal 25, 423-439
5. Johnston, R. J., 1978. On the measurement of power: some reactions to Laver. Environment and Planning A 10, 907-914
6. Lambo, L. D., Moulen, J., 2002. Ordinal equivalence of power notions in voting games. Theory and Decision 53, 313-325
7. Laver, M., 1978. The problem of measuring power in Europe. Environment and Planning A 10, 901-906
8. Shapley, L. S., 1953. A value for n-person games. In: Kuhn, H. W., Tucker, A. W. (Eds.), Contributions to the Theory of Games II. Annals of Mathematics Studies 28. Princeton University Press, Princeton
9. Taylor, A. D., 1995. Mathematics and Politics. Springer, Berlin
10. Taylor, A. D., Zwicker, W. S., 1999. Simple Games. Princeton University Press, Princeton, NJ

Reflecting on Market Size and Entry under Oligopoly

Krishnendu Ghosh Dastidar

Krishnendu Ghosh Dastidar, Centre for Economic Studies and Planning, School of Social Sciences, Jawaharlal Nehru University, New Delhi 110067, India. e-mail: [email protected]

Abstract In a homogeneous product market with n firms we explore the following. How do the equilibrium configurations change with an increase in market size and with the entry of additional firms? Regarding the effects of an increase in market size, we prove some counterintuitive results. On the effects of entry, we reaffirm the existing results in the literature and reinterpret them. In all cases we provide illustrative examples.

1 Introduction

We consider a homogeneous product market with n (where n ≥ 1) firms with symmetric costs. We explore two sets of questions.

1. Suppose there is an increase in market size. That is, the demand curve shifts to the right. This may be due to an increase in income. How do the equilibrium output, profit etc. change with such a rightward shift? Conventional wisdom tends to suggest that, given a fixed number of firms, both output per firm and profit per firm should increase. However, we show that this may not always be true.
2. Given a fixed market size, suppose the number of firms rises. That is, there is further entry into the market. How do the equilibrium output, profit etc. change with such entry?

Surprisingly, the effects of an increase in market size (the first set of questions) are still unexplored. We will analyze them here and provide some counterintuitive results. The answer to the second set of questions is well known in the literature (see Seade, 1980 and Vives, 1999, Chapter 4). We restate them within our framework and provide new interpretations and illustrative examples. We will discuss our results both in the context of a monopoly and an oligopoly. For an oligopoly we will


stick to Cournot games where firms choose quantities simultaneously. We will assume that a regular, unique Cournot equilibrium exists.¹ Dastidar (2000) shows that with symmetric costs such a unique Cournot equilibrium is always locally stable. Therefore, we can proceed to carry out our comparative static exercises without any problems.

We show that, in a monopoly, the equilibrium output will rise with an increase in market size if and only if the marginal revenue at the initial equilibrium point goes up. It is interesting to note that a rightward shift of the demand curve, which need not be a parallel shift, does not necessarily lead to a rightward shift of the marginal revenue curve. The marginal revenue curve may swing to the left (at least for a range of output), depending on the nature of the shift in demand. A very similar result holds in a Cournot oligopoly. We illustrate our propositions with examples where output per firm decreases with a rightward shift in the demand curve.

The effect of an increase in market size on profits is interesting. We first show that monopoly profit always rises with market size. This is fairly obvious and intuitive. Surprisingly, however, profit per firm in a Cournot oligopoly may not rise when market size increases. The sufficient condition for a rise in profit per firm is that output per firm should decrease with market size. Consequently, the necessary condition for profit per firm to fall with an increase in market size is that output per firm should rise. When market size increases, there are two effects. The direct effect increases profit per firm as, other things remaining the same, price goes up (for any given level of output). However, there is an indirect strategic effect. If output per firm falls, total cost per firm falls, price goes up further (as total output shrinks) and the price effect offsets any possible revenue loss due to output shrinkage. In this case the indirect effect moves in the same direction as the direct effect and profit per firm rises. If, however, output per firm goes up, total cost per firm rises. As total output expands, then even with a rightward shift in the demand curve price may fall and this tends to pull down profits. In this case the indirect strategic effect moves in the opposite direction to that of the direct effect. There may be cases where this indirect effect dominates and profit per firm falls. We illustrate this possibility with a numerical example.

The effects of further entry (an increase in the number of firms) on output per firm depend on whether the firms' products are "strategic substitutes" or "strategic complements" (see Bulow et al. (1985) for a general definition and discussion).² We show that if products are strategic substitutes, which implies that the best response functions are downward sloping, then output per firm decreases with further entry. If, however, the products are strategic complements (where best response functions are upward sloping) then output per firm increases with further entry. We next show that total output always rises and profit per firm always falls with entry of additional firms. However, the effect of further entry on industry profit (the sum of all firms' profits) is ambiguous. We provide numerical examples to show that industry profit can go either way with an increase in the number of firms.

The plan of the paper is as follows. We first provide the model of our exercise (Section 2). Thereafter, in Section 3 we provide our results on the effects of an increase in market size. In Section 4 the results on the effects of further entry are given. Lastly, we provide some concluding remarks.

¹ Using the lattice programming method, Amir and Lambson (2000) analyze effects of entry even with non-unique Cournot equilibria.
² Strategic substitutes and complements are defined by whether a more "aggressive" strategy by firm A (i.e. greater quantity in Cournot competition) lowers or raises firm B's marginal profits.

2 The Model

Consider a homogeneous product market with n (where n ≥ 1) firms. qi ≥ 0 is the quantity of the good produced by firm i. All firms have the same cost function. The cost function C(qi) takes the following form:

C(qi) = F + z(qi) if qi > 0, and C(qi) = 0 if qi = 0,

where F ≥ 0 is the fixed cost and z(qi) is the variable cost. Let Q = ∑_{i=1}^{n} qi be the total output produced by all the firms. The inverse demand function P(Q, A) is defined for all Q ∈ (0, ∞) and A ∈ I, where I is an interval. We call A the market size. An increase in A increases P(.). This means that the inverse demand curve shifts to the right (up) if A increases. It may be noted that this shift need not be a parallel shift.

There exists a Q̄ such that P(Q, A) > 0 for all Q ∈ (0, Q̄). Note that Q̄ can be ∞ also. If Q̄ = ∞ then P(Q, A) > 0 for all Q ∈ (0, ∞). If Q̄ < ∞ then P(Q, A) = 0 for all Q ≥ Q̄. It may be noted that the inverse demand may be discontinuous at Q̄. However, since all our equilibria will be to the left of Q̄ (where the price is strictly positive), this discontinuity (if it is at all there) will not affect our results. Also, in some cases Q̄ may depend on A. For example, if P(Q, A) = A − Q, we have Q̄ = A.

In a Cournot game firms choose their quantities simultaneously. The payoff to each firm is its profit:

πi(q1, q2, ..., qi, ..., qn) = qi P(Q, A) − C(qi).

A vector of outputs (q∗1, q∗2, ..., q∗i, ..., q∗n) is said to be a Cournot equilibrium if for all i and for all qi ≠ q∗i, we have

πi(q∗1, q∗2, ..., q∗i, ..., q∗n) ≥ πi(q∗1, q∗2, ..., qi, ..., q∗n).

For all Q ∈ (0, Q̄) and for all A ∈ I we define the following:


µi(q1, q2, ..., qi, ..., qn) = ∂πi(.)/∂qi = qi ∂P(.)/∂qi + P(.) − C′(qi),

ai(q1, q2, ..., qi, ..., qn) = ∂µi(.)/∂qi = ∂²πi(.)/∂qi² = qi ∂²P(.)/∂qi² + 2 ∂P(.)/∂qi − C″(qi),

bi(q1, q2, ..., qi, ..., qn) = ∂µi(.)/∂qj (i ≠ j) = ∂²πi(.)/∂qi∂qj (i ≠ j) = qi ∂²P(.)/∂qi² + ∂P(.)/∂qi.

It may be noted that µi is the ith firm's marginal profit, ai is the rate of change of the ith firm's marginal profit w.r.t. a change in its own output, and bi is the rate of change of the ith firm's marginal profit w.r.t. a change in the jth firm's output. Since it is a homogeneous product market, it does not matter which jth firm changes its output. To the ith firm all that matters is the sum of the outputs of all other firms. We now list all the assumptions below.

1. ∂P(.)/∂qi < 0 for all Q ∈ (0, Q̄). ∂²P(.)/∂qi² is continuous for all Q ∈ (0, Q̄). C(qi) is twice continuously differentiable for all qi > 0. This assumption implies that the µi's, ai's and bi's (as stated above) are well defined in the relevant range.
2. ∂P(.)/∂A > 0 for all A ∈ I and for all Q ∈ (0, Q̄). That is, as the market size parameter A increases, the demand curve shifts to the right (up) in the relevant range. ∂²P(.)/∂qi∂A is continuous for all A ∈ I and for all Q ∈ (0, Q̄).
3. We assume that an interior, regular Cournot equilibrium exists and it is unique.
4. ai(q∗1, q∗2, ..., q∗i, ..., q∗n) < bi(q∗1, q∗2, ..., q∗i, ..., q∗n). That is, ∂P(.)/∂qi < C″(q∗i). This is clearly a very general and plausible assumption. For convex costs this is always true. All we need is that marginal costs do not fall too rapidly in equilibrium.
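As a quick illustration of these objects (our own sketch, not from the paper; it uses the linear specification P = A − Q, C(q) = q that reappears in Example 4 below), µi, ai and bi can be computed symbolically:

    import sympy as sp

    # Two-firm illustration with P(Q, A) = A - Q and C(q) = q (linear costs).
    q1, q2, A = sp.symbols('q1 q2 A', positive=True)
    Q = q1 + q2
    P = A - Q
    C = lambda q: q                  # so C'(q) = 1 and C''(q) = 0

    pi1 = q1 * P - C(q1)             # firm 1's profit
    mu1 = sp.diff(pi1, q1)           # marginal profit: A - 2*q1 - q2 - 1
    a1 = sp.diff(mu1, q1)            # own-output slope of marginal profit: -2
    b1 = sp.diff(mu1, q2)            # cross slope: -1 (strategic substitutes)

    print(mu1, a1, b1)
    # a1 < b1 < 0: assumption 4 holds, and a + (n-1)b = -3 < 0 for n = 2.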

2.1 Cournot Equilibrium

At a regular interior Cournot equilibrium (q∗1, q∗2, ..., q∗i, ..., q∗n) the following is true:

for all i, µi(q∗1, q∗2, ..., q∗i, ..., q∗n) = 0,   (1)

for all i, ai(q∗1, q∗2, ..., q∗i, ..., q∗n) < 0.   (1a)

Let total output at a Cournot equilibrium be Q∗. Since it is a homogeneous product market, what matters to firm i is the sum of the outputs of all other firms. Let Q∼i = ∑_{j≠i} qj and Q∗∼i = ∑_{j≠i} q∗j.

Reflecting on Market Size and Entry under Oligopoly

385

ri (Q∼i ) is the solution in qi of the following.

µi (qi , Q∼i ) = 0 and ai (qi , Q∼i ) < 0. Note that ri (Q∼i ) is firm i’s reaction function and it can be easily shown that ri′ (Q∼i ) = −

bi (ri (Q∼i ) , Q∼i ) . ai (ri (Q∼i ) , Q∼i )

Since ai (ri (Q∼i ) , Q∼i ) < 0, the sign of ri′ (Q∼i ) is the same as the sign of bi (ri (Q∼i ) , Q∼i ). Following Bulow et al (1985) we say that the products are strategic substitutes if bi (.) < 0 and strategic complements if bi (.) > 0. Since the Cournot equilibrium is unique, from Kolstad and Mathiesen (1987) and Gaudet and Salant (1991) it follows that n

bi >0 a − i=1 i bi

1+∑

(2)

where ai s and bi s are evaluated at equilibrium values. Unless otherwise stated, from now on all ai s and bi s will be assumed to be evaluated at equilibrium values. Dastidar (2000) shows that such a regular unique Cournot equilibrium, where firms have symmetric costs, is always locally stable. Note that since the Cournot equilibrium is unique and since all firms have the same cost function, the equilibrium must be symmetric. That is, at a Cournot equilibrium all firms produce the same output. Let q∗i = q∗ for all i. This means total output at a Cournot equilibrium is Q∗ = nq∗ . Also for all i, Q∗∼i = (n − 1)q∗ . It follows that ai (q∗ , Q∗∼i ) = a j (q∗ , Q∗∼i ) and bi (q∗ , Q∗∼i ) = b j (q∗ , Q∗∼i ) for all i = j. Let ∀ i, ai (q∗ , Q∗∼i ) = a and bi (q∗ , Q∗∼i ) = b. Since a < b (assumption 4) from (2) it follows that a + (n − 1)b < 0.

(3)

From (1 and 1a) it follows that at a Cournot equilibrium we have the following. q∗

∂ P (nq∗ , A) + P (nq∗ , A) − C′ (q∗ ) = 0, ∂ qi

and a < 0.

(4) (4a)

386

Krishnendu Ghosh Dastidar

2.2 Monopoly equilibrium We now characterize a monopoly equilibrium, if it exists3 . Let qm be the monopoly output. Then the first order and second order conditions of a monopoly equilibrium are as follows. ∂ P (qm , A) qm + P(qm , A) − C′ (qm ) = 0, (5) ∂q qm

∂ 2 P (qm , A) ∂ P(qm , A) − C′′ (qm ) < 0. +2 2 ∂q ∂q

(5a)

We denote the monopoly profit by π m = qm P (qm , A) − C (qm ). We now proceed to provide the main findings of our paper.

3 The Main Results 3.1 Effects of Changes in Market Size in a Monopoly We first explore the effects on monopoly equilibrium configuration when A changes. Proposition 1.

dqm dA

2

∂ P(.) > 0 iff qm ∂∂ qP(.) ∂ A + ∂ A > 0.

Proof: Using Eq. (5) and the implicit function theorem we get that 2

∂ P(.) qm ∂∂ qP(.) dqm ∂A + ∂A =− . 2 m m dA qm ∂ P(q ,A) + 2 ∂ P(q ,A) − C′′ (qm ) ∂ q2

(6)

∂q

Since the denominator is negative (see 5a), the sign of dqm /dA is the same as the 2

∂ P(.) sign of qm ∂∂ qP(.) ∂A + ∂A .



Comment: Common sense suggests that if the demand curve shifts to the right (up), then the monopoly output should rise. Proposition 1 shows that this may not 2 ∂ P(.) m be always true. If qm ∂∂ qP(.) ∂ A + ∂ A > 0, then we have the ‘normal’ case where q rises with A. Note that 2 qm ∂∂ qP(.) ∂A

∂ P(.) ∂A

> 0. However, if

∂ 2 P(.) ∂ q∂ A

is negative and large enough

∂ P(.) ∂A

+ may become negative and monopoly output (qm ) will decline then with an increase in A. It may be noted that the marginal revenue for the monopolist 2 ∂ P(.) ∂ P(.) ∂ MR ∂ MR m ∂ P(.) is MR = q ∂ ∂P(.) q + ∂ A and q ∂ q∂ A + ∂ A = ∂ A . If ∂ A < 0, then even though increase in market size A shifts the demand curve to the right, the marginal revenue 3

In some cases the monopoly equilibrium may not exist although Cournot equilibrium exists. For example, if P (Q, A) = A/Q and Z (qi ) = cqi then there is no monopoly equilibrium. However, there exists a Cournot oligopoly equilibrium in this case.

Reflecting on Market Size and Entry under Oligopoly

387

Fig. 1 Demand and MR curves

curve shifts to the left (at least at around qm ) and this leads to fall in qm . We now provide an example to illustrate this ‘perverse’ case.

Example 1: Let P(Q, A) = A²e^{−AQ}, Q̄ = 4/3, I = [1, 3/2]. The cost function is as follows: C(q) = 0 if q = 0, and C(q) = F if q > 0, where F < 1/e. That is, there is only a fixed cost and there are no variable costs. It can easily be verified that qm = 1/A and monopoly profit is πm = A/e − F > 0. Here the monopoly output is a strictly decreasing function of A. In this example the marginal revenue is as follows: MR = A²e^{−AQ}(1 − AQ). In Figure 1 we plot the demand curves and the MR curves for two values of A (A = 1 and A = 1.2). The thick lines depict the demand functions and the dashed lines give the MR curves. The figure shows that even though the demand shifts to the right with an increase in A, the MR curve swings to the left (at least for a range of output) and this causes a decline in monopoly output. Since marginal cost is zero, the monopoly equilibrium occurs where MR is zero. When A = 1, the MR curve intersects the horizontal axis at q = 1, but when A = 1.2 the MR curve intersects the horizontal axis at q = 0.833.

We now state the next result.

Proposition 2. Monopoly profit unambiguously rises with market size A.

Proof: Note that πm = qm P(qm, A) − C(qm). To see the effect of an increase in A on the monopoly profit we use the envelope theorem and get

dπm/dA = qm ∂P(.)/∂A > 0.   (7) □

Comment: The intuition behind the above result is straightforward. Suppose the demand curve shifts to the right. At the initial monopoly output qm the price will rise, and this means that even if the monopolist keeps output unchanged at the initial level, profit will increase (as revenue increases and costs do not). Therefore, when the monopolist moves to a new equilibrium output, the profit must strictly increase.
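A minimal numerical check of Example 1 (our own sketch, assuming the functional forms above): revenue is q·A²e^{−Aq}, so the profit-maximizing output can be found on a grid, and it indeed falls from 1 to about 0.833 as A rises from 1 to 1.2, while profit A/e − F rises.

    import math

    def monopoly_output(A, n_grid=200000, q_max=1.5):
        # Revenue is q * P(q, A) with P = A^2 exp(-A q); costs are a pure fixed
        # cost, so maximizing profit is the same as maximizing revenue.
        best_q, best_rev = 0.0, -1.0
        for k in range(1, n_grid + 1):
            q = q_max * k / n_grid
            rev = q * A**2 * math.exp(-A * q)
            if rev > best_rev:
                best_q, best_rev = q, rev
        return best_q

    for A in (1.0, 1.2):
        print(A, monopoly_output(A), A / math.e)
        # output q^m = 1/A falls with A, while gross profit A/e rises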

3.2 Effects of Changes in Market Size in an Oligopoly

We now proceed to discuss the effects of changes in A on the Cournot equilibrium configuration.

Proposition 3. dq∗/dA > 0 iff q∗ ∂²P(.)/∂qi∂A + ∂P(.)/∂A > 0.

Proof: Using Eq. (4) and the implicit function theorem we get that

dq∗/dA = − (∂µi(.)/∂A)/(∂µi/∂qi) = − (q∗ ∂²P(.)/∂qi∂A + ∂P(.)/∂A)/a.   (8)

Since a < 0, the sign of dq∗/dA is the same as the sign of q∗ ∂²P(.)/∂qi∂A + ∂P(.)/∂A. □

Comment: This result is extremely similar to the monopoly case. Also note that, given a fixed number of firms n, total output in equilibrium (Q∗) moves in the same direction as q∗. An increase in A will lead to a rise in q∗ (the normal expected case) if the marginal revenue rises; otherwise it will lead to a fall in q∗ (the perverse case). As in the monopoly case, we now provide an example to illustrate the perverse case where q∗ falls with A.

Example 2. Let P(Q, A) = A³e^{−AQ}, Q̄ = 3/5 and I = [3, 5]. Costs are given by C(qi) = qi. There are two firms. Note that ∂P(.)/∂A > 0 for all Q ∈ (0, Q̄) and for all A ∈ [3, 5]. The first order condition at a symmetric Cournot duopoly is as follows:

A³e^{−2Aq∗}(1 − Aq∗) = 1.   (9)

Using (9) it is easy to show that q∗ decreases as A increases. Note that the solution to the above is as follows:

q∗ = (2 − LambertW((2/A³)e²)) / (2A),

where LambertW(x) is s.t. LambertW(x) e^{LambertW(x)} = x. In Figure 2 we plot q∗ as a function of A over the range [3, 5]. To illustrate further we report the following values: when A = 3, q∗ = 0.27069; when A = 4, q∗ = 0.22615; and when A = 5, q∗ = 0.18937.

Fig. 2 q∗ as a function of A
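The reported values are easy to reproduce without the Lambert W function: a bisection on the first order condition (9) suffices. A sketch (tolerances are our arbitrary choices):

    import math

    def duopoly_output(A, tol=1e-12):
        # Solve A^3 * exp(-2*A*q) * (1 - A*q) = 1 for q in (0, 1/A) by bisection.
        # The left side is decreasing in q on (0, 1/A): it starts at A^3 > 1 for
        # A >= 3 and falls to 0 at q = 1/A, so the root exists and is unique there.
        f = lambda q: A**3 * math.exp(-2 * A * q) * (1 - A * q) - 1.0
        lo, hi = 1e-9, 1.0 / A
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
        return 0.5 * (lo + hi)

    for A in (3, 4, 5):
        print(A, round(duopoly_output(A), 5))  # 0.27069, 0.22615, 0.18937: q* falls as A rises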

3.2.1 Effects on Profit per Firm and Total Industry Profits

Proposition 4. If dq∗/dA < 0 then dπ∗/dA > 0. Hence, dπ∗/dA < 0 only if dq∗/dA > 0.

Proof: Note that at a Cournot equilibrium the profit per firm is given by the following:

π∗ = q∗P(nq∗, A) − C(q∗).   (10)

Differentiating the above we get that

dπ∗/dA = P(nq∗, A) dq∗/dA + q∗ [n ∂P(.)/∂qi · dq∗/dA + ∂P(.)/∂A] − C′(.) dq∗/dA
       = dq∗/dA [P(nq∗, A) + nq∗ ∂P(.)/∂qi − C′(.)] + q∗ ∂P(.)/∂A.   (11)

Note that


P(nq∗, A) + nq∗ ∂P(.)/∂qi − C′(.)
  = P(nq∗, A) + q∗ ∂P(.)/∂qi − C′(.) + (n − 1)q∗ ∂P(.)/∂qi
  = (n − 1)q∗ ∂P(.)/∂qi < 0, as P(nq∗, A) + q∗ ∂P(.)/∂qi − C′(.) = 0 (see (4)).

Using the above in (11) we get that

dπ∗/dA = (n − 1)q∗ ∂P(.)/∂qi · dq∗/dA + q∗ ∂P(.)/∂A.   (12)

From (12) we get that if dq∗/dA < 0 then dπ∗/dA > 0 (since ∂P(.)/∂A > 0 and ∂P(.)/∂qi < 0). □

Comment: The above result stands in sharp contrast to the result on monopoly. In a monopoly, an increase in market size always leads to an increase in profits (Proposition 2). Common sense suggests that this should be the case in a Cournot oligopoly also. However, Proposition 4 shows that this may not always be true. It is interesting to note that the sufficient condition for an increase in π∗ is that q∗ should decrease with A (the perverse case for output change). Hence, a necessary condition for π∗ to decrease with A is that q∗ should increase with A (the normal case for output change). The intuition behind this is as follows. From Eq. (12) we can see that an increase in A has two effects. The direct effect (captured by the term q∗ ∂P(.)/∂A) increases π∗. However, there is also an indirect strategic effect (captured by the term (n − 1)q∗ ∂P(.)/∂qi · dq∗/dA). If dq∗/dA > 0, then the strategic effect is negative and this may outweigh the direct positive effect. If q∗ rises then price will fall, and this fall may be substantial enough to reverse the positive effect of an increase in A on profits. We produce an example below to illustrate the counterintuitive case.

Example 3. Let P(Q, A) = AQ^{−1/2}, Q̄ = ∞ and I = [7, 8]. There are two firms. The cost function is of the following form:

C(qi) = 0 if qi = 0,
C(qi) = F + (91/100)(3000)^{−3/10} qi − (21/200)(3000)^{−13/10} qi² if 0 < qi < 3000,
C(qi) = qi^{7/10} − 40 if qi ≥ 3000,

where F = (39/200)(3000)^{7/10} − 40. It may be noted that C(.) is twice continuously differentiable for all qi > 0. In this example the duopoly equilibrium, which is unique and stable, is the following:

where F =

Reflecting on Market Size and Entry under Oligopoly ∗

q =



15A √ 14 2

5

391 5

7



and π = 40 −

A 2 (15) 2

7

7

.

(14) 2 (2) 4

Clearly π ∗ is strictly positive and decreasing in A for all A ∈ [7, 8]. With the same demand and cost function as in example 3 the monopoly equilibrium will be as follows.  5 7 5 A2 5 2 5A m m q = and π = 40 + 2 7 . 7 72 As expected, π m is strictly increasing in A.

4 Effects of Entry In this section we will analyze the consequences of further entry. The basic results are well known (see Seade, 1980 and Vives, 1999 for a general discussion). However, we will bring them all together within our unified framework, provide new interpretations and also give illustrative examples. We proceed to state our first two results in this context. Proposition 5. Output per firm q∗ decreases with further entry iff b < 0. Proof: From (4) we get q∗

∂ P (nq∗ , A) + P (nq∗ , A) − C′ (q∗ ) = 0. ∂ qi

Using the implicit function theorem and rearranging terms we have dq∗ dn

=

  ∂ 2 P(nq∗ ,A) ∂ P(nq∗ ,A) q∗ q∗ + ∂q ∂ q2i i   − 2 ∗ ∗ 2 ∗ ∂ P(nq ,A) ∂ P(nq∗ ,A) ′′ (q∗ )+(n−1) q∗ ∂ P(nq ,A) + ∂ P(nq ,A) q∗ −C +2 2 2 ∂q ∂q ∂ qi

=

q∗ b − a+(n−1)b .

i

∂ qi

i

(13)

Since a + (n − 1)b < 0 (see Eq. (3)) we get from (13) dq∗ < 0 if and only if b < 0 . dn  The next result deals with total output Q∗ . Proposition 6. Total output Q∗ always rises with further entry. Proof: Note that Q∗ = nq∗ . Therefore,

392

Krishnendu Ghosh Dastidar

d (nq∗ ) dq∗ =n + q∗ . dn dn Using (13) to substitute

dq∗ dn

in the above equation and rearranging terms we get d (nq∗ ) q∗ (a − b) = . dn a + (n − 1)b

Since a < b (assumption 4) and a + (n − 1)b < 0 (see 3) we get that

(14) d(nq∗ ) dn

> 0. 

Comment: Note that for an individual firm what matters is the sum of all other firms’ outputs. In equilibrium this sum is (n − 1)q∗ . Using (13) and (14) we get d (n − 1)q∗ q∗ a = > 0, since a < 0 and a + (n − 1)b < 0. dn a + (n − 1)b

(15)

Also note that b < 0 (> 0) implies that the products are strategic substitutes (strategic complements) and the reaction functions are downward (upward) sloping. This implies that if all other firms together expand their output, an individual firm will find it optimal to decrease (increase) its output. Since total output of all other firms’ (including the new entrant) rises with entry (from (15)), an individual firm will contract its output if and only if b < 0 4 . We provide a couple of examples to illustrate Proposition 5. Example 4. Let P (.) = A − Q, Q¯ = A and I = [2, 6]. There are n ≥ 1 firms. The cost function is C (q) = q and there are no fixed costs. Then the Cournot equilibrium output is A−c q∗ = . n+1 Clearly q∗ is strictly falling in n. In this example the best response function is downward sloping. Example 5. Let P (.) = A/Q2.9 , Q¯ = ∞ and I = [1, 2]. There are n ≥ 3 firms. The cost function is C (q) = q and there are no fixed costs. Here the Cournot equilibrium output is  1   2.9 2.9 A q∗ = 1 − . 2.9 n n

It may be noted that when n increases from 3 to 4 the equilibrium q∗ rises. For 1 example , when n = 3, we have q∗ = 0.103 16A 2.9 and when n = 4 the corresponding 1 q∗ = 0.160 18A 2.9 . We now come to our last set of results. Proposition 7. Profit per firm π ∗ always decreases with further entry. 4

When output per firm goes down with further entry, it is often termed as the “business stealing effect” (see Mankiw and Whinston (1986)).

Reflecting on Market Size and Entry under Oligopoly

393

Proof: π∗ = q∗P(nq∗, A) − C(q∗). Therefore

dπ∗/dn = dq∗/dn [nq∗ ∂P(nq∗, A)/∂qi + P(nq∗, A) − C′(q∗)] + q∗² ∂P(nq∗, A)/∂qi
       = (n − 1)q∗ ∂P(nq∗, A)/∂qi · dq∗/dn + q∗² ∂P(nq∗, A)/∂qi,   (16)

since q∗ ∂P(nq∗, A)/∂qi + P(nq∗, A) − C′(q∗) = 0 (from (4)). Substituting dq∗/dn from (13) into (16) and rearranging terms we get

dπ∗/dn = q∗² ∂P(nq∗, A)/∂qi · a/(a + (n − 1)b).   (17)

Since ∂P(nq∗, A)/∂qi, a and a + (n − 1)b are all negative, we get that dπ∗/dn < 0. □

Comment: The above result is intuitively obvious. Since there is more competition, in equilibrium the payoff to each firm goes down. Given a market size, as more firms enter, total output will expand and price will fall, and this leads to a fall in profit per firm. While the effect of entry on individual profit is unambiguous, it is not the case with total industry profits. Note that industry profit is nπ∗. Therefore,

d(nπ∗)/dn = π∗ + n dπ∗/dn.   (18)

The first term of (18), π∗, is always non-negative, and the second term, n dπ∗/dn, is strictly negative. Hence, we cannot sign d(nπ∗)/dn unambiguously. We provide two examples to show that industry profit can go either way with further entry.

Example 6. Consider an n-firm oligopoly. Let P(Q, A) = A − Q, Q̄ = A, and I = [100, 200]. C(qi) = qi for all qi. There are no fixed costs. Since we will consider the effects of entry with a given market size, let us fix the value of A to be 100. The Cournot equilibrium outcome is as follows:

q∗ = 99/(n + 1) and π∗ = (99/(n + 1))², and industry profit nπ∗ = n (99/(n + 1))².

Since n ≥ 1, the industry profit nπ∗ is strictly decreasing in n. Seade (1980) calls it the 'normal' case where total profit comes down with further entry. We now produce an example to show that industry profit can rise with further entry.

Example 7. We take the same demand curve as in the last example and fix A to be 100. That is, P(.) = 100 − Q. The costs are different and are given by C(qi) = qi³. We give below the Cournot equilibrium outcomes with n = 2 and n = 3. When n = 2, we get the following.


q∗ = 5.2951 and π∗ = 324.97, and industry profit 2π∗ = 649.94. When n = 3 we have the following: q∗ = 5.1452 and π∗ = 298.89, and industry profit 3π∗ = 896.67. In this case, as n rises from 2 to 3, industry profit goes up.
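Both examples are easy to reproduce from the symmetric first order condition 100 − Q − q − C′(q) = 0. A small sketch (the quadratic-formula step for Example 7 is ours):

    import math

    def example6(n):
        # P = 100 - Q, C(q) = q: q* = 99/(n+1) and profit per firm q*^2.
        q = 99.0 / (n + 1)
        return q, q * q, n * q * q

    def example7(n):
        # P = 100 - Q, C(q) = q^3. Symmetric FOC: 100 - (n+1)q - 3q^2 = 0,
        # so q* is the positive root of 3q^2 + (n+1)q - 100 = 0.
        q = (-(n + 1) + math.sqrt((n + 1) ** 2 + 1200)) / 6.0
        profit = q * (100 - n * q) - q ** 3
        return q, profit, n * profit

    for n in (2, 3):
        print("ex6", n, example6(n))  # industry profit falls with n
        print("ex7", n, example7(n))  # industry profit rises from n = 2 to n = 3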

5 Conclusion

In this paper we explored how the equilibrium configurations in a homogeneous product market with n firms change with increases in market size and with further entry. We proved that the conventional wisdom regarding the effects of an increase in market size may not always hold. On the question of the effects of additional entry we reaffirmed the existing results and tried to reinterpret them with illustrative examples.

Acknowledgements I am indebted to Andrew Daughety, Dave Furth, Claudio Mezzetti and Diganta Mukherjee for a set of excellent comments. The usual disclaimer applies.

References

1. Amir, R. and V.E. Lambson (2000) "On the effects of entry in Cournot markets", Review of Economic Studies 67, 235-254
2. Bulow, J., J. Geanakoplos and P. Klemperer (1985) "Multimarket oligopoly: strategic substitutes and complements", Journal of Political Economy 93, 488-511
3. Dastidar, K.G. (2000) "Is a unique Cournot equilibrium locally stable?", Games and Economic Behaviour 32, 206-218
4. Gaudet, G. and S.W. Salant (1991) "Uniqueness of Cournot equilibrium: new results from old methods", Review of Economic Studies 58, 399-404
5. Kolstad, C.D. and L. Mathiesen (1987) "Necessary and sufficient conditions for uniqueness of a Cournot equilibrium", Review of Economic Studies 54, 681-690
6. Mankiw, G.N. and M.D. Whinston (1986) "Free entry and social inefficiency", Rand Journal of Economics 17, 48-58
7. Seade, J. (1980) "On the effects of entry", Econometrica 48, 479-489
8. Vives, X. (1999) Oligopoly Pricing: Old Ideas and New Tools, MIT Press, Cambridge, MA
