The Microeconomics of Public Policy Analysis

This book shows, from start to finish, how microeconomics can and should be used in the analysis of public policy problems.


THE MICROECONOMICS OF PUBLIC POLICY ANALYSIS

LEE S. FRIEDMAN

PRINCETON UNIVERSITY PRESS
PRINCETON AND OXFORD

Copyright © 2002 by Princeton University Press
Published by Princeton University Press, 41 William Street, Princeton, New Jersey 08540
In the United Kingdom: Princeton University Press, 3 Market Place, Woodstock, Oxfordshire OX20 1SY
All Rights Reserved

LIBRARY OF CONGRESS CATALOGING-IN-PUBLICATION DATA

Friedman, Lee S.
  The microeconomics of public policy analysis / Lee S. Friedman.
    p. cm.
  Includes bibliographical references and index.
  ISBN 0-691-08934-5
  1. Policy sciences. 2. Microeconomics. I. Title.
H97 .F75 2002
338.5—dc21    2001051156

British Library Cataloging-in-Publication Data is available

This book has been composed in Adobe Times Roman and Futura by Princeton Editorial Associates, Inc., Scottsdale, Arizona

Printed on acid-free paper. ∞

www.pup.princeton.edu

Printed in the United States of America

10 9 8 7 6 5 4 3 2 1

TO JANET, ALEXANDER, AND JACOB, WHO BRING JOY TO MY LIFE

CONTENTS

((S) = Supplementary Section; (O) = Optional Section with Calculus)

ALTERNATIVE COURSE DESIGNS
PREFACE
ACKNOWLEDGMENTS

PART ONE. MICROECONOMIC MODELS FOR PUBLIC POLICY ANALYSIS

Chapter 1. Introduction to Microeconomic Policy Analysis
  Policy Analysis and Resource Allocation
  The Diverse Economic Activities of Governments
  Policy-Making and the Roles of Microeconomic Policy Analysis
  Organization of the Book
  Conclusion

Chapter 2. An Introduction to Modeling: Demand, Supply, and Benefit-Cost Reasoning
  Modeling: A Basic Tool of Microeconomic Policy Analysis
  Demand, Supply, and Benefit-Cost Reasoning
    Monday Morning, 6:45 A.M.
    Monday Morning, 9:00 A.M.
  Summary
  Discussion Questions

Chapter 3. Utility Maximization, Efficiency, and Equity
  A Model of Individual Resource Allocation
  Efficiency
    The General Concept
    Efficiency with an Individualistic Interpretation
    Efficiency in a Model of an Exchange Economy
    A Geometric Representation of the Model
    Relative Efficiency
  Equity
    Equality of Outcome Is One Concept of Equity
    Equality of Opportunity Is Another Concept of Equity
    Integrating Equity-Efficiency Evaluation in a Social Welfare Function (S)
  Summary
  Exercises
  Appendix: Calculus Models of Consumer Exchange (O)

PART TWO. USING MODELS OF INDIVIDUAL CHOICE-MAKING IN POLICY ANALYSIS

Chapter 4. The Specification of Individual Choice Models for the Analysis of Welfare Programs
  Standard Argument: In-Kind Welfare Transfers Are Inefficient
  Responses to Income and Price Changes
    Response to Income Changes
    Response to Changes in a Good's Price
    Response to Price Changes of Related Goods
  Choice Restrictions Imposed by Policy
    Food Stamp Choice Restriction: The Maximum Allotment
    Food Stamp Choice Restriction: The Resale Prohibition
    Public Housing Choice Restrictions (S)
  Income Maintenance and Work Efforts
    The Labor-Leisure Choice
    Work Disincentives of Current Welfare Programs
    The Earned Income Tax Credit (EITC)
  Interdependent Preference Arguments: In-Kind Transfers May Be Efficient (S)
  Summary
  Exercises
  Appendix: The Mathematics of Income and Substitution Effects (O)

Chapter 5. The Analysis of Equity Standards: An Intergovernmental Grant Application
  Equity Objectives
  Intergovernmental Grants
  Design Features of a Grant Program
    Income Effects and Nonmatching Grants
    Price Effects and Matching Grants
    The Role of Choice Restrictions
    Alternative Specifications of Recipient Choice-Making
  Equity in School Finance
    The Equity Defects Identified in Serrano
    The Design of Wealth-Neutral Systems
    Other Issues of School Finance Equity
  Summary
  Exercises
  Appendix: An Exercise in the Use of a Social Welfare Function as an Aid in Evaluating School Finance Policies (O)

Chapter 6. The Compensation Principle of Benefit-Cost Reasoning: Benefit Measures and Market Demands
  The Compensation Principle of Relative Efficiency
    The Purpose of a Relative Efficiency Standard
    The Hicks-Kaldor Compensation Principle
    Controversy over the Use of the Compensation Principle
  Measuring Benefits and Costs: Market Statistics and Consumer Surplus
  An Illustrative Application: Model Specification for Consumer Protection Legislation (S)
  Problems with Measuring Individual Benefit (S)
    Three Measures of Individual Welfare Change
    Empirical Evidence: Large Differences among the Measures
  Summary
  Exercises
  Appendix: Duality, the Cobb-Douglas Expenditure Function, and Measures of Individual Welfare (O)

Chapter 7. Uncertainty and Public Policy
  Expected Value and Expected Utility
  Risk Control and Risk-Shifting Mechanisms
    Risk-Pooling and Risk-Spreading
    Policy Aspects of Risk-Shifting and Risk Control
  Alternative Models of Individual Behavior under Uncertainty
    The Slumlord's Dilemma and Strategic Behavior
    Bounded Rationality
  Moral Hazard and Medical Care Insurance
  Information Asymmetry and Hidden Action: The Savings and Loan Crisis of the 1980s and Involuntary Unemployment
  Summary
  Exercises
  Appendix: Evaluating the Costs of Uncertainty

Chapter 8. Allocation over Time and Indexation
  Intertemporal Allocation and Capital Markets
    Individual Consumption Choice with Savings Opportunities
    Individual Consumption Choice with Borrowing and Savings Opportunities
    Individual Investment and Consumption Choices
    The Demand and Supply of Resources for Investment
    Individual Choices in Multiperiod Models
    Choosing a Discount Rate
  Uncertain Future Prices and Index Construction
  Summary
  Exercises
  Appendix: Discounting over Continuous Intervals (O)

PART THREE. POLICY ASPECTS OF PRODUCTION AND SUPPLY DECISIONS

Chapter 9. The Cost Side of Policy Analysis: Technical Limits, Productive Possibilities, and Cost Concepts
  Technical Possibilities and the Production Function
    The Production Function Is a Summary of Technological Possibilities
    Efficiency, Not Productivity, Is the Objective
    Characterizing Different Production Functions
  Costs
    Social Opportunity Cost and Benefit-Cost Analysis
    Accounting Cost and Private Opportunity Cost
    An Application in a Benefit-Cost Analysis
    Cost-Output Relations
    Joint Costs and Peak-Load Pricing (S)
  Summary
  Exercises
  Appendix: Duality—Some Mathematical Relations between Production and Cost Functions (O)

Chapter 10. Private Profit-Making Organizations: Objectives, Capabilities, and Policy Implications
  The Concept of a Firm
  The Private Profit-Maximizing Firm
    Profit Maximization Requires that Marginal Revenue Equal Marginal Cost
    The Profit-Maximizing Monopolist
    Types of Monopolistic Price Discrimination
    Normative Consequences of Price Discrimination
    The Robinson-Patman Act and the Sherman Antitrust Act
    Predicting Price Discrimination
  Alternative Models of Organizational Objectives and Capabilities (S)
    Objectives Other than Profit Maximization
    Limited Maximization Capabilities
  Summary
  Exercises

Chapter 11. Public and Nonprofit Organizations: Objectives, Capabilities, and Policy Implications
  Nonprofit Organizations: Models of Hospital Resource Allocation
  Public Bureaus and Enterprises
  Empirical Prediction of Public Enterprise Behavior: The Pricing Decisions of BART
  A Normative Model of Public Enterprise Pricing: The Efficient Fare Structure
  Summary
  Exercises
  Appendix: The Mathematics of Ramsey Pricing (O)

PART FOUR. COMPETITIVE MARKETS AND PUBLIC POLICY INTERVENTIONS

Chapter 12. Efficiency, Distribution, and General Competitive Analysis: Consequences of Taxation
  Economic Efficiency and General Competitive Equilibrium
    Efficiency in an Economy with Production
    Competitive Equilibrium in One Industry
    The Twin Theorems Relating Competition and Efficiency
  General Competitive Analysis and Efficiency
    Partial Equilibrium Analysis: The Excise Tax
    General Equilibrium Analysis: Perfect Competition and the Excise Tax
    Lump-Sum Taxes Are Efficient and Most Real Taxes Are Inefficient
  Summary
  Exercises
  Appendix: Derivation of the Competitive General Equilibrium Model

Chapter 13. The Control of Prices to Achieve Equity in Specific Markets
  Price Support for Agriculture
  Apartment Rent Control
    Preview: Economic Justice and Economic Rent
    Dissatisfaction with Market Pricing during Temporary Shortage of a Necessity
    A Standard Explanation of the Inefficiency of Rent Control
    Rent Control as the Control of Long-Run Economic Rent
    The Relation between Rent Control and Capitalized Property Values
    The Supply Responses to Rent Control
    The Effects of Rent Control on Apartment Exchange
    Recent Issues of Rent Control
  A Windfall Profits Tax as Rent Control
  Summary
  Exercises

Chapter 14. Distributional Control with Rations and Vouchers
  Ration Coupons
  Vouchers and Ration-Vouchers
  Rationing Military Service
    The Utility Value of Work
    Conscription versus the All-Volunteer Military
    Income Effects of Rationing Methods (S)
  Rationing Gasoline during a Shortage
  Other Rationing Policies
  Summary
  Exercises

PART FIVE. SOURCES OF MARKET FAILURE AND INSTITUTIONAL CHOICE

Chapter 15. Allocative Difficulties in Markets and Governments
  Market Failures
    Limited Competition Owing to Scale and Scope Economies over the Relevant Range of Demand
    Public Goods
    Externalities
    Imperfect Information
    Second-Best Failures
  Government Failures
    Direct Democracy
    Representative Democracy
    Public Producers
  Summary
  Exercises

Chapter 16. The Problem of Public Goods
  The Efficient Level of a Public Good
  The Problem of Demand Revelation
  The Housing Survey Example
  Mechanisms of Demand Revelation
  The Public Television Example
  Summary
  Exercises

Chapter 17. Externalities and Policies to Internalize Them
  A Standard Critique of Air Pollution Control Efforts
  The Coase Theorem
  Efficient Organizational Design and the Degree of Centralized Decision-Making
  Reconsidering Methods for the Control of Air Pollution
  Summary
  Exercise

Chapter 18. Industry Regulation
  Oligopoly and Natural Monopoly
  Rate-of-Return Regulation
  Franchise Bidding: The Case of Cable Television
  Incentive Regulation
  Summary
  Exercise

Chapter 19. Policy Problems of Allocating Resources over Time
  Perfectly Competitive and Actual Capital Markets
  The Social Rate of Discount
  Education as a Capital Investment
    The Market Failures in Financing Higher Education Investments
    Income-Contingent Loan Plans for Financing Higher Education
  The Allocation of Natural Resources
    Renewable Resources: Tree Models
    A Note on Investment Time and Interest Rates: Switching and Reswitching
    The Allocation of Exhaustible Resources Such as Oil
  Summary
  Exercises

Chapter 20. Imperfect Information and Institutional Choices
  Asymmetric Information
    Adverse Selection, Market Signals, and Discrimination in the Labor Market
    Moral Hazard and Contracting
    The Nonprofit Supplier as a Response to Moral Hazard
  Nonprofit Organizations and the Delivery of Day-Care Services
  The Value of Trust
  Summary
  Exercises

AUTHOR INDEX

SUBJECT INDEX

ALTERNATIVE COURSE DESIGNS

The Microeconomics of Public Policy Analysis is used by people with diverse backgrounds in economics and diverse purposes for studying this subject. The book contains enough material to meet these diverse needs, but it is not expected that any single course will cover everything in the book. In many chapters, instructors teaching students with no economics prerequisite will spend more time on the basic theory and less on the detailed policy examples. Instructors and their students will also have different preferences for which policy areas or aspects of microeconomics they wish to emphasize. To help make these choices, the table below shows a suggested number of lectures to correspond to each chapter. Where a range such as 1–2 is shown, it suggests that the instructor should choose between partial and fuller coverage. Most instructors will choose a group that totals twenty-six to twenty-eight lectures for a standard course of thirty class meetings:

Chapter    Lectures
1          ½
2          ½
3          2
4          2
5          2
6          2
7          2
8          1–2
9          1–2
10         1–2
11         1–2
12         2
13         1–2
14         1–2
15         ½–1
16         1*
17         1–2*
18         1–2*
19         1*
20         1*
Total      24½–33

*Chapter 15 provides an overview of the market (and government) failures that are the subjects of Chapters 16–20. Some instructors, after covering Chapter 15, may wish to pick only two to three of the latter chapters for intensive coverage.

PREFACE

THIS BOOK DEVELOPS AND BUILDS systematically upon intermediate-level microeconomic theory for a special purpose. That purpose is to develop the skills of microeconomic modeling and the principles of welfare economics used in policy analysis. No prerequisite is necessary, although the book can easily be used at a more advanced level by having one (more on this below).

A typical chapter begins by developing a few principles from intermediate-level theory and using them to construct and apply models, in some depth, to one or more policy issues. The issues selected are diverse, and they cut across the traditional applied fields of public finance, urban economics, industrial organization, and labor economics. Overall, the book illustrates how microeconomic models are used as tools to design, predict the effects of, and evaluate public policies.

Experts in many specific subject areas, such as welfare, housing, health, and environmental economics, will be, I think, pleasantly surprised by the coverage that their areas have received. As experts they will find most of these models simpler than the ones they construct or study in professional journals. Most subject area experts that I know find it easy to give examples of every conceivable reason for policy intervention, or for studying the harm from an intervention, right within their own sectors. Education economists, for example, have to account for monopoly schools, externalities in the classroom, the public good of having an educated citizenry, information asymmetries that make contracting for educational services difficult for all parties, and equity requirements for financing systems. This book, however, generally offers models with only one failure (of the market or of government) studied at a time. I believe that is the best way to explicate that type of failure carefully and to learn how to analyze alternative ways of responding to it. I hope that these experts will agree.

I have used this book as the primary text in one-semester courses offered both to undergraduates (primarily those studying economics or public policy) and to graduate students (from diverse disciplinary and professional fields). However, all material in the book cannot be adequately covered in one semester. I provide the short section "Alternative Course Designs" to suggest different ways of using the book to construct a standard semester-length course. Most of the time I use this book without intermediate microeconomics as a prerequisite. When I do so, I do not attempt to cover all of the applications, and I skip most of the supplementary sections denoted with the superscript "S." Sometimes, however, I offer a more advanced course in which I spend little class time on basic microeconomic theory and focus almost exclusively on the applications.

The chapters of The Microeconomics of Public Policy Analysis are sequenced to draw upon microeconomic principles in the same order that they are developed in most intermediate theory courses: individual decision-making, supplier decision-making, markets as organizations, and market and governmental failures. Exercises are included at the end of chapters, and I recommend use of them as a means to develop, practice, and test analytic skills.

The book has no special mathematical prerequisite, although it often has empirical illustrations of the models discussed. The illustrations have been kept numerically simple. There are optional sections and appendices in which differential calculus is used freely, and some of the exercises require calculus; all of these are denoted with the superscript "O." Other than in these instances, calculus is relegated to the footnotes. Readers who do or will use calculus as part of their professional training should benefit from these parts of the book.

I hope, in the end, that readers will appreciate the strengths while still recognizing the weaknesses of existing models and methods. I hope that they will be impressed with the breadth and depth of the contributions of microeconomics as an analytic tool for public policy. I also hope that exposing areas in which improvement in microeconomic models is needed will help to spur some of those improvements. I refer particularly to two areas important for a large number of policy issues: (1) the growing body of evidence that, in many situations, actual individual behavior departs substantially from standard utility maximization; and (2) the analytic difficulty of assessing alternative strategies (or institutions) for responding to market failures when all are imperfect. Although the book raises these challenges in considering models of particular issues, it does not dwell upon what has not yet been done or applied. Rather, the emphasis throughout is on successful "role" models: those that illustrate how microeconomics contributes constructively and importantly to policy analysis.

ACKNOWLEDGMENTS

THOSE INDIVIDUALS WHO have been graduate students in the Richard and Rhoda Goldman School of Public Policy at the University of California, Berkeley, have been a continuing source of inspiration for me. I doubt that my instruction will ever catch up to the level of their postgraduation accomplishments, but I do enjoy trying, and this book is a big part of my effort. I am grateful for their support and for their many constructive suggestions during this undertaking.

Colleagues at Berkeley, especially Eugene Bardach, Steve Raphael, and Eugene Smolensky, have made important substantive contributions, and I am grateful for their advice, wisdom, encouragement, and patience. Elizabeth Graddy of the University of Southern California, David Howell at the New School, Samuel Myers, Jr., at the University of Minnesota, and several anonymous referees offered invaluable, detailed feedback on draft chapters, and they have helped to improve this book enormously. Brave users of the evolving draft manuscript, including Joseph Cordes of George Washington University and Joseph DeSalvo at the University of South Florida, also alerted me to numerous opportunities for improvement.

My editor at Princeton University Press, Peter Dougherty, has done a wonderfully professional job shepherding this manuscript from review through design, copyediting, and production. I am grateful to him for his sound advice and encouragement, and for assembling a first-rate staff to produce this book. I am also grateful to the staff at the Goldman School for their careful and patient assistance with manuscript preparation: Merle Hancock, Kristine Kurovsky, and Theresa Wong.

Despite all of this able assistance, I accept full responsibility for any flaws in the content of this book. It has been a joy to craft it, and I hope that it will help to advance the field where microeconomics and public policy intersect.

PART ONE

MICROECONOMIC MODELS FOR PUBLIC POLICY ANALYSIS

CHAPTER ONE

INTRODUCTION TO MICROECONOMIC POLICY ANALYSIS

Policy Analysis and Resource Allocation

Policy analysis involves the application of social science to matters of public policy. The specific tasks of an analysis depend importantly on which aspects of policy are to be understood, who wants to know, and how quickly the analysis is needed. For many, the excitement of this discipline is in its often turbulent political application: the undertaking and use of policy analysis as a part of actual government decision-making. In that application the analysis is undertaken in order to advise; it is for the use of specific decision makers whose actions will depend on the analytic results. For others, the excitement is in rising to a more purely intellectual challenge: to create a more general understanding about how public policy is, and ought to be, made. The latter effort represents the more academic side of policy analysis. However, the same basic intellectual skills are used to conduct both types of analyses.

Microeconomic analysis is one of the fundamental skills of this discipline. It provides a critical foundation for both the design and the evaluation of policies. This is hardly surprising: Most public policy involves resource allocation. To carry out virtually any government policy either requires the direct use of significant resources or severely constrains the use of resources by economic agents (i.e., individuals, firms, or government agencies). These resources (labor, buildings, machinery, and natural resources such as water, oil, and land) are scarce, and different people will have different ideas about what to do with them. A proposed dam may mean economic survival for farmers needing water to grow crops, but may at the same time threaten destruction of a unique white-water river and canyon irreplaceable to the fish species that spawn there and to lovers of nature. A town may, through its powers of zoning, declare certain or all areas within it for residential use only—to the chagrin of a company that wants to build a small factory (which would employ some of the town residents and provide local tax revenues) and to the delight of estate owners adjacent to the restricted area (who fear plummeting property values). As in these examples, public policy–making typically forces tough choices. Microeconomics is the study of resource allocation choices, and microeconomic policy analysis is the study of those special choices involving government.

Proficiency in policy analysis requires more microeconomics than is usually conveyed in a basic microeconomic theory course. Typically, most of the basic course is devoted to explaining and evaluating the operation of a private market system. By a private market we refer to the voluntary trading offers and exchanges made by independent, private, economic agents acting as buyers or sellers of resources, goods, or services. Public policy involves, by contrast, a collective decision to influence or control behavior that would otherwise be shaped completely by the private agents in a market. This does not imply, however, that public policy is antithetical to the use of markets. As we will see, much of the task of public policy analysis is to help create a proper blending of collective and market controls over resource allocation decisions. Thus microeconomic policy analysis requires a thorough understanding of the conditions that favor collective over individual action and the alternative collective or policy actions that might be taken, and it requires a means of evaluating the alternatives in order to choose among them.

To offer a brief preview of analytic thinking, we consider the following hypothetical and informal conversation:

Official: We have got to do something about the traffic congestion and pollution caused by commuting into the city. Why don't we make a toll-free lane for automobiles carrying four or more passengers?

Analyst: That's an interesting idea, and it has worked with some success in a few other areas. But may I offer an alternative suggestion? The four-for-free plan has two important disadvantages. One, it reduces revenues that we sorely need. Two, it provides no incentive for commuters to form car pools of two and three, nor does it encourage the use of mass transit systems. I've heard this system may actually worsen the problem in the San Francisco area because many people have stopped using mass transit in order to carpool for free! Suppose instead that we raise the tolls during the peak commuting hours. The peak-toll plan would help solve our deficit problem. Furthermore, it would increase commuter incentives to form car pools of all sizes, to take mass transit rather than drive, and even to drive at off-peak hours for those who have that discretion.

In the above conversation, the analyst recognizes immediately that pollution and congestion are what economists call external effects (side effects of allocating resources to the activity of commuting). The analyst knows that economic efficiency requires a solution that "internalizes" the externalities and that the four-for-free plan deviates substantially from this idea. These same economic principles influenced the design of the alternative peak-toll plan.

Here is a second example:

Mayor: We have a big problem. Remember the foundation that pays for 4 years of nursing school for the top fifty low-income applicants from our public high schools? The first of these groups is now in its third year of studies. A new report claims that the average cost of this education is $56,000 per nurse graduate, and that society only benefits from those who complete their degrees. These benefits, it seems, are only worth $48,000 per nurse. The foundation is not only considering pulling the plug on the program, but I'm told that it will terminate the groups that have already started.

Advisor: Yes, that is a big problem. I'll review the study for accuracy as soon as possible. But I believe that we can prevent the foundation from making one mistake.

Mayor: What is it?

Advisor: I think that we can make it understand, by its own logic, that it should not terminate the groups that have already started.

Mayor: What do you mean?

Advisor: The funds that have already been expended on the groups now in college are already gone. Nobody can put them to an alternate use. What matters is, right from today, the incremental costs of completing their educations compared with the incremental benefits from doing so. It will only cost about $14,000 for the last year of the oldest group. But that $14,000 will yield $48,000 in benefits from the nursing degree! Termination would therefore cause a net additional loss of $34,000. It would be criminal to pull the plug on those students now. The same logic holds true for the other groups that have started. Even for the first-year group, the incremental benefits of $48,000 from graduating outweigh the incremental costs of about $42,000 for the remaining years. If the foundation is motivated by benefit-cost logic, the wise investment (not to mention public relations) is to let the groups that have started continue to completion.

In this second example, the advisor quickly identifies what economists call sunk costs, or costs that have already been incurred. It might be true, as the report claims, that it is a poor investment to start any new students down the path of the current program design. But the real costs to society of continuing those already in school are only the resources that could still be freed up for use elsewhere (the costs that have not yet been incurred). In this example, these are small in comparison to the benefits of completing the nursing education. It is fortunate that, in this case, the economic criterion suggests a course that would improve the foundation's public image under the circumstances.1

1. In other cases, economic criteria can suggest termination of a project that could be embarrassing to an organization that has incurred sunk but visible costs in it. The organization may prefer to spare itself embarrassment despite the poor economics. This is especially true when the organization does not bear the brunt of the economic loss. For example, the military may learn that the cost of completing an order for a new type of helicopter is far greater than what is economically reasonable. But owing to its sunk costs in prototype development, base preparation costs for training and maintenance, and prior lobbying to gain approval for the initial order, it may prefer to continue with the project rather than disappoint its supporters.

In actual settings, these conversations might lead to further consideration of alternative plans: the design of new alternatives, careful estimation of the consequences of the alternatives to be evaluated (e.g., the effect of specific tolls on the city's budget, benefit-cost calculations for the foundation of revised programs), and evaluation by a set of criteria wider than efficiency (e.g., fairness or equity, legality, and political and administrative feasibility). This book focuses on developing the microeconomic skills essential to applying this kind of analysis to a wide range of public policy problems.
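To make the advisor's benefit-cost reasoning explicit, the comparison can be restated as a short worked calculation. The figures are exactly those quoted in the dialogue above; nothing new is estimated here:

\[
\begin{aligned}
\text{Oldest group (one year left):}\quad & \$48{,}000 - \$14{,}000 = +\$34{,}000 \\
\text{First-year group (remaining years):}\quad & \$48{,}000 - \$42{,}000 = +\$6{,}000 \\
\text{Report's average-cost view (counts sunk costs):}\quad & \$48{,}000 - \$56{,}000 = -\$8{,}000
\end{aligned}
\]

Only the first two calculations, which compare benefits with costs not yet incurred, are relevant to the decision about students already enrolled; the third is the appropriate test only for starting new groups.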

The Diverse Economic Activities of Governments

To illustrate more concretely the specific subject matter of microeconomic policy analysis, let us take a brief tour of the many economic activities of government. This tour will serve the additional purpose of indicating the extensiveness of public controls over resource allocation. While we have already asserted that most public policy involves resource allocation, it is just as important that we understand the reverse relation: All resource allocation decisions are shaped by public policy. This shaping occurs in different ways: through direct government purchase or supply of particular activities, the regulation of market activities, the development and maintenance of a legal system, and the undertaking of redistributive programs. Let us consider each of these in turn.

In the year 2000 the total value of all measured goods and services produced in the United States—called the gross domestic product, or GDP—was about $10 trillion.2 Roughly 17 percent of the total GDP, valued at $1.7 trillion, consisted of purchases by governments to provide various goods and services to citizens. The governments include federal, state, and local governments and regional authorities acting as collective agents for citizens. These different governments operate schools, hospitals, and parks; provide refuse collection, fire protection, and national defense; build dams; maintain the roads; sponsor research to fight cancer; and purchase a host of other goods and services that are intended to benefit the citizenry. What explains why these goods are provided through governments instead of markets? Why not let individual consumers seek them through the marketplace as they seek movies and food? What do we know about the economic advantages of doing it one way or the other?

Such questions are still quite general. Of the 17 percent of goods and services purchased by governments, approximately 11 percent was supplied directly through government agencies and enterprises (e.g., the Post Office) and the other 6 percent was provided through contracts and grants (e.g., a local government may tax its citizens to provide refuse collection and contract with a private firm to actually do the work).3 When is it that a government should actually supply the goods, and when should it contract with private firms to supply them? If it does the latter, how should the contract be written to protect the economic interests of the citizens footing the bill? If a government actually produces the good or service itself, what mechanisms are there to encourage economy in production?

2. Economic Report of the President, January 2001 (Washington, D.C.: U.S. Government Printing Office, 2001), p. 274, Table B-1.
3. Ibid., p. 288, Table B-10. In 2000, $1.087 trillion of GDP was provided directly by the government sector out of $1.748 trillion in total government purchases, or 62 percent. The 11 percent in the text is approximately 0.62 of 17 percent.
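Written out, the arithmetic in note 3 behind the 11 percent/6 percent split in the text is simply (2000 figures, rounded):

\[
\frac{\$1.087\ \text{trillion}}{\$1.748\ \text{trillion}} \approx 0.62, \qquad 0.62 \times 17\% \approx 11\%, \qquad 17\% - 11\% = 6\%.
\]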


To purchase all these services, governments must raise revenues. The overwhelming bulk of the revenues comes from taxes; a smaller portion comes from individual user fees (e.g., park admission charges). When should user fees be charged, and to what extent? If taxes must be collected, who should pay them and how much should each taxpayer be assessed?

The economic policy issues illustrated so far arise when governments have taken primary responsibility for providing goods and services. However, governments have great influence over a much wider range of goods and services through their regulatory mechanisms. In these cases, individual economic agents acting in the market still retain considerable decision-making power over what and how much to buy or sell of different commodities, although the available choices are conditioned by the regulations. Government regulatory mechanisms influence prices, qualities, and quantities of goods and services traded in the market as well as the information available to consumers about them.

Many industries are subject to price controls on their products. The form that such controls take varies. For example, some industries have their prices controlled indirectly through limits set by regulatory commissions on the return that producers are allowed to earn on their investments. Natural gas and electric utilities are common examples. Occasionally, rental housing prices are regulated through rent control policies; in the past, domestic oil prices have been controlled. Another form of price control operates through taxes and subsidies. These are common in the area of international trade; many countries have policies to discourage imports through tariffs and encourage exports through subsidies. Within the domestic economy, alcohol and tobacco products are taxed to raise their prices and thus discourage their use (and many countries are considering new taxes on carbon emissions to prevent global warming); disaster insurance and loans for small businesses and college educations are subsidized to encourage their use. Although it is difficult to give precise figures on the amount of economic activity subject to some form of price regulation, a reasonable estimate is that at least an additional 20 to 25 percent of GDP is affected by this type of public policy.

Price regulation may be the least common form of regulation. Product regulations controlling quantities and qualities are widely used, generally to provide environmental and consumer protection. Some of these regulations are highly visible; examples are automobile safety standards and the prescription requirements for the sale of medicines and drugs. Other regulatory activities, such as public health standards in food-handling institutions (e.g., restaurants, supermarkets, and frozen food factories) and antiflammability requirements for children's clothing, are less visible. There are standards for clean air and water, worker health and safety, and housing construction; there are licensing requirements for physicians and auto repair shops; and there is restricted entry into the industries providing taxi service and radio broadcasting. There are age restrictions on the sale of certain goods and services, notably alcoholic beverages. There are import quotas on the quantities of textile products that developed nations may receive from developing nations, under the Multi-Fiber Arrangement. Product regulations of one kind or another affect virtually every industry.

In addition to the product and price regulations, information about products is regulated. The Securities and Exchange Commission requires that certain information be provided to prospective buyers of new stock and bond offerings; lending institutions are subject to truth-in-lending laws; the Environmental Protection Agency tests new-model automobiles for their fuel consumption rates each year and publishes the results; tobacco products must be labeled as dangerous to one's health.

What are the economic circumstances that might make these regulatory public policies desirable, and are such circumstances present in the industries now regulated? How does one know when to recommend price, product, or information controls, and what form they should take? The social objectives or benefits of these policies may be readily apparent, but their costs are often less obvious. For example, when electricity production must be undertaken with nonpolluting production techniques, the cost of electricity production, and therefore its price, typically rises. Thus a decision to regulate pollution by utility companies is paid for through higher electricity prices. One task of microeconomic policy analysis is to consider costs and to design pollution control policies that achieve pollution reduction goals at the least total cost.

Another important area of public policy that greatly affects resource allocation is the activity of developing the law. The legal system defines property rights and responsibilities that shape all exchanges among economic agents. If there were no law establishing one's ownership, one might have a hard time selling a good or preventing others from taking it. Without a patent system to establish the inventor's ownership rights, less effort would be devoted to inventing. Whether or not the existing patent system can be improved is a fair question for analysis. An example of a policy change involving property responsibilities is the spread of no-fault automobile accident liability. Under the old system, the driver at fault was liable for damages done to the injured party. The cost of insurance reflected both the damage and the transaction costs of legal battles to determine which driver was at fault. These transaction costs amounted to a significant proportion of the total insurance costs. The idea behind the no-fault concept is to reduce the transaction costs by eliminating in some cases the need to determine who was at fault. To the extent that this system works, consumers benefit from lower automobile insurance premiums. These examples should illustrate that analysis of the law is another public policy area in which microeconomic policy analysis can be applied.

There is another very important function of government activity that can be put through the filter of microeconomic analysis: Governments undertake redistributive programs to influence the distribution of goods and services among citizens. Of course, all resource allocation decisions affect the distribution of well-being among citizens, and the fairness of the distribution is always a concern of good policy analysis; here the interest is in the many programs undertaken with that equity objective as the central concern. In 2000, of the $3.1 trillion collected in taxes and fees by the federal, state, and local governments (31% of GDP), $1.1 trillion was redistributed as transfer payments to persons.4 Another common method of redistribution, not included in the above direct payment figures, is through "tax expenditures" that reduce the amount of taxes owed by persons qualifying for the special provisions (e.g., people who are elderly or who have disabilities).

4. Ibid., p. 372, Table B-83.


Government redistributive programs include welfare, food stamps, Medicaid, farm subsidies, and Social Security, just to name a few. The programs generally transfer resources to the poorest groups in society from those better off. However, some programs might be seen as forcing individuals to redistribute their own spending from one time period to another: Social Security takes away some of our income while we work and gives back income when we retire. Other public policies, such as farm subsidies, grants to students for higher education, and oil depletion allowances, may redistribute resources from poorer to richer groups. As the success of redistributive programs generally depends heavily on the resource allocation decisions made by the affected economic agents, microeconomic policy analysis provides tools for both the design and the evaluation of such programs.

By now it should be clear that all resource allocation decisions are influenced, at least to some degree, by public policy. Governments shape resource allocations through their direct purchase and supply of goods and services, their regulations of specific economic activities, their development and maintenance of the legal system, and their redistributive programs. All the activities mentioned above illustrate the set of public policy actions that can be analyzed by the methods presented in this book. In undertaking such studies, it is important to consider the roles of analysis in a policy-making process and how the process influences the objectives of the analysis. The next section contains a general discussion of these issues.

Policy-Making and the Roles of Microeconomic Policy Analysis Public policy-making is a complex process. Policy is the outcome of a series of decisions and actions by people with varying motivations and differing information. Policy analysis, when not done for purely academic purposes, may be used to aid any of these people—be they elected officials or candidates for elected office, bureaucrats, members of various interest groups (including those attempting to represent the “public” interest), or the electorate. Not surprisingly, these different people may not agree on the merits of particular policies, even if perfectly informed about them, because of differing values. Policy analysis cannot resolve these basic conflicts; for better or for worse, the political process is itself the mechanism of resolution. It is important to recognize that the political process heavily influences the type of policy analysis that is done and the extent to which it is used. For that reason, anyone interested in learning the skills of analysis for the purpose of advising should try to understand the process. Such understanding yields a better perspective of the possibilities for contributions through analytic work as well as the limitations. We offer below the barest introduction (bordering shamelessly on caricature) to some of the rich thinking that has been done on this subject, and we strongly encourage a full reading of the source material. Lindblom, in 1965, put forth an optimistic model of a democratic political process.5 He described a system like ours as one of partisan mutual adjustment among the various

5

Charles E. Lindblom, The Intelligence of Democracy (New York: The Free Press, 1965).

10

Chapter One

interest groups (e.g., unions, bureaucracies, corporations, consumer groups, professional associations) in the society. In his view, the political pulling and hauling by diverse groups (from both within and outside government) in pursuit of self-interest leads to appropriate compromises and workable solutions. No one in this pluralist process ever sets national goals or identifies the alternative means available to achieve them.6 Rather, progress is made by sequential adaptation, or trial and error. Legislation proposed by one group, for example, has its design modified frequently as it wends its way through legislative subcommittees, committees, the full legislative bodies, and executive branch considerations. At each stage, the modifications reflect the compromises that arise in response to strengths and weaknesses identified by the affected interest groups and the changes they propose. After enactment and during implementation, the diverse interest groups continue to influence the specific ways in which the new legislation is carried out. These groups will also monitor the resulting government operating procedures. The procedures may not lead to the intended results, or someone may think of a better set of procedures. If enough support can be mustered for program modification, the groups can force renewed legislative consideration. Lindblom argued that muddling through is better than any alternative process designed to solve problems in a synoptic or comprehensive way. For example, an attempt to specify goals in a clear way may permit a sharper evaluation of the alternative means, but it will also increase the political difficulty of achieving a majority coalition. Interest groups are diverse precisely because they have real differences in goals, and they are unlikely to put those differences aside. Instead, they will agree only to statements of goals that are virtually meaningless in content (e.g., “This legislation is designed to promote the national security and the national welfare.”) and do not really guide efforts at evaluation. The groups affected by proposed legislation are concerned about the end result. It is easier politically to build a coalition around a specific alternative and worry later about how to describe (in a sufficiently bland way) the purposes it fulfills.7 Optimistic views of the “intelligence” of actual pluralistic processes are not widely held. Many people argue that the actual processes differ significantly (and perhaps inevitably) from the view Lindblom offered in 1965. For example, one obvious concern is whether an actual process is weighted unduly toward the “haves” (e.g., who can afford to employ highpowered lobbyists and analysts) and away from the “have nots.” In a later book, Lindblom himself writes that “business privilege” in the United States causes “a skewed pattern of mutual adjustment.”8 Authors from the “public choice” branch of microeconomics, which studies the efficiency of resource allocation through political processes, have often called 6 We refer to “national goals” for illustrative simplicity; the same logic applies to state, local, or other polities that use representative or direct democratic forms of government. 7 Wildavsky also put forth this general view. See Aaron Wildavsky, “The Political Economy of Efficiency: Cost-Benefit Analysis, Systems Analysis, and Program Budgeting,” Public Administrative Review, 26, No. 4, December 1966, pp. 292–310. See also D. Braybrooke and C. 
Lindblom, A Strategy of Decision: Policy Evaluation as a Social Process (New York: The Free Press, 1963). 8 Charles E. Lindblom, Politics and Markets (New York: Basic Books, 1977), p. 348.

Introduction to Microeconomic Policy Analysis

11

into question the wisdom of certain voting procedures or of industry regulatory processes that may protect the industry more than any other interest.9 Another important source of “skewness,” argues Schultze, is that efficiency and effectiveness considerations are not explicitly brought into the political arena.10 Schultze accepts the value of having a pluralist process and the inevitability that it will be characterized by special interest advocacy and political bargaining “in the context of conflicting and vaguely known values.”11 But he argues that there is a crucial role for policy analysis in this process. It improves the process’s “intelligence.” Analysis can identify the links between general values (in particular, efficiency and effectiveness) and specific program characteristics—links that are by no means obvious to anyone. Thus he offers this view of policy analysis: It is not really important that the analysis be fully accepted by all the participants in the bargaining process. We can hardly expect . . . that a good analysis can be equated with a generally accepted one. But analysis can help focus debate upon matters about which there are real differences of value, where political judgments are necessary. It can suggest superior alternatives, eliminating, or at least minimizing, the number of inferior solutions. Thus by sharpening the debate, systematic analysis can enormously improve it.12 Viewing a political process as a whole helps us to understand the inevitability of analytic suboptimization: The problem worked on by any single analyst is inevitably only a partial view of the problem considered by the system as a whole, and thus a single analyst’s proposed solution is not necessarily optimal from the larger perspective. For example, during the Carter administration the president directed two independent analytic teams from the departments of Labor and Health and Human Services to develop welfare reform proposals. Not surprisingly, the team from the Department of Labor proposed a reform emphasizing a work component of the welfare system, whereas the team from Health and Human Services emphasized the cash assistance components. This not only reflects bureaucratic interests; it is a natural consequence of the expertise of each team.13 Similarly, analysts for congressional committees or for varying interest groups would be expected to perceive the welfare problem slightly differently. The inevitability of suboptimization has important consequences. It becomes clear that the intelligence of the process as a whole depends not only on how well each analyst does 9 We consider many of these theories in various chapters of this book, e.g., bureaucratic behavior in Chapter 9, reasons for governmental failure in Chapter 13, and difficulties regulating industries in Chapter 16. 10 See Charles L. Schultze, The Politics and Economics of Public Spending (Washington, D.C.: The Brookings Institution, 1968), particularly Chapters 3 and 4. 11 Ibid., p. 74. 12 Ibid., p. 75. 13 This particular debate foreshadowed the important developments in welfare policy during the 1980s and 1990s, most notably the Family Support Act of 1988 and the 1996 Personal Responsibility and Work Opportunity Reconciliation Act. We analyze various aspects of current welfare policies at a number of points later on in the text.

12

Chapter One

the task assigned but also on the total analytic effort and on how cleverly the analytic tasks are parceled out. A certain amount of analytic overlap provides checks and balances, for example. However, excessive duplication of efforts may leave an important part of the problem unattended. Another pitfall is relying too heavily on analysis when the “true” social objectives are difficult to operationalize. For example, the problem of identifying an efficient national defense is not really soluble by analytic techniques (although certain important insights can be generated). If, however, it is decided that we should have the capability to gather a certain number of troops near the Alaskan oil pipeline within a certain number of days, then analysts may be able to reject methods that are too costly and help identify lowercost alternatives.14 The line of thought concerning analytic contributions in a pluralist political process can be carried further. Nelson suggests that analysts can and should play important roles in clarifying the nature of the problem, the values that are at stake, and an appropriate weighting of those values to identify a recommended solution.15 The idea is that the political process, without analysis, operates largely in the “intellectual dark” about efficiency and equity consequences. Thus both Nelson and Schultze appreciate the value of muddling through but think we can do it somewhat better with substantial analytic input to the pluralistic process. Nelson goes on to suggest that it is also important for substantial analysis to go on outside the constraints of the current political environment. The industry of government, like any other industry, continually needs research and development to improve its products. Analysis from within the thick of government tends to concentrate on identifying incremental improvements to existing activities. Achieving those improvements is important, but analytic efforts in an environment that offers more freedom to reexamine fundamental assumptions and methods may be a crucial source of important new ideas. The above general thoughts about the political process help us to understand the roles of public policy analysis. We enumerate them here in terms of four specific objectives. We have mentioned that (1) analysis may help define a problem that is only dimly perceived or vaguely understood by participants in the policy-making process. We have also mentioned that (2) a crucial role of analysis is in identifying or designing new policy proposals. Policy analysis also has these two important additional functions: (3) identification of the consequences of proposed policies, and (4) normative evaluation of those consequences in terms of certain broad social goals. Let us distinguish these latter two analytic objectives. The third objective, identification of consequences, is a positive or factual task. It involves answering questions such as these: “If the bridge toll is raised from $1.00 to $2.00, by how much will that reduce congestion?” (Presumably, fewer automobile trips will be taken across the bridge.) “If we combine the two local schools into one larger one, how will that affect education costs?” “If we guarantee all adults an income of $9000 per year, how 14 An excellent exposition of suboptimization with application to national defense is contained in C. Hitch and R. McKean, The Economics of Defense in the Nuclear Age (New York: Athenum, 1967). 15 See Richard R. 
Nelson, The Moon and the Ghetto: An Essay on Public Policy Analysis (New York: W. W. Norton & Company, 1977).

will that affect the amount of work adults are willing to undertake?” These questions can rarely be answered with absolute certainty, but analysis can frequently provide reasonable estimates. With improved estimates of the consequences of proposed policies, policy makers can make better decisions about whether to support them.

The fourth objective, evaluation, is a normative or judgmental task. It involves the “should” questions: “Should the bridge toll be raised from $1.00 to $2.00?” “Should the nation have a policy that guarantees all adults $9000 per year?” The answers to these questions always depend on values. There is no single, well-defined set of values that analysts must use in attempting to evaluate policies; the choice of criteria is discretionary.16 Nevertheless, in practice certain criteria are generally common to all analyses: efficiency, equity or fairness, political feasibility, and administrative feasibility. Efficiency and equity are commonly used criteria because almost all people care about them; since the insights of microeconomic analysis apply directly to these concepts, this book will emphasize them.

Political feasibility is a common evaluative criterion because specific users of analyses are rarely interested in pursuing proposed policies, however efficient and equitable, if the policies cannot gain the necessary approval in the political process. In my view, this criterion differs from the others in that it makes no sense to pursue it for its own sake: it is a constraint rather than an objective. While it may be naïve to recommend a policy that fosters certain social objectives without considering political feasibility, it is irresponsible to recommend a policy that is politically feasible without considering its effects on social objectives. Although different individuals will weigh political feasibility in accordance with their personal judgments, analytic attention to it is entirely rational. If one is considering a policy that would need approval of the United Nations Security Council, but it is known that Russia is adamantly opposed to the policy and would exercise its veto power, then the only purpose of raising the issue would be to garner its symbolic value. At times, symbolism may be important; it may lay the groundwork for future action. Alternatively, one might make better use of the time by seeking policies that are both socially beneficial and politically feasible.

The point to emphasize here is that good policy analysis will generally include a diagnosis of the political prospects for the policies analyzed. Other examples of political analysis might question the prospects for passage in key legislative committees, whether any powerful lobbyist will work to pass or oppose the proposed policy, and whether the policy’s potential backers will gain votes for such a stand in the next election. Economist Joseph Stiglitz, writing about some successes and failures of good economic proposals made while he was chair of President Clinton’s Council of Economic Advisors, reports on several important political obstacles that seem to recur. For example, he mentions the difficulty of government making a credible commitment to milk producers for a more economic alternative than the existing milk price supports, in a failed attempt to obtain their political support.17 General analysis of political feasibility is beyond the scope of this book, although a number of occasions on which microeconomic analysis provides political insight will be discussed. However, the reader interested in the use of policy analysis for other than academic purposes should undertake more complete study in this area.18

Administrative feasibility is also an important criterion. Like political feasibility, it is really a constraint rather than a desirable social objective in its own right. But policies that pass analytic scrutiny for efficiency, fairness, and political feasibility will not work as intended unless the agencies responsible for administering them implement them in a manner consistent with the other objectives. There are several reasons why divergence might occur, and good analysts will consider its likelihood as part of their work. One set of reasons involves the limits on any organization’s capabilities in terms of information, calculation, and enforcement. Consider, for example, a proposal for a tax on air pollution as a means to discourage an unhealthy activity. An analyst ought to reject this proposal if it is not possible (or in some circumstances merely very costly) for the administrative agency to meter or otherwise know reliably the amount of pollution emitted by the different sources liable for the tax. This may seem obvious, but considerations like these are often ignored by excellent scholars who offer proposals without ever encountering responsibility for their implementation. Policy analysts, on the other hand, do bear responsibility for evaluating the administrative feasibility of proposals.

Another reason for divergence is that the implementing organization’s objectives may differ from the objectives of those who design and approve the policy. That is, even if the agency has the capabilities to implement the policy efficiently and fairly, it may choose not to do so for other reasons. Suppose a state department of transportation is staffed primarily by individuals who like to build more roads. This department may participate in a federal grant program intended primarily to stimulate other transportation alternatives, but may nevertheless use the funds primarily to build more roads. The policy analyst will consider the consistency between the implementing agency’s goals and the policy’s objectives in evaluating administrative feasibility. As with political feasibility, there is a rich literature on organizational behavior that is useful for policy analysts to study, but it falls outside the scope of this book.19 There are also many instances in which microeconomic analysis does make a distinctive contribution to understanding administrative feasibility, and this book includes applications to illustrate this.

In addition to the general criteria mentioned so far, other criteria may be important for particular issues. Some policies might be intended to enhance individual freedom or develop community spirit. Policies must conform to existing law (though in the long run, the law should conform to good policy!). Good analyses of these policies will, at a minimum, make these considerations clear to the users of the analyses. Because these aspects are only rarely illuminated by microeconomic analysis, little will be said here other than to note that one should be on the alert for them and remain open-minded about their importance.

16 That is why this task is described as normative. If analysts did not have to rely partially upon their own values to choose criteria and did not have any discretion about how to operationalize them, then we could describe the evaluative task as positive from the perspective of the analyst.

17 See Joseph Stiglitz, “The Private Uses of Public Interests: Incentives and Institutions,” Journal of Economic Perspectives, 12, No. 2, Spring 1998, pp. 3–22.

18 See, for example, John W. Kingdon, Agendas, Alternatives and Public Policies (New York: HarperCollins College Publishers, 1995) and Aaron Wildavsky, Speaking Truth to Power: The Art and Craft of Policy Analysis (Boston: Little, Brown, and Company, 1979).

19 An introduction to this material that also includes a discussion of political feasibility is in David L. Weimer and Aidan R. Vining, Policy Analysis: Concepts and Practice (Englewood Cliffs, N.J.: Prentice-Hall, 1999), Chapter 13.

Organization of the Book

The task of this book is to show how to extend and relate microeconomic theory to the design and analysis of public policies. A theme that will be emphasized throughout is that a solid understanding of the actual behavior of economic agents is essential to the task. Part of the understanding comes from learning about the individual economic agents: their motivations and capabilities and the effects of public policies on their economic opportunities. Once behavior at the individual level is understood, it is easier to consider questions of organization: how to design and evaluate alternative systems that influence the interaction among economic agents. Thus the book focuses on individual behavior first and organizational behavior second. It consists of five interrelated parts.

Part I, the introductory section, contains this chapter as well as two chapters to acquaint the reader with economic models that are commonly used in analysis. The second chapter introduces the concept of model construction as an analytic procedure and illustrates some of the issues that arise in using models. The discussion centers on a simple model of economic demand and supply and its use in understanding benefit-cost reasoning. The third chapter introduces normative concepts of efficiency and equity and develops the model of individual decision-making known as utility maximization to predict behavior and to evaluate the efficiency and equity consequences of that behavior. These chapters give an overview of the subject matter and a foundation for the methods of microeconomic policy analysis.

Part II focuses on the resource allocation decisions of individuals. Different aspects of the theory of individual choice are developed to show their uses and importance in policy analysis. We begin with standard aspects of individual choice theory—budget constraints, income effects, and substitution effects—and relate those concepts to the design of specific policies such as the Earned Income Tax Credit and intergovernmental grants. This involves some extension of ordinary theory to include models with a variety of restrictions on individual choices. We then introduce equity standards in some detail and consider how models of individual choice can be used in the design of school finance policies to achieve equity objectives. Then the relation between individual choices and consumer demand functions is explored. The methodology of benefit-cost analysis is introduced, focusing initially on the use of demand functions for public policy benefit estimation. We then extend the model of individual decision-making to a variety of situations in which the outcome from a decision is not known at the time the decision is made (such as deciding how to invest one’s retirement savings or what insurance policies to purchase). Connections among uncertainty, individual choice, and public policy are investigated through these different extensions, which include expected utility maximization, game theory, and the implications of bounded rationality. Policy examples such as national health insurance and disaster insurance subsidies are used to illustrate these points. Finally, we investigate individual resource allocation over time. We analyze saving, borrowing, and capital creation by investing and discuss the concepts of discounting used to compare resource allocations in different time periods. We consider index construction (such as the Consumer Price Index) and policies of indexation intended to reduce intertemporal uncertainty. All of the topics covered in Part II relate to individual behavior in pursuit of personal satisfaction or utility.

Part III concerns the efforts of economic agents to convert scarce resources into goods and services: the production task in an economy. The effectiveness of many public policies depends on the response of private profit-seeking firms to them; an example is the response of doctors and for-profit hospitals to the prospective payment system used by Medicare. Other public policies succeed or fail depending on the behavior of public agencies or private nonprofit agencies; an example is the effect of the pricing decisions of a public mass transit system on the number of people who will use the system. The performance of an economic agent undertaking production is limited by the available technology, and policy analysis often has the task of uncovering technological limits through estimation of production possibilities and their associated costs. Potential performance can then be compared with actual practice or predicted behavior under a particular policy design. One must be careful in extending the ordinary method of production and cost analysis to the public sector, because of poor output measures and the possible lack of the usual duality relation that assumes production at least cost. Several examples of analyses involving these problems will be given. Not only must the technological realities be understood, but predicting an agency’s or a firm’s response to a public policy requires an understanding of its motivations and capabilities. Different models used for these purposes are explained in Part III. For the most part, economic theory treats each production entity as an individual decision-making unit; for example, a firm may maximize its profits. However, this treatment ignores the fact that firms and agencies are generally organizations consisting of many diverse individuals. A discussion of the firm as an organizational means to coordinate individual decision-making helps to connect the analyses presented in Parts II and III with the organizational policy issues that are the focus of Parts IV and V.

Part IV focuses on the interaction of supply by competing suppliers and demand by many purchasers, in market situations referred to as perfectly competitive. We construct models of the operation of such markets in different situations, including markets for the consumer purchase of goods and services as well as markets for producer purchase of inputs or factors of production. In the absence of specific policy interventions, the models predict that resource allocation will be efficient. But there are other reasons, usually ones of equity, that motivate policy interventions in some of them. The issues are the degree of public control of these markets that is desirable and the policy instruments that can be used to achieve varying degrees of control. We begin Part IV with a review of the conditions for market efficiency, and then apply the conditions in a general equilibrium framework of perfect competition. We use that framework to illustrate how the effects of taxation can be predicted and evaluated. We demonstrate that taxation generally causes “market failure” or inefficiency. Then we look at a number of more specific markets. By using extended examples of price supports for agriculture, apartment rent control, methods of securing military labor, and gasoline rationing, we examine a variety of policy approaches to make the markets for specific goods more equitable and efficient. In these examples, the details of the administration and enforcement of policies bear importantly on the success of the policies—a lesson with continued importance in Part V.

Whereas Part IV is restricted to competitive market settings, Part V considers the circumstances that are normally called market failures: situations in which the attempted use of a competitive market process to allocate scarce resources results in inefficient allocations. We begin with a general review of the different reasons for market failures juxtaposed with a general review of reasons for governmental failures. In the attempt to reduce the extent of any market failure, a central idea is that all alternative methods of economic organization will have weaknesses that must be considered along with their strengths. Successive chapters in this section focus on each of the different types of market failures. We consider the problem of providing a public good: a type of good that is shared by a collectivity of consumers, and we use television broadcasting and the role of public television as an example. We next consider the problem of externalities, or side effects of resource allocation. In an extended example, we analyze alternative ways that we might respond to the problem of air pollution (a side effect of much industrial production, car-driving, and other activities). Then we turn to problems of limited competition known as oligopoly and monopoly markets. We analyze in some detail alternative regulatory methods for limiting inefficiency in these markets, such as those used for telecommunications services and electric power. The next market failures that we consider arise in allocating resources over time. These include the failure of capital markets to provide loans to those who wish to invest in their own higher education and secure a more productive future. They also include the failure of markets to account for the demands of future generations to reserve some of our exhaustible resources. Finally, we consider market failures that are due to important information asymmetries about the quality of a good or service. Examples include labor market discrimination that arises owing to the difficulty an employer has knowing the qualities of job applicants and the provision of child care for working parents who cannot directly observe the quality of care their children receive.

Each of the market failures is associated with a situation in which the private incentives of one or more market participants deviate from the ones necessary to generate efficient allocations. Well-established microeconomic theory is used to identify and explain these incentive problems. However, the methods available for designing, comparing, and evaluating the imperfect alternatives in all of these situations of market failure are still developing. Thus Part V includes in its extended examples a variety of different methods that have been used to compare different organizational ways of trying to solve or mitigate the failures. These approaches include the use of “laboratory” methods for testing alternatives, a method for considering a choice of rules and enforcement procedures that focuses on identifying an appropriate level of decentralization, and a method known as transaction cost economics that helps identify appropriate contracting provisions. They also include simulation models, consideration of economic history, and use of an “exit, voice, loyalty” framework to identify economical ways to convey demand for goods and services with important multidimensional quality attributes. These methods can produce new ideas for organizational solutions to problems of market failure, as well as insight about the strengths and weaknesses of alternative solutions.

Conclusion

In this chapter we have provided an overview of the relation between public policy analysis and the study of microeconomics. Public policy can often be analyzed and understood as a collective involvement in the resource allocation process. The intervention takes place to some degree in virtually all areas of the economy. Microeconomic policy analysis attempts to predict and evaluate the consequences of collective actions, and it can be used in the design of those actions as well as to identify areas in which public policy can be improved.

Although there is no single well-defined set of evaluative criteria that must be used in policy analysis, this book will emphasize two criteria that are commonly used: efficiency and equity. By using and extending ordinary principles of microeconomic analysis, we will attempt to impart skills sufficient for the analysis of a wide range of public policies by those criteria.

CHAPTER TWO

AN INTRODUCTION TO MODELING: DEMAND, SUPPLY, AND BENEFIT-COST REASONING

TWO OF THE PRIMARY tasks of microeconomic policy analysis are prediction and evaluation. This chapter and the next illustrate, at a rudimentary level, how models constructed from microeconomic theory are used to help accomplish these tasks. In actual practice, considerable skill is required to predict and evaluate the consequences of specific policy alternatives. Later chapters build upon microeconomic theory to develop more sophisticated modeling skills and illustrate their use in specific policy contexts. However, even when the predictions and evaluations are derived from the best practicable analytic methods, the possibility remains that the judgments are wrong. We try to clarify why analytic efforts are important despite the persistence of uncertainty about the conclusions.

The chapter begins with a general discussion of modeling. Then we turn to a somewhat unusual model: a story. One purpose of the story is to introduce basic economic concepts of demand and supply, in a way that also introduces the benefit-cost reasoning that is common in policy analysis. Each of these concepts will be developed more slowly, in more detail, and with more application in successive chapters. But it helps at the start to have a sense of where we are headed: the kinds of tools that we will develop and some introduction to their use. This story serves other purposes as well. It makes one think about modeling: the ease with which one can slip into making predictions and evaluations based upon a model and some of the informal ways that people use to judge a model. It will raise questions about the concept of value in economics. It will foreshadow some of the policy stakes that may depend upon the models that analysts use. The body of this book, in due course, will provide amplification, qualification, and clarification of these issues.

Modeling: A Basic Tool of Microeconomic Policy Analysis

A powerful technique used to predict the consequences of policies is modeling. A model is an abstraction intended to convey the essence of some particular aspect of the real world.


A child can get a pretty good idea of an airplane by looking at a plastic model of one. A science student can predict how gravity will affect a wide variety of objects, falling from any number of different heights, simply by using the mathematical equations that model gravity’s effects. In the latter case the model equations represent assumptions based on a theory of gravity, and their implications can be tested against reality. But can microeconomic theory be used to model resource allocation decisions accurately enough to help determine the consequences of proposed policies?

The answer is a qualified yes. Economists have developed models with assumptions that seem plausible in a wide range of circumstances and whose predictions are frequently borne out by actual consumer behavior. As one of the social sciences, economics attempts to predict human behavior, and its accuracy, not surprisingly, is not as great as some models representing “laws” of the physical sciences. People do not all behave the same, and thus there is no ultimate model that we expect to predict each individual’s economic decisions perfectly. Nevertheless, there appear to be strong commonalities in much economic decision-making. It is these commonalities that we strive to identify and understand through models and that we use to predict future economic decisions (such as those expected in response to a new policy).

Of course, predictions can be more or less ambitious. A qualitative prediction that “consumers will use less home heating oil as its price increases” is less ambitious than “consumers will reduce home heating oil consumption by 5 percent in the year following a price increase of 20 percent.” The latter is less ambitious than one that adds to it: “And this will cause an efficiency decrease valued at $2 billion.” Since the more precise predictions generally come from models that are more difficult to construct and require more detailed information to operate, the analyst must think about the precision required to resolve a particular policy question.

Although there is analytic choice about the degree of precision to seek in modeling, it is important to recognize from the start that all models only approximate reality. In general, the usefulness of a model to its users depends on the extent to which it increases knowledge or understanding (and not on how much it leaves unexplained). The plastic display model of an airplane is not considered a failure because it does not fly. Or consider the scientific models that represent the laws of aerodynamics. Earlier in the twentieth century, the models of those laws were used for airplane design, even though the same models predicted that bumblebees must be unable to fly! Without any loss of respect for science over this apparent model failure, airplanes (as well as bumblebees) flew successfully.

The perception that there is “something wrong” with a model does not necessarily deter us from using the model, as the above example illustrates. Analytically, we do not replace a partial success with nothing. Before replacing the earlier models of aerodynamic laws as applied to bumblebees, we required “something better.”1 The same is true of economic models: “Good” models predict well enough to increase our understanding of certain situations, even though they may not predict them perfectly and there may be related situations in which the same models do not predict as well as expected.

If we predict, as in the last example, that average home heating oil purchases will drop by 5 percent, for some purposes it might be “unimportant” that some people reduce by only 2 percent while others reduce by 8 percent (as long as the average reduction is 5 percent). We expect to see a distribution of actual decisions around the predicted average. The model is more powerful, however, when the actual decisions cluster tightly around the model predictions (the model comes “closer” to predicting each person’s decision correctly). To some extent, the power of a model depends on the analytic effort that goes into its construction (which in turn should depend on how valuable it is to have a more powerful model). However, human nature may be such that for some decisions, no model will make “powerful” predictions.

What is often more puzzling is when the central tendency or average predicted by a model is systematically different from actual behavior. If the model predicts that almost all home owners will invest in attic insulation (because its modest cost is quickly offset by lower fuel bills that save far more), but few actually do invest, that suggests that a different model might be more accurate. These kinds of unexplained imperfections in the predictions remain as “puzzles” that stimulate new research that leads (with luck) to better models. As in science, the process of improvement is continual.
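
To see why the model makes that prediction, here is a minimal sketch of the home owner’s arithmetic, assuming a simple rational-investment model. All of the numbers (installation cost, annual fuel savings, discount rate, lifetime) are hypothetical, chosen only to illustrate the reasoning; none come from the text.

```python
# Hypothetical attic-insulation decision under a simple rational-investment model.
# Every figure below is an illustrative assumption, not data from the text.
install_cost = 300.0    # one-time cost of the insulation ($)
annual_saving = 120.0   # yearly reduction in fuel bills ($)
discount_rate = 0.05    # annual rate used to discount future savings
years = 10              # assumed useful life of the insulation

# Present value of the stream of fuel savings over the insulation's life.
pv_savings = sum(annual_saving / (1 + discount_rate) ** t for t in range(1, years + 1))

print(f"simple payback: {install_cost / annual_saving:.1f} years")   # 2.5 years
print(f"present value of savings: ${pv_savings:,.2f}")               # about $926.61
print(f"net present value: ${pv_savings - install_cost:,.2f}")       # about $626.61, so "invest"
```

With numbers anything like these, the net present value is strongly positive and the model predicts near-universal investment; the puzzle is precisely that observed behavior departs from this prediction.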

Although all models are imperfect, not all are imperfect in the same way. Models vary in the variety of phenomena they can explain, as well as in their accuracy in explaining any particular phenomenon. In the economic example of the consumer response to a price increase for home heating oil, we suggested that alternative models can be constructed, models that attempt both increasingly precise estimates of the magnitude of the response (e.g., the specific percentage reduction in home heating oil usage) and broader implications of the response (e.g., the value of an efficiency increase or decrease associated with it).

Even the simplest prediction illustrated—that consumers will purchase less home heating oil if its price rises—might be all that is necessary to resolve a particular issue. For example, suppose the consumer price of heating oil has been increased by a tax in order to induce greater conservation of the supply. Imagine the problem now is to alleviate the increased financial burden of the tax on low-income families. It is not unusual to hear some policy proposals to reduce the price for these families only (e.g., exempt them from the tax). But this works in opposition to the primary objective. Chapter 4 shows that such proposals are likely to be inferior to alternatives designed to provide the same amount of aid without reducing the price of heating oil.2 This conclusion is derived from the model making the least ambitious predictions; it depends primarily on the qualitative prediction that consumers purchase less home heating oil at higher prices and more at lower prices. This model also is constructed without using any numerical data; it is based entirely on microeconomic theory.

On the other hand, suppose the policy problem is how to achieve a primary objective of reducing home heating oil consumption by 5 percent within a definite time period. We know from the simplest model that one way to do so is to raise the price, but by how much?3 This question is easy to ask but difficult to answer. To resolve it, a model that makes specific numerical predictions is required. Building this type of model requires empirical skills beyond the scope of this text; they are best acquired in courses on quantitative methods such as statistics and econometrics.4 However, careful use of the empirical skills requires knowledge of the microtheoretic modeling skills that we do cover. Furthermore, models constructed from theory can often be applied directly to available empirical evidence (from past studies), as examples later on in the text will illustrate.
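
As a sketch of what such a numerical model involves, suppose (purely for illustration) that demand for heating oil is characterized by a constant price elasticity. The elasticity value below is a hypothetical assumption, not an estimate from the text; the point is only to show how a numerical answer would be produced.

```python
# How large a price increase would cut heating oil consumption by 5 percent?
# Assumes a constant-elasticity demand curve, Q = A * P**elasticity.
elasticity = -0.25        # hypothetical price elasticity of demand for heating oil
target_reduction = 0.05   # the desired 5 percent drop in consumption

# With Q1/Q0 = (P1/P0)**elasticity, solve for the required price ratio P1/P0.
price_ratio = (1 - target_reduction) ** (1 / elasticity)
print(f"required price increase: about {100 * (price_ratio - 1):.0f} percent")  # about 23 percent
```

The policy answer turns entirely on the elasticity estimate, which is why the empirical methods taught in statistics and econometrics courses matter so much here.
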
Each of the above two examples assumes that a policy objective is to reduce the consumption of heating oil. But suppose one wanted to consider whether that should be an objective at all. That is, suppose we asked a more basic question: Does the nation have an appropriate amount of heating oil relative to all other goods, or by how much should the price be changed (possibly increased or decreased) to remedy the situation?5 Even if the only criterion is to choose a quantity that is efficient, the model is required to explain two phenomena: the price to charge that would lead to each possible quantity of heating oil and whether that quantity is efficient or not. Accurate predictions of the price-quantity combinations would be of little help if that information could not be linked to the efficiency consequences. It would be like needing the plastic model of the airplane to be of the proper scale and to fly as well.

One of the themes we shall emphasize throughout is the importance of model specification: the choice of a particular set of abstractions from reality used to construct the model. These building blocks are the model assumptions. For the plastic model of an airplane, the assumptions consist of the physical components included in the unassembled kit and the assembly instructions, as well as the way the model builder interprets or modifies them. In microeconomic policy analysis this set typically includes some representation of the policy objectives, the alternative policies under consideration, the motivations or objectives of the economic agents (people and organizations) affected by the policies, and the constraints on the agents’ resource allocation decisions.

An important factor in model specification is the use of what has been learned previously about the phenomenon being modeled. This explains why many (but rarely all) of the specific assumptions used in policy analysis are a part of conventional or neoclassical microeconomic theory. Models based on that theory have been very successful in predicting the direction of allocative changes made by economic agents in response to a wide variety of economic stimuli. Furthermore, they do not require inordinate amounts of information to predict the changes; the models are parsimonious. Therefore, a reasonable strategy for microeconomic policy analysis is to rely generally on conventional theory as a starting point and adapt it to account for the circumstances specific to each problem.

On the other hand, conventional theory is not so powerful that all of its implications deserve to be accepted uncritically. Indeed, the myriad roles of public policy in an economy such as those described in the previous chapter could not be scrutinized reasonably by fully conventional models. To get a sense of the limits of any theory, let us look at an example suggested by Milton Friedman.6 He notes that an expert billiard player shoots as if he or she has an expert knowledge of physics. Therefore, by using calculations based on the laws of physics as the model, one can predict accurately the shots of the expert. This illustrates Friedman’s proposition that a theory should be judged by the empirical validity of its predictions, not of its assumptions. As long as the only purpose of this theory is to predict how an expert at billiards will direct the ball (or how a novice will fail to direct it), it does not matter that the assumptions are clearly inaccurate.

However, theories are generally used to predict or explain a variety of phenomena. For example, suppose one proposed, based on the above theory, to evaluate applicants for jobs as physicists by their billiard scores (i.e., high billiard scores can be achieved only by individuals with an expert knowledge of physics). This method of predicting job success is not likely to do very well.7 In other words, the substantial inaccuracy of the assumptions severely limits the variety of phenomena that can be successfully explained or predicted by the theory.

Few analysts think that the assumptions used in conventional microeconomic theory, which attribute a high degree of rationality to each economic agent, are highly accurate themselves. Sometimes an “as if” logic is used to justify the assumptions. For example, firms do not really know how to maximize profits (their assumed objective), but those that survive in competitive markets must behave as if they maximize profits.8 Since the directional predictions made with the theory have been borne out over a wide range of phenomena, the assumptions seem quite acceptable for this purpose (at least until a better set of assumptions is developed).

However, other uses of the theory may require more “faith” in the assumptions themselves. For example, let us refer to the evaluative concept of efficiency. In order to have an efficient allocation, consumers must make the most rational choices available to them. (We review this shortly.) Conventional theory assumes that people make choices in this manner. But to interpret an allocation resulting from consumer choices as efficient, one must “believe” that the assumption is accurate. However, consider the following analogy: Assume all billiard players are experts at the game, and accept the assumption because using it accurately predicts the direction in which players shoot. Then one is forced to interpret the poor shots of novices as misses on purpose. It may be that, for many types of choices, consumers rarely “miss.” But surely there are some decisions that are harder to make than others (e.g., the purchase of complex insurance contracts, legal services, or used cars), and one should recognize that inefficiency can occur as a consequence of poor choice. Should we really assume, for example, that a consumer who maintains a savings account paying 5 percent interest while simultaneously carrying credit card debt at 18 percent is choosing wisely?
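
The cost of that particular “miss” is easy to compute. A minimal sketch, using the interest rates from the example and a hypothetical balance:

```python
# A consumer holds savings at 5 percent while carrying credit card debt at 18 percent.
# The balance is a hypothetical assumption; the rates are those in the example above.
balance = 1000.0     # amount held in savings and simultaneously owed on the card ($)
savings_rate = 0.05
card_rate = 0.18

# Using the savings to pay down the debt would net the rate difference each year.
annual_loss = balance * (card_rate - savings_rate)
print(f"forgone by not paying down the debt: ${annual_loss:.2f} per year")  # $130.00
```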

It is not enough to develop a sense of the strengths and weaknesses of any particular assumption. A fundamental analytic skill is to be able to identify plausible alternative specifications relevant to a particular policy analysis. The point is to understand how heavily a policy conclusion depends on specific assumptions. For example, the same conclusion may be reached over a wide range of reasonable specifications, in which case confidence in the policy conclusion is enhanced. To build skill in model specification, each of the later chapters contains examples of alternative specifications relevant to the particular policy context. In addition to the specific lessons from each example, there are two general specification lessons to be learned from studying the range of examples. First, policy conclusions are often quite sensitive to variations in the way the policy itself is modeled. Therefore, the analyst must take care to understand the details of any specific proposal before deciding how to model it (or attempting to evaluate another’s model of it). Second, the reexamination of assumptions that are standard and appropriate in many contexts often becomes the central focus in a particular policy context. Indeed it is their inappropriateness in particular contexts that, in the aggregate, helps us to understand the large and varied roles of public policy in an economy such as those described in the previous chapter.

Let us mention one final general modeling issue: the form that the model takes. Examples were given earlier of different forms that models might take: for example, the plastic model of an airplane and the mathematical model of gravity’s effects. We will sometimes construct economic models from verbal descriptions, sometimes by geometric representation, and sometimes by mathematical equations. Models may appear in the form of short stories or other abstractions. What should determine the form that the model builder chooses? In the different forms of economic models mentioned, the essence of the underlying behavior can be the same. What purpose is served by presenting it in different ways?

We wish to note this distinction: Modeling is a way for the model builder to learn, but it is also a way to communicate with (or to teach) others. The main point of doing policy analysis is to learn: The analyst does not know the conclusion at the start and seeks to come to one by a logical procedure that can be subjected to evaluation by professional standards.
Yet the form of a model used for learning is not necessarily appropriate as a form for communication. The latter depends upon the audience. If policy analysis is to influence policy, it is particularly important that it be communicated effectively. As a general rule, policy analysis is done in a more technical format than that used to communicate it to decision makers. Analysts communicate their work to each other quite fully and efficiently through the presentation of technical models (such as those in professional journals). However, most public officials, administrators, and politicians with interest in a specific analysis prefer it to be presented in a concise, clear, jargon-free form.9

Most of the material presented in this book goes somewhat in the reverse direction; we expand and build upon technical concepts in order to see how microeconomic theory is used in policy analysis. By way of gentle introduction, the first model that we present is a story. The story is about beginning students of public policy in their first class. It will introduce some very basic economic concepts—the demand for and the supply of an economic good—and will emphasize the close relation between these concepts and the benefit-cost reasoning that is common in policy analysis. It will make you think substantively about the concept of value in economics. It will make you think about models: how to use them and judge them.

1 According to one expert: “It used to be thought that insect flight could be understood on the basis of fixed-wing aerodynamics, when in fact the wings of many insects, including bumblebees, operate more on the principle of helicopter aerodynamics.” See Bernd Heinrich, Bumblebee Economics (Cambridge, Mass.: Harvard University Press, 1979), p. 39. Even this more refined model is imperfect: “when the wings flap, fluctuating or unsteady flow pattern must occur . . . but we are only beginning to understand the nature of the problem and how animals make use of such unsteady flow.” See Torkel Weis-Fogh, “Energetics and Aerodynamics of Flapping Flight: A Synthesis,” in R. Rainey, ed., Insect Flight (Oxford: Blackwell Scientific Publications, 1976), pp. 48–72.

2 Inferior in this case means that less conservation will be achieved for a given amount of tax relief. The conclusion is qualified for several important reasons explained in Chapter 4. For example, it is based on a model that does not consider the informational problems of identifying the tax burden on each family.

3 There are policy alternatives for achieving this objective other than simply raising the price. A fuller discussion of rationing methods is contained in Chapter 14, including an application to fuel rationing.

4 See, for example, Robert Pindyck and Daniel Rubinfeld, Econometric Models and Economic Forecasts (Boston: Irwin/McGraw-Hill Book Company, 1998).

5 To keep this example simple, we continue to assume that the only policy instrument is to determine the price. A policy instrument is a particular method that government can use to influence behavior, in this case the quantity of heating oil made available to consumers.

6 See M. Friedman, “The Methodology of Positive Economics,” in Essays in Positive Economics (Chicago: University of Chicago Press, 1953), pp. 3–46.

7 The problem we are referring to is that those hired are not likely to perform well on the job. There is another problem: Some perfectly good physicists will be found unqualified for the job. However, the latter problem is not caused by the model assumptions. The model assumes that all experts at billiards are expert physicists and not that all expert physicists are experts at billiards.

8 Firm behavior is discussed in Chapter 10.

9 The form of the analysis may be tailored for a specific user. An extreme example is offered by Richard Neustadt, who was asked by President Kennedy to analyze the decision-making process leading to a controversial decision in 1962 to cancel the Skybolt missile. The decision had unexpected and embarrassing foreign relations consequences, and the President hoped Neustadt could draw some lessons to improve future policy-making. Neustadt felt that to understand the lessons from his analysis, Kennedy would have to read a lengthy document (which would have violated Washington’s KISS rule: Keep it short and simple). But presidential schedules generally do not allow time for reading lengthy documents. This president was known to be a fan of Ian Fleming novels, and Neustadt therefore put his lengthy analysis in a format designed to appeal to such a fan. He sent the report to the President on November 15, 1963, and Kennedy finished it on November 17, 1963, suggesting that Neustadt’s strategy was successful. This historical episode is discussed in Richard E. Neustadt, Report to JFK: The Skybolt Crisis in Perspective (Ithaca, N.Y.: Cornell University Press, 1999).

Demand, Supply, and Benefit-Cost Reasoning

Monday Morning, 6:45 A.M.

Barbara Blackstone was sleeping peacefully when the irritating sounds of normally sweet Mozart traveled quickly from her bedside radio to her ears and suddenly to her brain. Her eyes opened. Her hand snaked out from under the covers, desperately stabbing at the radio to turn it off before it truly woke her up. Too late. Her eyes rested, and the radio was silent, but she had lost, as she always did, in the struggle to preserve sleep against the onslaught of a new day. Her eyes opened again. This really was a new day. It was to be her first day of classes at the Graduate School of Public Policy.

For the past 2 years, since graduating from the University of Illinois and receiving her teaching certification, she had worked with enthusiasm and dedication in one of Chicago’s tougher high schools (where the students fondly referred to her as Ranger Blackstone, a jokey reference to Chicago’s infamous gang). She knew that, for at least some of her students, she was making a difference. But she also knew that, for her, it was not enough. The whole system, with its layers of federal, state, and local rules, regulations, funding and curriculum restrictions, and the power struggles between the administrative bureaucracy and union leadership, was an unbelievable, impenetrable morass that functioned every day to deaden and weed out any thoughts of teacher or student initiatives. Surely there must be a better way. How could our political leaders, our policy makers, our school officials do this to us? Barbara was determined to improve the system. But she knew that she needed more equipment for this assault. She needed a better understanding of the economics of school finance, the politics of making change happen, the dynamics of organizations, and the numbers that were so often thrown in her face as an excuse for turning aside the suggestions she offered.

Then one night many months ago, dining at a local trattoria (she recalled the calzone and excellent draft beer), her friend Susan described excitedly her environmentalist brother’s new job. Nate had done graduate work in public policy and was hired by the Environmental Protection Agency in Washington, D.C., to work on implementing something called a tradable permit system for SO2 emissions, required by the 1990 Clean Air Act Amendments. Neither Susan nor Barbara really understood how trading pollution rights could be good for the environment, but they knew that Nate was enthusiastic about it. Nate said that without an economically efficient design, there was no way that pro-environmental forces could have overcome the powerful industry lobbyists to pass legislation mandating an annual emission reduction of 10 million tons. Economics, politics, public policy—suddenly, Barbara realized that Nate’s skills seemed to be exactly what she was looking for.

Almost a year later, following a lengthy application process and the agony of waiting to hear, she had been accepted and was ready to start the program. Actually, she would not be at all ready to start if she continued to lie there thinking. She got out of bed.

Monday Morning, 9:00 A.M.

Professor Weiss was standing in front of the class. He seemed to have friendly eyes. Well, probably friendly. Gentle, that was it. He was standing there patiently, waiting for the new students to amble in and settle down. After the expected roll call, name mispronunciations and corrections, and barrage of handouts, he thought for a moment and began: “In the next hour, I shall show you about 50 percent of what we’ll be studying during the entire year. Of course, this will be a bit like showing someone a half-built model of a car. I don’t really want to try and convince you that this particular car will be great. But I hope to get you interested in car-building.”

What is he talking about, Barbara wondered. Am I in an auto mechanics class by mistake? But he turned to the whiteboard and wrote some of the economic buzzwords that had appeared in the school’s literature—market demand, market supply, efficiency, and the benefit-cost principle—and then went on.

“In the early days of President Clinton’s administration, a decision was reached to rescind an executive order of former President Bush and reinstate a particular methodology known as ‘contingent valuation’, used for valuing environmental damage. The need for some methodology arises because in many cases, both inside and outside of government, the question comes up: how much are specific environmental resources worth? One study based on the contingent valuation methodology estimated that the oil spilled from the Valdez on the Alaskan coast created $3 billion in damages. This study was used by the government in its lawsuit against Exxon, a lawsuit that was later settled out of court with Exxon’s agreement to pay $1 billion. Upon what basis could figures of these magnitudes—or any other—be derived?”

Professor Weiss did not seem to expect an answer, for he continued with barely a pause. “I will not show you the specifics of contingent valuation methodology today. But I will talk about economic valuation in a more general way. I intend to plant the seeds of the connection between the money people are willing to pay for something and the broader efficiency principle for resource allocation that underlies much economic evaluation. I also wish to suggest that important observable information about people’s willingness to pay can often be found in the demand and supply curves that characterize economic activity in a market setting. That is, the same type of debate about value that was argued and resolved for the Valdez oil spill damages occurs as a matter of course, but through a quite different and impersonal process, for many other resources that are traded in markets. Good policy analysts learn how to take advantage of this information about value.”

Professor Weiss then wrote on one side of the large whiteboard:

Lessons
1. Individual valuations connect to efficiency
2. Observable demand and supply curves reveal valuations

Reggie Thompson, a classmate on Barbara’s left who had introduced himself just before class began, passed her a puzzled look. He was definitely interested in the oil spill example. But the professor seemed to be moving away from it. Where was he going? When would he get back to it?

“In analytic work, we use the concepts of demand and supply curves to characterize a market for the buying and selling of a good or service. These curves contain information relevant to evaluating efficiency. I’m going to draw some in a moment, define a market equilibrium, and evaluate its efficiency level. I’ll also introduce the principle of benefit-cost analysis, and show how it can be used to measure changes in efficiency. These basic steps will be repeated a hundred different times throughout this course. Of course the activities we study will quickly become those that raise important public policy questions. In this first case we study an ordinary private good, but we shall also study public services such as schooling . . .”

Barbara’s eyes widened. Schooling? She was fully alert.

“. . . and regulated activities such as electricity and environmental protection. Furthermore, the analytic methodology continually expands to take account of new wrinkles that are absent from this first case. Indeed, a totally firm grasp of economic concepts is just the beginning of what I hope you will learn. You must go well beyond this, to become proficient in the art and science of microeconomic modeling for policy analysis.”

Professor Weiss turned to the whiteboard and began to draw (see Figure 2-1).


Figure 2-1. Market equilibrium and efficiency.

“Let us begin with the concept of demand. The demand curve shows the quantity of a particular good, measured on the horizontal axis, that consumers will wish to buy at each possible price per unit, shown on the vertical axis. It is drawn to be downward sloping: at lower prices, consumers will want more of it.”

Here we go, thought Barbara. Would I fly back to Chicago more if airfares were lower? I suppose I would, my schedule permitting. Even if I don’t, I guess other people would. Okay, so far this makes sense.

“Suppose that at any particular price each consumer buys or demands the quantity that best fits his or her circumstances. Then the demand curve reveals something important about how consumers value this good in dollar terms. To illustrate this, imagine that the demand curve only shows the demands of one consumer, Janet. It shows that at a price of $10 per unit, Janet only buys one unit of the good.

“This implies that the dollar value of a second unit to Janet, or her maximum willingness to pay for it, must be less than $10. If it were more than $10, say $12, then Janet would not be choosing the quantity that she feels best fits her circumstances. That is, if Janet were willing if necessary to forego $12—which could be used to purchase other goods—in order to get a second unit of this good, then she surely would forego only $10 to get the second unit. Since she does not do this, she must only value the second unit at something less than $10.”

Barbara’s hand shot up.

“Yes?” said Professor Weiss.


“Isn’t it possible that Janet values the second unit at $12 or even more but simply doesn’t have the money, perhaps because she is poor?”

“Excellent question!” replied Professor Weiss. “What is your name again?”

“Barbara Blackstone.”

“Got it. The answer is no, because of the special way that economic value is defined. By willingness to pay, we mean from whatever resources Janet happens to have . . . how much she really is willing to give up of other things, given her actual budget or wealth level, in order to get a second unit. A person who is a clone of Janet except for being much richer would likely have a higher willingness to pay for the second unit. For all units, for that matter. But this demand curve, Janet’s demand curve, tells us that Janet feels that there are more important things to spend $10 on than a second unit of this good.”

Professor Weiss paused and looked around the classroom. “What does this imply?” he continued. “It implies that there may be nothing fair about the demands we observe. Janet is making the wisest choices possible, given the total dollars she actually has, but we do not have to like or approve of her wealth level. That is something we shall concern ourselves with throughout the course. But our concern will be expressed when we explicitly consider equity or fairness issues and not when we are evaluating efficiency.”

What does he mean by “evaluating efficiency,” Barbara wondered? But Professor Weiss was racing on.

“The demand curve also shows that Janet would buy eleven units at a price of $5. This implies, by the same logic as above, that the value of the eleventh unit must be at least $5 while the value to her of the twelfth unit must be less than $5. Extending this logic, we see that the value to Janet of each possible unit (the first, the second, . . .) is shown by the height of the demand curve above that unit. In economic terms, we say that the height of the demand curve shows the marginal value or marginal benefit of each unit. An individual’s demand curve is equivalent to a marginal benefit schedule for that individual. The market demand curve may be interpreted similarly. That is, the market demand curve shows the quantity bought by all consumers at each possible price, and its height also reveals the marginal benefit associated with each unit, although not which particular consumer receives it.

“All we are doing so far is looking at the same line on the graph in two different ways. The ‘demand’ interpretation is that the graph shows the quantity that would be bought, measured on the horizontal axis, at each possible price; the ‘marginal benefit’ interpretation is that the graph shows the marginal benefit, measured on the vertical axis, of each possible unit of the good. Note that the downward slope of the demand curve implies diminishing marginal value: the first unit bought has the most value, and each successive unit has a value lower than the one preceding.”

It all seems so reasonable, thought Barbara. Is he getting set to spring a trap?
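
Professor Weiss’s “two readings of the same line” can be made concrete. A minimal sketch, assuming a linear demand curve fitted through the two points in the story (one unit demanded at $10, eleven units at $5); the functional form is my inference from those points, not a formula given in the text.

```python
# One line, two readings: quantity demanded at a price, and the marginal
# benefit (the height of the curve) above a given unit.
# Linear demand through (P=10, Q=1) and (P=5, Q=11): Q = 21 - 2P.
def quantity_demanded(price):
    """Demand reading: units bought at a given price (horizontal axis)."""
    return 21 - 2 * price

def marginal_benefit(unit):
    """Marginal benefit reading: height of the demand curve above a unit (vertical axis)."""
    return 10.5 - 0.5 * unit  # the inverse of Q = 21 - 2P

print(quantity_demanded(10))  # 1:  Janet buys one unit at $10
print(quantity_demanded(5))   # 11: eleven units are bought at $5
print(marginal_benefit(2))    # 9.5: the second unit is valued at less than $10
print(marginal_benefit(11))   # 5.0: the eleventh unit is valued at about $5
```
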
“Next we introduce the supply curve: the quantity of a particular good that producers will supply at each possible price. We can show this on the same diagram,” and he proceeded to add another line to his drawing. “Supply curves are often drawn as upward sloping: at higher prices, suppliers will increase their production levels and more quantity will be supplied to the market. This would be the case for the production of corn, for example. At low prices, only the most fertile land requiring the least tending would be used to produce corn. Less fertile land would be unprofitable to farm: the cost of tending—irrigation, fertilizer, plowing, that kind of stuff—would be greater than the revenue from selling the crop produced. As the price of corn rises, farmers find it worthwhile—that is, profitable—to bring in the next most fertile land requiring slightly more tending, thus increasing the overall supply of corn.”

Professor Weiss looked over the class. “I realize that this is a lot for those of you new to economics, and probably only somewhat familiar to those of you who have majored in it. But we are getting there,” and he continued.

“The height of the supply curve shows the marginal cost of supplying each unit. That is, the height above each unit—the first, the second, . . .—shows the cost to the producer of the resources used to make that unit. Assuming that producers act in their own best interests, we can think of this cost as the value of the resources in their best alternative use. Economists call this value the marginal opportunity cost, or simply marginal cost. Producers will supply a unit to this market whenever the extra revenue it brings there exceeds its marginal cost, defined as the maximum payment the resources could command if used in some other market.”

I think I get the idea, thought Barbara. If I owned land—not with the balance in my savings account, that’s for sure—and corn prices were low, I’d rather lease the land for grazing than grow corn. But if corn prices were high enough, I’d do better using the land for corn. She began to wonder how this might be relevant to schools. But Professor Weiss was starting to draw again.

“Suppose at a price of $3 producers supply only one unit of the good. This implies that the marginal cost of the second unit must be greater than $3. At a price of $5, suppose eleven units are supplied. This means that units two to eleven have marginal costs between $3 and $5. Note that an upward sloping supply curve corresponds to increasing marginal costs for successive units.

“At this point, we introduce the concept of market equilibrium: when the quantity demanded by consumers is just equal to the quantity supplied by producers. This occurs at the intersection of the demand and supply curves—where the price is $5 and the quantity is eleven units. Suppose the market price is something other than $5, say $10. At this price consumers only demand one unit, but producers bring a large number of units to market.10 The suppliers do not realize the profits they expect because consumers do not buy the goods. Since producers consider it better to get more revenue rather than less for the units they have produced, they will agree to sell the units for less than $10. As price falls, over time demand goes up while supply gets reduced. Price in the market will continue to change until the quantity brought is the quantity bought. Supply and demand are only equal where the two curves intersect: at a price of $5 with a quantity of eleven.”

10 Figure 2-1 does not extend the supply curve far enough to show the quantity supplied at the $10 level. Using the formula for the supply curve in the next footnote, we find this quantity to be thirty-six.
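
The equilibrium can be checked numerically. A minimal sketch, assuming linear demand and supply curves fitted to the quantities in the story; these functional forms are my inference (though the same supply fit reproduces the thirty-six units mentioned in the footnote), not necessarily the formulas the book itself uses.

```python
# Linear curves fitted to the story's numbers:
# demand through (P=10, Q=1) and (P=5, Q=11): Qd = 21 - 2P
# supply through (P=3, Q=1) and (P=5, Q=11):  Qs = 5P - 14
def quantity_demanded(price):
    return 21 - 2 * price

def quantity_supplied(price):
    return 5 * price - 14

# Equilibrium: 21 - 2P = 5P - 14, so 7P = 35 and P = 5, Q = 11.
p_star = 35 / 7
print(p_star, quantity_demanded(p_star), quantity_supplied(p_star))  # 5.0 11.0 11.0
print(quantity_supplied(10))  # 36 units supplied at $10, as in the footnote
```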

“Finally, let us relate this market equilibrium to the concept of efficiency. Informally, think of efficiency as using resources to maximize the value to the economy’s members of the goods and services produced. In this simple example, the market equilibrium is also an efficient allocation. The goods that are produced and sold are all those, and only those, for which the marginal benefit to consumers—the height of the demand curve—exceeds the marginal cost—the height of the supply curve. This maximizes value and is therefore efficient. Can anyone explain why?”

Professor Weiss looked around the class hopefully. So did most of the students. Just when the professor seemed about ready to give up, Reggie Thompson’s hand managed to struggle a few inches above his head—it started up quickly, then retreated, then slowly rose to a level just marginally beyond ear-scratching height—and before the hand could retreat again Professor Weiss quickly nodded at him and said “Yes?”

“Reggie Thompson. Well, ten units can’t be the maximum, although I don’t know what the maximum value is. Above the eleventh unit, the demand curve is higher than the supply curve. That means its benefit is greater than its cost, or having it increases total value. Whatever the total value of ten units is, the total value of eleven is higher.” Reggie stopped, uncertain if this was what the professor was looking for.

“Your observation is completely correct,” Professor Weiss responded. “By the same logic, the total value of ten is greater than the total value of nine, and the total value of nine is greater than that of eight—in other words, total value is increasing at least up through the eleventh unit. By Reggie’s logic, could we further increase value by producing the twelfth unit?”

Barbara was staring at the diagram on the whiteboard when, all of a sudden and somewhat involuntarily, a sharp “No!” erupted from her mouth.

“Why not, Barbara?” asked Professor Weiss.

“Because the cost of producing the twelfth unit—the height of the supply curve over it—is more than the benefit as shown by the height of the demand curve. Producing the twelfth unit would reduce total value. You would reduce total value by even more if you produced still higher quantities. You can’t get total value any higher than it is at eleven!”

Professor Weiss nodded approvingly. “Yes, the only allocation where total value cannot be further increased by a change is at eleven units. There the marginal benefit just equals the marginal cost, which implies that value is at its maximum and therefore the allocation is an efficient one. Note that the allocation that has marginal benefit equal to marginal cost also identifies the intersection point of the demand and supply curves.”
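
Reggie’s and Barbara’s reasoning can be verified unit by unit. A minimal sketch, using the same assumed linear curves as in the earlier sketches: total net value peaks at eleven units (with the curves fitted this way, the eleventh unit exactly breaks even) and falls thereafter.

```python
# Net value of unit q: marginal benefit minus marginal cost, using the
# assumed linear curves MB(q) = 10.5 - 0.5q and MC(q) = (q + 14) / 5.
def net_value(q):
    return (10.5 - 0.5 * q) - (q + 14) / 5

for quantity in (10, 11, 12):
    total = sum(net_value(q) for q in range(1, quantity + 1))
    print(f"total net value of {quantity} units: {total:.1f}")
# 10 units: 38.5
# 11 units: 38.5 (the eleventh unit just breaks even: MB = MC = $5)
# 12 units: 37.8 (the twelfth unit's cost exceeds its benefit)
```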

32

Chapter Two

this introductory discussion with showing someone a half-built model of a car? That I do not show it in order to convince anyone that this particular vision is desirable, but rather to get you interested in economic modeling? I meant that. As the course goes on, we will talk about markets a great deal. In some particular circumstances, I may well argue that a market or marketlike allocation process is better than some alternative. In other particular circumstances, I will argue that markets do not serve us well at all compared to government-guided allocation. In either case, I will tell you up front what I am going to argue and why. “And no, I am not saying that markets are always efficient,” he continued. “For example, and this is just my personal opinion with which any of you should feel free to disagree, I do not think the national labor market with an involuntary unemployment rate of 10 percent is very efficient—and this contributes to the poverty rates that distress many of us. The particular market in this model is efficient, but I have not claimed that the model is a good representation of any real market. “I am glad that you are willing to entertain the idea that a model can represent a real situation. But every good analyst”—Professor Weiss paused here for several seconds, looking around the room to be sure that every student was responding to this implicit challenge with full attention—“has the responsibility for critically assessing the appropriateness of a particular model to a particular real situation. This is a matter of evidence and of judgment. Note that I have not offered you any hard evidence at all to suggest that this model might apply to some real situation. My bet is that many of you, unlike Melanie, have been accepting this model without thinking enough about why.” Now Barbara was truly puzzled. Rather than defending himself against Melanie’s criticisms, he seemed to be agreeing with her. Am I guilty of accepting this model uncritically? Each part seems to have passed my own “tests” based on common sense; I did think about airfares and use of land for corn growing. And I want to trust this professor. Doesn’t he want us to trust him? What is the point of this lesson, then, and why should I struggle with all the model’s intricacies if it doesn’t apply to anything real? Professor Weiss continued as if he had been reading Barbara’s mind. “In fact, I think there are many good reasons why you should understand this particular model backward and forward. For one, it is the model most commonly used in microeconomic policy analysis —even if you think it is inappropriate for a specific situation, you will have to persuade others of what is more appropriate. For another, different pieces of this model will often be useful, even if the model as a whole is not. Finally, I think there are situations in which the model as a whole is, in fact, better than other alternatives that analysts can reasonably contemplate. “Getting each of you to the point where you have developed the skills necessary to make these assessments yourself will take some time. Therefore, I hope I can earn and maintain your trust in a limited way. I do not want you to trust me to tell you when a specific model applies; I want you to develop your independent critical abilities. I do hope that you will trust me to do my best to help you achieve mastery of microeconomic policy analysis.” With this, he stopped and looked directly at Melanie. She looked satisfied, at least for now. 
He looked around the rest of the room. All eyes were upon him. He stepped back toward the diagram. “There is one last conceptual point that I wish to add to the model we have developed so far. A very useful way to think about this or any other allocative process and its relationship to efficiency is in terms of the benefit-cost principle. Reggie and Barbara have already used it in our previous discussion. An action that results in benefits being greater than costs increases efficiency, and one in which costs are greater than benefits reduces efficiency. In our example, the action of supplying the first unit to the market increases efficiency: to the nearest dollar, the benefits of $10 exceed the costs of $3. The actions of supplying two to eleven units also increase efficiency, because each has benefits greater than costs. But the action of supplying the twelfth unit would have costs greater than benefits, and would reduce efficiency.

“We can measure the degree of efficiency. The shaded area in our figure shows the efficiency gain from using our resources to produce eleven units of this good compared to producing none of it at all. The benefits of the eleven units are represented by the entire area under the demand curve for those units, and the costs by the corresponding area under the supply curve. Benefits minus costs, or net benefits, are therefore the shaded area. Using some geometry, that area can be shown to equal $42.35.”11

Professor Weiss went on to show this, and then announced that this was quite enough for one day and ended the class. Barbara and Reggie looked at each other, tired but smiling.

“That was some class, wasn’t it?” said Reggie.

“You can say that again,” Barbara replied. “I think this course is going to be very interesting. I just hope I can keep up with it. At times it seemed to be going too fast for me. I’ve never seen anything like this before.”

Reggie grinned at her. “It sounded to me like you’re going to do just fine.”

Professor Weiss, who overheard the thrust of this conversation, smiled to himself as he left the room. They both will do just fine, he thought. This whole class will do just fine. I am lucky to have such good students. And with those private thoughts, he disappeared from view.

Summary

The most common analytic method for predicting the consequences of proposed policies is modeling. A model is an abstraction intended to convey the essence of some particular aspect of the real world; it is inherently unreal, and its usefulness depends on the extent to which it increases knowledge or understanding. The accuracy and breadth of a model’s predictions depend on its specification: the choice of a particular set of abstractions or assumptions used to construct the model. A fundamental skill of policy analysis is to be able to identify the alternative plausible specifications relevant to a particular policy analysis and to understand how analytic conclusions depend upon them.

11 Assuming that the demand and supply curves are linear, the two points given on each determine their equations and thus their intercepts on the price axis. The demand curve has equation Q = 21 − 2P and intercept P = $10.50. The supply curve has equation Q = 5P − 14 and intercept P = $2.80. The line at P = $5.00 divides the shaded area into two right triangles, each with a “quantity” height of eleven and “price” bases of $5.50 and $2.20 for the upper and lower triangles, respectively. Each triangle’s area is one-half its base times its height, so the total is ½(11)($5.50) + ½(11)($2.20) = $30.25 + $12.10 = $42.35.
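As a quick arithmetic check of footnote 11, the following minimal sketch recomputes the net-benefit area in Python. The demand and supply equations are those given in the footnote; everything else is illustrative scaffolding:

```python
# Verify the $42.35 net-benefit area using the linear curves from footnote 11.
# Demand: Q = 21 - 2P  =>  inverse demand  P = (21 - Q) / 2
# Supply: Q = 5P - 14  =>  inverse supply  P = (Q + 14) / 5

def inverse_demand(q):
    return (21 - q) / 2          # marginal benefit of the qth unit

def inverse_supply(q):
    return (q + 14) / 5          # marginal cost of the qth unit

# Equilibrium: 21 - 2P = 5P - 14  =>  P = 5, Q = 11
p_eq, q_eq = 5.0, 11.0

# Net benefits = area between the curves from 0 to 11 (two right triangles):
upper = 0.5 * q_eq * (inverse_demand(0) - p_eq)   # 0.5 * 11 * 5.50 = 30.25
lower = 0.5 * q_eq * (p_eq - inverse_supply(0))   # 0.5 * 11 * 2.20 = 12.10
print(upper + lower)   # 42.35

# (Footnote 10's check also follows: at P = $10, quantity supplied is 5*10 - 14 = 36.)
```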


The chapter provides a story as a model. In the story, some fundamental concepts used in microeconomic models are introduced. One of these is the demand curve, a relation showing the demand or quantity of a good that a consumer (or some group of consumers) will wish to buy at each possible selling price for that good. Normally, consumers will demand greater quantities at lower prices. Another fundamental concept is that of the supply curve, a relation showing the supply or quantity of a good that a producer (or group of producers) will offer for sale at each possible selling price for that good. Typically, producers will supply greater quantities at higher prices. Both the demand curve and the supply curve are introduced as observable consequences of individuals trying to make the best choices, in their own judgments, from those available to them. A third fundamental concept is that of market equilibrium: when the quantity demanded by consumers is just equal to the quantity supplied by producers.

Also within the story, these three fundamental concepts are related to concepts of economic value in a highly specific way. The demand curve can be thought of as a marginal benefit schedule, interpreting its height as the maximum amount of money that someone would pay to receive each successive unit of the good. Similarly, the supply curve can be thought of as a marginal cost schedule, its height showing the minimum amount of money necessary to persuade a supplier to bring each successive unit to market for sale. The benefits and costs of bringing each unit to market can then be compared, and thus the story introduces the idea of benefit-cost analysis. Furthermore, the story informally introduces the idea of economic efficiency as the use of an economy’s resources to maximize the value to its members of the goods and services produced. In the story, efficiency varies with the quantity of the good that is produced (by suppliers using society’s scarce resources) and then consumed. The degree of efficiency increases if allocative actions are taken that have benefits greater than costs, and decreases if actions have costs greater than benefits. The market equilibrium is shown to be the particular allocation of resources that maximizes benefits minus costs and is thus an economically efficient allocation.

Two important substantive questions of economics are raised by the students in the story. One is whether there is a conflict between seeking efficiency and seeking equity or fairness. This question arises when a student recognizes that a consumer’s wealth influences his or her demand curve. This implies that wealth also affects the measured benefit of allocating a unit of output to one consumer as opposed to a different consumer, and thus wealth affects judgments about economic efficiency as well. The other substantive question is about the efficiency of allocating resources through a market system. The model indicates that the market outcome is efficient. But one student objects, saying that market outcomes involve high unemployment and manipulation of consumers. The story does not attempt to resolve fully either of these important substantive concerns, although they will be addressed later in the book.

Finally, the story offers important insights about economic modeling. One of the students “tests” the plausibility of the downward-sloping demand curve by asking herself whether she would buy more of things she consumes if their prices were lower. In effect, she is testing the generality of the model by asking if it predicts her decisions accurately. She makes a similar “test” of the plausibility of the upward-sloping supply curve. The objecting student mentioned above focuses on how the model evaluates efficiency and is concerned that it is intended to demonstrate the efficiency of real markets in general (which she disputes). The professor is pleased that his students are trying to understand how well the model might represent a real market. He reminds them that the “goodness” of a model can only be judged with respect to some particular purpose to which it is put. His primary purpose in presenting this model, he reiterates, was to get them interested in economic modeling.

Discussion Questions

1. What do you think of the economic concept of value as explained by Professor Weiss? What do you like and dislike about it? What do you think of the concept of efficiency offered? What do you see as its strengths and weaknesses as a public policy goal?

2. Do you think consumers buy the quantity of a good or service that best fits their own circumstances? What factors might make this difficult? If consumers buy things that do not best fit their needs, does this affect the value interpretation of the demand curve?

3. What is the relationship between the actions of a producer in the model and its profits? Can you think of circumstances that might cause producer behavior to be different?

4. How should the students in the class judge the usefulness of the model Professor Weiss has presented? What do you think of the factors used by Barbara to evaluate the model? Should you distinguish the predictive function of market behavior from the evaluative function of determining efficiency consequences?

CHAPTER THREE
UTILITY MAXIMIZATION, EFFICIENCY, AND EQUITY

IN THIS LAST CHAPTER of the introductory section, we first introduce a standard model of individual choice referred to as utility maximization. We do so in the context of an individual who is deciding what items to consume, but must do so within the limits of a budget constraint. This standard model, as well as alternative versions, will be used in later chapters to predict behavior as we increasingly make connections among individual behaviors, policies, and aggregate outcomes. But our primary use of it in this chapter is to introduce, at a rudimentary level, some central ideas about the normative goals of efficiency and equity used in policy analysis.

We introduce the concept of efficiency in a very simple setting where the only economic activity is the exchange of goods among different people. We illustrate how inferences about the efficiency of resource allocation can be drawn from the behavioral predictions of the utility-maximization model. We highlight that efficiency is evaluated with respect to the well-being of all individuals in an economy and that well-being is usually judged by the principle of consumer sovereignty: Each person is the judge of his or her own well-being. We emphasize that there are many different allocations of resources that are efficient and that they vary widely in terms of how well-off specific individuals are with each of them. We distinguish the concept of efficiency known as Pareto-optimality from the concepts of relative efficiency used to compare two (or more) specific resource allocations. The latter concepts can be controversial unless they are used in conjunction with standards for assessing equity or fairness. We introduce several concepts of equitable resource allocation and illustrate their relevance to evaluating the exchanges previously discussed. A supplementary section illustrates how efficiency and equity measures may be integrated in a form known as a social welfare function. In a brief appendix, calculus versions of the model of utility maximization and the conditions that characterize exchange efficiency are explained.

A Model of Individual Resource Allocation

Let us review here the most conventional and general assumptions about individual resource allocation choices. These assumptions form a model of human decision-making referred to as utility maximization. In later chapters, we will consider alternatives to this model, such as a model of bounded rationality (Chapter 7), and show that policy recommendations can depend in critical ways on which of the models is more accurate for the situation being studied. However, the utility-maximization model has proven to be insightful and useful in a wide range of situations, including the policy analyses that we will discuss in the next part of this book. Furthermore, understanding the construction of the most common model is a good way to begin to develop the skill necessary to construct and use less common models. The reader should be forewarned, however, that the utility-maximization model generally suggests that the individual is a highly competent decision-maker, and thus may not be very well suited for situations in which that competency is the key issue (e.g., for complex probabilistic situations, or at times of high emotional stress).

The model of utility maximization can be described in terms of four assumptions. We will introduce these verbally and then show graphical representations of them.

First, each consumer is assumed to have a preference-ordering. This means two things. One is that the consumer can compare any two possible bundles or collections of goods and services and will prefer one to the other or be indifferent. (This rules out responses such as “I don’t know” or “I can’t decide.”) The other is that the consumer is consistent: if bundle A is preferred or equal to bundle B, and B is preferred or equal to C, then A must be preferred or equal to C.

Second, each consumer is nonsatiable. Roughly speaking, this means that a property of the consumer’s ordering is that more goods are preferred to less, other things being equal. The consumer is, of course, the judge of what things are “goods” as opposed to “bads.” If a consumer does not like air pollution or street crime, then the corresponding good is clean air or safe streets. The consumer may consider charity to others as a good, and thus is by no means assumed to be selfish.

But is it not true that the consumer can be sated? Consider rare (but not raw) hamburgers as goods. You may like only one, and I may prefer two, and we both may know people who prefer more, but doesn’t everyone have a limit? The answer is yes, of course: Consumers may commonly have limits for specific goods within any particular time period.1 A precise way to state the assumption of nonsatiation that allows for these limits is as follows: There is always at least one good for which the consumer is not yet sated. Practically speaking, individuals are constrained by their limited budgets from having consumption bundles that contain everything they could possibly want. Even very rich people might prefer more leisure time, an extra summer home, or making more philanthropic donations (if they did not have to give up something they already have). And, of course, most people would prefer to be able to afford more (in terms of quantity or better quality) of most goods they are currently consuming. A convenient generalization is to treat consumers as not sated with any of the specific goods in the alternative consumption bundles under discussion. We often use this version of the nonsatiation assumption (sometimes referred to as strong monotonicity in the professional literature), but we would not if the focus is to be on some good for which satiation is a distinct possibility (e.g., microeconomic policy analysis courses per semester).

The third assumption is that each consumer has strictly convex preferences or, stated informally, prefers diversity in consumption bundles. Suppose we know one consumer is indifferent between two bundles: one with much housing but little entertainment; the other with little housing but much entertainment. The third assumption asserts that the consumer would strictly prefer a third bundle formed by using any proportional combination (weighted average) of the housing and entertainment in the first two. For example, a bundle made up of one-third of the housing and entertainment in the first bundle plus two-thirds of the housing and entertainment in the second would be a proportional combination. The idea is that the consumer would prefer a more “balanced” bundle to either of the extremes. This tries to capture the empirical reality that most people consume a diversity of goods rather than extreme quantities of only one or two items. Since nothing stops them from choosing less balanced bundles, it must be that the less balanced ones are not considered as desirable as the more balanced ones. Like the second assumption, this is simply an empirical generalization thought to be true in most circumstances but for which there are exceptions. (For example, someone might be indifferent to spending a one-week vacation in either of two places but think that half a week in each is worse.)

The fourth assumption is that each consumer makes resource allocation choices in accordance with his or her ordering. This implies that the consumer is both self-interested and informed (in terms of knowing which choice is the best one to make). Together, these four assumptions form the most common model of rational consumer decision-making. The first and fourth assumptions model rationality; the second and third assumptions are generalizations about preferences.

The first three assumptions of this model are often theoretically represented by an ordinal utility function, and the fourth assumption is equivalent to the consumer acting to maximize utility. Imagine that the consumer mentally considers all possible bundles of goods and lines them up in sequence from best to worst. Further imagine that some number (the utility level) is assigned to each of the bundles in accordance with the following rule: More preferred bundles get higher numbers than less preferred bundles, and equally preferred bundles get the same number assigned to them. This is an ordinal ranking because the numbers indicate nothing apart from the consumer’s ordering.2 Mathematically, we can write the utility function as U(X1, X2, . . ., Xn), where there are n goods or services that can be in a bundle, Xi tells us how much of the ith good or service is in a bundle, and the value of the function tells us what utility level has been assigned to any particular bundle consisting of X1, X2, . . ., Xn. Note that since it is only the order or ranking of the bundles that the function reveals, a different function that, say, doubles all of the absolute levels

V(X1, X2, . . ., Xn) = 2U(X1, X2, . . ., Xn)

represents exactly the same ordering. In fact, any transformation of the original function that keeps the bundles in the same order represents the same preferences. The absolute levels of utility thus do not mean anything by themselves. The utility function is a conceptual construct that can neither be measured directly, nor be used to compare utility levels of two different consumers (since the level of the numbers is arbitrary). Nevertheless, it provides the foundation for a theory that has many practical applications.

We can also represent the consumer’s preferences (or utility function) graphically by using indifference curves. Assume that consumption bundles consist only of two goods, meat M and tomatoes T, and therefore that the utility level is a function only of them: U(M, T). Any amount of meat and tomatoes can be represented by a single point in Figure 3-1a. For example, point A represents 5 pounds of M and 4 pounds of T, and point B represents 4 pounds of M and 3 pounds of T. Define an indifference curve as the locus of points representing all consumption bundles that are equally preferred to each other (or equivalently, have the same utility level). In Figure 3-1a, UA shows all consumption bundles that have the same utility level as at point A, and UB shows all bundles with the same utility level as at point B. Point A, which has more of both goods than point B, must (by the nonsatiation assumption) represent a greater utility level (UA > UB). Thus the utility level increases as one moves upward and to the right on the diagram.

One (and only one) indifference curve must pass through every point on the diagram. This follows from the assumption that the consumer has a preference-ordering. Each bundle (or point on the diagram) has some utility level (is on some indifference curve) that allows it to be compared with any other bundle to see if it has a higher, lower, or equal utility level. Thus there are an infinite number of indifference curves, one to represent each possible utility level.

Why is there only one indifference curve through each point? If two different indifference curves went through the same point and thus intersected (Figure 3-1b), it would violate the consistency aspect of the preference-ordering. It would imply that one bundle (the intersection point G) is indifferent to other bundles (points E and F) to which the consumer is not indifferent. To see this contradiction, note that point F has more of both goods than point E, and thus by the nonsatiation assumption UF > UE. If E and G are on the same indifference curve (UE = UG), then consistency requires that UF > UG—but then F and G cannot also be on the same indifference curve (implying UF = UG).

1 Obviously, the limit depends on the time period. The example is for hamburgers per meal; the limit would presumably be much higher for hamburgers per year. In general, economic activities must include a time dimension to be well defined. Since no particular period is required for many of our illustrations, the activities can be thought of simply as per period.

2 The ranking tells us which of two bundles is preferred, but not by how much. The distance between the numbers in an ordinal ranking has no meaning. A cardinal ranking, by contrast, reveals both the order and the distance between bundles (like temperature scales). However, to construct a cardinal ranking of an individual’s preferences would require more knowledge about the consumer than that postulated in the four assumptions discussed. In particular, it would require a measure of the magnitude of the psychological change in utility or pleasure that an individual derives from consuming any specific bundle of goods relative to some comparison bundle. Since we do not know how to do that reliably, it is fortunate (and a notable achievement) that most economic models do not require the information in order to fulfill their purposes.
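To make the ordinal point concrete, the following minimal sketch in Python checks that an order-preserving transformation leaves the ranking of bundles unchanged. The particular function U = M·T and the sample bundles are illustrative assumptions of the sketch, not implications of the four assumptions:

```python
# An ordinal utility function only encodes a ranking of bundles, so any
# order-preserving transformation represents the same preferences.
import math

bundles = [(5, 4), (4, 3), (2, 6), (1, 1)]   # (meat, tomatoes) bundles

def U(m, t):
    return m * t                  # one arbitrary, illustrative utility function

def V(m, t):
    return 2 * U(m, t)            # doubling the levels...

def W(m, t):
    return math.log(U(m, t))      # ...or taking logs preserves the order

for f in (U, V, W):
    # Sorting by utility produces the same ranking under each representation.
    print(sorted(bundles, key=lambda b: f(*b), reverse=True))
# All three lines list the bundles in the same (preference) order.
```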


Figure 3-1. The representation of preferences by indifference curves: (a) The shape and location of indifference curves reflect the assumptions about preference orderings. (b) Indifference curves cannot cross (UG = UF and UG = UE means by consistency UF = UE but by nonsatiation UF > UE).

The indifference curves are negatively sloped (they go downward from left to right). This is a consequence of the nonsatiation assumption. If the consumer begins in Figure 3-1a at A, and some meat is taken away, he or she is worse off. If we were to take away some tomatoes as well, the consumer would be still worse off. To bring the consumer back to UA, we must increase the quantity of tomatoes to compensate for the meat loss. This results in the negative slope.

The marginal rate of substitution of a good M for another good T, represented by MRSM,T, is the maximum number of units of T a consumer is willing to give up in return for getting one more unit of M. This is the number that keeps the consumer just indifferent, by his or her own judgment, between the initial position and the proposed trade. Note that, unlike the utility level, the MRSM,T is a measurable number that can be compared for different consumers. (We will make use of this aspect shortly.) Formally, the MRSM,T is defined as the negative of the slope of the indifference curve (since the slope is itself negative, the MRS is positive).

Note in Figure 3-1a that the indifference curves are drawn to become less steep from left to right. The MRS is diminishing: along an indifference curve, the more meat a consumer has, the fewer tomatoes he or she will be willing to give up for still another pound of meat. We illustrate this more explicitly in Figure 3-2. At point C, a bundle with a relatively large quantity of tomatoes relative to meat, the individual is willing to give up ∆TC tomatoes (∆ means “the change in”) to get one extra unit of meat ∆M. But at point B, having fewer tomatoes and more meat, the individual will not give up as many tomatoes as before to get yet another unit of meat ∆M. This time, the individual will only give up ∆TB tomatoes. We can refer to the (negative) slope of the indifference curve at one of its points as ∆T/∆M, and the corresponding (positive) marginal rate of substitution MRSM,T is −(∆T/∆M). The changing scarcity of one good relative to the other along an indifference curve provides an intuitive rationale for a diminishing MRS. Note that if we ask about the MRST,M (i.e., we reverse the order of the subscripts), the definition implies this is −(∆M/∆T) and therefore MRST,M = 1/MRSM,T.

Figure 3-2. The diminishing marginal rate of substitution.

The diminishing MRS can also be seen as a consequence of the strict convexity assumption. Consider the two equally preferred bundles B and C in Figure 3-2. The set of bundles that represents proportional combinations of B and C corresponds to the points on the straight line between them.3 By the strict convexity assumption, each of the bundles in this set is strictly preferred to B and C. Therefore, the indifference curve that connects any two equally preferred bundles, such as B and C, must lie below the straight line that connects them. This is possible only if the slope of the indifference curve becomes less steep from C to B.

Note also that with nonsatiation, the slope of the indifference curve does not become perfectly flat on its lower-right side nor perfectly vertical on its upper-left side. If it did, say, become perfectly flat on the right side, that would mean the MRSM,T = 0. If no tomatoes must be taken away to hold utility constant after a small increase in meat, it means utility did not increase with the gain in meat. But this violates the nonsatiation assumption.

The indifference curves embody the first three assumptions of the utility-maximization model (preference-ordering, nonsatiation, and convexity). In order to illustrate the final model assumption, that an individual chooses in accordance with his or her preference-ordering (or equivalently acts to maximize utility), it is useful to introduce one final concept: a budget constraint, which is the amount of money that an individual has available to spend. When individuals make choices about what to consume, they are constrained by the size of their budgets. Suppose that one individual has $B to spend to buy meat at PM per pound and tomatoes at PT per pound. Given these fixed parameters, the individual can choose any amount of meat M and tomatoes T as long as its total cost is within the budget:

PM M + PT T ≤ B

For illustrative purposes, we continue to assume that meat and tomatoes are the only two goods available, and the individual’s problem is to choose the quantities (M, T) that maximize utility subject to the budget constraint.

3 Represent B as the bundle (MB, TB) and C as (MC, TC). A proportional combination of B and C is defined, letting α (0 < α < 1) be the proportion of B and 1 − α the proportion of C, as follows:

Mα = αMB + (1 − α)MC = MC + α(MB − MC)

and

Tα = αTB + (1 − α)TC = TC + α(TB − TC)

We can show that the point (Mα, Tα) must lie on the line connecting B and C. The slope of the line connecting B and C is

∆T/∆M = (TB − TC)/(MB − MC)

The slope of the line connecting (Mα, Tα) and C is

∆T/∆M = (Tα − TC)/(Mα − MC)

Substituting the definitions of Tα and Mα in the equation directly above gives us

∆T/∆M = [TC + α(TB − TC) − TC]/[MC + α(MB − MC) − MC] = α(TB − TC)/α(MB − MC) = (TB − TC)/(MB − MC)

Since both (Mα, Tα) and B lie on a line through C with slope (TB − TC)/(MB − MC), they must lie on the same line.
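The diminishing MRS can also be checked numerically. The sketch below assumes, purely for illustration, the utility function U = M·T, for which MRSM,T = T/M; along one indifference curve the MRS falls as meat increases:

```python
# Along one indifference curve of the illustrative utility function U = M*T,
# the marginal rate of substitution MRS(M,T) = T/M falls as meat increases.
k = 12.0                          # utility level fixing one indifference curve

for m in [1.0, 2.0, 3.0, 4.0, 6.0]:
    t = k / m                     # tomatoes needed to stay on the curve U = k
    mrs = t / m                   # = MU_M / MU_T for U = M*T
    print(f"M = {m:3.0f}, T = {t:5.2f}, MRS_M,T = {mrs:5.2f}")
# The MRS falls (12, 3, 1.33, 0.75, 0.33): the more meat the consumer has,
# the fewer tomatoes he or she will offer for one more unit of meat.
```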

Figure 3-3. Maximizing utility subject to a budget constraint: (a) The budget constraint (PM M + PT T ≤ B) limits choices to the shaded area. (b) A tangency point of an indifference curve to a budget constraint is a utility maximum (point C).

In Figure 3-3a, we represent the budget constraint graphically. If the entire budget is spent on meat, the individual could buy the bundle (MB, 0) shown as the intercept on the horizontal (meat) axis, where MB = B/PM. If the entire budget is spent on tomatoes, the individual could buy the bundle (0, TB) shown as the intercept on the vertical (tomatoes) axis, where TB = B/PT. These intercepts are the extreme points of the line segment representing the different (M, T) bundles that can be bought if the budget is fully expended. To see this, we rewrite the budget constraint equation above to express T in terms of M:

T ≤ −(PM/PT)M + B/PT


At equality (when the budget is fully expended), this is the line with slope −(PM/PT) and T-intercept B/PT. Note for use below that the slope of the budget constraint is determined by the ratio of the two prices. Thus graphically the individual’s budget constraint is shown as the shaded area: he or she can choose any bundle on the line or under it, but does not have enough money to afford bundles that are above the line.

In the context of this budget constraint, what does it mean for the individual to choose the most preferred bundle possible (or to maximize utility)? Which bundle will the individual choose? Might the individual choose a bundle under the line like point A shown in Figure 3-3b? Such a choice would be inconsistent with our model. Why? For any point under the line, there is a point on the line with more of both goods (like point B) and which, by the nonsatiation assumption, must have greater utility. Thus A cannot be the most preferred bundle attainable. The utility-maximizing bundle must be one of the points on the line rather than under it.

To see which point on the line is the utility maximum, we add several of the individual’s indifference curves to the diagram. As drawn, we see that point B is not the maximum because point C is attainable and on a higher indifference curve. Point D has even greater utility than point C, but the individual cannot afford to buy the bundle at D. The key characteristic of point C that makes it the utility maximum is that the indifference curve at that point is just tangent to the budget constraint.4 Recall that minus the slope of an indifference curve is the marginal rate of substitution, and that minus the slope of the budget constraint is the price ratio. Since tangency implies equal slopes, at the utility maximum a consumer of both goods will have a marginal rate of substitution equal to the ratio of the prices:

MRSM,T = PM/PT

At any other point on the constraint such as point B where MRSM,T ≠ PM/PT, the individual can attain more utility from another bundle. To show this, suppose the price of meat PM is $4.00 per pound and the price of tomatoes PT is $1.00 per pound. If MRSM,T ≠ $4.00 ÷ $1.00, the consumer is not at a utility maximum. Suppose at point B the consumer has MRSM,T = 3 < PM/PT, for example: indifferent between the current allocation and one with 3 pounds more of tomatoes and 1 pound less of meat. The consumer can forgo 1 pound of meat and save $4.00, buy 3 pounds of tomatoes for $3.00, and have $1.00 left over to spend and thus increase utility over the initial allocation. In other words, from point B the individual can gain utility by moving along the constraint toward point C. From a point like E where MRSM,T > PM/PT, the individual could also gain utility by moving along the constraint toward point C, in this case forgoing some tomatoes to buy more meat.5 For utility to be maximized, any consumer of both goods must have MRSM,T = PM/PT.

The four assumptions described in this section form a model of economic choice: an individual who has a utility function and acts to maximize it. We now wish to show (in a very simple setting) how this model can be used to draw inferences about efficiency. To do so, we must first explain the concept of efficiency.

4 This illustration assumes that the maximum is not at either intercept. It is possible for the maximum to be at one of the intercepts (the individual chooses not to consume one of the goods at all), and in these “boundary” cases the tangency condition will generally not hold. We discuss these cases later in the text.

5 Convince yourself this is possible by assuming that the MRSM,T at point E is 6 and identifying a way for the consumer to gain.
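The tangency condition can be verified numerically with the prices from the text ($4.00 per pound of meat, $1.00 per pound of tomatoes). The utility function U = M·T and the $20 budget below are illustrative assumptions of the sketch, not part of the text’s argument:

```python
# Grid search for the utility-maximizing bundle on the budget line
# P_M*M + P_T*T = B, with illustrative U = M*T, P_M = 4, P_T = 1, B = 20.
p_m, p_t, budget = 4.0, 1.0, 20.0

best = max(
    ((m, (budget - p_m * m) / p_t) for m in [i / 100 for i in range(1, 500)]),
    key=lambda bundle: bundle[0] * bundle[1],          # utility U = M*T
)
m_star, t_star = best
print(m_star, t_star)              # about (2.5, 10.0)
print(t_star / m_star, p_m / p_t)  # MRS = T/M equals the price ratio: 4.0 = 4.0
```

At the optimum the sketch finds MRSM,T = 4 = PM/PT, the tangency condition; at any other affordable bundle the grid search reports lower utility, matching the trading argument in the text.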

Efficiency

The General Concept

Society is endowed with a limited supply of a wide variety of resources: for example, people, land, air, water, minerals, and time. A fundamental economic problem faced by society is how to use the resources. If the resources were not scarce, there would be no economic problem; everything that anyone wanted could be provided today, and infinite resources would still be left to meet the desires of tomorrow. But although human wants or desires may be insatiable, resource scarcity limits our ability to satisfy them. Scarcity implies that any specific resource allocation involves an opportunity cost, or the value forgone from alternative opportunities for using the resources; for example, if more resources are allocated to education, fewer resources will remain to be allocated to health care, food production, road construction, and other goods and services.

People differ in terms of their specific ideas for the use of scarce resources. However, we all generally agree that resources are too precious to waste. If it is possible, by a change in the allocation of resources, to improve the lot of one person without doing harm to any other person, then resources are currently being wasted. All efficiency means is that there is no waste of this kind: An efficient allocation of resources is one from which no person can be made better off without making another person worse off. Sometimes efficiency is referred to as Pareto optimality after the Italian economist Vilfredo Pareto (1848–1923), who first developed the formulation. Any allocation of resources that is not efficient is called, not surprisingly, inefficient.

To demonstrate the practical import of efficiency for policy, consider the resources devoted to national defense by the world’s people.6 In order to keep the illustration simple, let us assume that each nation, acting for its citizens, seeks only to produce “security” from military aggression by others. Then the efficient defense policy would be for the world to have no military resources at all: no person anywhere would face a military threat, and the freed-up resources could be devoted to increasing supplies of food, shelter, clothing, or other important goods and services.7

6 For example, by a conservative definition the United States in 1999 spent about 4 percent of the GDP on defense. The federal government spent $365 billion on national defense out of a total federal product of $569 billion and total GDP of $9.3 trillion. Source: Economic Report of the President, January 2001 (Washington, D.C.: U.S. Government Printing Office, 2001), p. 288, Table B-10, and p. 298, Table B-20.

7 The efficient level of national defense is substantially above zero for at least two important reasons that the illustration rules out: (1) some nations or groups may initiate aggressive military activities for offensive rather than defensive reasons, and (2) some nations may consider a military response appropriate to a nonmilitary threat (e.g., to enforce a claim to ownership of a natural resource such as land). Each possibility leads every nation to allocate some resources for national defense. Nevertheless, the reasons cited do not affect the point of our example, which is that defense expenditures can be significantly higher than necessary in order to produce the desired level of protection.


However, each nation is mistrustful of the others and allocates some resources to defense as a form of insurance. But that increases the threat perceived by each nation and leads to increases in national defense spending, which further increases the perceived threats, leads to more defense increases, and so on. In short, defense policies become the runners in an inefficient arms race.8

Furthermore, efficiency is not achieved by a unilateral withdrawal: if the United States gave up its arms but no other nation did, many U.S. citizens would feel worse off, not better off. This illustrates not only that efficiency is an important objective but that its achievement typically requires coordination among the different economic agents (countries, in this example). This is the primary objective of negotiations such as the Strategic Arms Reduction Treaties (START I and II) between the United States and the states of the former Soviet Union and the chemical weapons disarmament treaty involving over 120 nations.

Efficiency with an Individualistic Interpretation

The definition of efficiency refers to individuals being either better off or worse off. To apply the definition to any practical problem, there must be a method of deciding whether someone’s well-being has improved or deteriorated. One way to develop such a method is by using the principle of consumer sovereignty, which means that each person is the sole judge of his or her own welfare.9 Based on that principle, economists have devised numerous analytic techniques for drawing inferences about whether individuals consider themselves better off or worse off under alternative allocations of resources.

Although the above principle is commonly used in the economic analyses of most western societies, it is not the only logical way of constructing a concept of efficiency that meets the definition given above. An alternative route is to let some other person (e.g., a philosopher-king) or perhaps a political process (e.g., democratic socialism or communism) be the judge of each person’s welfare. Then efficiency would be evaluated by the values and standards of judges, rather than by those of the individuals affected. In this text, we will always mean efficiency as judged under consumer sovereignty unless it is explicitly stated otherwise. If efficiency is not judged by that principle, one jeopardizes its acceptance in policy analysis as representing a value most people share.

It is worth mentioning, however, that there are a number of situations in which deviations from the concept are commonly thought appropriate. Typically, they are those in which individuals have incomplete or erroneous information or are unable to process the available information. One obvious example of such a deviation concerns children: Parents generally substitute their own judgments for those of their young children. Another example, and this one a matter of public policy, is that we do not allow suppliers to sell thalidomide, even though some consumers might purchase it if they were allowed to do so. Certain medicinal drugs are legal to sell, but consumers can purchase them only by presenting physicians’ prescriptions. In each of these cases, some social mechanism is used to try to protect consumers from the inadequacies of their own judgments.10

In using the consumer sovereignty principle, it is important to distinguish between consumer judgments and consumer actions. It is only the sovereignty of the judgments that we rely on for this definition of efficiency. Indeed, there will be many illustrations throughout this book of inefficient allocations that result from sovereign actions of well-informed consumers. The defense example given earlier is of this nature: The people of each nation can recognize the inefficiency of an arms race; but when each nation acts alone, the inefficiency is difficult to avoid. The problem is not with the judgments of people about how to value security, but with the mechanisms of coordination available to achieve it.

8 The behavior described here may be thought of as a version of the Slumlord’s Dilemma or Prisoner’s Dilemma analyzed in Chapter 7. In that chapter, applications involving urban renewal and health insurance are discussed.

9 The term “consumer” in the expression “consumer sovereignty” is a slight misnomer. The principle is intended to apply to all resource allocation decisions that affect an individual’s welfare: the supply of inputs (e.g., the value of holding a particular job) as well as the consumption of outputs.

Efficiency in a Model of an Exchange Economy

To illustrate the analysis of efficiency, we will utilize a model of a highly simplified economy: a pure exchange economy in which there are only two utility-maximizing consumers and two different goods. In this economy, we assume in addition to utility-maximizing behavior that there are no significant costs involved in negotiating, bargaining, or otherwise arranging trades—an assumption of negligible transaction costs. By assuming for simplicity that the goods are already produced, we suppress the very important problems of how much of each good to produce and by what technical processes. Nevertheless, the basic principle of efficiency developed here holds for the more complicated economy as well. Furthermore, it highlights the critical connection between efficiency and the consumer sovereignty definition of human satisfaction.

The principle to be established is this: The allocation of resources in an economy is efficient in exchange if and only if the marginal rate of substitution of one good for another is the same for each person consuming both of the goods.11 To see the truth of the principle, imagine any initial allocation of resources between two people, Smith and Jones, each of whom has a different MRSM,T and some of each of the two goods. Let us say that Smith has an MRSM,T = 3 (i.e., 3T for 1M), and Jones has an MRSM,T = 2. Then imagine taking away 1 pound of meat from Jones and giving it to Smith and taking away 3 pounds of tomatoes from Smith. Smith’s utility level is thus unchanged. We still have 3 pounds of tomatoes to allocate; let us give 2 pounds to Jones. Jones is now back to the initial utility level. Both have the same level of utility as when they started, but there is still 1 pound of tomatoes left to allocate between them. No matter how the last pound is allocated, at least one will be made better off than initially and the other will be no worse off (in terms of their own judgments). Therefore, the initial allocation was inefficient.

10 The fact that individuals may be imperfect judges of their own welfare does not necessarily imply that there is any better way to make the judgments. In later chapters we discuss several information problems such as these and analyze alternative responses to them.

11 This is a slight simplification because it ignores consumers who are only consuming one of the goods. We show later in the chapter that for the consumer of only one good T, efficiency only requires that MRSM,T be no greater than the MRSM,T of individuals who are consuming good M.


Any time that two consumers of each good have different values for the MRS, there is “room for a deal,” as illustrated above. On the other hand, if both consumers have the same value for the MRS, then it is impossible to make one of them better off without making the other worse off. Although the illustration involved only Smith and Jones, the same reasoning would apply to any pair of consumers picked randomly from the economy. Therefore, efficiency requires that all consumers of the two goods have the same MRSM,T. Furthermore, there is nothing special about meat and tomatoes in this example; meat and fruit would work just as well, and so would fruit and clothing. Therefore, efficiency requires that the MRS between any two goods in the economy must be the same for all consumers of the two goods.

Now that we know what is required for efficiency in this pure exchange economy, recall that we have said absolutely nothing so far about any mechanisms that a society might utilize to allow its citizens to achieve an efficient allocation. If there are only two consumers, they could achieve an efficient allocation by bartering or trading.12 Since trading can increase the utility level of each person, we predict that the two consumers will trade. Note that the value of the MRS depends on (all) the specific goods in a consumer’s bundle, and it changes as the bundle changes through trading. We know that an “equilibrium” position of efficiency will be reached because of diminishing marginal rates of substitution. This is illustrated below.

In the example with Smith and Jones, Smith is gaining meat and losing tomatoes (and Jones the opposite). Smith will offer less than 3 pounds of tomatoes to get still more meat after the first trade, since tomatoes are becoming relatively more dear. Jones, with less meat than initially, will now demand more than 2 pounds of tomatoes to part with still another pound of meat. As they trade, Smith’s MRSM,T continues to diminish from 3 while Jones’s MRSM,T rises from 2 (or equivalently, Jones’s MRST,M diminishes).13 At some point each will have precisely the same MRSM,T, and then the two will be unable to mutually agree upon any additional trades. They will have reached an efficient allocation.

Note that knowledge of the marginal rate of substitution allows us to predict the direction of trade in the model. We have not had to decide whether Smith is more of a meat lover than Jones in terms of abstract utility.14 All we compare is a relative and measurable value: the pounds of tomatoes each is willing to give up for additional meat at the margin. At the initial point, meat will be traded to Smith because Smith is willing to give up more tomatoes than Jones requires to part with an additional pound of meat. Smith may not like meat more than Jones in an absolute sense. Alternatively, for example, Smith may like tomatoes less than (and meat as much as) Jones in the absolute sense. Or they could both have identical absolute preferences, but Smith might have an initial allocation with many tomatoes and little meat compared to Jones.

12 Recall that we assume that there are negligible transaction costs. In more realistic problems of policy, transaction costs are often significant and may play a major role in determining efficiency.

13 Recall that at any point on a normal indifference curve MRST,M = 1/MRSM,T.

14 This would require not only a cardinal utility measure for each but also a method of making the interpersonal comparison between Smith utils and Jones utils.


Fortunately, efficiency requires only that we equate the comparable MRS values of each consumer at the margin; we do not have to know individual preferences in an absolute sense.

Efficiency may be achieved through a barter process when there are only two consumers and two goods, but what about the actual world that consists of a great many consumers and a very large number of different goods and services? What mechanisms exist to facilitate communication and coordination among the diverse economic agents? That general problem of organization will be the focus of Parts IV and V, but it is useful to introduce at this very early stage the idea of price as one simple coordinating mechanism that can help with the task.

Suppose each good has one price that all consumers either pay to buy the good or receive when they sell it. In this model of a pure exchange economy, each consumer can be thought of as having a budget constraint derived by multiplying the quantity of each good in the initial endowment by its price and summing over all the goods in the endowment. That is, the budget constraint is the market value at the given prices of each individual’s initial endowment. As we have seen, the consumer will try to allocate his or her budget such that, for any two goods X and Y that are bought,

MRSX,Y = PX/PY

Since all consumers face the same prices, all try to achieve the same MRSX,Y. If they are successful, the resulting allocation is efficient. Thus prices can be a very powerful coordinating device.

Note that if consumers faced different prices for the same good and could buy the quantities of it that they wished, they would end up with different values for the MRS. This suggests a generalization (with exceptions to be uncovered elsewhere in the book) relevant to policy design: If a policy results in at least one consumer of a good being charged a price different from that charged other consumers of the same good, the policy will generally be inefficient.15

Are there policies for which this principle is relevant? One is the Robinson-Patman Act of 1936, which prohibits firms from engaging in price discrimination among their customers.16 However, many policies cause different consumers to be faced with different prices.

15 The exceptions are typically based on one of three considerations: (1) It is only the price of the marginal unit purchased that must be the same for all consumers. (2) The consumer is unable or unwilling to alter the quantity purchased. (3) The policy offsets an inefficiency that arises elsewhere in the economy. When applying this generalization, it is important to make sure that the prices compared are for truly identical goods. For example, the price for a television with installation provided by the dealer need not be the same as the price for the same television at a cash-and-carry outlet. Differences in times of delivery, shipping costs, and guarantees also are common reasons for distinguishing otherwise identical goods.

16 Price discrimination refers to the practice of a seller charging different prices for the same good. We consider this practice in Chapter 10.
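A small sketch can illustrate the coordinating power of a single set of prices: two consumers who each maximize utility at the same prices end up with identical MRS values, even with different tastes and budgets. The Cobb-Douglas preferences and the specific numbers below are assumptions of this sketch, not the text’s:

```python
# Two utility maximizers facing the same prices end up with the same MRS.
# Illustrative Cobb-Douglas preferences U = M**a * T**(1 - a); the taste
# parameter a and the budgets differ across consumers, the prices do not.
p_m, p_t = 4.0, 1.0

def optimum(a, budget):
    m = a * budget / p_m               # standard Cobb-Douglas demands
    t = (1 - a) * budget / p_t
    mrs = (a / (1 - a)) * (t / m)      # MRS_M,T = (a/(1-a)) * T/M
    return m, t, mrs

for a, budget in [(0.3, 20.0), (0.7, 90.0)]:   # a Smith-like and a Jones-like consumer
    print(optimum(a, budget))
# Both consumers' MRS equals P_M / P_T = 4.0: the condition for exchange efficiency.
```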


We will consider several of these in the next chapter when we discuss welfare programs. Programs to assist lower-income individuals often reduce the price of a good or service to recipients, and since only some people qualify for these reduced prices we might expect the result to be inefficient resource allocation. Alternative model specifications, taking program particulars into account, make this proposition a bit more complicated. Of course, one purpose of welfare programs is to help the poor, and it is possible that achieving the equity objective is worth some efficiency cost. These questions can be raised about many programs that provide benefits to selected groups of people, for example, housing subsidies, food stamps, education benefits for veterans, Medicare, and Medicaid.

The main point of raising these issues here, even without resolving them, is to demonstrate the earlier claim that considerable insight into the efficiency consequences of policies can be achieved through modeling. We did not have to go around asking every (or even any) consumer how he or she would be affected by price discrimination. The conclusion that price discrimination is inefficient comes from applying the utility-maximizing model of behavior to the definition of efficiency. This type of reasoning, displayed in very elementary form here, can be extended to quite sophisticated forms of analysis. All such analysis is subject to the same type of criticism: If the underlying model of consumer behavior is erroneous, then the analytic conclusion may be wrong. If some individuals are not utility maximizers, then making all individuals face the same prices will not necessarily result in each having the same MRS for any two goods. Finally, recall that models cannot be expected to be perfectly accurate. The relevant question, which is not addressed in this example, is whether the model is accurate enough. The answer depends, in part, on the alternative means of analysis with which it is compared.

The theoretical example of coordination through prices illustrates only part of the coordinating potential of a price system. It does not consider the problem of how to ensure that the total amount people wish to buy of one good, given its price, equals the total amount that other people will be willing to sell. (The only goods available are those in the initial endowments of the consumers.) If the price of one good is too low, for example, then the quantity demanded will exceed the quantity supplied. But prices can solve that problem as well: Raise the price of the goods in excess demand and lower the price of those in excess supply until the prices that exactly balance the demand and supply for each good are found. The equilibrium prices are the ones that will allow the consumers in the above example actually to achieve efficiency; any other prices will result in some consumers not being able to buy the quantities necessary to maximize their utility.17

17 We should also note an important difference between two coordinating mechanisms mentioned: prices and START agreements. The price mechanism allows decentralized coordination: Consumers do not have to consult with one another about how much each plans to buy. START agreements are a centralized way of coordinating: Each party will agree only to “not buy” certain arms based on knowledge of what the other party promises to not buy. Thus, efficiency can be sought with both decentralized and centralized institutional procedures. One of the more interesting and subtle questions of policy design concerns the choice of centralized versus decentralized procedures. We focus on that choice in Part V.
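The price-adjustment idea described above (raise the price of goods in excess demand, lower it for goods in excess supply) can be sketched as a simple iterative loop. The demand equation reuses the Chapter 2 example; the fixed endowment and the adjustment speed are assumptions of the sketch:

```python
# Price-adjustment sketch: raise the price when demand exceeds the fixed
# endowment (excess demand), lower it when some of the endowment goes unsold.
endowment = 11.0                      # total units available for exchange

def quantity_demanded(p):
    return 21.0 - 2.0 * p             # the linear demand from the Chapter 2 example

p = 2.0                               # start with a price that is "too low"
for _ in range(200):
    excess = quantity_demanded(p) - endowment
    p += 0.05 * excess                # adjust the price toward balance
print(round(p, 2))                    # converges to the balancing price, 5.0
```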


Figure 3-4. The Edgeworth box.

A Geometric Representation of the Model

The concept of efficiency in exchange can also be explained geometrically through the use of an Edgeworth box diagram. In our simple example with Smith and Jones, let $\bar{M}$ be the total amount of meat between them and $\bar{T}$ be the total amount of tomatoes. These amounts determine the dimensions of the Edgeworth box shown in Figure 3-4. Let the bottom-left corner $O_S$ represent the origin of a graph showing Smith's consumption. Point A shows Smith's initial endowment of meat $M^E_S$ and tomatoes $T^E_S$.

Let the upper-right corner $O_J$ represent the origin of a graph showing Jones's consumption. The amount of meat consumed is measured by the horizontal distance to the left of $O_J$, and tomato consumption is measured by the vertical distance below $O_J$. Then, given the way the box is constructed, point A also represents Jones's initial endowment. Since the total quantities available are $\bar{M}$ and $\bar{T}$ and Smith starts with $M^E_S$ and $T^E_S$, Jones must have the rest:

$$M^E_J = \bar{M} - M^E_S \qquad \text{and} \qquad T^E_J = \bar{T} - T^E_S$$

In fact, every possible allocation of the two goods between Smith and Jones is represented by one point in the Edgeworth box. At $O_S$, for example, Smith has nothing and Jones has everything. At $O_J$, Jones has nothing and Smith has everything. Every conceivable trade


between Smith and Jones can thus be represented as a movement from A to another point in the Edgeworth box.

We can also draw in the indifference curves to show each person's satisfaction level. Let $S_E$ be the indifference curve reflecting Smith's level of satisfaction at the initial endowment (and thus point A lies on it). Higher levels of utility for Smith are shown by curves $S_2$ and $S_3$; a lower level of satisfaction is shown by $S_1$. The curves drawn are only illustrative, of course; for every point in the box, Smith has an indifference curve that passes through it. The indifference curves for Jones are drawn bowed in the opposite direction because of the way the box is constructed, with Jones having more consumption at points away from $O_J$ and toward $O_S$. Let $J_E$ be the indifference curve showing Jones's utility level at the initial endowment (and thus point A lies on it). $J_2$ and $J_3$ represent increasing levels of satisfaction; $J_1$ shows a lower degree of satisfaction.

Thus the Edgeworth box combines two indifference curve diagrams, each like figures we have drawn previously. Smith's preferences are represented like those in the earlier figures: origin at the bottom left, tomatoes on the vertical axis, meat on the horizontal axis, and higher utility levels upward and to the right. The same is true for Jones if the Edgeworth box is turned upside down.

Note that the shaded area between $S_E$ and $J_E$ represents all allocations of meat and tomatoes whereby both Smith and Jones would consider themselves better off than at the initial allocation. If we include the points on, as well as those strictly between, the indifference curves $S_E$ and $J_E$, they represent the set of all possible trades that would make at least one person better off and no one worse off.

Consider the possibility that we could start the exchange economy (choose feasible initial endowments for each person) from any point in the Edgeworth box. From most points, there will be some trades that could make each person better off. For every point like A, through which the indifference curves intersect, improvements by trading are possible. But there are some points, like point B, where the indifference curves do not intersect but are just tangent to one another. Note that every single point that improves Jones's satisfaction over the level at B lies below the $J_2$ indifference curve (toward $O_S$). At each of these points Smith would be worse off than at B and so would not agree to any of the proposed trades. Similarly, Jones would not agree to move to any of the points that would make Smith better off. Thus B is an efficient allocation: from it, it is impossible to find a trade that will make one person better off without making the other person worse off. Further, as was claimed earlier, the $MRS_{M,T}$ is the same for Smith and Jones at B because the indifference curves are tangent there.

There are a few other efficient points, such as C and D, illustrated in Figure 3-4. (Although the $MRS_{M,T}$ may be different at B, C, and D, at any one of the points the two consumers will have the same $MRS_{M,T}$.) Imagine finding each of the efficient points. Define the set of exchange-efficient points as the contract curve; the curve is drawn as the line through the Edgeworth box connecting $O_S$ and $O_J$. The contract curve illustrates that there are many possible resource allocations that are efficient.

Of course, if Smith and Jones are initially at point A and can trade with one another, utility maximization implies that they will trade within the shaded area. Their trading will continue until they reach a point on the BH segment of the contract curve. (We cannot predict which


Figure 3-5. Efficient allocations at a corner of the Edgeworth box.

point without further assumptions.) Furthermore, we can see that Smith will be gaining meat and losing tomatoes, whereas Jones does the opposite.18 In other words, the behavior in our previous verbal description of trading between Smith and Jones can be seen in the Edgeworth box diagram.

Note that some of the points from the entire contract curve lie on the boundaries of the box. These segments, $O_S G$ and $O_J F$, do not meet the same tangency conditions as the efficient points interior to the boundaries. Although the points on the boundary segments are efficient, Smith and Jones are not each consuming both of the goods. Careful examination of these points is interesting because it suggests why consumers do not purchase some goods at all, although they have strictly convex preferences. The reason is simply that the price is considered too high.

In Figure 3-5 the bottom-left corner of the Edgeworth box is blown up a bit to see this more clearly. At G, Smith has utility level $S'$ and Jones $J'$. The solid portions of each indifference curve are in the feasible trading region (within the boundaries of the box). The dashed extension of $J'$ shows the consumption bundles that would give Jones the same utility level as at G, but there is not enough total meat in the economy to create these bundles. (Also, the points on this dashed segment do not exist from Smith's perspective, since they would involve negative quantities of meat!) Although the indifference curves $J'$ and $S'$ are clearly not tangent at G, there are no mutually satisfactory trades that can be made from that point. Smith has only tomatoes to trade, and $MRS^S_{M,T} < MRS^J_{M,T}$. This means that Jones will only part with another pound of meat for more tomatoes than Smith is willing to offer. Smith considers the meat price (in terms of tomatoes) demanded by Jones to be too high, and Jones feels similarly about the price (in terms of meat) for tomatoes.

18 The slopes of the indifference curves at point A reveal that $MRS^S_{M,T} > MRS^J_{M,T}$.


As with the other points on the contract curve, every point that makes Jones better off (those below $J'$) makes Smith worse off, and vice versa. Thus G is efficient, although the tangency conditions are not met. This is explained by the limits imposed by the boundaries (along which at least one of the consumers is consuming only one of the two goods).

This analysis, of two consumers and two goods, generalizes to the many-goods, many-consumers economy just as before. Efficiency requires every pair of consumers to be on their contract curves for every pair of goods. Thus, we may use the Edgeworth diagram to represent metaphorically exchange efficiency for the entire society.

Note that we have examined the concept of efficiency with both a verbal and a geometric model of a pure exchange economy. The verbal model provides an intuitive grasp of the motivation for trading and how the trading process leads to an efficient allocation characterized by the equality of the MRS among traders. The geometric model makes clear that there are normally an infinite number of allocations that are efficient. The efficient allocations vary widely in terms of how well off each of the different traders is at any one of them. A third common form of this model of a pure exchange economy, using mathematical equations and calculus, is contained in the appendix.19
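The tangency characterization of the contract curve can also be checked numerically. The following sketch is an illustration only, assuming Cobb-Douglas utilities U = M·T for both consumers and hypothetical box dimensions (none of which come from the text); it computes each consumer's MRS at several points on the resulting contract curve and confirms that the two are equal.

    # Sketch: on the contract curve of a 2x2 exchange economy, the two
    # consumers' marginal rates of substitution coincide. With U = M*T on
    # both sides, MRS_{M,T} = T/M and the contract curve is the diagonal
    # T_S/M_S = T_TOTAL/M_TOTAL (the tangency condition solved analytically).

    M_TOTAL, T_TOTAL = 100.0, 80.0    # hypothetical dimensions of the box

    def mrs(m, t):
        return t / m                  # MRS of meat for tomatoes under U = M*T

    for m_s in (10.0, 40.0, 70.0):
        t_s = m_s * T_TOTAL / M_TOTAL             # Smith's bundle on the curve
        m_j, t_j = M_TOTAL - m_s, T_TOTAL - t_s   # Jones gets the remainder
        print(round(mrs(m_s, t_s), 6), round(mrs(m_j, t_j), 6))   # equal pairs

Each line prints the same two numbers, reflecting the tangency of the indifference curves at efficient interior allocations.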

Relative Efficiency

The Pareto concept of efficiency is an absolute one: Each possible allocation is either efficient or inefficient. It is often useful, however, to evaluate relative efficiency: a comparison of the efficiency of one allocation with that of another. That is, we wish to know whether one allocation is relatively more efficient than another or whether an allocative change increases efficiency. Measures of relative efficiency have been devised and are frequently used in policy analysis. They are controversial and more complex than the absolute standard of Pareto optimality, and we defer most formal exposition of them until Chapter 6. Below, we briefly introduce the general ideas that underlie measures of relative efficiency.

There is a natural extension of the Pareto rule that can be used to make some judgments about relative efficiency. One allocation is defined as Pareto-superior to another if and only if it makes at least one person better off and no one worse off. In Figure 3-6, the axes show utility levels for Smith and Jones. Point A in Figure 3-6 shows Smith with a utility level of $S_E$ and Jones with a utility level of $J_E$ (chosen to correspond with the utility levels of their indifference curves through point A in Figure 3-4). The shaded quadrant represents the set of all utility levels that are Pareto-superior to those at point A.

For reference, we also draw in Figure 3-6 the utility-possibilities frontier: the locus of utility levels associated with the Pareto-optimal allocations of the economy. These levels correspond to the levels of the indifference curves passing through each point on the contract curve of Figure 3-4. The portion of the shaded area ABH in Figure 3-6 corresponds to the utility levels of the allocations in the shaded area of Figure 3-4. These are the Pareto-superior points that are feasible in the economy. Point R in Figure 3-6 is Pareto-superior to

19 Recommended for graduate students.


Figure 3-6. The shaded area is Pareto-superior to point A.

point A, but it is not feasible if the amounts of meat and tomatoes available only equal the dimensions of the Edgeworth box in Figure 3-4. Given the resources of the economy, the efficient allocations are those from which no Pareto-superior change can be made.

The concept of Pareto superiority is not itself controversial. However, it can become controversial if one proposes to use it normatively as a criterion for policy-making. For example, suppose one thought that all policy changes should be required to be Pareto-superior. That would restrict the pursuit of efficiency from point A to the allocations in the shaded area of Figure 3-6. The problem with this proposed rule is that it eliminates all the changes whereby some people are better off (perhaps many people are much better off) and some are worse off (perhaps a few are slightly worse off). But in an actual economy most allocative changes, and especially those caused by public policy, are characterized precisely by these mixed effects. Inevitably, evaluating these changes involves interpersonal comparisons that the concepts of Pareto optimality and superiority try to avoid.

For example, the government may be considering building a new highway that bypasses a small town in order to increase the ease of traveling and the conduct of commerce between two or more cities. Benefits or gains will accrue to the users of the new highway and to those who own land adjacent to it that may be used to service the new traffic. But the owners, employees, and local customers of the service stations, restaurants, and motels in the small town may well experience losses because of the reduction in traffic. However, if the gains to the gainers are great enough and the losses to the losers are small enough, collectively the society might think the change justified.
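The Pareto-superiority test itself is mechanical once utility levels are posited. Here is a minimal sketch with hypothetical utility numbers (recall that utility is not actually measurable):

    # Allocation "new" is Pareto-superior to "old" if no one is worse off
    # and at least one person is better off. Utility numbers are hypothetical.

    def pareto_superior(new, old):
        no_losers = all(n >= o for n, o in zip(new, old))
        some_gainer = any(n > o for n, o in zip(new, old))
        return no_losers and some_gainer

    a = (10, 10)                          # utility levels (Smith, Jones) at A
    print(pareto_superior((12, 10), a))   # True: Smith gains, Jones unharmed
    print(pareto_superior((15, 9), a))    # False: Jones loses, however large
                                          # the gain to Smith

The second comparison illustrates the highway case: the test rejects any change with even one loser, no matter how large the aggregate gains.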


Figure 3-7. The sum of utilities at point A as a measure of its relative efficiency.

Allocative changes whereby some people gain and others lose certainly raise questions of equity or fairness. Suppose, however, we try to put these questions aside for the moment and consider whether some objective statements about relative efficiency can be made in these circumstances. Consider, for example, a change like the one from point A to point D in Figure 3-6. This clearly moves the economy from an inefficient allocation to an efficient one, and we might think that any objective measure of relative efficiency should indicate that the change is an efficiency improvement.

Note that the test for efficiency or Pareto optimality does not depend on whether someone has been made worse off (Jones in the above example); it depends only on whether it is possible to make someone better off without making anyone else worse off. Efficiency is a matter of whether there is room for improvement, and one might wish that measures of efficiency indicated only the scarcity of the available room for improvement (e.g., closeness to the utility-possibilities frontier). Then we would say that one allocation is more efficient than another if there is relatively less room for improvement possible from it. For any given initial allocation, the set of more efficient allocations would include not only the Pareto-superior ones but others as well.

An example of a measure that avoids the restrictiveness of the Pareto-superiority criterion is the aggregate (or sum) of utilities.20 In Figure 3-7, the line through point A with a

20 This somewhat unusual example is chosen because it is easy to explain and because the most common measures (based on the Hicks-Kaldor compensation principle) can be seen as simple variations upon it.


slope of −1 is the locus of points with constant aggregate utility equal to the level at point A. Any point above this line has higher aggregate utility and by that measure would be considered relatively more efficient. All allocations that are Pareto-superior to point A have a higher aggregate utility level (since at least one person's utility level is higher and no one's utility level is lower), and thus are considered more efficient by this test. We can see that point D, which is efficient but not Pareto-superior, also is considered more efficient than point A. Therefore, this test is less restrictive than the test of Pareto superiority.

However, there are also problems with this measure. As we have pointed out before, utility is neither measurable nor comparable among persons. Therefore, the measure is not very pragmatic. Later in the text, we shall review methods of constructing related indices that solve the measurability problem. Indeed, the benefit-cost reasoning in Chapter 2's story is based precisely on such a construct.21 Then we must face the remaining problem: The implicit interpersonal judgments in this measure (and the related measures discussed later) are arbitrary from an ethical viewpoint.

Consider, for example, point F in Figure 3-7. Like point D, point F is Pareto-optimal, but it is considered relatively less efficient than point A by the sum-of-utilities test. Why? One reason is that the test is defined independently of the utility-possibilities frontier. Unless the shape of the test line is geometrically identical to the shape of the utility-possibilities frontier, the different Pareto-optimal points will not receive the same relative efficiency ranking. Rather than as an attempt to measure the distance to the frontier, the aggregate-utilities test is better thought of as measuring distance from the origin: how far we have come, rather than how much farther there is to go. Since better use of resources does move us farther from the origin, a measure of this distance can be interpreted as an index of efficiency.

If we accept the notion of measuring distance from the origin, the aggregate-utilities test still imposes an interpersonal judgment: To hold relative efficiency constant, a 1-util loss to someone is just offset by a 1-util gain to another. But why should this judgment be accepted? Why can the social judgment not be something else, for example, that losses are more serious than gains, so that perhaps 2 utils of gain should be required to offset each util of loss? Then a quite different set of allocations would be considered relatively more efficient than point A.

The point of this last illustration is to suggest that, normatively, measures of relative efficiency can be controversial if one does not agree with the implicit ethical judgment in them. Rather than leave such judgments implicit, the strategy recommended here is to make equity or fairness an explicit criterion in policy analysis and to evaluate policies on both efficiency and equity grounds. Standardized measures of relative efficiency can then be very useful, but they take on normative significance only when looked at in conjunction with some explicit ethical perspective. To illustrate that, we must introduce the criterion of equity or fairness.

21 A weighted sum of utilities, where the weights are the inverses of each individual's marginal utility from an additional dollar, is equivalent to the Hicks-Kaldor rule for benefit-cost analysis (measurable in monetary units). This is demonstrated and explained in Chapter 6.
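The contrast between the equal-weights judgment and an alternative, loss-averse one can be shown with a few lines of arithmetic. In this sketch the utility numbers are hypothetical, and the 2-utils-per-util weighting is simply the example suggested above:

    # Ranking a change by two relative-efficiency tests.

    A = (10, 10)     # (Smith, Jones) utils at the initial point
    F = (4, 14)      # a Pareto-optimal point with lower aggregate utility

    def sum_test(new, old):
        """Equal weights: a 1-util loss is offset by a 1-util gain."""
        return sum(new) - sum(old)

    def loss_averse_test(new, old, weight=2.0):
        """Losses count 'weight' times as much as gains."""
        total = 0.0
        for n, o in zip(new, old):
            change = n - o
            total += change if change >= 0 else weight * change
        return total

    print(sum_test(F, A))           # -2: F less efficient than A by this test
    print(loss_averse_test(F, A))   # -8: the loss-averse rule penalizes F more

Different ethical weightings thus rank the same pair of allocations differently, which is the sense in which such measures embed interpersonal judgments.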


Equity

It is clear from the earlier Edgeworth analysis that there are many efficient allocations of resources. These allocations, represented by the points on the contract curve, dominate all other possible allocations. That is, for any allocation not on the contract curve, there is at least one point on the contract curve that makes one or more persons better off and no one worse off. Thus if a society could choose any allocation for its economy, it would certainly be one from the contract curve. But which one?

Recall that efficiency is but one social objective. Another important objective is equity: fairness in the distribution of goods and services among the people in an economy. However, no unique concept of equity is widely regarded as definitive for public policymaking. We shall use the term to refer collectively to all the various concepts of fair distribution, and later in the text we shall introduce specific concepts (e.g., strict equality or a universal minimum) to compare and contrast their implications in the analyses of particular issues.22 At this stage we wish primarily to make the point that diverse concepts of equity deserve analytic consideration.

Equality of Outcome Is One Concept of Equity

Although all the points on the contract curve of Figure 3-8 are efficient, they differ in the distribution of well-being. In fact, the contract curve offers a continuum of distributional possibilities. As we move along it from $O_S$ to $O_J$, Smith is getting an increasing share of the total goods and services and is becoming better off relative to Jones. One way in which equity is sometimes considered is in terms of these relative shares. Intuitively, distributions in the “middle” of the contract curve represent more equal outcomes than those at the extremes, and for that reason they might be considered preferable on equity grounds. According to this interpretation, equality of relative shares is the most equitable allocation.

Using the equality of relative shares as a standard, we must still confront the question “shares of what?” If equality of well-being or satisfaction is the objective, then it is the shares of utility that should be of equal size. But since utility is neither measurable nor interpersonally comparable, in practice we usually fall back on proxy measures such as income or wealth.

For example, point A in Figure 3-8 is the allocation that gives each person exactly one-half of the available wealth (meat and tomatoes). Without being able to measure or compare utility, making this the initial endowment may be the best we can do to ensure equal relative shares. This necessarily ignores the possibility that one of the two people, say Smith, simply does not like meat or tomatoes very much, and might require twice the quantity given to Jones in order to reach a comparable utility level. Furthermore, it would be difficult to distinguish on equity grounds any of the efficient allocations that the two might then reach by voluntary trading from point A: the BC segment of the contract curve. Some objective

22 Most of the specific concepts are introduced in Chapter 5. The concept of a just price is not introduced until Chapter 13 in order to permit prior development of the workings of price systems.


Figure 3-8. Equity versus efficiency.

standard for weighing the equity of the meat and tomato exchange between Smith and Jones would be necessary.23

Equality of Opportunity Is Another Concept of Equity

Some people do not feel that equality of outcomes should be a social goal of great importance. An alternative view is that only the process must be fair. One specific principle of this type, for example, is that all persons should have equal opportunity. We illustrate this view below.

The equal endowments at point A might be considered equitable if we were starting an economy from scratch, but in reality we always start from some given initial allocation such

23 To understand why such a voluntary exchange might change the equity of the distribution, consider the move from point A to point B. Since the allocations at point A are perfectly equal by definition, and at point B Jones's welfare has increased but Smith's welfare is the same, we must conclude that Jones is now better off than Smith. Obviously there must be some point on the BC segment that has the same equity as at point A, but the problem is whether it can be identified in any objective sense. One possible method is to use prices as weights. If the trading were done through a price system using the equilibrium prices, the market values of the endowments and the final consumption bundles would be equal (since no one can spend more than his or her endowment, and anyone spending less would not be maximizing utility). Thus we might define these equilibrium final-consumption bundles as the ones that keep the equity of the allocations identical with those at point A, and use the equilibrium prices to evaluate the relative values of any allocations in the Edgeworth diagram. The important feature of these prices is that they are the ones that achieve an efficient equilibrium when starting from a perfectly equal allocation of the goods available in the economy.


as that at point D. Let us say that at point D it is obvious that Jones is richer (Jones has both more meat and more tomatoes than Smith). Consider whether your own sense of equity is affected by why we are starting at point D.

Suppose that Smith and Jones have equal abilities, opportunities, knowledge, and luck but that point D is the result of different efforts at production. That is, Jones obtained the initial endowment by working very hard to produce it, whereas Smith was lazy. Then you might think that the distribution at D is perfectly fair and that some allocation on FG reached through voluntary trading is equally fair. Furthermore, you might think that any policies aimed at redistribution toward equality (point A) are inequitable.

Alternatively, suppose you feel that we started at point D because of past discrimination against Smith. Jones was sent to the best agricultural schools, but Smith's family was too poor to afford the tuition and no loans were made available. Or the best schools excluded minorities or women, or there was some other reason for denying this opportunity to Smith that you think is unfair. While both worked equally hard at production, Jones knew all the best ways to produce and Smith did not. In this case, you might think that some redistribution away from point D and toward point A improves equity. Your sense of the fairest distribution depends on how responsible you think Jones should be for the past discrimination.24

These two examples of different principles of equity (equality of outcome and equality of opportunity) are by no means intended to exhaust the list of equity concepts thought relevant to public policy and its analysis. They are intended to illustrate that diverse concepts of equity can be used as standards and that quite different conclusions about the fairness of policies can be reached as a result. They reflect real problems of equity that any society must confront. To be influential, analytic evaluation of equity consequences must be sensitive to the different principles of equity that will be applied by those involved in the policy-making process. Methodology for doing this will be illustrated later in the book in the context of specific cases.

Integrating Equity-Efficiency Evaluation in a Social Welfare Function

Suppose that we are currently at the efficient allocation point G (Figure 3-8) but that the allocations near point A are considered more equitable. Suppose also that some constraints in the economy prevent us from achieving any of the efficient allocations on BC but that point A itself can be achieved.25 Which of the two is preferable? To decide, a trade-off must be made between efficiency and equity.

24 This example provides another illustration of why allocative changes should not be restricted to those that are Pareto-superior to the starting point.

25 This approximates the situation when considering programs of income maintenance: Every method for transferring resources to the worst-off members of society causes at least some inefficiency. Income maintenance is discussed in Chapter 4. The inefficiency of taxes, used to finance income maintenance as well as other public expenditures, is discussed in Chapter 12.


Figure 3-9. Alternative social welfare functions.

Making such a trade-off of course requires a social judgment; it is not a factual matter to be resolved by policy analysis. However, analytic work can clarify the consequences of alternative ways of making the trade-off. One approach for doing so, which we introduce here, rests on the concept of a social welfare function: a relation between a distribution of utility levels among society's members and a judgment about the overall social satisfaction (the level of social welfare) achieved by that distribution. Mathematically, we denote such a function by

$$W = W(U^1, U^2, \ldots, U^m)$$

where $U^i$ is the utility level of the $i$th individual, with $i = 1, 2, \ldots, m$ individuals in the economy.

To clarify the meaning of a social welfare function, consider the Smith-Jones two-person economy. Social welfare is then a function of the utility level of each:

$$W = W(U^S, U^J)$$

Figure 3-9 displays three social indifference curves through point A, each representing a different social welfare function. The axes show the utility levels for Smith and Jones;


assume for illustrative purposes that we can measure them in comparable units. The social indifference curves represent social welfare functions with differing ethical choices about how to trade off the aggregate of utilities (used here as a measure of relative efficiency) versus the equality of their distribution (used as a measure of equity). Each curve illustrates a different conception of the set of Smith-Jones utility combinations that yield the same level of social welfare as that at point A.

The line denoted $W^B$ comes from a social welfare function that considers relative efficiency but is indifferent to the degree of equality. The representative social indifference curve is a straight line with a slope of −1. The shape implies that the transfer of units of utility between Smith and Jones (thus holding the aggregate utility level constant) does not affect the level of social welfare, whether or not the transfer increases or decreases the equality of the distribution. Any increase in the aggregate sum of utility improves social welfare by the same amount no matter who receives it. This is sometimes called a Benthamite social welfare function after Jeremy Bentham. In 1789 Bentham proposed maximizing the sum of satisfactions as a social objective.26 Mathematically, the Benthamite social welfare function is simply $W = U^S + U^J$. A move from point A to point E, for example, would increase social welfare (lie on a higher social indifference curve, not drawn) by this function.

The curve $W^R$ represents a very egalitarian function. The representative social indifference curve is in the shape of a right angle with its corner on the 45° line. Starting from the equal distribution at point A, social welfare cannot be increased by giving more utility to just one person. The only way to increase social welfare is by raising the utility level of both people (e.g., point C). Starting from any unequal distribution (such as point D), social welfare can be increased only by raising the utility level of the worst-off person (Jones). Sometimes $W^R$ is called a Rawlsian function after the philosopher John Rawls. It was Rawls who suggested that inequalities in a society should be tolerated only to the extent that they improve the welfare of the worst-off person.27 Mathematically, the Rawlsian social welfare function is $W = \min(U^S, U^J)$.28 Note that a change such as the one from point A to point E improves welfare by the Benthamite function, but decreases it by Rawlsian standards because the minimum utility level, that of the worst-off person, declines.

The curve $W^M$ represents a middle-of-the-road function that lies between the Benthamite and Rawlsian ideals. The social indifference curve has the shape of an ordinary indifference curve, and its changing slope takes on the value of −1 at the point on the 45° line. It implies that, for any given level of aggregate utility, social welfare increases with greater equality.29 However, a big enough increase in aggregate utility can increase social welfare even if it makes the distribution less equal. The change from point A to point C illustrates

26 Jeremy Bentham, An Introduction to the Principles of Morals and Legislation (London: Oxford University Press, 1907).

27 John Rawls, A Theory of Justice, Rev. Ed. (Cambridge: Harvard University Press, 1999).

28 The function $\min(X_1, X_2, \ldots, X_n)$ has a value equal to the minimum level of any of its arguments $X_1, X_2, \ldots, X_n$. For example, $\min(30, 40) = 30$ and $\min(50, 40) = 40$.

29 The locus of points with constant aggregate utility is a straight line with slope of −1. The maximum social welfare on this locus is where the social indifference curve is tangent to it, which by construction is on the 45° line.


Figure 3-10. Social welfare and the utility-possibilities frontier.

the latter. However, the change from point A to point E is a social welfare decrease by this function; the increase in aggregate utility is not big enough to offset the reduction in equality.

To see how a social welfare function makes a combined equity-efficiency judgment, let us examine Figure 3-10, which is a graph of a utility-possibilities frontier representing the Pareto-optimal utility levels possible in the economy. Now let us return to the question posed at the beginning of this section: How do we choose an allocation from among them? In Figure 3-10 we have also drawn some social indifference curves, each representing a different level of social welfare according to one underlying middle-of-the-road welfare function. The level of social welfare rises as we move upward and to the right on the graph. For example, if the economy is at the efficient point A, the level of social welfare is $W_A$. However, the inefficient point B is one of greater social welfare than point A; this reflects the social importance of the greater equality at B. The maximum social welfare that can be achieved is shown as point C, where the social indifference curve is just tangent to the utility-possibilities frontier. Higher levels of social welfare, such as $W_D$, are not feasible with the limited resources available in the economy.

Note that, as drawn, the maximum possible social welfare is not at a point of equality; that is, it is not on the 45° line. This happens even though, other things being equal, the society prefers (by this welfare function) more equality to less. The explanation lies in the shape of the utility-possibilities frontier. For illustrative purposes, we chose one that has


Figure 3-11. Maximum welfare as judged by alternative welfare functions varies in the degree of inequality that results.

this characteristic: With the available resources and the preferences of Smith and Jones, it is easier for the economy to increase Smith's happiness than that of Jones (e.g., Jones may not like meat or tomatoes very much). The best attainable point of equality is the allocation at point E, but society considers that the gain in aggregate utility at point C more than compensates for the loss in equality.

One final comparison of the three different illustrative social welfare functions is shown in Figure 3-11, which reproduces the utility-possibilities curve from Figure 3-10 and shows the maximum welfare associated with each of the different social welfare functions. The highest social indifference curve attainable for each welfare function is drawn. The middle-of-the-road maximum is at point C, the same as in Figure 3-10. But the Rawlsian maximum is at point E, where the 45° line from the origin crosses the utility-possibilities frontier and thus the utility of Smith equals the utility of Jones.30 On the other hand, the

30 For any interior point above the 45° line, Jones is the worse-off person and any change that moves to the right within the area is a Rawlsian improvement. The furthest to the right that one can get without crossing the 45° line is point E. Similarly, for any interior point below the 45° line, Smith is the worse-off person and any changes that move upward within the area are Rawlsian improvements. The most upward that one can get is point E. Therefore, no matter where one starts, point E will be the highest attainable welfare by the Rawlsian social welfare function.


Benthamite social welfare function has its maximum at a more unequal distribution of utilities than at point C, shown as point D on the utility-possibilities frontier. We explain this below.

Since the Benthamite social indifference curve is a straight line with constant slope of −1, its tangency with the utility-possibilities frontier occurs where the slope of the frontier is also −1. We know the slope of the utility-possibilities frontier at point C exceeds (is steeper than) −1, since the slope of the middle-of-the-road social indifference curve tangent to the frontier is −1 where it crosses the 45° line and gets steeper along the way to point C. Therefore the point on the utility-possibilities frontier that has a slope of −1 must lie on the portion that is flatter than at point C, or to its left.31

There are important limitations to the use of social welfare functions in policy analysis. One obvious problem is that since utility is neither measurable nor interpersonally comparable, it is not possible to identify empirically the relative satisfaction levels of each individual. However, there are some situations in which the construction of social welfare functions is useful anyway. Typically they are those in which policy affects individuals on the basis of some observable characteristic (e.g., income level). All individuals with the same characteristic are to be treated alike, regardless of their individual preferences (e.g., all pay the same tax). We provide an illustrative example in Chapter 5 as part of the analysis of school finance policies.

A second problem, more conceptual than empirical, is that there is no agreement or consensus on what the “proper” social welfare function is: Each individual in the society may have his or her own view of what is proper. We have already illustrated that there are many possible social welfare functions (e.g., Rawlsian and Benthamite) concerning how society should trade off aggregate utility and equality. But other efficiency and equity concepts are not reflected by those formulations of social welfare, and they deserve attention as well.

For example, note the independence of the social welfare function and the utility-possibilities frontier. In Figure 3-10 it is apparent that the level of social welfare associated with an allocation does not depend on the location of the utility-possibilities frontier (i.e., the social indifference curves are drawn without knowledge of the frontier). But it is only the latter that represents the efficient or Pareto-optimal allocations (corresponding to the contract curve of an Edgeworth diagram). Thus the level of social welfare does not reveal whether there is room for more improvement (i.e., whether we are interior to the utility-possibilities frontier). However, knowing whether an alternative is or is not Pareto-optimal is important because it may raise the possibility that a new and superior alternative can be found.

Similarly, the equity concept of equal opportunity is not reflected in the social welfare functions we illustrated. Indeed, it is a practical impossibility to have it otherwise. Knowledge of the utility level outcomes is not enough: One must decide the fairness of the process that

31 The above result depends on the shape of the utility-possibilities frontier, but a similar result of increased inequality compared to the other two social welfare functions would obtain if the frontier were drawn so that it is possible to give Jones far more utility than Smith. The middle-of-the-road function would have its tangency to the right of the Rawlsian maximum, with slope flatter than −1. The portion of the frontier where the slope equals −1 (and thus would contain the tangency to the Benthamite social indifference curve) would have to lie further to the right, where the slopes are getting steeper.


determined the starting point as well as the fairness of the processes associated with each way of making a specific change in outcome levels. That is, a change may make Smith better off relative to Jones for quite different reasons. Perhaps, for example, Smith worked harder than Jones and earned an extra reward, or Jones was unfairly denied an opportunity that by default then went to Smith. One needs to know both the outcomes and the fairness of the process that explains them in order to incorporate an equal opportunity standard of equity into a social welfare function.

These problems illustrate that, despite the appealing neatness of integrating social values in one social welfare function, the approach will not generally substitute for explicit evaluation by the general criteria of efficiency and equity separately. The diversity of specific concepts of efficiency and equity should receive attention. Given the lack of any predetermined social consensus about which of them apply and how to integrate those that do apply, policy analysis can usually best help users reach informed normative conclusions by clearly laying out its predictions and evaluating them by the different normative elements (e.g., efficiency, relative efficiency, equality, equal opportunity). Certainly, nontechnical users will find each of the elements more familiar or at least easier to understand than the concept of a social welfare function. Thus only occasionally will it be useful to combine some of the elements in the form of a social welfare function. In the following chapters, we will build more thoroughly upon the general concepts introduced here in order to develop skills of application in specific policy contexts.
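Before turning to the summary, the three illustrative social welfare functions can be made concrete in a short computation. The sketch below uses hypothetical, comparable utility units (which real analysis does not have), a product form $U^S \cdot U^J$ as one possible middle-of-the-road function (its social indifference curves have slope −1 on the 45° line), and an assumed elliptical utility-possibilities frontier skewed toward Smith; none of these specifics come from the text.

    # Sketch: locate each social welfare function's maximum on an assumed
    # utility-possibilities frontier (U_S/20)^2 + (U_J/10)^2 = 1.
    import math

    def benthamite(us, uj):
        return us + uj               # W^B: the sum of utilities

    def rawlsian(us, uj):
        return min(us, uj)           # W^R: the worst-off person's utility

    def middle(us, uj):
        return us * uj               # one assumed middle-of-the-road form

    frontier = [(20 * math.cos(t), 10 * math.sin(t))
                for t in (i * math.pi / 4000 for i in range(2001))]

    for name, w in (("Benthamite", benthamite), ("Rawlsian", rawlsian),
                    ("Middle-of-the-road", middle)):
        us, uj = max(frontier, key=lambda pt: w(*pt))
        print(f"{name:>18}: U_S = {us:5.2f}, U_J = {uj:5.2f}")

The output reproduces the qualitative pattern of Figure 3-11: the Rawlsian maximum sits on the 45° line ($U_S = U_J \approx 8.94$), the middle-of-the-road maximum is unequal (14.14, 7.07), and the Benthamite maximum is the most unequal of the three (17.89, 4.47).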

Summary

Because of the limits on obtainable data, a central task of the economics profession is to develop tools that allow the analyst to infer important consequences accurately and with a minimum of data. This chapter introduces one of these tools, the utility-maximization model of individual decision-making, which is used in many economic policy analyses. The model consists of these assumptions: Each individual has a preference-ordering of possible consumption choices that is consistent, convex, and nonsatiable, and makes choices in accordance with that preference-ordering. The preference-ordering and choice-making can be represented as an ordinal utility function that the individual acts to maximize.

While utility is neither measurable nor interpersonally comparable, we can observe and measure an individual's marginal rate of substitution $MRS_{i,j}$: the amount he or she can forgo of one good j in order to obtain an additional unit of another good i and remain just indifferent. The $MRS_{i,j}$ varies with the mixture of goods that the individual consumes; it depends not only on the preference-ordering but also on the relative abundance or scarcity of each item in the consumer's consumption bundle. We illustrate that utility-maximizing individuals, when subject to a budget constraint, will choose a mix of goods such that the $MRS_{i,j}$ of any two goods i and j that are in the bundle will equal the ratio of their prices $P_i/P_j$.

The assumptions used in the model are not intended to be literally true; they are intended to yield accurate predictions of many decisions. The analyst always retains discretionary judgment about whether the model works well for the particular decisions a policy might


affect. While we will see many different uses of the utility-maximization model as we proceed, our primary use of the model in this chapter is to introduce the important normative concepts of efficiency and equity.

Efficiency, or Pareto optimality, is defined as an allocation of resources from which no person can be made better off without making another person worse off. Equity refers to the relative distribution of well-being among the people in an economy. The use of both concepts in policy analysis involves predicting how policies will affect each individual's well-being, or overall satisfaction, or utility level. In making these evaluative predictions, we typically follow the principle of consumer sovereignty: Each person judges his or her own well-being. On the basis of predictions using this principle, the analyst tries to infer whether existing or proposed policies are efficient and equitable.

We began to explore efficiency and equity concepts in the context of a pure exchange economy populated by utility-maximizing consumers and with negligible transaction costs. The principal result is that efficiency (in exchange) requires that all consumers of any two goods in the economy have the same MRS for those two goods. We illustrated that two individuals in a barter economy, starting from an inefficient allocation, can reach an efficient allocation by the voluntary trading of goods; this happens as a result of each individual's attempting to maximize utility, and the resulting trading causes the MRS of each to converge. We also showed that, with a price system, each utility-maximizing individual will equate his or her MRS for any two goods consumed to the ratio of the prices for those two goods. Thus it may be possible, by having one set of prices that apply to all individuals, to achieve efficiency in a complex economy with many individuals and many goods. Whether this is desirable depends on whether equity can be simultaneously achieved.

By using the Edgeworth box diagram, it becomes clear that an infinite number of efficient allocations are possible; each point on the contract curve is efficient. These efficient allocations differ from one another in equity: The relative well-being of each individual can vary from extremes at which one person has everything and the others nothing to more “balanced” allocations in which the goods are more evenly distributed among individuals. If one can pick any allocation within the Edgeworth box, it is at least theoretically possible to achieve both efficiency and equity of outcomes.

A source of analytic difficulty is that there are no universally agreed upon principles that allow one to infer that one allocation is more equitable than another: Different individuals, with full and identical information about an allocation, may disagree about its “fairness.” This becomes apparent when we recognize that an economy is always currently at some particular point in the Edgeworth box and that a policy change is equivalent to a proposal to move from that point to another one. What is considered fair may depend not only on where the initial location is but also on why it is the initial location; for example, to what extent does it represent a past characterized by “just rewards” or plain luck or unfair discrimination? We illustrated how two different principles of equity, equality of outcomes and equal opportunity, can lead to conflicting judgments about the fairness of a change.
The definition of efficiency as Pareto optimality is not very controversial because the set of efficient allocations spans a very wide distributional range and thus causes little conflict with notions of equitable allocations. However, public policy changes inevitably involve


making some people better off and others worse off. Measures of relative efficiency and the construction of social welfare functions are analytic techniques that have been developed to help clarify some normative consequences of those changes, but they do involve equity judgments at some level. Although there may be some policy issues in which equity does not play a major role in the analysis, the analyst must be sensitive to equity’s potential importance as a social goal and its impact on political feasibility.

Exercises

3-1

In a two-person, two-good pure exchange economy, Adam has an initial endowment of 15 flawless 1-carat diamonds ($D_A = 15$) and 5 gallons of drinking water ($W_A = 5$). Beth has no diamonds ($D_B = 0$) but 20 gallons of drinking water ($W_B = 20$).

a. In explaining his preferences, Adam says that he prefers ($D_A = 5$, $W_A = 10$) to his initial endowment. He also says that he is indifferent between ($D_A = 5$, $W_A = 10$) and ($D_A = 12$, $W_A = 5$). Are these preferences consistent or inconsistent with the standard assumptions of the utility-maximization model?

b. What are the dimensions of the Edgeworth box that represents the possible allocations in this economy? Draw it.

c. Show the point on the Edgeworth box that represents the initial endowments. Is it possible for these initial endowments to be an efficient allocation?

3-2

Consider an economy of two people who consume just two goods, X and Y. Person 1 has an endowment of $X_1 = 30$ and $Y_1 = 120$. Person 2 has an endowment of $X_2 = 180$ and $Y_2 = 90$. Their utility functions are, respectively,

$$U_1 = X_1 Y_1 \qquad \text{and} \qquad U_2 = X_2 Y_2$$

a. Graph the Edgeworth box corresponding to this economy.

b. What are the equations for the indifference curves of persons 1 and 2 that go through the initial endowments? Plot the curves. [Hint: How does total utility change along an indifference curve?]

c. Shade in the locus of points that are Pareto-superior to the initial endowments.

d. What is the equation of the contract curve in this economy? Graph it. [Hint: Recall that a marginal rate of substitution can be expressed as a ratio of marginal utilities, and that the marginal utility of a good is the increase in the utility level caused by a one-unit increase of that good.] (Answer: $Y_1 = X_1$.)

e. Identify the boundaries of points on the contract curve that are Pareto-superior to the initial endowments. (Answer: $X_1 = Y_1 = 60$ and $X_1 = Y_1 = 82.7$.)

f. Suppose a secretary of the market announces that all trading must take place at $P_X = 1$ and $P_Y = 2$ (prices in dollars). Furthermore, the secretary takes away each person's initial endowment


and replaces it with its cash value. The secretary instructs each person to order the quantities of X and Y that maximize utility subject to the budget constraint:

(1) What quantities will persons 1 and 2 order? Can the secretary fill these orders with the endowments collected? (Answer: No.)

(2) Go through the same exercise with $P_X = 2$ and explain why the outcome is feasible and efficient.
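Readers who want to check their arithmetic on part f can use the short sketch below. It relies on the standard result that a consumer with utility U = XY spends half of his or her budget on each good:

    # Check of exercise 3-2(f): do the orders add up to the endowments?

    endow = {1: (30.0, 120.0), 2: (180.0, 90.0)}    # person: (X, Y)

    def total_orders(px, py):
        total_x = total_y = 0.0
        for x, y in endow.values():
            budget = px * x + py * y        # cash value of the endowment
            total_x += 0.5 * budget / px    # half of the budget spent on X
            total_y += 0.5 * budget / py    # half of the budget spent on Y
        return total_x, total_y

    supply = (sum(x for x, _ in endow.values()),    # (210.0, 210.0)
              sum(y for _, y in endow.values()))

    for px in (1.0, 2.0):                   # P_Y = $2 in both cases
        print(px, total_orders(px, 2.0), supply)

At $P_X = 1$ the orders are (315, 157.5) against supplies of (210, 210), so the secretary cannot fill them; at $P_X = 2$ the orders are exactly (210, 210), which is feasible and, as part d shows, on the contract curve.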

APPENDIX: CALCULUS MODELS OF CONSUMER EXCHANGE

It is useful to see the exchange principles covered so far in a mathematical form. Actual empirical application often involves mathematics, so that the ability to consume, criticize, and conduct analysis requires some facility with mathematical procedures. However, some prior background in calculus is necessary in order to understand the mathematical material presented.32

Let $U(X_1, X_2, \ldots, X_n)$ represent a utility function for a consumer in an economy with n goods. For mathematical ease, we assume that this function is smooth and continuous and that the goods are infinitely divisible. The assumption that more is better (strong monotonicity) can be expressed as follows:

$$\frac{\partial U}{\partial X_i} > 0 \qquad \text{for all } i$$

The left-hand term is a partial derivative. It represents the marginal utility from a small increment of good $X_i$ to the bundle, holding all the other Xs constant. The expression says that the marginal utility of this increment is positive or, equivalently, that total utility increases as $X_i$ consumption becomes greater.

We have defined an indifference set (a curve if there are only two goods and a surface if there are more than two goods) as the locus of all consumption bundles that provide the consumer with the same level of utility. If $\bar{U}$ represents some constant level of utility, then as the goods $X_1, X_2, \ldots, X_n$ change along the $\bar{U}$ indifference surface, it must always be true that

$$\bar{U} = U(X_1, X_2, \ldots, X_n)$$

As the Xs are varied slightly, the total differential of the utility function tells us by how much utility changes. If we consider only changes in the Xs along an indifference surface,

32 Basic courses in calculus that cover derivatives, partial derivatives, and techniques of maximization and minimization are usually sufficient background for understanding the optional material presented in this book. Two compact expositions of the most relevant aspects of calculus for economics are W. Baumol, Economic Theory and Operations Analysis, 4th Ed. (Englewood Cliffs, N.J.: Prentice-Hall, Inc., 1977), Chapters 1–4, pp. 1–71, and W. Nicholson, Microeconomic Theory, 7th Ed. (Hinsdale, Ill.: The Dryden Press, 1998), Chapter 2, pp. 23–65.


then total utility does not change at all. Suppose the only changes we consider are of $X_1$ and $X_2$, while all the other Xs are being held constant. Then as we move along an indifference curve,

$$d\bar{U} = 0 = \frac{\partial U}{\partial X_1}\, dX_1 + \frac{\partial U}{\partial X_2}\, dX_2$$

or

$$-\left. \frac{dX_1}{dX_2} \right|_{U = \bar{U}} = \frac{\partial U / \partial X_2}{\partial U / \partial X_1}$$

The term on the left-hand side of this equation is simply the negative of the slope of the indifference curve; we have defined it as the marginal rate of substitution $MRS_{X_2,X_1}$. The term on the right-hand side is the ratio of the marginal utilities for each good. Therefore, the $MRS_{X_i,X_j}$ at any point can be thought of as the ratio of the marginal utilities $MU_{X_i}/MU_{X_j}$.

Efficiency requires that each consumer of two goods have the same MRS for the two goods. Here we wish to show that this condition can be mathematically derived. Consider Smith and Jones, who have utility functions for meat and tomatoes $U^S(M_S, T_S)$ and $U^J(M_J, T_J)$. Let $\bar{M}$ be the total amount of meat in the economy and $\bar{T}$ be the total amount of tomatoes. We will be efficient (on the contract curve) if, for any given utility level $\bar{U}^S$ of Smith, Jones is getting the maximum possible utility. To achieve this maximum, we are free to allocate the available goods any way we want to between Smith and Jones, as long as we keep Smith at $\bar{U}^S$. We will find this maximum mathematically and show that the equations that identify it also imply that Smith and Jones must have the identical MRS.

Note that, in the two-person economy, knowledge of Jones's consumption of one good allows us to infer Smith's consumption of that good; for example, $M_S = \bar{M} - M_J$. Thus we know that an increase in Jones's meat consumption (and similarly for tomatoes) causes the following change in Smith's:

$$\frac{\partial M_S}{\partial M_J} = \frac{\partial (\bar{M} - M_J)}{\partial M_J} = -1$$

We use this fact in the derivation of the efficiency proposition below. The mathematical problem is to choose the levels of two variables, meat and tomato consumption, that maximize the utility level of Jones, which we denote as follows:

$$\max_{M_J,\, T_J} U^J(M_J, T_J)$$

With no other constraints and the assumption of nonsatiation, the solution to the problem is to choose infinite amounts of meat and tomatoes. But, of course, the real problem is to maximize subject to the constraints that total real resources are limited to $\bar{M}$ and $\bar{T}$ and that Smith must get enough of those resources to yield a utility level of $\bar{U}^S$:

$$\bar{U}^S = U^S(M_S, T_S)$$

The total resource constraints can be incorporated directly into the above equation by substituting for $M_S$ and $T_S$ as follows:


$$M_S = \bar{M} - M_J \qquad T_S = \bar{T} - T_J$$

Then all these constraints can be represented in one equation:

$$\bar{U}^S = U^S[(\bar{M} - M_J), (\bar{T} - T_J)]$$

To solve the maximization problem with a constraint, we use the technique of Lagrange multipliers. We formulate the Lagrange expression $L(M_J, T_J, \lambda)$:

$$L(M_J, T_J, \lambda) = U^J(M_J, T_J) + \lambda\{\bar{U}^S - U^S[(\bar{M} - M_J), (\bar{T} - T_J)]\}$$

The first term on the right is simply the function we wish to maximize. The second term always consists of λ, the Lagrange multiplier, multiplied by the constraint in its implicit form.33 Note that, when the constraint holds, the second term is zero and the value of L equals the value of $U^J$. Thus from among the $(M_J, T_J)$ combinations that satisfy the constraint, the one that maximizes $U^J$ will also maximize L. In addition to the two variables that we started with, $M_J$ and $T_J$, we make λ a third variable.

How do we find the values of the variables that maximize the original function subject to the constraint? The solution requires taking the partial derivatives of the Lagrange expression with respect to all three variables, equating each to zero (thus forming one equation for each unknown variable), and solving the equations simultaneously.34 We do

33 The implicit form of an equation is found by rewriting the equation so that zero is on one side of it. If we have a constraint that says F(X, Y ) = Z, we can always rewrite the equation as

Z − F(X, Y ) = 0 and define the implicit function G(X, Y ): G(X, Y ) = Z − F(X, Y ) Then the constraint in its implicit form is G(X, Y ) = 0 This is mathematically equivalent to the original expression F(X, Y ) = Z. That is, G(X, Y ) = 0 if and only if F(X, Y) = Z. 34 We try to explain this intuitively. Imagine a hill in front of you. Your object is to maximize your altitude, and the variable is the number of (uniform) steps you walk in a straight line. Naturally, you walk in a line that goes over the top of the hill. As long as your altitude increases with an additional step, you will choose to take it. This is like saying the derivative of altitude with respect to steps is positive, and it characterizes each step up the hill. If you go too far, the altitude will decrease with an additional step: the derivative will be negative and you will be descending the hill. If the altitude is increasing as you go up, and decreasing as you go down, it must be neither increasing nor decreasing exactly at the top. Thus at the maximum, the derivative is zero. This extends to the many-variable case, in which the variables might represent steps in specific directions. Exactly at the top of the hill, the partial derivative of altitude with respect to a step in each direction must be zero. This analogy is intended to suggest why the maximum utility above can be identified by finding the points where all the partial derivatives equal zero. Technically, we have only discussed the first-order or necessary conditions; they identify the interior critical points of the Lagrange expression. However, not all critical points are maxima; all the partial derivatives are zero at function minima also, for example. Most functions we use will only have the one critical point we seek. Some

72

Chapter Three

We do the first two parts below, making use of the chain rule in taking the partial derivatives35:

$$\frac{\partial L}{\partial M_J} = \frac{\partial U^J}{\partial M_J} - \lambda\,\frac{\partial U^S}{\partial(\bar{M} - M_J)}\,\frac{\partial(\bar{M} - M_J)}{\partial M_J} = 0 \qquad \text{(i)}$$

$$\frac{\partial L}{\partial T_J} = \frac{\partial U^J}{\partial T_J} - \lambda\,\frac{\partial U^S}{\partial(\bar{T} - T_J)}\,\frac{\partial(\bar{T} - T_J)}{\partial T_J} = 0 \qquad \text{(ii)}$$

$$\frac{\partial L}{\partial \lambda} = \bar{U}^S - U^S[(\bar{M} - M_J), (\bar{T} - T_J)] = 0 \qquad \text{(iii)}$$

Note that equation (iii) requires that the constraint be satisfied, or that Smith end up with a utility level of $\bar{U}^S$. This always happens with the Lagrange method; the form in which the constraint enters the Lagrange expression ensures it. Thus, when the equations are solved simultaneously, the value of $L(M_J, T_J, \lambda)$ will equal $U^J(M_J, T_J)$.

We can think of equation (iii) in terms of the Edgeworth box. It requires that the solution be one of the points on the indifference curve along which Smith has utility level $\bar{U}^S$. With this equation, the first two equations will identify the point on the indifference curve that maximizes the utility of Jones. To see this, recall that $M_S = \bar{M} - M_J$ and $T_S = \bar{T} - T_J$ and that we pointed out earlier that

$$\frac{\partial(\bar{M} - M_J)}{\partial M_J} = \frac{\partial(\bar{T} - T_J)}{\partial T_J} = -1$$

Then equations (i) and (ii) can be simplified as follows:

$$\frac{\partial L}{\partial M_J} = \frac{\partial U^J}{\partial M_J} + \lambda\,\frac{\partial U^S}{\partial M_S} = 0 \qquad \text{(i′)}$$

$$\frac{\partial L}{\partial T_J} = \frac{\partial U^J}{\partial T_J} + \lambda\,\frac{\partial U^S}{\partial T_S} = 0 \qquad \text{(ii′)}$$

Subtract the terms with $\lambda$ in them from both sides of each equation and then divide (i′) by (ii′):

$$\frac{\partial U^J/\partial M_J}{\partial U^J/\partial T_J} = \frac{-\lambda(\partial U^S/\partial M_S)}{-\lambda(\partial U^S/\partial T_S)} = \frac{\partial U^S/\partial M_S}{\partial U^S/\partial T_S}$$


or

$$MRS^J_{M,T} = MRS^S_{M,T}$$

That is, the first two equations require that Smith and Jones have the same MRS, or that their indifference curves be tangent. This is precisely the efficiency condition we sought to derive. We can also think of the first two equations as identifying the contract curve, and then, with them, the third equation identifies the point on the contract curve where Smith has utility level $\bar{U}^S$.

We also showed in the text that when a pricing system is used, each consumer will allocate his or her budget in such a way that the MRS of any two goods $X_i$ and $X_j$ equals the price ratio of those two goods $P_i/P_j$. This can be seen mathematically as follows, letting $B$ represent the consumer's total budget to be allocated.36 The consumer wants to choose goods $X_1, X_2, \ldots, X_n$ that maximize utility subject to the following budget constraint:

$$B = P_1 X_1 + P_2 X_2 + \cdots + P_n X_n$$

We formulate the Lagrange expression with $n + 1$ variables $\lambda$ and $X_1, X_2, \ldots, X_n$:

$$L = U(X_1, X_2, \ldots, X_n) + \lambda(B - P_1 X_1 - P_2 X_2 - \cdots - P_n X_n)$$

To find the $X$s that maximize utility, all of the $n + 1$ partial derivative equations must be formed. Taking only the $i$th and $j$th of these equations for illustrative purposes, we have

$$\frac{\partial L}{\partial X_i} = \frac{\partial U}{\partial X_i} - \lambda P_i = 0 \quad \text{or} \quad \frac{\partial U}{\partial X_i} = \lambda P_i \qquad \text{(i)}$$

$$\frac{\partial L}{\partial X_j} = \frac{\partial U}{\partial X_j} - \lambda P_j = 0 \quad \text{or} \quad \frac{\partial U}{\partial X_j} = \lambda P_j \qquad \text{(j)}$$

and upon dividing the top equation by the bottom one we see that

$$\frac{\partial U/\partial X_i}{\partial U/\partial X_j} = \frac{P_i}{P_j}$$

We have not yet noted any significance in the value of $\lambda$ that comes from solving constrained maximization problems. However, from the above equations we can see that

$$\lambda = \frac{\partial U/\partial X_i}{P_i} \qquad \text{for all } i = 1, 2, \ldots, n$$

This can be interpreted approximately as follows:

$$\lambda = \frac{\text{Marginal utility per unit of } X_i}{\text{Dollars per unit of } X_i} = \text{Marginal utility per dollar}$$

36 If a consumer starts with an initial endowment of $X_1^E, X_2^E, \ldots, X_n^E$, then the total budget is

$$B = \sum_{i=1}^{n} P_i X_i^E$$


That is, $\lambda$ can be interpreted as the amount of marginal utility this consumer would receive from increasing the budget constraint by one extra dollar. In general, the value of $\lambda$ signifies the marginal benefit in terms of the objective (utility in the example) of relaxing the constraint by one increment (increasing the budget by $1.00 in the example). This is sometimes referred to as the shadow price of the resource causing the constraint, and it can be a useful way to estimate the value of that resource to its user.

A numerical example may help clarify the mechanics of constrained maximization. Suppose a consumer has a utility function $U = M \cdot T$. Suppose also that the budget constraint is $100, $P_M$ = $5.00 per pound, and $P_T$ = $1.00 per pound. How many pounds of meat and tomatoes should this consumer buy in order to maximize utility? The consumer wants to maximize $M \cdot T$, subject to $100 = 5.00M + 1.00T$. We formulate the Lagrange expression $L(M, T, \lambda)$:

$$L = M \cdot T + \lambda(100 - 5.00M - 1.00T)$$

Taking the partial derivatives and setting each to zero, we have

$$\frac{\partial L}{\partial M} = T - 5.00\lambda = 0 \quad \text{or} \quad T = 5.00\lambda \qquad \text{(i)}$$

$$\frac{\partial L}{\partial T} = M - 1.00\lambda = 0 \quad \text{or} \quad M = 1.00\lambda \qquad \text{(ii)}$$

$$\frac{\partial L}{\partial \lambda} = 100 - 5.00M - 1.00T = 0 \qquad \text{(iii)}$$

On substituting $5\lambda$ for $T$ and $\lambda$ for $M$ in equation (iii), we get

$$100 - 5.00\lambda - 5.00\lambda = 0$$

from which

$$\lambda = 10 \qquad M = 10 \qquad T = 50 \qquad U = 500$$

That is, the consumer achieves the maximum utility of 500 by purchasing 10 pounds of meat and 50 pounds of tomatoes. According to this solution, the shadow price of the budget is 10 utils. This means that increasing the budget by $1.00 would allow the consumer to gain approximately 10 more utils, or achieve a utility level of 510. Let us check this. Suppose the budget is $101 and we maximize utility according to this constraint. Equations (i) and (ii) are unaffected, so we can make the same substitution in the new equation (iii):

$$101 - 5.00\lambda - 5.00\lambda = 0$$

from which

$$\lambda = 10.1 \qquad M = 10.1 \qquad T = 50.5 \qquad U = 510.05$$


As we can see, the utility level has indeed increased by approximately 10 utils. As one last check on our original solution, it should be that

$$MRS_{M,T} = \frac{P_M}{P_T} = 5$$

From the utility function,

$$\frac{\partial U}{\partial M} = T = 50 \qquad \frac{\partial U}{\partial T} = M = 10$$

and therefore

$$MRS_{M,T} = \frac{\partial U/\partial M}{\partial U/\partial T} = 5$$
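Readers who want to verify this example by machine can reproduce it with a few lines of numerical code. The sketch below (assuming Python with the scipy library; the variable names are ours) solves the same constrained maximization directly and confirms the shadow price interpretation.

```python
# Numerical check of the meat-and-tomatoes example: maximize U = M*T
# subject to 100 = 5.00M + 1.00T (assumes Python with scipy available).
from scipy.optimize import minimize

B, PM, PT = 100.0, 5.0, 1.0

result = minimize(
    lambda x: -(x[0] * x[1]),                 # minimize -U, with x = (M, T)
    x0=[1.0, 1.0],                            # arbitrary starting point
    constraints={"type": "eq",
                 "fun": lambda x: B - PM * x[0] - PT * x[1]},
)
M, T = result.x
print(round(M, 2), round(T, 2), round(M * T, 2))  # 10.0 50.0 500.0

# Re-solving with B = 101 yields utility of about 510.05, so the budget's
# shadow price is roughly 10 utils per extra dollar, matching lambda.
```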

PART TWO

USING MODELS OF INDIVIDUAL CHOICE-MAKING IN POLICY ANALYSIS

CHAPTER FOUR

THE SPECIFICATION OF INDIVIDUAL CHOICE MODELS FOR THE ANALYSIS OF WELFARE PROGRAMS

IN THE INTRODUCTORY chapters we reviewed general predictive and evaluative aspects of microeconomic modeling. Now we wish to be specific. In this chapter, we develop skills of model specification useful for policy applications. The problems posed can be analyzed with models of individual consumer choice. The series of problems—chosen to be similar in some ways but different in others—is intended to create a facility with modeling that carries over to the analysis of new problems in new settings. The modeling emphasizes the role of the budget constraint and predictions based upon income and substitution effects. We also show how utility-maximizing behavior leads to consumer demand curves as well as labor supply curves.

All the analyses in this chapter are drawn from one policy context: government programs that provide means-tested welfare grants to individual families or households. Welfare assistance is an important function of governments at all levels. Its extent depends in part on the amount of poverty in the economy and in part on how the society responds to it. Some simple statistics can convey the magnitude of this subject. In 1998, over 34 million Americans, almost 13 percent of the population, had incomes (excluding welfare assistance) below the official poverty level. In 1960, by contrast, almost 40 million were poor, representing about 22 percent of the population. Thus, over this 38-year period, there was a reduction in both the number and the proportion of the poor population. However, progress in the reduction came to a halt in 1973, when approximately 11 percent were poor. The proportion did not change much during the mid-to-late 1970s, but increased to 14–15 percent for most of the 1980s and early 1990s. The population in poverty reached a peak of 39 million people in 1993 (15.1 percent of the population), and then declined slightly from 1993 to 1998. The exact causes of all of these changes are not


yet well understood, but it is clear that some government programs to reduce poverty have had important effects.1 There is general consensus that great progress in reducing poverty within the elderly population has been made, largely through programs such as Social Security and Medicare that are not means-tested. There is also consensus, unfortunately, that little progress has been made in reducing poverty among the nonelderly population. The programs analyzed in this chapter focus largely on this latter group, and eligibility for them is restricted by means-testing. In 1998, federal expenditures on the major means-tested transfer programs came to $277 billion, and the states added $114 billion.2 However, according to one study, the boost from all U.S. welfare programs is only enough to remove about 20 percent of the nonelderly poor from poverty status.3 With so many resources expended, and so large a poverty problem remaining, it is clearly worthwhile to try to make every dollar spent as effective as possible. The skills developed here can be used to make a contribution to that goal.

Throughout this chapter, we focus primarily on how to spend a given number of welfare dollars efficiently. We focus on the issue of policy design, and study features of particular designs that determine how well the policy works. Rather than presenting one model, we emphasize comparison among alternative models. Each of the analyses presented involves thinking about different elements of consumer choice models and how to put them together. The elements include the nature of the constraints that limit an individual's utility and to some extent the nature of the utility function itself. We consider primarily various alterations in the budget constraints of welfare recipients owing to public policies, but also alterations in the form of the utility function in the model. We use these different specifications to make predictions about the responses of individuals to the policies and to evaluate the efficiency of the predicted responses.

We begin with a standard analysis that demonstrates that transfers in kind are inefficient. Then the real work of policy analysis begins. Actual policy designs typically include features not in the standard model, and it is important to understand how their inclusion changes the analysis. In order to extend our ability to analyze more features with more precision, we first discuss utility-maximizing responses to income and price changes known as income and substitution effects. Then we adapt the standard model to incorporate choice restrictions in the food stamp and public housing programs and examine the changes in predicted responses and their efficiency. Similarly, we review the standard microeconomics of the labor-leisure choice to explain labor supply. Then we examine the relationship between welfare programs and work incentives, including aspects of the design of a cash assistance program—the Earned Income Tax Credit.

1 For a discussion of recent changes, which cautions that it is difficult to separate the effect of welfare reforms from the strong growth of the economy, see Rebecca Blank, "Fighting Poverty: Lessons from Recent U.S. History," Journal of Economic Perspectives, 14, No. 2, Spring 2000, pp. 3–19.
2 Federal assistance consisted of 41 percent on medical benefits such as Medicaid, 27 percent on cash assistance such as the Earned Income Tax Credit and Supplemental Security Income, 12 percent on food benefits such as the Food Stamp Program, 10 percent on housing benefits, and lesser percentages for education, other social services, and energy aid.
3 See Isabel V. Sawhill, "Poverty in the U.S.: Why Is It So Persistent?," Journal of Economic Literature, 26, No. 3, September 1988, pp. 1073–1119.


Table 4-1 Federal Expenditure for Selected Welfare Programs, 1989–1998 (Millions of 1998 Dollars)

Fiscal year    Food benefits    Housing benefits    Cash aid
1989           27,410           20,950              43,628
1990           29,803           21,909              45,502
1991           33,545           22,712              50,634
1992           38,142           25,486              56,635
1993           39,266           27,051              60,245
1994           39,739           26,574              69,774
1995           39,365           26,689              72,662
1996           38,622           26,497              72,758
1997           35,927           26,853              72,971
1998           33,451           26,897              73,872

Source: 2000 Green Book, Committee on Ways and Means, U.S. House of Representatives, October 6, 2000, Table K-2, p. 1398.
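As a quick arithmetic check on the table (a short sketch; the result matches the growth figure cited in the next paragraph):

```python
# Total real spending across the three Table 4-1 categories,
# in millions of 1998 dollars.
total_1989 = 27_410 + 20_950 + 43_628
total_1998 = 33_451 + 26_897 + 73_872
print(round(100 * (total_1998 / total_1989 - 1)))  # 46 (percent growth)
```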


Standard Argument: In-Kind Welfare Transfers Are Inefficient

In the U.S. economy, all levels of government are involved in providing welfare payments (through a variety of programs) to eligible low-income persons. These payments are transfers in the sense that they take purchasing power away from taxpayers and transfer it to welfare recipients. Some of the programs, such as the Earned Income Tax Credit and those supported by Temporary Assistance to Needy Families (TANF) grants, provide cash payments; other programs, such as food stamps, Medicaid, and housing allowances, provide transfers in kind (i.e., transfers that can be used only for the purchase of specific goods).4 During the 10 years from 1989 to 1998, expenditures on the major means-tested programs shown in Table 4-1 rose 46 percent in real terms, and the in-kind expenditures for food and housing assistance exceeded the cash assistance until 1994.5 In 2000, an average of 17 million low-income individuals (just over 6 percent of the population) participated in the Food Stamp Program each month.6 Clearly the in-kind programs represent a very substantial

4 The 1996 legislation ended the federal Aid to Families with Dependent Children program and replaced it with state-run programs funded partially by federal block grants. While the states were allowed great discretion over the design of their programs, the federal legislation did mandate firm limits on the time a family could be on welfare.
5 The phrase "real dollars" means that the nominal number of dollars is adjusted to remove the effects of inflation. We review these procedures in Chapter 8.
6 Food Stamp Program participation rates are available on the website of the Food and Nutrition Service of the U.S. Department of Agriculture at http://www.fns.usda.gov/pd/fssummar.htm.


portion of the federal response to poverty. Yet the conventional wisdom among economists has always been that transfers of this type are inefficient.

The proof of this proposition usually follows one of two closely related patterns: (1) for a given number of tax dollars to be transferred to the poor, the utility level of recipients will be higher if the transfers are in cash rather than in kind, or (2) to achieve given utility-level increases among the recipients, fewer taxpayer dollars will be required if the transfers are in cash rather than in kind. Either argument suffices to demonstrate the inefficiency of in-kind transfers, since each shows the possibility of making one group better off with all others no worse off. We will demonstrate the proof by using the second of the two patterns.

In Figure 4-1a, let the line AB represent the budget constraint of a low-income individual who has $400 per month in income. The vertical and horizontal axes each represent quantities of a particular good. But to represent consumer possibilities properly on the diagram, all available spending choices must be shown. The "meat" and "tomato" axes used in the prior chapter are not appropriate if there are other goods such as clothing and oranges that the consumer could also choose. Thus the axes have to be defined carefully to divide all possible goods into one of two exhaustive categories, chosen to focus on the trade-off of concern to us. In this case, let us say that we are interested in the trade-off between "food" (the horizontal axis) and "all other goods" (the vertical axis). We assume the consumer will make utility-maximizing choices across the two categories as well as within each and that we can think of the quantity units on each axis as $1 worth of "food" and $1 worth of "all other goods."7 With these definitions, OA = OB = $400. Recall that the slope of a budget constraint is equal to minus the ratio of the prices of the goods on each axis. In this special example, the "price" of an additional unit of each category is $1, so the slope of AB is −1.

Now suppose we introduce a food stamp program that allows eligible recipients to buy for their own use any quantity they wish of $1 stamps at a price of $0.50 per stamp. Grocers must accept the food stamps at face value for food purchases, and the government gives the grocer cash reimbursement for the stamps collected. From the individual's perspective, this changes the budget constraint from AB to AC, where point C represents $800 worth of food. That is, the consumer eligible for food stamps could take the whole budget, buy food stamps with it, and consume OC = $800 worth of food—exactly twice as much as before. However, if the consumer buys only other things, the maximum that can be purchased is OA = $400 worth, as before. Thus the program, from the perspective of the eligible consumer, is equivalent to a price reduction for food (as if there is a sale in all participating

7 Formally, the situation that allows us to treat these two aggregates of goods as if each were a single identifiable commodity is referred to as "Hicksian separability." This situation applies when there are no relative price changes within each group. The only price changes allowed are those that affect the price of each good within a category proportionately the same. As long as the individual food items included in the food category are all covered by food stamps, our analysis will meet the requirement for Hicksian separability. For a more technical discussion of separability, see Hal R. Varian, Microeconomic Analysis, 3rd Ed. (New York: W. W. Norton & Company, Inc., 1992), pp. 147–150.


Figure 4-1. The inefficiency of in-kind transfers: (a) Taxpayers pay $60 of the $120 worth of food stamps used. (b) The $60 food stamp subsidy is like a $49 income supplement to this consumer.


food stores equal to one-half off the marked prices). The slope of the budget constraint changes from −1 to −1/2.

To maximize utility, let us say the consumer chooses the quantities shown at D, where an indifference curve is just tangent to the new budget constraint. The consumer is shown as receiving $120 worth of food and $340 worth of all other goods. In this example, the consumer pays $60 for this food and the government pays the other $60. This amount is represented on the diagram by the shaded line segment DE: the difference between the $340 left for all other goods under the food stamp program and the $280 that would be left after buying the same food at ordinary market prices. In short, taxpayers pay $60 to bring this consumer to the indifference curve UF.

To prove that this is inefficient, we must show that it is possible to make someone better off and no one worse off. Let us ask how many taxpayer dollars it would take to bring the consumer to utility level UF with a cash welfare grant. We illustrate this in Figure 4-1b. With this type of grant, the recipient does not face any price changes. The slope of the budget constraint remains at the original slope of AB, but it is "pushed out" until it reaches FG, where it becomes just tangent to UF. FA is the (unknown) dollar amount of the cash grant. Now we will show that FA, while unknown, is less than $60. First note that IE = FA (because the vertical distance between two parallel lines is constant). On the diagram, it is clear that IE is less than DE (= $60). But why does it come out this way? Note that the budget constraint FG, with slope determined by market prices, is steeper than the food stamp constraint AC. All points on UF with steeper slopes than AC lie to the left of D; therefore, the tangency with FG occurs to the left of D. But then FG must go through the line segment DE—if it crossed above D, it would violate the tangency condition. This completes the proof: Since a cash welfare program could achieve UF at a lower cost to taxpayers than food stamps, the food stamp program is inefficient.

It is useful to note a few characteristics of this result. First, the food stamp program causes the individual to consume more food and less of other things compared to the lower-cost cash welfare program yielding the same utility. Second, we know the food stamp recipient is indifferent between a cash grant of IE and a food stamp subsidy costing the taxpayer DE. Therefore, we can say that the recipient values each taxpayer dollar spent on food stamps at IE/DE of a dollar, or that the waste per taxpayer dollar spent on food stamps is 1 − IE/DE, or DI/DE, of a dollar. According to one study conducted in the early 1970s (before certain changes in the program discussed later), recipients on average valued each $1 of food stamp subsidy at $0.82.8 Applying this figure to our diagram, we see that a cash grant of approximately $49 would make the consumer just as well off as the $60 food stamp subsidy.

The basic result of inefficiency should not be very surprising. Consider the MRS of food for "everything else." The recipient chooses a consumption point such that the MRS equals minus the slope of AC. But every consumer not eligible for the program chooses an MRS equal to the ratio of the prices in the market, or minus the slope of AB. Therefore, the condition for exchange efficiency is violated, and we know there is room for a deal. The proof above simply illustrates one possible deal that would make some people (taxpayers) better off and all other people (food stamp eligibles) no worse off. The cash welfare program leaves all consumers (taxpayers and welfare recipients) facing the same market prices and thus, under the assumption of utility-maximizing behavior, leaves no room for a deal. By this reasoning, all in-kind welfare programs that cause recipients and nonrecipients to face different prices for the same good are inefficient.

8 See K. Clarkson, Food Stamps and Nutrition, Evaluative Studies No. 18 (Washington, D.C.: The American Enterprise Institute for Public Policy Research, 1975). One reviewer of this study argues that this waste estimate is exaggerated because of failure to measure recipient income correctly. See J. Barmack, "The Case Against In-Kind Transfers: The Food Stamp Program," Policy Analysis, 3, No. 4, Fall 1977, pp. 509–530.
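The size of the gap between the food stamp cost DE and the equivalent cash grant IE depends on the recipient's preferences. As a rough numerical sketch, suppose (purely for illustration, since the text does not specify a utility function) that the recipient has Cobb-Douglas utility calibrated to match the choices in Figure 4-1:

```python
# Hypothetical Cobb-Douglas consumer, U = F^a * O^(1-a) with a = 0.15,
# calibrated so the choice matches point D ($120 food, $340 other goods).
a, income = 0.15, 400.0

food_spend = a * income              # $60 paid by the consumer for food
food = food_spend / 0.50             # $120 of food at market value
other = (1 - a) * income             # $340 of all other goods
u_stamps = food**a * other**(1 - a)
taxpayer_cost = 0.50 * food          # segment DE: a $60 subsidy

# Cheapest cash budget reaching the same utility at market prices; a
# Cobb-Douglas consumer spends share a on food, so U(B) = B*a^a*(1-a)^(1-a).
B = u_stamps / (a**a * (1 - a)**(1 - a))
cash_grant = B - income              # segment IE

print(taxpayer_cost, round(cash_grant, 1))  # 60.0 vs. roughly 43.8
```

With this assumed utility function the cash-grant equivalent is about $44 rather than the $49 implied by the empirical study cited above; the qualitative conclusion, IE less than DE, is what the proof guarantees.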

Responses to Income and Price Changes

In order to expand our analytic ability, we examine an additional set of inferences that one can draw based upon utility-maximizing behavior and some modest empirical information.9 These inferences concern an individual's responses to changes in prices and budget levels and are referred to as income and substitution effects.

Response to Income Changes

Let us refer informally to the budget level as "income."10 In Figure 4-2a we show an individual's utility-maximizing choice for each of several different budget constraints representing different income levels. The constraints differ only in the total budget size; the identical slopes reflect the assumption that prices are constant. The locus of the utility-maximizing choices associated with each possible budget level is referred to as the income-expansion path. For every possible income or budget level (at the given prices), we ask what quantity of the good X the individual would purchase. In Figure 4-2b that relation, called an Engel curve, is drawn.11 The horizontal axis shows each possible income (or budget) level; the vertical axis measures quantity of X; and the curve shows the quantity of X that would be purchased at each possible budget constraint or income level. The one illustrated slopes upward and is for a normal good: as income increases, the individual increases the quantity of the good purchased. Some goods, such as spaghetti and potatoes, may have downward-sloping Engel curves (at least over a broad range of budget levels). These are called inferior goods: the individual buys less of them as income increases.

Note that a good that is normal for one individual could be inferior for another; it is simply a matter of individual tastes. Nevertheless, empirical observation reveals that some goods are treated as normal by most people, and others are generally treated as inferior. This modest knowledge can be important in predicting the responses to policy changes.

9 These are explained without calculus. A brief appendix at the end of the chapter contains the calculus version and explains the Slutsky equation for the income and substitution effects of a price change.
10 Later in the text we shall consider more fully the determinants of an individual's budget constraint. If there were no borrowing, lending, or saving, then the level of an individual's budget constraint for a given period would equal that individual's income in the period (including net gifts and bequests). However, the existence of opportunities to save, borrow, or lend makes the determination of actual budget constraints more complicated.
11 Sometimes it is more convenient to define the vertical axis as $P_X X$, the expenditure on X. This relation is called the Engel-expenditure curve.


Figure 4-2. Consumption response to income change: (a) Utility-maximizing choices. (b) Engel curve.

In order to give a more precise measure of the sensitivity of X consumption to changes in income (holding all other factors constant), we introduce the important notion of an elasticity. The elasticity of one variable X with respect to another variable Z, denoted $\varepsilon_{X,Z}$, is defined as the percentage change in X that occurs in response to a 1 percent change in Z. In mathematical terms, we can write this as

$$\varepsilon_{X,Z} = \frac{\Delta X / X}{\Delta Z / Z}$$


where $\Delta$ means "the change in" and thus, for example, the numerator $\Delta X/X$ is the percentage change in X. This expression is often rewritten in an equivalent way12:

$$\varepsilon_{X,Z} = \frac{\Delta X}{\Delta Z} \cdot \frac{Z}{X}$$

Thus the elasticity of X with respect to income I is

$$\varepsilon_{X,I} = \frac{\Delta X}{\Delta I} \cdot \frac{I}{X}$$

The properties of normality and inferiority can then be defined in terms of the income elasticity: a normal good is one whose income elasticity is positive, and an inferior good is one whose income elasticity is negative.

Among normal goods, sometimes distinctions are made between necessities and luxuries. A necessity is a normal good whose income elasticity is less than 1: the proportion of income spent on it declines as income rises. Individuals often treat food and medical services as necessities. To say that food is a necessity means operationally that an individual spends a decreasing proportion of income on it as income increases (everything else held constant).13 A luxury good, on the other hand, is a normal good whose income elasticity is greater than 1. For many people, sports cars and yachts are considered luxuries. One interesting good to mention is "everything"; it is normal with an income elasticity of 1, since the same proportion of the budget (100 percent) is always spent on it.14 The reason for mentioning this is to suggest that the broader the aggregate of goods considered, the more likely it is to be normal with elasticity close to 1.

Empirical information about a good's normality or inferiority can be quite useful in predicting responses to changes in budget constraints (such as those caused by new policies). Figure 4-3 illustrates this for food consumption (in the home). A low-income family with a $10,000 annual budget is shown at point A initially spending $1800 on food. Suppose the family qualifies for an income supplement of $3000, shifting its budget constraint out so that the vertical intercept is now $13,000. What can we say about the effects of this change on its food consumption? If we know that food is a normal good, then we know that food consumption will increase and therefore the utility-maximizing response on the new constraint will lie to the right of point B, or on the BC segment. If we know that "all other goods" is also normal, then its consumption must increase as well, and we can narrow our prediction to the segment BD, where both quantities increase.

12 The elasticity definition may also be expressed in calculus terms: $\varepsilon_{X,Z} = (\partial X/\partial Z)(Z/X)$.
13 Sometimes it is useful to speak of aggregates of certain goods; in fact, we have already done so several times. "Food" is not a single good but refers to an aggregate category of goods. How do we know whether an individual consumes more food? Typically, we measure the expenditures on food and calculate an Engel-expenditure curve.
14 Saving is, of course, a legitimate possibility. It can be thought of as spending on future-consumption goods, so it is included as part of "everything."


Figure 4-3. Predicting a new food expenditure level as a result of an income supplement depends on empirical knowledge about the income elasticity.

Suppose we have a little more information—say that food is a necessity. If the income elasticity were precisely 1, the straight line from the origin through point A would be the income-expansion path: it shows all bundles like A for which food comprises 18 percent of the bundle's value (at current prices). This line is thus the dividing line between necessity and luxury goods. It intercepts the new constraint at point E, where food expenditure is $2340 (= 0.18 × 13,000). If food is a necessity, meaning its income elasticity is less than 1, then we can predict that the response must lie to the left of our dividing line, or on the BE segment.

Finally, suppose we have an estimate that the actual income elasticity for food in the home for this family is 0.15.15 Then we could calculate a specific point estimate on the new constraint. Since income is increasing by 30 percent, we estimate that food spending will increase by (0.15)(30) = 4.5 percent, or to $1881 at point F. Of course we are rarely certain that we know an income elasticity precisely. Statistical methods can be used to calculate and report a "confidence interval" around the point estimate within which the "true" response is highly likely to be found.16 These examples illustrate how better empirical knowledge can be used to make more precise predictions.

15 This is the approximate estimate for average U.S. households in 1992, based on the author's calculations from more detailed estimates in the literature. See G. D. Paulin, "The Changing Food-at-Home Budget: 1980 and 1992 Compared," Monthly Labor Review, 121, No. 12, December 1998, pp. 3–32.
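The point-estimate arithmetic generalizes easily; a minimal sketch (the helper name is ours):

```python
# Percentage-change approximation: %ΔX ≈ income elasticity × %ΔI.
def predicted_spending(initial_spend, pct_income_change, income_elasticity):
    pct_spend_change = income_elasticity * pct_income_change
    return initial_spend * (1 + pct_spend_change / 100)

# A $3000 supplement on a $10,000 budget is a 30 percent income increase;
# with an income elasticity of 0.15, food spending rises 4.5 percent.
print(predicted_spending(1800, 30, 0.15))  # 1881.0, point F in Figure 4-3
```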


Figure 4-4. Income and substitution effects of a price reduction for the good X.


Response to Changes in a Good's Price

Once the effects of income changes on consumption patterns are understood at a conceptual level, it is relatively easy to deduce the effects of price changes. A price change can be understood as stimulating two different responses: a substitution effect and an income effect. In Figure 4-4, assume an individual consumes two goods, X and Y, and is initially at A, with budget constraint I0 and consuming X0 of the good X. Then say that the price of X falls. This causes the budget constraint I0 to rotate outward from the unchanged intercept on the Y-axis to I1. The individual's new utility-maximizing point is shown at C with increased consumption of X (now at level X1).

Does this model always predict that consumption of a good will increase if its price falls? The answer is no, and the reasoning can be seen more clearly if we break the response into two effects. To do so, we pretend that the individual moves from A to C in two steps: first from A to B and then from B to C.

The first step, from A to B, shows the substitution effect: the change in consumption (Xs − X0) that would occur in response to a new price if the individual were required to remain on the initial indifference curve. The substitution effect is also called the pure price effect or the compensated price effect (because real income, or utility, is held constant). It is determined by finding the budget constraint that has the same slope as I1 (reflecting the new price) but is just tangent to U0. This is shown as the dashed line Is, and thus the hypothetical compensation required to keep utility at the initial level is to take I1 − Is dollars away from the budget. The substitution effect of a price reduction for a good is always positive; the quantity consumed of that good increases. To show this, observe on the diagram that a price reduction for the good X always makes the slope of the new budget constraint less steep than the original one. This means that the tangency of the "compensating" budget Is to U0 must occur to the right of A (consumption of X increases), since all points on U0 with less steep slopes than at A lie to the right of it (because of diminishing MRS). By analogous reasoning, the substitution effect of a price increase is always negative: the quantity of that good consumed decreases. Thus the substitution effect on quantity is always in the opposite direction of a change in price for that good.

The change in consumption associated with the second step, from B to C, is shown as X1 − Xs. This is referred to as the income effect: the change in the quantity of the good caused purely by a budget change that brings the individual from the initial to the new utility level (holding prices constant). In this case, the changing budget level is from Is to I1.17 We have already analyzed changes like this in deriving the Engel curve. The income effect (of a price reduction) will be positive if X is a normal good and negative if it is an inferior good. Since the change drawn on the diagram is positive, we have assumed that X is a normal good.

Thus the total effect of the price change, including both the income and the substitution effects, is not clearly predictable without information about the good being analyzed. If we know the good is normal, the income and substitution effects work in the same direction: quantity consumed will change in the direction opposite that of the change in price. If the good is inferior, however, the substitution and income effects work in opposite directions. In these cases, it is typical for the substitution effect to outweigh the income effect, and thus price and quantity will move in opposite directions. But there may be a few goods, known as Giffen goods, such that the income effect predominates and we get the unusual result that price and quantity move in the same direction.18

At this point, a simple extension is to consider how an individual would respond to various possible price changes. The analogous question for pure income changes is used to derive the Engel curve; for price changes, one derives the demand curve. The demand curve shows the quantity of a good an individual will consume at each possible price. This is illustrated in Figure 4-5. Figure 4-5a shows how the utility-maximizing choices change as the price of X is successively lowered. Figure 4-5b is based upon these choices and shows the quantity of X that the consumer would choose at each possible price: the demand curve.

16 For example, a rule of thumb for statistically normal distributions is that a 95% confidence interval begins and ends approximately two standard deviations on each side of the mean or point estimate.
17 Any change has an income effect if it results in an altered utility (or "real income") level.
18 Sir Robert Giffen was an English economist (1837–1910) who observed that a rise in the price of potatoes in Ireland caused an increase in the quantity demanded.
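The decomposition can also be computed directly for a specific utility function. The sketch below uses an assumed Cobb-Douglas utility and illustrative numbers (none of them from the text) to separate the two effects of a price cut:

```python
# Income and substitution effects for an assumed utility U = sqrt(X*Y),
# income $100, PY = $1, and PX falling from $1.00 to $0.50 (illustrative).
import math

income, py = 100.0, 1.0
px0, px1 = 1.0, 0.5

x0 = 0.5 * income / px0                    # initial choice A: X = 50
x1 = 0.5 * income / px1                    # new choice C: X = 100
u0 = math.sqrt(x0 * (0.5 * income / py))   # utility on the initial curve

# Compensated (substitution-only) demand at the new price, holding u0:
xs = u0 * math.sqrt(py / px1)              # point B: X is about 70.7

print(round(xs - x0, 1))  # substitution effect, about +20.7
print(round(x1 - xs, 1))  # income effect, about +29.3 (a normal good)
```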


Figure 4-5. The ordinary demand curve shows an individual's utility-maximizing consumption choices in response to alternative prices.

Except for Giffen goods, the demand curve is downward-sloping, or equivalently the ordinary price elasticity of demand is negative; for example, a 1 percent increase in price will reduce the quantity demanded. Note also that as the price of X falls, the individual's utility rises.19

19 Closely related to the ordinary demand curve is a construct known as the compensated demand curve. It is derived analogously except that it is based only on the substitution effect as shown in Figure 4-4. Since the individual remains on the same indifference curve, utility is constant along the compensated demand curve. The compensated price elasticity of demand is always negative. Compensated demand curves are developed and used in the supplementary section of Chapter 6.


Figure 4-6. Demand curves may be elastic (DE) or inelastic (DI).

In Figure 4-6, we contrast two demand curves. Demand curve DE is elastic: small changes in price are associated with large changes in quantity. We call demand elastic when the price elasticity $(\Delta X/\Delta P)(P/X)$ is less than −1, for example, −1.5 or −2. Demand curve DI is inelastic: the quantity demanded does not change very much as price changes. We call demand inelastic when the price elasticity is between 0 and −1, for example, −0.1 or −0.7. Thus demand curves can take a variety of shapes, depending on individual preferences. We will make extensive use of demand curves in later chapters, but here we focus primarily on the workings of the income and substitution effects from which they derive.
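The arithmetic behind these labels is simple; an illustrative sketch with hypothetical numbers:

```python
# Arc price elasticity, (ΔX/ΔP)(P/X), with made-up quantities.
def price_elasticity(dQ, Q, dP, P):
    return (dQ / dP) * (P / Q)

# Price rises from $1.00 to $1.10; an elastic demand might fall 100 -> 85,
# an inelastic one only 100 -> 97.
print(price_elasticity(-15, 100, 0.10, 1.00))  # -1.5: elastic, like DE
print(price_elasticity(-3, 100, 0.10, 1.00))   # -0.3: inelastic, like DI
```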

Response to Price Changes of Related Goods

The above analyses concerned how the consumption of a good changes in response to changes in income levels and changes in its own price. However, the demand for a good is also affected by changes in the prices of related goods. These effects may again be thought of in terms of income and substitution effects, although the analysis is less straightforward. The reason for this is that the price that is changing might be that of either a substitute or a complement, and the effects differ.

Informally, substitute goods are those that can to some extent replace each other, like hamburgers and hot dogs. Complements are goods that go together, like mustard and hot dogs. If the price of hot dogs rises, then the consumption of hamburgers will go up but the



Figure 4-7. A price increase of a substitute (hamburgers) increases (hot dog) demand from D to D′, and that of a complement (mustard) reduces demand to D″.

consumption of mustard will go down (assuming ketchup and not mustard is used on the extra hamburgers). A good X is defined as a gross substitute for another good Y if an increase in $P_Y$ causes consumption of X to rise ($\Delta X/\Delta P_Y > 0$). Similarly, X is a gross complement to Y if an increase in $P_Y$ causes X consumption to fall ($\Delta X/\Delta P_Y < 0$). These effects are illustrated in Figure 4-7 as causing shifts in an individual's demand curve D for hot dogs. If the price of hamburgers rises (all other things being equal), the demand for hot dogs shifts to the right at D′: at any hot dog price, more are demanded than when hamburgers were cheaper. If instead the price of mustard rises (all other things being equal), then the demand for hot dogs shifts from D to the left at D″: at any hot dog price, fewer are demanded than when mustard was cheaper.20

We can also express the definition in terms of the cross-price elasticity. A measure of how much a good X responds to a change in the price of another good Y is its cross-price elasticity $\varepsilon_{X,P_Y} \equiv (\Delta X/\Delta P_Y)(P_Y/X)$. Since the second term in this elasticity definition is always positive, for gross substitutes $\varepsilon_{X,P_Y} > 0$, and for gross complements $\varepsilon_{X,P_Y} < 0$.

20 A concept closely related to gross substitutability (or complementarity) is that of Hicksian or net substitutability. Two goods are Hicksian or net substitutes (complements) if the effect of a pure or compensated price increase in one is to increase (decrease) the consumption of the other. Thus gross substitutability includes any income effects of the price change, while Hicksian or net substitutability does not. Under the gross definition, it is possible for good X to be a gross substitute for Y and at the same time Y could be a gross complement for X. If the price of coffee rises, less cream will be demanded: cream is a gross complement of coffee. But if the price of cream rises, the demand for coffee can increase if people simply drink it "blacker": coffee is then a gross substitute for cream. Under the Hicksian or net definition, there must be symmetry: if X is a net substitute for Y, then Y is also a net substitute for X.
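A parallel sketch for the cross-price case, again with hypothetical quantities:

```python
# Arc cross-price elasticity, (ΔX/ΔPY)(PY/X), with made-up numbers.
def cross_price_elasticity(dX, X, dPY, PY):
    return (dX / dPY) * (PY / X)

# Hamburger price rises $1.50 -> $1.65 and hot dog purchases go 50 -> 54;
# mustard price rises $2.00 -> $2.20 and hot dog purchases go 50 -> 48.
print(cross_price_elasticity(4, 50, 0.15, 1.50))   # +0.8: gross substitute
print(cross_price_elasticity(-2, 50, 0.20, 2.00))  # -0.4: gross complement
```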

Now let us return to the analysis of welfare policies.

Choice Restrictions Imposed by Policy

The earlier section suggests that considerable inefficiency may result from the use of in-kind welfare programs similar to the food stamp policy illustrated. The argument was based on a conventional model specification used in microeconomic analysis. But does this specification apply to actual programs? In this section, we are going to examine more carefully the nature of the budget constraint in our models. Alternate and more realistic model specifications of the budget constraint affect inferences about policy made with the model.

More specifically, we wish to highlight two general factors important to the design and evaluation of a policy: (1) the actual details of the policy design, and (2) the information and transaction costs necessary for the policy's operation and enforcement. Both factors can have important effects on the actual opportunity sets that individuals face, and thus they should be considered when model assumptions about the opportunity sets are chosen. We illustrate these factors with examples concerning the Food Stamp Program, public housing, and income maintenance programs.

Food Stamp Choice Restriction: The Maximum Allotment

The driving force of the inefficiency result in the standard model is the change in the price of food to food stamp recipients (but not others). This is what creates the room for a deal. But as presented, the model assumed that eligible individuals could buy all the stamps they wanted. A policy like that could easily lead to serious problems with illegal transactions—some individuals buying more stamps than they will use in order to sell them (at a profit) to others not legitimately eligible to obtain them. One way to resolve this problem is simply to limit the quantity of stamps to an amount thought reasonable given the family size. In fact, the actual Food Stamp Program sets a limit based on Department of Agriculture estimates for a low-cost nutritious diet. But then this changes the budget constraint in an important way.

In Figure 4-8 several budget constraints are shown. AB represents the budget constraint with no program; AC represents the unrestricted food stamp program as before; and ARS represents the restricted budget constraint. Under the last program, the individual can buy up to OT quantity of food with food stamps but above that limit must pay the regular market price for additional food. Thus from A to R the slope of the budget constraint reflects the food stamp subsidy, and from R to S the slope is identical with that of AB. The kink in the budget constraint occurs at the food stamp limit.

How does the individual respond to a restricted food stamp program? The answer depends on where the individual started (on AB) in relation to the limit OT. Suppose the individual was initially at point U, consuming over the program limit for food subsidy. Then the program has only an income effect. That is, given the quantity of food being consumed


Figure 4-8. The food stamp program with quantity restrictions.

at U, there is no change in the price of additional food at the margin. Under the highly plausible assumption that food and everything else are normal goods, the individual's new utility-maximizing point must lie between W and D (points where the consumption of both goods increases). In that case, no inefficiency arises because of the food stamp program. The behavior is the same as if a cash transfer program with constraint QRS were implemented, the amount of the cash being QA, the size of the subsidy for the maximum allotment.

Suppose, however, that the individual was initially under the subsidy limit at a point like V. The new utility optimum must then be on the PR segment of ARS.21 We explain this prediction below in two parts: first, that the individual prefers P to other points on AP and, second, that the individual prefers R to other points on RS. Together, these preferences imply that the individual's optimum point on ARS is on the remaining segment PR.

The first part is relatively easy. Since the price of additional food from V has been reduced, the substitution effect works to increase food consumption. Since food is a normal good, the positive income effect also works to increase food consumption. Both effects push in the same direction, so food consumption will increase from the level at V. Therefore, the

21 This argument is affected by the choice of V, where the quantity of everything else is greater than at R. A related argument could be made for an initial point where the quantity of everything else on ARS must be at least as great as at the initial point.


Figure 4-9. Utility levels along an ordinary budget constraint decrease with distance from the utility-maximizing point.

individual will choose a point on ARS that has more food than at V or is to the right of P. (The points to the left of P on AP are ruled out because they contain less food than at V.)

The second part is slightly trickier. The income effect on everything else is to increase it from V, since it also is a normal good. But the substitution effect is to decrease it. The effects go in opposite directions, and we do not know if the quantity of everything else will increase or decrease. However, even if it does decrease, it will not be less than the quantity at point R. This deduction follows from knowing the initial position V and that both goods are normal. To see this, imagine that the individual had QRS as a budget constraint: a pure income increase compared to AB. Then, by the normality property, the utility-maximizing consumption choice would be some point on QR with more of both goods than at V, like Z. As we move downward and to the right from Z along QRS, the individual's utility level is steadily decreasing (Figure 4-9). Therefore R must yield greater utility than any other point on RS. Since the segment RS is feasible with the actual food stamp budget constraint ARS, if the individual is on this segment at all he or she will be at R.

Note also, in Figure 4-9, that the kink point R will be a popular choice among utility-maximizing individuals with budget constraint ARS. We have drawn the indifference curves to show R as the maximum utility for an individual with this constraint. It is not a point of tangency, but rather a "corner" solution. R will be the utility maximum whenever the slope of the indifference curve at R equals or falls between the slopes of the segments AR and RS. It is like a trap. The individual, when considering choices moving upward from S, wishes to go past R (to Z) but is unable to do so. The same individual, when considering choices moving rightward from A, wishes to continue past R but cannot do so. From either direction, things get worse when the corner is turned. Thus R will be the utility-maximizing point for a number of individuals (all those with an MRS at R equal to or between the absolute values of the slopes of AR and RS).

To sum up the general argument, the individual who starts at V and has a new budget constraint ARS will choose a point on the PR segment. The choices on AP yield less utility than P itself, and the choices on RS yield less utility than R itself. Therefore, the utility-maximizing point must be one from the remainder, the PR segment.

These examples imply that the efficiency of a food stamp program is higher when a limit is put on the quantity of food stamps available to a household. We explain this below. A household consuming a small quantity of food initially—meaning a quantity less than the limit—may choose a consumption point like the ones illustrated by line segment PR of Figure 4-8. For this household, inefficiency arises exactly as in the standard analysis: a cash grant smaller than the food stamp subsidy would allow the household to achieve the same utility level by purchasing less food and more of other things. However, a household consuming a large quantity of food initially—meaning a quantity greater than the limit—treats the subsidy exactly like a cash transfer. For this latter household, the resulting allocation is efficient: there is no cheaper way for taxpayers to bring the household to the same utility level. Thus the total impact of putting a limit on the food stamp subsidy is to increase efficiency overall: low-food-consumption households are unaffected by the limit, and efficiency with respect to high-food-consumption households increases. The program is still not as efficient as a pure cash grant, but it is closer to being so.

Let us reflect on this conclusion for a moment. If the only change in the program is a limit on the subsidy per recipient, the total taxpayer cost must be reduced. Recipients with low food-consumption levels end up with the same subsidy and same utility levels as without the limit. Therefore, high-food-consumption households must have reduced subsidies. True, the value to them of each subsidy dollar is higher. But these households must end up with lower utility levels than they would have without the limit. Without a limit they could choose the same consumption levels as with the limit; since they do not do so, it must be that the without-limit choices yield more utility than the limited choices. (This is clear in Figure 4-8, where the household initially at U would prefer some point on RC to any of the points on WD.)

How then do we make the statement about increased efficiency, if some people lose because of the limits (eligible households with high food consumption) and others gain (taxpayers)? Recall the discussion in Chapter 3 about relative efficiency. We have simply pointed out that the limits result in fewer potential deals among the members of the economy. (All potential deals are unaffected by the limits except those between taxpayers and high-food-consumption food stamp recipients, which are reduced.)22 Whether one thinks


such a change is desirable depends as well on views of the equity of the change. In the latter regard, note that the high-food-consumption family still receives a greater subsidy than the low-food-consumption family. But the desirability of such a change is not really the issue here. The issue is how to model the effects of food stamps. The actual Food Stamp Program does include limits on the subsidy available to any recipient family. Thus the model that takes account of these limits, and moves one step closer to reality, is more accurate. So far, it leads to a higher efficiency rating compared to the less realistic standard model.

This examination of the effect of subsidy limits illustrates that good analytic use of microeconomic theory requires consideration of the specific features of a policy's design. Because the subsidy limits for food stamps have important effects on the recipient's opportunity set, or budget constraint, they affect the efficiency of the program. A model that ignores this feature, like our first one, is misspecified.

22 Limiting the maximum allotment to each eligible family can provide gains to taxpayers (dollar reduction in food stamp expenditures) that exceed the losses to food stamp recipients (the dollar amount recipients would be willing to pay to prevent the imposition of the limit). This is a version of the Hicks-Kaldor test for relative efficiency explained in Chapter 6.
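The clustering of utility-maximizing households at the kink R can be seen numerically. The sketch below assumes a Cobb-Douglas utility and, for concreteness, a $180 allotment purchasable at half price on a $400 income (the numbers used in the next subsection's example); households with mid-range food preferences all end up exactly at the kink:

```python
# Corner solutions on the kinked constraint ARS, with an assumed
# Cobb-Douglas utility U = F^a * O^(1-a).
import numpy as np

income, limit = 400.0, 180.0    # income and face-value allotment limit

def best_food_choice(a, n=200001):
    food_max = limit + (income - limit / 2)        # at most $490 of food
    food = np.linspace(0.01, food_max, n)
    cost = np.where(food <= limit,
                    food / 2,                      # segment AR: half price
                    limit / 2 + (food - limit))    # segment RS: market price
    other = np.maximum(income - cost, 0.0)
    utility = food**a * other**(1 - a)
    return food[np.argmax(utility)]

for a in (0.20, 0.25, 0.30, 0.35, 0.40):
    print(a, round(best_food_choice(a)))   # 160, 180, 180, 180, 196
```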

Food Stamp Choice Restriction: The Resale Prohibition

Another source of specification concern involves the information and transaction costs necessary for the policy's operation and enforcement. The standard model does not explicitly consider them. Implicitly it is assumed, for example, that recipients obey the rules of the program. But recipients may not obey the rules; they have incentive to participate in the illegal resale of the stamps. Accounting for this results in another change in the budget constraint. We illustrate this below.

In Figure 4-10a, we draw the food stamp budget constraint ARS as before. The recipient family shown chooses $120 worth of food and $340 worth of all other goods, under a plan that sells each $1 food stamp for $0.50 and places a limit of $180 worth of food stamps as the maximum purchase. Thus the maximum subsidy is $90, although the family shown receives a $60 subsidy.

Consider what this household would do if food stamps were freely exchangeable among individuals. Because it has the privilege of buying a $1 stamp for only $0.50, it will always buy the maximum allowable. Any stamp that it does not wish to use for its own food consumption can be resold profitably at its market value. The market value is approximately $1, since anyone trying to obtain a deeper discount (say $0.90) would be outbid by someone who would be satisfied with a very small discount that we will treat as negligible.

How does this new opportunity change the shape of the budget constraint? The segment RS is unchanged; the household uses all $180 of the stamps for itself, and any food consumption beyond the maximum allotment must be bought at ordinary market prices. However, to the left of R, the constraint changes to the bold segment QR. By forgoing $1 worth of food from R, the consumer can resell one stamp and have $1 more to spend on all other goods. By forgoing $2 worth of food, the consumer can resell two stamps and have $2


Figure 4-10. The incentive to resell subsidized food stamps: (a) The legal opportunity to resell food stamps would make recipients better off. (b) Illegal resale opportunities may tempt some recipients.


more for other things, and so on. But this is simply the ordinary market trade-off: the new constraint QRS is equivalent to the household receiving a cash grant of $90 (the difference between the face value of the maximum allotment and the household's subsidized cost for it). Our illustrative household, initially at D, would now choose a point on the QR segment like Z. Point Z must have more of all other goods, since both income and substitution effects work in the same direction. Food expenditures could increase or decrease (the income effect is to increase; the substitution effect to decrease), but the new utility-maximizing choice must be on the portion of the QR segment that has $340 or more of all other goods.

Of course, food stamps are not freely exchangeable. It is, in fact, illegal for food stamp recipients to sell their stamps to others and for noneligible households to use them. Most people obey the rules out of respect for the law, but some will be tempted by the room for a deal. The food stamp recipient is tempted by any offer to purchase unused stamps for an amount greater than cost. A potential purchaser will not offer the full market value because of the risk of being caught but may well offer an amount greater than the seller's cost. Thus the effective budget constraint for the recipient will not be QRS but rather like YRS in Figure 4-10b: a price for illegal food stamp transactions yielding a slope between that of the normal subsidy constraint AR and the ordinary market value QR.

How do information and transaction costs enter the analysis? The slope of the budget constraint YRS is determined in part by the resources allocated to governmental enforcement efforts to prevent the illegal transactions. At one extreme, vigorous enforcement efforts lead to an effective constraint closer to AR. (A high probability of being punished or an inability to use illegally purchased stamps dries up the illicit market.) At the other extreme, little or no enforcement effort leads to a constraint closer to QR. (Easy use of the stamps by ineligibles raises the illicit market price of the stamps.)

What are the implications of illegal trading possibilities? The eligible high-food-consumption household is unaffected: it prefers to use the maximum allotment for its own consumption, as before. However, the eligible low-food-consumption household may modify its behavior. Suppose it does engage in illegal trading and ends up with a point like X on YRS. Obviously, the buyer and seller consider themselves better off. Then we can say that the closer X is to Z (the choice if trading stamps were legal, as shown in Figure 4-10a), the more the exchange inefficiency between taxpayers and eligible households is reduced.

Why then should this be prevented? One reason might be increased taxpayer cost. If the eligible household is on AR anywhere to the left of R itself, then illegal trading is accomplished by increasing its purchase of food stamps and thus taxpayer cost. However, if the eligible household is at R without any trading, then the resales occur at no additional taxpayer cost. (The full subsidy is used without illegal trading.) As a matter of equity, it is not obvious that households having identical eligibility should receive different subsidies dependent upon their food purchases. In fact, changes in the Food Stamp Program discussed below essentially obviate this concern, because they ensure that participating households with identical eligibility will all receive the maximum subsidy.
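The three constraints in Figure 4-10 can be summarized in a few lines, treating the resale price as a parameter (a sketch using the $180/$0.50 example; the $0.56 street price is the empirical figure reported below):

```python
# Attainable "all other goods" spending for a chosen food level:
# $400 income, $180 allotment at $0.50 per stamp dollar.
def other_goods(food, income=400.0, limit=180.0, resale_price=None):
    if food >= limit:                     # segment RS: same in all regimes
        return income - limit / 2 - (food - limit)
    if resale_price is None:              # obey the rules: segment AR
        return income - food / 2
    # Otherwise buy the full allotment, consume `food`, resell the rest.
    return income - limit / 2 + resale_price * (limit - food)

print(other_goods(120))                      # 340.0: point D, no resale
print(other_goods(120, resale_price=1.00))   # 370.0: legal resale (QR)
print(other_goods(120, resale_price=0.56))   # 343.6: illicit market (YR)
```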

Figure 4-11. The household budget constraint with no purchase requirement for food stamps is AJK.

The model we have been discussing most closely represents the actual Food Stamp Program as it operated from 1964 to 1978, when participating families were required to purchase the stamps. However, the Food Stamp Act of 1977 eliminated the purchase requirement as of 1979. Each household is now given a free monthly allotment of food stamps based on its income and family size. Under these rules, we can make two inferences: (1) illegal resales of stamps do not increase taxpayer subsidies to recipients, and (2) illegal resales unambiguously reduce food consumption. The effect of this on a household is illustrated in Figure 4-11. In the diagram AB is the original budget constraint. ARS is the budget constraint under the old Food Stamp Program, and AJK is the budget constraint under the revised Food Stamp Program with no purchase requirement. With AJK as the constraint, the household will always be on the JK segment. Starting from A, the household gives up nothing to increase its food consumption to OT at J. Since additional food increases utility, the household will always move at least to J. Thus all eligible households will always use the full allotment of stamps. Think of the constraint NJK as an unrestricted cash grant. If the household prefers a point below J on the constraint, its behavior with respect to illegal trading of food stamps will be identical to that of the high-food-consumption household we analyzed previously. That is,


it will prefer to use the full allotment for its own food consumption. But if the household has low preferences for food and prefers a point above J on NJ, then J is the best it can do under food stamps without illegal trading.23 If LJ represents the illicit market opportunities, this household might take advantage of them and thereby reduce its food consumption. Under the revised Food Stamp Program, since all families use their full allotment, any family engaging in illegal selling of stamps must be reducing its food consumption. However, there is no extra subsidy cost to the government from this trading. It is not known empirically exactly how much illegal trading actually occurs. However, its existence is not disputed. One study reported that 4.2 percent of supermarkets and 12.8 percent of small grocery stores were involved in illegal trafficking in 1995. The 902 trafficking investigations in fiscal 1994 involved in total $224,503 worth of food stamps sold for $124,779, implying a price of $0.56 per food stamp dollar.24 Similarly, a newspaper description of illegal trading in Oakland, California, reported sellers receiving $0.50 for their food stamp dollars. According to this article, the buyers are middlemen who sell the stamps to supermarkets or grocery stores for a slight profit; these stores, in turn, trade them back to the government for their full value.25 The view of food stamps developed here is quite different from the first standard model, and it is worthwhile to summarize the differences. The standard model suggests that food stamps are inefficient relative to a cash grant essentially because of effective food price differences between participants and nonparticipants. This model does not get at the essence of any inefficiency caused by the modern food stamp program, because that program does not directly create the type of price differences analyzed by the model. Instead, we must focus on choice restrictions: the food stamp limits per family and the rules preventing recipients from selling them. Even when there are price differences caused by an “in-kind” program, the effective choice restrictions are usually key determinants of the program’s efficiency. The food stamp limit is a key determinant of efficiency. The potential inefficiency only arises for families with desired food consumption below the limit. The smaller the limit, the more families will have desired food consumption greater than the limit. These families fully utilize the stamps, pay ordinary market prices for their marginal consumption of food, and thus cause no inefficiency. Therefore potential inefficiency depends on the (statistical) distribution of food preferences among the eligible families relative to the limit (i.e., how many will wish to be below the limit, and by how much). Given the potential inefficiency, its realization depends on the extent of illegal trading originating from these families. The more the illegal trading, the less the inefficiency! Since it is costly for the government to prevent illegal trading, one might wonder why any govern-

23 This characterizes about 10–15 percent of food stamp recipients according to one estimate. See Thomas M. Fraker et al., “The Effects of Cashing-Out Food Stamps on Household Food Use and the Cost of Issuing Benefits,” Journal of Policy Analysis and Management, 14, No. 3, Summer 1995, pp. 372–392. 24 These figures are from notes 66 and 152 in E. Regenstein, “Food Stamp Trafficking: Why Small Groceries Need Judicial Protection from the Department of Agriculture (and from Their Own Employees),” Michigan Law Review, 96, June 1998, pp. 2156–2184. 25 Oakland Tribune, “Local Black Market for Food Stamps Widespread,” March 15, 1993.


ment effort should be made to prevent such trades (or why not simply allow recipients to sell the stamps if they wish?). In fact, the efficiency implication above may still be drawn from too simplistic a model, despite accounting for the food stamp limits and illegal resales. The different model specifications we have discussed vary only in terms of the effective budget constraint for recipients. All these variants assume that recipients act in their own best interests (maximize utility) and that the utility of nonrecipients is not directly affected by the recipients’ consumption choices. However, there are reasons to suspect that these assumptions should be open to question as well. Rather than presenting more formal analytical models at this point, we simply introduce these issues below. To this point, we have used the terms “recipient,” “household,” and “family” interchangeably. Implicitly, we have assumed that the individual receiving the food stamps (e.g., a mother) acts in the best interests of those for whom they are intended (e.g., the mother, her children, and any other covered family or household members). We have assumed that there is such a thing as a “household” or “family” utility function representing the preferences of its members. But the formal economic theory of utility maximization is about individual behavior, not group behavior.26 What if some recipients do not act in the best interests of their household members, for example, a parent who trades stamps for drugs and brings little food home for the children (and perhaps herself)? This concern offers some rationale for making the resale of food stamps illegal and enforcing the restriction. In economics, the problem of motivating one person (or economic agent) to act in the interests of another or others is called the principal-agent problem. In this case, taxpayers through the government are the principal, the person receiving food stamps is the agent, and the covered members of the household are those whose interests the principal wants protected. Food stamp program regulations are the “contract” by which the principal and agent agree to do business. Presumably most of the agents do act in the best interests of their household members. Nevertheless, the “no resale” provision may be a reasonable one from the principal’s perspective. However, this restriction could prevent actions that truly are in the best interests of the covered members, for example, careful economizing on food in order to free up some funds for special medical or educational needs. As analysts, we would want to know if the covered members achieve more gains than losses from the restriction. Is there any other reason why our policy should prevent recipients from choosing a lower food consumption than the allotted amount? What happens if I and some of you say that our utilities depend on how recipients use their food stamps? If we are willing to pay eligibles specifically to increase their food consumption, then this creates room for a deal in which recipients increase their food consumption beyond the amount that they would choose if we did not care. In a supplemental section later in this chapter, we explore the significance of interdependent preferences. While its empirical significance remains an important issue for further research, it can in theory provide another rationale for one group subsidizing or penalizing certain consumption choices of another. 26 For a survey on the economics of the family, see T. C. 
Bergstrom, “Economics in a Family Way,” Journal of Economic Literature, 34, No. 4, December 1996, pp. 1903–1934.


There are other questions we could raise about food stamps; examples are the administrative costs and the difference between increasing food consumption and increasing nutritional intake. But keep in mind that our primary purpose is to develop skills in microeconomic policy analysis. We have used the food stamp issue to illustrate how alternative specifications of models of consumer behavior bear on policy analysis. Different model assumptions (the specific design features of a policy, the information and transaction costs relevant to a policy’s operation, and the nature of utility functions) generated insights about how consumers will respond to the Food Stamp Program and how to evaluate that response.

Public Housing Choice RestrictionsS

Another type of choice restriction can be seen in certain public housing programs. In these, a family may be given a take-it-or-leave-it choice. In Figure 4-12a, we illustrate how such a choice affects the budget constraint. Let the original budget constraint, with no program, be AB and assume that the family initially maximizes utility at C, thereby consuming OG of housing. (Think of each consumption unit of housing in terms of a standardized quality, so higher-quality housing is measured as more of the standardized units.) The public housing authority then tells the family that it may have an apartment that is of the same size as its current one but of better quality (thus more housing) and, furthermore, that the rent is less than it currently pays (because the housing is subsidized). Thus the family's new budget constraint is simply the old one plus the single point E. (Note that the family can consume more of "other things" at E because of the reduction in rent.) By the more-is-better logic, the family must prefer E to C, and it will accept the public housing offer (remember that, by assumption, the quality really is better).

Is this efficient or inefficient? The argument we will make is that the take-it-or-leave-it choice is not necessarily inefficient. We will show the possibility that the indifference curve through E has a slope identical to the slope of AB, or (equivalently) that the individual at E could have an MRS equal to the ratio of the market prices. If that is so, there is no room for a deal and the public housing program is efficient.27 To make this argument, let us construct a hypothetical budget constraint AJ to represent an unrestricted housing subsidy program with the same percentage subsidy as at E. We will identify two points on AJ: one inefficient because it has too much housing (the slope of the indifference curve at that point is flatter than AB), and the other inefficient because it has too little housing (the slope of the indifference curve at that point is steeper than AB). As one moves along AJ from one of those points to the other, the slope of the indifference curves through them is gradually changing. Therefore, there must be some point between them where the slope equals that of AB, and at that point there is neither too much nor too little housing. The point is efficient, and it could be E.

First, let us show that there is a point on AJ that is inefficient because it has too much housing. This point is an old friend by now: It is the utility-maximizing choice of the

27 We ignore here the possibility of interdependent preferences discussed in a later section.

Figure 4-12. Public housing choice restrictions: (a) The allocation at F is inefficient with too much housing. (b) The allocation at D is inefficient with too little housing.


household free to choose any point on AJ. We label this F in Figure 4-12a, where the indifference curve is tangent to AJ. This case is identical with the standard model of food stamps examined earlier. There is room for a deal because the household’s MRS of other things for housing (the absolute value of the slope of the indifference curve) is less than that of ordinary consumers who buy at market (unsubsidized) prices. This household would be willing to give up housing for cash (other things) at a rate attractive to the nonsubsidized household; both could be made better off by trading. The point F is inefficient because the household has “too much” housing relative to other things, given the market prices. Now let us examine another point. Suppose the housing authority simply offers to subsidize the current apartment (at the same rate as along AJ), which on the diagram is shown at point D. This point also is inefficient, but because it results in too little housing. To see this, in Figure 4-12b we have added a cash transfer program KL through D. Since the family started at C, it would choose a point from those on KL to the right and below D as long as both goods are normal.28 Therefore, D is not the utility maximum on KL. As it lies above the KL maximum, the slope of the indifference curve through D must be steeper than the slope of KL (it equals the slope of AB, the market trade-off rate). The household at D would trade cash (other things) for housing at a rate attractive to nonsubsidized households; both could be made better off. Thus D is inefficient because it has too little housing relative to other things, given the market prices. We have shown that the slope of the indifference curve through F at AJ is too flat for it to be efficient and that the slope of the indifference curve through D on AJ is too steep for it to be efficient. But as we move upward along AJ from F to D, the slopes of the indifference curves are gradually changing from the flat slope at F to the steep slope at D. Therefore, there must come a point between F and D where the slope of the indifference curve precisely equals the slope of AB, the market trade-off. This point is an efficient allocation. If the public housing authorities offer it as the take-it-or-leave-it choice, the public housing is efficient. Whether actual public housing programs are or are not efficient is a matter for empirical determination. The analysis we presented serves the function of raising the issue. According to one study of federal housing programs, the degree of inefficiency is probably less than 5 percent (the average public housing recipient would require a cash grant of at least $0.95 to forgo a $1.00 housing subsidy).29 As with food stamps, any inefficiency from too much housing might be offset by interdependent preferences. Meanwhile, we have seen that microeconomic theory can be used to analyze the take-it-or-leave-it choice, another type of budget constraint that is sometimes created by public policy.
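The intermediate-value argument can also be checked numerically. The sketch below assumes hypothetical Cobb-Douglas preferences and illustrative numbers for income, the housing price, and the subsidy rate (none of these values come from the text); it computes the free-choice point F, where the MRS equals the subsidized price, and the point E on AJ, where the MRS equals the market trade-off and the take-it-or-leave-it offer is therefore efficient.

```python
# A numerical sketch of the take-it-or-leave-it argument. The Cobb-Douglas
# form U = h**ALPHA * y**(1 - ALPHA) and all numbers are hypothetical
# assumptions for illustration: h is standardized housing, y is dollars of
# all other goods (price $1), P_H is the market price of housing, and S is
# the fraction of the rent paid by the housing authority.
ALPHA, INCOME, P_H, S = 0.3, 1000.0, 1.0, 0.4

def mrs(h, y):
    # MRS of other goods for housing: (dU/dh) / (dU/dy)
    return (ALPHA / (1.0 - ALPHA)) * (y / h)

# Point F: the household's free choice along the subsidized constraint AJ.
# Its MRS equals the subsidized price (1 - S) * P_H, flatter than the
# market trade-off, so F has "too much" housing.
h_f = ALPHA * INCOME / ((1.0 - S) * P_H)
y_f = INCOME - (1.0 - S) * P_H * h_f
print(round(mrs(h_f, y_f), 10))   # 0.6 = (1 - S) * P_H < P_H

# Point E: the point on AJ where the MRS equals the market price P_H.
# Offered take-it-or-leave-it, this allocation leaves no room for a deal.
h_e = ALPHA * INCOME / (P_H * (1.0 - ALPHA * S))
y_e = INCOME - (1.0 - S) * P_H * h_e
print(round(mrs(h_e, y_e), 10))   # 1.0 = P_H: efficient despite the subsidy
```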

28 A slightly weaker version of this argument can be made without relying on the normality of the goods. Find the point at which the indifference curve through C intersects AJ (this will be to the left of D). At that point, too little housing is being consumed (the slope of the indifference curve is steeper than at C and therefore steeper than the cash transfer constraint KL). The rest of the argument is the same as above. 29 For further discussion of this issue along with some empirical work, see H. Aaron and G. Von Furstenberg, “The Inefficiency of Transfers in Kind: The Case of Housing Assistance,” Western Economic Journal, June 1971, pp. 184–191.


Income Maintenance and Work Efforts

The analyses so far have another source of oversimplification in their partial equilibrium nature. This means, roughly, that analytic attention is focused on only one of the resource allocation choices that is affected by a program. We have been focusing only on the analysis of exchange efficiency. Of course, the other questions of efficiency—whether the right goods are being produced and the right resources are being used to produce them—are equally important. General equilibrium analysis requires that all the resource allocation choices affected by a program be considered. Although we shall continue to defer general discussion of these other efficiency issues until later chapters, it is convenient to introduce one element of them through analysis of the labor-leisure choice of recipients. All transfer programs affect this choice and can cause inefficiency through it.30

The primary purpose of this section is to reinforce the usefulness of theory as a guide to policy design and evaluation. In particular, we build up to the idea of using the Earned Income Tax Credit (EITC) to provide aid and to encourage work efforts among low-income families. First, we review the general labor-leisure decision that all individuals face. Then we show how current welfare programs discourage work. Finally, we explain how the EITC can reduce the work disincentives of current programs.

The Labor-Leisure Choice

To this point we have referred to individuals having a budget constraint or an initial endowment without being very specific about where it comes from. The basic sources of individual wealth are gifts: material things, such as inheritances; childhood upbringing, such as schooling received; and natural endowments, such as intelligence. Individuals use these gifts over time to alter their wealth further, increasing it through labor or capital investments (the latter includes skill development or "human capital" investments, such as advanced schooling)31 and decreasing it through consumption. For now, we shall focus only on the labor market decisions of individuals as if they were the only source of wealth.

One constraint individuals face in earning labor income is the wage offered to them, but the more important constraint is time. There is only so much time available, and a decision to use it to work means not using it for other things, which we will refer to here simply as leisure. Presumably every individual has preferences about how much to work, given the income that can be derived from the work and the opportunity costs of forgone leisure.32

30 We note that taxpayer choices also are affected by transfer programs, because all methods of taxation cause distortions in prices that might be efficient otherwise. This is discussed in Chapter 12.
31 This is discussed further in Chapters 8 and 19.
32 For those of you thinking that individuals have little choice about how much to work (e.g., most jobs are 40 hours per week, and an individual can only take it or leave it), consider the many sources of flexibility. Many people work only in part-time jobs, and others hold two jobs simultaneously. Some jobs have more paid vacation than others. Sometimes the decision to pursue certain careers is heavily influenced by the time dimension, for example, most teaching jobs are for 9 to 10 months (and pay correspondingly). If one thinks broadly about work during an individual's lifetime, there are important decisions about when to begin and when to retire. There are decisions about how hard to seek new employment during a spell of unemployment. There are very subtle decisions such as how hard to work when working; some people prefer to take it easy on the job in full knowledge that this usually slows down promotions or raises that come from working more. In short, there is a great deal of choice between labor and leisure.


We represent the labor-leisure choice in Figure 4-13a by using a diagram virtually identical to those we have been using all along. Leisure is measured on the horizontal axis, and dollars for everything else on the vertical axis. The budget constraint is shown as AB, and its slope equals minus the wage rate. (Since more work is shown on the diagram as a movement to the left, the budget size goes up in accordance with the wage rate as the individual works more.) Thus the price per unit of leisure equals the wage. The location of the constraint depends upon the time framework selected by the analyst. If we select a 1-year period, the maximum leisure must be 1 year, represented by OB. The dashed vertical line at B indicates that it is impossible to choose a point to the right of it. Point C, let us say, represents the utility-maximizing choice of some individual. Consider the response of an individual to a change in the wage rate, say an increase. This changes the budget constraint to one like DB. How will the individual respond? This is clearly a price change involving both income and substitution effects. The rise in wages is a rise in the price of leisure (relative to all other things), and thus the substitution effect works to decrease leisure (or increase work). Real income increases because of the wage rise; if leisure is a normal good, the income effect acts to increase its consumption (or reduce work).33 Thus the income and substitution effects work in opposite directions, and the net effect cannot be predicted on purely theoretical grounds. The labor supply curve of an individual is the locus of points relating the choice of hours worked to each possible wage rate. Empirically, it is often thought to be “backwardbending” as in Figure 4-13b. That is, as the wage increases from a low initial rate, the substitution effect outweighs the income effect: The individual finds it more important to earn income for basic necessities than to reduce work effort and live on almost nothing. But as the wage rises past some point, the income effect begins to outweigh the substitution effect: The individual may feel that he or she has earned the right to spend more time relaxing and enjoying the fruits of a big paycheck.
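To see why theory alone cannot sign the response, consider a concrete and purely hypothetical specification. With Cobb-Douglas preferences over consumption and leisure the two effects cancel exactly, so hours worked do not vary with the wage at all; the sketch below verifies this by brute-force search over the budget constraint.

```python
import numpy as np

# Hypothetical Cobb-Douglas preferences over consumption c and leisure l:
# U = ln(c) + A * ln(l), with annual time endowment T and c = w * hours.
# This specification is an illustrative assumption, not one from the text.
T, A = 8760.0, 1.5

def hours_chosen(w):
    leisure = np.linspace(1.0, T - 1.0, 200_000)  # candidate leisure levels
    utility = np.log(w * (T - leisure)) + A * np.log(leisure)
    return T - leisure[np.argmax(utility)]        # utility-maximizing hours

for w in (5.0, 10.0, 20.0):
    print(w, round(hours_chosen(w)))  # about 3504 hours at every wage

# With these preferences hours worked are T / (1 + A) regardless of w: the
# substitution effect of a higher wage exactly offsets the income effect.
# Preferences in which the income effect dominates at high wages produce
# the backward-bending supply curve of Figure 4-13b.
```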

33 Leisure is generally considered a normal good because it is a complement to a highly aggregated (and undoubtedly normal) good: consumption. More simply, it often takes more time to consume more (e.g., theater, restaurants, shopping, vacation trips, reading).

Work Disincentives of Current Welfare Programs

Let us now consider the population of low-income and low-wealth individuals who are, or might be, eligible for the various welfare programs we have mentioned: for example, TANF (replacing the former AFDC), food stamps, Medicaid, and local housing assistance. Historically, many of these programs were designed in such a way that benefits available to recipients were reduced dollar for dollar in response to any increases in the recipient's earned income. This created some bizarre incentives for the recipients.

Figure 4-13. The labor-leisure choice and labor supply curve.


For example, consider a family receiving under the old system both AFDC payments and local housing assistance. If a part-time job were offered to one of the family members, not only would AFDC payments be reduced by the amount of the earnings, but so would the housing assistance; by accepting the job, the family could lose twice as much money as the amount earned! Needless to say, welfare recipients were hardly enthused by the moral encouragement of some to increase “self-reliance.” Whereas most of the programs have been modified individually to allow some net increases in family income through earnings, as a group they still have close to a “100 percent tax” (i.e., for families eligible for several of them). We can see the effects of this situation on a labor-leisure diagram. Figure 4-14a shows an ordinary budget constraint AB for a low-income family. With no welfare programs, the family might choose a point like C—where, say, the mother works part time but spends most of her time taking care of her two young children. Let us say that the current amalgam of programs can be represented by the line segment DE. This means that the family receives $7000 in annual benefits if the mother does not work at all, and net income stays at $7000 as work effort increases from E until D (net welfare benefits are reduced dollar for dollar by any earnings). At point D, welfare benefits have been reduced to zero and the family goes off welfare, thus retaining the full earnings from additional work effort from D until A. It is hardly surprising that this family maximizes utility by completely withdrawing from the labor force, at E. Both the income and the substitution effects act to increase leisure. (Higher real income works to increase leisure; the lowered effective wage reduces the price of leisure, which acts to increase its consumption.) We thus know that the family will choose a point on the new effective constraint ADE with greater leisure than at C; this means it is on the DE segment. But although the slopes of indifference curves approach zero (the slope of DE) to the right because of diminishing MRS, they will not reach it if the nonsatiation assumption holds. The utility level increases as the family moves to the right along DE, but it never reaches a tangency position; instead, it runs into the boundary condition at E, which is the maximum attainable utility.34
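A minimal sketch of the stylized benefit schedule just described (the function and the sample earnings values are ours; the $7000 guarantee and dollar-for-dollar reduction are from the text):

```python
def net_income(earnings, guarantee=7000.0):
    """Stylized traditional welfare plan from the text: a $7000 annual
    guarantee reduced dollar for dollar by any earnings (a 100 percent tax)."""
    benefit = max(0.0, guarantee - earnings)
    return earnings + benefit

for earnings in (0, 3500, 7000, 10500):
    print(earnings, net_income(earnings))
# Net income is flat at $7000 along segment DE; only past point D (earnings
# above $7000) does the family keep anything from additional work.
```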

34 We note also that some families initially working with income above the welfare minimum will withdraw if the welfare plan becomes available to them. This will occur whenever the initial indifference curve crosses the welfare minimum height to the left of point E: If the family is indifferent to the initial position and the minimum income with some work effort, it must strictly prefer the minimum income and no work effort.

The Earned Income Tax Credit (EITC)

The EITC is intended to reward work efforts of low-income families, as well as contribute toward the reduction of poverty. Originally adopted in 1975 to reduce the burden of the social security payroll tax on low-income working families, it was expanded significantly in the 1993 Omnibus Budget Reconciliation Act. The EITC covers all low-income taxpayers, including those without children (although the credit amount depends upon family size). For families with two or more children the EITC (as of 1999) provides a tax credit equal to 40 percent of earned income up to a maximum of $3816 (at an income level of $9540).


Figure 4-14. Welfare and work incentives: (a) Traditional welfare programs have poor work incentives. (b) The EITC increases work incentives for welfare families.


The credit is gradually reduced at a rate of 21.06 percent for earned income above $12,460, until it is fully phased out at $30,580. The tax credit is refundable, which means that the government pays it whether or not the qualifying individual owes any federal income tax. Because this program is administered through the Internal Revenue Service like ordinary tax collection, it may not have the stigma associated with other welfare policies.35

The EITC budget constraint for a single mother, two-child family is shown as HGFE in Figure 4-14b. As before, BE represents a welfare benefit level of $7000 in the absence of work, and welfare benefits are assumed to be reduced dollar for dollar for earnings up to $7000. However, the EITC provides a new credit of 40 percent of earnings. Thus as the mother begins to work, net income is not flat but rises above $7000. At $7000 earnings, welfare benefits are zero but the EITC is $2800 and total income is $9800 (point F). Leftward from F to G, total income is 140 percent of earned income (thus the slope of this segment is steeper than the no-program constraint AB). To the left of point G, where earned income reaches $9540 and the credit is at its maximum of $3816, additional work effort supplements income by the ordinary market wage rate and thus GH is parallel to AB.36

The EITC should, on average, increase work efforts for welfare families. However, whether or not any particular welfare family changes behavior depends upon its preferences. Consider two possible preference structures for a family that does not work under traditional welfare (i.e., initially at point E). One possibility is represented by the indifference curve U2; its crucial characteristic is that its slope at point E is flatter than the EF segment under the EITC. This family can reach a higher utility level by moving leftward along EF (i.e., by working) until it reaches the maximum at I on U3.37 However, the family may have a different preference structure represented by the indifference curve U4 through point E; its crucial characteristic is that its slope at point E is steeper than EF. Its utility-maximizing point under the EITC would still be at E. Those families with slopes at E less steep than the slope of line segment EF will be induced to work; those with slopes greater than EF will still not work.

In sum, the EITC unambiguously increases work efforts among previously nonworking welfare families. The aggregate amount of this increase depends on two factors: (1) the strength of the EITC inducement (the steepness of the slope of EF) and (2) the distribution of preferences (the number of families at E with MRS of leisure for other goods less than the slope of EF, as opposed to greater than that slope). However, the EITC also affects the work incentives of families who are already working, and for many of these families the incentives are largely to reduce work. Families with earned income in the "flat" (GH) segment are induced to increase leisure through the income

35 For more information about the EITC, see John Karl Scholz, “The Earned Income Tax Credit: Participation, Compliance, and Antipoverty Effectiveness.” National Tax Journal, 47, No. 1, March 1994, pp. 63–87. 36 The family shown does not reach the phase-out range even when working full time. For a family with a higher market wage than the one shown, the budget constraint would have an additional, flatter-than-market-wage segment for earnings above $12,460. 37 The utility-maximizing work effort on EF will be less than that at point C: both income and substitution effects increase leisure. Thus the EITC will reduce work effort for some low-income nonwelfare families.


effect, and families in the phase-out range (not shown) face both income and substitution effects to increase leisure. It is not clear how the expected work reduction from these families compares to the expected increase in work effort from other affected families.38 The above analysis is but one illustration of many factors to consider in the design of welfare policies. Within the EITC framework it does not discuss, for example, the cost to taxpayers of offering the credit, the role of eligibility requirements to limit this cost and target its benefits, the effects of varying the subsidy and phase-out rates and ranges, or the overall antipoverty effectiveness of the credit. Of course, there are many other important welfare issues apart from the EITC and work incentives. For a greater understanding of the economics of welfare, one can refer to the extensive literature that is available.39 Note that throughout this discussion of labor market effects of policies, little attention was focused on resolving the efficiency issue. Rather, it was suggested that there was a problem with the current welfare system, and one alternative to alleviate the problem was explored. The problem was identified as poor work incentives, and we simply used knowledge about income and substitution effects to understand it and theoretically develop an idea that might mitigate it. If it is impossible to determine efficiency effects precisely (recall all the other determinants of efficiency we have discussed in regard to the same set of policies), the next best thing may be to suboptimize: take one piece of the larger issue where there seems to be consensus that it is a problem and try to do something about it.
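The 1999 schedule described above is simple to compute directly. The sketch below encodes the parameters cited in the text for a family with two or more children (the function name and the rounding in the printed checks are ours):

```python
def eitc_1999(earned):
    """EITC for a family with two or more children, using the 1999
    parameters cited in the text: a 40 percent phase-in to a $3816 maximum
    (reached at $9540 of earnings), a flat range to $12,460, then a 21.06
    percent phase-out that exhausts the credit near $30,580."""
    if earned <= 9540.0:
        return 0.40 * earned
    if earned <= 12460.0:
        return 3816.0
    return max(0.0, 3816.0 - 0.2106 * (earned - 12460.0))

print(round(eitc_1999(7000), 2))    # 2800.0: the $9800 total at point F
print(round(eitc_1999(9540), 2))    # 3816.0: the maximum credit at point G
print(round(eitc_1999(30580), 2))   # 0.0: the credit is fully phased out
```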

38 See David T. Ellwood, "The Impact of the Earned Income Tax Credit and Social Policy Reforms on Work, Marriage, and Living Arrangements," National Tax Journal, 53, No. 4, December 2000, pp. 1063–1105; N. Eissa and J. Liebman, "Labor Supply Response to the Earned Income Tax Credit," Quarterly Journal of Economics, 111, No. 2, May 1996, pp. 605–637; and Scholz, "The Earned Income Tax Credit."
39 See Rebecca Blank, "Fighting Poverty," and Isabel V. Sawhill, "Poverty in the U.S.," for good general introductions. For a general review of the incentive effects of welfare policies, see Robert Moffitt, "Incentive Effects of the U.S. Welfare System: A Review," Journal of Economic Literature, 30, No. 1, March 1992, pp. 1–61.

Interdependent Preference Arguments: In-Kind Transfers May Be EfficientS

Another possible specification error in utility maximization models concerns the sources of utility: the arguments or variables of the utility function. The model of behavior as it is specified above has an implicit assumption of "selfish" preferences: The only sources of utility to individuals are from the goods and services they consume directly. There is nothing in the model of rational behavior that implies that people are selfish. It is perfectly plausible to have an interdependent preference, where one individual's utility level is affected by another person's consumption. You might feel better if you gave some food to a neighbor whose kitchen had just been destroyed by a fire. This act might please you more than if you gave the neighbor the cash equivalent of the food. People in a community might donate money to an organization created for the purpose of giving basic necessities to families who are the victims of "hard luck." This organization might be a church or a charity, but it might also be a government whose voters approved collection of the "donations" through taxes paid by both supporters and opponents of the proposal.


The use of government as the organization to make these transfers does raise an equity issue: What is the nature of the entitlement, if any, of the recipients to the funds being transferred? From the examples above, one might assume too quickly that initial entitlements rest fully with the donors and certain resources of their choice are granted to the recipients. The government is then used to effect these transfers as a mere organizational convenience. But the fact that some people are made to contribute involuntarily suggests that the transfer is enforced as part of a preexisting social contract. This latter interpretation affects the way we think about the extent of interdependent preferences.

In a social contract interpretation, all members of the society have contingent claims on their wealth (e.g., taxes) and contingent entitlements to wealth (e.g., transfers). A claim or entitlement is contingent upon an individual's future economic circumstances (e.g., tax payments if rich and transfers if poor). The magnitudes of entitlements and liabilities depend upon the rules of the social contract. These specifications are not necessarily reflected in the preferences expressed by citizens after they know their own levels of economic success. In fact, one might think that there is a specific bias after the "future" is revealed: The vast majority of people who end up with liabilities rather than entitlements would prefer to give less at the time of transfer than they would agree to be liable for before knowing who will be transferring to whom.40 This helps explain why a legal system is sometimes necessary to be the "social umpire": to interpret contracts, to judge their validity, and to enforce them. Under the social contract interpretation, the appropriate interdependent preferences are those revealed when the valid contract is specified. This is presumably closer (compared to the pure donation interpretation) to the preferences that would be expressed when everyone is in the dark about who will end up "needy" and who will not.41

Whereas the size of the transfer is definitely affected by whichever of the above two situations is thought to be closer to the truth, interdependent preferences may exist in both cases. If there are interdependent preferences involving the consumption of specific goods, then it is no longer efficient for each consumer to have the same MRS for any two goods consumed.42 Suppose we go back to our example with Smith and Jones (as two of many consumers) consuming meat and tomatoes.

40 This is like asking people how much they are willing to pay for automobile insurance after they know whether they will be involved in any accidents.
41 The legal problems of interpreting the contract vary with the policy area. For the provision of food stamps, congressional legislation like the original Food Stamp Act of 1964 may be the document of primary interest. For another in-kind good, the provision of legal defense counsel, the courts have ruled that important entitlements are contained in the Sixth Amendment to the Constitution of the United States.
42 If the interdependent preferences are for general well-being, rather than for specific goods such as food or housing consumption, then no subsidization is required to sustain an efficient allocation. Ordinary prices will do. This result can be derived mathematically by following the format used in the appendix to Chapter 3. An efficient allocation can be thought of as one that maximizes the utility of Jones subject to keeping Smith at some given constant level of utility $\bar{U}^S$. We formulate the Lagrange expression as in the appendix to Chapter 3, noting the slight change in Smith's utility function due to the interdependent preference:

$$L(M_J, T_J) = U^J(M_J, T_J) + \lambda[\bar{U}^S - U^S(M_S, T_S, U^J)]$$

Recall that $M_S = \bar{M} - M_J$ and therefore $\partial M_S/\partial M_J = -1$, and similarly $T_S = \bar{T} - T_J$ and therefore $\partial T_S/\partial T_J = -1$. To find the values of $M_J$ and $T_J$ that maximize $L$ requires the same procedure as before: taking the partial derivatives of $L$ with respect to $M_J$, $T_J$, and $\lambda$, setting them all equal to zero, and solving them simultaneously. However, writing down only the first two of these equations will suffice to show the efficiency condition:

$$\frac{\partial L}{\partial M_J} = \frac{\partial U^J}{\partial M_J} - \lambda \frac{\partial U^S}{\partial U^J}\frac{\partial U^J}{\partial M_J} - \lambda \frac{\partial U^S}{\partial M_S}\frac{\partial M_S}{\partial M_J} = 0 \qquad \mathrm{(i)}$$

$$\frac{\partial L}{\partial T_J} = \frac{\partial U^J}{\partial T_J} - \lambda \frac{\partial U^S}{\partial U^J}\frac{\partial U^J}{\partial T_J} - \lambda \frac{\partial U^S}{\partial T_S}\frac{\partial T_S}{\partial T_J} = 0 \qquad \mathrm{(ii)}$$

By moving the last term in each equation (after simplifying) to the other side and dividing (i) by (ii), we get

$$\frac{(\partial U^J/\partial M_J)(1 - \lambda\,\partial U^S/\partial U^J)}{(\partial U^J/\partial T_J)(1 - \lambda\,\partial U^S/\partial U^J)} = \frac{-\lambda\,\partial U^S/\partial M_S}{-\lambda\,\partial U^S/\partial T_S}$$

On canceling like terms in numerator and denominator and recalling the definition of MRS, we have

$$MRS^J_{M_J,T_J} = MRS^S_{M_S,T_S}$$

This is the usual requirement for exchange efficiency.


For our purposes here, let us assume that Jones is quite poor relative to Smith and that Smith would derive some satisfaction (other things being equal) from an increase in Jones's meat consumption. This is equivalent to saying that Smith has a utility function

$$U^S = U^S(M_S, T_S, M_J)$$

where Smith's utility level increases as Jones's meat consumption rises. Initially, suppose consumption of meat and tomatoes is such that each person has an $MRS_{M,T} = 4$ (4 pounds of tomatoes for 1 pound of meat). Smith, however, would also be willing to give up 1 pound of tomatoes in order to increase Jones's meat consumption by 1 pound ($MRS^S_{M_J,T_S} = 1$). After telling this to Jones, Smith and Jones approach a third consumer. They give the third consumer 4 pounds of tomatoes, 3 pounds from Jones and 1 pound from Smith, in exchange for 1 pound of meat. The third consumer is indifferent. Jones keeps the meat, so Smith is indifferent. But Jones is strictly better off, having given up only 3 pounds of tomatoes to get 1 extra pound of meat. Thus the initial position cannot be efficient, despite the fact that all consumers have the same $MRS_{M,T}$ in terms of their own consumption.43

43 Note that the existence of interdependent preferences does not interfere with consumer sovereignty. Each consumer is still attempting to use the initial resources to arrange voluntary trades that lead to maximum satisfaction by the consumer's own judgment. The claim that interdependent preference interferes with consumer sovereignty, sometimes seen in the literature, may mistake the equity issue for the efficiency one. If both parties believe they have the initial entitlement to the transfer, then each will feel the other has no authority to direct its allocation.

Exchange efficiency requires, in this case, that the total amount of tomatoes consumers will give up to increase Smith's meat consumption by 1 pound equals the total amount of tomatoes consumers will give up to increase Jones's meat consumption by 1 pound. We can express this more generally if we think of M as any good for which there are interdependent preferences and T as any good for which there are none. Then for every pair of consumers i and j in an economy of m consumers:


$$\sum_{k=1}^{m} MRS^k_{M_i,T_k} = \sum_{k=1}^{m} MRS^k_{M_j,T_k}$$

That is, the sum of tomatoes all m consumers will give up to increase i's meat consumption by 1 pound must equal the sum of tomatoes all m consumers will give up to increase j's meat consumption by 1 pound. In our specific example, in which the only interdependent preference among all m consumers is Smith's concern for Jones's consumption of meat M,

$$MRS^k_{M_S,T_k} = 0 \qquad \text{whenever } k \neq S$$

$$MRS^k_{M_J,T_k} = 0 \qquad \text{whenever } k \neq S \text{ or } J$$

Then the above efficiency condition collapses to

$$MRS^S_{M_S,T_S} = MRS^J_{M_J,T_J} + MRS^S_{M_J,T_S}$$

where the last term reflects Smith's willingness to give up tomatoes in order to increase Jones's meat consumption.44 The initial position can be seen to violate this condition:

$$4 \neq 4 + 1$$

This violation created the room for a deal that we illustrated. It should be noted that this illustration is of a positive consumption externality: Smith derives pleasure or external benefits from an increase in Jones's meat consumption. In other situations the externality may be negative: For example, some people experience reduced pleasure or external costs when others consume tobacco products in their presence.45 The "standard" case can thus be seen as the middle or neutral ground between positive and negative externalities.

44 To see this, let us use the model from the prior note but substitute $M_J$ for $U^J$ in Smith's utility function:

$$L(M_J, T_J, \lambda) = U^J(M_J, T_J) + \lambda[\bar{U}^S - U^S(M_S, T_S, M_J)]$$

Writing the first two equations for optimization as before, we get:

$$\frac{\partial L}{\partial M_J} = \frac{\partial U^J}{\partial M_J} - \lambda\left(\frac{\partial U^S}{\partial M_S}\frac{\partial M_S}{\partial M_J} + \frac{\partial U^S}{\partial M_J}\right) = 0 \qquad \mathrm{(i)}$$

$$\frac{\partial L}{\partial T_J} = \frac{\partial U^J}{\partial T_J} - \lambda\frac{\partial U^S}{\partial T_S}\frac{\partial T_S}{\partial T_J} = 0 \qquad \mathrm{(ii)}$$

This simplifies to:

$$\frac{\partial U^J}{\partial M_J} = \lambda\left(-\frac{\partial U^S}{\partial M_S} + \frac{\partial U^S}{\partial M_J}\right) \qquad \mathrm{(i')}$$

$$\frac{\partial U^J}{\partial T_J} = \lambda\left(-\frac{\partial U^S}{\partial T_S}\right) \qquad \mathrm{(ii')}$$

By dividing (i′) by (ii′) and recalling the definition of MRS, we get the result in the text:

$$MRS^J_{M_J,T_J} = MRS^S_{M_S,T_S} - MRS^S_{M_J,T_S}$$

45 Negative interdependent preferences may arise in many situations. For example, you may feel angry if your neighbor washes a car during a severe water shortage. Or you may simply be envious of someone else's good fortune.


In all cases of externalities, the key characteristic is that some agents cause costs or benefits to others as a side effect of their own actions.

In order to relate interdependent preferences to in-kind welfare programs, let us first consider whether efficiency can be achieved if all consumers independently buy and sell at the same market prices. With no mechanism for Smith to influence Jones's consumption, efficiency will not be achieved. Each will choose a consumption pattern that equates the MRS in terms of personal consumption to the ratio of the market prices, and thus each MRS will be the same and the interdependent efficiency condition above will be violated (since $MRS^S_{M_J,T_S} > 0$). This violation would continue even if a mechanism were created to transfer cash from Smith to Jones, since both would spend their new budgets in light of the market prices. However, suppose we could create a situation in which Smith and Jones faced different prices. In particular, suppose $P_M$ and $P_T$ represent the market prices but that Jones gets a subsidy of $S_M$ for every pound of meat consumed. Then the real price to Jones per unit of meat is $P_M - S_M$ and the chosen consumption pattern will be such that

$$MRS^J_{M_J,T_J} = \frac{P_M - S_M}{P_T} = \frac{P_M}{P_T} - \frac{S_M}{P_T}$$

Since Smith will so arrange purchases that $P_M/P_T = MRS^S_{M_S,T_S}$, the above equation implies

$$MRS^J_{M_J,T_J} = MRS^S_{M_S,T_S} - \frac{S_M}{P_T}$$

If $S_M$ is so chosen that $S_M = P_T \cdot MRS^S_{M_J,T_S}$,

$$MRS^J_{M_J,T_J} = MRS^S_{M_S,T_S} - MRS^S_{M_J,T_S}$$

which is the interdependent efficiency requirement. This illustrates the possibility that in-kind transfer programs such as food stamps can be efficient if the subsidy rate is chosen correctly. The correct subsidy rate in the example equals the dollar value of the tomatoes Smith will forgo in return for increasing Jones's meat consumption 1 unit (from the efficient allocation). This is necessary for the efficient allocation to be an equilibrium: At the real relative price each faces, neither has incentive to alter the consumption bundle.46

46 In this example we are ignoring the problem of how to finance the subsidy. If we had to put a tax on Smith, it might alter the (after-tax) prices Smith faces and mess up the equilibrium. We discuss these and other effects of taxation in a general equilibrium framework in Chapter 12. In a more general case with many "caring" consumers the correct subsidy rate to the ith consumer equals the sum of the dollars each other consumer is willing to forgo in return for increasing the ith consumer's meat consumption by one more unit. That is, to make an efficient allocation be a market equilibrium, the subsidy to the ith consumer must be as follows:

$$S^i_M = P_T \sum_{k \neq i} MRS^k_{M_i,T_k}$$

Of course, there may be other needy individuals besides the ith consumer, and presumably the willingness to donate to (or subsidize) one consumer depends on how many other needy individuals there are.
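The efficiency condition and the subsidy rule can be checked with simple numbers. The sketch below assumes hypothetical market prices ($P_M$ = $5, $P_T$ = $1) and uses the text's figure of 1 pound of tomatoes for Smith's willingness to pay for Jones's meat consumption:

```python
# Hypothetical market prices; the interdependent MRS value of 1 is taken
# from the Smith-Jones example in the text. Smith buys at market prices,
# so his own MRS equals P_M / P_T at his chosen bundle.
P_M, P_T = 5.0, 1.0        # assumed prices of meat and tomatoes
MRS_S_FOR_MJ = 1.0         # tomatoes Smith forgoes per pound of Jones's meat

S_M = P_T * MRS_S_FOR_MJ          # the efficient per-pound subsidy to Jones
MRS_J = (P_M - S_M) / P_T         # Jones equates his MRS to his real price: 4
MRS_S_OWN = P_M / P_T             # Smith equates his MRS to the market: 5

# Interdependent exchange efficiency: MRS_S_OWN = MRS_J + MRS_S_FOR_MJ
print(MRS_S_OWN == MRS_J + MRS_S_FOR_MJ)   # True: 5 = 4 + 1
```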


This example suggests that, in the presence of interdependent preferences for the consumption of specific goods (rather than general well-being), in-kind transfers may be efficient. Furthermore, cash transfers will generally be inefficient. Thus the standard argument depends on a particular specification of the factors that give individuals utility, namely, that no one has a utility function with arguments representing the consumption of specific goods by others.47 The above analysis raises several questions. First, do interdependencies involving specific consumption goods exist, and, if so, how large are they? Second, given the extent of interdependencies relevant to a specific good, do actual subsidies induce the efficient amount of consumption? These are currently unresolved empirical issues, although several analysts have suggested methods for estimating the answers.48 Finally, note that the use of theory in this section has not resolved an issue. It has raised empirical questions that otherwise would not have been asked at all. Sometimes this is one of the most important functions of analysis: to clarify and question the assumptions underlying a judgment about a policy.

Summary

This chapter uses models of individual choice for the analysis of government welfare programs. Each of the models assumes utility-maximizing behavior, although they vary in details of their specification. The predictions and efficiency conclusions drawn from these models depend on the particular specifications. The analyses presented are intended primarily to develop a facility with these types of models in order to be able to adapt and use them in other settings.

In terms of conventional microeconomics, we have focused on the role of the budget constraint in limiting an individual's utility. We have seen that public policies can affect the shape of an individual's budget constraint in many different ways.

We began with a standard argument used to demonstrate the inefficiency of in-kind welfare programs involving price subsidies, like the Food Stamp Program from 1964 to 1978. Such subsidies create differences in the prices faced by program participants and other food consumers. In the standard model these price differences leave room for a deal and thus cause inefficiency. A cash grant, on the other hand, is efficient by this model. Every policy

47 For more general reading on this subject, see H. Hochman and J. Rodgers, "Pareto Optimal Redistribution," American Economic Review, 59, No. 4, September 1969, pp. 542–557, and G. Daly and F. Giertz, "Welfare Economics and Welfare Reform," American Economic Review, 62, No. 1, March 1972, pp. 131–138.
48 An empirical study suggesting the importance of interdependent preferences is A. Kapteyn et al., "Interdependent Preferences: An Econometric Analysis," Journal of Applied Econometrics, 12, No. 6, November–December 1997, pp. 665–686. However, another study concluded that they are unlikely to be important. See J. Andreoni and J. K. Scholz, "An Econometric Analysis of Charitable Giving with Interdependent Preferences," Economic Inquiry, 36, No. 3, July 1998, pp. 410–428. Earlier contributions include Henry Aaron and Martin McGuire, "Public Goods and Income Distribution," Econometrica, 38, November 1970, pp. 907–920; Joseph DeSalvo, "Housing Subsidies: Do We Know What We Are Doing?" Policy Analysis, 2, No. 1, Winter 1976, pp. 39–60; and Henry Aaron and George von Furstenberg, "The Inefficiency of Transfers in Kind: The Case of Housing Assistance," Western Economic Journal, 9, June 1971, pp. 184–191.


analyst ought to understand this important, general point. However, analysts also should understand that evaluation of the efficiency of actual policies is often more complex. The standard argument does not account for certain choice restrictions typically imposed by in-kind welfare programs. These restrictions can have important effects, and they deserve analytic attention. To develop the capability for analyzing a broad range of choice restrictions that affect the shape of budget constraints, we explained the income and substitution effects that are used to analyze how an individual responds to changes in income and prices. An individual’s consumption of a specific good varies with income in a relationship known as an Engel curve, and varies with the good’s price in a relationship known as a demand curve. Consumption is also affected by changes in the prices of related goods that may be substitutes or complements. The degree of responsiveness of consumption to these changes is often summarized by the income, price, and cross-price elasticities, and the more empirical information we have about them the more precisely we can predict the response to a proposed change. We then showed that some of the choice restrictions in welfare programs can prevent or reduce the inefficiency identified by the standard analysis. That is a characteristic of the food stamp allotment limits and the take-it-or-leave-it choice in public housing. We also analyzed the prohibition against food stamp resale transactions. The effectiveness of the prohibition depends on government efforts to enforce it. From the point of view of the standard model that treats a household like an individual, there is no reason for this prohibition: It is a choice restriction that reduces the utilities of recipients at no gain to anyone else. However, the standard model does not account either for the principal-agent problem of ensuring that the recipient acts in the best interests of the covered household members or for the interdependent preferences possibility (discussed in the optional section) that nonrecipients care about the food consumption of those participating in the food stamp program. It is an unresolved empirical issue whether or not there are benefits of these two types that might outweigh the costs of the resale prohibition. We also considered the partial-equilibrium nature of the models used to focus on consumption and exchange efficiency. Such analysis can be misleading if it fails to call attention to other important effects of the same policies. Welfare programs affect not only consumption choices, but work efforts as well. We constructed a simple model of the laborleisure choice that suggests that the work disincentive effects of the current mix of welfare programs can be strong on some participating families. We then showed that the Earned Income Tax Credit increases work incentives for nonworking welfare recipients, although it reduces work incentives for some other eligibles. Awareness of incentive effects such as these is important to the design of many public policies. This chapter contributed in several ways to the development of analytic skill. First we developed familiarity with the logic of utility maximization subject to a budget constraint. Second, we emphasized that care in model specification is necessary in order to model any particular policy accurately. Third, we saw that public policies often change the incentives individuals face when making certain choices. 
Good analysts will not only recognize these in existing policies, but will also take account of them when designing new ones.


Exercises

4-1 Qualifying low-income families in a community receive rent subsidies to help them with housing costs. The subsidy is equal to 25 percent of the market rent.

a Draw on one diagram a qualifying family's budget constraint of $1000 for rental housing and all other goods. Show how the constraint is modified by the rental subsidy program.

b Suppose housing is a normal good and that, without subsidy, low-income families usually spend half of their income on rental housing. Find the point on your rental-subsidy constraint at which housing consumption (measured in dollar market value) equals the dollar consumption of all other goods. The values at this point are approximately $571 on each axis. Is this an equilibrium point for the subsidized family with the usual preferences? Explain whether it has just the right amount of housing, too little housing, or too much housing.

c For any point that the family might most prefer on the rental-subsidy constraint, how would you expect its housing consumption to change if the rental subsidy was replaced by an equal-size housing voucher (a certificate in dollars that can be used for housing only, similar to free food stamps)?

4-2 The Earned Income Tax Credit (EITC) benefits millions of Americans with incomes below the poverty line and provides encouragement to work for those not currently in the labor force. However, it has different incentive effects for many families who work but whose incomes are still low enough to qualify for it. This problem set is designed to illustrate these latter incentives (in a simplified way). It is also designed to illustrate how every scrap of economic information can be used constructively to make "tighter" estimates of the policy's consequences.

Val is a member of the "working poor." She has a job that pays $8 per hour. She chooses to work 8 hours per day, 5 days per week, 50 weeks per year (i.e., 2000 hours per year), although her employer would hire her to work any number of hours per year. Because of the large family she supports, she does not owe any income taxes and would not owe any even if she increased her work effort by, say, 20 percent (and thus we will henceforth ignore ordinary taxes).

a Draw Val's budget constraint on a graph on which income per year is measured on the vertical axis and leisure per year on the horizontal axis (use 8760 total hours per year to be allocated to either work or leisure; OK to show leisure in range from 4500 up and income correspondingly). Label her current income-leisure choice as point A. What is Val's MRS of an hour of leisure for income at this point?

b Under a new EITC plan for Val's family size, there is a $1 credit for every $2 earned up to a maximum $4000 credit. The credit is reduced by 25 percent of all income above $12,000 (until the credit becomes zero). Draw Val's budget constraint under the EITC plan. At what point on this budget constraint is her income and leisure identical to what

she could have with no plan (aside from zero work)? Label this as point C, the breakeven point. (Answer: $28,000.)

c. Can you predict how the EITC plan will affect Val's working hours? Note that at Val's pre-EITC hours, she is in the "phase-out" portion of the EITC plan. (Answer: It will reduce them.)

d. Is it possible, assuming leisure is a normal good, that Val will choose a point at which the MRS of leisure for income is not equal to $6? (Answer: Yes.)

e. The secretary of the treasury, in a draft of a speech on the EITC plan, calculated that Val's income will be increased by $3000 per year. Furthermore, with 20 million individuals exactly like Val, the annual cost to the government was estimated at $60 billion. The subsidy cost is based on Val's current working hours. Is this a good estimate of Val's annual subsidy? Explain. Assuming that leisure is a normal good, what is the range of possible annual costs to the government? (Answer: $60 to $80 billion.)

f. In order to get a better estimate of the cost to the government of the EITC plan, a social experiment was undertaken. The experiment consisted of varying the amount of a fixed credit given to each family unit but always phasing it out at 25 percent of earnings above $12,000. The main thing learned was that, compared to no plan, families said they were equally satisfied with a maximum credit of $1440 and working 7 hours per day (5 days per week, 50 weeks per year). If there were no income effects on leisure in going from this plan to the actual proposed EITC, estimate the annual government costs.

g. The second thing learned from the experiment was that, for these individuals, leisure is a necessity with income elasticity less than 0.5. Measure real income as the sum of the dollar values of any disposable income-leisure combination along the phase-out portion of the budget constraint (leisure is valued at its real price, the dollars of other goods and services that must be given up to get it). What leisure level would you expect on the actual proposed EITC if its income elasticity were 0.5?

h. Use the answers to (f) and (g) to form a range within which the annual government cost of the proposed EITC plan must lie. (Answer: $70–$76.3 billion.)
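
For parts (b) through (e), the simplified credit schedule can be tabulated directly. The sketch below is our own illustration; only the schedule parameters come from part (b).

```python
# The simplified EITC schedule of part (b): a $1 credit per $2
# earned up to a $4000 maximum, phased out at 25 cents per dollar
# of earnings above $12,000. The function and sample hours are our
# own, not the text's.

def eitc_credit(earnings):
    credit = min(0.5 * earnings, 4000.0)            # phase-in, capped
    credit -= 0.25 * max(0.0, earnings - 12000.0)   # phase-out
    return max(credit, 0.0)

wage = 8.0
for hours in (2000, 1750, 1500):
    earnings = wage * hours
    print(hours, earnings, eitc_credit(earnings))
# 2000 hours -> $16,000 earned, $3000 credit (point A's subsidy);
# cutting hours raises the credit toward the $4000 maximum, which
# is why part (e)'s range runs from $60 to $80 billion.

print(eitc_credit(28000.0))  # 0.0: the breakeven earnings of part (b)
```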

APPENDIX
THE MATHEMATICS OF INCOME AND SUBSTITUTION EFFECTS

For a given utility function we need to know the prices of all goods and the budget level in order to calculate the utility-maximizing choice. We know that individuals change their demands for a certain good X in response to changes in any of the parameters, for example, a change in price or income level as discussed above. The responses are summarized in an ordinary demand function:

$$X = D_X(P_1, P_2, \ldots, P_X, \ldots, P_n, I)$$

where the $P_i$ represent the prices of each good (including X) and I is the income or budget level. The shape of the demand function depends, of course, on the individual's preferences. However, certain aspects of a demand function will appear for anyone who maximizes utility, independently of the particular preferences. It is those general aspects that we attempt to describe by the income and substitution effects.

The response to a unit increase in income is found by taking the partial derivative of the demand equation with respect to income, $\partial X/\partial I$. A good is "normal" if this partial derivative is positive, and "inferior" if it is negative. The income elasticity is defined as

$$\varepsilon_{X,I} = \frac{\partial X}{\partial I} \cdot \frac{I}{X}$$

where $\varepsilon_{X,I}$ denotes the elasticity of the variable X with respect to the variable I. As I and X are positive quantities, the income elasticity has the same sign as the partial derivative $\partial X/\partial I$. Note that the magnitude of the elasticity does not have to be constant; it depends on the consumption point from which it is measured. As an obvious example, a good that is inferior at one income level must have been normal at some lower income level. (Otherwise, there would not be a positive quantity of it to reduce.)

The response to a unit increase in price is found by taking the partial derivative of the demand equation with respect to price, $\partial X/\partial P_X$. The decomposition of this total effect of a price change into its component income and substitution effects is described by the Slutsky equation:

$$\frac{\partial X}{\partial P_X} = \left. \frac{\partial X}{\partial P_X} \right|_{U=U^0} - \; X \, \frac{\partial X}{\partial I}$$

where the first term is the substitution effect (the utility level is held constant at its initial level $U^0$) and the second is the income effect.49 Note that the overall size of the income effect is proportional to the individual's initial consumption level of the good X. The price elasticity of demand is defined as

$$\varepsilon_{X,P_X} = \frac{\partial X}{\partial P_X} \cdot \frac{P_X}{X}$$

Except for a Giffen good, the price elasticity is negative ($P_X$ and X are positive; $\partial X/\partial P_X$ is negative). It is often easier, when doing empirical analysis, to work with elasticities because they are thought to be more "constant" than the partial derivatives over the changes in prices or income considered.

49 Eugen E. Slutsky (1880–1948) was the Russian economist who first derived this equation. A short derivation of it, using the theory of duality that we introduce in the appendix to Chapter 6, is in Philip J. Cook, "A 'One-Line' Proof of the Slutsky Equation," American Economic Review, 62, No. 1/2, 1972, p. 139.


The Slutsky equation can be rewritten in terms of price and income elasticities. Multiply both sides of it by $P_X/X$ and the last term by $I/I$:

$$\frac{\partial X}{\partial P_X} \cdot \frac{P_X}{X} = \left. \frac{\partial X}{\partial P_X} \right|_{U=U^0} \cdot \frac{P_X}{X} - \; X \, \frac{\partial X}{\partial I} \cdot \frac{P_X}{X} \cdot \frac{I}{I}$$

or

$$\varepsilon_{X,P_X} = \varepsilon_{X,P_X}^S - \frac{P_X X}{I} \cdot \frac{\partial X}{\partial I} \cdot \frac{I}{X}$$

where $\varepsilon_{X,P_X}^S$ is the "substitution elasticity," or

$$\varepsilon_{X,P_X} = \varepsilon_{X,P_X}^S - \frac{P_X X}{I} \, \varepsilon_{X,I}$$

Note that $P_X X/I$ is the proportion of income spent on the good X.
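
The decomposition can be checked numerically. The sketch below assumes Cobb-Douglas preferences, $U = X^a Y^{1-a}$, an illustrative choice of ours (the appendix itself is general); for these preferences the ordinary demand is $X = aI/P_X$.

```python
# A numerical check of the Slutsky equation under Cobb-Douglas
# preferences U = X**a * Y**(1-a), for which the ordinary demand is
# X = a*I/Px. All parameter values are hypothetical.

a, Px, Py, I = 0.3, 2.0, 1.0, 100.0
h = 1e-6  # step size for numerical derivatives

def x_demand(px, inc):
    return a * inc / px

def utility(px, py, inc):
    x, y = a * inc / px, (1 - a) * inc / py
    return x**a * y**(1 - a)

# Total effect: dX/dPx from the ordinary demand function.
total = (x_demand(Px + h, I) - x_demand(Px, I)) / h

# Income effect term: X * dX/dI.
x0 = x_demand(Px, I)
dx_di = (x_demand(Px, I + h) - x_demand(Px, I)) / h

# Substitution effect: raise Px but compensate income so utility
# stays at its initial level U0. Indirect utility is linear in
# income here, so the compensating income is a simple rescaling.
u0 = utility(Px, Py, I)
i_comp = I * u0 / utility(Px + h, Py, I)
substitution = (x_demand(Px + h, i_comp) - x_demand(Px, I)) / h

print(total, substitution - x0 * dx_di)  # both approximately -7.5
# In elasticity form: -1 = -(1 - a) - a * 1, since the budget
# share Px*X/I equals a for Cobb-Douglas preferences.
```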


CHAPTER FIVE

THE ANALYSIS OF EQUITY STANDARDS: AN INTERGOVERNMENTAL GRANT APPLICATION

THIS CHAPTER IS INTENDED to develop further skills of model specification and show how to use those skills to understand the equity consequences of proposed or actual policies. The equity or fairness of a policy is often difficult to evaluate because of a lack of social consensus on the appropriate standard. However, that does not mean that the analysis of equity consequences is left to the whims of the analyst. In this chapter we will introduce several equity standards and apply them in a discussion of school finance policy. These standards can be applied routinely to the analysis of a broad range of policies and are often used in legal and legislative settings.

The chapter is organized as follows. First, a number of principles of equity are introduced in a systematic way: strict equality, a universal minimum, equal opportunity, simple neutrality, conditional opportunity and neutrality, and horizontal and vertical equity. Some of these characterize the outcomes of a resource allocation process, and others characterize the fairness of the process itself. Next we turn to develop an application of these standards to school finance policies. We first go over some standard analyses of intergovernmental grants in general, and then review some grant policies for school financing that have caused public concerns. We relate these concerns to the equity standards and consider how intergovernmental grant programs can be designed to achieve the standards. In an appendix we present an exercise to illustrate the use of social welfare functions in evaluating school finance policies.

Equity Objectives

In general, equity or fairness refers to the relative distribution of well-being among the people in an economy. But although this identifies the topic clearly, it provides no guidance to what is equitable. There are a number of shared principles of equity that can serve as analytic guides, but in any particular situation the analyst must recognize that there may be no consensus about which one is most applicable.1


Nevertheless, the analyst can describe the effects of proposed or actual policies in terms of the particular equity concepts thought most relevant. The use of well-defined concepts of equity not only helps users of the analysis to understand policy consequences of concern to them but avoids arbitrariness in the analytic methodology. Even if the analyst feels that none of the better-known principles is applicable to a particular policy, those principles can still serve as a context for the analysis. Having to argue that some other concept is more appropriate than the better-known ones helps to produce a clarity of reasoning that might otherwise be absent.

The introductory discussion in Chapter 3 distinguished two broad categories of equity concepts: those that relate to outcomes and those that relate to process. Outcome concepts of equity are concerned with the existence in the aggregate of variation in the shares that individuals receive. Process concepts of equity are concerned with whether the rules and methods for distributing the shares among individuals are fair. Keep in mind that these are different standards by which to judge a system of resource allocation; a system can do well by one of these two broad concepts and poorly by the other. Furthermore, changes made to improve the system in one equity dimension can cause deterioration as judged by the other (and, of course, can affect efficiency as well). We will illustrate this for school finances shortly.

There is also an issue concerning the type of shares that should be scrutinized in the light of an equity norm. One position is that we are interested in how policies affect general distribution: the overall distribution of utility in the economy, or measurable proxies for utility such as income or wealth. A different position is that we are interested in specific egalitarianism: the equity of the distribution of particular goods and services, for example, medical care.2 The underlying philosophy of specific egalitarianism is that although it might be fine to allow most goods to be allocated and distributed purely as the rewards of market forces, different rules should apply to a limited category of goods and services. The basic necessities of living (e.g., food, shelter, clothing, essential medical care) should be guaranteed to all, and civic rights and obligations (e.g., voting, the military draft, jury duty) should not be allocated purely by the forces of the market.3 Most of the equity concepts discussed below can be applied either to the general redistributive effects or to the distribution of a specific good or service of concern.

1 The framework in this section was first explicated in Lee Friedman, "The Ambiguity of Serrano: Two Concepts of Wealth Neutrality," Hastings Constitutional Law Quarterly, 4, 1977, pp. 97–108, and Lee Friedman and Michael Wiseman, "Understanding the Equity Consequences of School Finance Reform," Harvard Educational Review, 48, No. 2, May 1978, pp. 193–226. A comprehensive treatment of equity concepts for school finance is contained in Robert Berne and Leanna Stiefel, The Measurement of Equity in School Finance (Baltimore: Johns Hopkins University Press, 1984).
2 See James Tobin, "On Limiting the Domain of Inequality," Journal of Law and Economics, 13, October 1970, pp. 263–278.
3 Note that this can imply a rejection of utilitarianism. For example, we can imagine allowing the buying and selling of votes, which would increase individual welfare. Yet the laws prohibiting the transfer of voting rights suggest that the underlying norm of one person, one vote applies to the distribution of the voting right itself and not the individual utility that might be derived from it.

Additionally, the concepts must be applied to a well-defined population. Often this is simply geographic, as to residents of a particular country, state, or city. However, nongeographic definitions can be used as well. For example, most of the legal debate about school financing applies to spending variation among public school students within a jurisdiction (and thus spending on students attending private schools is not considered). As another example, we might be interested in the equity of health services available to veterans (and thus a precise definition of veteran is required). With clear definitions of the good or service and of the relevant population, we can turn to the different equity concepts that can be applied.

There are two outcome concepts of equity that are commonly used: strict equality and a universal minimum. The norm of strict equality means that all people should receive equal shares. There are numerous ways of measuring the degree to which a system attains that standard. One common method is to graph a Lorenz curve and calculate its associated Gini coefficient. A Lorenz curve shows the relation between two variables X and Y defined as follows:

X = the percent (from 0 to 100) of the measured population, ordered by and starting from those with the least amount of the good or service under study

Y = the percent (from 0 to 100) of the total amount of the good or service in the measured population that the X percent receives (or provides).

For example, one point on the 1997 Lorenz curve for U.S. household income is that 3.6 percent of total income accrues to the poorest 20 percent of the population (Figure 5-2, which we will discuss shortly). Every Lorenz curve begins at the origin and ends at the point where X = Y = 100 percent (the 100 percent of the population with the least amount of Y has all of it). When strict equality holds, the Lorenz curve is simply a straight 45° line (each X percent of the population has Y = X percent of the good). If one person has all of the good or service and everybody else has none, the Lorenz curve is a right angle that coincides with the X-axis until it reaches 100 percent and then becomes vertical (to the height where Y equals 100 percent).

To illustrate, Figure 5-1 shows a hypothetical Lorenz curve for yearly jury duty. The percent of total population eligible for jury duty, ordered by and starting from those who served the least to those who served the most, is measured along the horizontal axis. The percent of total jury service provided (in terms of person-days on jury duty) is measured along the vertical axis. As drawn, 25 percent of the population provided no service, the next 25 percent provided 10 percent of jury service, the next 25 percent provided 15 percent of jury service, and the last 25 percent provided the remaining 75 percent of jury service.4

4 Note that the shape of the Lorenz curve in this example might shift dramatically if we redefined the period of time considered, for example, 2 years instead of 1 year. Obviously this does not imply that the longer period is fairer; both are pictures of the same distribution and must represent equal fairness. The analyst must be careful to recognize the effect of choosing a particular definition of the units being distributed and to keep the units constant when making comparisons.
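
The construction of the Lorenz curve from these hypothetical shares is mechanical, and a short Python sketch (ours, not the text's) makes the coordinates explicit:

```python
# Coordinates of the hypothetical jury-duty Lorenz curve of Figure
# 5-1. The quarter-population shares come from the text; everything
# else here is our illustration.

shares = [0.0, 10.0, 15.0, 75.0]   # percent of jury service provided
                                   # by each population quarter,
                                   # ordered from least to most

x, y = 0.0, 0.0
points = [(x, y)]
for s in shares:
    x += 100.0 / len(shares)       # cumulative percent of population
    y += s                         # cumulative percent of service
    points.append((x, y))

print(points)
# [(0.0, 0.0), (25.0, 0.0), (50.0, 10.0), (75.0, 25.0), (100.0, 100.0)];
# under strict equality every point would lie on the 45-degree line.
```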


Figure 5-1. The Lorenz curve is a measure of outcome equality.

Sometimes it is useful to have an empirical measure of the degree of equality, and a common index used for that is the Gini coefficient, defined graphically by the Lorenz curve as the ratio of area I to area I + II as illustrated in Figure 5-1. Thus the further the Lorenz curve is from the 45° line, the higher the Gini coefficient.5 If each person in the population provided the same jury service, the Lorenz curve would coincide with the 45° line, area I would shrink to zero, and the Gini coefficient would be zero.

5 Mathematically, if $d_1, d_2, \ldots, d_n$ represent the days served as juror by each of the n people in the eligible population, the Gini coefficient equals

$$\frac{\sum_{i=1}^{n} \sum_{j=1}^{n} \lvert d_i - d_j \rvert}{2n^2 \bar{d}}$$

where $\bar{d}$ is the mean of the $d_i$. As an illustrative example, suppose there are only four people and the jury days served by each (ordered from highest to lowest) are 75, 15, 10, and 0. Then there are a total of 100 days of jury service, and the average number of days per person is $\bar{d} = 100/4 = 25$. The denominator of the formula for the Gini coefficient is $2n^2\bar{d} = 2(4^2)(25) = 800$. The numerator is

$$\sum_{i=1}^{n} \sum_{j=1}^{n} \lvert d_i - d_j \rvert = |75 - 75| + |75 - 15| + |75 - 10| + |75 - 0| + |15 - 75| + |15 - 15| + |15 - 10| + |15 - 0|$$
$$\qquad + |10 - 75| + |10 - 15| + |10 - 10| + |10 - 0| + |0 - 75| + |0 - 15| + |0 - 10| + |0 - 0| = 460$$

The Gini coefficient is then $.575 = 460/800$.
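
The same arithmetic can be scripted. Here is a minimal Python sketch of the footnote's formula, checked against the four-person example (the code itself is our illustration):

```python
# The Gini coefficient of footnote 5, computed directly from the
# pairwise-difference formula; the data are the footnote's
# four-person jury example.

def gini(d):
    n = len(d)
    d_bar = sum(d) / n                      # mean days served
    numerator = sum(abs(di - dj) for di in d for dj in d)
    return numerator / (2 * n**2 * d_bar)   # footnote 5's formula

print(gini([75, 15, 10, 0]))  # 0.575, matching the worked example
```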

At the other extreme, if one person provided all the jury service, the Lorenz curve would coincide with the outer bounds of area II and the Gini coefficient would approach 1.6 Thus the Gini coefficient is a measure of the degree to which strict equality is attained: zero if it is attained exactly, positive if there is any inequality, and approaching a maximum of 1 as the inequality worsens. If the Lorenz curve for one policy alternative lies strictly within the Lorenz curve for another policy alternative, the first is unambiguously more equal and will have a lower Gini coefficient.

A Lorenz curve can give a good visual picture of the degree of equality, and two or more of them drawn on the same diagram may be used to compare distributions (e.g., if jury service is more equal in New York than in Boston, or if jury service in New York is more equal in 1999 than it was in 1989). To illustrate this, let us leave the hypothetical and turn to an important real example. Figure 5-2 shows a Lorenz curve for household income in the United States for 1997 (the solid-line curve) and for 1967 (the dotted line).7

Figure 5-2. Lorenz curves showing U.S. household income.

The percent of total household population is measured along the horizontal X-axis, ordered and starting from those with the least income to those with the most. The percent of total U.S. income is measured along the vertical Y-axis.

6 The Gini coefficient in this case is (n − 1)/n, which approaches 1 for large n.
7 The data used in the discussion and to construct these curves were taken from pp. xi–xii of Money Income in the United States: 1997, Current Population Reports, Consumer Income P60-200 (Washington, D.C.: U.S. Bureau of the Census, 1998).


In 1997 the poorest quintile of households had only 3.6 percent of income, while the richest quintile had 49.4 percent. The Gini coefficient for 1997 was .459. You may have noticed that the 1967 Lorenz curve lies entirely within the 1997 one: the distribution of income has become unambiguously less equal over this 30-year period. In 1967, the lowest quintile had 4.0 percent of income, and the richest quintile had 43.8 percent of it. The Gini coefficient for 1967 was .399; by this measure, income inequality in 1997 was 15 percent greater than in 1967.

Many studies of U.S. income inequality have confirmed that inequality began to increase in the mid-1970s and continued to do so gradually until the mid-1990s; in the last few years (1997–2000), it has remained approximately constant. This increase in inequality was not expected and is of concern to many. It was not expected because from the 1940s through most of the 1960s the economy had been growing rapidly while at the same time inequality was gradually lessening. The two main reasons generally offered for the long-term increase over the 30 years between 1967 and 1997 are that: (1) technological change during this period had a "bias" toward increased demand for higher-skill workers, resulting in greater earnings inequality among workers, and (2) the proportion of households headed by a single female increased significantly (from about 10 to 18 percent), and these households have less income than others because, on the one hand, there are fewer earners in them and, on the other, women tend to earn less than men. Nevertheless, the causes of changes in income inequality are an unsettled matter for further research.8

In our example, the Lorenz curves did not cross. But they can cross, and two Lorenz curves that cross can have the same Gini coefficient: One curve will have greater inequality among the people in the lower end and the other greater inequality among those in the upper end. Like all single-parameter measures of the degree of equality, the Gini coefficient does not always reflect legitimate concerns about the location of the inequality.9 Thus, one must be cautious about comparing two very different distributions by this or any other single-parameter measure. That is why it is often useful to display the Lorenz curves of the distributions being compared on a graph or, similarly, to construct a chart that shows what each quintile or decile of the population receives under the alternative distributions.

8 A good overview of U.S. income inequality is provided in Chapter 5 of the 1997 Economic Report of the President (Washington, D.C.: U.S. Government Printing Office, 1997). Interesting studies of this include Frank Levy, The New Dollars and Dreams: The Changing American Income Distribution (New York: Russell Sage Foundation, 1999), and Richard C. Michel, "Economic Growth and Income Equality Since the 1982 Recession," Journal of Policy Analysis and Management, 10, No. 2, Spring 1991, pp. 181–203. A review of inequality studies encompassing many countries is P. Aghion, E. Caroli, and C. García-Peñalosa, "Inequality and Economic Growth: The Perspective of the New Growth Theories," Journal of Economic Literature, 38, No. 4, December 1999, pp. 1615–1660.
9 Another common measure is the coefficient of variation, which is defined as the standard deviation divided by the mean. It is zero at perfect equality and increases as the distribution becomes more unequal. For a general discussion of inequality measurement, see A. B. Atkinson, "On the Measurement of Inequality," Journal of Economic Theory, 2, 1970, pp. 244–263.

Even the simple observation that the lowest quintile receives a smaller percentage does not necessarily mean that the households are worse off in an absolute sense: a smaller percentage of a bigger income pie can still be a larger slice.10

The other outcome standard of equity is the universal minimum, which means that each person should receive a share that is at least the size of the minimum standard. Unlike strict equality, application of this norm requires that a particular minimum standard be selected. During the Nixon administration, a proposal that was effectively a guaranteed minimum income was debated by Congress and did not pass, although a majority favored such a plan. The proposal, called a Negative Income Tax, was like an Earned Income Tax Credit with maximum credit occurring at zero work (and other income) and then gradual reduction. The maximum credit was thus a guaranteed minimum income. One part of the majority insisted upon a higher minimum guarantee than the other part would agree to support, so the majority became two minorities.

Once a minimum standard is selected, one can count the number or proportion of individuals below the minimum and the total quantity required to bring all those below up to the minimum. Often the analyst will pose several alternative minimum standards to discover the "equity cost" of increases in the standard. This exercise can be trickier than one might suspect. One issue is whether the source of supplementation to those below the standard must come from those above it or whether other goods or services can be converted into the good or service of concern. For example, if minimum educational resources per child are the issue, one need not take educational resources away from those who have them in abundance. Instead, more educational services can be produced by doing with less of all other goods. On the other hand, a shortage of water in an emergency might require the redistribution of water from those who usually consume larger quantities (e.g., owners of swimming pools) to others. That is, the supply of water during the relevant time period may be fixed, or perfectly inelastic. When the good in question has positive elasticity of supply, it need not be directly redistributed. In the elastic case, it is not hard to show that the efficiency cost of achieving the minimum is lower with expanding production than with direct redistribution of the existing quantities.11

Not only does the cost bear on the method of achieving the minimum; it also bears on whether a minimum is more desirable than strict equality. For example, consider whether it might be appropriate to ensure strict equality in the distribution of the entire privately produced GDP. If all potential suppliers of resources (labor and nonlabor) to the market knew that they would end up with the average share independently of their individual decisions (each having a negligible effect on the total), the incentives would be to supply very little and the GDP would plummet drastically.

10 Unfortunately, this does not appear to be the case for the poorest segment in the United States during this 30-year period.
11 Imagine assigning a tax to each person having more than the minimum standard so that the total tax will provide enough resources to bring all up to the minimum. Then give each taxpayer the choice of paying the tax directly from his or her existing stock of the good or by its cash equivalent. All the taxpayers will prefer to give cash, which is then used to produce additional units of the good. Thus it is more efficient to expand production (when it is elastic) than to redistribute directly.

The achievement of strict equality does not seem to be a very pragmatic objective for a market-oriented economy with substantial individual liberties. However, this is not to say that moving closer toward it from where we are now is not important. Rather, past some point, the reduction in inequality would not be worth its efficiency cost.12 On the other hand, a reasonable universal minimum might be attainable well before the economy reaches its "most equal" overall distribution. Thus cost considerations might influence the choice of which equity standards to emphasize.

Of course, the choice of standards still comes down to a moral judgment about which is best. In the education example given above, a universal minimum was posited as the objective. But many people feel that is just not the relevant standard; they might insist that all children be educated equally. The responsibility of the analyst in this situation is to make clear that there are competing conceptions and to try to clarify the consequences of achieving the alternative standards.

Having discussed some issues relevant to the selection of an outcome standard of equity, let us turn to the process standards of equity. Process standards become applicable when inequality of share sizes is explicitly permitted. The concern here is not with how much aggregate inequality exists but with whether the share that each person ends up with has resulted from a fair process. There may be only one winner and many losers of a lottery, but if the entry conditions are the same for all and each entrant has an equal chance of winning, we might think that the distribution is perfectly equitable.

The economic agents in an economy might be viewed in an analogous way. When individuals make resource allocation decisions in an economic system, there is often uncertainty about what the outcome will be. For example, consider the sequence of decisions to invest in higher education by attending college, to choose a major subject, and to select a job from the jobs offered after graduation. These represent a series of contingent decisions that affect but do not completely determine the income streams of particular individuals. After all the decisions are made, we observe that, on average, those with college degrees have higher incomes than those without, those with certain majors have higher incomes than other college graduates, and those who accepted jobs in certain industries earn higher incomes than similarly educated people who went into other industries. Whether this whole process is thought fair depends upon the entry conditions to colleges, major subjects, and industries, as well as on how the payoffs are distributed within each part of the whole sequence. If, because of their sex, women are denied entrance to the best schools or are not considered for promotion within a particular industry, then one might think that this process of resource allocation is unfair.

There are several standards by which an attempt is made to capture the ideal of process equity. A fundamental one is equal opportunity: Each person should have the same chance of obtaining a share of a given size.

12 Recall the discussion in Chapter 4 about the Earned Income Tax Credit, which illustrated that welfare assistance reduced work effort more or less, depending on the design.

However, in practice it is virtually impossible to tell whether a particular person has been given equal opportunity. For example, periodically the government has a lottery, which any citizen can enter for a minimal fee, to award oil leasing rights on federally owned land. If a loser in the government lottery claims that his or her number did not have the same chance of being drawn as every other number, how can one tell after the fact? Because it is often the case that each person makes a specific decision only rarely (e.g., to enter the lottery), it may be impossible to know whether the outcome is explained by chance or by denial of equal opportunity. On the other hand, if a person made many sequential bets on the same roulette wheel, it would be possible by using statistical laws to tell whether the game had been rigged against the person.

Suppose that the oil lease lottery really was rigged to favor certain entrants. Could that be discovered through an examination of the results? If some identifiable group of entrants (e.g., private citizens as opposed to oil companies) does not win its fair share of leases, it may be taken as a strong indication of rigging.13 But without group evidence of this kind, the rigging may go unnoticed. Thus, it may be necessary to fall back on tests involving a group of participants to substitute for the lack of multiple tests involving a single participant. If there really is equal opportunity for each person and if we divided the participants into two groups, each group ought to receive a similar distribution of share sizes.14

In fact, sometimes we simply substitute the concept of neutrality for particular groups instead of the more stringent concept of equal opportunity for each individual. That is, we may accept a resource allocation process as fair if it does not discriminate against selected groups of particular social concern. These groups usually become identified by suspicions that they have been discriminated against in the past. The courts, for example, often apply "strict scrutiny" in cases in which the alleged denial of equal opportunity or equal treatment arises by classifying people into groups by race or wealth.15 This scrutiny was relevant to the findings that poll taxes are unconstitutional and that states must provide counsel to poor individuals accused of serious crimes. In both rulings, a key part of the arguments was that wealth was a suspect means of classification and that state action prevented poor people from having opportunity equal to that of others. Let us refer more loosely to the groupings thought to be of particular social concern as the suspect groupings and define simple neutrality: The distribution of shares within a suspect group should be identical to the distribution of shares among all others.

13 The lottery for oil and gas leasing was temporarily suspended on February 29, 1980, by Interior Secretary Cecil Andrus. The government charged that some oil companies had cheated by submitting multiple entries for individual parcels, in violation of federal rules. See the articles in the Wall Street Journal on April 8, 1980 (p. 12, col. 2) and October 6, 1980 (p. 29, cols. 4–6). In January 1999, police in Italy arrested nine people for fixing the Milan lottery. According to newspaper reports, the investigation began 8 months earlier when police in Cinisello Balsamo noticed that an unusual number of locals were winning. See "Scandal Sullies Italian Lottery," Associated Press Online, January 15, 1999.
14 In this manner, we use knowledge about the outcomes for the purpose of judging the process. Note that this is quite different from judgment by the outcome standards themselves.
15 "Strict scrutiny" is a legal term for a particular type of judicial test for constitutionality. As the term suggests, it is more difficult for laws to withstand judicial examination with strict scrutiny than with other tests.

Whenever there is equal opportunity, there will be simple neutrality. If each person has the same chance as other persons of receiving a share of any given size, then any large grouping will be characterized by simple neutrality. Each group will receive approximately the same proportions of any given share size, and the average share size in each group will therefore be approximately the same. However, simple neutrality with respect to one suspect group does not mean that there is equal opportunity. Overweight people, for example, can be systematic victims of discrimination and thus are denied equal opportunity. The discrimination will not affect neutrality with respect to race (the suspect group in this example) as long as each race has the same proportion of overweight people in it. Thus simple racial neutrality can hold while, at the same time, equal opportunity is not available to all. Simple neutrality is therefore a less stringent standard than equal opportunity.

One reasonable and common objection to the standard of either equal opportunity or simple neutrality is that there may be legitimate reasons for differences in share sizes. For example, excessive weight might be just cause for denial of employment as a police officer. Then weight is an exceptional characteristic: a factor that is legitimately expected to influence shares. All applicants might be considered equally, conditional upon being within the weight limits. To account for legitimate deviations from equal opportunity caused by exceptional characteristics, we define the standard of conditional equal opportunity: Every person with identical exceptional characteristics has the same chance of obtaining a share of a given size.

To continue with the police example, consider the question of neutrality by race. If applicants grouped by race have identical characteristics except that Caucasians are more overweight than others, then simple neutrality would not be expected to hold (a smaller proportion of Caucasian applicants would be offered employment). However, we would expect conditional neutrality to hold: Among those with identical exceptional characteristics, the distribution of shares within a suspect group should be identical to that of all others.

Thus suspect factors are those that are not allowed to cause distributional differences, and the exceptional characteristics are factors that are allowed to cause those differences. Whenever there are differences in exceptional characteristics across suspect groupings, simple and conditional neutrality offer different standards. (Both cannot be met by one system.) Conditional neutrality is less stringent than simple neutrality in the sense that it permits larger differences among the shares received by each group. Even if one finds that a system is conditionally neutral, it is wise to examine the effects of the exceptional characteristics.

Exactly how much difference is to be allowed because of an exceptional characteristic depends upon the specific situation being analyzed. The terms used to assess the fairness of differences allowed owing to exceptional characteristics are horizontal and vertical equity. The most common application of these is in evaluating the fairness of tax systems. In typical cases, the outcomes are the amount of taxes paid and one of the exceptional characteristics is before-tax income.16 Horizontal equity means that likes should be treated alike.

16 Outcomes might alternatively be defined in terms of utilities. We provide an illustration in the appendix to this chapter.

Since tax amounts are often determined by factors besides income (e.g., factors leading to deductions or exemptions, or use of a nonincome tax base such as sales), the crucial equity issue is whether other exceptional characteristics in addition to income will be used to assess fairness. That is, are "likes" defined simply by their income levels or are there other characteristics that must also be the same for them to be truly "likes"?17

For example, consider two households with the same income and living in identical dwelling units and paying the same housing expenses. The only difference between these two households is that one rents the dwelling unit and the other is an owner-occupant. Under current federal law, the owner-occupant can deduct the mortgage interest charges from income and will pay lower taxes. Many economists feel that this difference, while long-established, violates the standard of horizontal equity. Other examples of differences recognized by the current federal tax code include age (extra deduction if over 65), medical expenses, and source of income (such as the exemption of dividend income from most state and local municipal bonds). These deductions and exemptions have consequences beyond their fairness: Each causes an increase in the tax rate necessary to raise a fixed amount of revenue, increasing the effect of taxation on other allocative decisions (as we saw in the labor-leisure choice example in the previous chapter).

One distinction deserving special mention is whether the benefit an individual receives from the government good or service is an exceptional characteristic. Use of benefits as an exceptional characteristic is referred to as taxation by the benefit principle, whereas use of income is referred to as the ability-to-pay principle. To the extent that governments raise revenue through user fees or other charges for goods and services provided, they are relying on the benefit principle. Gasoline taxes and cigarette taxes are often given the justification that they are collected from the people who most benefit from the roads and from medical services for smoking-related illnesses. The appropriateness of this distinction on grounds of horizontal equity is an important factor in the design of government financing systems.

Vertical equity means that there is a fair difference in shares among people with different levels of the exceptional characteristics. In the case of federal taxes, there is widespread agreement that people with higher income should pay higher taxes. In principle, taxation can be proportional (taxes rising proportionately with income), progressive (rising faster than income), or regressive (rising more slowly than income). To illustrate, suppose the tax on $10,000 income is $1000. Then the tax on $20,000 is proportional if $2000, progressive if more than $2000, and regressive if less than $2000.

Evaluations that attempt to measure the relative burdens on different groups caused by a tax are referred to as studies of tax incidence. Most evaluations of our federal income tax system have concluded that it is mildly progressive. Sales taxes and payroll taxes, on the other hand, are generally regressive.

17 In nontax applications, a measure of economic means such as income may not be an exceptional characteristic. For example, there are horizontal and vertical equity issues in criminal sentencing policy involving the relationships between the seriousness of crime and the penalty received.

However, the study of tax incidence is quite complicated and the results subject to much uncertainty because of the phenomenon of tax-shifting: The legal payers of the tax may not be the same as the individuals bearing its burden (e.g., if a landlord pays a property tax that is fully recovered through rents received, then the tax has been shifted to the tenants). We shall return to the tax-shifting issue in Chapter 12, and for now simply note that it complicates any efforts to assess vertical (and sometimes horizontal) equity.

The concepts of horizontal and vertical equity apply to many policy issues other than tax policy. For example, there are legal equity requirements concerning the provision of counsel to those accused of crimes and the appropriateness of the sentences given to the guilty. As another example, many people feel that health care should be based strictly on medical need and without regard to an individual's ability to pay.18 In another area, Congress has ordered the Federal Communications Commission to find a way to ensure that women, minorities, and small businesses receive a fair share of licenses for use on the portion of the radio spectrum reserved for personal communications services.19

While equity judgments can be quite difficult to make, analytic review using the concepts covered here can greatly clarify the consequences of existing policies and proposed changes to them. Of course, there are many issues involving the use, measurement, and extension of equity concepts that are the subject of ongoing research.20 However, let us at this point turn to issues in their practical application. We begin with some introductory analytics for intergovernmental grant programs, and then consider an ongoing equity issue involving intergovernmental grants for schools.
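
The vertical-equity distinction discussed above, proportional versus progressive versus regressive taxation, reduces to comparing average tax rates at different income levels. A minimal Python sketch (ours, using the text's $10,000/$20,000 example as input):

```python
# Classify a tax schedule by whether the average tax rate rises,
# stays constant, or falls with income. The function is our
# illustration; the dollar figures are the text's example.

def classify(income_lo, tax_lo, income_hi, tax_hi):
    rate_lo = tax_lo / income_lo
    rate_hi = tax_hi / income_hi
    if rate_hi > rate_lo:
        return "progressive"
    if rate_hi < rate_lo:
        return "regressive"
    return "proportional"

for tax_hi in (2000, 2500, 1500):   # candidate taxes on $20,000
    print(tax_hi, classify(10000, 1000, 20000, tax_hi))
# 2000 -> proportional, 2500 -> progressive, 1500 -> regressive
```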

Intergovernmental Grants

In the fiscal year 2000 approximately $235 billion was provided through grants-in-aid from the federal government to state and local governments.21 These funds were provided through a wide variety of programs, such as health care for the homeless, urban mass transit assistance, and community development block grants. States also fund grant-in-aid programs to local governments, most notably for school financing. About one-third of state spending consists of intergovernmental aid, and this spending is equal to about one-third of local revenues.22

18 An interesting debate on this issue was sparked by the article by Julian Le Grand, "The Distribution of Public Expenditure: The Case of Health Care," Economica, 45, 1978, pp. 125–142. See, for example, A. Wagstaff, E. Vandoorslaer, and P. Paci, "On the Measurement of Horizontal Equity in the Delivery of Health Care," Journal of Health Economics, 10, No. 2, July 1991, pp. 169–205, and in the same issue Julian Le Grand, "The Distribution of Health Care Revisited—A Commentary," pp. 239–245.
19 See the article "U.S. Opens Air Waves to Women, Minorities," San Francisco Chronicle, June 30, 1994.
20 See for example John E. Roemer, Equality of Opportunity (Cambridge: Harvard University Press, 1998); Edward Zajac, Political Economy of Fairness (Cambridge: The MIT Press, 1995); Amartya Sen, Inequality Reexamined (Cambridge: Harvard University Press, 1992); and William J. Baumol, Superfairness (Cambridge: The MIT Press, 1986).
21 Economic Report of the President, January 2001 (Washington, D.C.: U.S. Government Printing Office, 2001), p. 371, Table B-82.

Although these grants have diverse purposes, most economic rationales for them depend on either externalities or equity arguments. An example of an externality argument might be as follows. School districts do not have sufficient incentive to bear the costs of devising innovative educational techniques because the benefits of success will accrue primarily to schools external to the district boundaries (who will imitate for free). Thus, although the social benefits of research and development for the education sector might outweigh the costs, the private benefits to any single district may not justify the expense to the taxpayers within it.23 As an attempt to ameliorate this problem the federal government sponsors the Elementary and Secondary Education Act, which provides grant funds to pay for innovative demonstration projects. In the last part of this chapter we will consider the design of a grant program to achieve an equity goal: the "neutralization" of the influence of local wealth on local school finance decisions.

Another grant program with a possible justification on equity grounds was the federal general revenue-sharing program for cities.24 No single city can impose too progressive a tax system on its own, or those faced with high taxes might simply move outside the city boundaries. The federal government, on the other hand, has less need to be concerned about tax avoidance through locational choice. Thus the federal government might play the role of tax collector and, through revenue sharing, fund given services in a more progressive manner. Rivlin has proposed a system that could facilitate this, based on common shared taxes among the states, similar to the system used in Germany.25

Aside from strict economic rationales for intergovernmental grants, political rationales may also be relevant. For example, some people feel that individual liberties are threatened when too much decision-making power is left in the hands of the central government; they might favor more local control of spending even if revenues were raised by the central government.

Design Features of a Grant Program

In this section we will go over economically relevant design features of an intergovernmental grant program. They can generally be classified into three categories: income effects, price effects, and choice restrictions. Knowledge of these three features can then be combined with a model of the decision-making behavior of the recipient to predict the effects of the grant.

22 See Helen F. Ladd, "State Assistance to Local Governments: Changes During the 1980s," Economic Review, 80, No. 2, May 1990, pp. 171–175.
23 Under a private market system, these social benefits can often be captured by the innovating unit through the patent system, which "internalizes" the externality. In Chapter 17, we focus upon the analysis of externalities.
24 This program was terminated during the Reagan administration in a reform movement that, somewhat inconsistently, consolidated a large number of specialized grant programs into a smaller number of broader ones. General revenue sharing is simply a further consolidation, but one that lacked special appeal to the politically organized interest groups at the time.
25 Rivlin's primary focus is on productivity, but her proposal allows for progressivity for reasons similar to those for general revenue sharing. See Alice M. Rivlin, Reviving the American Dream (Washington, D.C.: The Brookings Institution, 1992).

Figure 5-3. Nonmatching grants are like income increases.

Initially we will assume that the recipient community can be treated like a utility-maximizing individual; from that perspective, the analysis of intergovernmental grants is identical with the welfare policies analyzed in the preceding chapter. (Welfare payments also are grants, but to families rather than governments.) Then we will consider model variants of the decision-making process that challenge the "community utility-maximization" perspective.

Income Effects and Nonmatching Grants

Nonmatching grants, or block grants, are fixed amounts of funds given to an economic unit to spend. Typically these are given by a higher-level government to a lower-level one, and they may or may not be restricted to use for particular purposes. They affect the recipient primarily by altering the amount of funds available to spend on anything—a pure income effect. This can be seen in Figure 5-3, which shows the trade-offs faced by a community allocating its budget between government and private goods. Here government goods are any goods and services provided through the local government and financed by taxes. Private goods are goods and services that community members buy as individuals in the marketplace with their after-tax incomes. Both government and private goods are measured by the dollar expenditures on them; the sum of expenditures on them equals the community budget level.

Let AB represent the pregrant budget constraint, and say the community initially is at a point like C. Then let the central government provide general revenue-sharing funds to the community to be used for any local government goods or services.26

The new budget constraint is then ADE. That is, the community still cannot obtain more than OA in private goods, can have government goods up to the amount at D (AD is the size of the grant) without sacrifice of private goods, and past D must sacrifice private goods to consume additional government goods. Since the grant does not alter the prices of either government or private goods, the DE segment is parallel to AB (as a pure income increase would be). If both government and private goods are normal, the community will increase its purchases of both, and it will move to a point like F on the new budget constraint.

Note that the grant described above, restricted to use for purchasing government goods only, has the effect of increasing the community's consumption of private goods. Observe on the diagram the dollar amount of expenditure on private goods OG at C and OH at F. The ratio GA/OA is the tax rate that the community used before the grant program, that is, the proportion of private wealth given up to have government goods. The ratio HA/OA is the tax rate after the grant program is introduced; it is lower than the initial rate. Thus the community meets the legal terms of the grant by using it for government goods, but it reduces the amount of its local wealth previously allocated to government goods.

To make sure the resulting allocation is clear, think of the community response as follows. Imagine that the immediate response to the grant is to reduce the tax rate so that exactly the same amount of government goods is being provided as before the grant. This meets the terms of the grant. But now the community can have the same private consumption as before and still have resources left over to spend on anything. The extra resources are just like extra income, and the community behaves as an individual would: It buys a little more of everything normal, including government goods. Thus, the revenue-sharing grant to the community has allocative effects identical with those of a tax cut of the same size by the central government. Its net effect is to increase spending on government goods, but not nearly by the amount of the grant.

In this example the restriction that the grant be used to purchase only government goods did not play an important role; the grant size is small relative to the funds the community would provide for government goods without any restriction. Figure 5-4 gives an illustration of a binding constraint: The size of the grant is greater than the amount the community would freely choose to spend on government goods. This occurs when the grant size (measured along AD) is large enough to cross the income-expansion path OCK.27 If ADE is the budget constraint with the grant, the community will choose D as its optimal point: more of the covered goods than it would choose if the grant were a pure income supplement.28 This is unlikely to occur with general revenue sharing, but it becomes more likely (for a given grant size) as the allowable uses of the grant are narrowed, as for new firefighting equipment. A grant that restricts the recipient to spending it on only certain types of goods is called a categorical or a selective grant.

26 Throughout this chapter, we will not consider the sources of central government revenues used to fund the grant program. The pregrant budget level of the local community is measured after central government taxes have been collected.
27 Recall that the income-expansion path is defined as the locus of utility-maximizing consumption choices at each possible income level when all prices are held constant.
28 Note that the MRS at D must be less steep than at J (the preferred point if the categorical constraint is removed). Starting from the utility-maximizing point along an ordinary budget constraint, the slopes of the indifference curves passing through the budget constraint become progressively less steep to the right and more steep to the left. The utility level becomes progressively lower as we move away from J in either direction. Thus D is the maximum attainable utility, and more grant goods are purchased than if the categorical restrictions were relaxed.

Figure 5-4. The categorical constraint of a nonmatching grant can be binding.

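
The income-effect logic above can be made concrete with a small sketch. Assume, purely for illustration (the text's graphical analysis does not commit to any functional form), that the community maximizes Cobb-Douglas utility and so would devote a fixed share g of its total resources to government goods:

```python
# An illustration of the nonmatching-grant analysis, assuming the
# community devotes a fixed share g of total resources to government
# goods (the functional form and all numbers are hypothetical).

def community_choice(budget, grant, g=0.3):
    desired = g * (budget + grant)    # pure income-effect response
    gov = max(desired, grant)         # the grant must be spent on
                                      # government goods (categorical
                                      # restriction)
    private = budget + grant - gov
    local_tax_rate = (gov - grant) / budget
    return gov, private, local_tax_rate

# Small grant: acts like extra income, and the local tax rate falls
# from the pregrant 0.30 to 0.16 (the move from C to F).
print(community_choice(100.0, 20.0))   # (36.0, 84.0, 0.16)

# Large grant: the categorical restriction binds (point D), and all
# local wealth shifts to private goods.
print(community_choice(100.0, 50.0))   # (50.0, 100.0, 0.0)
```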

Price Effects and Matching Grants

Analysis of the effect of matching requirements in a grant system is identical with the analysis of the Food Stamp Program. A matching grant offers to pay a predetermined portion of a total expenditure level chosen by a recipient unit for the goods covered by the grant. For example, under a program to develop mass transit facilities, the federal government will provide $9 for every $1 that is raised by a local jurisdiction for mass transit in that jurisdiction. The program has a matching rate m of 9 to 1. In other grant programs the matching rate may not be as generous; perhaps the donor government might offer only $0.10 for $1 raised by the recipient (m = 0.1). It is also possible for the matching rate to be negative, which is a matter of taxing the local community for its expenditures on the specified good.29

To see how the matching grant affects the price of the covered goods from the recipient's perspective, imagine the recipient purchasing one additional unit. If the market price is $P_0$, then the local contribution plus the matching funds must sum to $P_0$. Let us call the local contribution per unit $P_s$, and then the matching funds per unit will be $mP_s$. The equation summarizing this is as follows:

$$P_s + mP_s = P_0$$

or

Local funds per unit + Matching funds per unit = Market price per unit

Solving for $P_s$, we find the price per unit as perceived by the recipient to be as follows:

$$P_s = \frac{P_0}{1 + m}$$

Thus a matching grant changes the terms of trade by which a community can exchange the good covered by the grant for other goods. In the example used for food stamps in the preceding chapter, the program provided a match of $1 for every $1 provided by the recipient. Thus the matching rate was 1, which translates into a price reduction of 50 percent. The recipient had to give up only $0.50 instead of $1 worth of other things for each $1.00 worth of food.

Matching grants may either be open-ended or closed-ended. In an open-ended grant there is no limit on the quantity the donor government is willing to subsidize. An open-ended grant is shown in Figure 5-5 by AC; the pregrant budget constraint is AB. In a closed-ended grant arrangement, on the other hand, the donor government will subsidize purchases only up to a certain amount. In Figure 5-5 this is illustrated as budget constraint AFG, where the limit is reached at F. These two cases can be seen to correspond exactly with the food stamp analyses of the preceding chapter (Figure 4-8). The effect of closing the grant is either to reduce purchases of the grant good and the total subsidy (if the community prefers a point on FC, no longer attainable because of the restriction) or to have no effect (if the community prefers a point on AF that can still be attained).

To see the effect of the matching provision, let us compare an open-ended matching grant with a nonmatching grant of equivalent total subsidy. In Figure 5-6 the open-ended matching grant is AC, and let us assume that the community chooses a point like F. Then we construct an equivalent-subsidy nonmatching grant (thus also passing through point F), shown as ADE. Note that point D must lie to the left of point F (DE is parallel to AB). We will show by revealed-preference reasoning that the utility-maximizing choice from ADE cannot be on the segment FE. In general, one bundle of goods and services $X = (X_1, X_2, \ldots, X_n)$ is revealed-preferred to another $Y = (Y_1, Y_2, \ldots, Y_n)$ if two conditions are met: (1) Y is affordable given the budget constraint; and (2) X is the bundle actually chosen.

29 The district power-equalizing proposal for financing local schools has this feature; it is discussed later in the chapter.

Figure 5-5. Matching grants can be open-ended (AFC) or closed-ended (AFG).


Figure 5-6. Comparing equal-sized matching and nonmatching grants.

To see the effect of the matching provision, let us compare an open-ended matching grant with a nonmatching grant of equivalent total subsidy. In Figure 5-6 the open-ended matching grant is AC, and let us assume that the community chooses a point like F. Then we construct an equivalent-subsidy nonmatching grant (thus also passing through point F) shown as ADE. Note that point D must lie to the left of point F (DE is parallel to AB). We will show by revealed-preference reasoning that the utility-maximizing choice from ADE cannot be on the segment FE. In general, one bundle of goods and services X = (X1, X2, . . . , Xn) is revealed-preferred to another Y = (Y1, Y2, . . . , Yn) if two conditions are met: (1) Y is affordable given the budget constraint; and (2) X is the bundle actually chosen.

In terms of Figure 5-6, if the community preferred a bundle represented by some point on FE to the bundle F itself, it could have chosen it under the open-ended plan AC. Since it did not, F is revealed-preferred and must yield more utility than anything on FE. But point F is not itself the utility maximum on ADE: The indifference curve through it is tangent to AC and thus not to DE. Hence there are points on ADE that have greater utility than at F, and they cannot be on FE: They must be to the left of F.


If the categorical restriction is binding, D is the point of maximum utility; but since D is always to the left of F, it always implies a lower quantity of the grant-covered good than does the open-ended matching grant. Thus the open-ended matching grant induces greater consumption of the covered good than an equivalent-subsidy nonmatching grant. Therefore, there is also a matching grant with lower cost to the central government that induces the same consumption of the covered good as the nonmatching grant.

The above result suggests that matching grants have an advantage when the program's objective is to alter the allocation of some specific good. This objective characterizes grants to correct for externalities. Matching grants are generally considered appropriate for these cases because the matching rate alters the price to recipients and, if chosen correctly, internalizes the external costs or benefits of the recipient's allocative choice. (An example is the optimal food-stamp subsidy when there are interdependent preferences, discussed in Chapter 4.) Equity objectives may also imply altering the relative distribution of a specific good, such as the school financing issue discussed later in the chapter, and matching grants can be appropriate for those policy objectives as well. Nonmatching grants, on the other hand, are most appropriate for general redistributive goals.30

30 For a more detailed review of general economic policy issues concerning grants, see George F. Break, Financing Government in a Federal System (Washington, D.C.: The Brookings Institution, 1980), and Wallace Oates, “Federalism and Government Finance,” in John Quigley and Eugene Smolensky, eds., Modern Public Finance (Cambridge: Harvard University Press, 1994), pp. 126–151.
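The result can be checked with a concrete utility function. The sketch below assumes Cobb-Douglas preferences for the community (an assumption of ours for illustration only; the text's geometric argument does not depend on this form) and confirms that the open-ended matching grant induces more of the covered good than a nonmatching grant of equal donor cost:

```python
# Community utility U = G**a * X**(1 - a): G is the grant-covered good,
# X is all other goods; both market prices are $1 per unit.
a, income = 0.3, 100.0
m = 1.0  # matching rate: $1 of aid per $1 raised locally

# Open-ended matching grant: the perceived price of G falls to 1/(1 + m),
# and the Cobb-Douglas demand is G = a * income / price.
price = 1.0 / (1.0 + m)
g_match = a * income / price      # 60.0 units of G
donor_cost = m * price * g_match  # donor pays m*Ps per unit: $30.00

# Nonmatching grant with the same donor cost: a pure income effect
# (assuming the categorical restriction is not binding).
g_nonmatch = a * (income + donor_cost)  # 39.0 units of G

print(g_match, donor_cost, g_nonmatch)  # 60.0 30.0 39.0
```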

The Role of Choice Restrictions

We have already introduced several forms of choice restriction common to intergovernmental grants: the expanse of goods covered and the maximum quantity for which matching subsidies are available. Their importance depends not only on the allocative effects of the type we have been examining but also on institutional effects in terms of the information and transaction costs of grant administration and enforcement. We illustrate this by introducing another common restriction: maintenance of effort. This means that the recipient community is eligible only for grant funds that supplement its prior spending on the covered goods.

Figure 5-7 shows an example of how a nonmatching grant with a maintenance-of-effort requirement achieves an increase in the government good at lower subsidy cost than an open-ended matching grant. The community initially has budget constraint AB and chooses C. Then a matching grant changes the budget constraint to AD, where the community chooses E (more of the grant good than if a cash-equivalent transfer had been made) at subsidy cost EG. The same quantity OH of the grant good can be stimulated with the less costly nonmatching grant with maintenance of effort represented by the budget constraint ACFJ.

The shape of ACFJ can be explained as follows. Imagine the community starting from point A. As it purchases units of the grant-covered good, it pays the ordinary market price and proceeds down the AB line until it reaches point C. At this point the quantity of the grant-covered good is OK, the amount that maintains the effort of the community in terms of its past expenditure on this good.


Figure 5-7. Maintenance-of-effort restriction (ACFJ).

Additional units of the grant-covered good can then be had without sacrifice of other goods, because the community now qualifies for the grant and the grant funds pay for them in full. Thus the community proceeds horizontally and to the right from point C until it has spent the entire grant. We have deliberately selected the grant that allows a maximum free purchase of CF, which would bring the community to OH consumption of the grant-covered good (the same quantity as would be chosen with the matching grant AD). The dollar cost of this grant is FG, less than the cost EG of the matching grant. Beyond point F, the community must sacrifice other goods at the market rate in order to further increase the consumption of the grant-covered good; thus, the slope of FJ is the same as that of AC.

With budget constraint ACFJ, the theory of individual choice predicts that the community will choose point F. If it chooses point C with constraint AB, point F is clearly preferable because it has more of one good and no less of any other good. How do we know that some point on FJ is not even better? Since the goods on each axis are normal, the community should increase the purchase of both goods as income increases. That is, the income-expansion path would cross LJ (the extension of FJ) to the left of F. Thus, point F is closer to the optimal point on LJ than any other point on FJ and therefore must be preferred. The community chooses F; its consumption is OH of the grant-covered good (as with the matching grant); and the cost to the donor government is less than the cost with the matching grant (FG < EG).
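The kinked constraint ACFJ can also be traced numerically. In the small sketch below, the dollar figures are our own, chosen only to make the three segments visible:

```python
def other_goods(e, wealth, maintained, grant):
    """Spending left for all other goods along the maintenance-of-effort
    constraint ACFJ, given spending e on the grant-covered good."""
    if e <= maintained:          # segment AC: market price applies
        return wealth - e
    if e <= maintained + grant:  # segment CF: the grant pays in full
        return wealth - maintained
    return wealth + grant - e    # segment FJ: market price again

# Community wealth $100, prior effort $30 (point C), grant of $20 (CF),
# so point F is at e = 50 with $70 left over for other goods.
for e in [0, 30, 40, 50, 60]:
    print(e, other_goods(e, 100, 30, 20))
```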


This highlights the strong impact of the restriction. However, it does not change our prior results about the greater inducement of matching requirements per subsidy dollar. That result holds as long as other things are kept equal (including the restrictions).31 It does suggest that empirically one should not necessarily expect to find that matching grants have a more stimulative effect unless the restrictions are similar. However, it has been pointed out in the literature that the effectiveness of intergovernmental grant restrictions cannot be assumed; it depends upon the ability and effort made to administer and enforce the restrictions.32 This caveat deserves further discussion.

Recall that whenever one consumer has an MRS for two goods different from that of another consumer, there is room for a deal. When economic agents make consumption (and production) choices, they usually do so in light of the prevailing market prices. However, a grant recipient subject to matching provisions or certain restrictions will typically have an MRS that is not equal to the prevailing market price ratio. Correspondingly, this creates incentives for deals. That is precisely what we saw in Chapter 4 in the discussion of individual food stamp grants and the illegal market for stamp resales. The recipient could increase utility by exchanging the food stamps at prevailing market prices. The income could then be used to purchase whatever goods maximize the recipient's utility.

The desire to exploit divergences in the MRS can be applied to communities as well as individuals. Any intergovernmental grant program that contains provisions causing divergences of this nature may fail to achieve its inducement objectives if the recipient community can find ways of making the potential deals. Thus the success of a grant program depends not only upon the allocative effects we have described so far but also on the administration and enforcement capabilities.

Consider a community offered a grant that requires maintenance of effort. It may keep its local budget size constant but change the composition of what is purchased with that budget. For example, a community may feel a pressing need to obtain more medical equipment for its local hospital, but the only grant offered to it is for criminal justice with maintenance of effort required. It therefore decides to make its hospital security guards part of the police force. The grant funds are then used to maintain police services exactly as they were plus the hospital security guards, and the hospital finds its revenues unchanged but its costs decreased by the cost of the security guards. Thus the hospital buys the additional medical equipment with the grant, even though that is not the way it appears on the record.

If a grant program continues for several years, it may become harder to enforce choice restrictions. For example, the maintenance-of-effort requirements may be clear in the initial year, but no one can know for certain what the community would have spent without any grants in successive years. If community income is growing, for example, one would expect expenditures to increase over time even without a grant program.

31 To see this, it is left to the reader to compare the effects of matching versus nonmatching terms when both grants require maintenance of effort.

32 See, for example, Martin McGuire, “A Method for Estimating the Effect of a Subsidy on the Receiver’s Resource Constraint: With an Application to U.S. Local Governments 1964–71,” Journal of Public Economics, 10, 1978, pp. 25–44.


Then the maintenance-of-effort restriction, if unchanged, becomes less important over time and the program effects become more like those of an unrestricted block grant.

The point of these examples is to demonstrate the importance of recognizing the incentives created by any particular grant design. Part of any policy analysis of grant programs is to consider whether the administration and enforcement of its provisions can be accomplished pragmatically; otherwise, the overall objectives of the program can be jeopardized. There is no standard answer as to whether enforcement is easy or difficult; it depends upon the nature of the good. Illegal markets may arise readily with food stamps because resales of the stamps are hard to prevent or detect; highways, on the other hand, are another matter.

Alternative Specifications of Recipient Choice-Making

To this point we have assumed that it is reasonable to treat a community as if it were an individual, as in having preferences or choosing a consumption bundle. But a community is not an individual. It is an aggregate of individual residents who have chosen to live in the area and can relocate if they wish. It also generally contains public and private agencies that may employ nonresidents and be owned by nonresidents; these people as well as the residents will be concerned about and affected by grant programs available to the community.

The community choice perspective we have been using is often given some theoretical justification by an appeal to the idea of the median voter, according to which local decisions reflect the median voter's tastes and preferences. Imagine, for example, successive voting on school expenditures: After each level is approved, a new and slightly higher level is voted on, and the level selected is the last one to muster a majority. If the voters are then lined up in the order of the maximum school expenditures they approve, it becomes apparent that the median voter determines the total expenditure. In short, the community preferences can be represented by those of the median voter.33 Applied to local choices, the theory has been shown to be useful empirically in a number of studies.34

However, it is important to recognize that over a period of time individuals and firms can choose their locations from among alternative communities in a given area. Charles Tiebout hypothesized that the ability to “vote with their feet” creates pressure on the community to provide the most attractive possible bundle of public services (including taxes) lest its occupants go elsewhere.35 Of course, as people gradually relocate,

33 This is offered only as a justification for why the community choice theory might predict collective decisions accurately. It is not intended to suggest that the choices are efficient; in fact, there is good reason to believe that no democratic voting procedure can be used to attain an efficient allocation of resources. See our later discussion in Chapter 15, and Kenneth Arrow, Social Choice and Individual Values (New Haven: Yale University Press, 1951).

34 See Edward Gramlich, “Intergovernmental Grants: A Review of the Empirical Literature,” in Wallace Oates, ed., The Political Economy of Fiscal Federalism (Lexington, Mass.: Lexington Books, 1977).

35 See Charles Tiebout, “A Pure Theory of Local Expenditure,” Journal of Political Economy, 64, No. 5, October 1956, pp. 416–424. For a test of this hypothesis and that of the median voter, see Edward Gramlich and Daniel Rubinfeld, “Micro Estimates of Public Spending Demand Functions and Test of the Tiebout and Median-Voter Hypotheses,” Journal of Political Economy, 90, No. 3, June 1982, pp. 536–560.


this changes the characteristics of the median voter in any particular community. Thus, competition among communities is an important additional determinant of community decisions; its influence is undoubtedly greater in the long run than in the short run.

A second reason for questioning the community choice perspective comes from theories of bureaucratic behavior. The idea is that any particular grant-receiving bureau is like flypaper: The grant will stick where it hits. Let us go back to our earlier example of the community seeking funds for new medical equipment but offered only a criminal justice grant with maintenance of effort required. We suggested that the hospital security guards would be added to the police budget in order to meet the legal terms of the grant, and the allocative effect would be to use the extra hospital funds (once used to pay for guards) to buy the new equipment. However, what happens if the police chief does not like this idea? In particular, what happens when the police insist they need to use the grant funds to purchase helicopters? There may be no effective political mechanism to prevent the police from doing just that. It depends, of course, on the political power of the police department relative to other officials (who may or may not sympathize) and the public. Perhaps the public view will receive greater weight over a longer period of time (e.g., through new elections of officials who have the power to hire and fire the police chief). Thus the grant may in the short run, because of local bureaucratic support, have the effect its designers intended; in the long run there is more chance that the income conversion effect will predominate.

In fact, empirical research on the effects of grants offers strong support that something like the flypaper effect occurs and persists. According to Hines and Thaler, a pure income effect in most grant programs would increase spending on the covered goods in the long run by $0.05 to $0.10 for each $1.00 of a nonmatching grant. (The figures correspond to an income elasticity near unity.) But the actual estimated income effects are always substantially larger, between $0.25 and $1.00.36 Thus political and bureaucratic effects on grants can be significant and should be accounted for in making empirical predictions of a grant program's effects. The evidence, for example, contradicts the assertion based on individual choice theory made earlier that a nonmatching grant has the same effect on a community as a tax cut equal in size. The evidence suggests that a nonmatching grant stimulates greater expenditure on the covered good than an equivalent tax cut.

36 See James R. Hines, Jr., and Richard H. Thaler, “Anomalies: The Flypaper Effect,” Journal of Economic Perspectives, 9, No. 4, Autumn 1995, pp. 217–226, and Shama Gamkhar and Wallace Oates, “Asymmetries in the Response to Increases and Decreases in Intergovernmental Grants: Some Empirical Findings,” National Tax Journal, 49, No. 4, December 1996, pp. 501–512.

Equity in School Finance

In this section we apply both the theory of grants and the concepts of equity to the problem of public school financing. We shall focus primarily on the California system, which was declared unconstitutional by the state Supreme Court in its 1976 Serrano vs. Priest decision.


Figure 5-8. The unconstitutional California Foundation plan for school financing.

The general problems considered then have been found to apply to many states, with good solutions being elusive. New York's system was declared unconstitutional by its Supreme Court in 2001, and a similar decision was reached about the New Hampshire system by its Supreme Court. The New Jersey Supreme Court declared its state system unconstitutional on grounds similar to Serrano, and in 1993 Texas voters rejected a wealth-sharing plan similar to some proposed remedies for Serrano. First we will review the system found defective in California and the equity requirements enunciated by the court, and then we will consider what system might meet those requirements.

The Equity Defects Identified in Serrano

In Figure 5-8 we represent the budget constraints of a “rich” school district (subscript R) and a “poor” school district (subscript P). Public school expenditures per child E are measured along the horizontal axis, and wealth for all other goods per child W is measured along the vertical axis.37 The dashed lines represent a hypothetical school financing system that is purely local; that is, one in which the state contributes nothing to the local school district.

37 Note that two districts can have the same budget constraint in this diagram but different total wealth, caused by differences in the number of children each has in attendance at public schools. The “per child” units are convenient for the analysis presented here, but one must be careful in their use: A community will respond to changes in its total wealth as well as to changes in the size of the school population, and accurate empirical prediction of the overall response may require more information than the proportion of the two. Consider two communities that are initially perfectly identical; one then experiences a doubling of real wealth while the other experiences a 50 percent drop in the public school population. Their new budget constraints on the diagram will continue to be identical, but there is no theoretical reason to expect them to choose the same point on it. The choice depends on the respective wealth elasticities for spending on education versus other goods. For the purposes here, we are holding the number of children in the representative districts constant.


Under such a system, it would hardly be surprising to find the rich district spending more on education per child than the poor district; that will happen as long as public education is a normal good.

The actual system declared unconstitutional was not purely local. California, like many other states, had been using a system of school finance known as a foundation plan. Under it, every school district was entitled to some state aid. The amount of state aid received was an inverse function of the district property wealth per child.38 All districts received at least $125 per child, an amount referred to as basic aid. This was the only aid received by rich districts. Poor districts received additional funds in the form of equalization aid, and poorer districts received greater amounts of equalization aid.

In Figure 5-8 the solid lines represent the budget constraints of the two representative districts including state aid. The grants are equivalent to nonmatching grants restricted to educational spending. The rich district is shown as receiving a smaller grant than the poor district. This plan should make spending in the two districts more equal, as the larger grant will have a bigger income effect (assuming nonincreasing marginal effects of budget increases). However, there is no reason to think that this system will lead to equal spending. The size of the grants received by poor districts would have to be very large for this to be so, as high-wealth California districts were spending more than four times the amount spent by low-wealth districts.39 Note that this system of grants has no price effects and that we showed earlier that expenditure inducements for particular goods can best be achieved by matching grants (which make use of price effects).

However, we have not yet discussed which equity standard is relevant in this case. Why might a court find, as in the Serrano decision, that the system we have described denies “equality of treatment” to the pupils in the state?40

38 Because the property tax is used almost universally to raise local funds for education, the measure of district wealth used by the courts and other government agencies is the total assessed valuation of property in the district. The appropriateness of this proxy measure is discussed later in the chapter.

39 Grants of this magnitude would certainly cause recipient districts to select the consumption bundle where the restriction is binding. To see this, imagine a district that is very poor and is spending 10 percent of its wealth measured by property value (a very high proportion). If we gave the district a grant three times larger than its current expenditure, its wealth would be increased by 30 percent, that is, three times 10 percent. How much of that increase the district would spend on education if unconstrained depends on the wealth elasticity. Empirical research suggests it is inelastic. By a unitary elasticity estimate, the district would freely choose to spend 3 percent of the 30 percent wealth increase on education, for a total of 13 percent of the original wealth. But the grant restriction is that education expenditures must be at least 30 percent of original wealth (i.e., the size of the grant), so the restriction must be binding. The wealth elasticity would have to be something like 7 in this example for the constraint to be nonbinding. That is not plausible. It is more implausible given the unrealistically high proportion of spending on education assumed initially, and that a grant as small as three times the initial level would achieve equality if completely spent on education.

40 It is important to note that the specific legal interpretation of phrases such as “equality of treatment” and “equal opportunity” can be quite different from their general definitions as analytic concepts. I know few who would argue, for example, that the provision of a public defender ensures neutrality or equal opportunity as we have defined it. It may provide a universal minimum, but the court is satisfied that the public defender ensures “equal opportunity” by legal standards. To maintain these definitional distinctions, reference to legal meanings will be indicated by quotation marks or distinct terminology.


The fact that there are differences in expenditure levels from district to district is not what the court found offensive; the decision made it very clear that the court was not requiring strict equality. Nor did the court require a universal minimum, although it did express particular concern about the low expenditure levels typical for children attending school in low-wealth districts. Thus neither of the outcome standards was found applicable in this setting. Instead, the court held that the state had created a system of school finance that violated wealth neutrality. Children living in low-wealth districts (the suspect class) had less money spent for their public education on average than children living in other districts. The court held that the education a child receives (measured by school expenditure per child) should not be a function of the wealth of his or her parents and neighbors as measured by the school district property tax base per child.

It is interesting to consider briefly this choice of an equity standard. It concerns the expenditure on a child in one district relative to a child in another. In theory, violations of this standard can be removed either by raising the expected expenditure level for those initially low or lowering the expected level for those initially high or both. The average expenditure level for the population as a whole is not restricted (no minimum has been held to be required), and there is no restriction on the overall distribution in terms of how much deviation from strict equality is allowed. The requirement is only that children in property-poor districts as a group have educational opportunities like those of children in all other districts.

Why should a neutrality standard be required as a matter of general policy? If we think back to our discussion of equity concepts, it is plausible to argue that a basic education is a requirement for modern-day life, much like food, clothing, and shelter. Argued more modestly, perhaps a basic education is one of those goods that we wish to guarantee to everybody. But this logic calls for a universal minimum; and since the supply of educational resources is elastic, there is no reason to be concerned about the relative educational expenditures on different children.41

Another reason for concern about equity in education is the belief that it affects the other opportunities available to an individual during the course of a lifetime and that there is a social responsibility to move toward process equality for those other opportunities. By this rationale education is seen as a means to an end; policies such as compensatory education might be derived from it. This concern does involve the relative expenditure levels for education, but it does not imply strict equality, equal opportunity, or neutrality as a requirement for educational expenditures. Requirements such as these can prevent the attainment of equal opportunity for other life opportunities (e.g., if compensatory education is necessary), although they may represent an improvement from the status quo.

41 An important issue not discussed here is the extent to which there is a relation between educational expenditures, real educational resources, and education absorbed by children. Most researchers in this area consider the linkages to be very weak. See, for example, E. Hanushek, “Assessing the Effects of School Resources on Student Performance: An Update,” Educational Evaluation and Policy Analysis, 19, No. 2, Summer 1997, pp. 141–164; and “The Economics of Schooling,” Journal of Economic Literature, 24, No. 3, September 1986, pp. 1141–1175.


A somewhat different rationale from the above focuses more on the importance of even-handedness by government, particularly when its actions bear on vital interests of the citizenry. That is, one could argue that education is a vital interest; and because the state influences the education provided by local governments within it, it must ensure that its influence is even-handed on all the state's children.42 Since it is the state that defines local district boundaries and determines the financing rules that local districts face, it must choose the rules to ensure the same availability of educational opportunities to the children in each district. Of course, the state may influence district behavior in many ways, and as a value judgment one might wish to apply the even-handedness rationale to all of the state influences. But the court expressed only the narrower concern about the opportunities affected by the wealth classifications (i.e., district boundaries) made by the state. Thus a rationale for a neutrality standard can be derived from an underlying concern with state even-handedness in regard to vital interests when they involve suspect classifications. It is concern of this type that is manifested as a matter of law in the Serrano decision.

The Design of Wealth-Neutral Systems

Now let us turn to the problem of designing a school finance system that meets the wealth-neutrality standard. Any state system that ensures equal spending, such as full-state financing, is wealth-neutral. However, local control of schools is an important value that makes that politically unattractive.43 One could also redraw district boundaries so that all districts had equal wealth. There are two serious disadvantages to that approach: (1) It is politically unpopular because it threatens the status of many of the employees of the school system (e.g., district superintendents) and upsets families that have made residential choices partly on the basis of the school district characteristics. (2) In districts comparable in size with those now existing, the wealth of one relative to that of another can change significantly in just a few years (which would necessitate almost continuous redistricting). One could redistrict into much larger units, but then there are disadvantages similar to full state control.

As an interesting aside, the choice of residential location mentioned above suggests a Tiebout approach to wealth neutrality: If families really have free choice about where to locate, the system is wealth-neutral no matter how the state draws district boundaries or what grants are given to different districts. Obviously the court did not accept that reasoning, or it would have found the system free of state constitutional defect.

42 Perhaps this could be interpreted with some flexibility. Certain circumstances might present sufficiently compelling reasons to allow exceptions to even-handedness, for example, compensatory education.

43 Hawaii is the only state to have full-state financing, although New Mexico's system may be equivalent. (The state mandates a uniform spending level and a uniform tax rate that all localities must use.) One can separate the financing and uses of resource decisions, but most would still consider the move to state financing as a diminution of local control.


Although few people would argue that actual residential choices (in terms of school districts) are made independently of wealth (e.g., zoning restrictions that prevent construction of low-cost housing), another way to neutralize the influence of wealth is by open enrollment. That is, suppose each child had the option of choosing (at no additional cost) a school from alternatives in the area that represented a broad range of expenditure levels. Then one might consider such a system wealth-neutral. By the end of 1990, several states had introduced fairly broad open-enrollment programs. However, Odden reports a number of problems in achieving financial equity with these systems.44

We are left with alternatives that maintain the existing districts but attempt to neutralize the influence of wealth. At this point it is time to confront the issue of whether the required neutrality is simple or conditional. The court decision left the matter ambiguous, despite the fact that the differences between the concepts as applied to the remaining alternatives are great.45 We construct simple models of school expenditure decisions and the effects of grants primarily to illustrate the difference between the equity standards. The actual design of grant systems to achieve either standard involves consideration of a number of important factors that are omitted from the simple models used here, and in the following section we will note a number of them.

Simple wealth neutrality requires that there be no functional relation between the wealth of a school district and its public school expenditure level on a per student basis. This implies that the expected expenditure of a district in one wealth class equals the expected expenditure of a district in any other wealth class.46 In Figure 5-9 we illustrate the aim of simple neutrality. The figure has two demand curves. Each represents the average demand of a group of districts: DH for high-wealth districts, and DL for low-wealth districts. With no grant system, all districts face a price of $1.00 per unit of educational spending.47 The figure shows that districts spend, on average, $6300 per student for the high-wealth group and $1800 for the low-wealth group.

44 See Allan R. Odden, “Financing Public School Choice: Policy Issues and Options,” in Allan R. Odden, ed., Rethinking School Finance (San Francisco: Jossey-Bass, Inc., 1992), pp. 225–259.

45 The labels “simple” and “conditional” are simplifications used for convenience. There are a number of special sources of funds, such as federal funds for handicapped children, that could be included as part of the total expenditures and analyzed for their appropriateness in terms of exceptional characteristics. For purposes of the analysis here, we simply remove them from the expenditure base under examination and consider only the equity of general purpose funds. However, we could describe the same choice as between two alternative specifications of conditional neutrality by including the special purpose funds in the base. The equity of special purpose funds does deserve scrutiny; for some thoughts on the subject see Friedman and Wiseman, “Understanding the Equity Consequences of School Finance Reform.”

46 The earlier examples of simple neutrality were illustrated with dichotomous groupings, but the principle applies to a suspect classification into multiple groupings. In this illustration of district wealth classification, we treat each wealth level as a separate and continuous classification. The assumption here is that the court would be equally offended if middle-wealth or upper-middle-wealth districts were systematically handicapped relative to all other districts.

47 The district demand curves are from the assumed functional form E = 0.015P^−0.4W, where E is educational spending per child, W is wealth per child, and P is the price per unit of education. This assumed form is used for ease of illustration. The price elasticity of −0.4 is realistic, although estimates of it in the literature vary between 0 and −1. A realistic wealth elasticity, based on the estimates available in the literature, would be lower than unitary and probably between 0.25 and 0.50. Hoxby, for example, reports an income elasticity of 0.54 for U.S. metropolitan areas, which probably proxies wealth closely since no separate measure of the latter was included. See Caroline M. Hoxby, “Does Competition Among Public Schools Benefit Students and Taxpayers,” American Economic Review, 90, No. 5, December 2000, pp. 1209–1238.


Figure 5-9. Achieving simple wealth neutrality through appropriate matching grants.

It is important to understand that the figure only shows the group averages; we are not assuming, for example, that every single high-wealth district is spending exactly $6300. To achieve simple neutrality between these two groups, the state can choose any average school expenditure level it wants as the target. The figure illustrates a target of $4800 per student. Then the state must design a grant system that causes each group to make choices that average $4800 per student. To achieve this, the figure shows that the low-wealth districts must face a price of $0.09 per unit of educational expenditure and high-wealth districts a price of $1.97. The matching grant rates that will result in these prices are easy to identify from our earlier formula:

Ps + mPs = P0



For the low-wealth districts, this means:

0.09 + m(0.09) = 1.00, or m = 10.11

That is, the state offers low-wealth districts $10.11 for every dollar raised locally. This matching rate will cost the state $4368 per student ($4800 total per student, $432 raised locally). For the high-wealth districts, the formula requires:

1.97 + m(1.97) = 1.00, or m = −0.49

That is, the state sets a negative matching rate! It requires that for each dollar raised locally for education, $0.49 must be returned to the state and only $0.51 is actually spent on local education. With this rate, the state will receive an average of $4611 per student in the high-wealth districts ($4800 total expenditure per student, $9411 raised locally).

Naturally the state, in determining the target level, will consider how much funding it is willing to provide out of general revenues to pay for the matching grants as well as how much it can feasibly recapture through negative matching rates. Furthermore, to reduce the uncertainty about finding the correct matching rates to achieve neutrality (since the actual demand functions by wealth are not known with confidence), the state can use choice restrictions such as those we have already discussed.

To this point we have illustrated only simple wealth neutrality. However, another possible interpretation is to require conditional wealth neutrality with the district tax rate choice as the exceptional characteristic.48 That is, a court may define wealth neutrality to mean that those districts choosing the same tax rate should have the same expenditure level. This requires an equal percentage sacrifice of district wealth to buy the same amount of education everywhere. This interpretation is consistent with the California court's indication that a system known as district power equalizing (DPE) would be acceptable.49

48 In California, a state initiative passed in 1978 and known as Proposition 13 limits local tax rates to 1 percent of assessed value. This effectively removes the option of choosing a local rate. California schools are now financed primarily through the state, with local property taxes contributing less than 25 percent of school revenues. In the 1980s the California courts concluded that the system was sufficiently wealth-neutral and the Serrano case was declared closed in 1989. In 1997–1998, the state reported that 97.91 percent of students were in districts in compliance with the Serrano wealth-neutrality standard. At the same time, according to Education Week, California ranked last among states in the nation for adequacy of educational resources, illustrating poignantly the difference between a wealth-neutrality standard and a universal minimum.

49 Another way to reconcile the court's acceptance of DPE as a possible solution is if it is simply assumed (incorrectly) that simple wealth neutrality would result from using it. For more discussion of this issue, see Friedman and Wiseman, “Understanding the Equity Consequences of School Finance Reform.”
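These prices and matching rates follow directly from the demand curve assumed in note 47. The sketch below (ours, not the text's) reproduces them; the rounding of the prices to $0.09 and $1.97 matches the text's figures:

```python
# Demand per child from note 47: E = 0.015 * P**-0.4 * W.
def spending(price, wealth):
    return 0.015 * price**-0.4 * wealth

# Wealth levels consistent with the $6300 and $1800 choices at P = $1.00:
w_high = 6300 / 0.015  # $420,000 per child
w_low = 1800 / 0.015   # $120,000 per child

target = 4800.0
for w in (w_low, w_high):
    # Invert the demand curve: P = (0.015 * W / E)**(1 / 0.4).
    p = round((0.015 * w / target) ** 2.5, 2)  # $0.09 and $1.97
    m = 1.0 / p - 1.0  # from Ps + m*Ps = P0 with P0 = $1.00
    print(f"wealth ${w:,.0f}: spends ${spending(1.0, w):,.0f} at P = $1; "
          f"needs price ${p:.2f} (m = {m:.2f}) to average ${target:,.0f}")
```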


Under a DPE plan the state would publish a schedule associating each possible expenditure level with a property tax rate. Any district that wanted to spend, say, $5000 per child would have to tax itself at the rate associated with that expenditure level on the state schedule, for example, 1 percent. If the district revenues collected at the 1 percent rate are not enough to provide $5000 per child, the state provides a subsidy to make up the difference. If the district revenues exceed the amount required to provide $5000 per child, the state recaptures the excess. But the only way a district can have an expenditure level of $5000 per child is to tax itself at the specified rate of 1 percent.

The advantage of the conditional neutrality interpretation is that it is relatively easy to design a system to meet the standard. As with the above illustration, no knowledge of district demand functions is required. But there is a cost to this interpretation as well. One must ask why the district tax rate choice should be an exceptional characteristic. One rationale might be to rely purely on a notion of taxpayer equity. However, wealthy districts that wish to spend a lot on education incur a substantial financial burden in the form of revenues recaptured by the state, whereas equally wealthy districts that do not wish to spend as much escape the burden. Why should equally wealthy districts provide differing contributions to state revenues? Taxpayers might not consider this very equitable.50

Perhaps more importantly, wealth may remain as a significant determinant of educational spending on the children. That is, one might reject the taxpayer equity rationale and argue instead that the burden on high-spending wealthier districts is intended to protect children by neutralizing the influence of wealth on education-spending decisions. But there can be no assurance that it will do so. To show this, Figure 5-10 illustrates the budget constraints that result from using a DPE system. Observe that if all districts tax themselves at 100 percent, all must end up at the same point on the horizontal axis. As a matter of policy, the state can select any point on the horizontal axis as the common point; the farther to the right, the greater the total expenditure—and state aid—will be. Under a very simple DPE system, the new budget constraints can be represented as straight lines that intersect the vertical axis at the district's own wealth level. This is equivalent to the state electing a common wealth base (the intercept with the horizontal axis) whereby educational expenditures in a district equal the district's tax rate choice times the common base.51

The important characteristic for our purposes is that the state schedule determines the new budget constraints (and thus prices and matching rates) for all districts. Mathematically, the general DPE rule is that the expenditure per child (E) is a function of a state-determined formula F(τ) that depends solely on the tax rate choice (τ) of each local district:

E = F(τ)    where ΔE/Δτ ≥ 0

50 The taxpayer equity issue will also depend on how district wealth is defined. We discuss this issue later.

51 The budget constraints need not be straight lines. For example, the state could increase or decrease the “common base” as the tax rate increases, making it nonlinear. Furthermore, it could restrict tax rates to be within a certain range (e.g., between 2 and 3 percent).
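The mechanics of a DPE schedule can be summarized in a few lines of code. In the sketch below, the $5000-at-1-percent point comes from the example above, while the district wealth levels are invented for illustration:

```python
def dpe_aid(tax_rate, district_wealth, schedule):
    """State aid per child under DPE: the schedule's guaranteed spending
    minus what the district raises itself; negative aid is recapture."""
    return schedule(tax_rate) - tax_rate * district_wealth

# A schedule on which a 1 percent rate buys $5000 per child, as in the
# example above (equivalent to a common wealth base of $500,000).
schedule = lambda t: t * 500_000

# Hypothetical wealth levels: a $300,000-per-child district receives
# aid, while a $600,000-per-child district faces recapture.
print(dpe_aid(0.01, 300_000, schedule))  #  2000.0 subsidy
print(dpe_aid(0.01, 600_000, schedule))  # -1000.0 recaptured
```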


Figure 5-10. District power-equalizing does not ensure simple wealth neutrality.

The straight line constraints shown in Figure 5-10 have the particular mathematical form:

E = τWC

where WC is the state-chosen common wealth base. In actual practice, states using systems such as DPE choose more complex functional forms.52 But with this simple form it is easy to show that the price per unit of education faced by a district with wealth Wi is53:

Pi = Wi/WC

Thus the DPE rule determines a set of prices, but with no consideration of the demand functions that determine the prices necessary to ensure simple wealth neutrality. That is, requiring a DPE system in no way ensures simple wealth neutrality. Suppose a state hopes to use a DPE system and achieve the same result with the districts from our earlier example: inducing districts of both wealth classes to average $4800 expenditure levels.

52 The addition of a constant term to the formula would represent a nonmatching grant amount. This is common in practice, along with placing narrow limits on the choice of tax rate. Both of these features restrict the inequality in school spending.

53 By definition, local contributions are PiE and the tax rate τ equals local contributions over local wealth, or PiE/Wi. Substituting this last expression for τ in the above equation, the Es cancel and the result is as shown.


Assume that the districts in the example have actual average wealth of $120,000 per child in the low-wealth group and $420,000 per child in the high-wealth group. We already know (from the demand curve of Figure 5-9) that the high-wealth districts will choose an average of $4800 per child on schooling if they face a price per unit of $1.97. The DPE schedule that has this price must use the following common wealth base WC:

$1.97 = WH/WC = $420,000/WC or WC = $214,217

and therefore the state schedule must be

E = τ($214,217)

With this DPE schedule, however, we know that the low-wealth districts will not average E = $4800. To do so, they must face a price PL of $0.09 (also known from Figure 5-9). But the schedule implies a price to them as follows:

PL = WL/WC = $120,000/$214,217 = $0.56

This price is far too high. With it, they will choose an average expenditure level of $2270, far below the $4800 of the high-wealth districts.54 The DPE plan will be inconsistent with simple wealth neutrality.

This example illustrates that a DPE requirement in no way guarantees simple wealth neutrality. It does not prove that there are no DPE systems that could achieve it, but in a real setting, that would be quite a difficult design task. The main point is simply that a school financing requirement of simple wealth neutrality is quite different from conditional wealth neutrality.
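The failure is easy to confirm with the same demand curve. A brief sketch (ours) using the text's common base of $214,217:

```python
# Under DPE with common base WC, a district with wealth Wi faces the
# price Pi = Wi / WC per unit of education (note 53).
wc = 214_217.0   # the text's base; the high-wealth price is about $1.97
w_low = 120_000.0

p_low = w_low / wc                   # about $0.56, not the $0.09 required
e_low = 0.015 * p_low**-0.4 * w_low  # demand curve from note 47

print(round(p_low, 2), round(e_low))  # 0.56 2270
```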

Other Issues of School Finance Equity

When the Serrano issues are being explained, it is common to illustrate inequities by describing the plight of children in low-wealth districts. However, it is also important to recognize that many children from poor families do not live in low-wealth districts. In California, for example, the majority of such children live in districts that are property-rich; an example is San Francisco, which has property wealth more than twice the state average. Under a plan meeting either neutrality standard and keeping average spending in the state at approximately the same level, these children could be made substantially worse off. It is important for analysts to think carefully about how to avoid or minimize this or other unintended harms when implementing standards such as those required in Serrano. Below are some suggested directions that an analyst working on the issue might explore; they are not intended to exhaust the range of equity issues that are encountered in school financing.

54 Calculated according to the hypothetical demand curve from note 47. In this example, DPE is weaker than simple neutrality requires. But if the demand for public education is price-elastic and wealth-inelastic, then DPE would induce lower-wealth districts to spend more than higher-wealth districts. See M. Feldstein, “Wealth Neutrality and Local Choice in Public Education,” The American Economic Review, 65, No. 1, March 1975, pp. 75–89.


First, one dimension of a school finance plan that does not inherently interfere with neutrality is the proportion of funds derived from nonmatching revenues as opposed to matching grant arrangements. Earlier we suggested that full state financing meets both neutrality standards. It is possible to create a neutral system that maintains some degree of local choice but relies heavily on nonmatching revenues. For example, the state could provide $4000 to all students from general revenues and allow districts the option of adding up to $2000 to it by a matching grant system. This would reduce the stress owing to recapture placed on city budgets (assuming that the state general revenues are raised by broad-based taxes), at least as compared with a full matching grant system. Furthermore, by narrowing the range of expenditure variation that can occur as a result of local decision-making, it would make control of the variation easier. That can be an absolutely critical design feature if one is trying to achieve simple wealth neutrality, given the uncertainty about actual district price and wealth elasticities.

Second, the analyst should think carefully about what measure to use of a district's fiscal capacity: its ability to generate tax revenues. There is no law that requires the property tax base to be used as the measure of district wealth, nor is there any economic argument that maintains that total property value is the proper measure of a district's fiscal capacity. For example, most analysts think that the level of personal income in a district is an important determinant of true fiscal capacity. Thus, a wealth measure that depended on both property base and personal income could be constructed. It is no surprise that the hybrid measure would favor cities that have large low-income populations. One reason why the concern about them arises is the sense that property wealth alone does not give an accurate picture of fiscal capacity. Another possibility is that commercial, industrial, and residential property should be weighted differently on the grounds that the ability to tax different types of property is different. For example, it may be less possible to impose taxes on industrial wealth than on residential wealth if industry is more likely to relocate; many communities offer tax breaks to new industries as locational inducements.55

Third, the nominal dollar expenditures may not be the best measure of the educational opportunities being provided through the schools. Although that remark could stir up a hornet's nest (e.g., when is one educational opportunity better than another?), there are some cost differences from district to district in providing identical resources. For example, to provide a classroom of 65° to 68°F in northern California during the winter costs a great deal more than in other parts of the state. Similarly, to attract a person with high teaching ability to an inner city school might require a higher salary than a suburban school would have to offer to obtain comparable talent.

55 See, for example, Helen F. Ladd, “State-wide Taxation of Commercial and Industry Property for Education,” National Tax Journal, 29, 1976, pp. 143–153. It is interesting to think about the effects of a Serrano solution on the willingness of communities to have industry locate within them. To the extent that expenditure levels are determined by the “common base” of a DPE plan, every community has less incentive to attract industry to it.


Thus it might be appropriate to adjust the expenditure figures to reflect differences in the cost of obtaining educational resources.56

Fourth, the student populations of equally wealthy districts may differ substantially in regard to the educational resources appropriate to each. For example, some districts might be characterized by high proportions of children from non-English-speaking families or of children who are mentally or physically handicapped or of children who are in high school. Thus one has to think carefully about how to best account for these differences. One approach, used in Illinois, is to develop a pupil-weighting system that gives a higher measure of enrollment in districts with students who are relatively expensive to educate (a stylized sketch of such a system follows below).

Fifth, it is important to keep in mind that the long-run effects of any school finance reform may be substantially different from those observed in the short run. Over time, residents and firms may change their locations because of the reform. The use of private schools rather than public schools may change as the reform causes changes in the attractiveness of public schools. All of these factors result in changes in a district's preferences for public education and its wealth base, and thus one expects its response to any particular grant system to change as well. Thus it is important to consider analytic models that account for the long-run effects.57
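As promised above, here is a stylized sketch of a pupil-weighting calculation; the categories and weights are invented for illustration and are not Illinois's actual values:

```python
# Each pupil counts once, plus an extra weight for categories that are
# relatively expensive to educate. All weights here are hypothetical.
extra_weights = {"english_learner": 0.2, "disability": 0.9, "high_school": 0.25}

def weighted_enrollment(total_pupils, category_counts):
    """Enrollment measure for the aid formula: total pupils plus the
    weighted counts of pupils in high-cost categories."""
    return total_pupils + sum(extra_weights[c] * n
                              for c, n in category_counts.items())

# A district of 1000 pupils with 150 English learners, 80 pupils with
# disabilities, and 300 high schoolers counts as 1177 weighted pupils.
print(weighted_enrollment(1000, {"english_learner": 150,
                                 "disability": 80,
                                 "high_school": 300}))
```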

Summary

To strengthen understanding of equity in general policy analysis, a number of important and competing concepts of equity must become part of the analytic framework. One conceptual distinction is the extent to which equity consequences refer only to effects on the general distribution of well-being (i.e., net effect on utility levels) or include specific egalitarian goals (e.g., equal distribution of jury service). In both cases, equity may be measured against outcome standards or process standards.

Outcome standards refer to the aggregate amount of variation in the shares (e.g., income, food) that individuals receive. Two common standards of this type are strict equality and a universal minimum.

56 For guidance on how to do this, see William Fowler, Jr., and David Monk, A Primer for Making Cost Adjustments in Education (Washington, D.C.: National Center for Education Statistics, Publication 2001323, 2001).

57 There is an extensive literature on school finance equity that deals with the issues raised here as well as others. Examples include Robert Berne, “Equity Issues in School Finance,” Journal of Education Finance, 14, No. 2, 1988, pp. 159–180; Thomas A. Downes, “Evaluating the Impact of School Finance Reform on the Provision of Public Education: The California Case,” National Tax Journal, 45, No. 4, December 1992, pp. 405–419; W. Duncombe and J. Yinger, “School Finance Reform: Aid Formulas and Equity Objectives,” National Tax Journal, 51, No. 2, June 1998, pp. 239–262; Caroline M. Hoxby, “Are Efficiency and Equity in School Finance Substitutes or Complements,” Journal of Economic Perspectives, 10, No. 4, Fall 1996, pp. 51–72; Helen F. Ladd and John Yinger, “The Case for Equalizing Aid,” National Tax Journal, 47, No. 1, March 1994, pp. 211–224; Andrew Reschovsky, “Fiscal Equalization and School Finance,” National Tax Journal, 47, No. 1, March 1994, pp. 185–197. See also the symposium “Serrano v. Priest: 25th Anniversary,” Journal of Policy Analysis and Management, 16, No. 4, Winter 1997, pp. 1–136, and the texts by David Monk, Educational Finance: An Economic Approach (New York: McGraw-Hill Book Company, 1990) and Allan R. Odden and Lawrence O. Picus, School Finance: A Policy Perspective (New York: McGraw-Hill Book Company, 1992).


One factor apart from moral feeling that influences the choice of the standards (as well as the methods for achieving them) is the elasticity of supply of the good(s) in question. The Lorenz curve and Gini coefficient illustrate methods of measuring the amount of outcome equality.

Process concepts of equity concern not the aggregate amount of variation but the rules and methods for assigning shares to individuals. These concepts become relevant once it is accepted that there will be inequality; the question is whether the share that each person ends up with has resulted from a fair process. A fundamental process standard is that of equal opportunity: Each person should have the same chance of obtaining a share of a given size. In practice it is often impossible to tell whether a particular individual had equal opportunity, and we sometimes try to test the implications of equal opportunity statistically by comparing the expected outcomes in a large group with the actual group outcomes. In some instances, we substitute the less stringent concept of neutrality for particular groupings (the suspect ones) instead of equal opportunity for all. Simple neutrality means that the distribution of shares within a suspect group should be identical with the distribution of shares among all others. Often there will be exceptional characteristics that cause legitimate deviations from simple neutrality. (For example, the elderly will be overrepresented on juries if employment is a legitimate excuse from jury duty.) When deviations arise, the concept of conditional neutrality becomes appropriate: If the members of a suspect class have exceptional characteristics identical with those of all others, the distribution of shares within each group should be identical.

Whenever exceptional characteristics are relevant to process equity, it is appropriate to consider the additional concepts of horizontal and vertical equity. These concepts are used to assess the fairness of differences caused by the exceptional characteristics. Horizontal equity means that likes should be treated alike, and the question is the definition of the set of exceptional characteristics used to define who will be treated alike. In tax equity, for example, one question is whether the amount of income tax ought to depend upon the source of income (such as distinguishing wage income from dividend income). Vertical equity means that there is a fair difference in shares among people with different levels of exceptional characteristics. In the income tax case, the issue is the extra amount of tax due as income rises (the degree of progressivity).

Analysts contribute to public policy debate by examining the consequences of alternative equity standards. We illustrated this with an application to school financing. First we developed the basic theory of intergovernmental grants, relying primarily on the utility-maximization model developed earlier. Based upon the theory, a crucial distinction between grant types is whether there is a matching requirement: Grants with matching requirements have effects similar to those of price changes, whereas nonmatching or block grants have only income effects. However, this basic analysis must be modified to take account of the choice restrictions that are common design features of actual grant programs: the degree of selectiveness, whether the grant is open- or closed-ended, and maintenance-of-effort requirements.
The effectiveness of these restrictions depends upon their administration and enforcement; the analyst should be aware that grant recipients have an incentive to convert a grant into its pure income equivalent.


One result that generally holds up within the context of these models is that a matching grant will induce greater spending on the covered goods than an equivalent-subsidy nonmatching grant (other things being equal). This suggests that matching grants are more likely to be appropriate when the grant’s purpose is to correct an externality. Nonmatching grants are more likely to be appropriate when the purpose is general redistribution.

Other reasons for qualifying the predictions of these models are based upon the recognition that a community is not a single-minded maximizer. The Tiebout model makes us aware that individual locational choices influence community decision-making by changing the makeup of the community; policy changes may induce changes in locational patterns and thus affect prediction of the policy effects. The bureaucratic model suggests that decision-making power does not rest exclusively with residents or voters; a grant may stick like flypaper to the bureau receiving it and thus prevent or slow down its conversion into an income effect.

Armed with the theory of grants and some specific ideas about equity, we presented an application to school finance. The California foundation system of school finance, struck down as unconstitutional in the Serrano decision, is a nonmatching grant system that reduces aggregate inequality from the level that a purely local system would produce. However, the court was not concerned with the aggregate amount of inequality. It was offended because the state failed to provide a wealth-neutral system: Children in property-poor districts experienced much lower levels of school expenditures than did children in other districts. This same problem continues in numerous states. The court was ambiguous as to requiring simple or conditional wealth neutrality; the ambiguity depends on whether the district tax rate choice is considered an exceptional characteristic. Several systems of school finance, for example, full state financing, can achieve both simple and conditional neutrality, but for political reasons it is likely that the existing districts in most states will continue to have at least some power over their individual expenditure levels. In these cases, either (but not both) of the neutrality standards can be met by a system of matching grants. Simple neutrality is harder to achieve because it requires knowledge of each district’s demand curve. Conditional neutrality is easier to design but may result in substantial correlation between district wealth and expenditures.

A number of other issues must be considered in designing an equitable school finance policy. For example, a naïve application of the neutrality principle could substantially and unintentionally worsen the position of children of lower-income families living in the large cities, since those cities are considered property-rich. Careful thinking about the degree of state general funding versus matching grant funds, the measurement of a district’s true fiscal capacity, the district cost variation of providing equivalent educational resources, the measurement of pupil needs, and the long-run consequences of reform can lead to a fairer system of finance and a reduction in unintended adverse consequences of seeking greater equity.

Exercises

5-1
The Minnesota legislature was debating how to give financial relief to the poor from higher heating bills expected in the future. One suggestion was that any increase in expenditure per household compared with the expenditure of the prior year be billed directly to the state. A second suggestion was to give each household a heating voucher (a nonmatching grant to be used only for heating the household) of an amount equal to the difference between last year’s expenditure and the cost of the same quantity at this year’s higher prices. Both suggestions would give each household more utility than it had last year, but only one would help conserve energy. Explain.

5-2

Sometimes the political implications of grants may not be what they seem. At one time, members of the liberal caucus in Congress introduced legislation to provide increased funding for local social services. It came as a great surprise when the caucus was approached by several conservatives with an offer of bipartisan support. These same conservatives had long attacked spending on social services as wasteful; now, however, they found that their voters perceived them as “heartless.” Given the financial plight of local governments, they suggested dropping the matching requirement of the liberal version. This would allow them to take some credit for the legislation and to counteract their heartless image. The liberals readily agreed.

a. Had these conservatives softened? Or can you offer an explanation for their behavior consistent with their long-standing objectives? [Hint: Use diagrams and assume some local community choice under a matching plan; then construct an equally costly nonmatching plan.]

b. The answer to (a) was pointed out to one member of the liberal caucus. She chuckled softly, shook her head, and responded cryptically: “Never underestimate the tenacity of social service bureaucrats.” What could the congresswoman be thinking?

5-3

The federal government has been providing a nonmatching grant of $1 billion to a community to provide training for the long-term unemployed (defined as those unemployed for 6 months or more). Before the grant program, the community had not been spending anything on training.

a. Draw a diagram to represent the community’s budget constraints and choices in the preprogram and postprogram periods.

b. With new elections in Congress, the legislative mood shifted. More congressional members argued that the federal government should not tell local governments what to do. They proposed changing the terms of the grant so that it could be used either for training or public employment. The community already spends $25 billion for salaries of public sector employees out of its total budget of $250 billion. What effect do you think this proposal would have on spending for training? Use the standard theory of intergovernmental grants in answering.

c. Suppose the proposal passed and you observed that spending on training programs hardly changed at all. What theory might explain this?

5-4

The district demand for public education expenditures per child E is E = 0.03P_E^−0.4 W, where P_E is the price per unit of E and W is the district property wealth per child. Suppose that there are only two districts in the state and that they have property wealth per child of $20,000 and $70,000. Currently, school finance is purely local and P_E = $1. If each district has the same number of children, identify a variable matching grant program that the state can introduce to make education spending equal with no net effect on the state treasury. [Hints: Recaptured funds must equal state subsidies. Use a calculator.] (Answer: Spending is equalized at $1618.92 per child with no net cost to the state when the state uses a matching rate of −$0.47818 for the $70,000 district and $10.95883 for the $20,000 district.)
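This answer can be checked numerically. The sketch below is a hypothetical verification (not part of the exercise), assuming the standard grant-price interpretation in which a matching rate m lowers a district’s effective price of E to P = 1/(1 + m), so that “no net effect on the state treasury” means the recapture from the wealthy district exactly funds the subsidy to the poor one; small differences from the printed figures are rounding.

    # Hypothetical check of the exercise 5-4 answer (assumptions noted above).
    # Demand per child: E = 0.03 * P**-0.4 * W, with effective price P = 1/(1 + m).
    W_LOW, W_HIGH = 20_000, 70_000

    # Equal spending requires pH/pL = (W_HIGH/W_LOW)**(1/0.4).
    ratio = (W_HIGH / W_LOW) ** (1 / 0.4)

    # Zero net treasury cost, E*(1 - pL) + E*(1 - pH) = 0, requires pL + pH = 2.
    p_low = 2 / (1 + ratio)
    p_high = ratio * p_low

    E = 0.03 * p_low ** -0.4 * W_LOW
    print(round(E, 2))               # ~1618.9, the equalized spending per child
    print(round(1 / p_low - 1, 5))   # ~10.95881, matching rate for the $20,000 district
    print(round(1 / p_high - 1, 5))  # ~-0.47818 (a recapture), $70,000 district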

APPENDIX: AN EXERCISE IN THE USE OF A SOCIAL WELFARE FUNCTION AS AN AID IN EVALUATING SCHOOL FINANCE POLICIES

The idea of using a social welfare function as a way of making evaluative judgments that combine equity and efficiency was introduced in Chapter 3. In this section we illustrate some of the mechanics of constructing a social welfare function for use in evaluating school finance grant systems. Realistic examples involve detailed empirical specifications, and the hard analytic work of choosing parameters carefully and cogently is essential to the success of any such analysis. Here we wish to keep the mechanics as transparent as possible and thus greatly oversimplify by building partially on the illustrative example of the chapter. However, it is strongly recommended that several actual applications be studied before use of this technique is attempted.58

Figure 5A-1, like Figure 3-9, displays three social indifference curves that represent different emphases on efficiency and equality (the Benthamite straight line W_B, the Rawlsian right angle W_R, and the middle of the road W_M). Although we know that there is no social consensus about an appropriate welfare function, it may be that individual politicians and interest group representatives have social preferences that are known to be closer to one function than another. Thus if policies could be evaluated with the functions representing the interests of the decision makers, such an analysis could be helpful to those decision makers in deciding what policies to support.59 This is most likely to be useful when the consequences of adopting policies are complex: gains and losses of varying sizes distributed over different interest groups in a nonobvious way.

58 The simple exercise presented here was inspired by a very thoughtful simulation study of school finance reform in New York. See R. P. Inman, “Optimal Fiscal Reform of Metropolitan Schools,” American Economic Review, 68, No. 1, March 1978, pp. 107–122. An example using social welfare functions in another policy area is N. H. Stern, “On the Specification of Models of Optimum Income Taxation,” Journal of Public Economics, 6, 1976, pp. 123–162.
59 Policy conclusions from this type of analysis, and a defense of them, would have to be presented in a nontechnical way. For example, the analyst might learn that when the matching rate in a certain grant plan goes above $2 for every local dollar, the ranking by the Benthamite criterion declines rapidly. The analyst would have to realize that the Benthamite criterion is most relevant to the large middle class (because each person’s utility level is weighted equally). Then one can state a conclusion and explain it in language understood by policy makers, for example, “The matching rate should not be more than $2 for every local dollar. Otherwise, the state budget level would have to be raised and the crucial support of the Taxpayer’s Association would be jeopardized.”


Figure 5A-1. Alternative social welfare functions.

Social welfare functions can be constructed to take the basic shape of any of the social indifference curves shown in Figure 5A-1. For example, consider the family of social welfare functions represented by

W = (Σ_{i=1}^{n} U_i^δ)^{1/δ}


can see if a policy proposal ranks favorably over a broad range of values for δ. (If it does, a potentially broad range of support for the policy is implied.) How does one know what values of individual utility levels to enter? After all, utility is nonmeasurable and noncomparable among different people. Here we deviate from consumer sovereignty and seek to identify a way of making utility comparisons that reflect the social judgments of potential users of the analysis. Typically, the assumption is that policy makers will count people who are identical in terms of certain observable characteristics as having the same utility. For the case of school finance, we will assume that each household’s utility level can be fairly represented by a function U(E, BT ), where E is the amount of education consumed by the household (measured by the educational expenditures on its children) and BT is the after-education tax wealth that the household has available for all other goods (using the property tax base per child as a proxy variable adjusted for taxes paid). Both variables are observable, and households with the same E and BT will be counted as having the same utility. The specific functional form of the utility function is chosen to be common to all persons and to weight the observable characteristics in a manner that policy makers will judge reasonable. Here real-world statistics offer useful guidance. In the case of school finance for this exercise, we use the form U = BT0.985E 0.015 This form implies that households will choose to spend 1.5 percent of their wealth on education.60 This percentage is what we “observed” in the hypothetical prereform districts of the main chapter. That is, the average district with a $120,000 property tax base per child chose a $1800 school expenditure level (per child), and the average $420,000 district spent $6300 per child. In each case the ratio of expenditure to the tax base is 0.015 (= 1800/ 120,000 = 6300/420,000). The policy maker aware of the equality might reasonably conclude that households prefer the proportion, and might wish the analysis to penalize deviations from it caused by policy.61 Imagine that each district in our example is composed solely of homogeneous households with one child per household (this keeps the example as simple as possible). Then each household in a district will pay local taxes equal to the educational expenditure on its child (with no state intervention). If each of the households in one district has a pretax wealth of B0, the behavioral demand function used in the chapter implies that it will spend 1.5 percent of B0 on education. Since within a district all the households in our example are assumed to be homogeneous, they will unanimously agree as local voters to spend $1800 per child in the $120,000 district and $6300 per child in the $420,000 district.

60 This form is a special case of the more general form U = B_T^α E^(1−α), where α is the utility-maximizing proportion of the budget spent on B_T and 1 − α for E. This function, called the Cobb-Douglas utility function, is discussed in Chapter 6.
61 Like the analysis this exercise is modeled after, we forego complicating this socially chosen utility function further. It is not consistent with the price-elasticity assumptions used in the demand equations in the chapter, but then it is only used after the behavioral predictions have been made.
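To make the family of functions concrete, here is a minimal sketch (illustrative only, not from the text); the two utility numbers are the prereform household utilities computed in Table 5A-1 below, and the minimum stands in for the Rawlsian limit.

    # W = (sum of U_i**delta)**(1/delta), delta <= 1. delta = 1 is Benthamite;
    # delta -> -infinity approaches the Rawlsian minimum (footnote 63 below
    # notes that delta = -10 is usually an adequate approximation). Scores are
    # not comparable across different delta values (see Table 5A-2, note b).
    def welfare(utils, delta):
        return sum(u ** delta for u in utils) ** (1 / delta)

    utils = [111_009, 388_531]   # prereform U_L and U_H from Table 5A-1

    for delta in (1, 0.1, -10):
        print(delta, round(welfare(utils, delta)))
    print("Rawlsian limit:", min(utils))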


Table 5A-1 Simulated Effects of School Finance Reforms on Households

                                          School finance policies
                                 ----------------------------------------------------
                                 Simple wealth       Conditional wealth   Equal spending
Districts            Prereform   neutrality          neutrality (DPE)     (full state
                                 (matching grants)                        financing)

Low-wealth district
  E                      1,800        4,800               2,270                4,800
  B_T                  118,200      119,690             120,535              117,867
  U                    111,009      114,052             113,563              112,341

High-wealth district
  E                      6,300        4,800               4,800                4,800
  B_T                  413,700      410,711             412,396              412,533
  U                    388,531      384,195             385,747              385,874

Now, we are almost ready to compare the three policies used in the preceding section’s illustrations: do nothing, achieve simple wealth neutrality with target spending of $4800, and achieve conditional wealth neutrality with the high-wealth district spending $4800. First we must clarify some additional assumptions used in the calculations below. We treat our two district types as if they were equal in population and made up the state. The state treasury always breaks even. The surplus generated by each reform (recaptures exceed state aid) is assumed to be distributed in equal payments to each household in the state (e.g., a tax credit for households with children in school). The redistributions have small income effects that we ignore. We continue to rule out household locational changes or the substitution of private for public education, so that the district responses to the reform remain as indicated in the main chapter.

The effects of the three previously discussed policies on the households are summarized in Table 5A-1. The positions of a low- and a high-wealth household with no reforms in effect are given in column 1. Since in this case a household’s tax payment equals the district educational expenditure per child, B_T in the low-wealth district is $118,200 (= $120,000 − $1800) and in the high-wealth district it is $413,700 (= $420,000 − $6300). The utility level U is calculated (here and in the other columns) by substituting the levels of B_T and E in the utility function chosen earlier. For example, the utility of each family in the low-wealth district is

U = 118,200^0.985 × 1800^0.015 = 111,009

Columns 2 and 3 show similar calculations for the simple and conditional wealth neutrality proposals illustrated in the main chapter. The only entries that require explanation are those for B_T. Recall that the low-wealth district under simple wealth neutrality received $4368 in matching state aid and contributed only $432 of its own wealth to reach the $4800 expenditure level.


The high-wealth district under this proposal raised $9411, of which the state recaptured $4611. Therefore, the state had net receipts of $243 (= $4611 − $4368) for every two households, or $121.50 per household. Under our assumption, the state rebates $121.50 to each household. Therefore, the after-tax wealth B_T of the household in the low-wealth district is

B_T = $120,000 − $432 + $121.50 = $119,689.50

Similarly for the high-wealth district,

B_T = $420,000 − $9411 + $121.50 = $410,710.50

The after-tax wealth figures for the DPE proposal used to achieve conditional wealth neutrality are similarly derived.62 Note that in all cases the sum over both districts of educational expenditures and after-tax wealth for the two representative households is $540,000 (the joint budget constraint).

Column 4 of Table 5A-1 contains a new policy alternative not previously discussed: equal spending per child achieved by full state financing of schools with a statewide property tax. A uniform expenditure of $4800 per child is made with state revenues. This level is chosen for comparability with the simple wealth neutrality proposal. This means that the state tax rate τ applied to both the low- and high-wealth districts must raise $9600 in revenues for every two households:

120,000τ + 420,000τ = 9600
540,000τ = 9600
τ = 1.778 percent

The low-wealth district makes tax contributions to the state of $2133 per household (= 0.01778 × 120,000), and the high-wealth district contributes $7467 (= 0.01778 × 420,000). These figures are used to determine the after-tax wealth in each district.

As we look across the columns, it becomes clear that no one of these proposals is obviously better than the others. Furthermore, their relative impacts on any single district are not obvious from the observable variables only. The DPE proposal, for example, barely increases educational spending in the low-wealth district compared to the increase under full state financing, but it is nevertheless ranked higher: The bigger after-tax wealth outweighs the lower educational spending.

The rankings are, of course, a consequence of the utility function chosen. Keep in mind that this choice has some justification: the observation that households across the wealth classes seem to prefer spending 1.5 percent of their wealth on education. The full-state-financing plan has the family in the low-wealth district spending 3.9 percent of its total after-tax wealth ($122,667 = $4800 + $117,867) on education, whereas under the DPE plan the percentage is 1.8.

62 The exact demand of the average low-wealth district under DPE was calculated by using the demand curve E = 1800P^−0.4 at P = $0.56.
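The utility entries of Table 5A-1 and the rankings reported in Table 5A-2 below can be reproduced mechanically from the (E, B_T) pairs. The following sketch is an illustrative recomputation under the chapter’s assumptions, not code from the text.

    # Recompute Table 5A-1 utilities and Table 5A-2 rankings (illustrative).
    def utility(e, b_t):
        return b_t ** 0.985 * e ** 0.015   # the socially chosen utility function

    # (E, B_T) for the low- and high-wealth households under each policy.
    policies = {
        "prereform":            ((1800, 118_200), (6300, 413_700)),
        "simple neutrality":    ((4800, 119_690), (4800, 410_711)),
        "conditional (DPE)":    ((2270, 120_535), (4800, 412_396)),
        "full state financing": ((4800, 117_867), (4800, 412_533)),
    }
    scores = {name: [utility(e, b) for e, b in pair]
              for name, pair in policies.items()}

    rules = {
        "Benthamite":         lambda ul, uh: ul + uh,
        "middle of the road": lambda ul, uh: (ul ** 0.1 + uh ** 0.1) ** 10,
        "Rawlsian":           lambda ul, uh: min(ul, uh),
    }
    for label, rule in rules.items():
        ranked = sorted(scores, key=lambda name: -rule(*scores[name]))
        print(label + ":", " > ".join(ranked))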


Table 5A-2 Social Welfare Rankings of the Alternative School Finance Reforms (1 = Best, 2 = Second Best, 3 = Third Best, 4 = Last)

                                          School finance policies
                                 ----------------------------------------------------
                                 Simple wealth       Conditional wealth   Equal spending
                     Prereform   neutrality          neutrality (DPE)     (full state
                                 (matching grants)                        financing)

Household utility levels(a)
  U_H                  388,531      384,195             385,747              385,874
  U_L                  111,009      114,052             113,563              112,341

Social welfare functions(b)
  Benthamite                 1            3                   2                    4
  Middle of the road         4            2                   1                    3
  Rawlsian                   4            1                   2                    3

(a) H = high-wealth district; L = low-wealth district.
(b) The actual numerical scores for the alternatives are not comparable across the different welfare functions because the functional form changes.

Table 5A-2 shows the rankings of the four policy alternatives when evaluated by three different social welfare functions: Benthamite (δ = 1), Rawlsian (the utility level of the worst-off household),63 and a middle-of-the-road function (δ = 0.1). Recall that the Benthamite function is simply the sum of the utility levels, and the middle-of-the-road function (using subscripts L and H for the households in the low-wealth and high-wealth districts) is

W_M = (U_L^0.1 + U_H^0.1)^10

On looking at the table, we see that a different alternative is ranked first by each of the social welfare functions. Although there is no consensus about what is best, note that the full-state-financing proposal is dominated by both the simple and conditional wealth neutrality proposals (i.e., the latter two are ranked higher than full state financing by all three evaluation rules). Thus, unless policy makers have a very strong social preference for equal educational spending per se, we can eliminate this proposal from consideration.64 Also, we can see that those with preferences other than Benthamite have some incentive to form a coalition to try to eliminate the prereform system. (They prefer any of the reform proposals to no reform at all.)

63 When handling large amounts of data, it can be convenient to approximate the Rawlsian function. Even though the Rawlsian function is the limit as δ → −∞, it turns out that δ = −10 is usually a close enough approximation. This approximation applied to the data in this example is typically within 0.00006 percent of the exact number.


In a sense this exercise only begins to suggest how social welfare functions may be useful. A much more powerful use is in the design of the alternatives. We picked specific proposals rather arbitrarily: Why do we consider a full-state-financing proposal only at the $4800 expenditure level, for example, when some other level might rank much higher by one or more of the social welfare functions? It can be shown that the best full-state-financing plan by the Benthamite rule (indeed, any of the social welfare rules) is to set the expenditure level at $4050, with a statewide property tax rate of 1.5 percent, U_L = 112,367, U_H = 385,964, and W_B = 498,332.65 With computer simulation one can identify the optimal financing plans of each type of reform according to various social welfare functions. Simulation with actual data can be a way to begin to clarify the most promising alternatives for serious policy consideration.66

64 Recall that neutrality and educational equality are social values in addition to the social welfare criteria.
65 For the full state plan, the sum of taxes from one household in each district must equal twice the chosen expenditure level (in order for the treasury to break even):

120,000τ + 420,000τ = 2E
τ = E/270,000

The household in the low-wealth district therefore always pays (4/9)E in taxes (= 120,000E/270,000), and the high-wealth household pays (14/9)E. The best Benthamite plan requires choosing E to maximize U_L plus U_H:

Maximize (120,000 − 4E/9)^0.985 E^0.015 + (420,000 − 14E/9)^0.985 E^0.015

The solution to this is E = $4050; in fact, it is the level for a statewide plan preferred by each district independently and would therefore be preferred if any of the social welfare functions were used.
66 Inman, “Optimal Fiscal Reform of Metropolitan Schools,” does precisely that. Similarly, Stern, “On the Specification of Models of Optimum Income Taxation,” undertakes an exercise to determine the optimal income tax rates.
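The optimization in footnote 65 is easy to confirm by brute force; the sketch below (illustrative) searches candidate expenditure levels directly.

    # Verify footnote 65: the Benthamite-optimal full-state plan. A low-wealth
    # household pays (4/9)E in state taxes and a high-wealth household (14/9)E.
    def benthamite(e):
        u_low = (120_000 - 4 * e / 9) ** 0.985 * e ** 0.015
        u_high = (420_000 - 14 * e / 9) ** 0.985 * e ** 0.015
        return u_low + u_high

    best = max(range(1, 10_000), key=benthamite)
    print(best, round(benthamite(best)))   # 4050, with W_B of about 498,332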

CHAPTER SIX. THE COMPENSATION PRINCIPLE OF BENEFIT-COST REASONING: BENEFIT MEASURES AND MARKET DEMANDS

ONE OF THE MOST important analytic tasks is to address questions of relative efficiency. For example, it was primarily analysts who argued for the deregulation of passenger air service because it was expected to increase efficiency. The analysts knew that some individuals would gain from the change and others would lose, but the analytic community was nevertheless virtually unanimous in advocating deregulation as an efficient change. The force of the analysts’ arguments was largely responsible for the Airline Deregulation Act of 1978, which provided for the phasing out of the Civil Aeronautics Board by 1985 and return of the power to choose routes and set fares to the individual airlines. In fact, similar arguments were put forth with a high degree of consensus by analysts of the communications, energy, and other industries. In the period from 1977 to 1988, these led to a reduction in the portion of the U.S. economy that is fully regulated from 17 percent to 6.6 percent and resulted in efficiency improvements with estimated value of $36 to $46 billion annually.1 While these examples happen to be of reduced government intervention, there are many other examples where relative efficiency is enhanced by an active government role.

We consider in this chapter the shared concept of relative efficiency that led to this unusual degree of agreement. We introduce the fundamental test of relative efficiency known as the compensation principle. Essentially, the principle involves considering whether it is possible for the gainers from a change to compensate the losers and still come out ahead. This principle not only helped lead to the substantial analytic agreement in the above example, but is the foundation for the important analytic technique known as benefit-cost analysis, which is used extensively throughout government. It was first required by the Flood Control Act of 1936 for use in estimating the economic value of proposed federal water resource projects such as dams, and its use has since spread to all types of expenditure and regulatory decisions.

1 See C. Winston, “Economic Deregulation: Days of Reckoning for Microeconomists,” Journal of Economic Literature, 31, No. 3, September 1993, pp. 1263–1289.


For example, many analysts argue that regulatory standards such as those limiting water pollution or the use of hazardous substances were not originally designed with sufficient attention to relative efficiency. President Reagan issued an executive order directing regulatory agencies to use benefit-cost analysis when making new regulations and reviewing old ones. As his Economic Report of the President explains2: “The motive for incorporating benefit-cost analysis into the regulatory decision-making process is to achieve a more efficient allocation of government resources by subjecting the public sector to the same type of efficiency tests used in the private sector” (emphasis added). Indeed, we will introduce the compensation principle in a way that reinforces the analogy between the efficiency-seeking of a market and that of benefit-cost analysis.

But what does it really mean when we say that one allocation is more efficient than another or that an allocative change increases efficiency? Recall the introductory discussion of efficiency in Chapter 3. Efficiency or Pareto optimality is considered to be a neutral standard in terms of outcome equity implications, in the sense that there are efficient allocations characterized by virtually any relative distribution of well-being among individuals. However, this is an absolute concept rather than a relative one: It is either possible or not possible to make someone better off without making another person worse off. To determine whether one allocation is more efficient than another, we need a standard of relative efficiency. Most allocative changes caused by public policy involve situations in which some people are made better off and others are made worse off. Standards of relative efficiency used to decide if a change is an improvement in efficiency make interpersonal comparisons in a specific way and therefore may be controversial on equity grounds. In this chapter we explain the fundamental test of relative efficiency—the compensation principle—in a way that makes its equity implications explicit.

There is a second theme to this chapter, in addition to explaining the compensation principle. To this point the theory that we have been using to understand policy consequences consists primarily of principles of individual choice. But the data most commonly available (or most readily attainable) for policy analyses are usually market statistics: information about aggregations of individual choices rather than the individual decisions themselves. For example, we are more likely to have estimates of a market demand curve than of all the individual demand curves that shape it. To understand the inferences that can be made from market observations, one often has to rely upon additional logical bridges to connect individual decisions with market observations. Good use of these logical bridges requires skill in model specification. We consider this problem in the context of applying the compensation principle: How and when can one use knowledge of market demand curves to make inferences about relative efficiency?

2 Economic Report of the President, February 1982 (Washington, D.C.: U.S. Government Printing Office, 1982), p. 137.


We begin by introducing the compensation principle. Then we illustrate its application in some simple settings that permit focus on the “demand” or “benefit” side (Chapter 9 introduces and focuses on cost-side issues). We define a specific benefit-cost component known as consumer surplus, and demonstrate how measurements of changes in consumer surplus are used in benefit-cost analysis. We consider whether or not the market demand curve contains the information we seek for benefit-cost analysis in three different policy settings: taxation, gasoline rationing, and consumer protection legislation. In the supplemental section, we consider three variants of the consumer surplus measure and the difficulty of applying them. We discuss their application through survey methods, with special attention to the method known as contingent valuation and its role in environmental issues. An appendix to the chapter goes over the mathematical relations between these measures and utility and demand functions, including concepts used in the dual theory of consumer choice.

The Compensation Principle of Relative Efficiency

The Purpose of a Relative Efficiency Standard

As we know, public policy alternatives will cause changes in the welfare of many people. Furthermore, it is virtually inevitable that every policy change will improve the lot of some but worsen the lot of others. As we recognize that the decisions are made through a political process, the problem we address is whether analytic input to the process can provide a useful judgment about the relative efficiency of each alternative.

In Chapter 3, we began discussing the analytic value of having a yardstick of some type. Neither the criterion of Pareto superiority nor that of Pareto optimality helps us to resolve whether the gains from a change might somehow be sufficient to offset the losses that it causes.3 The criterion of Pareto superiority applies only to situations in which some gain but no one loses or no one gains and some lose. As a practical matter, this does not characterize the effects of actual policy changes. As an ethical matter, there is no compelling justification for trying to restrict policy changes to Pareto-superior situations. Aside from the practical impossibility of doing so, equity considerations could lead to a social preference for making some better off at the expense of others—as when, for example, the status quo is a consequence of unfair discrimination in the past.

Can we not simply rely upon Pareto optimality as the efficiency standard? If it were easy to attain and remain at a Pareto-optimal allocation, we might not have to be so concerned about our progress (or lack thereof) toward one. However, in complex and dynamic economies, the best that we can do is to strive toward this goal. Many different policy changes are considered each year by many different layers of governments, and each of them can bear in important ways on the populations affected. Should a new highway be built in a certain locality? With limited funds available for new equipment at a public hospital, what equipment would be most valuable to buy? Is it a good idea to require airlines to reduce noise levels near airports?

3 Pareto superiority is an allocative change that makes at least one person better off with no one worse off, and Pareto optimality is an allocation from which it is impossible to make anyone better off except by making another worse off.


If we are fortunate enough to be able to reduce taxes, which taxes would it be the most beneficial to reduce? The people making these decisions try to resolve each by assessing the pros and cons of the different alternatives. The purpose of the relative efficiency standard is to offer a systematic way of measuring the pros against the cons, or the benefits compared to the costs. The value of a consistent methodology is that it allows any particular application to be scrutinized for accuracy by any professional analyst. Furthermore, repeated use of the same measure helps users of analyses to understand the strengths and weaknesses of the measure. Because the measure will inevitably compare the gains to some against the losses to others, it is like using a particular social welfare function, and not everyone will agree that decisions should be made based upon it. In fact, our recommendation for its use is to treat it as an important but not sufficient indicator of an alternative’s worthiness.

Two actual examples from the late 1990s illustrate the uses to which this standard is put.4 Many federal agencies seek to make rules and regulations that they believe will improve the well-being of the citizenry. As we have seen, these agencies are required by Executive Order of the President to quantify benefits and costs of their proposed rules.5 The Department of Energy (DOE), working to reduce fossil fuel consumption, imposed a ruling “Energy Standards for Refrigerators and Freezers.” For each of eight classes of refrigerators (e.g., those with automatic defrost and top-mount freezers), DOE calculated the benefits and costs for at least twelve alternative performance standards in order to put forth the one with the greatest net benefits for each class. Based on its analysis, the standards implemented cost $3.44 billion per year, but generate benefits of $7.62 billion per year. The benefit estimates are derived from an analysis that begins by calculating physical benefits, such as the degree to which NOx and CO2 air pollution emissions will be reduced as a result of the standards, and then putting monetary values on those reductions based on studies of the public’s willingness to pay for them.

The second example is from the Health and Human Services Department (HHS). In April 1999, the Mammography Quality Standards Act became effective. This act mandates various quality improvements in mammography screening procedures for detecting breast cancer. As part of the analysis done to decide if such an act was beneficial, the Food and Drug Administration (FDA) conducted a benefit-cost analysis of the proposed standards. It estimated that the benefits of these higher standards were worth $182–$263 million per year, whereas the costs were only $38 million per year. The benefits come primarily from increased detection of breast cancers at an early stage, when treatment is most likely to be successful. Some of the benefits are that lives will be saved, and the monetary value of saving each of these “statistical” lives was estimated (based on other studies) at $4.8 million.6

4 The examples are discussed in a 1998 report from the Office of Management and Budget entitled “Report to Congress on the Costs and Benefits of Federal Regulation.”
5 President Clinton continued this practice, mandating agencies to conduct benefit-cost analyses to the best of their ability and to the extent permitted by law. See his Executive Order 12866, “Regulatory Planning and Review.”
6 “Statistical” lives are saved when there is an increase in the probability of saving someone in a large population, even though no one can identify the specific individual. For example, if a new treatment applied to a population of 1000 reduces the probability of death from a specific disease from 0.07 to 0.03, then we would expect only thirty rather than seventy people to die from the disease. The treatment is said to save forty statistical lives. The value of saving a statistical life is not necessarily the same as the value of saving particular identified individuals. For more information on valuing life and limb, see the review article by W. K. Viscusi, “The Value of Risks to Life and Health,” Journal of Economic Literature, 31, No. 4, December 1993, pp. 1912–1946.


While the FDA assumed that the new standards would result in a 5 percent quality improvement, it noted that only a 2 percent improvement was needed for the benefits to exceed the costs. In these cases the claim is not that the new standards achieve an “optimal” allocation of resources, but that they achieve a more efficient allocation. Many questions can be raised about the examples. How does anyone know the public’s willingness to pay? If the benefits of the new mammography standards did not outweigh the costs, would this mean that they should not be imposed? Another federal agency that makes health regulations, the Occupational Health and Safety Administration, has been forbidden by a Supreme Court decision from assigning monetary values to human lives and suffering. How are such estimates derived, what are we to think of them, and why should some agencies be allowed to use them and others not? In order to begin to address any of these important issues (in this and later chapters), we have a more immediate task. Let us first understand carefully the principle that underlies our methods of measuring relative efficiency.

The Hicks-Kaldor Compensation Principle

The standard of relative efficiency that has come to be widely utilized in analytic work was first developed by the British economists John Hicks and Nicholas Kaldor.7 The underlying concept, called the compensation principle, builds on the notion of Pareto superiority. The principle is as follows: An allocative change increases efficiency if the gainers from the change are capable of compensating the losers and still coming out ahead. Each individual’s gain or loss is defined as the value of a hypothetical monetary compensation that would keep each individual (in his or her own judgment) indifferent to the change. It is critically important to understand that the compensations are not in fact made. If they were, the change would then indeed be Pareto-superior. Thus the compensation principle is a test for potential Pareto superiority.

Think of the gains from a change as its benefits and the losses from a change as its costs. Define net benefits as the sum of all benefits minus the sum of all costs. If a change increases efficiency, that is equivalent to stating that its net benefits are positive. The analytic effort to examine whether or not changes satisfy the compensation principle is called benefit-cost analysis. Often, benefit-cost analysis is used to compare mutually exclusive alternatives to identify the one yielding the greatest net benefits. Trying to maximize net benefits is thus the same as trying to maximize relative efficiency.

7 See J. R. Hicks, “The Valuation of the Social Income,” Economica, 7, May 1940, pp. 105–124, and N. Kaldor, “Welfare Propositions of Economics and Interpersonal Comparisons of Utility,” Economic Journal, 49, September 1939, pp. 549–551.


Before illustrating some of the details of benefit-cost reasoning, it is important to discuss first the controversy about whether or not the pursuit of relative efficiency is equitable.

Controversy over the Use of the Compensation Principle

Most of the controversy over the use of compensation tests concerns the equity judgments implicit in them. Some analysts would like to ignore equity altogether and use the compensation test as the decisive analytic test. One rationale is the hope that a separate set of policies can be designed to take account of and amend the overall distributional effects. However, this view may underestimate the process equity concerns with particular policy changes and overestimate the ability of policy makers to change the outcomes once they are “done.”

A second rationale for relying solely on the compensation test is the belief that concern for equity is simply unfounded: If a large number of policy changes are made in accordance with the compensation rule, then everyone will end up with actual gains. Even if correct, this argument has an implicit equity judgment that restricts redistributive concerns. (For example, perhaps some people should gradually be made worse off to allow others to become better off.) Putting that aside, let us consider the argument directly. Think of the payoff to an individual from a policy change as arising from the flip of a fair coin: $2 if it is heads and −$1 if it is tails. On any one flip the individual may lose, but in the long run the individual will be better off. However, this reasoning depends heavily on the debatable proposition that gains and losses from policy changes are distributed randomly. If they are not, what have the losers (gainers) done to deserve the losses (gains)? At least in the author’s judgment, the compensation test cannot substitute for explicit consideration of equity effects.

A third rationale focuses on the role of analysts in the policy process. This argument is that analysts are not needed to represent equity concerns because these concerns are appropriately represented by the other participants in the policy process (e.g., elected officials, lobbyists for interest groups); the need for analysts is to give voice to efficiency concerns that would otherwise be given no consideration at all. Acceptance of this rationale depends heavily on a view of the policy-making process in which the process (absent analytic input) is flawed in representing efficiency but not equity. Furthermore, the view implies that analytic input can give voice to important unvoiced efficiency concerns but cannot similarly give voice to important unvoiced equity concerns. In light of the importance of analytic contributions to the equity of issues such as school finance and criminal sentencing, this seems a strained argument at best.

Putting aside the arguments about whether to consider equity at all, some people argue that an implicit and unacceptable equity judgment is reflected in the money metric used. The principle implies that a $1 gain to one individual precisely offsets a $1 loss to any other. When the gainer is rich and the loser is poor, this would be a peculiar social welfare function to use as a rule. To clarify this, let us examine the compensation principle for small changes involving only two people.


Let us denote the hypothetical compensation as H_i for person i. For a small change, this may be expressed in utility terms:

H_i = ΔU_i/λ_i

where ΔU_i is the change in utils and λ_i is person i’s marginal utility from an additional dollar. That is, the numerator is the total change in utils, and the denominator is the util value per dollar. Therefore the whole expression is the dollar equivalent of the utility change. For example, an individual who values a gain at $10 and whose marginal utility of a dollar is 2 utils can be thought of as experiencing a gain of 20 utils:

$10 = 20 utils/(2 utils/$)

The sum of hypothetical compensations in this two-person case is

H_1 + H_2 = ΔU_1/λ_1 + ΔU_2/λ_2

The above expression, in turn, can be expressed as the difference between the prechange and postchange utilities. Denoting A as the prechange utility and B as the postchange utility, and

ΔU_i = U_iB − U_iA

we can rewrite the above as

H_1 + H_2 = [U_1B/λ_1 + U_2B/λ_2] − [U_1A/λ_1 + U_2A/λ_2]

The bracketed terms in the above expression may be thought of as levels of a social welfare function. The social welfare function is simply the sum of the utility of each person, weighted by the inverse of the marginal utility of money. Viewed this way, objections to the compensation principle on equity grounds can now be made clear. If there is declining marginal utility of wealth, a 1-util gain to a poor person will be judged socially less worthwhile than a 1-util gain to a rich person! In Figure 6-1, we illustrate this in terms of social indifference curves. Imagine that persons 1 and 2 have identical utility functions, so that all differences in their utility levels are due solely to wealth differences. We saw in Chapter 3 that the Benthamite W_B and Rawlsian W_R social welfare functions are usually considered the extremes of plausible social judgment. But the social indifference curve corresponding to indifference by the compensation principle W_C falls outside these extremes. It is bowed toward the origin. We explain this below.

The slope of any social indifference curve is Δu_2/Δu_1, where Δu_2 is the change in person 2’s utility necessary to hold social welfare constant when person 1’s utility changes by a small amount Δu_1. Indifference by the compensation principle requires that

Δu_2/λ_2 + Δu_1/λ_1 = 0

or

Δu_2/Δu_1 = −λ_2/λ_1
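A small numerical illustration (mine, not the text’s) makes the weighting concrete. Assume log utility, so that λ_i = 1/wealth_i: the compensation test then scores a $1 transfer from a poor person to a rich one as almost exactly a wash, even though the sum of utilities falls.

    import math

    poor, rich = 10_000, 1_000_000                   # assumed wealth levels
    dU_poor = math.log(poor - 1) - math.log(poor)    # utils lost by the poor
    dU_rich = math.log(rich + 1) - math.log(rich)    # utils gained by the rich

    # Hypothetical compensations H_i = dU_i/lambda_i, with lambda_i = 1/wealth_i.
    H = dU_poor * poor + dU_rich * rich
    print(round(H, 5))               # ~0: the compensation test is nearly indifferent
    print(dU_poor + dU_rich < 0)     # True: total utility nevertheless declines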



Figure 6-1. The compensation principle as a social welfare function is antiegalitarian.

At a point of equal utilities, λ_2 = λ_1 and the slope of the social indifference curve is −1. But as we move away from a point of equal utilities along the social indifference curve, λ declines for the gainer and rises for the loser. At point D, for example, λ_2 < λ_1 and the slope of the indifference curve is flatter or less negative (−λ_2/λ_1 > −1). At point E, where person 1 is relatively well off, λ_1 < λ_2 and the indifference curve is steeper or more negative (−λ_2/λ_1 < −1).

If Σ_i H_i > 0, then P_A X_B > P_A X_A.

If Σ_i H_i > 0, there is some allocation of the aggregate bundle of goods X_B among the consumers such that all consumers get at least as much utility as from the actual consumer allocations of X_A^i. Take the case in which X_B is hypothetically so allocated that each person is strictly better off. Now consider the ith person. Since we assume utility maximization, it must be true that

P_A X_B^i > P_A X_A^i

That is, it must be that the ith person could not afford the bundle of goods X_B^i before the change: Otherwise, the person could have increased utility by buying it (we assume each individual’s hypothetical bundle is available before the change, though at that time the economy could not make these bundles available to all individuals simultaneously). Since the same is true for every person,

P_A Σ_i X_B^i > P_A Σ_i X_A^i

But the sum of allocations to each person must add up to the appropriate aggregate allocation:

Σ_i X_B^i = X_B and Σ_i X_A^i = X_A

Therefore, we can substitute X_B and X_A in the above equation and get

P_A X_B > P_A X_A


Whatever one thinks about the desirability of using the compensation principle in policy analysis, it can be no more controversial than thinking that increases in national product are generally good.

In short summary, the Hicks-Kaldor compensation principle has come to be widely utilized as a guide for making this comparison: A policy change is considered an efficiency improvement if it is possible for the gainers to compensate the losers and still have something left over for themselves. Although there can be no strong ethical justification for it, such a principle is an attempt to capture a sense of which alternative allocation is considered to be the biggest social pie. It is a more refined measure of efficiency change than simply looking at the change in national product, and we know that many people are willing to use the latter as an efficiency indicator. Therefore, looked at as an indicator to be considered along with equity effects, it provides policy makers with useful information.

In actual policy use, the compensation principle may play an even more defensible role than can be argued on theoretical grounds alone. Its most common use is in the form of benefit-cost studies to compare alternative programs for approximately the same target populations of gainers and losers. We have already mentioned one example of this: the DOE study of alternative energy-efficiency standards for refrigerators. As another example, the Department of Labor sponsors or has sponsored different public employment programs aimed at youth (e.g., Jobs Corps, Supported Work, Neighborhood Youth Corps) and has commissioned or undertaken major benefit-cost studies of each. In those situations, the studies are more likely to influence decisions as to which of these programs will survive or grow rather than whether there will be any programs at all.

The relationship between policy process and policy analysis is very important and warrants serious reflection. To the extent that policy alternatives must be carefully designed and their consequences analyzed, there can be an important role for policy experts. This depends on whether the experts can perform their tasks with the confidence and support of the other participants in the policy process. If analytic work either cannot clarify what participants in the decision process wish to know or does not add to what they already know, it is of little use. An analyst attempting to use the compensation principle as a way of deciding among very divergent allocative alternatives may find the work ignored (like it or not). The problem of deciding what to do may be solved quite “rationally” by breaking the problem into pieces and giving policy analysts responsibility for the pieces with which they can deal least ambiguously. Since the policy significance of a compensation test becomes less clear as it is used to compare broader and more diverse allocations, perhaps the test should be relied upon less. I know of no instance, for example, in which a compensation test was actually used to decide between the mix of flood control projects and urban renewal. But analytic work of that type is used regularly to help sort out which programs seem better within each category. Other types of microeconomic policy analysis may influence the broader allocative decisions; but when they do so, it is likely that the analysts involved have been successful in addressing the concerns of the other participants in the policy process.


Measuring Benefits and Costs: Market Statistics and Consumer Surplus

There are a great many issues involved in actually measuring benefits and costs. In this section, we focus not on the possible problems but on the potential for success. At first, the information demands may seem overwhelming: How could any analyst find out the gains and losses experienced by each and every individual affected by a change? The answer is that, at least in some situations, essentially all the relevant information is contained in a few places where it can, in principle, be reasonably estimated through careful empirical investigation. In particular, much of the information is contained in market demand and supply curves. We do not attempt to review the statistical issues involved in making reliable estimates of these curves. We do, however, emphasize that in many circumstances this hard task is at least feasible when the much harder task of obtaining similar information for each individual market participant is not. Our examples abstract from the statistical estimation problems by assuming that we have exact information about the demand curves, so that we can focus on the way benefits and costs are measured from them.

We illustrate the relevant measurements using a simple model of an excise tax. The value of benefit-cost analysis will depend on the accuracy of models relied upon for measurements. As we have seen, models do not have to be perfect to provide valuable information. Furthermore, models can handle considerably more complexity than that illustrated by this first benefit-cost analysis. Thus a task that may seem hopelessly insurmountable at first is not hopeless at all. Nevertheless, to conduct a good benefit-cost analysis in many situations requires considerable analytic ingenuity. In the following sections we explore several of the challenges that arise on the benefit side and some analytic approaches to them.11

Let us begin by going over the relationship between individual and market demand curves. Recall that the demand curve of an individual shows the quantity of a good that the individual would purchase at each possible price, holding the total budget and the prices of all other goods constant.12 We saw in Chapter 4 that this relation is a consequence of utility maximization, using diagrams similar to Figures 6-2a and b. Figure 6-2a shows the utility-maximizing quantities of milk and “all other goods” that an individual will purchase each year for several different budget constraints. These constraints are chosen to hold the total budget level and the price of “all other goods” constant (so that the vertical intercept is constant); the changes in the budget constraints represent different assumptions about the price of milk (the cheaper the milk, the greater the horizontal intercept). Points A, B, and C show the utility-maximizing choice associated with each budget constraint. At lower milk prices, the individual would buy more milk.13

11 We make no attempt to discuss comprehensively the nature of problems that may arise in doing benefit-cost analysis. Nevertheless, in the course of demonstrating the uses of microeconomics for public policy throughout the text, most of the major benefit-cost problems are discussed. This chapter highlights discussion of benefits, and the following chapter expands this to focus on the benefits of reducing uncertainty. Chapter 9 highlights discussion of costs. Chapters 8 and 19 emphasize the role of time in economics and the importance of discounting to make benefits and costs that occur at different times commensurate.
12 This is the definition of the ordinary demand curve. In a later section, we also define the compensated demand curve. The latter is a demand curve that also holds utility constant.


Figure 6-2. Deriving an individual demand curve: (a) Indifference curve representation. (b) Demand curve representation.


Figure 6-2b is constructed from the information in 6-2a. The vertical axis is the price of milk per gallon (the absolute value of the slope of the budget constraints in 6-2a). The horizontal axis remains the quantity of milk per year. Points A, B, and C in this figure are the prices (call them pA, pB, and pC) and milk quantities associated with points A, B, and C in Figure 6-2a. If we imagine identifying the utility-maximizing quantity of milk at every possible price, then something like the curve shown passing through points A, B, and C in Figure 6-2b would result.14 This curve is the individual’s demand curve.

It is common to explain this curve as representing the quantity demanded at each possible price. However, it is equally true that the height of the demand curve represents the dollar value of each incremental unit to the individual: the very most that he or she would be willing to pay to obtain it. Consider point B, for example, where price (the height) is $4 per gallon and the individual buys 40 gallons. We know by the tangency condition of Figure 6-2a that the individual’s MRS is precisely $4 at this point and that the MRS is by definition the largest amount of the other good (in this case, dollars) that the individual is willing to give up for the 40th gallon. Thus by construction, the height at each point on the demand curve shows the maximum willingness to pay in dollars for each successive unit of the good. In the normal case, the height diminishes as quantity increases: The value to the consumer of the fiftieth gallon is only $3.

In other words, an individual’s demand curve is also a marginal benefit curve, revealing the individual’s maximum willingness to pay for each unit. The demand interpretation focuses on quantity (the horizontal distance) as a function of price (the vertical distance). The marginal value interpretation focuses on marginal benefit (the vertical distance) as a function of quantity consumed (the horizontal distance). It is simply two different ways of looking at the same curve.

The marginal benefit interpretation, however, gives us an interesting way to measure the overall value of particular allocations to the consumer. Suppose we added the marginal benefit of each unit consumed. That is, when milk is $5 per gallon we add the marginal benefit from each of the thirty units consumed in order to get the total value to the consumer of these units. Geometrically, this is equivalent to calculating the area under the demand curve from the origin until Q = 30. We explain this below.

13 The downward slope is expected for all goods except Giffen goods. Recall the discussion in Chapter 4 of the income and substitution effects that help to explain the shape of an individual’s demand curve.
14 Actually, the smoothness of an individual’s demand curve depends on whether the particular good or service is of value only in discrete units (e.g., slacks, cameras) or a continuous dimension (e.g., gallons of gasoline, pounds of hamburger). A half of a camera is presumably of no value, while a half-pound of hamburger is of value. For discrete goods, the demand curve is a step function—looking like a series of steps descending from left to right—with the “rise” of the step generally declining with each successive unit of the good to indicate diminishing marginal value. If an individual purchases many units of a discrete good within a given time period, it is usually possible (and analytically convenient) to approximate the demand curve by a continuous function rather than the actual step function.


Figure 6-3. The sum of rectangles approximates the area under the demand curve or total consumer benefit.

In Figure 6-3, we show this idea by drawing in a series of rectangles with a common base. If the base is 1 gallon of milk, there would be thirty rectangles (only five are drawn), and we would sum their areas to get the total value to the consumer. However, our choice of gallons as a measure causes us to underestimate slightly. Within any single gallon, owing to diminishing marginal value, the value of the first drop to the consumer is higher than that of the second, the second higher than the third, and so on until the end of the gallon is reached. But each of our rectangles has a height equal to the value of only the last drop in the gallon, thus slightly underestimating the value of the preceding drops in that gallon.

To reduce this error, we could simply use a smaller measure as the base: quarts rather than gallons would give 120 rectangles with smaller errors, and ounces rather than quarts would give 3840 very-small-base rectangles with a further reduction in the degree of underestimation. Figure 6-3 illustrates this for the first gallon only, with the shaded area showing the reduced error from switching to a half-gallon base. As the number of rectangles (with correspondingly smaller bases) increases, the sum of their areas gets closer and closer to the exact area under the demand curve, and the size of the error approaches zero. We may then simplify: total value to an individual, or equivalently the individual's maximum willingness to pay, equals the area under the demand curve for the units consumed. This measure (as opposed to the sum of finite rectangles) is the same regardless of the units chosen to measure consumption.
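This convergence is easy to check numerically. Below is a short Python sketch, offered only as an illustration; it assumes the linear inverse demand p = 8 − 0.1q that is consistent with the chapter's running example (the consumer buys 30 gallons at a $5 price, the fortieth gallon is worth $4, and the exact area is $195). It sums rectangles whose height is the marginal value at the end of each unit, as in the text:

# Rectangle approximation of the area under a demand curve.
# Assumed inverse demand (consistent with the text's milk example):
# the marginal value of the q-th gallon is p = 8 - 0.1q, so at a $5 price
# the consumer buys 30 gallons and the exact area under the curve is
# the integral of (8 - 0.1q) from 0 to 30, i.e., 240 - 45 = 195.

def marginal_value(q):
    return 8.0 - 0.1 * q

def rectangle_sum(total_gallons, units_per_gallon):
    """Sum rectangles whose height is the marginal value at the *end* of
    each unit, as in the text; this slightly underestimates the true area."""
    n = int(total_gallons * units_per_gallon)  # number of rectangles
    base = 1.0 / units_per_gallon              # width of each rectangle
    return sum(marginal_value((i + 1) * base) * base for i in range(n))

for label, units in [("gallons", 1), ("quarts", 4), ("ounces", 128)]:
    approx = rectangle_sum(30, units)
    print(f"{label:>8}: sum = {approx:8.4f}, underestimate = {195 - approx:.4f}")

With a one-gallon base the sum is $193.50; with an ounce base it is within about a penny of the exact $195.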



Figure 6-4. The consumer surplus is total consumer benefit minus consumer cost (triangle AEB).

In Figure 6-4, we draw a linear demand curve for the pedagogical purpose of making the calculation of the area easy and to illustrate one new concept. The area under the demand curve for the 30 gallons can be calculated as the simple sum of the rectangle OABC and the triangle AEB. The rectangle has an area of $150 (= $5/gallon × 30 gallons), and the triangle $45 [= (1/2) × ($8 − $5)/gallon × 30 gallons]. Thus the total value to the consumer of the 30 gallons is $195.

This total value is not the net value or net benefit to the consumer, because it does not take account of what the consumer has to pay in order to obtain the goods. The consumer surplus is defined as the total amount the consumer is willing to pay minus the consumer cost. In this case, since the consumer pays $5 per gallon for the 30 gallons, or $150 (the rectangle OABC), the consumer surplus is $45 (the triangle AEB). The surplus arises because the consumer is willing to pay more than $5 per gallon for the intramarginal units of milk but obtains each for the $5 market price. Thus the area bounded by the horizontal price line, the vertical price axis, and the demand curve is usually the consumer surplus.15

Note also that 30 gallons is the precise quantity that maximizes the consumer's surplus when price is $5 per gallon. If the consumer chose only 29 gallons, where the height of the demand curve exceeds $5, surplus could be increased by the purchase of one more gallon. If the consumer chose 31 gallons, where the height of the demand curve is below $5, then the 31st gallon has reduced surplus (its marginal value is lower than its marginal cost), and the consumer could increase surplus by purchasing one unit less. Thus the surplus-maximizing choice is also the utility-maximizing choice. This suggests more generally that maximizing net benefits is very much like the kind of maximization that economic agents try to achieve in a market setting.

15 Examples in later sections and chapters will show exceptions, typically owing to nonuniform prices or nonprice rationing.


Recall that our primary motivation in this section is to understand how benefits and costs may be calculated or known in the aggregate. This requires us to move from one individual's demand curve to the entire market demand curve. We focus here on the benefit side, and on knowledge about the market demand curve as a particular source of information. A market demand curve, by definition, shows the sum of all individual quantities demanded at any given price. Suppose we have a very simple two-person linear demand economy:

Q1 = 80 − 10p
Q2 = 160 − 20p

The market demand curve is then the sum of the individual demands:

Qm = 240 − 30p

All three demand curves are shown in Figure 6-5a. This is obviously a special case used for illustrative purposes; its generality will be considered shortly. If the market price is p = 5, then Q1 = 30, Q2 = 60, and Qm = 90. The consumer surplus from this good for each person, the area enclosed by the individual demand curve, the price axis, and the market price line, is

CS1 = (1/2)(8 − 5)(30) = $45
CS2 = (1/2)(8 − 5)(60) = $90

Thus the actual total consumer surplus from this good (denoted CS) is $135 (= 45 + 90). Now let us see whether the consumer surplus under the market demand curve (denoted CSm) does in fact equal $135:

CSm = (1/2)(8 − 5)(90) = $135

This illustrates the important result that the consumer surplus under the market demand curve, at least in some circumstances, equals the sum of the individual consumer surpluses.

Now suppose we consider this policy change: the imposition of a $1 excise tax on each unit of Q purchased. Then the price that consumers face would equal $6, and the quantity bought in the aggregate would equal 60.16 This is illustrated in Figure 6-5b.

16 For simplicity we are assuming that the supply of the good is infinitely elastic. Explicit discussion of the relevance of supply conditions to the compensation principle is deferred until Chapter 9. Explicit discussion of tax incidence is in Chapter 12.
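As a quick numerical check of the aggregation result above, the following Python sketch (an illustration under the linearity assumptions of this example, not a general proof) computes the three triangular surpluses directly:

# Verify that consumer surplus under the market demand curve equals the
# sum of the individual surpluses: Q1 = 80 - 10p, Q2 = 160 - 20p,
# Qm = 240 - 30p, market price p = 5. All three curves share the $8
# choke price, so each surplus is a simple triangle.

def linear_cs(intercept, slope, p):
    """Consumer surplus (1/2)(choke price - p)(quantity) for Q = intercept - slope*p."""
    q = intercept - slope * p
    choke = intercept / slope
    return 0.5 * (choke - p) * q

p = 5
cs1 = linear_cs(80, 10, p)     # (1/2)(8 - 5)(30) = 45
cs2 = linear_cs(160, 20, p)    # (1/2)(8 - 5)(60) = 90
csm = linear_cs(240, 30, p)    # (1/2)(8 - 5)(90) = 135
print(cs1, cs2, csm, cs1 + cs2 == csm)   # 45.0 90.0 135.0 True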


Figure 6-5. The benefits under the market demand curve are the sum of individual consumer benefits: (a) Individual and market consumer surpluses [CS1, CS2, CSm]. (b) The deadweight loss from a $1 excise tax.


How does this policy change fare by the compensation principle? To answer that question correctly, one must take account of all the changes that arise. That is, we must add up the change in each individual's consumer surplus across all markets. However, many of these changes will cancel one another out, and under certain conditions everything we need to know is contained in the information about this one market. In this market, the new consumer surplus under the market demand curve (CS′m) is

CS′m = (1/2)(8 − 6)(60) = $60

Thus the aggregate loss in consumer surplus in this market is $75 (= 135 − 60). It is hardly surprising that these consumers are losers, but are there no gainers? There are two more effects to consider: What happens to the tax receipts, and what happens to the real resources that were used in this industry when Qm = 90 but are not used now that Q′m is only 60?

We can see from Figure 6-5b that $60 in taxes is collected ($1 for each of the 60 units sold). Let us assume that the tax receipts are used entirely to provide cash to the needy and may be viewed as a pure transfer: a redistribution of wealth from one set of people to another without any effect on the efficiency of real resource use. The idea is that this is just like taking dollar bills out of one person's pocket (the taxpayer's) and stuffing them into another's. By itself the pure transfer is assumed to have no significant effect on the efficiency of consumption or production; all real resources are still being used as effectively as before the transfer.17 The $60 paid in taxes by the consumers in this market is given to some other people; the taxpayers would require $60 in compensation to be indifferent, and the recipients of the tax receipts would be indifferent if they received a compensating reduction of $60. Thus the sum of compensations from this effect is zero: $60 of the $75 loss in consumer surplus in this market is offset by the $60 gain to others from the tax receipts.

Now what about the remaining $15 loss? In Figure 6-5b, it is seen as the shaded triangle. This part of the loss in consumer surplus is referred to as deadweight loss; the idea is that it is a loss for which there are no offsetting gains. The deadweight loss from a tax, sometimes referred to as its excess burden, can be defined as the amount by which the loss the tax causes to others exceeds the tax revenues received by the government.

Under special assumptions, this is the end of the story. The resources that were used to produce units 61 to 90 have been released from this industry and are presumed to be in use elsewhere (e.g., for other dairy products besides milk). The returns to the owners of the resources are assumed to be the same (their opportunity costs) in both cases, so no change arises in their budget constraints. Furthermore, the new products made with these resources generate no net change in aggregate consumer surplus; the assumption is that the released resources are spread evenly among the many other production activities. Then the change in quantity of each good is small and at the margin of its market, where demand equals price, so the marginal change in the area between the demand curve and the price is negligible in each of these other industries.

17 In actuality, programs that provide resources to the needy may improve their productivity through better health or education and to this extent would not be pure transfers. More generally, a tax instituted to enable a specific reallocation of real resources (e.g., to build a public highway that would not otherwise exist) would not be a pure transfer.
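Given these assumptions, the accounting is simple enough to reproduce in a few lines of Python (a restatement of the example's numbers, not new analysis):

# Decompose the loss in consumer surplus from the $1 excise tax into the
# pure transfer (tax receipts) and the deadweight loss, using the market
# demand Qm = 240 - 30p with the consumer price rising from 5 to 6.

def qm(p):
    return 240 - 30 * p

p0, p1 = 5, 6
q0, q1 = qm(p0), qm(p1)                                   # 90, 60
cs_loss = (p1 - p0) * q1 + 0.5 * (p1 - p0) * (q0 - q1)    # 60 + 15 = 75
tax_receipts = (p1 - p0) * q1                             # $1 on each of 60 units
deadweight = cs_loss - tax_receipts                       # the shaded triangle
print(cs_loss, tax_receipts, deadweight)                  # 75.0 60 15.0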


Thus, under these special assumptions, we are able to calculate that the change in aggregate consumer surplus caused by the $1 excise tax is −$15. The remarkable aspect of this result is the parsimony of the calculation: the only pieces of information used are the price, the tax, and the market demand equation. Obviously, one is not required to make the assumptions that lead to this maximum economy in modeling. Indeed, it is completely irresponsible to do so if the analyst thinks an alternative model might be better. We will see in later chapters, for example, that the more common practice is to substitute an estimated supply curve for the assumption here that supply is at constant cost; this is easily incorporated into the compensation test through the concept of the producers' surplus. However, not all of the assumptions made above are so easily replaceable, and their accuracy depends on the context studied. As always, the true art and skill of microeconomic policy analysis lies in constructing a model that captures the particulars of the phenomenon being studied accurately enough to improve decision-making. One should by no means underestimate the difficulty of carrying out an analysis that pins the number down within a range relevant to decision-making. The point of this parsimonious example is simply to dramatize that what might at first seem a totally hopeless quest (discovering all the necessary individual compensations) may be a useful endeavor after all.

To reinforce the preceding observations, let us point out an insight about taxation that follows directly from the illustration. One factor that determines the amount of deadweight loss is the elasticity of the demand curve: the more elastic it is, the greater the deadweight loss. Therefore, on efficiency grounds, it may be preferable to tax goods with relatively inelastic demands. On the other hand, goods characterized by inelastic demands tend to be necessities. Since lower-income families spend greater portions of their budgets on necessities, such a tax is likely to be regressive in its incidence. Thus there can be a real tension between efficiency and equity in deciding how to raise taxes.

Now, putting aside other aspects of compensation testing, let us continue to focus on the information that can be extracted from the market demand curve. In the special case used earlier, we saw that the change in the sum of the individual consumer surpluses caused by an excise tax equals the change under the market demand curve. The more general principle is that this holds for any change in price levels applied uniformly to all. The easiest way to see this is to use the linear approximation to the change in consumer surplus ∆CSi experienced by one individual in response to a price increase ∆P:

∆CSi = ∆P·Qi1 + (1/2)∆P(Qi0 − Qi1)

where Qi0 is the initial quantity purchased and Qi1 is the quantity purchased after the price increase. The first term is the change in the consumer cost of the goods still purchased after the price increase, and the second is the deadweight loss. We illustrate this in Figure 6-6 using the first consumer from our earlier example:

$25 = ($6 − $5)(20) + (1/2)($6 − $5)(30 − 20)


Figure 6-6. The effect of a price increase on consumer 1.

That is, the first consumer pays $20 in taxes (the change in consumer cost for the purchased goods) and has a $5 deadweight loss (the loss in consumer surplus from units 21–30, no longer purchased owing to the higher price). If, in the more general case, we add up these amounts for all r consumers in the market,

∆CS = Σ(i=1 to r) [∆P·Qi1 + (1/2)∆P(Qi0 − Qi1)]
    = ∆P Σ(i=1 to r) Qi1 + (1/2)∆P Σ(i=1 to r) (Qi0 − Qi1)
    = ∆P·Qm1 + (1/2)∆P(Qm0 − Qm1)

These two terms are the areas under the market demand curve that are precisely analogous to the two relevant areas under each individual's demand curve. (In the earlier example they correspond to the $60 raised in taxes and the $15 deadweight loss.) Thus, for a uniform price change applied to all, the relevant areas under the market demand curve tell us exactly what we want to know about the total change in consumer surplus. If a policy change can be characterized by the uniform price change effect, as is the case with the imposition of an excise tax, the market demand curve can be used to identify the net direct change in consumer surplus.
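The identity is easy to verify numerically for the two consumers of the earlier example. This Python sketch applies the same linear-approximation formula to each individual demand and to their horizontal sum:

# The sum of individual consumer surplus losses equals the loss computed
# from the market demand curve: Q1 = 80 - 10p, Q2 = 160 - 20p, price 5 -> 6.

def loss(demand, p0, p1):
    """Linear approximation: extra cost on units still bought, plus the
    triangle of surplus lost on units no longer bought."""
    q0, q1 = demand(p0), demand(p1)
    return (p1 - p0) * q1 + 0.5 * (p1 - p0) * (q0 - q1)

def d1(p): return 80 - 10 * p
def d2(p): return 160 - 20 * p
def dm(p): return d1(p) + d2(p)

p0, p1 = 5, 6
print(loss(d1, p0, p1))    # 25.0 (consumer 1: $20 of taxes + $5 deadweight loss)
print(loss(d2, p0, p1))    # 50.0 (consumer 2: $40 of taxes + $10 deadweight loss)
print(loss(dm, p0, p1))    # 75.0 = 25 + 50, the market-level figure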


But many policy changes are not that simple. If a policy change causes a direct shift in individual demands (as opposed to movement along a demand curve) along with a price change, then the change in the area under the market demand curve does not reflect the information we seek. If there is inefficiency in exchange either before or after the change, the same lack of correspondence can arise.

For example, when the gasoline shortages of the 1970s occurred, it was common in many areas around the country to introduce various ration plans such as 10-gallon limits per fill-up, no Sunday sales, and odd–even rationing, under which a motorist could make purchases only every other day.18 At the same time, the price of gasoline was rising. The combination of effects served the purpose: short-run demand was reduced. From empirical observation of aggregate behavior during those periods, careful statistical work can produce reasonable estimates of the ordinary demand curve (over the observed range) and can even isolate the regulatory and price effects. But it is not possible, without disaggregated data, to make an unbiased estimate of the loss in consumer surplus.19

To see why this is so, let us use another simple two-person linear demand system, this one for the gasoline market:

Q1 = 10 − 2p
Q2 = 20 − 4p

and a market demand curve equal to the sum of the individual demands:

Qm = 30 − 6p

We assume that the initial price per gallon is $2 and that the government, in response to a shortage, mandates a price of $3 per gallon, in the form of a $1 tax and $2 for the gas seller.20 Additionally, we add the regulatory constraint that no person can buy more than 4 gallons per time period. Because we know the individual demand equations, we know what quantity each person wishes to buy at the higher price:

Q1 = 10 − 2(3) = 4
Q2 = 20 − 4(3) = 8

But because of the 4-gallon limit, person 2 can buy only 4 gallons. So the total quantity purchased in the market is 8 (= 4 + 4). Note that this total quantity is not a point on the "true" market demand curve (at p = 3, desired Qm = 12) because of the regulatory constraint.

Figure 6-7 shows the demand curves that are discoverable from market statistics. The two market observations we have are the initial allocation (p = 2, Qm = 18) and the final allocation (p = 3, Qm = 8).

18 Rationing plans are the subject of Chapter 14.

19 In this example the shortage will cause a loss in consumer surplus. The policy problem is to respond to the short-run shortage in a way that is fair and keeps losses at a minimum. The intention here is to illustrate only one aspect of evaluating a particular alternative.

20 We are deliberately glossing over the supply conditions; as noted before, their relevance and importance will be studied in later chapters.


Figure 6-7. The market demand curve with rationing does not reflect the change in consumer surplus.

One mistake to avoid is thinking that the change in quantity is due only to the change in price or, equivalently, that the ordinary demand curve passes through those points. If one has made that mistake and has estimated the change in consumer surplus by using the linear approximation to the erroneous demand curve (the dashed line), then

∆CSmL = 1(8) + (1/2)(1)(10) = 13

A good analyst will realize that the dashed-line demand curve violates the usual ceteris paribus assumptions: factors other than price are not being held constant; the regulatory constraints have changed as well. After a bit of digging, it is discovered that the ordinary demand curve, shown in Figure 6-7 as the solid line, has been estimated previously and correctly.21

The next analytic pitfall is to think that the change in consumer surplus can be estimated on the basis of this perfectly accurate market picture. That is, it appears plausible to assume that the loss in consumer surplus is the shaded area in Figure 6-7. The area seems to consist of the two usual components: the extra cost of the purchased units and the forgone surplus on the unpurchased ones. It differs from our original analysis of the excise tax by triangle ABC.21

21 It is common in policy analysis to review the existing literature in order to make use of prior empirical research of this type. However, many factors must be considered in evaluating the appropriateness and use of prior studies for a different context, and one must be careful. For example, the length of time used for a demand observation (e.g., a month, a year) should be commensurate, and the price sensitivity in one geographical area need not be comparable to that in another.


The extra triangle arises because of the regulatory constraint; without the constraint, four more units would be bought at a cost of $3 each (including the tax). To find the height at point A, we use the true market demand curve to find the price at which eight units would be bought:

8 = 30 − 6p
p = 3.667

Thus triangle ABC has the area

ABC = (1/2)(3.667 − 3.00)(12 − 8) = 1.33

The rest of the loss under the market curve is $15.22 Adding the two, we erroneously conclude that the loss in consumer surplus is $16.33.

The fallacy in the above analysis is that it assumes efficiency in exchange, whereas the regulatory constraint causes inefficiency in exchange. That is, the four-unit reduction owing to the regulatory constraint is not accomplished by taking the four units valued the least by consumers. At a price to consumers of $3.667 with no other constraints, Q1 = 2.67 and Q2 = 5.33 (and Qm = 8). Since at a price of $3, Q1 = 4 and Q2 = 8, the efficient reduction is to take 1.33 units from person 1 and 2.67 units from person 2. The extra loss in consumer surplus caused by this efficient reduction is the one that triangle ABC measures. But the actual change results in Q1 = 4 and Q2 = 4, and it has a bigger loss associated with it. None of this could be known on the basis of market statistics only.

To calculate the actual losses, one needs information about the individual consumers. The actual individual losses are shown as the shaded areas in Figures 6-8a and b. Person 1, whose behavior is unaffected by the regulatory constraint, has a loss similar to that in the simple excise tax example:

∆CS1 = 1(4) + (1/2)(1)(2) = 5

Person 2 loses more than in a simple excise tax case because the regulatory constraint is binding here; the extra loss is triangle EFG. The area of EFG is found analogously to the area of ABC in the above example; the height at point E must be 4, and thus the area of EFG is 2. Person 2's loss in consumer surplus owing to the tax alone is 10 [= (3 − 2)(8) + (1/2)(3 − 2)(12 − 8)]. Therefore,

∆CS2 = 12

and the actual loss in aggregate consumer surplus is $17, with $8 as tax receipts and $9 as deadweight loss. In this case one knows that the area under the market curve must underestimate the true loss. Do not be misled by the small size of the underestimate in this example; in actual situations the error could be quite substantial.

22 $15 = (3 − 2)(12) + (1/2)(3 − 2)(18 − 12).
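All three numbers ($13, $16.33, and $17) can be reproduced with a few lines of Python. The sketch below simply re-implements the text's arithmetic for this example; the person_loss helper is a construction for this illustration, not a general formula:

# Gasoline example: three estimates of the loss in consumer surplus when
# the price rises from $2 to $3 and each person may buy at most 4 gallons.
# Individual demands: Q1 = 10 - 2p, Q2 = 20 - 4p; market: Qm = 30 - 6p.

p0, p1, cap = 2.0, 3.0, 4.0

# (1) Naive estimate: treat the observed points (p=2, Q=18) and (p=3, Q=8)
# as if both lay on the ordinary market demand curve (the dashed line).
naive = (p1 - p0) * 8 + 0.5 * (p1 - p0) * (18 - 8)                # 13.0

# (2) True market curve plus the (wrong) assumption of efficient rationing:
# at Qm = 8 the market curve's height is p = (30 - 8)/6 = 3.667.
p_at_8 = (30 - 8) / 6
market_based = ((p1 - p0) * 12 + 0.5 * (p1 - p0) * (18 - 12)      # the $15
                + 0.5 * (p_at_8 - p1) * (12 - 8))                 # + triangle ABC

# (3) The truth, built from the individual demands (the ration binds person 2).
def person_loss(intercept, slope, p0, p1, cap):
    q0 = intercept - slope * p0        # quantity bought at the old price
    q_want = intercept - slope * p1    # quantity desired at the new price
    loss = (p1 - p0) * q_want + 0.5 * (p1 - p0) * (q0 - q_want)   # tax alone
    if q_want > cap:                   # extra triangle from the binding ration
        p_at_cap = (intercept - cap) / slope
        loss += 0.5 * (p_at_cap - p1) * (q_want - cap)
    return loss

true_loss = person_loss(10, 2, p0, p1, cap) + person_loss(20, 4, p0, p1, cap)
print(naive, round(market_based, 2), true_loss)                   # 13.0 16.33 17.0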


Figure 6-8. The actual individual losses in consumer surplus: (a) Loss for person 1. (b) Loss for person 2.


An Illustrative Application: Model Specification for Consumer Protection Legislation

One researcher attempted to estimate the value of consumer protection legislation for prescription drugs by the general methods we have been discussing. A brief review of the methodology is quite instructive.23

In 1962 the Kefauver-Harris amendments to the Food, Drug, and Cosmetics Act added (among other provisions) a proof-of-efficacy requirement to the existing legislation. The primary objective of the amendments was to reduce the harm and waste that consumers experienced because of poor knowledge about the actual effects of drugs. In congressional testimony, numerous examples of manufacturers' claims that could not be substantiated were offered. It was hoped that the new legislation would not only improve the use of drugs on the market but would also deter the entry of new drugs that did not offer any real improvements over existing ones.

Figure 6-9a illustrates, at a greatly simplified level, the conceptual framework used in the study. There are two solid-line demand curves on the diagram: the uninformed demand DU and the informed demand DI. Think of these demand curves as representing a physician's demands for a new drug for a particular patient. When uninformed, the physician-patient believes the drug will work miracles (perhaps like laetrile). Gradually the physician-patient learns (maybe by trial and error, or from other physicians, or from published reports) that the drug does not work as claimed. Thus, over time, as the physician-patient becomes informed, the demand curve shifts inward.24 For simplicity, we assume there are only two time periods: the uninformed period and the informed period.

Suppose we ask how much this consumer would value being perfectly informed from the start. In Figure 6-9a, the shaded area represents the value of the information. At price OC, the consumer buys OB units when uninformed. When informed, the consumer reduces the purchase by AB units to the quantity OA. The true benefits are always the area under the informed demand curve DI, although the consumer is not aware of this when uninformed. (That is, when uninformed, the consumer misperceives the benefits.) Perfect information from the start would prevent the purchase of AB units; these units have costs (AEFB) greater than benefits (AEGB) by the amount EFG. Thus, the maximum amount of money this consumer would pay to be informed from the start is EFG. To find the aggregate value of perfect information, we simply have to add up all these individual triangles.

23 For a more detailed review of the methodology, the reader should refer to the debate in the professional literature. See S. Peltzman, "An Evaluation of Consumer Protection Legislation: The 1962 Drug Amendments," Journal of Political Economy, 81, No. 5, September/October 1973, pp. 1049–1091; T. McGuire, R. Nelson, and T. Spavins, "A Comment," and S. Peltzman, "A Reply," Journal of Political Economy, 83, No. 3, June 1975, pp. 655–667. Another important aspect of this debate, not discussed here, is whether or not the legislation causes delay in the approval of new medicinal drugs. The evidence suggests that it does, although the earlier studies overestimated the delay. See D. Dranove and D. Meltzer, "Do Important Drugs Reach the Market Sooner?," RAND Journal of Economics, 25, No. 3, Autumn 1994, pp. 402–423.

24 There may be some situations in which the drug seems useful, so the demand curve does not necessarily disappear altogether.


Figure 6-9. Some consumers overestimate benefits of a drug (a) and some underestimate them (b).


Of course, the new drug regulation did not purport to make information perfect. The idea was that it would cause the initial period to be one of improved information, so the first-period demand curve would be the one shown as the dashed line DR.25 As drawn, the consumer would purchase OH in the initial period, overbuying by AH, and then buy OA when fully informed. Thus the effect of the regulation is to avoid the mistake of buying HB in the initial period, which has costs HJFB greater than benefits HKGB by the amount KJFG. Thus KJFG is the value to the consumer of the new drug regulations. The study goes on to estimate the market demand curves corresponding to DU, DR, and DI and report the area KJFG under the market curves as the aggregate value to consumers.

This last step is flawed and misestimates the information value by an unknown but perhaps extremely high factor. To see why, let us turn to Figure 6-9b. The demand curves in this diagram are those of a second physician-patient. They have been chosen to be identical to those in Figure 6-9a with one crucial twist: the uninformed and informed demand curves are switched around. Upon a moment's reflection, the existence of this behavior is just as plausible as that previously described. In this case the physician-patient is simply a cynic and skeptic. Having been misled by false claims in the past, the physician-patient initially tries new drugs only with great reluctance. However, the drug works better than the skeptic expected, and over time the demand shifts outward.

In Figure 6-9b, the consumer initially buys OA when uninformed and then OB when perfectly informed. The true benefits are always the area under the informed demand curve for the quantity actually consumed. If perfectly informed from the start, this consumer would not make the mistake of underbuying AB units. These units have benefits of AKFB but cost only AEFB, so the net benefit from consuming them is EKF, the shaded area on the diagram. The maximum amount this consumer would pay to be perfectly informed from the start is EKF. As before, the drug legislation is not expected to achieve perfection. With improved information represented by the dashed demand curve DR, the consumer initially buys OH and thus avoids underbuying quantity AH. The value of the drug legislation to this consumer is EKLJ.

If these are the only two consumers in the market, the aggregate value of the regulation is the sum of KJFG in Figure 6-9a and EKLJ in Figure 6-9b. But now let us consider how the market demand curve hides those benefits. The clearest example can be seen if we assume that OH, the initial quantity bought by each consumer after the regulation is introduced, halves the errors (i.e., AH = HB in both diagrams).

25 It could be argued, though probably not very plausibly, that the regulation actually worsens information. The idea would be that drug companies tell consumers what they really want to know and regulation prevents them from acting on the information. This argument does not affect the point of the discussion here: that information about shifts in the market demand curve does not include the information necessary to make a reasonable compensation test.


In that case, the initial market quantity bought (before the regulation) is

QmU = OB + OA

After the regulation is passed, the initial market quantity bought is

QmR = OH + OH

Since AH = HB, we can add and subtract these on the right-hand side:

QmR = OH + HB + OH − AH
    = OB + OA

That is, the initial market demand curve does not shift at all after the regulation is introduced! The reduction in overbuying by the first consumer exactly offsets the reduction in underbuying by the second. The researcher finds that the drug regulation has had no impact on consumer purchasing and measures the information value as zero, whereas it is obviously much greater. If we had 100 million consumers divided into two camps like those drawn, of 50 million skeptics and 50 million optimists, the market demand curve would still be unchanged by the regulation. That is why the flaw is so serious: savings in consumer surplus that should be adding up are instead being subtracted from one another.

The seriousness of this flaw is not an artifact of the specific example chosen. The initial market demand curve will shift to the left if, preregulation, there are more optimistic purchases; to the right if there are more skeptical nonpurchases; and not at all should the two happen to offset one another exactly. The point is that in all of these cases the shift in the market curve owing to the regulation shows only what is left after one group of benefits is subtracted from another. For that reason the market demand curves simply do not contain the information necessary to make this compensation test.
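A numerical sketch of this flaw may help. The demand intercepts below are hypothetical numbers chosen for this illustration (nothing in the study uses them); they are constructed so that the regulation halves each consumer's quantity error, exactly as in the figures, and so that each true marginal benefit curve has slope −1:

# One optimist whose uninformed demand overstates true benefits and one
# skeptic whose uninformed demand understates them. True (informed)
# marginal benefit curves: optimist MB = 8 - q, skeptic MB = 12 - q.
# The price is fixed at 2 throughout.

def surplus_loss(true_intercept, q_bought, price):
    """Surplus forgone relative to the informed optimum q* = intercept - price;
    for a linear MB curve with slope -1 this is a triangle, (1/2)(q - q*)**2."""
    q_star = true_intercept - price
    return 0.5 * (q_bought - q_star) ** 2

price = 2
# Quantities bought when uninformed and under the regulation, per consumer.
optimist = {"true_int": 8, "uninformed": 10, "regulated": 8}    # optimum is 6
skeptic = {"true_int": 12, "uninformed": 6, "regulated": 8}     # optimum is 10

gain = sum(surplus_loss(c["true_int"], c["uninformed"], price)
           - surplus_loss(c["true_int"], c["regulated"], price)
           for c in (optimist, skeptic))
shift = ((optimist["regulated"] + skeptic["regulated"])
         - (optimist["uninformed"] + skeptic["uninformed"]))
print("market quantity shift:", shift)     # 0: the market curve does not move
print("true gain from regulation:", gain)  # 12.0: the benefits are hidden

A market-level analysis would value the regulation at zero because the market quantity does not move, yet the true aggregate gain in this construction is 12.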

The alert reader may have noticed a second flaw: the use of the market curves here is subject to the same problem discussed in the gasoline rationing case. To illustrate it, let us put aside the first flaw by assuming that no consumers underbuy. A remaining problem is that not all consumers make mistakes of the same size. Consider a very simple example with only two consumers who make mistakes initially but make none after the regulation is introduced. Now contrast these two situations: (1) Consumers 1 and 2 each initially overbuy by AH in Figure 6-9a. (2) Consumer 1 initially overbuys by AB, and consumer 2 makes no mistakes at all. In both situations the amount of initial market overbuying is the same and the regulation prevents the errors, so the shift in the market curve owing to the regulation is the same. But the benefits of preventing those errors are not the same. The regulation is more valuable if it prevents the errors in the second situation, and it is least valuable in the first situation; use of the market curves is equivalent to assuming that the starting point is the first situation. In Figure 6-9a, think of situation 1 as each consumer overbuying AH and losing area EJK; preventing both of these errors is worth twice the area EJK. Situation 2 can be thought of as one consumer overbuying AB (twice AH). But the second unit overbought, HB, hurts more than the first. Thus the total loss in situation 2 is greater than twice the area EJK.26

Market demand curves are so constructed that, at any price, the most highly valued units will be purchased. If a good is priced too low, there will be overbuying, but only of the next most highly valued units.

26 As drawn, with all demand curves of the same slope, it is exactly four times the area EJK; the triangle EFG has base and height each twice those of EJK. The actual error could be greater or smaller, depending on the shapes of the demand curves.


This tends to minimize the aggregate error. But overbuying owing to consumer ignorance hardly guarantees that the least serious mistakes will be the ones made. Thus this second flaw also causes misestimation, to an unknown degree, of the consumer benefits from the regulation.

The main point of this extended illustration is, as always, to emphasize the importance of model specification. Careful thought about this approach to evaluating the drug legislation suggests that the available market statistics cannot be used to bound the uncertainty about the true benefits within any reasonable range. In other uses, as for taxation or estimating the harm from monopoly, the market demand curves may contain precisely the information sought. An important part of analytic skill is learning to understand when the appropriate linkages are present.

Before concluding this section, it may be useful to speculate about the interpretation of an accurate compensation test for drug regulation. That is, suppose we had no regulation and knew (which we do not) that passing the drug regulation would comfortably pass the compensation test. What significance, if any, might this knowledge have for an analyst?

The parties primarily affected by the legislation are the demanders and suppliers of drugs. The benefits to consumers are likely to be spread over a fairly large population: all the people who have occasion to use prescription drugs. This is not the kind of policy change in which one narrow segment of the population gains at the expense of another narrow segment. The losers, if there are any, might also be widely scattered consumers: the lovers of laetrile (and other drugs whose efficacy cannot be proved). On the assumption that the gains comfortably outweigh the losses, and with no special reason to favor laetrile lovers over other consumers, one might rely heavily on the test as a reason for recommending that the regulation be passed.

An additional possibility is that the incomes of those on the supply side will be changed. Suppose the makers of laetrile will be forced to close up shop. Usually, such supply activities are geographically scattered, and each is a small part of a local economy. If that is the case, the labor and capital resources may be quickly reemployed in other parts of the company or in other industries and suffer only small temporary losses (mitigated perhaps by unemployment insurance). Alternatively, all laetrile manufacturers might be geographically concentrated in one town whose entire economy is dependent upon laetrile manufacture. In that case it is more likely that some form of secondary legislation, for example, relocation assistance, would be passed to compensate the losers. These speculative comments are intended only to suggest that an accurate compensation test could, in this case, have great significance for an analyst. Of course, actual analytic significance would require more detailed information and reasoning than can be provided here. Furthermore, most analysts would not wish to be restricted to a set of only two alternatives.

Problems with Measuring Individual Benefits

In the last section, we focused on when market demand curves contain information relevant to measuring aggregate benefits. In this section, we return for a closer examination of the underpinning: the measure of individual benefit. The one we have been using, the area under


the ordinary demand curve, is actually an approximate measure. Now we will show two new measures, each exact but different from one another, because each uses a different initial reference point. We explain why the measure we have been using is an approximation of them. In most cases, the approximation is a good one because the two exact measures are themselves close and the approximation will be between them. However, we go on to discuss some circumstances in which a good approximation is difficult, if not impossible. The difficulties may arise in analyses of many policy areas, particularly in the environmental and health policy arenas.

Three Measures of Individual Welfare Change

We know that the area under an individual's ordinary demand curve reveals a good deal of information about the value of the goods and services consumed. However, along the ordinary demand curve, the utility level of the individual changes from point to point. For example, if the price of a good rises, the typical individual will buy less of it and end up with a lower utility level than initially. The compensation principle of benefit-cost analysis, however, requires that we find the compensation that makes the individual indifferent to the change (the price rise in this example). To do this, we introduce the concept of the individual's compensated demand curve: a special demand curve that shows the quantity the individual would purchase at any price if the budget level were continually adjusted to hold utility constant at a predetermined level.

Let us illustrate this graphically. In Figure 6-10a we have drawn one individual's ordinary demand curve for electricity, shown as AF. At the initial price p0, the consumer buys q0 and has utility level u0. After the rate increase, the price is p1 and the consumer buys q1 and has a lower utility level u1. Figure 6-10b is an indifference curve representation of the same consumer choices.

In Figure 6-10b let us identify the compensating variation: the size of the budget change under the new conditions (price = p1) that would restore the individual to the initial utility level (u0). Given the state of the world in which the consumer is at B (i.e., after the price change), how much extra income is needed to bring the consumer back to u0? That is, keeping prices constant at the slope of DB, let us imagine adding income to move the budget constraint out until it becomes just tangent to u0, which we have shown as the dashed line EC tangent to u0 at C. The amount of income required is shown on the vertical axis as DE: this is the compensating variation. Note that it is the amount of income associated with the income effect q1 − q1c of the price increase.

To show the compensating variation in Figure 6-10a, we must first construct a compensated demand curve, which is simply an ordinary demand curve with the income effect removed. For example, let us construct the compensated demand curve associated with the utility level u0 at point A. In Figure 6-10b, we have already located one other price-quantity combination that gives utility level u0. It is at point C, where price is p1 and quantity is q1c. We also show this as point C in Figure 6-10a. In fact, all the price-quantity combinations for this compensated demand curve can be "read" from Figure 6-10b: for each point on the u0 indifference curve, the quantity and


Figure 6-10. The compensated demand curve and compensating variation: (a) The compensated demand curve and compensating variation (shaded) for a price change from p0 to p1. (b) The indifference curve representation of the same consumer choices.


associated price of electricity (known from the slope of the curve) are points on the compensated demand curve. We have shown this as Dcu0 in Figure 6-10a. For a normal good (which we have illustrated), the compensated demand curve is steeper than the ordinary demand curve. If the price increases above p0, the uncompensated consumer will, of course, end up with less utility. Compensation requires that the consumer be given additional income, which, for a normal good, results in greater consumption than without the compensation. For prices below p0 we would have to take away income to keep the consumer at the u0 utility level, and therefore the consumer would buy less than without the (negative) compensation.

Now consider in Figure 6-10a the consumer surplus associated with the compensated demand curve. This consumer surplus can be interpreted as a compensating variation. For example, suppose we forbid the consumer to buy any electricity when its price is p1. The consumer then does not spend p1q1c but loses the consumer surplus p1GC. The amount of money that we would have to give this consumer to compensate for the rule change (i.e., to maintain utility at the u0 level in the new state of the world) is p1GC; thus this amount is the compensating variation.

Finally, what is the compensating variation for the price change from p0 to p1? Initially, the consumer surplus under the compensated demand curve is p0GA. When restored to the initial utility level after the price increase, the consumer surplus is only p1GC. Therefore, the amount of compensation necessary to restore the consumer to the initial utility level must be the loss in consumer surplus, the shaded area p0p1CA. This compensation plus the new consumer surplus equals the initial consumer surplus. Therefore, the compensating variation for a price change is the area between the initial and new price lines bounded by the compensated demand curve for the initial utility level and the price axis.

There is one last wrinkle that we must iron out. The compensating variation is measured under the assumption that the new, higher price is in effect. However, we could ask the compensation question in a slightly different way: What is the most the individual would pay to prevent the price increase? This equals the size of the budget change at the original price that gives the individual the same utility as after the price increase. More generally, we define the equivalent variation as the size of the budget change under the initial conditions (price = p0) that would result in the same utility level as after the actual change (u1).

Figures 6-11a and b illustrate the equivalent variation, using the same ordinary demand curve and indifference curves as previously, with the individual initially at point A and then choosing point B in response to the price increase. In Figure 6-11b the equivalent variation is shown as the distance DK. It is the amount of income that can be taken away if the price is kept at its initial level (and thus the change is prevented), leaving the individual no worse off than if the change were made. We find it by moving the budget constraint parallel to DA and down until it is just tangent to u1, shown at point J. Note that DK is the income associated with the income effect q1E − q0 of the price increase when the substitution effect is measured along the new rather than the original indifference curve. Recall that the compensating variation is the income associated with the income effect measured in the usual way.
This explains why we stated earlier that the empirical difference between the compensating and equivalent variations is due to the difference in the way income effects are measured.


Figure 6-11. The equivalent variation: (a) The equivalent variation (shaded) for a price change from p0 to p1. (b) The indifference curve representation of the same consumer choices.


In Figure 6-11a we construct a compensated demand curve as before, except that this time the curve is associated with utility level u1.27 It goes through point B on the ordinary demand curve, and price p0 is associated with quantity q1E (from point J in Figure 6-11b). It is steeper than the ordinary demand curve by the same reasoning as before. Its height at any point is the amount of income that, if given up for an additional unit of electricity, just allows the consumer to maintain u1.

The equivalent variation is the reduction in the initial budget level that reduces the consumer's utility from u0 to u1 when the price is at its original level. But note that the price change does not change the consumer's budget level: the consumer's budget at point B is the same as at point A. Thus, we can just as well ask what budget change from point B is necessary to leave the consumer with u1 utility if the price is reduced to p0. But as we have already seen, this is the change in consumer surplus under the compensated demand curve (area p0p1BJ). Thus, the equivalent variation for a price change is the area between the initial and new price lines bounded by the compensated demand curve for the final utility level and the price axis.

In Figure 6-12 we have drawn the ordinary demand curve and both of the compensated demand curves. It is clear from the diagram that, for a price increase, the equivalent variation is smaller in absolute size than the compensating variation. That is true for normal goods; for inferior goods the size relation is reversed. Note that we have not given any reasons for preferring either the compensating variation or the equivalent variation. Both are exact measures of the welfare change; they differ only in whether the initial or the final state of the world is used as the reference point. Sometimes it is argued that the compensating variation should be preferred because it is more plausible to assume that individuals have "rights" to the initial state of the world as a reference point rather than to their positions after some proposed change. However, this reasoning clearly favors the status quo distribution, and one need not accept it.

To actually calculate either of these measures requires knowledge of the (relevant) compensated demand curve. Since compensated demand curves are not observable in the actual, uncompensated world, it can be difficult (but not impossible) to estimate them. Fortunately, there is a third monetary measure that has two great virtues: it can be calculated directly from the ordinary demand curve, and it always has a value between the compensating and equivalent variations. It is simply the change in the consumer surplus under the ordinary demand curve (for short, the ordinary consumer surplus). In Figure 6-12 the ordinary consumer surplus at price p0 is the area of triangle p0FA, and at price p1 it is the area of triangle p1FB. The loss in ordinary consumer surplus caused by the price increase is the difference in the areas, or p0p1BA. It is more than the equivalent variation and less than the compensating variation.28

27 Note that a whole family of compensated demand curves is associated with each ordinary demand curve (one for each attainable utility level).

28 Note that if we now consider a price decrease from p1 to p0, the compensating variation for the price increase becomes the equivalent variation for the price decrease, and similarly the equivalent variation for the increase becomes the compensating variation for the decrease.
The change in consumer surplus is the same, and it remains the middle-size measure.



Figure 6-12. Comparing the change in ordinary consumer surplus, the compensating variation, and the equivalent variation.
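To make the comparison concrete, the following Python sketch computes all three measures for a price increase, assuming (purely for illustration; the chapter's figures assume no particular functional form) Cobb-Douglas preferences, for which the expenditure function has a simple closed form:

# Compare the three welfare measures for a price increase under the assumed
# utility u = x**a * y**(1-a), income m, price of y fixed at 1, and the
# price of x rising from p0 to p1.
from math import log

a, m, p0, p1 = 0.2, 100.0, 1.0, 1.5

def indirect_utility(p, m):
    # Utility-maximizing demands are x = a*m/p and y = (1-a)*m.
    return (a * m / p) ** a * ((1 - a) * m) ** (1 - a)

def expenditure(p, u):
    # Minimum spending needed to reach utility u when x's price is p.
    return u * (p / a) ** a * (1 / (1 - a)) ** (1 - a)

u0, u1 = indirect_utility(p0, m), indirect_utility(p1, m)
cv = expenditure(p1, u0) - m     # extra income needed at the new price
ev = m - expenditure(p0, u1)     # income loss equivalent at the old price
dcs = a * m * log(p1 / p0)       # area under the ordinary demand x = a*m/p
print(f"EV = {ev:.3f} < dCS = {dcs:.3f} < CV = {cv:.3f}")
# EV = 7.789 < dCS = 8.109 < CV = 8.447

The ordering matches the diagram for a price increase and a normal good: the change in ordinary consumer surplus lies between the two exact measures.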

In practice, the change in ordinary consumer surplus is probably used more frequently than either of the other two measures. Exactly how close together the different measures are depends upon the nature of the change. In an interesting article, Willig demonstrates that the measures are quite close except when the change involves goods that make up a large proportion of the consumer's budget or for which the income elasticity is unusually large.29 This is because it is the income effects that cause the differences in the measures; if the income effects were zero for a particular good, all three measures would be identical.

To sum up this section briefly, we have presented three measures that represent monetary equivalents of the effect of a policy change on one individual's welfare: the compensating variation, the equivalent variation, and the change in ordinary consumer surplus. Although they differ slightly from one another, each is an attempt to reveal the change in general purchasing power that would make the individual indifferent to the main change. In the technical appendix, some illustrative calculations are presented after clarifying (by introducing the theory of duality) some linkages between observable phenomena such as demand and the theory of utility-maximizing choice. However, we now turn to the final

29 R. D. Willig, “Consumer’s Surplus without Apology,” American Economic Review, 66, No. 4, September 1976, pp. 589–597.


section of this chapter to explore some actual situations in which the different measures give strikingly different results.

Empirical Evidence: Large Differences among the Measures

The above exposition focused on an economic change that is a relatively minor part of any one individual's well-being (the change in price of a common consumption good). Even so, it illustrated that there are two exact but different measures of the compensation necessary to hold the individual's utility constant, and that the difference is caused by whether the prechange or postchange position is used as the reference point. However, one can imagine changes that have a much more dramatic effect on an individual's well-being. In such cases, there is no reason to think that the alternative measures would necessarily yield similar monetary results.

Consider, for example, an individual whose life is threatened by disease and that person's right to a particular life-saving medical treatment. With no right to the treatment, the individual must pay to obtain it. The maximum willingness to pay is limited by his or her wealth, and for a poor individual this may be an unfortunately small amount. However, suppose the individual in this example had a right to the treatment and could be denied it only by selling the right to another. In this case, the minimum amount the individual is willing to accept to forgo the right might be infinite (i.e., no amount of money, no matter how large, will cause the individual to forgo the treatment).

The two differing amounts are a result of changing the reference point from which we measure. In the first case the poor individual does not own the right to treatment, and in the second case the individual has identical resources except that a (valuable) right to treatment has been added. The individual would have one (low) utility level without the treatment and another (high) utility level with it. These two utility levels are the same no matter which reference point we use. But our measure of the amount of money necessary to compensate the individual for moving from one level to the other depends crucially on which position we use as the starting point.30

In cases with large differences between the measures, one can try to decide which is the appropriate one to use by a careful reading of the law (who does, in fact, have the right?). However, there are a number of cases in which ownership of the right may not be clear, and there is no simple resolution of the matter. Some property, for example, is defined as common property and is not owned by anyone. Who, for example, owns the world's air, oceans, wild animals, or outer space? In these cases, there are often conflicts among humans about the uses of these natural resources. Should scarce wetlands be developed to provide more residential housing, or should they be preserved as wildlife habitat?

30 The difference in these two amounts is simply the difference between the compensating and equivalent variations explained in the prior section. Without the right, the willingness to pay (for the right) is the compensating variation. The willingness to accept (the loss of the right) is the equivalent variation. If the right to treatment is assigned to the individual, the same measures switch roles: the willingness to pay becomes the equivalent variation, and the willingness to accept becomes the compensating variation.


Should new factories be allowed to add to air pollution in an area, in order to provide jobs and more consumer goods, or is it more important to improve air quality? If one consideration in resolving these issues is the magnitude of benefits and costs, and there are large differences between the benefit measures, then resolution is made more difficult.

There is an interesting and important psychological aspect to "problem" cases such as these. To some extent, it can be difficult to tell whether or not large observed differences between the two measures are real. In many cases, a substantial part of the difference may be due to difficulties that individuals have in truly understanding the implications of their responses. These difficulties are less likely to arise in choices that individuals make frequently and whose consequences are clear, as with many ordinary goods and services purchased regularly in the marketplace. However, there are goods—such as the air quality in the above example—that individuals value but do not normally purchase directly. Analysts are often asked what, if anything, can be said about the value of these goods to the affected populations.

There are many clever ways that analysts use to discover how people value these nonmarket goods. For example, suppose one is interested in how residents value the quality of the air surrounding their homes. Analysts have estimated this by examining the selling prices of homes that are comparable except for the fact that they differ in the air quality around them.31 Other things being equal, we expect people to pay more for a home with "good" rather than "bad" surrounding air quality. Thus differences in home values can indirectly reveal the value of cleaner air. Similarly, we do not directly observe how much individuals are willing to pay for national parks, but we can study how much people pay in travel costs in order to get to the sites (and they must value the recreational benefit at least that much).32 Studies such as these, based on actual choices that people make, normally do not lead to unusually large differences between the benefit measures (e.g., the observed choices of a large sample of people are used statistically to infer ordinary price and income elasticities of demand, and compensated demand curves can then be approximated from them).

Nevertheless, there are some things that people value that are not revealed either directly or indirectly by observable market choices. For example, you may value the existence of Yellowstone National Park even if you have no intention of going there yourself. Similarly, you may value the existence of Brazilian rainforests or the preservation of the spotted owl, even if you have no intention of visiting, seeing, or personally using either.

31 I am oversimplifying for expositional clarity. The houses that are compared are typically dissimilar, and they are made comparable through statistical techniques. A great many factors cause one home to have a different value than another (e.g., size, quality of construction, neighborhood, quality of local public services such as schools), and it is a thorny statistical problem to control for all of these factors in order to isolate the effect of air quality alone. Nevertheless, there are numerous careful studies that provide estimates of this value. These estimates, while uncertain, do help us understand what a plausible range for the true value might be. For a discussion of this and related approaches, see K. Ward and J. Duffield, Natural Resource Damages: Law and Economics (New York: John Wiley & Sons, Inc., 1992), pp. 247–256.

32 For a review of some of these travel cost studies, see V. K. Smith and Y. Kaoru, "Signals or Noise? Explaining the Variation in Recreation Benefit Estimates," American Journal of Agricultural Economics, 72, 1990, pp. 419–433.


These values are sometimes referred to as "existence values" or "passive-use values." Understanding the magnitude of these existence values can bear on the policies adopted with respect to these natural resources. In order to get some idea of the magnitudes, economists have been developing procedures known as contingent valuation surveys to estimate them. The survey methodology involves asking people what they are willing to pay for various hypothetical government programs that would help preserve the resource. The methodology was used, for example, in a lawsuit by the State of Alaska to estimate that citizens across the nation valued the damage done to its coast by the Exxon Valdez oil spill at almost $3 billion.33

For a number of reasons, the validity of contingent valuation survey results has been a subject of intense debate. One simple argument against them is that, because no actual payments are required, those surveyed may not tell the truth (e.g., those liking the program may overreport willingness to pay, and those disliking the program may underreport). However, some experiments suggest that the survey answers may be surprisingly truthful.34 A second argument is the psychological one: the results themselves reveal serious inconsistencies in individual responses, making it difficult if not impossible to give them a reasonable interpretation. In particular, there are often implausibly large differences between the two welfare measures we have been discussing. The surveys typically ask about the willingness to pay (WTP) for an additional program and the willingness to accept (WTA) a reduction in a program. When the addition in the WTP measure is the same marginal unit as the reduction in the WTA measure, the two measures are the compensating and equivalent variations.

One striking example of this inconsistency is found in the closely related work of psychologists Kahneman and Tversky.35 They ask subjects, in two different ways as part of a controlled experiment, which of two programs they prefer. In their example, the lives of 600 people are threatened by a disease. One program saves 200 people but will not prevent the deaths of the other 400. The other program has a one-third chance of saving all 600 and a two-thirds chance of saving no one. When these programs are described in terms of lives saved, 72 percent of respondents favor the one certain to save 200. But when the same programs are described in terms of lives lost, 78 percent of respondents favor the program that might save all 600. The difference in responses is termed a framing effect, because it is due simply to the way the questions are asked. Because this is very similar to the framing difference between the equivalent and compensating variations—a willingness to pay for a gain of a good or service, as opposed to a willingness to accept a monetary amount for the loss of the same thing (assuming you start with it)—we might expect such framing effects to influence the results of contingent valuation studies.

33 The study is Richard Carson et al., “A Contingent Valuation Study of Lost Passive Use Values Resulting from the Exxon Valdez Oil Spill,” report to the Attorney General of the State of Alaska, prepared by Natural Resource Damage Assessment, Inc. (La Jolla, Calif.: 1992). In 1991, Exxon settled the suit out of court, agreeing to pay $1.15 billion. 34 See, for example, Peter Bohm, “Revealing Demand for an Actual Public Good,” Journal of Public Economics, 24, 1984, pp. 135–151. 35 Amos Tversky and Daniel Kahneman, “Rational Choice and the Framing of Decisions,” in D. Bell, H. Raiffa, and A. Tversky, eds., Decision-Making: Descriptive, Normative and Prescriptive Interactions (Cambridge: Cambridge University Press, 1988), pp. 167–192.


loss of the same thing (assuming you start with it)—we might expect such framing effects to influence the results of contingent valuation studies. In fact, this is confirmed by experimental research of economists. A study by Brookshire and Coursey is illustrative.36 They wanted to study the differences between the WTP and WTA measures under three different elicitation procedures: the field survey methodology used in contingent valuation studies, a modified field method with monetary incentives to respondents to reveal their preferences honestly, and a laboratory setting with similar monetary incentives to the second method but with repetition of up to five elicitation “rounds” and discussion among participants allowed.37 The difference between the first and second methods emphasizes the change in participant incentives, and the difference between the second and third methods emphasizes the role of learning from the opportunity to see the consequences and the chance to adapt behavior in response. The third method is also most like the purchase of ordinary goods in the marketplace, where consumers often can learn by trial and error. The good in the Brookshire and Coursey study was the number of trees in a small public park, with artist renditions of the alternate proposals. The range of trees considered was from a low of 150 to a high of 250, with 200 as the “base” from which participants were asked to consider changes. The results showed considerable stability in the WTP measures across the three methods: for example, the average WTP for an increase of 50 trees was $19.40, $15.40, and $12.92 for the first, second, and third methods, respectively. However, the WTA measure was less stable, changing greatly with the third method: for example, the average WTA for a decrease of 50 trees was $1734.40, $1735.00, and $95.52. Furthermore, while there is no reason why the WTP and WTA measures should be equal to each other in this experiment (WTP values the change from 200 to 250 trees, WTA from 150 to 200), there is also no economic reason to expect differences as large as those observed. Even if we were to interpret the results of the repetitive laboratory method as the “truth” (i.e., a WTP of $12.92 and a WTA of $95.52, still a difference that seems too great a rise in marginal value to be the result of normal preferences), clearly the value difference resulting from the contingent valuation survey methodology (WTP of $19.40 and WTA of $1734.40) is substantially due to psychological phenomena such as framing effects and not true preferences. The above should not be interpreted as implying that contingent valuation surveys are invalid. Indeed, the results of the one study reviewed above suggest that WTP measures derived from careful contingent valuation surveys may be reasonable indicators of value. The U.S. Department of Interior as well as other agencies make use of such studies. The conclusion of a blue-ribbon panel commissioned by the National Oceanic and Atmospheric Administration to review this methodology stated that contingent valuation studies “can produce estimates reliable enough to be the starting point of a judicial process of damage

36 David S. Brookshire and Don L. Coursey, “Measuring the Value of a Public Good: An Empirical Comparison of Elicitation Procedures,” American Economic Review, 77, No. 4, September 1987, pp. 554–566. 37 We shall study the problem of honest revelation of preferences for public goods later on in the text.


assessment, including lost passive-use values.”38 Nevertheless, the use of such studies does remain controversial.39 To summarize, in some cases we may observe large differences between the equivalent and compensating variations. Sometimes these differences can be quite real, particularly when the nature of the change being studied is itself of great economic importance to the affected individuals. However, there are also cases where large differences must be viewed skeptically. These cases typically arise not from market observations but precisely in those areas where the “good” being valued is not normally bought or sold. The existence values to people of common-property natural resources fit this description, and these values are often studied through the use of contingent valuation surveys. Because these surveys involve asking people about complex economic choices that are hypothetical and abstract, their choices may not be the same as those that would be revealed in actual decision-making situations. Experimental evidence confirms the difficulty of making abstract choices as well as large differences between the two measures caused in good part by this difficulty. While the use of these surveys remains controversial, they seem most reliable when estimating WTP rather than WTA.

Summary

In this chapter we have reviewed one of the most commonly utilized analytic principles in making evaluative judgments: the Hicks-Kaldor compensation criterion. The criterion provides the foundation for the analytic technique of benefit-cost analysis. It is a test of potential Pareto superiority; it seeks to discover whether the gainers from a policy change could compensate the losers and still have enough left over to come out ahead. Because the compensations are only hypothetical, it is important to think carefully about whether, why, and when one might rely on such a principle. It is used because it captures an important element of social reality and no better operational criterion has been found to replace it. The two relatively uncontroversial criteria of Pareto optimality and Pareto superiority generally do not characterize actual states of the economy either before or after proposed policy changes; this renders them of little use in making policy choices from among alternatives that make some people better off and others worse off. Nevertheless, there are many alternative allocations of this latter type, and there can be wide (if not unanimous) consensus that some of them are better because there is more product to go around. Viewed that way, analytic use of the compensation principle takes on more appeal. That is, we do not attempt to justify the principle as a rational decision rule that all reasonable people “should”

38 The report is published in the Federal Register for January 15, 1993. The panel was chaired by Nobel laureate economists Kenneth Arrow and Robert Solow. 39 A good general reference on contingent valuation methodology is Robert Mitchell and Richard Carson, Using Surveys to Value Public Goods: The Contingent Valuation Method (Washington, D.C.: Resources for the Future, 1989). The controversies and much of the literature are summarized in a symposium published in The Journal of Economic Perspectives, 8, No. 4, Fall 1994, pp. 3–64, with contributions by Paul R. Portney, W. Michael Hanemann, Peter A. Diamond, and Jerry A. Hausman.


follow. Consider it as an imperfect predictor of what informed social judgment on relative efficiency grounds would be. It does not purport to reflect equity judgments, and thus separate consideration of outcome and process equity issues would often be appropriate. For changes involving only ordinary marketable goods, those that can pass the compensation test are a subset of those that would increase national product valued at the initial prices. This example suggests that the compensation test is no more controversial an indicator than measures of national product (which have flaws but are widely used). This comparison does not have much meaning if one is considering radically different resource allocations (so that prices also would be radically different), and thus the compensation principle should be expected to be of more use in comparing “smaller” changes. The most common use in practice is in comparing policy alternatives under the control of one agency that are similar in their target gainers and losers.

To carry out a compensation test, one must be able to measure the benefits to the gainers and the costs to the losers. A common measure used in such assessments is the change in the ordinary consumer surplus, which can be calculated from knowledge of the ordinary demand curve and the consumer cost. In a supplementary section, we relate this common measure to two exact monetary measures, called the compensating variation and the equivalent variation. The two exact measures seek to identify the hypothetical budget change necessary to make an individual indifferent to a policy change. The two differ only in the choice of a reference point. In most cases these two measures are close to one another. However, neither of them can be calculated directly from an ordinary demand curve. Since the change in ordinary consumer surplus always has a value between the two exact measures, in practice it is the one most commonly used.

Another supplementary section discusses problematic cases in which there may be large differences between the two exact measures. One type of case is when the change is of great importance to an individual, as in the provision of expensive medical treatments. Another type sometimes arises when goods and services are not normally traded in the marketplace and therefore no demand curve can be directly observed. For many of these, analysts avoid the potential problem and indirectly derive the demand curve (and then in turn the relative efficiency measure) from other activities that are observable: examples are using home values to estimate the value of cleaner air and transportation costs to give a (lower-bound) estimate of the value of recreational facilities such as lakes and parks. However, there remain some goods and services for which neither direct nor indirect measures are available. The most prominent examples of these are the existence values of common-property natural resources such as the oceans or various wildlife habitats. For the latter, contingent valuation survey methods may be used. These surveys are controversial and subject to a variety of difficulties. One is that psychological framing effects (rather than true preferences) can cause large differences between the equivalent and compensating variations. Nevertheless, carefully done contingent valuation surveys, particularly when used to estimate willingness to pay, may provide useful estimates.
The mathematical relations among utility functions, demand functions, and the above measures of individual welfare can be somewhat involved. In an optional appendix we derive them for the case of a utility function called the Cobb-Douglas function, and provide


illustrative numerical calculations. We introduce the concepts used in the dual theory of consumer choice, the expenditure function and indirect utility function, and illustrate how welfare measures can be derived from them. In this chapter no attempt was made to survey all the issues involved in carrying out a compensation test through benefit-cost analysis. Instead, we focused on one important part of it: utilizing information available from market demand curves. We saw that, under certain conditions, the market demand curve can reveal virtually everything necessary to calculate the sum of compensations. Under other circumstances, knowledge of the market demand curves is essentially useless for those purposes. The most important point from these examples is the one that is emphasized throughout the text: The development of analytic skill depends heavily on understanding the implications of alternative model specifications. Linkages that are assumed to be universal in the context of purely private markets may be nonexistent in the particular policy setting being analyzed. In this chapter, for example, we showed that the link between individual consumer surplus and the area under the market demand curve is broken when consumers are misinformed and as a consequence make purchasing errors. But it is precisely in the areas in which consumer information is judged most seriously inadequate that public policy is likely to exist. The creators and users of policy analyses must be sensitive to the logical underpinnings of models in order to use the models and interpret them effectively.

Exercises

6-1 An Excise Tax

The diagram shows the demand (D) and supply (S) for passenger rail service between two cities. Initially the price per ticket is $4 and 1000 trips are made per week. An excise tax of $2 is placed on each ticket, which shifts the supply curve up to S′ and reduces the number of rail trips to 800.


a Identify the area on the diagram that represents the tax revenue collected.
b What does “consumer surplus” mean? Identify the area on the diagram that represents the consumer surplus received by rail passengers after the tax has been imposed.
c Identify the area on the diagram that represents the deadweight loss (or equivalently the “efficiency cost”) of the tax. Explain the meaning of this concept.

6-2

There are forty consumers in an economy who purchase a drug to relieve the pain of arthritis. They think that the only effective drug is Namebrand. However, the same drug can be bought by its chemical name acethistestamine, or ace for short. The drug costs $2 to produce no matter what it is called; any quantity demanded can be supplied at that price. The company producing Namebrand exploits consumer ignorance by charging $6 for each unit; that is, the consumers buy Namebrand at $6 per unit, not realizing that ace is a perfect substitute available for only $2. The aggregate demand curve of the 40 uninformed consumers is Q = 400 − 40P.
a What would be the value to consumers of knowing that ace and Namebrand are identical? (Answer: $960.)
b How much is the deadweight loss due to the consumers’ lack of perfect information? (Answer: $320.)
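Because the answers are supplied, a quick computational check may be helpful. The sketch below is an addition to the exercise, not part of the original; it simply mechanizes the rectangle-plus-triangle areas implied by the demand curve Q = 400 − 40P.

```python
# Check of Exercise 6-2: consumers buy on Q = 400 - 40P at the $6 Namebrand
# price although the identical drug (ace) is available at $2.
q = lambda p: 400 - 40 * p
p_high, p_low = 6.0, 2.0
q_high, q_low = q(p_high), q(p_low)         # 160 units at $6, 320 units at $2

# Value of information: savings of $4 on each of the 160 units already bought,
# plus the surplus triangle on the extra units bought once price falls to $2.
savings = (p_high - p_low) * q_high                    # $640
triangle = 0.5 * (p_high - p_low) * (q_low - q_high)   # $320
print("value of knowing:", savings + triangle)         # 960.0
print("deadweight loss :", triangle)                   # 320.0
```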

6-3

There are two and only two consumers, Smith and Jones, who buy a product of uncertain quality. When both are informed of the quality, they have the same demand curve:

P = 100 − Q/4

The market price is P = $50. Suppose they are uninformed and have the following uninformed demand curves:

Smith: P = 125 − Q/4 (overestimates value)
Jones: P = 80 − Q/4 (underestimates value)

a Calculate the loss in consumer surplus to Smith from not having accurate information. Make a similar calculation for Jones.
b Calculate the uninformed market demand curve. Also calculate the informed market demand curve.
c What is the loss in consumer surplus as measured by the deadweight loss triangle between the market informed and uninformed curves? (Answer: $25.)
d What is the actual loss in consumer surplus from having poor information? (Answer: $2050.)
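The given answers can again be verified numerically. The sketch below is an addition rather than part of the original exercise; it integrates under the informed inverse demand curve to value the misinformed purchases.

```python
# Check of Exercise 6-3. Informed inverse demand for each consumer is
# P = 100 - Q/4; the market price is $50, so each informed consumer buys 200.
from scipy.integrate import quad

p = 50.0
informed = lambda q: 100 - q / 4          # true marginal value schedule
q_star, q_smith, q_jones = 200, 300, 120  # informed, Smith, and Jones quantities

# Smith overbuys: units 200..300 cost $50 each but are truly valued at less.
smith_loss = p * (q_smith - q_star) - quad(informed, q_star, q_smith)[0]
# Jones underbuys: units 120..200 are worth more than the $50 they would cost.
jones_loss = quad(informed, q_jones, q_star)[0] - p * (q_star - q_jones)
print(smith_loss, jones_loss, smith_loss + jones_loss)   # 1250.0 800.0 2050.0

# Part c: uninformed market demand Q = 820 - 8P versus informed Q = 800 - 8P.
# The deadweight triangle between the two market curves at the $50 price:
dwl = 0.5 * (420 - 400) * (50 - (100 - 420 / 8))
print(dwl)                                               # 25.0
```

The contrast between the $25 market-level triangle and the $2050 actual loss is the point of the exercise: aggregation across misinformed consumers lets offsetting individual errors hide most of the damage.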


APPENDIX
DUALITY, THE COBB-DOUGLAS EXPENDITURE FUNCTION, AND MEASURES OF INDIVIDUAL WELFARE^O

The Cobb-Douglas function is a reasonably simple form of utility function that is often used in analytic work. For a two-good economy, its equation is

U = X_1^α X_2^{1−α}   where 0 < α < 1.^40

The ordinary demand curves derived from it are characterized by unitary price elasticity of demand, which is sometimes a good representation of actual behavior. The demand curves have the form

D(X_1) = αB/P_1

and

D(X_2) = (1 − α)B/P_2

where B is the consumer’s budget. The budget elasticity of demand is 1. The proportion of the budget spent on X_1 is α, and the proportion spent on X_2 is 1 − α.

The demand curves can be derived by maximizing the utility function subject to a general budget constraint. We form the Lagrangian

L = X_1^α X_2^{1−α} + λ(B − P_1X_1 − P_2X_2)

To maximize utility subject to the constraint, we set the partial derivatives with respect to X_1, X_2, and λ equal to zero and solve the equations simultaneously:

∂L/∂X_1 = αX_1^{α−1} X_2^{1−α} − λP_1 = 0   (i)

∂L/∂X_2 = (1 − α)X_1^α X_2^{−α} − λP_2 = 0   (ii)

∂L/∂λ = B − P_1X_1 − P_2X_2 = 0   (iii)

To solve, first multiply both sides of equation (i) by X_1 and simplify:

αX_1^α X_2^{1−α} − λP_1X_1 = 0

40 For an n-good economy, U = X_1^{α_1} X_2^{α_2} · · · X_n^{α_n}, where 0 < α_i < 1 and Σ_{i=1}^{n} α_i = 1.


or

αU − λP_1X_1 = 0

or

X_1 = αU/(λP_1)   (i′)

Similarly, multiply both sides of (ii) by X_2 and simplify:

(1 − α)X_1^α X_2^{1−α} − λP_2X_2 = 0

or

(1 − α)U − λP_2X_2 = 0

or

X_2 = (1 − α)U/(λP_2)   (ii′)

Now substitute (i′) and (ii′) in (iii):

B − P_1[αU/(λP_1)] − P_2[(1 − α)U/(λP_2)] = 0

or

λB = αU + (1 − α)U = U

or

λ = U/B   (iii′)

Finally, substituting (iii′) back in (i′) and (ii′) gives us the demand functions:

X_1 = αU/(λP_1) = αB/P_1

X_2 = (1 − α)U/(λP_2) = (1 − α)B/P_2

The unitary price and budget elasticities may be derived by applying their definitions to these demand equations. Note that by multiplying each side of the demand equations by the price, we see that expenditures as a proportion of the budget equal a constant: α for X_1 and 1 − α for X_2.

Much research effort has been devoted to developing easier ways to relate the theory of utility-maximizing choice to observable phenomena such as demand. One approach, which we introduce and use in this section, is based on the mathematics of duality. To convey the idea of the dual approach, note that we have formulated the consumer choice problem as


maximizing utility subject to a budget constraint. An essentially equivalent way to formulate the problem is to minimize the expenditures necessary to achieve a certain utility level. In this dual problem we work with an expenditure function subject to a utility constraint, rather than a utility function subject to a budget constraint. Under certain fairly general conditions, knowledge of the expenditure function reveals the same information about the consumer as knowledge of the utility function would.41 To illustrate this dual approach, we define the concepts of an indirect utility function and an expenditure function. Then we use them in the Cobb-Douglas case to help calculate values of the welfare measures discussed in the preceding sections.

Sometimes it is convenient to use an indirect utility function, which expresses the maximum utility a consumer can achieve as a function of prices and the budget level. We denote it by U = U(B, P_1, P_2) for the two-good case. For the Cobb-Douglas function, we find the indirect utility function by substituting the demand equations for X_1 and X_2 in the ordinary utility function:

U = X_1^α X_2^{1−α} = (αB/P_1)^α [(1 − α)B/P_2]^{1−α} = α^α (1 − α)^{1−α} B P_1^{−α} P_2^{α−1}

or, letting δ = α^α (1 − α)^{1−α}, we have

U = δB P_1^{−α} P_2^{α−1}

The indirect utility function can be rewritten in a form generally referred to as the expenditure function. The expenditure function B(U, P_1, P_2) shows the minimum budget or expenditure necessary to achieve any utility level U at prices P_1 and P_2. For the Cobb-Douglas function:

B = U P_1^α P_2^{1−α}/δ

From the expenditure function it is easy to find the compensated demand curves. Recall that a compensated demand curve shows the quantity of a good that will be bought at each possible price when the utility is held constant at some level U̅. Shephard’s lemma states that this quantity equals the partial derivative of the expenditure function with respect to price42:

X_i = ∂B(U, P_1, P_2)/∂P_i   i = 1, 2

41 The dual approach applies to the supply side as well as the demand side. We introduce duality on the supply side in the optional section of Chapter 8. A good introductory reference to the use of duality in economics is Hal R. Varian, Microeconomic Analysis (New York: W. W. Norton & Company, 1992). Additional references are contained in Chapter 9.

42 A proof of this is sketched in the optional section of Chapter 9.


Applied to the Cobb-Douglas expenditure function:

X_1 = ∂B/∂P_1 = αU̅ P_1^{α−1} P_2^{1−α}/δ

X_2 = ∂B/∂P_2 = (1 − α)U̅ P_1^α P_2^{−α}/δ

We will use these equations for the compensated curves below. We also note that it is easier to derive the ordinary demand curves from an indirect utility function than from an ordinary utility function. That is because of Roy’s identity, which states43:

X_i = −[∂U(B, P_1, P_2)/∂P_i] / [∂U(B, P_1, P_2)/∂B]   i = 1, 2

For the Cobb-Douglas function we find the ordinary demand curve for X_1:

∂U(B, P_1, P_2)/∂P_1 = −αδB P_1^{−α−1} P_2^{α−1}

∂U(B, P_1, P_2)/∂B = δ P_1^{−α} P_2^{α−1}

and therefore, by Roy’s identity,

X_1 = αB/P_1

This is, of course, the same result we derived earlier by solving a system of simultaneous equations.
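Readers who wish to verify these dual relationships mechanically can do so with a computer algebra system. The following sketch is an addition to the text; it uses the sympy library (a convenience assumption, not something the text relies on) to confirm that Shephard’s lemma and Roy’s identity recover the Cobb-Douglas demand curves derived above.

```python
# Symbolic check of the Cobb-Douglas duality results: Shephard's lemma on the
# expenditure function gives the compensated demand, and Roy's identity on the
# indirect utility function gives the ordinary demand.
import sympy as sp

a, B, P1, P2, U = sp.symbols('alpha B P1 P2 U', positive=True)
delta = a**a * (1 - a)**(1 - a)

V = delta * B * P1**(-a) * P2**(a - 1)   # indirect utility U(B, P1, P2)
E = U * P1**a * P2**(1 - a) / delta      # expenditure function B(U, P1, P2)

# Shephard's lemma: X1 = dE/dP1, the compensated demand curve,
# which should equal alpha * U * P1**(alpha - 1) * P2**(1 - alpha) / delta.
print(sp.simplify(sp.diff(E, P1)))

# Roy's identity: X1 = -(dV/dP1)/(dV/dB), which should simplify to alpha*B/P1.
print(sp.simplify(-sp.diff(V, P1) / sp.diff(V, B)))
```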

43 The following proof is from Varian, Microeconomic Analysis, pp. 106–107. It is true by identity that a given utility level U̅ can be expressed by the indirect utility function

U̅ = U[P_1, P_2, B(U̅, P_1, P_2)]

At any prices, that is, the consumer will achieve the U̅ level if he or she is given the minimum expenditure necessary to achieve it. But then the derivative of this expression with respect to price must always equal zero:

∂U̅/∂P_i = ∂U/∂P_i + (∂U/∂B)(∂B/∂P_i) = 0

We can write this

∂B/∂P_i = −(∂U/∂P_i)/(∂U/∂B)

But from Shephard’s lemma we know that the term on the left is X_i. This gives us Roy’s identity:

X_i = −(∂U/∂P_i)/(∂U/∂B)


Figure 6A-1. The compensating variation and its approximations.

To illustrate some of the measures we have described, suppose an individual has this specific Cobb-Douglas utility function:

U = X_1^{0.1} X_2^{0.9}

and let us assume that the budget is $10,000, P_1 = $2.00, and P_2 = $1.00. Focusing on the first good, the consumer purchases

D(X_1) = 0.1(10,000)/2 = 500

Suppose the price of this good increases to $4.00. Then the consumer purchases

D(X_1) = 0.1(10,000)/4 = 250

We represent the initial situation and the change in Figure 6A-1. Let us find the change in ordinary consumer surplus ∆CS, the compensating variation CV, and the equivalent variation EV. The change in consumer surplus is area ABCE. Its exact area is44

44 Note that the area ABCE can be calculated by integrating over either the price or the quantity axis. Integrating over the price axis is more convenient in this case.


∆CS = ∫_{2.00}^{4.00} X_1 dP_1 = ∫_{2.00}^{4.00} [0.1(10,000)/P_1] dP_1 = 1000(ln 4.00 − ln 2.00) = $693.15

In practice, the demand curves are always estimated from actual observations, so the ∆CS is also an estimate. Sometimes the only information available will be the initial and final prices and quantities. In that case, it is often assumed that the demand curve is approximately linear “over the relevant range.” This assumption may be fine for small price changes, but it can lead to more serious estimation errors as the change considered gets larger. If we make the linearity assumption in the present case of a quite large price change (i.e., 100 percent), the area we calculate is still ABCE but we treat the CE boundary as the dashed line shown in Figure 6A-1. In this case, using the subscript L for the linearity assumption,

∆CS_L = ABCD + CDE = 500 + (1/2)(2)(250) = $750

Thus we overestimate the change in ordinary consumer surplus by 8.2 percent—not bad for such a large price change. Think of this error as being caused by uncertainty concerning the true demand curve.

Now let us turn to calculating the CV. One method requires two steps: (1) Find the relevant compensated demand curve. (2) Calculate the area of the CV by the method of integration used for the ∆CS. The relevant compensated demand curve is the one through point E in Figure 6A-1, where utility is held constant at its initial level. Since we know the utility function, we can find the initial utility level by plugging in the initial consumption amounts of X_1 and X_2 (or equivalently, by plugging in the budget level and initial prices in the indirect utility function). We know that X_1 is initially 500; X_2 is easily determined to be 9000 by substituting the known parameters B = $10,000, 1 − α = 0.9, and P_2 = 1.00 into D(X_2). Then

U^0 = 500^{0.1}(9000)^{0.9} ≈ 6741

It is a simple matter to find the compensated demand curve associated with this utility level. We merely substitute in the compensated demand equation derived from the expenditure function:

X_1 = αU̅ P_1^{α−1} P_2^{1−α}/δ = 0.1(6741)P_1^{−0.9}(1)^{0.9}/[0.1^{0.1}(0.9)^{0.9}] = 933.05 P_1^{−0.9}


This is the equation of the compensated demand curve through point E. When P1 = $4.00, X1 = 268 if the consumer is compensated. This is shown as point F in Figure 6A-1 and the compensating variation is area ABFE. To calculate it,



CV = ∫_{2.00}^{4.00} 933.05 P_1^{−0.9} dP_1 = 933.05 [P_1^{0.1}/0.1] |_{P_1=2.00}^{P_1=4.00} = $717.75

Thus, the CV is only 3.54 percent bigger than the ordinary consumer surplus in this case. Note that if we use the linear approximation here, the estimated CV_L is

CV_L = ABFG + FGE = 2(268) + (1/2)(2)(232) = $768.00

This method might be used if nothing is known except the initial and final positions and there is an estimate of the income effect used to approximate the location of point F. These empirical examples should help illustrate why, in many situations, abstract debate about which of the different measures should be used may not be worth the fuss; uncertainty about the true demand curve is often the dominating source of potential evaluative error.

The exact method of calculating the CV used above illustrated how to identify the equation for a compensated demand curve. But a little thought about the meaning of an expenditure function leads to an interesting shortcut. When the change in the state of the world concerns prices, the CV can be simply expressed:

CV = B(P_1^1, P_2^1, U^0) − B(P_1^1, P_2^1, U^1)

where P_i^j is the price of the ith commodity in period j and U^j is the utility level in period j. The first term is the minimum budget necessary to achieve the original utility level at the new prices, and the second is the actual budget in the new state of the world (i.e., the minimum expenditure necessary to achieve the actual utility level at the actual prices). The difference between them is precisely what we have defined as the CV. Note that since price changes do not affect the size of the consumer’s budget, the second term is equivalent to the following:

B(P_1^1, P_2^1, U^1) = B(P_1^0, P_2^0, U^0)

That is, the actual budget in the new state of the world is the same as the initial budget. Then we can substitute this in the expression for the CV:

CV = B(P_1^1, P_2^1, U^0) − B(P_1^0, P_2^0, U^0)

In our example where P_2^1 = P_2^0, this expression corresponds exactly to the change in consumer surplus under the compensated demand curve for U = U^0 when the price changes


from P_1^0 to P_1^1. Rather than actually calculate this demand curve, one can compute the CV directly by using the Cobb-Douglas expenditure function and the above expression:

CV = U^0 (P_1^1)^α (P_2^1)^{1−α}/δ − U^0 (P_1^0)^α (P_2^0)^{1−α}/δ

On substituting the parameter values from our example, δ = 0.722467, U^0 = 6741, P_2^1 = P_2^0 = 1, P_1^1 = 4, and P_1^0 = 2, we have

CV = 6741(4^{0.1} − 2^{0.1})/0.722467 = $717.75

Following the same reasoning, we can express the EV for price changes:

EV = B(P_1^0, P_2^0, U^0) − B(P_1^0, P_2^0, U^1)

The first term is the actual budget, and the second term is the minimum budget necessary to achieve the new utility level at the initial prices. The difference between the terms is what we have defined as the EV. Again, since price changes do not affect a consumer’s budget, we substitute (this time for the first term):

EV = B(P_1^1, P_2^1, U^1) − B(P_1^0, P_2^0, U^1)

When P_2^1 = P_2^0, this expression corresponds to the change in consumer surplus under the compensated demand curve for U = U^1. For our example, we calculate U^1 = 6289 and by using the parameters above,

EV = 6289(4^{0.1} − 2^{0.1})/0.722467 = $669.62

As expected, this is less than the $693.15 loss in ordinary consumer surplus.
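As a check on the arithmetic of this appendix, all three welfare measures can be computed in a few lines. The sketch below is an addition to the text; the tiny discrepancies from the printed figures ($717.75 and $669.62) arise only because the text rounds U^0 and U^1 to 6741 and 6289.

```python
# Numerical check of the appendix example: U = X1**0.1 * X2**0.9, B = $10,000,
# P2 = $1 throughout, and P1 rising from $2 to $4. Computes the change in
# ordinary consumer surplus, the CV, and the EV from the expenditure function.
import math

alpha, B, p0, p1 = 0.1, 10_000.0, 2.0, 4.0
delta = alpha**alpha * (1 - alpha)**(1 - alpha)

expenditure = lambda u, p: u * p**alpha / delta   # B(U, P1, P2) with P2 = 1
U0 = delta * B * p0**(-alpha)                     # indirect utility at P1 = $2
U1 = delta * B * p1**(-alpha)                     # indirect utility at P1 = $4

dCS = alpha * B * math.log(p1 / p0)               # area behind ordinary demand
CV = expenditure(U0, p1) - expenditure(U0, p0)    # utility held at U0
EV = expenditure(U1, p1) - expenditure(U1, p0)    # utility held at U1
print(round(dCS, 2), round(CV, 2), round(EV, 2))  # 693.15 717.73 669.67
```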

CHAPTER SEVEN

UNCERTAINTY AND PUBLIC POLICY

IN THIS CHAPTER we will introduce models that have been developed to explain the effect of uncertainty on individual economic behavior. Increasingly, analysts have come to recognize that uncertainty is not just a curious oddity that arises in a few isolated instances. It is a pervasive phenomenon that explains a great deal about individual behavior and can be a major factor in the design of policies. Consider the fear of crime and our individual and policy responses to it. As individuals, we take actions to reduce the risk of becoming victims. We put locks on our doors and windows. We may pay for door-to-door taxi service in some areas because we believe it exposes us to fewer hazards than alternatives that involve walking or waiting at a bus stop. We may pay a premium for housing located in a relatively “safe” neighborhood. We may purchase insurance that offers some reimbursement in the event of theft or other possible losses. None of these actions guarantees our safety and security. But they reduce the probability that we will experience losses, and it is this reduction in uncertainty that many value enough to make the protective expenditures worthwhile. Our desire for more security from crime goes beyond individual actions. We have public policies in the form of police, prosecutors, courts, jails, and prisons to help keep our “streets safe.” Other public policies can also be understood in terms of their risk reduction benefits, including social “safety net” programs such as unemployment insurance to protect against unexpected income loss, health insurance, programs to reduce environmental hazards, and license requirements for commercial airplane pilots. The value of these programs depends in part on how much people are willing to pay to avoid or reduce uncertainty and in part on how the cost of doing so through the programs compares to other methods. The examples illustrate why analysts must be able to assess the importance of changes in the level and cost of uncertainty associated with proposed policies.


To accomplish the assessment, we must develop a more general understanding of how individuals respond to uncertainty. We begin by reviewing the concepts of expected value and expected utility and consider the proposition that individuals act to maximize expected utility. The latter proposition, known as the expected utility theorem, is helpful in understanding the economic costs of uncertainty. We then consider some of the choice possibilities for responding to uncertainty, introduced in the section on risk control and risk-shifting mechanisms. There are many situations in which individual responses to uncertainty do not seem to be modeled well by the expected utility theorem. Some situations may be better modeled by concepts from the theory of games against persons; we illustrate this with an urban housing problem known as the Slumlord’s Dilemma; a similar situation may characterize certain issues in international trade. Behavior in other situations may be better modeled by concepts of bounded rationality; we consider this in the context of food-labeling requirements, federal policy to subsidize disaster insurance in areas that are highly flood-prone, and the allocation of a retirement portfolio between stocks and bonds. These latter models are discussed in the section on alternative models of individual behavior under uncertainty.

The response to uncertainty depends not only on how individuals think about it but also on the set of possible responses. The situations analyzed in this chapter are primarily those in which the individual does not alter the amount of uncertainty, but acts to reduce exposure to it.1 An example is the uncertainty faced by the farmer planting now and concerned about crop price at harvest time. The farmer cannot change the price uncertainty, but he or she can reduce the risk from it by selling a futures contract (the sale of some portion of the expected future crop at a price agreed upon now).

There are a wide variety of social mechanisms that are designed to reduce the costs of risk. We do not pretend in this chapter to give an exhaustive description, but we mention and explain many of them at points convenient to their development. A fundamental principle behind many of them is to shift the risk to where it is less costly. Two basic procedures, risk-pooling and risk-spreading, are often used by individuals toward that end. Insurance, stock markets, and futures markets are examples of such mechanisms. Some mechanisms of public policy, for example, limited liability and the subsidized disaster insurance mentioned above, are used to alter the distribution of risk. Other public policies are adopted in an attempt to control risk more directly through such means as occupational licensing requirements, consumer product safety standards, and health standards

1 This is to allow focus on the value of risk-shifting. Individuals will also act to reduce uncertainty directly rather than simply shifting it. Protective expenditures in the form of fire extinguishers reduce the probability of damage from a fire. Fire insurance, on the other hand, changes neither the likelihood of a fire nor the damage that it will cause, but it does reduce the risk to the purchaser by shifting it to the insurance company. The chapter helps us to understand both of these types of actions. We defer some discussions until we have reviewed the functioning of markets and can develop perspective on the role of information within the markets.
For example, credit lenders and employers can use resources to gather information about credit and job applicants before responding to their respective applications. Examples that emphasize information in markets are discussed throughout Parts IV and V.


in the workplace. The appropriateness of these policies is often a difficult question to resolve, in large part because of analytic uncertainty about how individuals respond to them relative to alternatives. One interesting area of analysis involving uncertainty is health policy, particularly national health insurance or alternatives to it. In the section on medical care insurance later in this chapter, the cost spiral in the delivery of medical services is shown to be, in large part, an unintended consequence of insurance coverage. The insurance coverage that reduces risk simultaneously distorts individual incentives to conserve scarce resources. The chapter extends the analysis of this moral hazard problem with applications to the 1980s savings and loan crisis as well as the continuing problem of involuntary unemployment. An appendix illustrates several calculations of risk assessments, including an empirical method for estimating the value of risk savings from medical insurance.

Expected Value and Expected Utility

When an individual makes an economic decision, we often assume that each of the alternatives is known and understood with certainty. But in many, perhaps most, cases there is uncertainty about what the individual will receive as a consequence of any specific choice. For example, the decision to allocate time to reading this textbook is a gamble; it may not pay off for any particular reader.2 New cars may turn out to be lemons; a job may be offered that exposes the worker to risk of injury. In each case the person making the decision simply does not know in advance what the outcome will be. This type of uncertainty does not necessarily lead to any revisions in the ordinary theorems of demand and supply. For example, other things being equal, an increase in the price of a risky commodity would be expected to reduce the demand for the commodity. A reason for studying uncertainty on its own, however, is that we observe that the willingness to pay for a risky commodity depends upon the perceived likelihoods of possible outcomes. A potential buyer of a particular car will offer less as his or her subjective evaluation of the likelihood that it is a lemon increases. It is this phenomenon on which we wish to focus here: how individuals respond to changes in these perceived likelihoods and how social mechanisms can affect those perceptions.

Certain fundamental concepts must be clarified before we can consider alternative models of individual behavior under uncertainty. One is the set of alternative states of the world: the different, mutually exclusive outcomes that may result from the process generating the uncertainty. For example, if a coin is flipped, two states of the world may result: It can come up heads, or it can come up tails. If a single die is thrown, there are six possible states, one corresponding to each face of the die. The definition of the states depends in part on the problem being considered. If you are betting that a 3 will come up on the die, then only two states are relevant to you: 3 and not 3. An uncertain outcome of a university course may be any of the specific grades A−, B+, C+, and so on, for one student and pass or not pass for another, depending on which gamble has been chosen.

2 Please note that the same thing is true of alternative textbooks.


Another fundamental concept is the probability that a particular state will occur. If an evenly weighted coin is flipped, the probability that it will come up heads is 1/2 and the probability that it will come up tails also is 1/2. If the evenly weighted die is thrown, each face has a probability of 1/6 of being up. If you bet on 3, then you have a 1/6 chance of getting three and a 5/6 chance of getting not 3.

The relation between an objective conception of the states of the world with their associated probabilities and an individual’s subjective perception of them is a subject of considerable controversy in many applications. It is a philosophical issue whether any events are truly random. If a coin is flipped in exactly the same way every time (for instance, by machine in a constant environment, e.g., no wind), it will land with the same face up every time: Under those conditions there is no uncertainty. Based on the laws of physics and given perfect information on how a coin is flipped and time to calculate, one can predict with virtual certainty what the outcome will be. Put differently, in an objective sense there is no uncertainty. Why then do we all agree, for the usual case such as the coin toss at the start of a football game, that the outcomes heads and tails have equal probability? There are two parts to the answer. First, coins are not tossed exactly the same way, and we lack the information and calculation time necessary to predict the outcome with a model based on the laws of physics. Thus the uncertainty is due to our own lack of information and/or information-processing ability. Second, we do have some information: historical evidence. We have observed that in a large number of these uncontrolled or irregular coin tosses, heads and tails appear with approximately equal frequency. We might say that when the actual determinants of the coin toss outcome are selected randomly (e.g., the force, speed, and distance of the flip), it is objectively true that the outcomes have equal probability. Furthermore, if we share the subjective perception that the football referee “chooses” the determinants of the toss randomly, we conclude that the probability of each outcome is 1/2.

Recognizing that perceptions of probabilities depend heavily on the type of knowledge we possess, let us consider a distinction that Frank Knight proposed be made between “risky” and “uncertain” situations.3 Risky situations, in his terminology, are those in which each possible outcome has a known probability of occurring. Uncertain situations, again according to Knight, are those in which the probability of each outcome is not known. The coin toss is risky, but whether there will be a damaging nuclear accident next year is uncertain. There is some risk of particular medical problems arising during surgery, but the consequences of depleting the ozone layer in the earth’s atmosphere are uncertain. One of the factors that explains why we consider some situations uncertain is lack of experience with them; we do not have many trial depletions of the ozone layer, unlike our histories of coin tossing.

Let us now recognize that knowledge differences may cause the same situation to be perceived differently by different people. Before you started reading this book, you may

3 In less formal usage common today, uncertainty is used to refer to all situations in which the outcome is unknown. Thus quotation marks are used here to denote the meanings assigned by Knight. See F. H.
Knight, Risk, Uncertainty, and Profit (Boston: Houghton Mifflin Company, 1921).


have been uncertain about whether you would enjoy it. I may know the probability that you will enjoy it, based on surveys of past readers. But whether you will enjoy the rest of the book is perceived as risky by both of us. Furthermore, we will have different probability estimates, and yours will be based on better information (your own reactions so far) than mine. Thus our subjective perceptions of the probability will differ.

In most economic models individual decision-making depends upon the subjective perceptions about the possible states and their likelihoods. We made two points about those perceptions above. First, doubts about which state will occur are due to lack of knowledge. Second, because there are knowledge differences among people, subjective perceptions will often be different. At this point we might inquire further about how subjective perceptions are formed. For example, individuals can alter their perceptions by seeking additional information, and they might decide to do so in light of the perceived benefits and costs. Different analytic assumptions about the formation of subjective perceptions lead to different predictions about behavior and can lead to quite different policy recommendations. However, we avoid those complications in this section by sticking to situations such as the coin toss; individuals perceive the world as “risky” and the subjective probability assessments coincide with the objective ones. Thus we can refer to “the” probability of an event in an unambiguous sense.

One other basic concept to be introduced is the payoff in each possible state of the world. Suppose that when a coin is flipped, you will receive $2 if the coin turns up heads and −$1 if the coin turns up tails. These payments or prizes associated with each state are referred to as the payoffs. Then we can define the expected value of a risky situation: The expected value is the sum of the payoff in each possible state of the world weighted by the probability that it will occur. If there are n possible states, and each state i has a payoff X_i and a probability of occurring Π_i, the expected value E(V) is

E(V) = Σ_{i=1}^{n} Π_i X_i

In the coin toss game, the E(V) is

E(V) = (1/2)($2) + (1/2)(−$1) = $0.50

If all the states are properly considered, it will always be true that

Σ_{i=1}^{n} Π_i = 1

that is, it is certain that one of the states will occur. When we flip the coin, it must come up either heads or tails. (We do not count tosses in which the coin stays on its edge.) If we agree to flip the coin 100 times with the same payoffs as above on each flip, then the E(V) of this new game is $50 [100 times the E(V) of one flip] because the result of any single flip is completely independent of the results of other flips in the game. Many people would be willing to pay some entry price to play this game. Suppose the entry price is $50, or $0.50 per flip. Then the entry price equals the E(V) of playing the game, or the net expected gain is zero. Any risky situation in which the entry price equals the E(V) is called a fair game.
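Because an expected value is just a probability-weighted sum, the coin-toss numbers are easy to mechanize; the following sketch is an addition to the text, not part of it.

```python
# Expected value of the coin-toss gamble: +$2 on heads, -$1 on tails, each
# with probability 1/2; then the 100-flip version of the same fair game.
probs = [0.5, 0.5]
payoffs = [2.0, -1.0]

ev = sum(pi * x for pi, x in zip(probs, payoffs))
print(ev)         # 0.5 -> a $0.50 entry price makes the single toss a fair game
print(100 * ev)   # 50.0 -> the E(V) of 100 independent flips
```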


It is common for individuals to refuse to play fair games. Let us go back to the simple game of a single coin toss with payoffs as above. For an entry price of $0.50, which makes the game fair, risk-averse people would not be willing to play. These people prefer the certainty of not playing, which has the same net expected value as playing, to the risky situation. Some people would be willing to take the risk, but if the payoffs on a single toss were changed to be $200 on heads and −$100 on tails and the entry price were raised to $50, fewer people would play. All three situations have the same net expected value, so it must be some other factor that explains why fewer people are willing to play as the stakes get raised. This other factor is the risk.

A crucial insight was offered by Bernoulli, a mathematician of the eighteenth century. He suggested that individuals value not the expected dollars, but rather the expected utility that can be derived from them. If individual utility functions are characterized by a diminishing marginal utility of money, then the utility gained from an increase of, say, $100 will be less than the utility lost from a decrease of $100. The expected change in utility from accepting a gamble with these two outcomes equally likely would be negative: The individual would decline the fair gamble.

Let us develop this idea more carefully. We can define expected utility as follows: The expected utility of a risky situation is the sum of the resulting utility level in each possible state of the world weighted by the probability that it will occur. If we let W_0 equal the initial wealth, E_0 equal the entry price, and U(W) represent the utility function, the expected utility E(U) may be expressed as

E(U) = Σ_{i=1}^{n} Π_i U(W_0 − E_0 + X_i)

The expected utility theorem simply says that individuals choose among alternatives in order to maximize expected utility.4 As we discuss this theorem, let us keep in mind a distinction between positive and normative views. The positive question concerns the predictive power of the theory, the extent to which actual behavior is consistent with the implications of the theorem. The normative issue is whether people should behave in accordance with the theorem even if they do not.

4 To derive the expected utility theorem from assumptions about behavior requires several assumptions about human decision-making in addition to those introduced in Chapter 2. For a full review see K. Arrow, Essays in the Theory of Risk-Bearing (Chicago: Markham Publishing Company, 1971). The original derivation of a utility measure to include risky situations was made by John von Neumann and Oskar Morgenstern in their Theory of Games and Economic Behavior (Princeton, N.J.: Princeton University Press, 1944). Probably the most controversial of the additional assumptions is one that implies that an individual is indifferent between two lotteries that are identical except in one state of the world; in that state the prizes are different but are ones to which the individual is indifferent. If a person is indifferent between A and B, then the assumption is that

Π_A U(A) + (1 − Π_A)U(C) = Π_A U(B) + (1 − Π_A)U(C)

However, it is commonly found in surveys and experiments that individuals are not indifferent between these two lotteries. See, for example, Jacques Dreze, “Axiomatic Theories of Choice, Cardinal Utility and Subjective Utility: A Review,” in P. Diamond and M. Rothschild, eds., Uncertainty in Economics (New York: Academic Press, 1978), pp. 37–57. See also Mark J. Machina, “‘Expected Utility’ Analysis without the Independence Axiom,” Econometrica, 50, March 1982, pp. 277–323.


Figure 7-1. The von Neumann–Morgenstern utility index for evaluating risky situations.

That is, perhaps individuals do not always understand the consequences of their choices under uncertainty and they would be better off if they did act to maximize expected utility. Unless stated otherwise, we will generally take the view that increases in expected utility are desirable. The main implication of this insight is that it offers an understanding of why people are willing to pay something to avoid risk. This, in turn, explains a great deal of behavior that cannot be explained by the effect on expected value alone, for example, the purchase of all forms of insurance, diversifying investment portfolios, and spending on safety measures beyond those that increase expected value. To illustrate the behavior implied by the theorem, we first construct a diagram illustrating how an expected utility maximizer evaluates risky choices. Imagine for the moment that an individual can participate in a lottery with only two possible outcomes: winning $50,000 or nothing. Given a choice, naturally the individual would prefer a lottery with a higher rather than lower probability of winning the $50,000. The best lottery would be the one in which the probability of winning equaled 1, and the worst would have a probability of winning equal to 0. Let us arbitrarily assign a utility value of 1 to the best lottery (i.e., $50,000 with certainty) and 0 to the worst lottery (i.e., $0 with certainty). In Figure 7-1 the horizontal axis shows the monetary payoff and the vertical axis shows the utility level. We graph the utility level and payoff of the two extreme lotteries, with point A as the best lottery and the origin as the worst. Using these two lotteries as reference points, we now construct a utility index specific to the individual. This index can be used to reveal


the individual’s preference ordering of all possible risky situations (with possible outcomes between the best and the worst), provided the individual is an expected utility maximizer. Consider any amount of money between $0 and $50,000, such as $10,000, which we offer to the individual with certainty. Obviously, the best lottery is preferred to a certain $10,000. Similarly, the certain $10,000 is preferred to the worst lottery. Therefore, there must be some lottery with a probability of winning between 0 and 1 that the individual considers exactly as desirable as the certain $10,000. We ask the individual to identify the probability. Suppose it is .4. Then we define the .4 probability as the utility value to the individual of $10,000 with certainty. This is shown as point B in Figure 7-1. If we follow this procedure for all monetary amounts between $0 and $50,000, we have a relation showing the individual’s utility level as a function of wealth.5 This is shown as the solid curved line. The height of the curve, or the utility level, equals the probability of winning necessary to make the individual indifferent between the lottery and the level of certain wealth shown on the horizontal axis. This construct is referred to, after its creators, as the von Neumann–Morgenstern utility index.6

The dashed straight line connecting the origin with point A shows, for each possible probability, the expected value of the lottery (on the horizontal axis) and the expected utility level (on the vertical axis). For example, the E(V) of the lottery with a .4 chance of winning $50,000 is $20,000:

E(V) = .4($50,000) + .6($0) = $20,000

The expected utility of the lottery is

E(U) = .4U($50,000) + .6U($0) = .4(1) + .6(0) = .4

These are graphed as point C. Its height or expected utility level of .4 should not be surprising. The utility index was constructed in recognition of the individual’s indifference between this lottery and the certain wealth ($10,000) assigned a utility level of .4. The lottery and its certain wealth equivalent should have the same utility level. Thus point C represents the expected value and expected utility from the .4 gamble. The expected values and expected utilities of gambles with higher probabilities of winning lie on the dashed line to the right of point C, and those with lower probabilities lie to the left.

5 This is an indirect utility function; the utility comes not from wealth directly, but from the goods and services purchased with it.

6 This index can be mistakenly interpreted as a cardinal utility scale. It is true that it is cardinal in the sense that it is unique up to a linear transformation. However, it does not measure preference intensity. For example, one cannot conclude that a risky situation with E(U) = .2 is twice as preferable to one in which E(U) = .1. All the index does is rank-order alternative risky situations. For a discussion of this, see William J. Baumol, Economic Theory and Operations Analysis, 4th Ed. (Englewood Cliffs, N.J.: Prentice-Hall, Inc., 1977), pp. 431–432.
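To see these calculations in code, the sketch below assumes a particular concave index, U(W) = (W/50,000)^(1/2), scaled to run from 0 at $0 to 1 at $50,000 as in Figure 7-1. The functional form is an illustrative assumption, not the index elicited from the individual in the text, so its certain-wealth equivalent differs from the $10,000 in the example.

```python
# Expected utility, certain-wealth equivalent, and pure risk cost under an
# assumed concave von Neumann-Morgenstern index U(W) = (W / 50_000) ** 0.5.
# The square-root form is illustrative only; any index with U(0) = 0 and
# U(50_000) = 1 could be elicited as described in the text.
U = lambda w: (w / 50_000) ** 0.5
U_inv = lambda u: 50_000 * u**2             # invert the index

pi = 0.4                                    # probability of winning $50,000
e_value = pi * 50_000 + (1 - pi) * 0        # expected wealth: $20,000
e_util = pi * U(50_000) + (1 - pi) * U(0)   # expected utility: 0.4
ce = U_inv(e_util)                          # certain-wealth equivalent: $8,000
print(e_value, e_util, ce, e_value - ce)    # pure risk cost here is $12,000
```

The quantity printed last, expected wealth minus the certain-wealth equivalent, is the pure risk cost developed in the discussion of Figure 7-2 below; it differs from the $6700 and $10,000 figures there only because the curvature assumed here differs from the curves drawn in that figure.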


The utility index can now be used to rank-order risky situations because it allows their expected utilities to be calculated and compared. Suppose the individual actually has $20,000 with certainty; thus the current position is shown at point D. We then propose a fair game: We will allow the individual to take a chance on the above lottery with .4 probability of winning $50,000 in return for an entry price of $20,000 (the expected value of the lottery). The individual refuses; the expected utility from the gamble (the height at point C) is less than the utility of the certain $20,000 (the height of point D). An individual who refuses any fair gamble is said to be risk-averse.

This does not imply that risk-averse individuals will refuse any gamble. Suppose, for the same $20,000 entry price, the individual could play a lottery that had a .8 probability of winning $50,000. This is shown as point M in Figure 7-1. It has an expected value of $40,000, and we can see geometrically that the expected utility from the gamble (the .8 height of point M) exceeds the utility of the current position at D. Thus the risk-averse expected-utility maximizer would accept this gamble. Its attraction is that it has an expected value sufficiently greater than the entry price (unlike a fair gamble). This suggests, for example, why even financially conservative people may invest in the stock market.

The risk aversion illustrated above is a consequence of the concavity of the utility function we drew. A (strictly) concave function is one whose values lie above the straight line connecting any two of its points (it is hill-shaped). This also means that its slope is diminishing. A utility function has a diminishing slope only if the marginal utility of money or wealth is diminishing. Thus anyone with a diminishing marginal utility of wealth has a concave utility function and is risk-averse. The greater the degree of concavity of the utility function, the greater the risk aversion. (We will provide a measure of this shortly.) We illustrate in Figure 7-2 several different utility functions that vary in the degree of risk aversion (OBDA, OHEA).

Of course, it is possible that some individuals are not risk-averse. We also show in Figure 7-2 a utility function OGFA characterized by increasing marginal utility of wealth. This individual evaluates the utility of the fair gamble (the height at point C) as greater than the utility of a certain $20,000 (the height at point G). In other words, this individual would accept the fair gamble of participating in the lottery for an entry price of $20,000. An individual who prefers to accept any fair gamble is a risk lover. Similarly, the individual with the straight-line utility function OCA (constant marginal utility of wealth) is indifferent to our proposal; an individual who is indifferent to any fair gamble is risk-neutral.

One way to measure the strength of risk aversion to a gamble is to look at the pure risk cost. To understand the measure, we use the concept of the certain-wealth equivalent. The certain-wealth equivalent is the amount of certain wealth that provides utility equivalent to the risky situation. Geometrically it is the wealth level at which the height of the utility function equals the height measuring expected utility. Then the pure risk cost is defined as the difference between the expected wealth of a risky situation and its certain-wealth equivalent. To illustrate, we look at Figure 7-2 in a different context.
Suppose that each individual owns $50,000 in jewelry that has a .6 probability of being stolen. Think of the origin of the horizontal axis as an unspecified amount of wealth from other sources; the horizontal axis now measures the additional wealth from jewelry and is

Figure 7-2. The curvature of the utility function reflects attitude toward risk.

dependent upon which state occurs (stolen or not stolen). On the vertical axis the utility level is measured; the scale is from 0 for the worst outcome (stolen) to 1 for the best outcome (not stolen) as before. Point C represents the expected wealth of $20,000 [= .4($50,000) + .6($0)] and the expected utility level of .4 [= .4(1) + .6(0)] of this risky situation; by design it is the same for each of the four different utility curves. The certain-wealth equivalents for each individual are the wealth levels where their utility curves have a height of .4. These are different for each of the four utility functions.

Consider first the individual with the utility curve OHEA. This person is indifferent between the risky situation with a $20,000 expected wealth (point C) and having a certain $13,300 (point H). Thus the pure risk cost is $6700 (= $20,000 expected value − $13,300 certain-wealth equivalent). That is, this individual will pay up to $6700 in terms of reduced expected value in order to avoid the risk.

Risk-averse persons will pay to reduce risk. The willingness to pay to avoid risk is a fundamental reason for the existence of insurance and other risk-reducing mechanisms that we will consider shortly. For example, this individual will pay up to $36,700 (the expected loss plus the pure risk cost) for full-coverage jewelry insurance; this ensures that the net wealth from jewelry (after insuring it) is $13,300 in both states of the world and thus eliminates all of the risk. If an insurance company can offer such insurance at or near the fair entry price of $30,000 (the expected loss), there will be room for a deal.

Recall that we stated earlier that the individual with the more concave utility function OBDA is more risk-averse. Now we can see that he or she evaluates the pure risk cost of the same risky situation at a greater amount than $6700; to be exact, $10,000 (= $20,000

Figure 7-3. Risk-averse persons prefer smaller to bigger gambles.

expected wealth − $10,000 certain-wealth equivalent at point B). This person would be willing to pay up to $40,000 for full-coverage jewelry insurance. The risk-neutral person has no risk cost and will take no reduction in expected value in order to avoid the risky situation (i.e., is indifferent to purchasing insurance at the fair entry price of $30,000). The risk lover has a negative risk cost of $8000, since the certain-wealth equivalent (at point F) is $28,000. That is, because the latter enjoys the risk, the expected value would have to be increased by $8000 in order to persuade the individual to avoid the risk (the individual would decline insurance even at the fair entry price). The above discussion shows that perceived risk cost is a function of risk preferences. Now we wish to illustrate the effect on one individual of varying the amount of risk. In Figure 7-3 we replicate the utility function OBDA from Figure 7-2. Point C shows the expected utility and expected value from the jewelry as discussed above. Let us contrast this with another situation, one in which only $40,000 of jewelry is at risk and the other $10,000 of wealth is safe. Thus, the individual will either end up at point B (the jewelry is stolen) or point A (not stolen). The straight line BA shows each possible combination of expected wealth (on the horizontal axis) and expected utility (on the vertical axis) that may result, depending on the probability of theft.7 Note that the line BA is above the line OA. This is because, in an important sense, the risk is lower in the new situation of BA: for any expected wealth the likelihood of ending 7 Recall that both the expected value and the expected utility are weighted averages of their respective values at points B and A; the weights on each are the same and equal the probabilities of state B and state A.

up in positions “far” from it has been reduced (i.e., there is no chance wealth will end up at less than $10,000). Put differently, for any expected wealth the gamble represented by line BA is smaller than that of OA. Two general implications for risk-averse individuals follow: (1) For any expected wealth (on the horizontal axis), the expected utility of a smaller gamble is greater than that of the larger gamble. (2) Similarly, for any expected wealth, a smaller gamble has a lower pure risk cost.

Let us illustrate by assuming a specific probability of theft equal to .75, chosen to keep the expected value at $20,000 as in the prior situation. That is, the E(V) of this new situation is

E(V) = .25($50,000) + .75($10,000) = $20,000

The expected utility can be calculated:

E(U) = .25U($50,000) + .75U($10,000) = .25(1) + .75(.4) = .55

This is shown as point K on line BA. The expected utility of the new situation (the height at point K) is greater than that of the initial situation (the height at point C), even though the expected value is the same. Furthermore, the risk cost of the smaller gamble is only LK, or $6000, as compared with BC, or $10,000, for the larger gamble. This geometric illustration helps to clarify why, in our earlier examples of coin toss games with identical expected values but increasing stakes, individuals may become less inclined to play as the stakes become greater.8

It is often useful to be able to measure the level of risk in a way that does not depend on individual preferences (the pure risk cost does). There is no unique measure, and for any measure individuals need not respond to it similarly. Nevertheless, one common measure that goes a long way toward summarizing how dispersed the outcomes may be is the variance. The variance Var(X) is defined as

Var(X) = Σi πi [Xi − E(V)]²

where i = 1, 2, . . . , n possible states of the world, Xi is the outcome in the ith state and πi its probability, and E(V) is the expected value. For any given expected value, the variance is greater as the likelihood of ending up with an outcome “far” from the expected value increases. Furthermore, in the appendix we review a result that shows that the pure risk cost is approximately proportional to the variance. To illustrate the calculation, the variance of our first jewelry example Var(1), when $50,000 is stolen with probability .6, is

Var(1) = .6(0 − $20,000)² + .4($50,000 − $20,000)² = $600 million

8 Recall that risk-averse people will gamble if the gamble has a sufficiently high net positive expected value (i.e., is not fair).
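Both this calculation and the analogous one for the smaller gamble discussed next can be verified directly; the short sketch below simply evaluates the variance formula.

```python
# Check the variance formula for the two jewelry gambles in the text.
def variance(outcomes, probs):
    ev = sum(p * x for p, x in zip(probs, outcomes))          # E(V)
    return sum(p * (x - ev) ** 2 for p, x in zip(probs, outcomes))

var1 = variance([0, 50_000], [0.6, 0.4])         # all $50,000 at risk, p(theft) = .6
var2 = variance([10_000, 50_000], [0.75, 0.25])  # $40,000 at risk, p(theft) = .75
print(var1, var1 ** 0.5)   # 600000000.0  24494.89... -> "$600 million", sd $24,495
print(var2, var2 ** 0.5)   # 300000000.0  17320.50... -> "$300 million", sd $17,320
```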

The variance of the second example Var(2), when $40,000 is stolen with probability .75, is

Var(2) = .75($10,000 − $20,000)² + .25($50,000 − $20,000)² = $300 million

The variance of the second is lower than the first because it has greater probability of outcomes “closer” to the expected value. Since the variance can get large, it is often convenient to report instead the square root of the variance, called the standard deviation. In the above examples, the standard deviation is $24,495 for situation (1) and $17,320 for situation (2). In many situations with a large number of different possible outcomes distributed around the expected value, a useful rule of thumb is that there is approximately a 95 percent chance that the actual outcome will lie within two standard deviations of the expected value.9

In actual risky situations with many possible states of the world and different sources of risk, other measures may be used to convey the degree of riskiness. Corporate bonds, for example, are “letter graded” by Moody’s and Standard & Poor’s for their differing reliabilities in terms of making the payments promised to bondholders (interest and return of principal). In early 2000, for example, a bond terminating in 2008 and rated A+ by Standard & Poor’s had an annual yield to maturity of 7.24 percent, but a bond of similar maturity with the lower rating of BBB− was yielding 7.70 percent. The higher yield is the compensation for holding the riskier bond.

During the twentieth century, the average annual rate of return on stocks exceeded that of bonds by 5–6 percent. Are people who invest in bonds simply foolish? Not necessarily, because the risk associated with investing in bonds is generally lower. Typically, stock prices fluctuate much more than bond prices. The longer the holding period, the more likely the realized return on stocks will exceed that of bonds (e.g., over any 10 years, the U.S. stock market virtually always outperforms the bond market). But within shorter periods (e.g., 2 years or less), stock performance may be much worse than that of bonds. Even though both investments are risky, within any interval the realized value of the bond investment is likely to be closer to its expected value than that of the stock. The higher average rate of return with stocks is the compensation for holding the riskier asset. In a later section we consider further the size of this differential. For now, we note that potential stock investors look at measures such as the standard deviation, Beta coefficient, and Sharpe ratio to gauge the magnitude of any individual stock’s risk.10

As we have seen, the expected utility model allows for a diversity of risk preferences. But the above evidence, of higher rates of return for holding riskier assets, suggests that empirically, risk-averse behavior is predominant. Probably the most important evidence of

9 This rule of thumb applies to situations that can be approximated statistically by the normal distribution.

10 Among stock investments, the future price of stock for some companies may be much more volatile or uncertain than others (e.g., a new biotechnology company as opposed to a long-established water utility). The Beta measure compares a stock’s price fluctuations relative to those of the Standard & Poor’s index of 500 stocks. The Sharpe ratio compares a stock’s return given its standard deviation to that of a nearly riskless asset such as a 3-month U.S. Treasury bill. An introduction to financial investing, the importance of risk, and some of the measures used to assess it are contained in Burton G. Malkiel, A Random Walk Down Wall Street: Including a Life-Cycle Guide to Personal Investing, 6th Ed. (New York: W. W. Norton & Company, 1996).

risk aversion is that virtually all individuals diversify their portfolios. That is, wealth is not stored all in one asset but is spread among various stocks, bonds, real estate, savings accounts, pension funds, and other assets. We show in the following section that such diversification has the primary purpose of reducing risk. An individual with risk-neutral or risk-loving preferences would not behave in this manner. As additional evidence, when individuals bear significant risk as a consequence of such factors as home ownership and automobile driving, they will usually offer others more than a fair price to bear the risk for them. Almost all insurance purchases are evidence of risk-averse behavior of this type, since the premiums are at least as great as the expected payout.11 For example, most individuals who drive purchase more than the minimum insurance required by state regulations.

If risk aversion is so predominant, how do we explain the commonly observed willingness of individuals to participate in risky situations in which the odds are against them (i.e., the net expected value is negative)? For example, many people seem to enjoy an evening in Las Vegas or Atlantic City or a day at the race track. Apart from professional gamblers, this behavior is probably best understood for its direct consumption value. That is, people may receive utility directly from the process of gambling similarly to the consumption of any other good or service. The primary motivation for participation need not be the expected indirect utility from the wealth change associated with gambling.

Some individuals may engage in unfair gambles because of limited preference for risk, even though they are primarily risk-averse. That is, it is possible to be risk-averse over some range of wealth while simultaneously being a risk lover over another range. In Figure 7-4a we illustrate one such possibility.12 An individual is currently at a wealth level of $50,000. However, this wealth includes a $25,000 home that has some possibility of being accidentally destroyed by fire or natural disaster. The individual also has a chance to invest $2000 in a new “dot-com” company that, if successful, will return $10,000. The preferences indicated by the utility function in Figure 7-4a suggest that the individual is risk-averse concerning the bulk of wealth already in his or her possession but is willing to gamble small amounts “against the odds” if the prospects of wealth gains are large enough. As drawn, the individual will purchase actuarially fair insurance and may invest in the “dot-com” company even if the investment is less than a fair gamble.13

11 We do not count employer-provided insurance as evidence of risk aversion. Private medical and dental insurance, when provided through an employer, is a form of nontaxable income. Roughly speaking, $50 of medical insurance provided in this way yields $50 of benefits to the individual in terms of expected medical services (ignoring risk costs). If the $50 is given to the individual as taxable income, after taxes there will be only $25 to $40 (depending on the individual’s tax bracket) left for consumption purposes. Thus, the favorable tax treatment can result in expected medical care benefits (apart from risk reduction) that exceed the individual’s cost of the insurance (in terms of foregone consumption of other things).

12 This was first suggested in M. Friedman and L. Savage, “The Utility Analysis of Choices Involving Risk,” Journal of Political Economy, 56, August 1948, pp. 279–304.

13 We have not specified the probability of success of the “dot-com” company. If it is a fair gamble, the individual will clearly invest because the expected utility exceeds the current utility. Therefore, the investment can be at least slightly less than fair and the individual will still invest. However, the probability of success may be so low that the expected utility falls below the current level, and then the individual will not invest despite the preference for risk.

Figure 7-4. Do utility functions adapt to different situations? (a) An individual may both insure and accept unfair gambles. (b) An individual’s utility function adapts to new wealth.

Although the shape of the utility function in Figure 7-4a does seem to explain some observed behavior, it raises other questions. In particular, suppose the “dot-com” investment wins in the above example and the individual now has a wealth of $58,000. To remain consistent with the behavioral idea of preserving wealth already possessed, the individual’s utility function now has to shift to something like that illustrated by the dashed extension in Figure 7-4b. In most economic theory we typically assume that the utility function is fixed. But this example suggests that, for some situations, it may be important to consider the possibility of an adaptive utility function. (That is, the utility function depends on which state occurs.) The above discussion is intended to suggest that, on balance, risk in our society is considered a social cost and not a social benefit. Thus, when public policy alternatives imply differing amounts of risk, those with more risk are disfavored unless other factors (e.g., sufficiently higher expected values) work in their favor. In the optional section later on in this chapter, we present an empirical method used to assess the risk costs of a change in policy (involving health insurance).
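A small numerical sketch can reproduce the qualitative behavior that Figure 7-4a depicts. All functional forms and probabilities below are invented for illustration (the book does not specify them): the utility is concave up to current wealth of $50,000 and mildly convex above it.

```python
import math

# Stylized Friedman-Savage-type utility: concave (risk-averse) up to
# current wealth W0 = $50,000, mildly convex (risk-loving) above it.
# Functional form and all parameters are illustrative assumptions.
W0 = 50_000
A = 0.5 / math.sqrt(W0)   # matches slopes at W0 so the curve is smooth

def u(w):
    if w <= W0:
        return math.sqrt(w)
    return math.sqrt(W0) + A * (w - W0) + 2e-7 * (w - W0) ** 2

def eu(outcomes, probs):
    return sum(p * u(w) for p, w in zip(probs, outcomes))

# 1. Insure the $25,000 home against an assumed .01 chance of total loss
#    at the actuarially fair premium of $250: insurance is preferred.
print(eu([W0 - 250], [1.0]) > eu([W0, W0 - 25_000], [0.99, 0.01]))   # True

# 2. Accept a slightly unfair $2,000 "dot-com" bet paying $10,000 with
#    probability .19 (expected net value -$100): the bet is still taken.
print(eu([W0 + 8_000, W0 - 2_000], [0.19, 0.81]) > u(W0))            # True
```

With these assumed numbers, the same individual buys actuarially fair fire insurance yet accepts a gamble whose expected net value is negative, matching the concave-then-convex story.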

Risk Control and Risk-Shifting Mechanisms

Recognizing that risk is generally considered costly, we consider in this section mechanisms used to affect its costs. Two fundamental mechanisms used to reduce risk costs are risk-pooling and risk-spreading. We discuss them below and then turn to a discussion of public policies that affect risk costs.

Risk-Pooling and Risk-Spreading

Risk-pooling occurs when a group of individuals, each facing a risk that is independent of the risks faced by the others, agree to share any losses (or gains) among themselves. We develop the idea of risk-pooling informally here and provide more detail in the chapter’s appendix. Imagine that each of many households has $5000 worth of property that is vulnerable to theft. Furthermore, suppose that each independently faces a .2 probability that the property will be stolen.14 Let us consider what happens if an insurance company offers each household full-coverage insurance at the fair entry price of $1000, called an actuarially fair premium in the insurance industry. (That is, the premium equals the expected loss.) Unlike each household, the insurance company does not care whose property is stolen; its concern is that the total premiums it collects will (at least) cover the total cost of replacing all the property that is stolen.

14 Independence implies that the probability of theft in one household is unrelated to whether theft has occurred in any other household.

The statistical law of large numbers implies that, as the number of identical but independent random events increases, the likelihood that the actual average result will be close to the expected result increases. This is the same principle that applies to coin flips: The larger the number of flips, the more likely it is that the total proportion of heads will be close to 1/2. For the insurance company it becomes a virtual certainty that approximately 20 percent of the insured households will have claims and that therefore it will face total claims approximately equal to the total premiums collected. Thus by shifting the risk to the insurance company, where it is pooled, the risk cost dissipates. As long as the pool is large enough, the risk cost to the insurance company becomes negligible and premiums equal to expected losses will be sufficient to cover the claims.

Let us give a simple example to illustrate how risk-pooling reduces risk costs. Suppose we consider two identical risk-averse individuals. Each has $50,000 in wealth, $5000 of which is subject to theft; each independently faces a probability of theft equal to .2. Initially, the two bear the risks independently, or they self-insure. There are only two possible outcomes: $50,000 with probability .8, and $45,000 with probability .2. The expected wealth of each is $49,000 [= .2($45,000) + .8($50,000)].

Now suppose that these two individuals agree to pool their risk, so that any losses from theft will be evenly divided between them. Then there are three possible outcomes or states:

1. Neither individual has a loss from theft (W = $50,000).
2. Both individuals have losses from theft (W = $45,000).
3. One individual has a loss and the other does not (W = $47,500).

Under this simple pooling arrangement, the only way they end up with $45,000 each is if both suffer theft losses. However, the probability of this occurring is only .04 = .2(.2). By contrast, this probability is .2 with self-insurance. Of course they are also less likely to end up with $50,000 each, since this only happens when neither suffers a theft loss. The probability of neither having a loss is .64 = .8(.8); this contrasts with the .8 chance under self-insurance. The remaining possible outcome, that each ends up with $47,500, occurs with .32 probability (the sum of probabilities for the three outcomes must equal 1).

As a consequence of this pooling arrangement, the probabilities of the extreme outcomes decline while the probability of an outcome in the middle increases. The expected wealth, however, remains the same:

E(W) = .64($50,000) + .04($45,000) + .32($47,500) = $49,000

With expected wealth the same and with the likelihood of ending up close to it greater, the risk is reduced. The standard deviation, for example, is reduced from $2000 under self-insurance to only $1414 with the two-person pool.15 This illustrates how risk-pooling can reduce risk without affecting expected value.

15 To the nearest dollar, $2000 is the square root of the variance of $4 million = .8(50,000 − 49,000)² + .2(45,000 − 49,000)², and $1414 is the square root of $2 million = .64(50,000 − 49,000)² + .32(47,500 − 49,000)² + .04(45,000 − 49,000)².
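The two-person pool can be checked by enumerating the four possible theft outcomes; a minimal sketch:

```python
from itertools import product

# Two-person theft pool from the text: each person holds $50,000, with
# $5,000 independently stolen with probability .2; losses split evenly.
p_theft = 0.2
dist = {}                                  # per-person wealth -> probability
for a, b in product([0, 1], repeat=2):     # 1 = that household's property is stolen
    prob = (p_theft if a else 1 - p_theft) * (p_theft if b else 1 - p_theft)
    w = 50_000 - 5_000 * (a + b) / 2       # each bears half of total losses
    dist[w] = dist.get(w, 0) + prob

ew = sum(p * w for w, p in dist.items())
sd = sum(p * (w - ew) ** 2 for w, p in dist.items()) ** 0.5
print(dist)       # {50000.0: 0.64, 47500.0: 0.32, 45000.0: 0.04} (approximately)
print(ew, sd)     # 49000.0  1414.21...  (vs. sd of $2,000 when self-insuring)
```

Enlarging the pool pushes the standard deviation down further (for n independent members it falls in proportion to 1/√n), which is the law-of-large-numbers logic described above.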

If we found more individuals to join this pool, we could reduce the risk costs even further. As the pool expands, the likelihood of outcomes near the expected value increases while the likelihood of extreme outcomes decreases. This is precisely the concept that underlies insurance: a large pool of people who agree to divide any losses among themselves. The insurance company in this concept is the intermediary: It organizes the pool, and it incurs the transaction costs of keeping track of membership and losses of insured property and making the necessary monetary transfers. A relatively simple way to do all this is to collect the expected loss plus a prorated share of the transaction costs from each member at the start and then pay out the losses as they arise. As the number of people in the pool gets large, the risk cost and often the transaction costs (both prorated) become smaller and in the limit may be negligible. Then the premium approaches the actuarially fair level (i.e., the expected loss). When insurance is available at an actuarially fair premium, risk-averse individuals will of course purchase it.

But for some items it may be that the transaction costs of operating the pool do not become negligible. Then the premium charged will be significantly higher than the expected loss to individuals, and many, depending on their degrees of risk aversion, will self-insure. For example, low-cost items are rarely insured for the reason that the transaction costs of insuring them are large relative to the expected loss. Factors included in the transaction costs include determining the value of an item and the probability of its loss, verifying that an actual loss has occurred, and making sure that the loss did not arise through the negligence of the owner or someone else who might be held responsible. An interesting example of high transaction costs involves automobile insurance. When an accident involving more than one car occurs, substantial legal expenses are often incurred to determine if one party is at fault. (If there were no legal “fault,” all premiums might be lower.)16

Another reason why risk-averse individuals may self-insure is government tax policy. Uninsured casualty and theft losses (above a minimum) are deductible when calculated for federal income tax purposes, provided the taxpayer elects to itemize rather than take the standard deduction. If an individual’s marginal tax bracket is 31 percent, a $1 deductible loss from theft reduces the tax bill by $0.31. With private insurance, the $1.00 is recovered, but the $0.31 in tax savings is lost. Thus the net expected payoff from insurance is only $0.69, which is substantially below the $1 fair premium. In other words, this deduction acts as implicit government insurance, with benefits that increase in proportion to income, and it encourages individuals to self-insure. We will explore other aspects of insurance later in the chapter. Now, however, let us return to the main point: to understand how risk-pooling and risk-spreading reduce risk costs.

We have shown that risk-pooling is the essence of insurance. But it is just as important to recognize that the same principle operates in other institutional forms. For example,

16 For an interesting theoretical analysis of this see Guido Calabresi, The Cost of Accidents: A Legal and Economic Analysis (New Haven, Conn.: Yale University Press, 1970). Some of the empirical issues and practical difficulties with the idea are explained in R. Cooter and T. Ulen, Law and Economics (Glenview, Ill.: Scott, Foresman and Company, 1988), pp. 463–472.

risk-pooling is a factor when firms in unrelated businesses merge to become conglomerates. By pooling the independent risks that each firm takes, the likelihood that the average return on investments will be close to the expected return is greater. This has the effect of lowering the cost to the firm of obtaining investment funds (because it is “safer”). Of course, this advantage must be weighed against the difficulty of managing unrelated business ventures.

The general principle of diversification of assets can be seen as an example of risk-pooling. That is, a portfolio of diverse investments that an individual holds is a pool much like the pool of theft insurance policies that an insurance company holds. Compare the expected utility of a risk-averse individual under two alternative arrangements: one in which the individual invests in one firm and the other in which the individual invests the same total amount spread equally among ten different firms facing similar but independent risks. (Note that firms in the same industry would not fully meet this criterion. While the success or failure of each is independent to some degree, presumably some of the uncertainty is due to factors that affect the industry as a whole.) The same phenomenon that we described above applies here. It is like comparing ten coin flips each with the same relatively small bet against one flip (of the same coin) with stakes ten times greater. The expected value of both situations is the same (and for stock investments, presumably positive). However, the probability that the realized outcome will be close to the expected value increases with the number of flips (the standard deviation declines), and thus the risk is reduced. Any risk-averse investor will prefer smaller, more numerous independent investments of total expected value equal to that of one larger investment. A numerical example is given in the appendix.

At this point let us illustrate the advantage of risk-spreading. Risk-spreading occurs when different individuals share the returns from one risky situation. An obvious example of this is the diversification of firm ownership through the stock market. Through the issuance of common stock a single firm can allow many individuals to bear only a small portion of the total risk, and the sum of the risk cost that each owner faces is considerably lower than if there were a sole owner. This reduction in total risk cost is the gain from risk-spreading.

The easiest way to see the risk-spreading advantage is to make use of the result that we mentioned earlier (and that is reviewed in the appendix): the risk cost is approximately proportional to the variance. We will show that risk-spreading reduces the variance more than proportionately, and thus reduces the risk cost. Suppose that one large investment is divided into ten equal-sized smaller investments. The expected value is the same either way. Now imagine that there are ten individuals with identical risk-averse preferences, each holding one of the small investments. If Xi represents the ith outcome of the large investment, then each of the ten individuals would receive Xi/10 in state i. The variance that each experiences is then

Var(X/10) = Σi πi [E(Xi/10) − Xi/10]²

Factoring out the 1/10, this becomes

Var(X/10) = (1/10)² Σi πi [E(Xi) − Xi]² = (1/100)Var(X)

That is, when spread, each individual receives 1/10 of the expected value but only 1/100 of the original variance—far less than a proportionate reduction of 1/10. Since the risk cost is approximately proportional to the variance, the risk cost of the small investment is substantially less than 1/10 that of the original one-owner investment. Put differently, ten times the spread investment’s risk cost (the sum over all ten holders) is significantly less than the one-owner risk cost. Or put still differently, the group of ten investors would be willing to pay more than any single one of them to own the one risky investment.

The risk-spreading strategy works as a consequence of diminishing marginal utility of wealth. We have seen that a risk-averse individual always prefers smaller gambles if expected value is held constant. For example, the risk-averse person prefers a fair coin flip with $1 at stake to the same coin flip with $2 at stake. The larger gamble represents not only a larger total but also a larger marginal risk cost. The expected utility gain from winning the second dollar is less than that from winning the first, and the expected utility loss from losing a second dollar exceeds that of losing the first—simply because of the diminishing marginal utility of wealth. Thus, the marginal risk cost increases as the stakes increase. Two similar individuals each bearing half the risk from one risky event have lower total risk costs than one individual facing the risk alone.

Another institution that facilitates risk-spreading is the futures market. For example, a crop grower may not wish to bear the full risk of planting a crop now to receive an uncertain return when it ripens next year. In the futures market the farmer can sell part of the future crop at a price specified now. Thus the crop grower, in selling now, gives up the possible gains and losses if next year’s price turns out to be different. In return, certainty of income is achieved by the sale of the futures contract.

All these examples of insurance, futures contracts, and stock shares can be thought of as contingent commodities: those whose values depend upon which state of the world occurs. The theft insurance contract pays nothing if the state turns out to be no theft and pays the value of the insured article if there is theft. The buyer of crop futures faces a loss if next year’s price turns out to be low and a gain if it is high. A share of common stock can be thought of as a claim on future value of the firm, and its value depends on which states of profitability arise.17

In theory, markets for a wide variety of contingent commodities are required to ensure optimal resource allocation in the presence of uncertainty. Yet we actually have very few of those markets developed. I might like to insure the value of my income against inflation, but there are as yet no futures contracts based on the level of the consumer price index. There are many reasons why such markets have not developed, but one reason relevant to policy analysis is that collective action may be required to create them, and we simply have not thought enough about how to do it. However, because uncertainty is so pervasive and costly, it is worth a great deal of time and effort to create efficient mechanisms for reducing risk costs.

17 A nice survey of uncertainty and contingent commodities is contained in J. Hirshleifer and J. G. Riley, The Analytics of Uncertainty and Information (Cambridge: Cambridge University Press, 1992).
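A quick numerical check of the risk-spreading algebra above (each 1/10 share carries only 1/100 of the variance) may be useful. The payoffs and probabilities below are invented for illustration; only the 1/n² scaling matters.

```python
# Dividing one risky investment among n identical holders cuts each
# holder's variance by a factor of n**2, not n. Hypothetical payoffs.
outcomes = [-100_000, 20_000, 300_000]   # made-up project payoffs
probs = [0.3, 0.5, 0.2]

def variance(xs, ps):
    ev = sum(p * x for p, x in zip(ps, xs))
    return sum(p * (x - ev) ** 2 for p, x in zip(ps, xs))

n = 10
whole = variance(outcomes, probs)
share = variance([x / n for x in outcomes], probs)
print(whole / share)   # ~100 = n**2: each 1/10 share bears 1/100 of the variance
```

Since the risk cost is approximately proportional to the variance, the ten holders together bear roughly a tenth of the sole owner’s risk cost, which is the gain from spreading.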

Policy Aspects of Risk-Shifting and Risk Control

In this section we give some brief examples of policies that have important effects on risk costs and their distribution. To do so, it is useful first to mention one important dimension of the relation between risk and resource allocation that we have so far ignored: Risk costs affect the amount of resources allocated to risk-taking situations. In the pooling and spreading illustrations we accepted the total amount of risk as a given: The risk-creating events would be undertaken regardless of the risk-bearing arrangements. That is, the jewelry and other “unsafe” property would be bought whether or not insurance was available, and the risky high-technology firm would operate whether or not it was owned by a partnership. Those simplifying assumptions were made in order to emphasize this point: The risk costs of random events are not inherently determined by the events themselves; they depend importantly on the institutional arrangements that allow the risks to be moved from one economic agent to another. To be efficient, risks should be allocated in order to minimize their costs; otherwise, there will be room for deals like the ones illustrated.

Institutions such as insurance and the stock market serve to reduce the risk costs from the initial allocation by allowing them to be pooled and spread. However, another important aspect of risk-cost-reducing mechanisms is that they increase resource allocation to the risk-creating events. If no theft insurance were available, there would be less demand at any given prices for goods subject to theft. If firms were not allowed to spread their risks through the issuance of stock, the size of firms might be uneconomically restricted (e.g., that might prevent taking advantage of certain economies of scale). If people cannot insure against inflation, they will allocate fewer resources to activities whose value depends upon it (most notably investment). The policy examples mentioned below are illustrative of other institutional ways in which risk costs are affected. These policies, like the pooling and spreading mechanisms, can have important effects on resource allocation. We note some of the effects, but our primary emphasis continues to be on increasing awareness of social mechanisms used to respond to risk.

An interesting risk-shifting mechanism created through public policy is limited corporate liability. That is, a corporation may be held liable only for amounts up to the net worth of the assets it owns. The assets of the corporation’s owners, the shareholders, are not at risk. This limitation does not apply to unincorporated businesses; a partner in an unincorporated business, for example, could be forced to sell his or her home to pay business debts. The limit on liability provides a strong incentive for firms to incorporate.

By limiting the liability of firm owners, part of the burden of the risks taken by the firm is shifted to others. In particular circumstances this may foster or hinder the efficient allocation of resources. For example, it may encourage new product development that has positive expected return but high attendant risk, depending on sales. Socially, such projects are desirable: Even though some may not work out, the average returns are positive and the aggregate risk from many such projects is negligible from the social point of view. However, those with the new idea may not think it worth their time if the profits must be spread through increased selling of stock.
Limited liability combined with debt-financing (borrow-
ing) may provide the right mixture of expected private return and risk to make the undertaking privately desirable. On the other hand, liability that is too limited may encourage excessive risk-taking. Suppose a firm considers taking a risk, such as building a nuclear reactor, that may have catastrophic consequences for others. Decisions of that kind are made in light of the expected benefits and costs to the firm. The firm may know that there is some nontrivial probability of a catastrophic accident, but in that event it cannot be held liable for more than its own assets. Thus, the expected loss in the firm’s calculation is less than the actual expected social loss, and the firm may cheerfully undertake risks that have negative expected social returns.18 This will be relevant to our discussion later in the chapter of the savings and loan crisis of the 1980s. In addition to risk-shifting mechanisms, risk creation can be and often is regulated through a variety of other controls. To take one simple example, legalized gambling on horse races is so designed that there is never any risk in the aggregate. Only individual bettors take risks. The track keeps a fixed percentage of the total funds bet and distributes the balance to the winning bettors in accordance with standard formulas. Thus there is no risk to the track of betting losses.19 Similarly, the public policy of raising revenues through creating and selling risk in the form of state lottery systems is another example. These systems, which have negative expected value of about 50 cents per dollar, depend for their sales upon the less educated members of the public who are the primary buyers of lottery tickets.20 Another form of aggregate risk control that involves public policy concerns crime and deterrence. In our earlier example of theft insurance we took the probability of theft as given exogenously. But it may well be that the resources allocated to criminal justice activities in any area have an influence on the probability of theft. That is, the degree of street lighting, the frequency of police patrol, and the extent of citizen cooperation (as in the prompt reporting of suspected crimes) may influence the likelihood that potential thieves will actually attempt thefts.21 As mentioned in the chapter introduction, individuals may influence the probability of their own victimization by decisions such as to install burglar alarms to protect households

18 The problem in this example is compounded by the Price-Anderson Act, which further limits the liability from nuclear reactor accidents. See J. Dubin and G. Rothwell, “Safety at Nuclear Power Plants: Economic Incentives under the Price-Anderson Act and State Regulatory Commissions,” Social Science Journal, 26, No. 3, July 1989, pp. 303–311.

19 This is a slight exaggeration. In most states the tracks are required to make payoffs of at least $0.05 on each winning $1.00. Occasionally, when a heavy favorite wins, there are not enough losing dollars to make the minimum payment and the track owners must make up the difference. That does not happen very often, however.

20 See Charles Clotfelter and Philip J. Cook, “On the Economics of State Lotteries,” Journal of Economic Perspectives, 4, No. 4, Autumn 1990, pp. 105–119.

21 See, for example, D. Black and D. Nagin, “Do Right-to-Carry Laws Deter Violent Crime?,” Journal of Legal Studies, 27, No. 1, January 1998, pp. 209–219; Hope Corman and H. Naci Mocan, “A Time-Series Analysis of Crime, Deterrence, and Drug Abuse in New York City,” American Economic Review, 90, No. 3, June 2000, pp. 584–604; and H. Tauchen, A. Witte, and H. Griesinger, “Criminal Deterrence: Revisiting the Issue with a Birth Cohort,” Review of Economics and Statistics, 76, No. 3, August 1994, pp. 399–412.

and to take taxis rather than walk to reduce exposure to street assault. Thus the decisions about how best to reduce risks are interrelated. That is, the least-cost way of achieving a given reduction in risk will typically involve a combination of public expenditure, private protective expenditure, and insurance.22 In the example of health insurance to be presented, we will analyze some of the problems that arise from such interrelations.

Another policy form of risk control is quality certification. Imagine, for example, the medical care profession without licensing requirements. Anyone who wanted to be a doctor would simply hang out a shingle and offer medical services. An individual seeking services would then be very uncertain about the quality of care that might be received. Society imposes licensing requirements that demand a demonstration of at least some medical training. That reduces the consumer’s uncertainty in a particular way. It leaves some uncertainty about the quality of the care received, but it probably does have the effect of raising the average level of care by eliminating those who would be least competent to provide it.

Whether such licensing requirements have benefits greater than costs is another issue. Milton Friedman argues that they are inefficient.23 Licensing requirements create a barrier to entry, which gives the suppliers of service some monopoly power and leads to higher prices and lower quantities than in an unregulated market. He further argues that competitive forces in an unregulated market would drive out the least competent suppliers, so that only competent ones would continue to have patients seeking their services. Kenneth Arrow, on the other hand, suggests that the uncertainty costs of an unregulated regime may be great.24 Because the primary method of consumer learning about quality is trial and error and because each consumer purchases medical services infrequently and often for different reasons each time, the competitive mechanism may be a very imperfect safeguard. Thus, substantial incompetence could persist, which would turn each decision to seek medical services into a risky lottery from the uninformed consumer’s perspective.

These theoretical arguments do not resolve what is essentially an empirical question: whether the benefits of the reduction of uncertainty outweigh the costs of the supply restrictions.25 Nor does actual policy debate about the medical care industry consider this question in such a broad form. The licensing issues that do receive serious consideration are more narrowly defined. For example, should licenses be required for the performance of specific kinds of surgery? Or should the medical knowledge of licensed doctors be periodically reexamined to ensure that physicians keep their knowledge current? Or should certain kinds of minor medical treatments be delicensed, or relicensed, to include paramedics and nurses as well as physicians?

These same issues can be raised with respect to all occupational licensure requirements: automobile mechanics, real estate agents, dentists, and teachers. They can also be applied

22 This is discussed generally in I. Ehrlich and G. Becker, “Market Insurance, Self-Insurance, and Self-Protection,” Journal of Political Economy, 80, No. 4, July/August 1972, pp. 623–648.

23 See M. Friedman, Capitalism and Freedom (Chicago: University of Chicago Press, 1962), Chapter IX, pp. 137–160.

24 See Arrow, Essays in the Theory of Risk-Bearing, Chapter 8.
25 For an example of an empirical study of the issue, see W. D. White, “The Impact of Occupational Licensure of Clinical Laboratory Personnel,” Journal of Human Resources, 13, No. 1, Spring 1978, pp. 91–102.

to safety requirements for products such as paint and microwave ovens. In each case it is not sufficient to analyze the number of substandard transactions that occur. One must remember to consider, for example, the pure risk costs that consumers bear because of the possibility of undesirable outcomes.26 The chapter’s appendix considers some methods for placing dollar values on the risk cost. The above is a point worth emphasizing because ignoring it is a common error. Standards might raise the expected value of a service in a clear way. For example, suppose that certain automobile safety standards reduce automobile accidents. The quantitative reduction in accidents is a readily comprehended benefit, but the standard achieves far more: For all of us who drive, the pure risk cost of driving has been reduced. This latter benefit can be substantial. Another issue to consider generally is that alternative policies can vary greatly in the extent of their coverage. A specific service can be restricted by licensure to a broad or narrow range of suppliers. Certification can be used as an alternative to licensure; it need not be required to supply the service (e.g., an accountant need not be a certified public accountant). To get a better understanding of the analytic issues involved in considering policies such as these, we must broaden our own understanding about individual behavior under uncertainty. Until now, we have simply examined the idea that individuals behave so as to maximize expected utility; however, the extent to which actual behavior may be approximated by this model is disputed.

Alternative Models of Individual Behavior under Uncertainty

The Slumlord’s Dilemma and Strategic Behavior

Most of the examples of uncertainty that we have mentioned so far are of the type Knight called “risky,” that is, the probabilities of the different possible states of the world are known. However, in many uncertain situations that arise the probabilities are not known. One class of these situations may be referred to as strategic games, such as chess playing or even nuclear weapons strategies. Each of the players in the game chooses some strategy from the set of all possible strategies in the hope of achieving a desired outcome. The term “player” is roughly synonymous with “economic agent,” in that the player may be an individual, a firm, a governmental unit, or any other decision-making entity. In the nuclear weapons case, countries are viewed as one-minded “persons” who attempt to deter nuclear attacks by choosing defensive capabilities that guarantee the attacker’s destruction. Game theory is the scholarly attempt to understand strategic games.27

26 This is sometimes referred to as the ex ante, ex post distinction. Ex post one can see the outcomes, but the risk is gone by that time. Social costs include the ex ante risk costs.

27 For classic readings on game theory and strategic reasoning, see R. D. Luce and H. Raiffa, Games and Decisions (New York: John Wiley & Sons, Inc., 1957), and Thomas C. Schelling, The Strategy of Conflict (Cambridge: Harvard University Press, 1960). Modern treatments include Robert Gibbons, Game Theory for Applied Economists (Princeton, N.J.: Princeton University Press, 1992), and H. Scott Bierman and Luis Fernandez, Game Theory with Economic Applications (Reading, Mass.: Addison-Wesley Publishing Company, Inc., 1993).

Figure 7-5. The Slumlord’s Dilemma: The numbers ($A, $B) are net profits to Larry and Sally, respectively.

An interesting game of this type is known as the Slumlord’s Dilemma.28 Imagine that two slum owners, Slumlady Sally and Slumlord Larry, have adjacent tenements. Each owner knows the following: If both invest in improving their tenements, they will have the nicest low-rent apartments in the city and will earn high returns on their investments (say an extra profit of $5000 each). On the other hand, if, say, Slumlord Larry invests but Slumlady Sally does not, then Larry will lose his shirt but Sally will make out like a bandit. The latter may happen because of externalities. That is, Larry will realize only a slight increase in the demand for his apartments because of a negative externality: His apartments are right next door to a slum. The increased rent is more than offset by the renovation costs, and Larry finds his net profit decreased by $4000. But Sally now finds her apartments in much greater demand, without having invested a penny, because of an external benefit: They are now in a nice neighborhood. Her profits go up by $6000. The opposite would be true if Sally were the only one to invest. The situation is like that shown in the matrix in Figure 7-5.

The question is, what will they do? Slumlord Larry might reason as follows: “If Sally invests, then I am better off not to invest ($6000 > $5000). If Sally does not invest, then I am better off not to invest ($0 > −$4000). Since I am better off not to invest in either case, I will not invest.” Thus for Larry, not investing is a dominant strategy: It gives the best outcome in each possible state of the world. Sally goes through the same reasoning. She considers what will make her best off, and she concludes that the strategy of not investing is dominant for her. Therefore, Sally and Larry end up with no change in profits, but they have obviously missed a golden opportunity to increase their profits by $5000 each. Why does this happen? Why do they not simply cooperate with each other and both invest?

28 This version of the Prisoner’s Dilemma was first proposed by O. Davis and A. Whinston, “Externalities, Welfare, and the Theory of Games,” Journal of Political Economy, 70, June 1962, pp. 241–262.
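The dominance reasoning can be checked mechanically from the payoff matrix of Figure 7-5; a minimal sketch:

```python
# Payoff matrix from Figure 7-5, entries are (Larry, Sally) profit changes.
INVEST, NOT_INVEST = 0, 1
payoffs = {
    (INVEST, INVEST): (5_000, 5_000),
    (INVEST, NOT_INVEST): (-4_000, 6_000),   # Larry invests alone
    (NOT_INVEST, INVEST): (6_000, -4_000),   # Sally invests alone
    (NOT_INVEST, NOT_INVEST): (0, 0),
}

def larrys_best_reply(sally):
    return max((INVEST, NOT_INVEST), key=lambda mine: payoffs[(mine, sally)][0])

# Not investing is Larry's best reply to either choice: a dominant strategy.
print([larrys_best_reply(s) for s in (INVEST, NOT_INVEST)])   # [1, 1]
```

By the symmetry of the payoffs, the same check shows that not investing is dominant for Sally as well, so the game settles at (Not invest, Not invest) even though (Invest, Invest) would pay each owner $5000 more.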

The problem is that each owner has an incentive to be misleading, and the other knows it. If you were considering investing, say, $10,000 in renovating an urban slum but your success depended on what happened next door, would you trust that slumlord? That is, each player is uncertain about whether the other will really invest even if each agrees to do so. Imagine a more realistic example involving ten to twenty tenements owned by different individuals in which success depends on each owner making the investment. The inability to trust one another can lead to the uneconomic perpetuation of slums. How can this problem be solved? Like all problems that arise from external effects, the solution involves internalizing the effects in some way. But how can the uncertainty owing to lack of trust be overcome? If there were only one owner of the adjacent tenements, then it is clear that the investment would be made. It is possible that one of the two original owners could be persuaded to sell to the other (there is, of course, room for such a deal). However, the more owners that must be coordinated, the smaller the likelihood that one owner will be able to buy out all the others. In these cases the government may wish to exercise its power of eminent domain and buy up all of the property. Then it can redevelop the property as a whole either itself or by selling it to a developer who will do so. The process is more commonly referred to as urban renewal. Note that, in this game, no one is maximizing expected utility. No player knows the probability of each outcome. The analogous situation can be seen to arise in many different circumstances. When the game was first described, it was referred to as the Prisoner’s Dilemma. A sheriff arrests two suspects of a crime, separates them, and urges each to confess. Each prisoner has a choice of two strategies: confess or do not confess. If neither confesses, each will get off with a light sentence for a minor offense. If both confess, each will get a medium-length sentence. But if one confesses and the other does not, the one who confesses will be let off on probation and the other will be sent away for life. Not trusting each other, each suspect is led by these incentives to confess. This game is also thought to have some application in the important area of international trade policy. The most conventional economic analysis strongly supports the efficiency of free trade (based on logic quite analogous to that of allowing free trade among individual consumers in an economy). However, almost all countries make fairly extensive use of protective tariffs and export subsidies, and thus do not have free trade. Why? There may be many reasons of course, such as the political strength of a protected industry in one country (a good candidate particularly when the losses to that country’s consumers outweigh the gains to its producers). But here we focus on an explanation that is rational in the sense of trying to achieve net benefits for the country. Assume that the free trade solution maximizes net benefits in the world. If one (and only one) country adopts an import tariff (or an export subsidy), normally its consumers will lose but its producers and taxpayers will gain. In some circumstances, the gains will outweigh the losses and the country as a whole will be better off. However, other countries are harmed by the import tariff, and they may reduce their losses by imposing retaliatory tariffs of their own. 
The situation can easily be that of the Slumlord’s Dilemma: no tariffs producing the highest welfare for all, but with individual countries having incentive for tariff adoption

Figure 7-6. Larry’s strategy depends on whether nature or a person chooses state A or state B.

and a net result of lower welfare for each country. Such a situation would of course provide a strong rationale for cooperative agreements to reduce or eliminate tariffs, such as the General Agreement on Tariffs and Trade (GATT).29 As a last example of this game consider what happens when a group of people go to a restaurant and agree in advance to divide the bill evenly. Then all participants have incentives to order more expensive meals than if each paid for his or her own. You may order Chateaubriand, since your share of the bill will go up by only a fraction of the cost of the expensive steak. Furthermore, your bill is going to be big anyway, because you have no control over the costs that will arise from the orders of the others. Thus the ordering strategy of choosing expensive dishes dominates the strategy of ordering normally, and the whole group ends up with a feast and a bill that nobody except the restaurant owner thinks is worth it.30 We will see shortly that this identical dilemma is created by health insurance coverage and poses one of the most serious policy problems in the provision of health care. To highlight the differences between games against persons and those against “nature,” we assume a new set of possible outcomes for Larry in Figure 7-6. In a moment, we will discuss the “opponent.” But first let us point out that Larry no longer has a dominant strategy. He is better off investing if state A occurs and not investing if state B occurs. Consider how Larry might reason if state A or state B is to be consciously chosen by another person, the payoffs to the other person are identical with those to Larry, but the two people are not allowed to communicate.31 Reasoning strategically, Larry will realize

29 This example is a very simple one, and actual game-theoretic analysis of trade strategies and their implications is an important research area. See, for example, Frederick W. Mayer, “Domestic Politics and the Strategy of International Trade,” Journal of Policy Analysis and Management, 10, No. 2, Spring 1991, pp. 222–246, and more generally Paul Krugman and Alasdair Smith, eds., Empirical Studies of Strategic Trade Policy (Chicago: University of Chicago Press, 1994).

30 Not all people who agree to such bill-splitting think the results are suboptimal. Some may think the fun of the process is worth it; others may have a group relationship in which trust keeps all ordering normally. But the beauty of the example lies in recognizing the pressure that comes from the changed incentives and knowing full well that many people do become victims of this dilemma.

31 This may occur when Larry and the other person each represent one of two firms in an oligopoly market and they are prevented by law from conspiring to act jointly as a monopolist would.

that the other person has a dominant strategy: State A is superior to state B no matter which strategy Larry chooses. The other person will choose A, and therefore Larry will decide to invest.32

Now let us change the opponent from a person to nature: State A or state B will be selected after Larry’s choice of strategy and without regard to it. For example, state B could be an earthquake that damages Larry’s building, and state A could be no earthquake. Or we could have states A and B determined by human events not directly related to Larry; for example, state B could represent a strike by public employees. The strike would interfere with refuse pickup whether or not Larry invested. That would impose greater cleanup costs on Larry and would also delay rental of the renovated units until city building inspectors returned to work. How would Larry behave in those situations?

One decision rule proposed for these situations is the maximin rule: Choose the strategy that maximizes the payoff in the worst possible state of the world. Since the worst payoff is $0 if Larry does not invest and −$1000 if he does, the strategy of not investing maximizes the minimum that Larry could experience. The maximin strategy is one of extreme risk aversion, which does not depend in any way on Larry’s subjective estimate of the probability that each state will occur. Suppose, for example, that Larry thinks the probability of no earthquake or no strike is .9. Then his expected profit from investing is

E(Π) = .9($5000) + .1(−$1000) = $4400

His expected profit if he does not invest is substantially lower:

E(Π) = .9($2000) + .1($0) = $1800

As we have already seen, even a risk-averse expected utility maximizer could easily prefer the strategy of investing.

Let us consider one other strategy that might be available: the strategic decision to gather more information. We take a simple example. Suppose Larry, with time and effort, could find out with certainty whether the public employees will go on strike. How much would that information be worth? To an expected utility maximizer, the utility value of perfect information is the difference between the expected utility of the current situation (with

32 A useful concept to understand this outcome is that of the Nash equilibrium: an outcome from which no player’s reward can be increased by unilateral action. In this example, neither player can single-handedly increase the $5000 reward by a change in strategy, so it is a Nash equilibrium. While the Nash equilibrium is the “best” outcome in this game, it need not be: The “Not invest, Not invest” outcome of the Slumlord’s Dilemma is also a Nash equilibrium! Furthermore, while these two games each have exactly one Nash equilibrium, there are strategic games that have multiple Nash equilibria, and some that have none. Nevertheless, the usefulness of the concept is that it identifies good candidates for the game’s outcome when noncooperative strategies are likely to prevail (owing to difficulties in securing binding cooperative agreements).


imperfect knowledge) and the expected utility of being able to choose the best strategy in whatever state arises. The monetary value of that information is simply the difference between the certain-wealth equivalents of the two expected utility levels. To illustrate, suppose Larry currently has $45,000 in wealth and the following risk-averse utility function:

U = −e^(−0.0002W)

With imperfect knowledge, Larry will prefer to invest. The expected utility from this strategy is33

E(U) = .9U($50,000) + .1U($44,000) = −0.0000559332

The certain-wealth equivalent of this is $48,956.76. If Larry finds out whether there will be a strike, he can invest or not invest accordingly. The expected utility value from finding out is then

E(U) = .9U($50,000) + .1U($45,000) = −0.0000532009

The certain-wealth equivalent of this is $49,207.17. In other words, Larry should be willing to spend up to $250.41 (= $49,207.17 − $48,956.76) to find out with certainty whether there will be a strike.

To follow the expected utility-maximizing strategy in the above example, Larry must have subjective perceptions of the probability of each state. Whether individuals have these perceptions, how the perceptions are formed, and whether the perceptions are acted upon are critical issues that we have not yet discussed. We will turn to them in the following section.
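These certainty-equivalent calculations are easy to verify. The following minimal Python sketch is ours, not part of the original analysis; it assumes only the payoffs, the .9 probability, and the utility function given above:

```python
import math

A = 0.0002  # coefficient in Larry's utility function U(W) = -e^(-0.0002W)

def u(wealth):
    return -math.exp(-A * wealth)

def certain_wealth_equivalent(expected_utility):
    # Invert the utility function: W = -ln(-EU)/A
    return -math.log(-expected_utility) / A

p = 0.9        # probability of state A (no strike)
w = 45_000     # Larry's initial wealth

# Invest: +$5000 in state A, -$1000 in state B
eu_invest = p * u(w + 5_000) + (1 - p) * u(w - 1_000)
# Not invest: +$2000 in state A, $0 in state B (inferior here; see footnote 33)
eu_not_invest = p * u(w + 2_000) + (1 - p) * u(w)
# Perfect information: invest only when state A will occur
eu_informed = p * u(w + 5_000) + (1 - p) * u(w)

value_of_information = (certain_wealth_equivalent(eu_informed)
                        - certain_wealth_equivalent(eu_invest))
print(round(value_of_information, 2))  # 250.41
```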

Bounded Rationality

When common decisions involving easily understood risks such as a simple coin flip game are considered, it is certainly plausible to entertain the notion that behavior is consistent with expected utility maximization. But as decision situations become more complex and are encountered only infrequently, actual behavior begins to take on quite a different cast. One way to explain this is to recognize that decision-making is itself a costly process and that individuals will allocate only a limited amount of their own resources, including time,

33 The expected utility from not investing is E(U) = .9U($47,000) + .1U($45,000) = −0.0000867962. The certain-wealth equivalent of this strategy is $46,759.94.


to the activity of deciding. A somewhat different but related explanation is that there are bounds or limits to human rationality.34 To illustrate this with a transparent example, let us consider the differences in decision-making that characterize playing the games of tic-tac-toe and chess. In tic-tac-toe it is not hard to become an expert player. Simply by playing the game a few times, it becomes apparent which strategies prevent losing and which do not. That is, a player becomes an expert by trial and error. The optimal choice of moves is not accomplished by mentally considering all the alternatives and their possible consequences (nine possible openings × eight possible responses × seven possible next moves, and so forth) and seeing which current move is best. Individuals do not have the mental capacity to make such calculations. Rather, a small set of routine offensive and defensive ploys must be learned. For example, of the nine possible opening moves, it does not take long for the novice to realize that there are only three really different ones: the middle, a corner, or a side. Routine responses are developed as well: “If my opponent opens in the center, I will respond with a corner.” Tic-tac-toe is simple enough that almost all people learn unbeatable strategies. Because virtually everyone can quickly become an expert at tic-tac-toe, observed behavior will correspond closely with the assumption that each player acts as if all possible alternatives have been considered and the optimal one chosen. For all intents and purposes, the players of this game may be thought of as maximizers or optimizers of their chances of winning.

The same limited calculating ability that prevents systematic consideration of all alternatives in tic-tac-toe applies a fortiori to the game of chess. No individual (nor as yet even our largest computer) is capable of thinking through all possible consequences of alternative chess moves in order to select the best one. People play chess the same way they play tic-tac-toe—by using routine offensive and defensive ploys that are learned primarily through the trials and errors of playing. That is, the same problem-solving procedure of trying to take a very complicated problem and breaking it down into manageable pieces (the standard routines) is followed. However, an important difference between the two games is readily apparent. Although almost everyone finds optimal strategies for tic-tac-toe, no individual has ever found an optimal (unbeatable) strategy for chess.35 Instead, individuals develop routines that satisfice. That is, they are satisfactory in the sense that they are the best known to a player at a given time and do not seem to be the cause of losses. However, it is recognized that better routines can be discovered, and indeed most players strive to improve their routines over time.

Economic decisions run the spectrum of choice situations from those in which we as individuals have optimizing routines to others in which we have satisficing routines, and still

34 Many of the ideas in this section are associated with the work of Herbert Simon. See, for example, Simon’s “Theories of Decision-Making in Economics and Behavioral Science,” in Surveys of Economic Theory, vol. III (New York: St. Martin’s Press, 1967), pp. 1–28. See also Herbert Simon, Models of Bounded Rationality, vols. 1–2 (Cambridge: The MIT Press, 1982). 35 It has been shown mathematically that such strategies must exist. See, for example, Herbert Simon, The Sciences of the Artificial (Cambridge: The MIT Press, 1969), p. 63.


others with which we are unfamiliar and for which we have no routines. When we buy meat, for example, it is not too difficult to discover whether the meat selections wrapped in a particular supermarket tend to have less desirable aspects (e.g., fat) hidden from view; and we have the opportunity to make frequent trials. Thus consumers may do pretty well at choosing their meat purchases from the alternative price-quality combinations available in their neighborhoods. To the extent that this is true, the meat choice game is like tic-tac-toe. There is no explicit calculation of all meat purchase alternatives every time one goes to shop; rather, a set of simplified routines for shopping becomes established. But the results may be the same as if an optimizing procedure had been followed.

On the other hand, certain aspects of meat choices may be relevant to consumers but very difficult to perceive when the choice is made. For example, the freshness of the meat may not be apparent. One could rely on competitive forces to keep unfresh meats off the market or to keep meats separated by their relative freshness. But competitive forces respond only to consumer choices. Consumer choice routines might be quite adequate to recognize and cause the failure of any supplier who systematically and frequently attempts to sell unfresh meat. But it is quite plausible that more clever suppliers would be able to get away with such abuses if they are not attempted too often. In circumstances like this it may be possible for public policy to improve the workings of the market. A regulation requiring that all meat be labeled with a last legal day of sale, for example, might have benefits greater than its costs. For this to work, it is important that the regulation makers have better routines for evaluating freshness than consumers generally have. If the regulator thinks no meat should be sold after it has been cut and wrapped for two days but almost all consumers think that meat bought on the third day is fine, then the regulation can make consumers worse off. The gains from the reduction of unfresh meat sales can be outweighed by the losses from a reduced (and more expensive) supply of fresh meat.

Why not just have the labeling requirement in the above example but leave out the part about a last legal day of sale? This depends upon the sophistication and diversity of consumer choices. If consumers only lack knowledge about any specific piece of meat or vary widely in their informed choices about the time within which each would use it, then the pure information policy may be best. Since regulators lack information about individual consumer preferences, there is an efficiency gain from informing them but not restricting choice. On the other hand, consumers may have difficulty processing the information, that is, using it to make the decision. If many consumers simply do not develop their own standards for freshness (because they do not encounter substandard meat often enough), the legal-last-day-of-sale aspect could have net benefits rather than net costs.

One last meat example might be instructive. Suppose certain color-enhancing or taste-enhancing chemicals have been added to the meat and suppose further that they are carcinogenic, that there is some small probability that the intake of a given quantity will result in cancer in 20 or 30 years. This attribute is like freshness in that it is difficult if not impossible to perceive at the time of sale.
But unlike freshness, individual consumers would not observe the consequences in time to consider different routines for making decisions. Even if informed about the facts in a dispassionate way, consumers have no experience in making such choices and may make them poorly.


It may be analogous to hearing a brilliant lecture by Bobby Fischer on how to play chess and then having your first and only opportunity to play. You are not likely to do well, particularly if you are facing a seasoned opponent. If the stakes of the game are important, the novice might well prefer to have Bobby Fischer play as his or her proxy. Similarly, potential consumers of food additives might prefer to have an expert make the decision for them. In this case a regulatory agency with the power to ban certain products may improve efficiency.

To the extent that rationality is bounded, the bounds are themselves the cause of uncertainty. In the last examples, often the consumer has all the information a supercomputer would need to solve for the optimum. The problem is that the consumer has not developed routines or programs that can process the information in order to identify the optimum. In such situations regulatory policies may have the potential to improve consumer satisfaction. Of course, it is an empirical question whether actual consumer decision-making in any particular situation is improvable by public policy (and again this depends not only on there being some deviation from the consumer’s optimum but also on the prospects for regulation actually reducing it). However, the existing empirical evidence on individual consumer choice suggests that actual behavior, even in simple choice situations, is often grossly inconsistent with expected utility maximization.36

One interesting study involved a household survey of consumer purchases of disaster insurance.37 Approximately 3000 households in disaster-prone areas, half of them uninsured, were asked a variety of questions designed to determine their subjective estimates of the probabilities of a disaster, the resulting loss that they might experience in that event, and their knowledge of available insurance. While a large number of people indicated that they did not have the information, those offering the information appeared to deviate substantially from expected utility maximization. In this group 39 percent of the uninsured should have bought insurance (in order to maximize expected utility), whereas about the same percentage of the insured should not have bought insurance. To give some policy perspective to the study’s findings, less than 10 percent of the entire uninsured sample could be said to have rationally chosen to remain uninsured while living in the disaster-prone area. Over half of this group simply did not know about the availability of insurance, let alone that their flood insurance would be 90 percent federally subsidized. Many of the rest appeared to have unrealistically low expectations of the damage that would occur in the event of a disaster. Is there any policy problem here? If one simply takes the expected utility model on faith, then there is certainly no reason to subsidize the

36 For an excellent survey of this and related literature, see Matthew Rabin, “Psychology and Economics,” Journal of Economic Literature, 36, No. 1, March 1998, pp. 11–46. A fine example of experimental research from this literature is David M. Grether and Charles R. Plott, “Economic Theory of Choice and the Preference Reversal Phenomenon,” American Economic Review, 69, No. 4, September 1979, pp. 623–638. The experiments in this paper reveal behavior inconsistent with preference theory in general, and not just expected utility maximization. Alternative models are tested in Lee S. Friedman, “Bounded Rationality versus Standard Utility Maximization: A Test of Energy Price Responsiveness,” in J. Fox and R. Gowda, eds., Judgments, Decisions, and Public Policy (Cambridge: Cambridge University Press, 2002), pp. 138–173.
37 See Howard Kunreuther, “Limited Knowledge and Insurance Protection,” Public Policy, 24, No. 2, Spring 1976, pp. 227–261.


insurance. In fact, the model suggests a good reason not to provide the subsidy: If individuals do not bear the true costs of their locational choices then they will overlocate in the subsidized areas. But this model ignores a problem that the model of bounded rationality reveals. There may be many people living in these areas, because they do not have or cannot process the information about possible disaster, who will suffer serious loss in that event. Subsidized insurance probably does alleviate this problem to a small degree (depending on the price elasticity of its demand), but it can simultaneously cause the locational problem mentioned above. A policy of compulsory unsubsidized insurance might be better. It solves the problem of the unprotected and provides essentially correct locational signals, except for the risk takers, who really would prefer no insurance. The best policies may be ones of information or even help with information processing, if consumer choice can be preserved but made more rational.

A final example is the Equity Premium Puzzle, which returns us to a finding mentioned earlier: Over the long run, the average annual rate of return on stock investments has exceeded that of bonds by 5–6 percent. The problem is that this differential seems too high to be the result of rational expected utility-maximizing preferences.38 For this to be an equilibrium, investors on the margin must be indifferent between placing their last investment dollar in stocks or in bonds. But with the rate of return on stocks so much higher, such indifference implies an implausibly high average degree of risk aversion among investors: about 30 times greater than it has been found to be in other common situations.39 Thaler et al. put forth a more plausible explanation that depends upon two particular kinds of bounded rationality: loss aversion and myopia. Loss aversion is the idea that people are more sensitive to decreases in their wealth than to increases. A number of studies found that individuals weight losses more than twice as strongly as gains (far more than can be explained by diminishing marginal utility).40 To illustrate, Thaler et al. suggest

38 This was first pointed out by R. Mehra and E. Prescott, “The Equity Premium: A Puzzle,” Journal of Monetary Economics, 15, No. 2, March 1985, pp. 145–161. It has been studied more recently by R. Thaler, A. Tversky, D. Kahneman, and A. Schwartz, “The Effect of Myopia and Loss Aversion on Risk Taking: An Experimental Test,” Quarterly Journal of Economics, 112, No. 2, May 1997, pp. 647–661. 39 Siegel and Thaler express the implausibility this way. Suppose an individual were subject to an uncertainty in which there was a 50 percent chance that wealth would double and a 50 percent chance that it would be cut in half (a positive expected change). The individual who is indifferent to the stock-bond choice on the margin is not only willing to pay to avoid the above gamble, but would be willing to forgo 49 percent of wealth to do so. Since the worst outcome of the gamble is to end up with 50 percent of wealth, 49 percent is an absurd amount of money to pay to avoid the risk. See J. Siegel and R. Thaler, “The Equity Premium Puzzle,” Journal of Economic Perspectives, 11, No. 1, Winter 1997, pp. 191–200. 40 See Jack L. Knetsch, “Assumptions, Behavioral Findings, and Policy Analysis,” Journal of Policy Analysis and Management, 14, No. 1, Winter 1995, pp. 68–78. It may be a semantic issue whether this behavior should be described as a form of bounded rationality, but it does violate the assumption of having a unique preference ordering. One could see this as a consequence of it being too hard to know one’s preferences in the abstract and easier to reform them in different situational contexts.


that such preferences could be represented by the following segmented utility function U(∆W), where ∆W is the change in wealth from an initial level:

U(∆W) = ∆W       if ∆W ≥ 0
U(∆W) = 2.5∆W    if ∆W < 0

Such a utility function implies that the same individual would rank any given outcome (e.g., final wealth level $100,000) differently if evaluated from different starting points. Then, if offered the opportunity of an investment that has a net gain of $200 with probability .5 and a net loss of $100 with probability .5, the individual with this utility function would refuse.41 On the other hand, consider how the individual would respond if offered the investment opportunity to do this twice in succession (with the second result independent of the first). If the individual evaluates the whole sequence, there are three possible outcomes: win both times with .25 probability, lose both times with .25 probability, or win one and lose the other for a net gain of $100 with a .5 probability. The expected utility is positive and the individual would accept the offer:

E(U) = .25(400) + .25(2.5)(−200) + .50(100) = 25

On the other hand, the individual may reason myopically, calculate if the first investment is worthwhile, decide it is not, and thereby reject the sequence. Myopia is the framing of a long-run decision in terms of its short-run consequences (framing effects were introduced in the contingent valuation discussion of Chapter 6). Think of these two investments as “invest in the stock market this year” and “invest in the stock market next year.” The individual who thinks of this choice as one sequence to be evaluated at the end of the second year will invest. However, the individual who thinks of this myopically will not invest.

Thaler et al. conduct an experiment and find that investors are indeed characterized by both loss aversion and myopia in deciding how to allocate an investment portfolio between stocks and bonds. Such behavior helps to explain the Equity Premium Puzzle, and is relevant to policies concerning individual control of retirement accounts. Furthermore, they also note that more information provision in this context (more frequent feedback on investment results) increases myopia, and thus that “more information” can be counterproductive. Knetsch, accepting the validity of the valuations implied by these behaviors, argues that there are important consequences for many public policies of failing to account for them, for example, “activities with negative environmental and social impacts will be unduly encouraged . . . inappropriately lax standards of protection against injuries will be set.”42 These are not the kinds of implications that come from traditional economic models, and it

41 The same individual starting with an initial wealth level that is $1000 lower and offered the choice between receiving an extra $1000 with certainty or a gamble that returned $1200 with probability .5 and $900 with probability .5 would accept the gamble. The possible outcomes here and likelihoods of each are identical to those in the text where the individual refused the gamble.
42 Knetsch, “Assumptions, Behavioral Findings, and Policy Analysis,” p. 74.


should serve as a reminder that it is healthy to remain open-minded about the models to be used in policy analyses.43
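The arithmetic of the myopia example can be checked directly. The following Python sketch is our own illustration, not part of the original text; it uses the 2.5 loss weight and the $200/−$100 gamble assumed above:

```python
from itertools import product

def u(dw, loss_weight=2.5):
    # Loss-averse value function: losses weighted 2.5 times as heavily as gains
    return dw if dw >= 0 else loss_weight * dw

# One play of the investment: +$200 or -$100, each with probability .5
gamble = [(0.5, 200), (0.5, -100)]

eu_one_play = sum(p * u(x) for p, x in gamble)  # .5(200) + .5(2.5)(-100) = -25

# Two independent plays, evaluated as a single combined outcome
eu_sequence = sum(p1 * p2 * u(x1 + x2)
                  for (p1, x1), (p2, x2) in product(gamble, gamble))
# .25 u(400) + .50 u(100) + .25 u(-200) = 100 + 50 - 125 = 25

print(eu_one_play)  # -25: evaluated myopically, each play is refused
print(eu_sequence)  #  25: evaluated as a sequence, the offer is accepted
```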

Moral Hazard and Medical Care Insurance

One of the more serious risky contingencies that each of us faces is the possibility of requiring expensive medical care. It is therefore not surprising that many people choose to purchase health insurance, which is so universal that it is usually provided as a fringe benefit of employment.44 Many people feel that a universal minimum of medical care should be guaranteed to all; thus it is not surprising that the government Medicare and Medicaid programs provide insurance for the elderly and lower-income individuals (primarily those on welfare). However, many people still “fall through the cracks” of these programs (e.g., the self-employed and the marginally employed), and legislation to extend health insurance coverage is the subject of continuing policy debate. It seems inevitable that some form of universal coverage will eventually be adopted, although exactly what that form will be is up for grabs.

A factor that has undeniably slowed the movement toward universal coverage is the dramatic increase in the costs of medical care. From 1965, when Medicare was getting started, to 1993, medical expenditures went from 6 percent of the GDP to 14 percent, where they have remained. At first, analysts thought that the cost increase was due almost entirely to the new demand from Medicare and Medicaid patients. But gradually, as medical inflation continued unabated and more data became available, the recognition grew that medical insurance coverage was itself an important factor. It was not simply that more people were being covered by a relatively inelastic supply; the level of real resources (labor and capital inputs) per capita approximately tripled from 1960 to 1990. During roughly the same period (1965–1991), insurance payments (both public and private) increased from 24 to 62 percent of total health care spending, and out-of-pocket spending declined from 46 to 20 percent.45

To understand why medical insurance has caused this problem, we return to fairly simple and traditional models of economic behavior. As we have already seen, the advantage of insurance is that, through risk-shifting and risk-pooling, the cost of risk is reduced. But in our earliest examples of theft insurance we assumed that the insurance company offered insurance at a premium approximately equal to the expected loss without insurance. This

43 Other studies that have focused on policy implications of bounded rationality include C. Camerer and H. Kunreuther, “Decision Processes for Low Probability Events: Policy Implications,” Journal of Policy Analysis and Management, 8, No. 4, Fall 1989, pp. 565–592; and Lee S. Friedman and Karl Hausker, “Residential Energy Consumption: Models of Consumer Behavior and Their Implications for Rate Design,” Journal of Consumer Policy, 11, 1988, pp. 287–313.
44 We noted earlier the tax advantages of receiving part of the wage in this form. The employer’s contribution is nontaxable; but if the same money were included in the wage, the recipient would have to pay income taxes on it. Thus the public policy is similar to offering individuals (through their employers) a matching grant for medical care insurance when higher-income people receive more generous matches (the tax savings are greater).
45 For more data and general discussion, see Chapter 4 in both the 1993 and 1994 issues of the Economic Report of the President (Washington, D.C.: U.S. Government Printing Office, 1993, 1994).


turns out to be a false proposition for medical insurance, as another version of the Slumlord’s Dilemma makes its impact felt.46 The insurance changes the economic incentives that each individual faces, and this causes behavior to be different. The problem arises because medical care expenses, perhaps unlike the events of illness or injury themselves, are not random events. The quantity of medical care demanded by an individual depends, to be sure, on the random event of a medical problem. But it also depends on the income and tastes of the individual (or his or her doctor) and the price of the services. If I must be hospitalized, my decision to have a private or semiprivate room depends on the price. I may stay an “extra day” to be on the safe side if it is cheap enough, or I’ll be particularly anxious to leave because of the high cost. If I am poor, the doctor may provide only the services that are essential to help keep my bill down; if I am well off, the doctor may provide “Cadillac quality” care.

How does this connect to insurance? The effect of full-coverage insurance is to reduce the price an individual is charged at the point of service from the ordinary market price to zero. Once insured, an individual receives all covered services “for free.” Therefore, more medical expenses will be incurred by an insured person than would be incurred by the same person without insurance. A hospital, knowing most of its patients are insured (and absent any regulatory constraints), can buy any medical equipment no matter how expensive, use it as the doctor finds appropriate, and get reimbursed for the “necessary” expenses by the insurance company. Full-coverage insurance leaves the medical care industry without the usual mechanism of consumer demand as a cost control; in fact, it leaves the industry with virtually no method of cost control at all.

Full-coverage insurance can be offered only when the demand for the covered services is inelastic. To see this, let us look at Figure 7-7. Imagine that there are two states of the world: An individual is healthy or ill. If illness strikes (let us say with Π = .5) and the individual is uninsured, we might observe that 50 units of medical care are bought at the market price of $1.00 per unit. The expected cost to the individual is thus $25. Suppose that the demand for these services is completely inelastic (in this state), as if for every illness there is a unique, invariable treatment. Then the expected cost of insuring many individuals like this one is also $25, and every risk-averse individual will prefer to purchase the insurance that would be offered at this actuarially fair premium.

Now let us relax the assumption that the demand is inelastic; suppose the actual demand is EAG as drawn in Figure 7-7. The uninsured ill person behaves as before. But because the ill person with insurance faces a zero price at the point of service, 100 units of medical care will be demanded. The insurance company will receive a bill for $100, and thus its expected cost of insuring many people like this is $50 each [= ½(0) + ½(100)]. Thus the choice that the individual faces is to bear the risk and go uninsured with expected loss of $25 or to give up $50 with certainty in order to shift the risk. In this case, the individual might well prefer to remain uninsured.

46 Mark Pauly was the first to point this out. See his “The Economics of Moral Hazard,” American Economic Review, 58, 1968, pp. 531–537.


Figure 7-7. The moral hazard of full-coverage medical insurance.

Note the presence of the Slumlord’s Dilemma here. The insurance company is still charging an actuarially fair premium: it breaks even. Every individual could perceive that his or her own excess use contributes to the rise in premium (from the inelastic case). Nevertheless, each individual controls only a negligible fraction of the premium costs. If I become ill and consider restraining my demands, the savings will not accrue to me but will be spread evenly over the entire insured population; and I will still be charged my share of the excess use by all the others. If I do purchase the excess units, the extra costs are spread evenly over the insured population and I pay only a tiny fraction. Thus I am better off following the strategy of demanding the “excess” services, even though everyone would be better off if none of us demanded them!

To make sure that this last point is clear, let us return to Figure 7-7. Under full-coverage insurance, the social costs of units 51 to 100 can be seen to exceed their benefits. The area CAHG indicates their cost of $50 (paid for through the insurance premiums) and the area under the demand curve ACG measures their value to the consumer of $25. Thus a risk-averse individual will purchase insurance only if the expected utility loss from consuming these marginal services, valued at $12.50 [= ½(25)], is less than the value of the utility gain from the overall risk reduction; otherwise, he or she is better off remaining uninsured. On the other hand, we know that insurance with excess use forbidden would have the greatest value because it would provide the full risk-reduction benefits but no expected loss from the consumption of excess services. Thus, all risk-averse people would most prefer an insurance plan with some social mechanism to prevent excess use.
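The numbers in this example are easy to reproduce. The sketch below is ours, assuming the linear demand curve implied by Figure 7-7 (50 units demanded at the $1.00 market price, 100 units at a zero price) and an illness probability of .5:

```python
# Linear demand when ill, consistent with Figure 7-7: P = 2 - 0.02Q,
# so Q = 50 at P = $1.00 and Q = 100 at P = 0.
p_ill = 0.5
unit_cost = 1.00

def quantity_demanded(price):
    return (2 - price) / 0.02

q_uninsured = quantity_demanded(unit_cost)  # 50 units
q_insured = quantity_demanded(0.0)          # 100 units at a zero point-of-service price

fair_premium_inelastic = p_ill * unit_cost * q_uninsured  # $25
fair_premium_elastic = p_ill * unit_cost * q_insured      # $50

# The "excess" units 51-100: cost is the rectangle CAHG, value the triangle ACG
cost_excess = unit_cost * (q_insured - q_uninsured)          # $50
value_excess = 0.5 * (q_insured - q_uninsured) * unit_cost   # $25
expected_excess_loss = p_ill * (cost_excess - value_excess)  # $12.50

print(fair_premium_inelastic, fair_premium_elastic, expected_excess_loss)
# 25.0 50.0 12.5
```

Insurance at the $50 premium is then worth buying only if the utility value of the risk reduction exceeds the $12.50 expected loss on the excess units.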


In fact, the problem extends to behavior beyond that of excess consumption when ill. It also includes actions that affect the probability of illness. The uninsured person would normally take some set of preventive health measures in order to ward off various states of illness. These could include the purchase of health services such as regular check-ups, as well as the avoidance (or reduced use) of “health-risky” goods such as alcohol or tobacco products. But again the fully insured person has less incentive to undertake these appropriate preventive measures, because the cost of any resulting illness is not borne by that individual but is spread over the entire insurance pool.

The problems of excess use and insufficient preventive measures are known in the insurance literature as moral hazard. A typical example used to illustrate the problem is arson: If the owner of a building could insure the property for any desired value, then he or she might buy a policy for twice the market value of the building and secretly arrange to have it “torched”! This example clearly identifies a moral hazard, but it is misleading in the sense that it downplays the pure role of economic incentives, morality aside. That is, we usually think that it is quite rational for individuals to increase consumption or investment in response to a lower price, and that is the temptation perceived by both those with medical insurance and the insured potential arsonist. Other examples are when the earthquake-insured fail to brace their homes the way they would without insurance and when the theft-insured fail to undertake the security measures they would choose in the absence of insurance.

Is there any way to solve the medical insurance problem? One method that might mitigate the problem, if not solve it, is the use of deductibles and coinsurance. A deductible requires the individual to pay for a certain amount of medical services before the insurance coverage goes into effect; it is designed to deter reliance on insurance to pay for “minor” illnesses. Coinsurance requires the individual to pay a certain fraction of each dollar spent: the coinsurance rate refers to the percentage paid by the individual. (For example, a coinsurance rate of 25 percent means that, for each dollar spent, the individual must pay 25 percent and the insurance company 75 percent.)

We can see the effect of each in Figure 7-8a. Suppose that the deductible is for the first 60 units and the insured individual becomes ill. It is not obvious whether the person will file a claim or not. One choice is simply to purchase the 50 units as would be done without insurance. The other choice is to file a claim, pay the full cost of the first 60 units, and consume the rest of the excess units (61 to 100) free. Triangle AEF measures the net cost to the consumer of having to pay for the first 10 excess units, and triangle FGJ measures the net benefits of consuming the next 40 free. If FGJ > AEF, the individual will file the claim. In this particular example FGJ = ½(0.80)(40) = $16.00 and AEF = ½(0.20)(10) = $1.00, so the individual’s behavior is unaltered by the deductible.

To understand the effects of deductibles in a more general way, imagine many possible states of illness from mild to severe and associate with each a demand curve that shifts progressively to the right as we consider more severe illnesses. This is illustrated in Figure 7-8b.
Since the deductible remains as a fixed rectangle and the benefits of medical care increase steadily as the demand curve shifts to the right, it can be seen that one is less likely to file for the less serious illnesses and that past some point the deductible no longer deters filing.
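The filing decision can be traced with a short sketch. This is our own illustration: we keep the demand curve P = 2 − 0.02Q from the example and represent more or less severe illnesses by moving the demand intercept (the “choke” price), a hypothetical device not in the text:

```python
# Filing decision under a 60-unit deductible, with demand P = choke - 0.02Q.
# The market price is $1; choke = 2 reproduces the example in the text.
PRICE = 1.0

def surplus_if_no_claim(choke):
    q = max((choke - PRICE) / 0.02, 0.0)       # buy only units worth >= $1
    return q * (choke - PRICE) - 0.01 * q * q  # area between demand and price

def surplus_if_claim(choke, deductible_units=60):
    # Sketch assumes the illness is severe enough that consumption at a zero
    # price extends beyond the deductible.
    q = choke / 0.02                           # consume until marginal value is zero
    gross_value = choke * q - 0.01 * q * q     # area under the demand curve
    out_of_pocket = PRICE * min(deductible_units, q)
    return gross_value - out_of_pocket

for choke in (1.5, 2.0, 2.5):                  # mild, moderate, severe illness
    files = surplus_if_claim(choke) > surplus_if_no_claim(choke)
    print(f"choke price {choke}: {'files a claim' if files else 'does not file'}")
# 1.5 does not file; 2.0 and 2.5 file: only the mild illness is deterred.
```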


Figure 7-8. The effects of deductibles and coinsurance: (a) Deductibles and coinsurance can restrain use. (b) The deductible deters filing small claims.


To see the effect of coinsurance, suppose the coinsurance rate for the individual represented in Figure 7-8a is 10 percent. If ill, the individual will then purchase ninety-five units of medical services (at point H). The smaller the price elasticity of demand, the smaller the restraining effect of coinsurance (and the less the moral hazard in the first place). Note that the existence of coinsurance can make a policy attractive to someone who finds full coverage unattractive. Although it only partially shifts the risk (the amount depends upon the coinsurance rate), it does shift the more expensive portion (i.e., it prevents the losses where the marginal utility of wealth is greatest). Furthermore, it reduces the consumption of the least-valued of the excess units.47 So it is quite possible that the risk-saving gains of partial coverage will exceed the expected costs from “excess” use, even if that is not true of full coverage.

The above analysis suggests that one direction to explore in an attempt to solve the medical cost problem is increased reliance on deductibles and coinsurance. When those ideas are integrated into national health insurance proposals, they are typically modified to be income-contingent. That is, it is recognized that lower-income families have limited ability to meet even these partial payments. However, this section is not intended as a recommendation of what to do; it is intended only to help clarify the nature of the medical cost explosion we have experienced. Although a full discussion of the normative implications that could be explored is beyond our scope, mentioning at least a few of them should help to keep this analysis in perspective.

First, the social benefits and costs referred to in the diagrams of this analysis cannot be easily estimated from observable data for a variety of reasons. Social cost, for example, is the value of what society gives up by allocating a resource to the provision of medical care. In perfectly competitive industries we can often approximate it by the market price. But in the medical care sector, unwarranted entry restrictions (such as limiting entrants to medical schools to a number less than the number of qualified applicants) may make observed cost greater than social cost. To the extent that is true, the optimal quantity of medical care should be greater than what uninsured people choose. In our diagram this might be equivalent to drawing the social marginal cost line at $0.80 rather than at the $1.00 observed price and calculating social benefits and costs on the basis of its relation to the demand curve.

But then the demand curves themselves are unreliable guides for social welfare. For one thing, very little is known about the relation between medical care and health; presumably it is health that people seek. This makes it extremely difficult to know whether delivered medical services are excessive. Disagreement among physicians about the appropriate care for a given patient is common. In addition, the specific equity concerns (about medical care) suggest caution in the use of compensation tests here; to the extent that a patient’s income influences the location of the demand curve, the benefits and costs to people of different income groups might require separate analytic treatment.

47 As drawn, for example, the coinsurance deters purchases of units 96 to 100. This reduces the expected cost of illness by $2.50, and reduces the expected consumer surplus from “subsidized” excess consumption by only $0.125 [= (0.5)(0.5)(0.10)(5)]. Thus, the consumer has an expected gain of $2.375, which is offset to some degree by the increased risk owing to having only partial insurance coverage.
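The ninety-five-unit figure follows directly from the demand curve, as a quick sketch shows (ours, again assuming the linear demand P = 2 − 0.02Q with a $1.00 market price):

```python
# Effect of a coinsurance rate c: the ill consumer faces only c times the
# market price at the point of service.
def units_purchased(c, market_price=1.0):
    effective_price = c * market_price
    return (2 - effective_price) / 0.02

for c in (1.0, 0.6, 0.10, 0.0):
    print(f"coinsurance rate {c:.2f}: {units_purchased(c):.0f} units")
# 1.00 -> 50 (as if uninsured); 0.10 -> 95 (point H); 0.00 -> 100 (full coverage)
```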


Finally, much of the problem we have been discussing is exacerbated by the fee-for-service method of organizing the supply of medical services. A physician is supposed to be the consumer’s agent and provide expert advice on how to further the consumer’s medical interests. The need for an agent arises because of the generally accepted wisdom that consumers are not competent to choose their own medical treatments. But this puts the fee-for-service physician in an awkward position, because he or she is simultaneously a supplier of services to the patient and a receiver of income in proportion to the amount of services sold. There is a conflict of interest that may cause overpurchasing without insurance, and the presence of insurance only exacerbates the tendency. An alternative to the fee-for-service system is a system of health maintenance organizations to which consumers pay annual fees in return for “free” health services as needed. That gives the suppliers incentive to conserve on their use of resources (“managed care”), and the forces of competition, as well as physician norms, work to ensure that appropriate treatments are supplied as required.48 The use of such systems in the United States has grown dramatically, although 80 percent of the Medicare population still uses fee-for-service. Similarly, many hospital charges are no longer fee-for-service but are determined at admission in accordance with a schedule of “diagnostic related groups.” Another alternative for medical services, much less politically feasible here, would be a national health service like that in Sweden or England.

These cautions about making normative inferences from the moral hazard analysis only hint at some of the complexities that are relevant to health policy. But the complexity does not make the achievement of the analysis presented here any less worthwhile. The medical cost problem continues to be of widespread concern, and the clarification of one of its sources is an important contribution. Furthermore, the same model generated some useful insights (the role of deductibles and coinsurance) for achieving policy improvement.

Information Asymmetry and Hidden Action: The Savings and Loan Crisis of the 1980s and Involuntary Unemployment

The previous section explained the moral hazard problem in the context of medical insurance. However, the moral hazard problem arises in many other policy contexts as well. In order for moral hazard to be present, two conditions are necessary. First, there must be an information asymmetry between two or more parties who wish to “contract” (i.e., enter an economic relationship). Second, the asymmetry must involve a hidden action: an outcome-affecting action taken by one of the parties after contracting. This implies that the outcome

48 See, for example, D. Cutler, M. McClellan, and J. Newhouse, “How Does Managed Care Do It?,” Rand Journal of Economics, 31, No. 3, Autumn 2000, pp. 526–548. For a more extensive review of alternative reimbursement plans, see J. Newhouse, “Reimbursing Health Plans and Health Providers: Selection Versus Efficiency in Production,” Journal of Economic Literature, 34, No. 3, September 1996, pp. 1236–1263. Some problems with managed care are analyzed in S. Singer and A. Enthoven, “Structural Problems of Managed Care in California and Some Options for Ameliorating Them,” California Management Review, 43, No. 1, Fall 2000, pp. 50–65.


is determined by other additional factors that are also unknown (e.g., random events): otherwise, the action would be revealed by the outcome itself. Then moral hazard can be defined as the situation where one party to a contract has incentive to undertake a hidden action with adverse consequences to another party to the contract. Think of the person taking the hidden action as the “agent” and the other person as the “principal.” Then moral hazard exists whenever there is a principal-agent relationship with hidden action. In the health insurance case, the insurance company is the principal, the insured is the agent, and the hidden action is excess consumption of health services when ill (and the true degree of illness is a random event also unknown to the principal). The “split the bill” agreement at a restaurant, which creates incentives to overorder (the hidden action), is another moral hazard problem (with the group as the principal, the individual diners the agents, and true individual preferences also unknown to the group).

A very important moral hazard problem created by public policy was at least partially responsible for the savings and loan industry crisis during the 1980s, in which many deregulated institutions made unsound loans and became insolvent. During this period, regulators closed almost 1200 S&Ls at a taxpayer cost of over $100 billion.49 In the case of a banking institution, think of depositors as the principals and the institution as the agent. The job of the agent is to earn interest for the depositors by lending out the deposited money, as well as keeping the balance secure. The hidden action is the lending of money by the institution to entities with creditworthiness unknown to the depositors; the principals do not know if a loan default is due to bad luck or excessive risk-taking. In order for the agent to attract principals, it must convince them that their deposits will in fact be secure. For many years, federal deposit insurance played this role. However, such insurance also creates the usual kind of moral hazard: incentive for the insured institution to take greater risk than otherwise in lending funds. From the 1930s through the 1970s, this moral hazard was manageable because it was accompanied by restrictions on the types of loans allowed and interest rate ceilings on deposits that limited the supply of funds to the S&Ls.

Then in 1980 the United States enacted the Depository Institutions Deregulation and Monetary Control Act. This act increased federal deposit insurance from $40,000 to $100,000 per account, removed many restrictions on the types of loans allowed, and gradually removed the rate ceilings that had moderated the competition for deposits. Not surprisingly, the institutions increased interest rates to attract more funds, and depositors did not worry much about the loans that were made (the hidden action) because of the deposit insurance. Banks that were not doing too well had every incentive to make unusually risky loans; they had an ample supply of deposits because of deposit insurance, few of their own assets to lose, and the hope that large profits from repayment of the risky loans could restore them to health. Thus the large number of insolvencies in the 1980s can be understood as a predictable response to the increased moral hazard from higher deposit insurance and

49 Economic Report of the President, 1993 (Washington, D.C.: U.S. Government Printing Office, 1993), p. 194.


deregulation. These insolvencies were stemmed, at least temporarily, by the passage of the 1989 Financial Institutions Reform, Recovery, and Enforcement Act.50

One last example of moral hazard is particularly interesting because of the importance of the problem: involuntary unemployment. In the most conventional microeconomic models, there is no involuntary unemployment (the market forces the wage to be at the level where demand equals supply). Yet the unemployment rate is one of the most closely watched measures of performance of a national economy, and all market-oriented economies go through periods where the extent of involuntary unemployment becomes a matter of serious national concern. While much of macroeconomic theory tries to explain this phenomenon at the aggregate level, it remains poorly understood and it is important to understand its microeconomic foundations.

One promising concept for explaining involuntary unemployment is that of the efficiency wage. This term refers to a wage that serves a dual purpose: to attract labor and to create incentives that increase labor productivity. It is the second purpose, motivated by the uncertainty about labor productivity, that relates to the moral hazard. The easiest way to explain this is by thinking about the effort that a single individual puts into work. Simplifying further, imagine that an individual chooses either “high” or “low” effort. This choice is the hidden action, recognizing that there are many cases in which it is difficult for the employer-principal to know just how hard any particular agent-employee is working (other factors unknown to the principal are also determinants of the outcome). If there were no penalty to “low” effort (e.g., no risk of being fired and unemployed, no reduction in promotion possibilities), many employees would prefer this choice.51

If the employee “caught” making a low effort was fired but hired immediately at the same market wage by some other firm (the “no unemployment” case), there would be no penalty to the low-effort choice. Employers, knowing this, would only offer a wage commensurate with low-effort productivity. If one views the market as offering each worker full employment insurance in case of job loss (i.e., an equivalent job immediately), then the moral hazard results in excess job losses and high insurance “premiums” in the form of lower wages (reflecting the low productivity). Everyone would prefer a high-effort, high-wage, full-employment equilibrium, but the cost of one individual’s low-effort choice is not borne by that individual. It is spread out over all workers in the form of a small “premium” increase (i.e., market wage reduction). So, many workers make the low-effort choice resulting in the low market-clearing wage for all. It is another version of moral hazard that sounds similar to the Slumlord’s Dilemma.

But suppose we introduce the equivalent of coinsurance: making the employee bear some portion of the cost of the low-effort choice. What if, upon firing, the worker was not instantaneously reemployed elsewhere but suffered a spell of unemployment? To an extent depending on the severity of the unemployment, this would increase the incentives of workers

50 A great deal has been written about this issue. A good summary is contained in Frederic S. Mishkin, “An Evaluation of the Treasury Plan for Banking Reform,” Journal of Economic Perspectives, 6, No. 1, Winter 1992, pp. 133–153.
51 Employees who derive more utility from the high-effort choice will of course select it.


to avoid job loss, and fewer low-effort choices would be made. The number of “claims” would fall, and wages and average productivity would rise. The extent of the moral hazard would be mitigated.

It would be getting ahead of ourselves to model fully the firm behavior that could lead to this “coinsurance” outcome. However, the gist of the argument goes like this. Starting from a low-wage, full-employment market position, any single firm has incentive to offer a wage that is above the going rate. Why? Because then its workers will suffer a loss if they are fired (the difference between the firm’s wage and the market wage), and they will therefore increase their work efforts (and productivity) in order to avoid this. But as all firms start increasing their wages, the penalty disappears again. Firms continue to bid up the wage rate until it goes above the market-clearing wage, with demand for labor less than the supply, so that employees face an unemployment penalty for job loss and seek to avoid it. Even though there is unemployment, at some wage rate that sustains it each firm will be in equilibrium: while a firm could lower wages and hire additional workers from the unemployed pool, the gains from this would be precisely offset by the reduction in work effort of its current employees.52
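A toy calculation can make this no-shirking logic concrete. The sketch below is our own illustration rather than a model from the efficiency wage literature; the effort cost, detection probability, and outside-option values are all hypothetical:

```python
def worker_shirks(wage, outside_value, effort_cost=5.0, p_caught=0.5):
    # Shirk when the saved effort cost exceeds the expected loss from firing
    return effort_cost > p_caught * (wage - outside_value)

def no_shirking_wage(outside_value, effort_cost=5.0, p_caught=0.5):
    # Lowest wage at which high effort is worthwhile: w >= outside + e/p
    return outside_value + effort_cost / p_caught

market_wage = 100.0

# Full employment: a fired worker is immediately rehired at the market wage,
# so the outside value equals the wage and shirking carries no penalty.
print(worker_shirks(market_wage, outside_value=market_wage))   # True

# Unemployment: a fired worker expects a costly spell out of work, say an
# outside value of only 85, so a wage of 100 now deters shirking.
print(worker_shirks(market_wage, outside_value=85.0))          # False
print(no_shirking_wage(outside_value=85.0))                    # 95.0
```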

Summary

Uncertainty is a pervasive phenomenon: It is present to some degree in virtually all economic choice situations. The sources of uncertainty are varied and many: nature, human interaction, lack of information, or complexity. Although different situations may not fit neatly into any one of these categories, the following examples should illustrate the sources of difference: the weather, labor-management bargaining tactics, buying a used car, and defending yourself against a lawsuit.

The presence of uncertainty is generally considered costly. As a matter of preference, most people simply dislike uncertainty and are willing to pay in order to avoid or reduce it. We call this risk aversion, and it can profoundly affect resource allocation to activities that generate risk. A good example of this is the impact of a high but uncertain inflation rate on the nation’s aggregate savings and investment: Because this kind of inflation makes the real return from savings and investment more uncertain, it works to reduce them.

It is important to understand how people respond to the uncertainties they perceive, as well as how public policy can affect those perceptions. The most widely used model of behavior under uncertainty is the model of expected utility maximization: Individuals, when confronted with risky choice situations, will attempt to make the decisions that maximize their expected utilities. To understand this, we reviewed the concepts of probability and expected value.

52 The best general reference on efficiency wages is George A. Akerlof and Janet L. Yellen, eds., Efficiency Wage Models of the Labor Market (Cambridge: Cambridge University Press, 1986). Two papers that provide empirical support for this concept are J. Konings and P. Walsh, “Evidence of Efficiency Wage Payments in UK Firm Level Panel Data,” Economic Journal, 104, No. 424, May 1994, pp. 542–555; and C. Campbell III, “Do Firms Pay Efficiency Wages? Evidence with Data at the Firm Level,” Journal of Labor Economics, 11, No. 3, July 1993, pp. 442–470.


Several examples of simple coin-flipping games were given to demonstrate that expected value is not a sufficient predictor of choice-making. Many people will refuse to take risks even when the entry price is fair. Such risk aversion is implied by a diminishing marginal utility of wealth, which leads naturally to the idea that people care about expected utility.

Because risk is costly, social mechanisms that can shift risk to where it is less costly are of great importance. Two methods of doing this are risk-pooling and risk-spreading. Risk-pooling is the essence of insurance; as a consequence of the law of large numbers, the aggregate risk cost to the insurance company from possible claims is much lower than the sum of risk costs when each individual bears his or her own. Risk-spreading, on the other hand, derives its advantage from the fact that one risky event has a lower total risk cost if different individuals share the risk. Dividing the ownership of a company among several partners or among many owners through the sale of stock is a good example of risk-spreading. Individuals also use futures markets for this purpose. For example, a farmer may not wish to bear the full risk of growing crops to be harvested in the future and sold at some unknown price. Instead, he or she may sell the rights to some of that crop now at a known price through the futures market.

The ability of insurance, stock, and futures markets to reduce risk costs is limited to coverage of only a few of the many risky phenomena. In later chapters we will see that other mechanisms of risk reduction are used in private markets: for example, the desire to be an employee rather than an entrepreneur can be understood as a way to reduce the risk. But in addition, many public rules and regulations can be understood as collective attempts to reduce risk costs. Through the legal system we have the concept of limited liability, which forces certain risks to be shifted. This may not be desirable when the risks taken can have catastrophic consequences: A controversial example of this problem is the Price-Anderson Act, which specifically limits the liability of private producers from nuclear reactor accidents. Part of the function of the criminal justice system is to deter potential offenders from committing crimes and thus limit the risks of that type to be faced by the population. A whole variety of regulations involving product or service quality can be thought of in terms of risk-reduction benefits: examples are occupational licensure or certification, labeling requirements, and safety standards.

When we shift to policy considerations, it is important to recognize that risk is like other undesirable phenomena that we seek to avoid; the willingness to avoid these phenomena depends upon what we must give up to do so. A riskless society would probably be desired by no one, once it is recognized that either crossing the street or driving would have to be banned. General cost considerations will be explored more fully in later chapters.

In this part of the book we are concentrating on models of individual decision-making. Thus the next point emphasized in this chapter is that policy analysts must consider critically whether particular choice situations can be modeled successfully by the expected utility-maximization hypothesis. Some situations, as that of the Slumlord’s (or Prisoner’s) Dilemma, may be perceived as uncertain rather than risky.
Choice-making may result from strategic reasoning rather than estimating probabilities of the various possible states. To the extent that this


game-theoretic model applies to urban housing and international trade, it may provide a rationale for policies involving urban renewal and trade agreements to reduce tariffs.

A more general alternative to expected utility maximization is a model that assumes some type of bounded rationality. This type of model recognizes that there are limits to human information-processing abilities, such that the calculations required to maximize expected utility may be beyond them in some situations. This is not a statement that applies only to some people; it applies to all of us. No one has yet discovered an optimal chess strategy, even though we know that at least one exists. It is an empirical question whether particular decision situations we face are ones in which we are likely to find the optimum or, put differently, whether they are more like chess or tic-tac-toe. Most empirical evidence, however, suggests that people do not maximize expected utility in situations perceived as unfamiliar, even if they are quite simple.

Models with bounded rationality emphasize that people learn by trial and error and often develop strategies of problem-solving (or decision-making) that satisfice even if they do not optimize. With enough trials and errors and learning and ingenuity, people can solve incredibly complex problems. But in other situations, either because of complexity or the lack of trials to allow trial-and-error learning, human choice may be very poor. It is recognition that people can make quite serious mistakes that provides a potentially powerful rationale for many regulatory standards and other public policies. The example of the purchase of disaster insurance in flood- and earthquake-prone locations illustrates this nicely. If people behave according to the expected utility maximization hypothesis, there is no public policy problem except possibly the lack of objective information. But survey evidence suggests that people do not make the coverage decision in that manner, that many people seem to choose purchase strategies that are inferior given their own preferences, and that public policies designed to take account of the bounded rationality in this situation may lead to better decision-making. Similarly, the Equity Premium Puzzle suggests that, owing to myopia and loss aversion, many people may not make good decisions when allocating funds such as those in retirement accounts across different financial assets such as stocks and bonds. One of the most important areas of policy-analytic research concerns more careful exploration of the differences in policy evaluations that arise from these alternative models of decision-making.

Mechanisms to reduce risk can have adverse side effects, as the moral hazard problem illustrates. This problem arises whenever there is a principal-agent contracting relationship with a hidden action information asymmetry; the principal does not know what outcome-affecting action the agent takes. In the case of medical insurance, the contract is between the insurance company (the principal) and the covered individual (the agent). The presence of insurance changes the incentives each individual faces when deciding on the purchase of medical services. Full-coverage insurance reduces the price at the point of service to zero, leading to a Prisoner’s Dilemma in which everyone overconsumes medical services (the hidden action).
As a by-product of the desirable growth of medical care insurance coverage for the population, we created a system of medical care that lacked effective mechanisms of cost control. With fee-for-service physicians treating well-insured patients, there was little incentive to economize on services. This has been mitigated to some extent by reduced reliance on the fee-for-service system and increased reliance on prepaid systems such as health maintenance organizations. However, the vast majority of those covered by Medicare remain in the fee-for-service system. We have also become more aware of the role of deductibles and coinsurance in limiting moral hazard in these and other insurance circumstances. The moral hazard problem also offers fundamental insights that help us understand the savings and loan crisis of the 1980s, as well as how involuntary unemployment can arise in a competitive market setting.

In the appendix several calculations are offered to illustrate various aspects of risk assessment. We consider the St. Petersburg Paradox of why those invited to play a game with very high expected value are unwilling to offer much of an entry fee. Measures of risk aversion are introduced and their use in estimating risk costs empirically is illustrated. In the prior section on moral hazard, we identified a trade-off between the amount of risk reduction and the amount of overconsumption of medical services. The calculation of the risk-reduction part of this trade-off is simulated in the appendix, albeit with a highly simplified model. By using boundaries for risk aversion derived by common sense and past experience, one can begin to get a feel for the magnitudes involved. Such an exercise can be very useful in policy analysis, as in the design of a plan for universal coverage.

Exercises

7-1. A consumer has a von Neumann-Morgenstern utility index for income Y:

    U(Y) = 10Y − Y^2/100,000

Furthermore, if she becomes ill, her demand for medical care Q is

    Q = 200 − 4P

where P is the dollar price per unit of medical care. The current price is P = $25. The probability that she will become ill is .15. Her current income is $10,000. To simplify this problem, assume that 100 units of medical care when the consumer is "ill" just restore the consumer to "healthy." Any medical care above that is considered as consumption of any ordinary good or service. Thus, the utility level in each state depends on the income level after medical expenses and premiums supplemented by any consumer surplus from medical care above the "healthy" point. (In this problem, the consumer will always choose at least enough medical care to restore her health.)

a. What is the consumer's expected utility with no insurance? (Answer: U = 95,316.)

b. Political candidate A proposes that fully comprehensive health insurance be provided to everyone. The proposed premium would be 10 percent above the actuarially fair level for this consumer (to cover transaction costs). What premium would be charged for this plan, and what expected utility level would it yield? (Answers: $825; U = 92,746.)


c. Political candidate B proposes a catastrophic insurance plan. It would cover all expenses above $2750, but the consumer would pay for all medical expenditures up to $2750. Again, the proposed premium would be 10 percent above the actuarially fair level. What premium would be charged for plan B, and what expected utility level would it yield? (Answers: Approximately $371; U = 93,150.) Note: Is there any difference in the medical service that would be received by this consumer under Plan A or Plan B? What causes the difference in expected utility?

d. Political candidate C proposes a comprehensive plan with no deductibles but a 60 percent coinsurance rate (the consumer would pay $0.60 for each $1.00 of medical expenditure). The proposed premium would also be 10 percent above the actuarially fair level. What premium would be charged for plan C, and what expected utility level would it yield? (Answers: $231; U = 94,821.)

e. Suppose a $50 preventive visit to the doctor lowered the probability of illness to .075. Would the consumer purchase a preventive visit when she is not covered by insurance? (Answer: Yes.) If covered by plan C, would the consumer purchase the preventive visit if the premium were adjusted to reflect the new expected costs plus 10 percent? (Answer: Yes.)
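Readers who want to verify these answers can do so with a short script. The sketch below is ours, not part of the exercise; it simply encodes the stated conventions (100 units of care restore health, and care beyond that point enters utility as consumer surplus added to income). The helper names U and surplus are our own.

```python
# A minimal Python sketch (ours, not the text's) reproducing the
# published answers to parts (a)-(d) of Exercise 7-1.

def U(y):
    """The consumer's utility index U(Y) = 10Y - Y^2/100,000."""
    return 10 * y - y**2 / 100_000

def surplus(q, price):
    """Consumer surplus on units 100..q from inverse demand P = 50 - Q/4,
    net of the per-unit price actually paid for those units."""
    wtp = 50 * (q - 100) - (q**2 - 100**2) / 8   # area under inverse demand
    return wtp - price * (q - 100)

W, PI = 10_000, 0.15                             # income; probability of illness

# (a) No insurance: if ill, buy Q = 100 at $25 (cost $2500), no extra surplus.
EU_a = (1 - PI) * U(W) + PI * U(W - 2500)

# (b) Full coverage: price at the point of service is 0, so Q = 200 if ill.
prem_b = 1.1 * PI * 25 * 200                     # $825
EU_b = (1 - PI) * U(W - prem_b) + PI * U(W - prem_b + surplus(200, 0))

# (c) Catastrophic plan: the consumer pays the first $2750 (110 units); care
# is free beyond that, and buying through the cap is worthwhile ($1250 of
# surplus on units 100-200 versus $250 of extra spending), so Q = 200.
prem_c = 1.1 * PI * (25 * 200 - 2750)            # approx $371
EU_c = (1 - PI) * U(W - prem_c) + PI * U(W - prem_c - 2500 + surplus(200, 0) - 250)

# (d) 60 percent coinsurance: consumer price $15, so Q = 140 if ill;
# the insurer expects to pay 40 percent of $25 x 140 = $1400.
prem_d = 1.1 * PI * 0.4 * 25 * 140               # $231
EU_d = (1 - PI) * U(W - prem_d) + PI * U(W - prem_d - 15 * 100 + surplus(140, 15))

for label, eu in zip("abcd", (EU_a, EU_b, EU_c, EU_d)):
    print(f"({label}) E(U) = {eu:,.0f}")          # 95,316; 92,746; 93,150; 94,821
```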

7-2. You are asked your opinion of a new plan proposed as federal unemployment insurance to cover a new group of workers. When uninsured, the average spell of unemployment per year per worker is 2 weeks, and the maximum time that any of these workers was unemployed was 52 weeks. The plan promises benefits equal to each worker's wage for up to 52 weeks of unemployment. It proposes to finance these benefits by a tax on the weekly wage, and estimates that a 4 percent tax is needed (4% × 50 weeks work per year on average = 2 weeks pay).

a. What is moral hazard?

b. What is the moral hazard in this proposed plan?

c. Do you think the 4 percent tax will raise revenue equal to the expected payout? Explain.

d. What do you suggest to reduce the moral hazard?

APPENDIX: EVALUATING THE COSTS OF UNCERTAINTY

In this chapter we have emphasized that uncertainty is costly. In this appendix we consider several different assessments of those costs. The first section reviews the St. Petersburg Paradox, in which we resolve the paradox not by its original resolution of expected utility calculation but by recognizing the strong effects of payoff constraints. The second section illustrates some explicit calculations of the value of benefits from risk-pooling and risk-spreading. The third section contains a method, albeit highly simplified, for estimating the order of magnitude of risk cost in a policy situation. It is intended to suggest that plausible boundaries can be placed on the increase in risk costs borne by the population because of an increase in the average medical coinsurance rate. This exercise could be useful in the design of national health policy.

Real Constraints and the St. Petersburg Paradox

A common but somewhat incorrect example often used to illustrate the point that expected value is not a sufficient predictor of decision-making behavior is the St. Petersburg Paradox. Consider the game in which an evenly weighted coin is flipped until it comes up heads, which ends the game. Suppose the winner receives $2 if a head comes up on the first flip, $2^2 = $4 if a head does not come up until the second flip, $2^3 = $8 if there is not a head until the third flip, and so on. If a head does not come up until the ith flip, then the payoff to the player is $2^i. Thus the payoff is relatively low if a head comes up "soon" but grows exponentially higher the more flips it takes before a head finally comes up. On the other hand, the probability is relatively high that the game will end "soon." There is a 1/2 probability that the game will end on the first flip, a (1/2)^2 = 1/4 probability that it ends on the second flip, a (1/2)^3 = 1/8 probability that it ends on the third flip, and so on. The probability that the game continues until the ith flip is (1/2)^i. Thus the game has the expected payoffs shown in Table 7A-1. Since theoretically the game can go on forever, its expected value is infinite. That is,

    Σ_{i=1}^∞ Π_i X_i = 1 + 1 + 1 + . . . = ∞

But when people are asked what entry price they are willing to pay to play the game, the response is invariably a number below infinity; most people will not even pay a paltry $1 million to play this game. In fact, few people will offer more than $20 to play. This is the paradox: Why should people offer so little to play a game with such a high expected value? This is where the idea of diminishing marginal utility of wealth enters. If the gain from successive tosses in the form of expected utility payoffs diminishes (unlike the constancy of the expected monetary gain), it could have a finite sum. Then it is perfectly plausible that individuals would offer only finite amounts of money for the privilege of playing the game.53

53. For example, suppose the utility value of the ith payoff is (3/2)^i. Then the expected utility E(U) from playing the game is

    E(U) = Σ_{i=1}^∞ (1/2)^i (3/2)^i = Σ_{i=1}^∞ (3/4)^i = 3

This follows because the sum of an infinite series a, ar, ar^2, . . . , ar^n, . . . equals a/(1 − r) for r < 1. In the above equation a = 3/4 and r = 3/4. We have not yet discussed the monetary value of an expected utility increase of 3 utils to this individual, but it should be clear that there is no reason why it could not be a low dollar amount. Since X = 2^i and U = (3/2)^i, we can take the logarithm of each equation and divide one by the other to deduce:

    ln U = (ln X)(ln 3/2)/ln 2

or

    U = X^0.58496

We interpret this equation as showing the utility increase from additional dollars. When U = 3, the equation implies X = $6.54. If the individual behaves in accordance with the expected utility theorem, then $6.54 is the most this individual would offer to play the game.


Table 7A-1. A Game with Infinite Expected Value

Flip number    Probability    Payoff    Expected payoff
1              1/2            $2        $1
2              1/4            $4        $1
3              1/8            $8        $1
4              1/16           $16       $1
.              .              .         .
.              .              .         .
.              .              .         .

However, the above reasoning does not explain why most individuals in fact will offer low dollar amounts to play the game, and in that sense the idea of expected utility maximization does not really resolve the paradox. The answer actually has nothing to do with diminishing marginal utility. The real answer is that no game operator has the assets to be able to make the larger payoffs. For example, suppose the U.S. government guaranteed to pay prizes up to $10 trillion— roughly the entire annual product of the U.S. economy. That would enable payment of the scheduled prizes if heads came up on any of the first forty-three flips, but beyond that the prize could get no larger than $10 trillion. Thus the expected value of this game would be only $44. Since no plausible game operator could guarantee payments anywhere near that size, the expected value of the game with realistic prize limits is considerably below $44. If the maximum prize is $10 million, for example, the expected value of the game is approximately $24.
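A few lines of code make the role of the payoff cap concrete. The sketch below is ours (the text itself does only the arithmetic); it computes the expected value of the game when no prize can exceed the operator's maximum M.

```python
# Expected value of the St. Petersburg game when payoffs are capped at M.
# Each uncapped flip i contributes (1/2)^i * 2^i = $1; every outcome past
# the cap pays exactly M, and that tail sums to (1/2)^(i-1) * M.

def capped_expected_value(M):
    ev, i = 0.0, 1
    while 2**i <= M:
        ev += 1.0
        i += 1
    return ev + M * 0.5 ** (i - 1)

print(capped_expected_value(10e12))   # approx 44.1: the $10 trillion guarantee
print(capped_expected_value(10e6))    # approx 24.2: a $10 million maximum prize
```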

Calculating the Value of Risk-Pooling and Risk-Spreading

The chapter's text uses a simple example to illustrate how risk-pooling reduces risk costs. The example is of two identical risk-averse individuals. Each has $50,000 in wealth, $5000 of which is subject to theft; each independently faces a probability of theft equal to .2. Here we further assume each has a specific utility function of wealth W:54

    U(W) = −e^(−0.0002W)

54. This utility function is one with diminishing marginal utility of wealth. It is characterized by mild risk aversion: The individual is indifferent to receiving $900 with certainty or accepting a 50 percent chance to win $2000. This is explained further in the third section. The natural number e = 2.71828.


Using it, we can calculate certain-wealth equivalents and pure risk costs. When the two bear the risks independently, or self-insure, each has an expected utility of

    E(U) = .2U($45,000) + .8U($50,000)
         = .2(−e^(−9)) + .8(−e^(−10))
         = −0.0000610019

We find the certain-wealth equivalent (Wc) by solving

    −0.0000610019 = −e^(−0.0002Wc)

or

    Wc = $48,523.03

Since the expected wealth of each is $49,000 [= .2($45,000) + .8($50,000)], the risk cost [E(W) − Wc] is $476.97. In other words, each would forgo as much as $476.97 of expected value in order to be rid of the risk from self-insurance.

If the two individuals agree to pool their risk, recall that there are three possible outcomes or states:

1. Neither individual has a loss from theft (W = $50,000), with probability .64.
2. Both individuals have losses from theft (W = $45,000), with probability .04.
3. One individual has a loss and the other does not (W = $47,500), with probability .32.

The expected wealth remains at $49,000, the same as under self-insurance. What happens to expected utility? With expected wealth the same but with the likelihood of ending up close to it greater, the risk is reduced and expected utility increases (i.e., is a smaller negative number):

    E(U) = .64U($50,000) + .04U($45,000) + .32U($47,500) = −0.0000579449

The certain-wealth equivalent is found by solving as follows:

    −0.0000579449 = −e^(−0.0002Wc)

whence

    Wc = $48,780.09

Thus, we can see that this simple risk-pooling arrangement increases expected utility because it reduces the risk cost per person from $476.97 to only $219.91. Another way to look at it is from the perspective of the compensation principle. That is, let us ask whether the change from self-insurance to risk-pooling has benefits greater than costs. Each individual values the initial situation at $48,523.03 and the risk-pooling situation at $48,780.09. Therefore, each would be willing to pay up to the difference of $257.06 in order to make the change. The net benefits from making the change are $514.12, and, in this example, each person is strictly better off.
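The arithmetic is easy to reproduce. The short sketch below is ours; it uses the assumed utility function to recompute both certain-wealth equivalents.

```python
# Certain-wealth equivalents under self-insurance and under pooling,
# with the assumed utility function U(W) = -e^(-0.0002W).
import math

A = 0.0002
def U(w): return -math.exp(-A * w)
def wc(eu): return -math.log(-eu) / A   # invert U at the expected-utility level

eu_self = 0.2 * U(45_000) + 0.8 * U(50_000)
eu_pool = 0.04 * U(45_000) + 0.32 * U(47_500) + 0.64 * U(50_000)

print(wc(eu_self))   # approx 48,523.03 -> risk cost 49,000 - Wc = $476.97
print(wc(eu_pool))   # approx 48,780.09 -> risk cost falls to $219.91
```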


Recall that the general principle of diversification of assets can be seen as an example of risk-pooling. Let us use the utility function above to compare the expected utility of a risk-averse individual under two alternative arrangements—one in which the individual invests in one firm and the other in which the individual invests the same total amount spread equally among ten different and independent firms similar to the first firm.

Let one strategy be to invest $1000 in a risky, high-technology firm with a probability of .8 of being successful and returning $5000 and a probability .2 of failing and returning nothing. Assume the individual has $46,000 in wealth initially and thus utility:

    U($46,000) = −e^(−9.2) = −0.0001010394

The proposed investment clearly has expected value greater than the entry cost:

    .8($5000) + .2(0) = $4000 > $1000

We first check to see if making this investment would raise the individual's expected utility level (otherwise, the individual would not invest):

    E(U) = .8U($50,000) + .2U($45,000)

This is the same expression we evaluated in the self-insurance example, where we found E(U) = −0.0000610019 and the certain-wealth equivalent is Wc = $48,523.03. Thus the individual prefers this investment to no investment at all. The risk cost, as before, is $476.97.

Now we wish to see if the diversification of assets can reduce the risk cost. The second strategy is to divide the $1000 into smaller investments of $100 in each of ten different firms. We choose the firms to be similar in risk to the first: Each firm has a .8 chance of being successful and returning $500 and a .2 chance of losing the $100 investment. We choose diverse firms—ones such that there are no linkages between the successes or failures of any of them.55 The expected value of each investment is $400 [= .8($500) + .2($0)]; since the investments are independent, the total expected value is simply 10 × $400, or $4000.

55. When there is interdependence among some of the investments in a portfolio, the diversification is reduced. As an extreme example, suppose the profitability of building supply firms is determined solely by whether interest rates are low (which increases the demand for new construction) or high (which reduces the demand). Making small investments in each of ten building supply firms is then no different than investing the same total in any one: All ten firms will either be profitable or will not be. Calculating expected value and utility is more complicated when there is interdependence among the assets in a pool. For a discussion of this in the context of energy investments, see P. S. Dasgupta and G. M. Heal, Economic Theory and Exhaustible Resources (Oxford: James Nisbet & Company, Ltd., and Cambridge University Press, 1979), pp. 377–388, especially pp. 385–387.


Table 7A-2. The Expected Utility of a Diversified Portfolio

Number of successful investments    Probability    Wealth level    Expected utility(a)
0                                   .0000          $45,000         .0
1                                   .0000          $45,500         .0
2                                   .0001          $46,000         .0000000101
3                                   .0008          $46,500         .0000000731
4                                   .0055          $47,000         .0000004550
5                                   .0264          $47,500         .0000019761
6                                   .0881          $48,000         .0000059669
7                                   .2013          $48,500         .0000123364
8                                   .3020          $49,000         .0000167464
9                                   .2684          $49,500         .0000134669
10                                  .1074          $50,000         .0000048760
Sums                                1.0000                         .0000559069

a. U(W) = −e^(−0.0002W).

The numbers used to calculate the expected utility are shown in Table 7A-2. There are eleven possible outcomes, since the number of successful investments can be anywhere from zero to ten. The probability that any given number of successes will arise, given that each firm has .8 chance of success, is provided by the binomial probability measure.56 We follow the usual procedure for calculating expected utility: we multiply the probability of each possible outcome by its utility level and sum. From Table 7A-2 we see that the expected utility of the diversified portfolio is greater (i.e., less negative) than that of the undiversified portfolio. The certain-wealth equivalent of the expected utility level is found by solving

    −0.0000559069 = −e^(−0.0002Wc)

56. When there are n independent risky events each with probability of success Π, the probability Π_r that there are exactly r successes is given by the binomial probability measure

    Π_r = [n!/(r!(n − r)!)] Π^r (1 − Π)^(n−r)

The notation n! means (n)(n − 1)(n − 2) … (1). For example, 4! = 4(3)(2)(1) = 24. To illustrate the calculations for Table 7A-2, the probability that there are exactly 8 successes in the 10 investments is

    Π_8 = [10!/(8!2!)](.8^8)(.2^2) = [10(9)/2](.00671) = .30199


whence

    Wc = $48,959.11

Thus the risk cost has been reduced from $476.97 to only $40.89 by diversifying the portfolio. Again, this risk-pooling strategy works because it reduces the probability of extreme outcomes and increases the probability of an outcome near the expected value. In this example the individual has a 96.72 percent chance of ending up with wealth in the range from $48,000 to $50,000.

Finally, we can see the advantage of risk-spreading easily through a reinterpretation of the empirical example we have just used. Suppose we characterize a risky high-technology firm by the $1000 investment with a .8 probability of returning $5000 and a .2 probability of returning nothing. Let us consider whether it is more efficient to have the firm owned by a single owner or by a partnership of ten. We assume for simplicity that potential owners are homogeneous and risk-averse and that each has preinvestment wealth of $46,000 and the same utility function we have been using.

We have already seen that a single owner would evaluate the certain-wealth equivalent of his or her position at $48,523. That is, an individual with the given preferences would be indifferent to receiving $2523 with certainty or owning the firm. Now let us show that a ten-person partnership would value the firm at a higher cash equivalency. To raise the $1000 required to operate the firm, each partner would contribute $100, and so each would end up with either $45,900 (if unsuccessful) or $46,400 (if successful). The expected utility of this position is

    E(U) = .8U($46,400) + .2U($45,900) = −0.000095233

This has a certain-wealth equivalent of

    −0.000095233 = −e^(−0.0002Wc)

whence

    Wc = $46,295.92

In other words, each partner would be indifferent to receiving $295.92 with certainty or owning the firm. But then in the aggregate the partnership values the firm at ten times that, or $2959, compared with only $2523 for the single owner. By the compensation principle, a change from single ownership to the partnership would increase efficiency: the partners could buy out the single owner at a price that would make everyone better off. The "social profit" is $436; it consists totally of the reduction in risk cost from $477 faced by a single owner to the $41 [= 10($46,300 − $46,295.92)] total risk cost of the partnership.
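Both the diversification and the partnership calculations can be reproduced in a few lines. The sketch below is ours; it uses the binomial probabilities of footnote 56 rather than the rounded entries in Table 7A-2, so it matches the text's certain-wealth equivalents to within rounding.

```python
# Diversified portfolio (ten independent $100 investments) and the
# ten-person partnership, with U(W) = -e^(-0.0002W) as before.
import math

A = 0.0002
def U(w): return -math.exp(-A * w)
def wc(eu): return -math.log(-eu) / A

# r successes out of 10, each with probability .8, leave wealth 45,000 + 500r.
eu_div = sum(math.comb(10, r) * 0.8**r * 0.2**(10 - r) * U(45_000 + 500 * r)
             for r in range(11))
print(wc(eu_div))                 # approx 48,959: risk cost falls to about $41

eu_partner = 0.8 * U(46_400) + 0.2 * U(45_900)
print(wc(eu_partner) - 46_000)    # approx $295.92 per partner; x10 = $2959
```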

Methods of Assessing Risk Cost

In the last section, we simply assumed a specific utility function in order to illustrate risk cost calculations. Now we wish to develop a somewhat more general method for estimating these risk costs. To begin with, we define two ways of measuring the degree of risk aversion:

1. Absolute risk aversion: a(W) = −U″(W)/U′(W)
2. Relative risk aversion: r(W) = −WU″(W)/U′(W)

Note that whenever an individual has a utility-of-wealth function characterized by risk aversion, U″(W) < 0 and, of course, U′(W) > 0. Thus, both measures of risk are positive when there is risk aversion. Also, the second measure r(W) is simply the elasticity of the marginal utility of wealth. This corresponds to greater risk aversion when marginal utility is changing rapidly or, equivalently, when the utility function is more concave. A straight-line utility function, on the other hand, scores a zero on both measures (U″ = 0) and indicates risk neutrality.

To get a feel for the interpretation of these measures, we use an approximation suggested by Pratt.57 Recall the risk cost definition: the difference between the expected wealth in a risky situation and its certain-wealth equivalent. Pratt shows that the risk cost C can be approximated by the following formula, which is derived from a Taylor series expansion:

    C = (1/2)a(W̄)σ_w^2

where W̄ is the expected wealth and σ_w^2 the variance of wealth. Thus for individuals who have the same expected wealth and are faced with the same uncertainties, those with greater absolute risk aversion will pay more to avoid the risk. The absolute risk aversion is proportional to the absolute amount of money an individual will pay to avoid a fair gamble. Similarly, we can express the relative risk cost as the ratio of the risk cost to expected wealth:

    C/W̄ = (1/2)a(W̄)(σ_w^2/W̄)

or

    C/W̄ = (1/2)r(W̄)(σ_w/W̄)^2

where the term in parentheses on the right has a standard statistical interpretation: the coefficient of variation squared. This last equation shows that the share of wealth an individual will give up to avoid a certain risk is proportional to his or her relative risk aversion. As we have seen in earlier chapters, simulation techniques may be profitably used when we do not have precise knowledge of certain parameter values but have some reason to believe the values are likely to lie within a given range. In this case we do not know the values of the risk-aversion measures for particular individuals. Nevertheless, common sense can be a useful guide.

57. J. W. Pratt, "Risk Aversion in the Small and in the Large," Econometrica, 32, 1964, pp. 122–136.
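As a quick check of how rough this approximation is, one can compare it with an exact calculation. The check below is ours; it applies Pratt's formula to the theft example of the preceding section, where (as shown in the next passage) a(W) is constant at 0.0002 and the exact risk cost was $476.97.

```python
# Pratt's approximation C = (1/2) a(W) var(W) for the theft example:
# a $5000 loss with probability .2, so var(W) = .2(.8)(5000^2).
a = 0.0002
var_w = 0.2 * 0.8 * 5_000**2          # 4,000,000
print(0.5 * a * var_w)                 # $400, versus the exact $476.97
```

The Taylor expansion is local, so for a gamble as large as this one the formula understates the exact cost; for small risks the two converge.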


Two parametric utility functions have been found to be useful for empirical exercises involving risk aversion. One of them is the function

    U(W) = −e^(−aW)    where a > 0

which is characterized by constant absolute risk aversion equal to a. To see this we take the first and second derivatives:

    U′(W) = ae^(−aW)
    U″(W) = −a^2 e^(−aW)

Therefore, applying the definition of absolute risk aversion, we find

    a(W) = a^2 e^(−aW)/[ae^(−aW)] = a

The other function of interest displays constant relative risk aversion equal to r:

    U(W) = W^(1−r)/(1 − r)    where r > 0, r ≠ 1

To show this, we again take derivatives:

    U′(W) = (1 − r)W^(−r)/(1 − r) = W^(−r)
    U″(W) = −rW^(−r−1)

By applying the definition of relative risk aversion, we get

    r(W) = rW^(−r)/W^(−r) = r

Now let us work with the constant absolute risk-aversion function and ask what values are reasonable for a. Imagine asking people what entry price they would require in order to make (be indifferent to) a bet with an even chance of winning or losing $1000. Casual observation suggests that a practical lower bound might be $50, in the sense that very few people would accept the bet for anything lower and the vast majority of people would demand more. To see what degree of absolute risk aversion a this implies, we must solve the equations that indicate the utility equivalence of current wealth W with the gamble:

    U(W) = (1/2)U(W + 1050) + (1/2)U(W − 950)

For the constant absolute risk aversion case:

    −e^(−aW) = −(1/2)e^(−a(W+1050)) − (1/2)e^(−a(W−950))

The initial wealth level drops out, and on simplifying we have

    2 = e^(−1050a) + e^(950a)


This equation can be solved on a calculator, and we find a ≅ 0.0001.58 Suppose for an upper bound we use an entry price of $333; this makes a 50-50 chance of winning $1333 or losing $667 and is probably sufficient to attract most investors. Solving as above, this implies a ≅ 0.0007.

To show how one could use these boundaries in a simulation of the effects of coinsurance on medical care insurance, a highly simplified illustrative calculation is done below. The major simplification is to assume that each household only faces two possible states of the world: healthy with Π_H = .9, and ill with Π_I = .1. In an actual simulation, one would use a standard statistical distribution to model the many real contingencies that households face.59 We assume that the uninsured household would purchase $5600 worth of medical services if ill and that the price elasticity of demand is −0.5.

Let Wc be the certain-wealth equivalent that makes the "average" household indifferent to its current uninsured and risky state. That is,

    U(Wc) = EU(W) = .9U(W) + .1U(W − 5600)

or for the specific utility function

    −e^(−aWc) = .9(−e^(−aW)) + .1(−e^(−a(W−5600)))

which can be simplified as follows:

    e^(a(W−Wc)) = .9 + .1e^(5600a)

and, taking logs,

    W − Wc = (1/a)ln(.9 + .1e^(5600a))

By using our two bounds for a in the above equation, we find that

    W − Wc = $723.83     when a = 0.0001
    W − Wc = $2545.31    when a = 0.0007

Since the risk cost of being completely uninsured is W − Wc minus the expected medical care cost [$560 = .1($5600)], we see it is between $163.83 and $1985.31 per household.

58. For those not familiar with inelegant but remarkably practical ways of solving many problems by trial and error, here is how one might proceed for the above. Since we know a is positive when there is risk aversion, the answer will be greater than 0. The first term must be a fraction because of the negative sign in the exponent (the bigger a, the smaller the fraction). Since e ≅ 2.72, any exponent larger than 1 makes the second term too big, so a cannot exceed 1/950 ≈ 0.001. Therefore, we have quickly realized that 0 < a < 0.001. From here, one can achieve accuracy to any decimal point by trial and error. Take the halfway point in this interval, 0.0005, and try it on the right-hand-side terms: 0.59 + 1.61 = 2.20, so 0.0005 is too big, and 0 < a < 0.0005. Proceed in this manner until the difference between the two boundaries is negligible for your purposes.

59. This exercise is a simplification of an actual one done by Martin Feldstein and described in his article, "The Welfare Loss of Excess Health Insurance," Journal of Political Economy, 81, No. 2, part 1, March/April 1973, pp. 251–280. In the actual Feldstein simulation, the number of hospitalizations per household is assumed to resemble a Poisson distribution and the duration per hospitalization is assumed to resemble the gamma distribution.
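The trial-and-error procedure of footnote 58 is easy to automate as a bisection search. The sketch below is ours; it solves 2 = e^(−1050a) + e^(950a) and the corresponding equation for the upper bound.

```python
# Bisection on f(a) = e^(-win*a) + e^(lose*a) - 2, which is negative just
# above a = 0 and positive at a = 1/lose (footnote 58's starting bracket).
import math

def solve_a(win, lose):
    f = lambda a: math.exp(-win * a) + math.exp(lose * a) - 2.0
    lo, hi = 1e-9, 1.0 / lose
    for _ in range(60):                 # halve the bracket 60 times
        mid = (lo + hi) / 2
        if f(mid) > 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

print(solve_a(1050, 950))   # approx 0.0001  (the $50 entry price)
print(solve_a(1333, 667))   # approx 0.00072, which the text rounds to 0.0007
```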


Now, if each household purchased an insurance policy with a coinsurance provision of 50 percent, its members, when ill, would purchase more medical services. We have assumed Q = 5600P^(−0.5), where P = $1.00 initially. With coinsurance, P drops to $0.50 and Q = 7920 (the moral hazard factor). Thus, in the event of illness, the insurance company and the household will each pay $3960 to the hospital. The insurance company will charge $396 for each policy, which the household loses in all states. To find the residual cost of risk to the household, we ask what certain wealth Wc* would bring the same utility as is now expected:

    U(Wc*) = .9U(W − 396) + .1U(W − 396 − 3960)

or, using the same algebra as above,

    W − 396 − Wc* = (1/a)ln(.9 + .1e^(3960a))

Using our two bounds for a, we calculate:

    W − 396 − Wc* = $474.43     when a = 0.0001
    W − 396 − Wc* = $1308.44    when a = 0.0007

As before, the residual risk cost is the difference between these figures and the household's expected out-of-pocket medical costs of $396, so it is between $78.43 and $912.45. Therefore, the risk saving from taking out partial insurance coverage is between $85.40 and $1072.87 per household (the difference between the total and residual risk costs).

We emphasize that this exercise is intended to demonstrate a procedure for using risk-aversion measures in a simulation. To present an actual simulation would require more detail and more sensitivity testing than is appropriate for the purpose of this book. However, studying and understanding exercises like this one should provide the necessary courage to grapple with the existing work of this type, and perhaps improve upon it.
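For concreteness, the entire two-state simulation fits in a few lines. The sketch below is ours; it reproduces the risk-cost bounds computed above for the uninsured household and for the 50 percent coinsurance policy.

```python
# Risk costs at the two bounds for absolute risk aversion a.
# Uninsured: lose $5600 with probability .1; expected cost $560.
# Coinsured at 50 percent: Q rises to 7920 (moral hazard); the household
# pays $3960 if ill plus a $396 premium; expected out-of-pocket cost $396.
import math

for a in (0.0001, 0.0007):
    gap_u = math.log(0.9 + 0.1 * math.exp(5600 * a)) / a   # W - Wc
    gap_c = math.log(0.9 + 0.1 * math.exp(3960 * a)) / a   # W - 396 - Wc*
    risk_u, risk_c = gap_u - 560, gap_c - 396
    print(f"a = {a}: uninsured {risk_u:8.2f}, residual {risk_c:7.2f}, "
          f"saving {risk_u - risk_c:8.2f}")
# a = 0.0001: uninsured   163.83, residual   78.43, saving    85.40
# a = 0.0007: uninsured  1985.31, residual  912.45, saving  1072.86
```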

CHAPTER EIGHT

ALLOCATION OVER TIME AND INDEXATION

CURRENT ECONOMIC DECISIONS often have extremely important consequences for the future well-being of the individuals who make them (as well as society as a whole). For example, the decision to invest in schooling today can have a significant impact on an individual's future earnings profile. To finance this investment and today's consumption, many students borrow funds and repay the loan later out of their future earnings. The amount someone chooses to save from current earnings when working has important implications for the amount of retirement or other income available to that person in the future. To make these decisions, an individual must somehow balance today's wants and opportunities against those of the future. In this chapter we consider some models that help us to understand and analyze these choices and some important public policies that affect them.

An investment is an increment to the capital stock, where capital refers to the durable resources that will be available to help produce a flow of consumption in the future. For example, raw materials, land, and labor can be allocated this year to the construction of a new factory. In the future, the factory will be used to make clothing. The creation of the factory is an investment that increases our total capital stock as well as the future flow of clothing available for consumption. Similarly, one can invest in people and increase the stock of "human capital." For example, an additional student in medical school is an investment that adds to the future stock of physicians and increases the future flow of medical services.

When a society invests, it is giving up current consumption in favor of future consumption. The medical student, for example, could be working in a restaurant producing meals instead of attending school. All of the invested resources could be used instead to increase current consumption. Any individual, in deciding whether to invest, must consider the current and future effects on both income and consumption. The medical student not only forgoes the restaurant income, but must be able to pay tuition and have enough left over for


room and board. These costs must be weighed against the future benefits from becoming a physician.

Some investments will be undertaken only if the investor can borrow purchasing power today that can be repaid out of future earnings (e.g., a student loan). That is, the individual investor is not necessarily the one who defers current consumption in an amount equal to the invested resources. Others might be willing to lend or save some of their current purchasing power. Investment demands are bids to use real resources for future rather than current consumption (the student who pays tuition and expects teachers and classrooms to be provided in return), and savings supplies are offers of real resources for future use rather than for current consumption (the individual who offers himself or herself as a student). We will show that the interest rate is the price that affects the investment demand (a declining function of the interest rate) as well as the savings supply (an increasing function of the interest rate).

To begin to understand the individual decisions that underlie the demands and supplies, we introduce a simple model that concludes that an individual will undertake all investments whose present discounted values are greater than zero at the given rate of interest. There are many reasons why other models apart from this initial one may better explain actual decisions involving resource allocation over time. But our primary purpose is to introduce the importance of allocating resources over time, as well as the concept of discounting that is used to make comparisons across time periods. Our initial model illustrates these aspects clearly. We shall then review some of the evidence on actual individual behavior with respect to time allocation (we mentioned myopia in the last chapter). We shall also address in a limited way the problem of the uncertainty that individuals face in making intertemporal decisions.

One important source of intertemporal uncertainty is the degree of inflation, which makes the relationship between real future prices and current ones unclear. While an analysis of the causes of inflation is beyond our scope, we explain how various indexing mechanisms are used to adapt to inflation and to reduce an individual's uncertainty. After reviewing some principles of index construction, we consider a number of practical problems that arise in constructing and implementing the necessary indices. These problems have relevance in several policy areas such as government bond issuance, Social Security payments, and school financing. They may also be thought of as a special subset of the more general problem of linking decision-making to any social indicator (e.g., crime rates and health statistics).

Intertemporal Allocation and Capital Markets

For most people the timing of the receipts of income from their wealth does not match perfectly with the desired timing of their expenditures, or consumption. Although one could consider this problem over any period of time, it is most useful to think first about an average lifetime or life cycle. Figure 8-1 represents a family's income over a period of approximately 60 years and its consumption pattern over the same period. The most important feature to note is the relative evenness of the consumption pattern compared to the uneven income pattern.


Figure 8-1. Typical life-cycle pattern of family income and consumption.

The patterns in Figure 8-1 approximate facts we can observe. Incomes are typically low when the adult family members are in their 20s and may be in school or just starting their careers. They usually rise fairly steadily until they peak when these adults are somewhere in their late 50s or 60s, and shortly thereafter plummet sharply because of retirement. Meanwhile, consumption is likely to outpace income in the earlier stages: home-occupancy and child-rearing expenses, for example, are usually financed by borrowing that is paid back out of the savings that accrue during the middle stages of the family's life cycle. Similarly, the savings are used to help finance consumption during the low-income years of retirement. In other words, these facts suggest that a family prefers to consume at its "permanent" income level—roughly the average income over a lifetime—rather than have to alter consumption to fit the pattern of transitory annual income.1

1. The difference between permanent and transitory income can often have a profound analytic impact on the evaluation of policies by equity standards. As one example, consider the degree of progressivity (or regressivity) of a tax such as the property tax. If the tax dollars paid by home owners are compared with their annual incomes, a regressive bias results. To see this, recognize that many families living in homes with tax bills that are high compared with their low current incomes are at either the beginning or the end of their lifetime income stream. The tax bill as a proportion of permanent income for these families is much lower. Similarly, people in the equivalent homes but in the high-income stages of the life cycle will pay only a small portion of their high current incomes in taxes, even though their taxes are a much higher proportion of their permanent incomes. In other words, a tax that is proportional to home value is also approximately proportional to permanent income but will appear to be regressive in relation to current income. Thus, whatever the true incidence of the property tax, it appears more regressive if measured using current income as a base. Calculations showing the difference are in S. Sacher, "Housing Demand and Property Tax Incidence in a Life-Cycle Framework," Public Finance Quarterly, 21, No. 3, July 1993, pp. 235–259. This phenomenon was first noted in Henry Aaron, "A New View of Property Tax Incidence," American Economic Review, 64, No. 2, May 1974, pp. 212–221.

This does not mean that property taxes are no more of a burden to low-current-income families than to high-current-income families with the same permanent incomes. However, differences in burden are due to imperfections in the capital market. For example, an elderly couple may own a home that has greatly appreciated in value and on which the mortgage has long been paid off. This can leave them in the awkward position of being wealthy but with little liquid income. They might wish to pay their property taxes by giving up some of their home equity, but methods of lending to facilitate that (reverse mortgages) are not always available or are not at competitive rates.

Figure 8-2. Savings W0 − C0 in a two-period model with interest rate r.


versus future consumption C1 (the second half). In this model the units of consumption are homogeneous except for time, and both are normal goods (think of them as little baskets each identically filled with a little of many different goods and services). The curves are drawn more bent than straight, with the corner near an imaginary line at an angle of 45° from the origin. This is to reflect the observation that many individuals have a preference for reasonably even consumption over time. (That is, over a wide range of possible slopes for the budget constraint, the tangency will be near the 45° line.) The shape of our curves does depend on the idea of using generic, broad consumption units. There is no reason why in a model with more goods the time indifference curves for specific commodities cannot have quite different shapes from each other and from the ones drawn. Hearing aids might be strongly preferred only in the future period, whereas water skiing might be the opposite. But with our generic goods, the slope of the indifference curve is sometimes referred to as the marginal rate of time preference. It is nothing more, however, than a marginal rate of substitution and it varies along any indifference curve. The wealth endowment is shown as W0, where each unit of wealth can be used to purchase a unit of consumption today. However, we must determine the rest of the budget constraint; choosing C0 = W0 (and C1 = 0) gives us only one of its extreme points. An extreme alternative to consuming all the wealth today is to defer consumption completely, saving all the wealth for future consumption. If that is done, the wealth can be put in a bank or in government bonds or in other savings instruments, where it will earn r, the market rate of interest.2 Thus the maximum future consumption that is possible is C1 = W0(1 + r), when C0 = 0. Of course, the individual could choose any of the combinations on the line connecting the two extreme points; that locus is the budget constraint. At each point on the locus, savings is simply the amount of deferred consumption W0 − C0. The saved funds, after earning interest, are used to buy future consumption C1 = (W0 − C0)(1 + r). This is the budget constraint equation: any combination of C0 and C1 that for given W0 and r makes the relation hold. As usual, to maximize utility, the individual will choose the point on the budget constraint that is just tangent to an indifference curve. In Figure 8-2 the individual consumes Cˆ0 in the current period and saves W0 − Cˆ0. Thus, the specific choice of how much to save depends not only on the initial wealth endowment and preferences but also on the interest rate, or the size of the return to savings. This brings us to the next important point of this illustration: to interpret the interest rate as a relative price. The slope of the budget constraint equals the (negative) ratio of the intercept levels on the vertical (future consumption) and horizontal (current consumption) axes: W0(1 + r) (1 + r) 1 − ———— = − ——— = − ———— W0 1 1/(1 + r) Since we know that the slope of an ordinary budget constraint for two goods is minus the ratio of the two prices, it is only natural to interpret this slope similarly. The numerator 2 We are ignoring regulatory constraints that may prevent banks from offering the market rate of interest. We are also referring to the “riskless” rate of interest; a range of interest rates exist at any one time in part because some vehicles for saving and borrowing are riskier than others.


The numerator, 1, is the price of a unit of current consumption. The denominator, 1/(1 + r), is the price of a unit of future consumption. To see this, note that if we give up one unit of consumption today, we can buy 1 + r units of future consumption. To buy precisely one unit of future consumption, we must give up 1/(1 + r) units of current consumption, and thus that is its price. This latter number is sometimes referred to as the present value, or present discounted value, of a unit of future consumption. Think of it as the amount of money that must be put in the bank today (forgoing current consumption) in order to have enough to obtain one unit of consumption in the future. The rate r is then referred to as the discount rate. If the discount rate is 0.10, it means the present value of $1.00 worth of future consumption is $0.91 (the amount of current consumption forgone to get the unit of future consumption).

An alternative way to see the same point is to rewrite the budget constraint as if "solving" for W0:

    W0 = 1·C0 + [1/(1 + r)]C1

This looks like any ordinary budget constraint,

    W = PX·X + PY·Y

where the prices and quantities in the time equation correspond as follows:

    PX = 1,    X = C0

and

    PY = 1/(1 + r),    Y = C1

In this form, the budget constraint is any combination of C0 and C1 that makes the present discounted value of the consumption stream equal to the wealth. (Note that only future consumption is discounted; current consumption is already in its present value.)

In this model an increase in the interest rate unambiguously makes the individual better off. It is equivalent to reducing the price of the future good, all other prices being constant. The budget constraint rotates outward from W0, as shown in Figure 8-2, which means that the individual can reach a higher indifference curve. This result is due purely to the fact that the individual can only be a saver in this model: There are no opportunities for borrowing. C0 must be less than or equal to W0. An increase in the interest rate will unambiguously increase C1; both substitution and income effects are positive. However, we do not know if savings will increase. The income effect on C0 is positive, but the substitution effect is negative.
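The text deliberately leaves preferences general, but a concrete parameterization shows how the pieces fit together. The sketch below assumes log utility, U = ln C0 + b·ln C1, which is our illustrative choice, not the text's; with it the income and substitution effects on current consumption exactly cancel, so saving is unchanged while C1 rises with r, one instance of the ambiguity just described.

```python
# Two-period saver with (assumed) log utility: max ln C0 + b*ln C1
# subject to C1 = (W0 - C0)(1 + r). Closed form: C0 = W0/(1 + b).

def choose(W0, r, b=1.0):
    C0 = W0 / (1 + b)                 # independent of r under log utility
    C1 = (W0 - C0) * (1 + r)
    return C0, W0 - C0, C1

for r in (0.05, 0.10):
    C0, saving, C1 = choose(100_000, r)
    print(f"r = {r:.2f}: C0 = {C0:,.0f}, saving = {saving:,.0f}, C1 = {C1:,.0f}")
```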

Individual Consumption Choice with Borrowing and Savings Opportunities

The unambiguous welfare effect of an interest rate increase disappears as we make the model more realistic and allow borrowing. To do so, we replace the wealth endowment by an income stream.


Figure 8-3. The budget constraint for allocation between time periods with savings and borrowing possible.

The individual owns a stock of real resources that we refer to as capital. The capital produces a flow of services that earn income Y0 in the current period and Y1 in the future. The individual's wealth is the value of the capital assets, which equals the present value of the income streams that they will generate. Constrained by the value of those income flows, the individual must decide on a consumption pattern. We assume for now that current borrowing or lending (saving) of capital resources can be done at the market rate of interest. Thus this model takes r, Y0, and Y1 as fixed parameters that constrain the individual's choice of the two variables C0 and C1.

Figure 8-3 shows this slightly more sophisticated model graphically. The individual is initially at the point (Y0, Y1) and, of course, can choose C0 = Y0 and C1 = Y1. But what does the budget constraint look like? At one extreme, the individual could save every penny of Y0 (i.e., choose C0 = 0), put it in the bank where it returns Y0(1 + r) in the future period, and have C1 = Y1 + Y0(1 + r), which is the intercept level on the future consumption axis. At the other extreme, the individual can increase current consumption above the Y0 level by borrowing against future income. The bank will lend the present value Y1/(1 + r) today in return for being paid back Y1 next period. Current consumption can be made equal to the present value of the income stream. So when C1 = 0, C0 = Y0 + Y1/(1 + r). This is the intercept level on the current consumption axis.

The budget constraint is thus the line that connects those two points and has this equation:3

    C1 = Y1 + (Y0 − C0)(1 + r)

You can think of this as saying that an individual can choose any future consumption level that equals future income plus the value of any savings. The second term on the right-hand side represents the future value of savings if C0 < Y0. But the individual might choose to borrow rather than save, which would make the second term negative, with C0 > Y0. In this case the second term simply represents the cost of borrowing: the reduction in future consumption that results from repaying the loan. The slope of the budget constraint is −(1 + r), as before.4 The budget constraint equation can be rearranged as below with the following interpretation: The budget constraint is all combinations of C0 and C1 that make the present value of the consumption stream equal to wealth, the present value of the income stream:

    C0 + C1/(1 + r) = Y0 + Y1/(1 + r)

As noted, an individual may be either a saver or borrower, depending upon personal preferences. Any point on the budget constraint to the right of point A represents borrowing (C0 > Y0), and any point to the left of point A represents saving (C0 < Y0). Suppose the interest rate now increases. That causes the budget constraint to rotate clockwise about point A, as shown by the dashed line in Figure 8-3: The present value of future income is lower (less can be gained by borrowing), and the future value of current income is higher (more can be gained by saving). If the individual was a saver originally, he or she must be better off, since more of each good can be consumed than before. However, the person who was initially a borrower will be worse off unless the new savings opportunities outweigh the worsened borrowing prospects. Regardless of whether the person is a borrower or a saver, the substitution effect is to increase C1 and reduce C0. For the saver, income effects are positive and thus C1 will increase and the change in C0 is ambiguous, as before. For the borrower, we do not know if real income is increased or decreased and thus cannot predict the income effects.

3. Let C1 = mC0 + b, where m and b are, respectively, the unknown slope and intercept of the budget constraint. From the point where C0 = 0 and C1 = Y1 + Y0(1 + r), we know that b = Y1 + Y0(1 + r). From the point where C0 = Y0 + Y1/(1 + r) and C1 = 0, we know that m = −b/C0 = −[Y1 + Y0(1 + r)]/[Y0 + Y1/(1 + r)] = −(1 + r). Therefore, C1 = b + mC0 = Y1 + (Y0 − C0)(1 + r).

4. This is shown in the above note. We can also check this with calculus by taking the partial derivative:

    ∂C1/∂C0 = ∂[Y1 + (Y0 − C0)(1 + r)]/∂C0 = −(1 + r)


Individual Investment and Consumption Choices

So far, we have shown that an individual may choose a consumption pattern over time by taking advantage of opportunities for borrowing or saving. Now we wish to add a third alternative: undertaking productive investment. To make this clear, let us go back to the start of the description of the last model: An individual can earn Y0 in the current period and Y1 in the future period. Let us say that the source of this income comes partly from labor (a return on human capital) and partly in the form of net rent from occupants in an office building that the person owns (rent minus any operating expenses).

One productive investment opportunity that might be available is education. Instead of allocating all current labor to employment, the individual can use some of it to enroll in school for the current period. This is an investment in human capital. It reduces current income, but it raises future income because of the better job that can then be obtained. Note that the primary cost of education may not be tuition, but the earnings forgone. The tuition payment further reduces the income available for current consumption. In Figure 8-4 this opportunity might be represented by a change in the income stream (available for consumption) from point A to point B.

A second productive opportunity might be to renovate the office building during the current period in order to make it more attractive to commercial occupants and thus receive higher net rents in the future period. This is an ordinary capital investment. Its cost includes the net rent forgone in the current period while the renovation work is being done. It also includes, of course, the cost of the labor and materials that go into the renovation. Since the owner must pay for them, they further reduce the amount of owner income available for current consumption. Graphically, this investment might be thought of as a move from point B to point C.

The idea in both these examples is that the individual controls an endowment of real resources that can be used for current production and earn up to the amount of the endowment flow of Y0. He or she can also choose to withhold some of these real resources from use in producing current consumption goods and instead convert them to a different productive use for the future period.5 Investment implies that a real resource spends time out of the most profitable use for current consumption and has a cost equal to the amount of forgone current consumption. The investment process is sometimes referred to as real capital formation, and the amount of investment is the increment to the existing capital stocks (e.g., human skills, buildings, and machinery).

Imagine someone facing a whole array of investment opportunities. For each unit of real resource, the individual considers removing it from production of current consumption (moving one unit to the left starting from Y0) and allocating it to its best investment use (the largest possible gain to future income).

5. In the formal model we use, each unit of real resource is homogeneous and thus there is no need for the individual to buy investment inputs in the market. Our examples add realism by having the individual produce current consumption but spend some of the proceeds on buying investment inputs. The net effect is the same: Equivalent amounts of real resources are withheld from use in current consumption, and equivalent amounts of income are available to purchase current consumption.


Figure 8-4. The effect of investment opportunities on intertemporal consumption choice.

This traces out the investment opportunities path in Figure 8-4, where the slope becomes flatter as we move closer to the future-period axis (the marginal investments are less lucrative).

Consider how these investment alternatives affect the individual's consumption possibilities. Any point on the investment opportunities path can be chosen, and then the budget constraint is determined precisely as in our last model. Naturally, the individual seeks the budget constraint with the best consumption possibilities: the one farthest out from the origin. Since the slope of the budget constraint is determined by the market rate of interest, the individual will choose the investment portfolio such that the budget constraint is just tangent to the investment opportunities path. We label this point D. Note that it is also the choice that intersects the axes at the farthest possible points. Thus we have the following result: A necessary condition for utility maximization is to undertake the investment opportunities that maximize the present value of income calculated at the market rate of interest. The above rule is equivalent to undertaking all investments whose present values are positive at the market rate of interest. At point D, the slope of the investment opportunities locus is −(1 + r). To the right of D the slope is steeper, and to the left it is flatter. If we undertook one additional investment project (and thus moved to the left), in absolute value ∆Y1/∆Y0 [. . .]

[. . .] P > 1 means that the old bundle of goods must be interior to the new budget constraint P2 and welfare must be higher in the current period. If P < 1, the Paasche test indicates a welfare decrease. This is the case drawn in Figure 8-8. The solid line through point F is the period 2 budget constraint, or P2. P1 is the budget constraint through point E with the same slope (prices) as in period 2; it lies further from the origin than P2. However, since the goods consumed during period 1 were those at point E, the budget constraint for period 2 allows welfare to increase (as at point F) even though P < 1. Thus, Paasche indications of welfare decreases are overly pessimistic and can be wrong, as in this case. Note that the change from point E to point F is judged an improvement by Laspeyre's method (Figure 8-7) and a welfare decrease by the Paasche method, so that the "truth" cannot be unambiguously determined in this case without knowledge of the indifference curves.



Figure 8-8. The Paasche quantity index.

and defines the Laspeyre price index as L2 L ≡ —– L1 It then gives the retiree a Social Security payment of L 2 = L × L1—just the amount of money that enables the person to buy last period’s quantity (point A) at the current period’s prices. This is overly generous, of course; the individual is at least as well off in the current period as in the previous one and must be better off if he or she chooses to spend the money differently. A true cost-of-living index measures the percent change in nominal income required to hold utility constant at the base level. It would give the budget constraint drawn in Figure 8-9 as LT . Thus, the Laspeyre price index overestimates the increase in the cost of living. Similarly, the Paasche price index underestimates the cost-of-living increase. The theoretical imperfections of these common methods of indexing are probably less significant than the practical problems of defining the goods and services that are included in an index and updating the index over time. We illustrated the method of index construction with just one person, two goods, and two time periods. But most indices are intended to include many people, goods, and time periods. If the sum of the nominal budget constraints of several individuals is adjusted upward in accordance with the Laspeyre price index, there is no implication that each individual is now better off than initially. The welfare of the individuals within the group depends on how in-


Figure 8-9. The Laspeyre price index overestimates the increase in the cost of living.

The theoretical imperfections of these common methods of indexing are probably less significant than the practical problems of defining the goods and services that are included in an index and updating the index over time. We illustrated the method of index construction with just one person, two goods, and two time periods. But most indices are intended to include many people, goods, and time periods. If the sum of the nominal budget constraints of several individuals is adjusted upward in accordance with the Laspeyre price index, there is no implication that each individual is now better off than initially. The welfare of the individuals within the group depends on how individual budget constraints change over time, how the prices of specific goods in the aggregate bundle change, and the preferences of the individuals for the different goods in the aggregate bundle. When university faculty are told they will receive a 5 percent cost-of-living raise, some of the faculty will have had their apartment rents raised by 10 percent and others by only 2 percent. Senior faculty may have their salaries increased by 7 percent and junior faculty by only 3 percent. Which goods should be used to determine this group's "average" cost-of-living change? How does one decide the components of the cost of living for the "average" retiree on Social Security or the "average" lower-income family? If one keeps track of the prices of the components over time, how often should their composition be reviewed?

The most familiar price index is the CPI calculated by the Bureau of Labor Statistics by using the Laspeyre method. Legislation at all levels of government ties annual increases in individual grants to the CPI; Social Security, food stamps, welfare payments, and government pensions are examples. U.S. Treasury "I" bonds are indexed to the CPI. Many private labor contracts tie wage increases to the CPI. This makes the details of the CPI computations important.


The CPI is a fixed-weight index; the weight on each price included in it is derived from an occasional survey of urban household expenditures. In 1997, the base period for the fixed weights was 1982 to 1984. For example, the three item categories of residential rent, owner's equivalent rent, and housing at school comprised 27.3 percent of the CPI-U index.25 While the prices for items in these categories are measured monthly, their collective weight in the overall index remains at 27.3 percent until the fixed weights change as the result of a more recent survey. There are 206 different item categories in the CPI; the prices for items within each category are measured monthly in forty-four different urban areas and then aggregated, based on population weights from the last census, into the national index.

One problem with such a fixed-weight index is that the weights become more and more irrelevant to consumers over time. For example, beef prices may have increased significantly relative to other foods, but consumers respond by spending less on beef and more on poultry. This is an example of what we saw earlier in Figure 8-9: The cost of achieving a fixed utility level never rises as quickly as the cost of achieving it with the original bundle of goods and services. Because consumers can achieve the original utility with less than the indexed amount by substituting other products for those that have become relatively more expensive, this source of overcompensation is known as "substitution bias." Being faithful to the concept of a price index over time works against the accuracy of the figure as a cost-of-living index.

One way to make a Laspeyre price index more useful as a measure of the cost of living is to have it chain-linked. For example, the CPI measures the change in prices from 1997 to 1998 by using the 1982–1984 weights. Writing PiQj as shorthand for the sum over the n goods in the index, Σ(k=1 to n) pik qjk, the fixed-weight index computes

L98 = P98Q84/P84Q84
L97 = P97Q84/P84Q84
L98/L97 = P98Q84/P97Q84

But a chain-linked index, in which each year-to-year change is measured with the preceding year's quantities, would compute

L85/L84 = P85Q84/P84Q84
. . .
L98/L97 = P98Q97/P97Q97
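To see the difference the chaining makes, consider the following sketch with invented prices and quantities for two goods over three years; consumers shift toward the good whose relative price falls:

    # Invented data: good A's price rises while buyers shift toward good B.
    prices = {0: (1.00, 1.00), 1: (1.50, 1.00), 2: (2.25, 1.00)}
    quants = {0: (50, 50), 1: (40, 60), 2: (30, 70)}

    def basket_cost(p, q):
        return sum(pi * qi for pi, qi in zip(p, q))

    # Fixed-weight Laspeyre index: always uses year-0 quantities.
    fixed = [basket_cost(prices[t], quants[0]) / basket_cost(prices[0], quants[0])
             for t in (0, 1, 2)]

    # Chain-linked index: each year-to-year link uses the prior year's
    # quantities, and the links are multiplied together.
    chain = [1.0]
    for t in (1, 2):
        link = (basket_cost(prices[t], quants[t - 1])
                / basket_cost(prices[t - 1], quants[t - 1]))
        chain.append(chain[-1] * link)

    print(fixed)   # [1.0, 1.25, 1.625]
    print(chain)   # [1.0, 1.25, 1.5625] -- chaining registers less inflation

Because each link reweights with the more recent consumption pattern, the chain-linked index grows more slowly than the fixed-weight index when consumers substitute away from goods whose relative prices rise.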

25 CPI-U stands for the Consumer Price Index for All Urban Consumers, and is the version of the CPI that is used most commonly for indexation. The other version of the CPI, known as CPI-W, the Consumer Price Index for Urban Wage Earners and Clerical Workers, is used to index Social Security benefits.


This of course would require a method of identifying the changes in consumption patterns each year (bearing in mind that accurate surveys are expensive). Beginning in 1998, the Bureau of Labor Statistics announced that new expenditure weights from the 1993–1995 period would be utilized. More important, it also announced that in 2002 it will utilize expenditure weights from 1999 to 2000 and thereafter update the expenditure weights every 2 years (e.g., the CPI in 2004 will be based on weights from 2001 to 2002). Thus in the future, the CPI will more closely approximate a chain-linked index rather than a fixed-weight index.

Another important change in the CPI that also works to reduce substitution bias became effective in 1999. Within any of the item categories of the CPI, such as "ice cream and related products," the prices for specific items from a sample of different stores are measured monthly. These measurements, like the larger index, had historically been aggregated by using the fixed weights from the occasional expenditure survey. Starting in 1999, what is fixed for most categories is the proportion of expenditure spent on each item in the most recent expenditure survey. Thus if the price of a pint of ice cream goes up relative to a pint of frozen yogurt, the new method assumes that the quantity of ice cream purchased decreases (and that of frozen yogurt increases) in order to hold its expenditure share constant at the base level. This method, known as the geometric mean estimator, does not require any new data but is in most cases a better approximation of how consumers as a group actually respond to price changes among close substitutes. Note that the use of this method does not extend across item categories, but is only used within them.26
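A brief sketch may clarify the two within-category aggregation rules; the items, shares, and price relatives below are invented for illustration. Holding expenditure shares fixed corresponds to assuming unit-elastic substitution among the items in the category:

    # Compare the old fixed-quantity (arithmetic) aggregation with the
    # geometric mean estimator inside a single item category.
    shares = (0.5, 0.5)          # base-period expenditure shares (invented)
    relatives = (1.20, 1.00)     # ice cream up 20 percent, frozen yogurt flat

    # Fixed quantities: share-weighted arithmetic mean of the price relatives.
    arithmetic = sum(w * r for w, r in zip(shares, relatives))   # 1.10

    # Fixed expenditure shares: share-weighted geometric mean, which assumes
    # quantities fall for items whose relative price rises.
    geometric = 1.0
    for w, r in zip(shares, relatives):
        geometric *= r ** w                                      # ~1.0954

    print(arithmetic, geometric)   # 1.10 versus about 1.0954

The geometric mean estimator registers slightly less within-category inflation, reflecting the assumed substitution toward the relatively cheaper item.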

26 The geometric mean estimator is used for categories covering about 61 percent of the CPI-U index. The major categories excluded from the new method are housing services, utility and governmental services, and medical care services. The first two are excluded primarily because consumer substitution within them is difficult, and the third is excluded based largely on low estimated demand elasticities for these services reported in the economics literature. 27 See p. 18 of Joseph J. Minarik, “Does the Consumer Price Index Need Deflating?,” Taxing and Spending, 3, Summer 1980, pp. 17–24. 28 When price competition was prevented by regulation, airlines competed by offering nonprice benefits such as more space, more choice of travel time, and free in-flight movies. They are now able to offer lower prices as a more efficient means of competing.


Another problem with any index is the difficulty of controlling for quality changes. Although we know inflation averaged almost 5 percent per year over the 27 years from 1972 to 1998, it is not obvious whether an individual with $2000 would prefer to spend it on the goods available in the 1972 edition of the Montgomery Ward catalog or the 1998 edition. Minarik cites an example of radial auto tires being more expensive than the older bias-ply style but lasting far longer.27 Computer technology changes very rapidly, so that today's desktop computer is substantially advanced over those sold only a few years ago. Another example is controlling for quality improvements in medical and dental technology, such as the use of laser-based surgical tools and instruments. Quality also may decrease, as it undoubtedly did on certain airline routes following deregulation in 1978.28 All these quality changes should be taken into account by the index, but that is usually not possible. One exception, effective in 1998, is the category "personal computers and peripheral equipment": an increase in the quality of a major computer component such as a modem will be valued by an econometric method and deducted from the observed price.

The CPI is a very broad index, and it does not necessarily reflect the change in cost of living experienced by particular groups (e.g., families in a certain state or households with very low income). For example, in 1995 a report of the National Research Council suggested that it might be better to index the poverty line not to the CPI but to an index that concentrates on "necessities" such as food, clothing, and housing.29 However, this is a controversial area. Studies that have attempted to construct a separate index for low-income households usually find that the CPI understates the increase in their cost of living, although others find the opposite.30 Nevertheless, the idea that one might wish to use special indices for particular policy purposes is an important one.

Another example of an area in which special indices have policy use is education. In Chapter 5 we reviewed the problems of achieving equity in school finance. One of the problems mentioned was that nominal dollar expenditures of districts cannot always be compared directly. In California, for example, some school districts may have to spend considerably more dollars than other districts to provide reasonable temperatures in classrooms. Suppose one is interested in the wealth neutrality of the real educational resources available to each child under a state financing plan. One must adjust the observable nominal dollar relations to account for the cost differences in obtaining real resources. But then one needs to have a comparison basket of educational resources. Furthermore, one must not confuse the observed price of each item in the district with the opportunity cost of the resources. For example, teachers' salaries may be high either because local demand for teachers is high or because previous grants were converted to higher salaries through the flypaper effect. Untangling these effects poses thorny statistical problems.31

One final point about the practical problems of index construction and use should be made: Political pressures to influence the index calculations are enormous. Analysts arguing for technical improvements (say, to better approximate a true cost-of-living index) must be aware that any proposed change will likely benefit some people and harm others, and thus ignite political forces seeking to protect their own interests. It is thus important to be very clear about the technical rationale for proposing any changes and mindful of the practical and political obstacles that are likely to be encountered. One political reporter opined that the improvements to the CPI that occurred in the late 1990s happened not simply because of analytic argument but at least partially because they were a convenient compromise

29 National Research Council, Measuring Poverty: A New Approach (Washington, D.C.: National Academy Press, 1995). 30 See, e.g., the studies mentioned on pp. 128–129 of D. Baker, “Does the CPI Overstate Inflation?” in D. Baker, ed., Getting Prices Right (Armonk, N.Y.: M. E. Sharpe, 1998); and those mentioned on pp. 21–22 of M. Boskin et al., “Consumer Prices, the Consumer Price Index, and the Cost of Living,” Journal of Economic Perspectives, 12, No. 1, Winter 1998, pp. 3–26. 31 For a study of these issues in New York State, see W. Duncombe and J. Yinger, “School Finance Reform: Aid Formulas and Equity Objectives,” National Tax Journal, 51, No. 2, June 1998, pp. 239–262.


between Democrats and Republicans seeking to agree upon a budget with greater revenues but less spending on entitlements.32

Summary

In this chapter we explored problems involving resource allocation over time. We reviewed the theory of individual choice as it relates to saving, borrowing, and investing. We motivated both saving and borrowing as responses to the uneven pattern in which income is accrued over a lifetime, in contrast to common preferences for a more even consumption stream. We motivated investment, the process of capital creation, as the response to opportunities to increase wealth. We saw that individuals can increase wealth not only by investing to create physical capital assets such as factories and office buildings, but also by investing in "human" capital through education. The benefit of investing is the extent to which it increases future income (or future utility directly), and its cost is that it reduces resources available for current consumption.

Because all of these decisions (saving, borrowing, and investing) involve some form of trade-off between current and future opportunities, a method of comparison across time periods is needed. Interest rates are the prices that guide these comparisons. We illustrated this in a simple two-period model. We saw that individuals generally want to invest more at lower interest rates, whereas low interest rates normally discourage the supply of savings available to meet the investment demand. For any given market interest rate, we can think of individuals making their utility-maximizing intertemporal decisions by discounting, which is a way to convert any future amount into its current-value equivalent, called the present discounted value. Seen through the discounting perspective, individuals maximize their utility by undertaking all investments that have present discounted values of benefits greater than those of their costs. To the extent that the separation theorem applies—when the benefits and costs of an investment to an individual are all in dollar amounts as opposed to affecting utility directly—this is pure wealth maximization, and it is relatively easy for individuals to hire agents with expertise to direct or carry out the investments (e.g., by purchasing stock or hiring an investment advisor). Of course there are investments, particularly those involving human capital, such as education, that directly affect the utility of the investing individual (as well as his or her income stream), and then only the individual can factor in the nonmonetary benefits and costs. But in either case, the underlying model implies that the individual will discount future benefits and costs and strive to undertake the investments that maximize utility. Once the investments are chosen, individuals then choose the consumption stream that maximizes their utilities, subject to the budget constraint that the present value of the consumption stream can be no greater than their wealth.

32 Ben Wildavsky, “Budget Deal May Still Hang on a CPI Fix,” National Journal, 29, No. 12, March 22, 1997, p. 576.


This discounting perspective is useful when moving from the more abstract two-period model to models with more familiar time frames such as years. Individuals still follow the rule of undertaking all investments with net present value greater than zero. We illustrated use of the discounting rule with several different calculations, including one to illustrate that the same principle applies in benefit-cost analysis.

However, it is not clear to what extent individuals actually behave as if they are discounting as described above. Several studies suggest that individuals sometimes act as if they have discount rates that are far higher than market rates. For example, many thousands of individuals leaving the military during the 1990s had earned substantial benefits and were given a choice between receiving a lump-sum payment or an annuity with present value at market rates 50–100 percent greater than the lump-sum amount. The calculation of the annuity's present value was made for them and explained to them. But most chose the lump-sum payment, which means that they acted as if they had personal discount rates that exceeded 20 percent (when market rates were at 7 percent). While some of these individuals may have needed a large amount of cash for which they had no alternative source, why so many people made a choice that left them about $30,000 poorer in conventional terms remains a puzzle.

If poor intertemporal choice-making is a legitimate public policy concern, then policies to simplify the problem might be quite valuable. We considered a set of policies that address one source of complexity for these choices: indexing policies to remove the uncertainty that inflation causes about the real purchasing power of future dollars. Social Security payments, many private pension plans, food stamps, and U.S. Treasury "I" bonds are all indexed to keep purchasing power approximately constant no matter what the inflation rate. We reviewed common methods of index construction used to achieve this objective. A true cost-of-living index for an individual would measure the percent change in nominal income required to hold utility constant at a base level. To approximate this concept, Laspeyre and Paasche price indices can be constructed to estimate the change. The Laspeyre index calculates how much money would be needed to purchase the base period consumption quantities at the new period's prices. This overestimates the necessary monetary increase, because while the individual could then purchase the original bundle, at the new prices there will be a different bundle that yields more utility than initially. This overcompensation is sometimes referred to as a substitution bias because it does not account for the fact that the individual can and will substitute items that have become relatively less expensive for those that have become relatively more expensive. The Paasche index, which calculates how much more money it costs to purchase the new-period quantities owing to the price changes, has the opposite flaw and underestimates the money required to hold utility constant. The theoretical imperfections of the Laspeyre and Paasche methods of indexing are probably less significant than the practical problems of defining the goods and services that are included in an index and updating the index over time.
We consider these practical problems as they apply to the construction of one of our most important indices, the Consumer Price Index (CPI), which is the index actually used to adjust Social Security and many other nominal payments in the economy. These changes are very politically sensitive, because they affect the payments that millions of individuals receive.


The CPI is a fixed-weight Laspeyre index constructed by the Bureau of Labor Statistics. The weights are fixed by expenditure surveys, which the Bureau updates every 2 years in order to reduce substitution bias. The CPI is based on 206 different item categories; within each category the prices of specific items are measured monthly in many different urban areas and are then averaged to estimate a price change for each specific item. Again to reduce substitution bias, the Bureau assumes that the proportion of expenditure on each item (rather than its quantity) within a category remains constant. Finally, we noted that the index is imperfect because quality changes in many of the items surveyed do not get taken into account. Even if a computer this year costs the same as one last year, its quality is often substantially increased. While this remains a problem for most items, the Bureau is now using an econometric method to account specifically for quality changes in computers. The allocation of resources over time has profound effects on economic growth. The more that we can understand about how individuals make these decisions and how public policies might improve them, the better off we will all be. The utility-maximizing model that we have used is a useful starting point to help us understand the important role of interest rates as well as the concept of discounting. The difficulty that many individuals may have with these decisions challenges us to improve our models as well as our public policies. To the extent that index construction is used to give individuals more certainty about their real entitlements to retirement benefits, it eases their task in deciding how much more to save. Index construction can also be used in many other ways, such as using an index of education costs in state aid formulas to ensure equity across districts.

Exercises

8-1

A house can be rented in an uncontrolled market for a profit of $20,000 this period and $20,000 next period. (There are only two periods.)

a. If the market interest rate is 10 percent, what is the most you would expect a housing investor to offer for the house? Explain.

b. Suppose the investor in (a) buys the house for his maximum offer and rent controls are then imposed, allowing the new owner to charge in each period only the operating expenses actually incurred plus $5000. Upset by the effect of rent controls on his profits, the new owner considers selling the house. What is the maximum a housing investor would now bid for it? (Answer: $9545.45.)

8-2

The secretary of labor is concerned about the human capital investments of young adults like Steven Even. Steven lives in an inner-city area that banks avoid and that discourages thoughts of higher education, but he is bright and full of potential. He is working now in a secure but low-paying job. His current income is $10,000 (Y0 = 10), and his future income (Y1) will stay at $10,000 unless he improves it by investing in himself through higher education. His productive opportunities locus is

Y1 = 30 − 2Y0^2/10

The market interest rate for borrowing or saving between the two periods is r = 0.20.


a. Write down the numerical equation showing the present value of Steven's wealth if he stays in his current job. You do not have to calculate the value.

b. Suppose Steven chooses the point on his productive opportunities locus where Y0 = 5 and Y1 = 25. Explain in what sense he is investing, and how much.

c. Steven's consumption preferences are such that he prefers strictly even consumption (C0 = C1) at any interest rate. Given that no one will lend him any money, how much will he invest? [Hint: Draw a diagram including the given information, and think carefully about the shape of Steven's indifference curves. The answer is 0.]

d. Suppose the secretary of labor stands ready to do what the market will not: lend money to people like Steven at the going interest rate. Now will he invest? [Hint: Note that the slope of the productive opportunities locus at (10, 10) is −4.]

APPENDIX
DISCOUNTING OVER CONTINUOUS INTERVALS

In this appendix we review the concept of a continuous stream of payments, which is often used in analytic calculations and requires some knowledge of integral calculus. We approach the idea by first distinguishing between simple and compound interest and then examining compound interest as the compounding period becomes shorter and shorter. The examples in the chapter all involved simple interest rates, but it is sometimes more convenient to work with continuously compounded rates over the same interval. Imagine first that we deposit $P in a bank that pays r percent annual interest compounded semiannually. This is equivalent to keeping the money in the bank for two 6-month periods at a 6-month simple interest rate of r/2:

A = P(1 + r/2)^2

In other words, compounding holds the simple rate of interest constant but redefines the intervals to be shorter. The difference between this and the simple-interest case is that with compounding one earns interest on the interest. The interest earned after 6 months is Pr/2, and that is added to the account balance. Thus, during the second 6 months, one earns Pr/2 as interest on the original deposit plus interest on the first 6 months' interest, (Pr/2)(r/2) = Pr^2/4. To check this, note that:

Simple interest: A = P(1 + r) = P + Pr

Compounded semiannually:

A = P(1 + r/2)^2 = P + Pr + Pr^2/4


If we let the original deposit of $P earn compound interest for t years, we would have

A = P(1 + r/2)^(2t)

The more often interest is compounded (for a fixed simple rate), the more benefit to the saver. If the interest is compounded quarterly and held for t years,

A = P(1 + r/4)^(4t)

and if the savings are compounded n times per year and held for t years, the amount at the end of the period is

A = P(1 + r/n)^(nt)

Now what happens if we let n approach infinity, or compound continuously? To answer that, we first define the number e:33

e = lim(n→∞) (1 + 1/n)^n ≈ 2.718

This number can be interpreted economically as the yield on $1 invested for 1 year at a 100 percent interest rate compounded continuously. If the simple interest rate is r rather than 100 percent, one must make use of the following limit to calculate the continuously compounded yield:

e^r = lim(n→∞) (1 + r/n)^n

Table 8A-1 shows the effect of the frequency of compounding or discounting on savings and borrowings. Note that if $P is deposited and compounded continuously at annual interest r for a period of t years, the amount at the end of that time is

A = P(e^r)^t = Pe^(rt)

If one asks about the present discounted value of A dollars t years in the future, the answer expressed with continuous discounting is

PDV = Ae^(−rt) = P

We have already seen that the present value of a stream of payments is the sum of the present values of each payment. If one receives a payment At each year for n years, its present value using the continuously discounted rate is

PDV = A0 + A1e^(−r) + A2e^(−2r) + . . . + Ane^(−nr)

33 If mathematical limits are not familiar, try taking a calculator and experimenting. Compute the expression (1 + 1/n)^n for n = 10 and then n = 100 to see that it approaches e = 2.718.


Table 8A-1 The Effect of Compounding on Yields and Present Values

Deposit $1 for 1 year at 10% annual interest:

                                   Formula              Yield     Equivalent simple interest (%)a
(a) Simple                         1(1 + 0.10)          1.1000    10.00
(b) Compounded semi-annually       1(1 + 0.10/2)^2      1.1025    10.25
(c) Compounded quarterly           1(1 + 0.10/4)^4      1.1038    10.38
(d) Compounded continuously        1(e^0.10)            1.1052    10.52

Repay $1 in 1 year at 10% annual discount rate:

                                   Formula              Present value   Equivalent simple discount rate (%)a
(a) Simple                         1/(1 + 0.10)         0.9091          10.00
(b) Discounted semi-annually       1/(1 + 0.10/2)^2     0.9070          10.25
(c) Discounted quarterly           1/(1 + 0.10/4)^4     0.9060          10.38
(d) Discounted continuously        1/(e^0.10)           0.9048          10.52

a If r is the simple interest rate and rc is the continuously compounded rate over a given time interval, they are equivalent if their yields are identical: 1 + r = e^rc, or ln(1 + r) = rc. This formula allows conversion from simple to continuous interest rates.
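The entries in Table 8A-1 are easy to reproduce; here is a minimal Python sketch (our own, using only the standard math module):

    import math

    P, r = 1.0, 0.10  # deposit $1 for 1 year at 10% annual interest

    def yield_compounded(p, rate, n):
        """Value after 1 year when interest is compounded n times."""
        return p * (1 + rate / n) ** n

    print(yield_compounded(P, r, 1))   # 1.1000  simple
    print(yield_compounded(P, r, 2))   # 1.1025  semi-annual
    print(yield_compounded(P, r, 4))   # ~1.1038 quarterly
    print(P * math.exp(r))             # ~1.1052 the continuous limit

    # Present values are the reciprocals, e.g., continuous discounting:
    print(P * math.exp(-r))            # ~0.9048

    # Conversion in footnote a: the continuously compounded rate that is
    # equivalent to a 10% simple rate is ln(1 + r).
    print(math.log(1 + r))             # ~0.0953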

There is one last idea that is important to develop because of its analytic convenience. A payment at the rate of $A per year could be sent in installments, just as annual rent is usually paid monthly; $100 per year could come in quarterly payments of $25, or weekly payments of $1.92, or at some other frequency. The frequency we wish to focus on is payment at each instant! An example of why we are interested in this may help. Suppose someone owns a tree and wishes to cut it and sell the lumber when its present value is maximized. The tree grows at its natural rate each instant; it adds more lumber as a stream of continuing payments and changes the present value. Or a machine may be thought of as depreciating continuously over time, that is, as generating a stream of instantaneous negative payments. The mathematics below is useful for finding the present value in these and similar cases.

Let us call A(t) the annual dollars that would result if the instantaneous payment at time t continued for exactly 1 year. Let us call ∆t the fraction of the year during which the payment actually does continue. Thus, the amount received is A(t)∆t, and its continuously discounted value is

PDV = [A(t)∆t]e^(−rt)

If we have a stream of instantaneous payments for T years, where the annual rate is constant within each small portion of the year ∆t, then the present value of the whole stream


can be thought of as the discounted sum of the payments during each ∆t. The number of intervals being summed is T/∆t:

PDV = Σ(t=0 to T) [A(t)∆t]e^(−rt)

Now if we go to the limit where the size of the interval ∆t approaches zero, the above expression becomes an integral:

PDV = lim(∆t→0) Σ(t=0 to T) [A(t)∆t]e^(−rt) = ∫(t=0 to T) A(t)e^(−rt) dt

To see what this means, let us use some numerical examples. Let A(t) = $100. If it is paid in a single payment at the end of the year, ∆t = 1. At an annual rate of 10 percent continuously discounted, its present value is

PDV = [A(t)∆t]e^(−rt) = $100e^(−0.10) = $90.48

If the $100 is paid in two 6-month installments, ∆t = ½, so that A(t)∆t = 50 and there are two components to the present value:

PDV = 50e^(−0.10(0.5)) + 50e^(−0.10(1)) = 47.56 + 45.24 = $92.80


We can imagine paying the $100 in an increasing number of installments until the interval size is that of an instant. If the $100 is paid in equal instantaneous installments, its present value is

PDV = ∫(t=0 to 1) 100e^(−0.1t) dt
    = 100e^(−0.1(1))/(−0.1) − 100e^(−0.1(0))/(−0.1)
    = −904.84 + 1000 = $95.16

We show the geometric interpretations of these calculations in Figure 8A-1.

Figure 8A-1. The present value of a continuously discounted stream of payments.

The downward-sloping line shows the present value of $100 (discounted continuously) at any point between the present and 1 year into the future. The single-payment calculation is the area of the rectangle using the height of the curve ($90.48), where t = 1 and with length ∆t = 1. The two-payment calculation is the sum of two rectangles: Both have length ∆t = 0.5, and the heights are the discounted values of the payments at t = 0.5 and t = 1, or $95.12 and $90.48, respectively. Note that the second calculation comes closer to measuring the whole area under the curve from t = 0 to t = 1. If we divided the intervals into 4, 8, and 16 payments, we would come even closer to measuring that whole area. In the limit of instantaneous payments at each infinitesimally sized interval, the area is the whole area. The integral is simply the way of calculating the area under the given curve for the relevant period (t = 0 to t = 1).
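The convergence is easy to verify numerically; the brief sketch below (ours, for illustration) splits the $100 into ever more installments and compares the result with the closed-form integral:

    import math

    r, A = 0.10, 100.0

    def pv_installments(n):
        """PV of $100/year paid in n equal end-of-interval installments."""
        dt = 1.0 / n
        return sum(A * dt * math.exp(-r * k * dt) for k in range(1, n + 1))

    for n in (1, 2, 4, 8, 16, 1000):
        print(n, round(pv_installments(n), 2))
    # 1 -> 90.48, 2 -> 92.80, ... rising toward the integral's value:
    print(round((A / r) * (1 - math.exp(-r)), 2))   # 95.16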

PART THREE
POLICY ASPECTS OF PRODUCTION AND SUPPLY DECISIONS

CHAPTER NINE
THE COST SIDE OF POLICY ANALYSIS: TECHNICAL LIMITS, PRODUCTIVE POSSIBILITIES, AND COST CONCEPTS

IN THIS CHAPTER we examine concepts from the economic theories of production and costs. These theories are useful in deducing policy-relevant consequences from observable supply activities. First, they are crucial components of predictive models of producer behavior: understanding what outputs will be supplied by these organizations and what resources will be used to make the outputs. Second, they are important for the normative purpose of evaluating the efficiency consequences of supply activities. After presenting an overview of these uses below, we explain how the chapter is organized to develop skills by using several fundamental concepts from these theories. In order to predict the behavior of a supplier organization, economic models provide specifications of the organization’s objectives, capabilities, and environmental constraints. The latter are factors that are exogenous to the organization or outside its direct control: the technological possibilities for producing the product, the costs of the resource inputs used with alternative production methods, and the demand for the organization’s product.1 In this chapter we will develop familiarity with the supply constraints of technology and costs. In later chapters we will focus on specifying the objectives and capabilities of an organization and then linking them with the environmental constraints to predict the organization’s behavioral response to proposed policies. However, we wish to clarify in advance that making deductions about technology and costs from empirical observations cannot be done in isolation from organizational objectives and capabilities. Consider first the constraint of technology, the methods available for converting inputs into outputs. Obviously, one cannot expect an organization to produce more output than is

1 When the supplier has monopoly or monopsony power, its own actions may affect the costs of its inputs or the demand for its product. We focus on these situations in later chapters. However, exogenous factors such as resource scarcity and consumer tastes still remain as important constraints.


technologically possible given the inputs it uses; therefore, understanding the constraint is useful for predictive purposes. Technologies that are efficient in an engineering sense (the maximum output for given inputs) are sometimes represented in models by a production function. Estimated production functions, based upon observations of supplier inputs and outputs, are used commonly in analytic work. Understanding technological possibilities is easier for some activities than others. In agriculture, for example, it might be clear that one particular output is corn and that the inputs used to produce it are land, labor, fertilizer, capital, equipment, weather, and so on. Even in this relatively clear case there are difficulties in establishing the relation between inputs and outputs. Suppose, for example, that less fertilizer does not reduce the amount of corn produced but does reduce its sweetness. Then it would be a mistake to compare only the quantity and not the quality of the output and conclude that the process with less fertilizer is technologically superior. To corn lovers this is like comparing apples and oranges. Imagine the difficulty of trying to understand the technology constraining some other important supply organizations. How does one define the outputs of a school, police force, or mental hospital, and what are the inputs that determine the outputs? One might think, for example, that the verbal and mathematical skills of children are intended as the outputs of schools and furthermore that they can be measured by scores on standardized tests. However, one community might emphasize learning about civic responsibilities as an important schooling objective, and that output would not necessarily be reflected in the standardized scores. Its schools might have smaller increases in the standardized scores of their pupils, but it would be wrong to conclude they were technologically inefficient compared with other schools with similar resource inputs: It is the objectives that differ, not the technological efficiency. The other environmental constraint focused on in this chapter is cost. The supplier organization’s choice of technology will depend upon its perception of costs. The monetary costs of inputs that we observe in the marketplace may or may not fully represent that perception. In the standard example of a profit-maximizing firm, the monetary costs are those perceived by the firm. For example, the firm will produce any level of output with whatever technology minimizes the monetary cost of its required inputs. In this case a relation known as the cost function can be used to predict the total costs of producing alternative output levels for any given input prices. Knowledge of the cost function can be useful for predicting how supplier organizations will respond to various policy changes such as those involving taxes or regulatory rules. However, suppose the supplier organization is a public agency with a mandate to employ the hard-to-employ. It may prefer a labor-intensive technology to a capital-intensive one with lower monetary costs. That is, it may perceive forgone employment opportunities as a cost of using capital in addition to its monetary cost. Some supplier organizations may produce in environments in which political costs are major expenses. A district attorney’s office, for example, may require police assistance to obtain certain evidence. But the police have many important matters on their agenda and establish their own priorities. 
The district attorney's office may have to pay a political cost, such as agreeing to prosecute some other individuals arrested by the police promptly, in order to gain police cooperation in gathering the evidence.

The above examples demonstrate that understanding technological possibilities and costs is important for predicting the behavior of supply organizations; however, one must consider carefully how these constraints apply in specific situations. Now let us turn briefly to the normative use of concepts from the theories of production and cost. We discuss their relevance to the concepts of Pareto optimality and benefit-cost analysis.

We have not yet considered how the concept of Pareto optimality applies to a complicated economy in which decisions must be made about the outputs to be produced and the resource inputs to be used in making each output. Indeed, we defer most of this discussion until Chapter 12. Nevertheless, we shall sometimes point out when there is "room for a deal." For example, efficiency requires that each output be produced by a method that is technologically efficient: that is, one that achieves the maximum possible output with the inputs used. Otherwise, one could use the same inputs with an efficient technology and have more of that output with no less of anything else. The incremental output could be given to anyone and someone would be made better off with no one else worse off. Thus a necessary condition for Pareto optimality is that outputs be produced with technologically efficient methods. Knowledge of the production function is useful for judging the technical efficiency of supplier organizations. We will illustrate this with an example from an evaluation of a public employment program. Partial knowledge of the production function is developed from empirical observations and used to judge changes in the technological efficiency of production over time.

All of the other normative illustrations in this chapter are applications of the benefit-cost principle. Whereas Chapter 6 focused on the "benefit" side of the principle, this chapter focuses on the "cost" side. The essential point is to demonstrate that knowledge of costs can provide a great deal of the information that can be used to find out if gainers from a change can compensate the losers. Once that is clear, we illustrate how the knowledge is obtained (and some of the difficulties that arise) through specific applications of benefit-cost analysis and the use of cost functions. We use examples from a public employment program, trucking deregulation, and peak-load pricing for public utilities.

The chapter is organized as follows: We begin with a review of the relation between technological possibilities and the concept of a production function. Both predictive and normative uses of the production function approach in a policy setting are illustrated from an analysis of a public employment program. We also provide an example to suggest how cross-sectional empirical data are often used to draw inferences about a production function, and we warn of a pitfall to avoid when the method is used.

Following the section on technological constraints, we compare and contrast concepts of cost: accounting cost, private opportunity cost, and social opportunity cost. We show how knowledge of the social opportunity cost can be used to test for relative efficiency by the compensation principle. The use of these different cost concepts is illustrated by benefit-cost calculations used in the analysis of the public employment program.
The calculations illustrate both predictive and normative analytic tasks.


After reviewing cost concepts, we explore the relations between costs and outputs. The concept of a cost function is explained, and use of the function is illustrated in the evaluation of a regulatory reform concerning interstate trucking firms as well as in a public employment program. In a supplemental section, another type of cost-input relation known as the joint cost problem is discussed, together with its application to peak-load pricing of public utility services. In an appendix to the chapter, we use the mathematics of duality to clarify some of the relations between technology and cost functions and introduce some of the cost functions commonly used in empirical analysis.

Technical Possibilities and the Production Function

The Production Function Is a Summary of Technological Possibilities

The production function summarizes the various technical possibilities for converting inputs, or factors of production, into the maximum possible output. For example, if Q represents output, and the inputs are K for capital and L for labor, the production function may be expressed as

Q = F(K, L)

The idea is that the output may be produced by various combinations of the two inputs, and knowledge of the production function and specific quantities K0 and L0 allows one to infer the level Q0 that is the maximum output that can be produced with that combination. Usually, more output can be produced if more of one of the inputs is available. In mathematical notation, ∆Q/∆K > 0 and ∆Q/∆L > 0.

Let us think of a single technology as a set of instructions for converting specified inputs into some output, exactly like the instructions that come with a model airplane kit, where the various parts are inputs and the model airplane is the output. Note that the economic meaning of "technology" is broader than its common interpretation as a type of machine; there can be technologies in which the only inputs are labor, and more generally, the variations in possible instructions to laborers can be an important source of technological change. For example, suppose we imagine alternative processes of developing computer programs Q to sell to other firms (e.g., to keep track of their accounts receivable) and there are ten computer programmers L and five computer terminals K as the inputs available during a specified production period. There are many ways in which one could imagine organizing these inputs for production: Perhaps some programmers should specialize in drafting the program and others in debugging the drafts. We might instruct each programmer to develop a program from start to finish, or perhaps some programmers should specialize in developing financial programs and others in inventory control. Two time shifts of labor might be developed to allow full utilization of the terminals. Each of these is a way to vary the technology of production.

If the production function for this example is represented as above, Q = F(K, L), then the only information we have is on the maximum output that can be attained with the two types


of inputs. On the other hand, we might consider the ten units of labor as divided into two different types of labor, for example, six programmers who make the program plan LP and four who debug LD, and represent the production function with three inputs as follows:

Q = F(LP, LD, K)

We could extend this to consider the morning M and evening E programmers:

Q = F(LPM, LPE, LDM, LDE, K)

Thus whether the effect of a technological variation can be identified from knowledge of the production function depends upon how the function is defined: the more aggregated the input definitions, the less information is revealed about technical variations.

Often an analyst is expected to be able to determine empirically some aspect of the technology of an actual production process. This typically requires statistical estimation of the production function. Although the statistical procedures are beyond the scope of this text, the theoretical and practical considerations that underlie the analysis can be illustrated. One example arose in regard to the evaluation of the New York Supported Work experiment, a program of the nonprofit Wildcat Service Corporation, which hires ex-addicts and ex-offenders to deliver a wide variety of public services in the city. One group of employees was engaged in cleaning the exteriors of fire stations around the city, and the crews had been working for approximately 6 months. The question raised was whether the productivity of the workers was improving.

Answering this specific question actually had only a latent role in the overall evaluation of the experiment. This particular project was started before the formal experiment, and its participants were not randomly selected. However, the analyst hired to undertake the formal economic evaluation was not well known to the officials operating or funding the program, and they sought some early assurance that his work would be useful.2 This project provided a low-risk opportunity to get some indication of the quality of the evaluation to come. That is, the resolution of the issue was to some degree a test of the analyst's skill and would be a determinant of the seriousness with which his future analyses and recommendations would be taken.

Accurate data were attainable on the inputs used to clean each building. This does not mean that the mass of data was all prepared, sitting and gathering dust on some desk while waiting for an analyst to walk in and have use for it. But the project managers, in the course of conducting routine activities, had maintained various records that the analyst could use in constructing a data set appropriate for this task. The project had several crews, which allowed them to work at different sites simultaneously. They kept track daily of attendance on each site for payroll purposes. For inventory control they kept daily track of the number of water-blasting machines assigned to each crew and the quantity of chemicals used by the crew. Furthermore, precise output data on the square footage of the surfaces cleaned, verified by site visits, were also available.

2 All this is known to me because I was the analyst.


Efficiency, Not Productivity, Is the Objective

Let us consider for a moment what is meant by a productivity increase. Imagine a production process in which output Q is produced with inputs K and L. The average product of labor APL is defined as output per unit of labor:

APL = Q/L

This measure, output per worker, is what is usually referred to when productivity is discussed. Mayors often seek ways of raising the productivity of city employees. On a larger scale this measure applied to the aggregate private economy is often of great concern. For the 27-year period from 1947 to 1973, real productivity in the U.S. private business sector increased every single year. The rate of productivity increase averaged 2.91 percent per year. But from 1973 to 1980, average real productivity growth was only 0.58 percent per year, and in some years it was negative. Similarly, from 1980 to 1991 the average increase in the index was only 1.01 percent, and it was again negative toward the period's end. The source of this 20-year slowdown in productivity is still not well understood, although productivity growth began to pick up in the 1990s and averaged 2.8 percent per year in 1995–1999, when the economy was once again growing at a healthy clip.3

Nevertheless, maximizing productivity is not necessarily a wise or efficient strategy to follow. To see this, we introduce standard diagrams of the total, average, and marginal product curves for one input. These curves show how output varies as the amount of one type of input changes, given a fixed level of all the other inputs. The curves for the input labor are shown in Figures 9-1a and b. If the amount of some other input such as capital increases, then all three of the labor product curves will usually shift upward. (Normally, the more capital each worker has available, the greater the output per worker.)

The total product-of-labor curve TPL shows the total output level as a function of the different labor amounts possible, holding all other inputs constant at a fixed level. It is drawn in Figure 9-1a to increase rapidly at first (as enough laborers become available to use the capital stock) and then more slowly until LT, where it actually begins to decline. (Too many workers jammed into one plant can become counterproductive.) The slowdown in the growth of total product is a consequence of diminishing marginal productivity. The marginal product of labor MPL is the change in output that results from adding one more unit of labor. In Figure 9-1a it is equal to the slope of TPL (= ∆TPL/∆L). It rises at first until it reaches a maximum at LM and then declines; its graph is shown in Figure 9-1b. The average product of labor APL in Figure 9-1a is the slope of the line drawn from the origin to any point on the TPL curve (slope = height/base = total product/labor = APL); it reaches its maximum at LA, where the line from the origin is just tangent to the TPL curve. In Figure 9-1b the APL curve is constructed from the TPL curve in Figure 9-1a.

3 Economic Report of the President, February 1982 (Washington, D.C.: U.S. Government Printing Office, 1982), p. 278, Table B-40, and Economic Report of the President, February 2000 (Washington, D.C.: U.S. Government Printing Office, 2000), p. 362, Table B-47. Aggregate output in the tables is measured by the real gross domestic product in the business sector, and it is divided by the total hours of work of all persons in the sector.



Figure 9-1. The total (TPL) (a) and the marginal and average (MPL, APL) (b) product of labor curves.

The MPL reaches its maximum before the APL does, and the MPL curve always passes through the maximum point of the APL curve, as shown in Figure 9-1b. That is, the marginal product pulls the average product up whenever it is greater (MPL > APL) and pulls it down whenever it is lower (MPL < APL). When the two are equal, the APL is neither rising nor falling; its slope at that point is zero and the APL is at its maximum.
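This relationship is easy to verify numerically. The short sketch below uses a production function of our own choosing, Q = 10L^2 − L^3 with other inputs held fixed (an assumption for illustration only), for which APL peaks at L = 5:

    # Short-run production function (our own illustration): Q = 10L**2 - L**3.
    def tp(L):            # total product
        return 10 * L**2 - L**3

    def ap(L):            # average product, Q/L = 10L - L**2
        return tp(L) / L

    def mp(L):            # marginal product, dQ/dL = 20L - 3L**2
        return 20 * L - 3 * L**2

    # APL peaks at L = 5 (set d(AP)/dL = 10 - 2L to zero); MPL peaks earlier,
    # at L = 10/3. At the APL peak the two coincide:
    print(ap(5), mp(5))                  # 25.0 25.0
    print(mp(3), ap(3), mp(6), ap(6))    # MP > AP before L = 5, MP < AP after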


From this analysis, we can see that maximizing productivity for a given stock of nonlabor inputs implies that the quantity of labor used should be LA. But is that desirable? In general, the answer is no. Imagine that the MPL at LA is three per hour, that labor can be hired at $6.00 per hour, and that output sells at $3.00 each. Then hiring one more hour's worth of labor will cost $6.00 but result in $9.00 worth of extra output. It would be inefficient to forgo this opportunity, since there is clearly room for a deal (consumers and the extra laborer can all be made better off and no one worse off). It does not matter whether productivity is decreasing. The relevant consideration is efficiency: If the value of the marginal product exceeds its cost, then efficiency can be increased by moving away from the maximum productivity. In this example, labor should be hired until the MPL declines to 2. That will be to the right of LA, in general.4

One reason for pointing out the inefficiency of maximizing productivity (aside from its relevance to the public employment problem, to be discussed shortly) is that it can be a tempting mistake to make. Public managers, responding to their mayors' pleas to increase productivity, could decrease efficiency by utilizing too little labor with the available capital stock. Two people and a truck on a refuse collection route may collect 2 tons per day; three people on the same truck may collect 2 1/2 tons per day and thereby cause "productivity" to decrease. But the relevant issue is efficiency: whether the extra cleanliness resulting from the marginal 1/2 ton of removed refuse per collection cycle is worth more than the cost of achieving it.

There are, of course, other ways to change the level of productivity: One can add to the nonlabor inputs such as the capital stock, or one can make technological progress (i.e., use a new technology that generates more output from a given set of inputs than the old technology). Like changes in labor quantities, neither method can be utilized for free. To generate an increase in the capital stock requires that people reduce current consumption in order to save more, and technical progress is usually a consequence of devoting resources to research and development. As with all economic activities, these methods should be pursued only to the extent that we (individually and collectively) are willing to pay the bills for doing so. Nevertheless, the relatively low 1973–1991 U.S. productivity growth was matched by similar sluggishness around much of the world, and the reasons for it, despite many studies, remain elusive. It requires further study to uncover whether this was because of "errors" in resource allocation or "bad luck" in research and development efforts, or simply due to living in a world in which it was truly "more expensive" to buy increases in productivity.5

4 Take the case in which inputs and outputs have constant prices. To the left of LA, where APL is increasing, average output cost is decreasing and profits per unit are increasing. Therefore, there is still room for a deal by employing more labor and expanding output. To the right of LA, profits per unit begin to decrease, but very slowly at first whereas quantity is increasing at a constant clip. Thus total profits are still increasing. Eventually, profits per unit will decline enough that total profits do not increase; that will be the efficient production point.
5 For an excellent introduction to this general problem, see Edward F. Denison, Accounting for Slower Economic Growth (Washington, D.C.: The Brookings Institution, 1979). Leading explanations for the worldwide productivity slowdown are the sharp energy price increase in 1973 (causing much of the energy-using capital stock to become inefficient) and data inadequacies that underestimate productivity gains through technical progress in the service sectors (e.g., the spread of computers). For analyses of the productivity slowdown, see the Symposium in Journal of Economic Perspectives, 2, No. 4, Fall 1988, pp. 3–97; and Zvi Griliches, "Productivity, R&D, and the Data Constraint," American Economic Review, 84, No. 1, March 1994, pp. 1–23. For a focus on measurement techniques, see R. Fare et al., "Productivity Growth, Technical Progress, and Efficiency Change in Industrialized Countries," American Economic Review, 84, No. 1, March 1994, pp. 66–83.


How does this discussion apply to the supported work problem? First, what is the common sense meaning of wanting the workers to be "more productive"? In this case, it really refers to improving the contribution of the worker, all other things being equal. It means things such as better timeliness, better focus on the tasks to be accomplished, and more skill when undertaking the tasks. Productivity of the workers thus should not simply be defined by the APL at different times; this can change for too many reasons irrelevant to the real question of worker improvement (e.g., changes in the capital equipment available). We want to ask whether the supported workers, when utilizing any given quantity of nonlabor inputs, produce more output over time. One might think of this as a skill increase or an increase in "human capital": over time, each worker hour represents more labor. Alternatively, one could think of this as technical progress: The same inputs produce more output because the production process is becoming more refined. This case would be one of labor-augmenting technical progress. Letting a(t) represent a technical progress function (where t is time), we might hypothesize:6

Q = F[K, a(t)L]

The term a(t) can be thought of as an adjustment to the nominal quantity of labor (e.g., hours worked) in order to account for changes in the effectiveness of labor over time. Now if we can estimate the production function, it may be possible to see if labor inputs produce more output over time, other things being equal. That is, we want to know whether a(t1) > a(t0) for t1 > t0. This would be true if ∆a/∆t > 0, meaning that labor is more effective over time. For example, if a(t0) = 1 and a(t1) = 1.2, each hour of labor in t1 has the same effect on output as 1.2 hours in t0.

It is necessary to choose some specific empirical form for the production function. Theoretical considerations provide some guidance about the general shape of the function, but the specific numerical equation selected is then a matter of which fits the data the best. First we provide the theoretical background that helps us to understand likely shapes, and then we will turn to the empirical specifics.
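Before turning to that background, a minimal numerical sketch may make the idea concrete. It is our own illustration, not the actual Wildcat estimation: it assumes a Cobb-Douglas form with a(t) = e^(gt), simulates input-output data, and recovers the progress rate g by least squares.

    # With Q = K**alpha * (a(t)*L)**beta and a(t) = e**(g*t), taking logs gives
    #   ln Q = alpha*ln K + beta*ln L + beta*g*t,
    # so the coefficient on t, divided by the coefficient on ln L, estimates g.
    import numpy as np

    rng = np.random.default_rng(0)
    n, alpha, beta, g = 200, 0.3, 0.7, 0.05   # assumed "true" values
    K = rng.uniform(5, 15, n)
    L = rng.uniform(5, 15, n)
    t = rng.integers(0, 6, n)                 # six monthly observation periods
    lnQ = alpha*np.log(K) + beta*np.log(L) + beta*g*t + rng.normal(0, 0.02, n)

    X = np.column_stack([np.log(K), np.log(L), t])
    coef, *_ = np.linalg.lstsq(X, lnQ, rcond=None)
    print(coef)                # approximately [0.30, 0.70, 0.035]
    print(coef[2] / coef[1])   # implied g, approximately 0.05

A positive and statistically significant estimate of g would support the hypothesis ∆a/∆t > 0, that is, that the workers are becoming more effective over time.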

Characterizing Different Production Functions

We begin with the idea of an isoquant: a locus of input combinations that yield a given output level. This is the production analogue to the indifference curves of utility theory. For example, in Figure 9-2 points A and B illustrate two of the many different input mixes that can be used to produce 30 units of output: point A uses K = 10 and L = 5, and point B

6 Capital-augmenting technical progress is represented as F[a(t)K, L] and neutral technological progress as a(t)F(K, L).



Figure 9-2. Returns-to-scale and production functions: If the isoquant for Q = 60 went through point C, the returns to scale would be constant.

Because A uses more capital relative to labor than does B, we say it is the more “capital intensive” of the two. The isoquant generally has a negative slope: as less of one input (capital) is used, more of another input (labor) is needed in order to keep the output level constant. The absolute slope of the isoquant is called the rate of technical substitution (of labor for capital, given the axes definitions), or RTSL,K, and its economic meaning is the amount of an input (capital) that can be released when one extra unit of another input (labor) is added while output is held constant. Generally, the RTSL,K diminishes as one moves from left to right along the isoquant. When capital is “abundant” and labor is “scarce” (the upper-left portion of an isoquant), the marginal product of capital is “low” whereas that of labor is “high.” So to hold output constant when an extra unit of labor is obtained, “many” units of capital can be released, implying that the slope is steep. The opposite is true on the lower-right portion of the isoquant, where capital is “scarce” and labor is “abundant.” There an extra unit of labor has a “low” marginal product whereas that for capital is “high.” In this case, holding output constant when an extra unit of labor is added means that only “few” units of capital can be released, or that the slope is relatively flat.

The above reasoning suggests a relationship between the marginal products of the factors and the RTS, and indeed there is one. For any small change in the input mix, we can write the amount that output changes as follows:

∆Q = MPK·(∆K) + MPL·(∆L)


That is, the change in output is the sum of two effects. One effect is the amount output changes per unit change in capital (MPK) times the number of units that capital changes (∆K). The other effect is the one owing to the change in labor: the amount output changes per unit change in labor times the number of units that labor changes. Now let this small change in input mix be a very specific type: from one point to another on the same isoquant. For such a change, ∆Q must be zero. Then the above equation becomes:

0 = MPK·(∆K) + MPL·(∆L)

or, rewriting,

−∆K/∆L = MPL/MPK

But the expression on the left-hand side of the equation is simply minus the slope of the isoquant, or what we have defined as the RTSL,K. Therefore,

RTSL,K = MPL/MPK

We will make reference to this relationship later on. But now we continue to explain aspects of production functions.

Two important characteristics of production functions are the returns to scale and the elasticity of substitution. Roughly, the returns to scale concerns just how much output changes when all inputs are increased. Normally isoquants for higher output levels will lie upward and to the right of some initial isoquant (more inputs allow production of greater output). The returns-to-scale characteristic is whether a proportionate change applied to all inputs leads to an output change that is proportionately greater, the same, or smaller (corresponding to increasing, constant, or decreasing returns to scale, respectively).7 For short, we shall refer to these as IRTS, CRTS, and DRTS.

On Figure 9-2, the ray from the origin through point B shows all input combinations that have the same capital-labor ratio (the proportion K/L) as point B itself. We might wonder how far out on this ray we would have to go to double the output level, to Q = 60.

7 A production function F(K, L) may be partially characterized by its scale coefficient φ, where returns are decreasing, constant, or increasing if φ < 1, φ = 1, and φ > 1, respectively. φ is equal to the sum of the elasticities of output with respect to each input:

φ = εQ,L + εQ,K

A quick derivation of this is possible with some calculus. Consider the total differential of the production function:

dQ = (∂Q/∂L)dL + (∂Q/∂K)dK

Divide both sides by Q:

dQ/Q = (∂Q/∂L)(1/Q)dL + (∂Q/∂K)(1/Q)dK

Note that the term on the left-hand side is the proportionate change in output. Now consider changes that are brought about by increasing all inputs by the same proportion α:

α = dL/L = dK/K

Divide both sides of the preceding equation by α or its equivalent:

(dQ/Q)/α = (∂Q/∂L)(L/Q) + (∂Q/∂K)(K/Q)

But the term on the left is just the proportionate increase in output over the proportionate increase in input, or φ; and the terms on the right are the input elasticities. Therefore,

φ = εQ,L + εQ,K


Point C on the ray has twice the inputs of point B. If the isoquant for Q = 60 crosses the ray below point C, as is shown by the solid-line isoquant, then the production function is IRTS (output is doubled for an input increase that is less than double). If, alternatively, the isoquant for Q = 60 crossed the ray above point C, like the dotted-line isoquant drawn, then the production function would be DRTS. The CRTS case, not drawn, would occur if the Q = 60 isoquant crossed the ray directly through point C.

The elasticity of substitution is a measure of the curvature of an isoquant. Denoting it as σ, it is defined as

σ = %∆(K/L) / %∆RTSL,K

In English, it is the percent change in the capital-labor ratio caused by a movement along the isoquant sufficient to change its slope by 1 percent. This is illustrated in Figure 9-3. If the isoquant is sharply curved, then its slope is changing rapidly from point to point. Thus one does not have to travel very far along it (in terms of a changed ratio K/L) to change the slope by 1 percent, so the elasticity is low. At an extreme, right-angle isoquants have zero elasticity.8 On the other hand, relatively flat isoquants have very gradual changes in slope; one has to travel a longer distance (in terms of the amount by which the ratio K/L changes) in order to change their slopes by 1 percent. Thus flat isoquants have high elasticities. In the extreme, a straight-line isoquant has infinite elasticity (no matter how large the change in the ratio K/L, it is not enough to get the constant slope of the isoquant to change by 1 percent).9

Figure 9-3 illustrates both the right-angle and straight-line isoquants, as well as a “middle-of-the-road” one that has σ = 1. The greater this elasticity, the easier it is to substitute one type of input for another. Thus for the right-angle isoquants, which have zero elasticity, it is not possible to maintain an output level at the “corner” by increasing one input and reducing the other (substitution to maintain output is impossible). For the straight-line isoquants, however, one can maintain the output level indefinitely by substituting a fixed quantity of one input for a unit reduction in the other (until the axis is reached).
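The definition of σ can be checked numerically. The sketch below assumes a CES technology (the CES form is introduced in a footnote shortly); for that form the ratio of the marginal products reduces to ((1 − δ)/δ)(K/L)^(1+ρ), so the computed elasticity should equal 1/(1 + ρ). All parameter values are illustrative.

    import numpy as np

    def rts(k_over_l, delta=0.4, rho=1.0):
        """RTS_{L,K} = MP_L / MP_K for a CES technology; the ratio of the
        marginal products reduces to ((1 - delta)/delta) * (K/L)**(1 + rho)."""
        return (1 - delta) / delta * k_over_l ** (1 + rho)

    # sigma = %change in K/L divided by %change in RTS, between two nearby
    # points on the same isoquant
    r1, r2 = 2.00, 2.02
    sigma = np.log(r2 / r1) / np.log(rts(r2) / rts(r1))
    print(sigma)    # 0.5, i.e., 1/(1 + rho) with rho = 1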

8 A production function that has right-angle isoquants is called fixed proportions or fixed coefficients and has the mathematical form Q = min(aK, bL), where a and b are positive constants and “min” means the output level is the minimum of aK and bL.

9 A production function with straight-line isoquants is called linear and has the mathematical form Q = aK + bL, where a and b are positive constants.


Figure 9-3. The elasticity of substitution characterizes the curvature of an isoquant.

While theoretically both the returns to scale and the substitution elasticity can change values at different production points, much empirical work assumes that these characteristics are approximately constant over the range of the production function examined. Production functions that meet these assumptions are called constant elasticity of substitution (CES) functions, and they have been used to approximate a broad range of production processes.10

How do we relate this to the problem of choosing a specific empirical form for the production function used by the Wildcat supported workers cleaning fire station exteriors? Site inspection of the supported work operations provided some basis for judging an appropriate form. Neither of the extreme elasticities of substitution seemed appropriate: The water-blasting machines could not operate themselves (σ ≠ ∞), and there was substitutability between the factors because increased scrubbing of some areas could substitute for more water blasting (σ ≠ 0). In terms of returns to scale, it would be surprising if the data revealed any large differences from the constant-returns case. Buildings large enough to use two crews simultaneously were not cleaned by following procedures very different from those for one-crew buildings.

10 CES production functions have the form

Q = A[δK^(−ρ) + (1 − δ)L^(−ρ)]^(−α/ρ)

where the elasticity of substitution σ = 1/(1 + ρ) and the parameter restrictions are −1 < ρ < ∞, A > 0, α > 0, and 0 < δ < 1. The returns to scale are determined by α, with α < 1 being DRTS, α = 1 being CRTS, and α > 1 being IRTS. For further explanation and empirical studies of the CES function, see K. J. Arrow et al., “Capital Labor Substitution and Economic Efficiency,” Review of Economics and Statistics, 43, August 1961, pp. 225–250, and M. Nerlove, “Recent Studies of the CES and Related Production Functions,” in M. Brown, ed., The Theory and Empirical Analysis of Production (New York: Columbia University Press, 1967).


The production function used to approximate the described features was Cobb-Douglas:

Q = AK^α L^β,  A > 0, 0 < α, β < 1

The returns to scale of this function always equal α + β, and it is often used in a more restricted form with β = 1 − α (i.e., constant returns). It has an elasticity of substitution equal to 1. (Use of a CES function did not provide significantly different results.)11

At this point it might seem fairly easy to apply standard statistical methods to determine A, α, and β by substituting the values of Q (the square feet of building cleaned), K (the number of machine-hours used in cleaning the building), and L (the number of worker-hours) observed for the many buildings cleaned by the project. However, the most significant problem in applying this to the supported work setting was that not all buildings were equally easy to clean. Simply knowing the square footage cleaned did not reflect the difficulty of the task. The buildings varied in height from one to four stories, and the taller buildings required either extensive scaffolding to be erected or the rental of a large “cherrypicker” to carry the workers to the higher parts. Some of the buildings had a great deal of limestone surface, which was more difficult to clean and required special chemical treatment. To get around the problem of these differences, it was hypothesized first that standardized (but unobservable) output QS was produced with Cobb-Douglas technology:

QS = AK^α L^β
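As a quick check on the returns-to-scale claim, the sketch below (with purely illustrative parameter values) doubles both inputs of a Cobb-Douglas function and confirms that output scales by 2^(α+β).

    A, alpha, beta = 1.0, 0.60, 0.45          # illustrative parameters

    def cobb_douglas(K, L):
        return A * K**alpha * L**beta

    K, L, scale = 8.0, 20.0, 2.0
    ratio = cobb_douglas(scale * K, scale * L) / cobb_douglas(K, L)
    print(ratio, scale ** (alpha + beta))     # both ~2.07: scale coefficient = alpha + beta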

11 To see these two points, note the equations for the marginal products in calculus form:

∂Q/∂K = MPK = αAK^(α−1) L^β
∂Q/∂L = MPL = βAK^α L^(β−1)

We can use these equations to find the input elasticities:

εQ,K = (∂Q/∂K)(K/Q) = αAK^(α−1) L^β (K/Q) = αQ/Q = α
εQ,L = (∂Q/∂L)(L/Q) = βAK^α L^(β−1) (L/Q) = βQ/Q = β

Since an earlier note showed that the scale coefficient is the sum of the input elasticities,

φ = α + β

Going back to the marginal product equations, let us divide the second one by the first:

(∂Q/∂L)/(∂Q/∂K) = RTSL,K = (β/α)(K/L)

Substituting this in the definition of σ gives us

σ = [∆(K/L)/(K/L)] / [∆RTSL,K/RTSL,K] = [∆(K/L)/(K/L)] / [(β/α)∆(K/L) / ((β/α)(K/L))] = 1


Then it was assumed that the standardized output QS was the product of the observed output Q, in square feet, multiplied by several factors to correct for the degree of job difficulty.12 The idea can be illustrated by assuming there is only one such factor D:

QS = Q·D^(−ω)

where ω is an unknown constant presumed to be less than 0 (the bigger D for a given Q, the higher QS should be). Now the two equations can be combined to give an expression that is entirely in observable variables but which, when estimated statistically, will reveal the parameters of the production function (as well as some measures of how well the hypothesized form fits the data):

Q = AK^α L^β D^ω

Finally, recall that the motivation for undertaking this analysis was to see if the workers were improving over time. A convenient way to hypothesize a time factor t on labor is

Q = AK^α L^(β+δt) D^ω

where δ is the increment to β (the elasticity of output with respect to labor) per unit of time. If δ is positive, labor is becoming more productive over time. We will not go into the details of estimation, but note what happens when the above equation is put in logarithmic form:

ln Q = ln A + α ln K + (β + δt) ln L + ω ln D

The equation is linear, which allows it to be estimated by standard computer programs for multiple regression analysis simply by entering the logarithms of all the variables as the observations. In the actual estimation, there were twenty-five observations (buildings cleaned), and each was defined to be in one of two periods: t = 0 if the building was cleaned in the first 3 months of the project and t = 1 if in the second 3 months. The standardized production function was estimated as

Q = 50.217 K^0.60 L^(0.45−0.07t)

where K was defined as machine-hours and L as labor-hours. The estimated returns to scale were close to 1, as expected. Note that the coefficient of t is negative; in the second time period, labor was less productive.13

This result did not come completely as a surprise. The raw data had revealed that the unadjusted APL had declined, but it was thought possible that the decline was due to taking on tasks of increasing difficulty.
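The estimation step can be sketched with simulated data (the study's actual building-level observations are not reproduced here). The snippet below generates 25 hypothetical observations from known parameters and recovers them by ordinary least squares on the log-linear form, exactly as the text describes; all variable ranges are invented for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 25                                    # the study had 25 cleaned buildings
    K = rng.uniform(20, 80, n)                # machine-hours (simulated)
    L = rng.uniform(50, 200, n)               # worker-hours (simulated)
    D = rng.uniform(1, 4, n)                  # a single job-difficulty factor (simulated)
    t = (np.arange(n) >= 12).astype(float)    # 0 = first 3 months, 1 = second 3 months

    # "true" parameters used only to simulate output
    A, alpha, beta, delta, omega = 50.0, 0.60, 0.45, -0.07, -0.30
    lnQ = (np.log(A) + alpha * np.log(K) + (beta + delta * t) * np.log(L)
           + omega * np.log(D) + rng.normal(0.0, 0.05, n))   # small noise term

    # ln Q = ln A + alpha ln K + beta ln L + delta (t ln L) + omega ln D
    X = np.column_stack([np.ones(n), np.log(K), np.log(L), t * np.log(L), np.log(D)])
    coef, *_ = np.linalg.lstsq(X, lnQ, rcond=None)
    print(coef)   # approximately [ln 50, 0.60, 0.45, -0.07, -0.30]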

12 A function like this is sometimes referred to as hedonic, which implies that the ordinary (output) measure may have a range of attributes about it that must be known to know its “value.”

13 All signs were as expected on the equation actually estimated. The R² was 0.81. There were three measures of job difficulty (per building): the number of stories high, the cherrypicker rental charge, and the proportion of limestone cleaning solvent to all solvents used. Of these, the cherrypicker variable had a significant coefficient (Student's t statistic > 2). The two input variables and the time variable also were significant.


However, the production function analysis ruled that out by controlling for the effects of job difficulty: Other things being equal, labor was less productive.

In the discussion of these results, two kinds of questions were asked of the analyst: (1) Does the equation predict realistically? (2) Could there be real job-difficulty factors that were omitted from the equation and caused the distortion of the results? To answer the first question, sample predictions were made, and they satisfied the project supervisor that each factor was predicted to have a reasonable (in his experience) effect. As for the second question, the project supervisor again played a key role: He could think of no other job-difficulty factors that were not included in the equation. Thus the result was accepted, in the sense that the decision makers were persuaded that the analytic work revealed to them something that had not been known before and which was of concern to them.14 The analyst had passed the test.15

One of the obvious lessons from the supported work case is that the success or failure of the analytic task can depend crucially on the ability to take a neat concept from theory and figure out how to apply it in a meaningful way to a messy world. Theory can be credited with rejecting the typical productivity measures as unsuitable in this case and for pointing the way toward a form of input-output relation that allowed some substitution among the inputs. But to implement these concepts by taking the path from Cobb-Douglas functions to cherrypicker expenditures per building required learning in some detail about the nitty-gritty of the actual operations being studied.

Often the linkages assumed in ordinary uses of theory do not apply to empirical applications, and one must be careful to interpret the results accordingly. For example, the production function used in theory is one that summarizes all of the technologies that are efficient in the engineering sense (maximum output for the given inputs). But the function we estimated describes the relation between actual outputs and the inputs used. We have no reason to believe that the actual output is the maximum possible output. Nevertheless, we can infer something about the technical efficiency of the program.

How do we draw this inference? The observations during the first period (t = 0) give a lower-bound estimate of the true production function: maximum possible output must be at least as large as actual output. It is reasonable to assume that the actual skills of workers are not decreasing over time. Had the coefficient on labor increased during the second period (t = 1), there would be some ambiguity about whether the actual worker skills had increased or the program managers simply were able to increase the technical efficiency of the cleaning operations.

14 Later it was discovered that not all of the crew members of the masonry cleaning project were receiving the same pay, and that had caused a morale problem during the second period. More careful hiring and promotion policies were instituted as a result.

15 The clients of the analyst also passed a test by their acceptance of negative program results. Most public officials who operate programs or who have recommended funding for such programs wish the programs to be successful. Policy evaluation is not intended to fulfill their wishes; it is intended to be objective. Sometimes officials will resist this, which is one reason why good, truth-telling analysts follow this maxim: “Keep your bags packed!”


However, the observed decrease in labor productivity must be due to lower technical efficiency, since skills are at least the same.16

In the above example the observations used to study technology came from one organization with considerable detail about the inputs and outputs. It is much more common to have observations derived from a cross section of organizations producing the same good and with less detailed knowledge about the inputs and outputs. For example, one might obtain data from the Annual Survey of Manufactures undertaken by the U.S. Census Bureau, where output is reported as the annual dollar value per industry and an input such as labor is measured by the annual number of production worker hours in the industry. Even with these aggregated and less detailed data it is often possible to make inferences about technology.17 However, caution similar to that in the supported work example must be exercised. For example, a common assumption made when using aggregate (industry-wide) observations is that each organization comprising the aggregate is technically efficient. If that is not true, the estimated relation cannot be interpreted as the production function.

To illustrate this concern, Table 9-1 presents two matrices containing data from two industries using highly simplified production processes. In each case we will accept as a given that the true production function is constant returns to scale with fixed coefficients (zero elasticity of substitution), sometimes referred to as Leontief technology18:

Q = min(aK, bL),  a, b > 0

This function is for processes in which the inputs are always used in fixed proportion (= b/a), such as one operator per tractor. Suppose, for example, that a = 4 and b = 4, so that

Q = min(4K, 4L)

Then if K = 20 and L = 10, the output is the minimum of (80, 40), or Q = 40. In this case we say that labor is “binding”; more capital will not increase output, but each extra unit of labor from 10 to 20 will add 4 units of output.

16 It might be argued that current program output (in terms of buildings cleaned) is not the only objective of the program, and what appears as reduced labor productivity might simply represent increased and deliberate program efforts to teach the participants skills that will have a future payoff. This argument is only partially correct. It certainly is true that the program is intended to produce both current and future benefits. However, the analysis above counts only the hours the participants were actually at work cleaning the buildings; furthermore, it controls for the amount of supervision given during those hours. Therefore, the conclusion that the reduced labor productivity is due to lower technical efficiency withstands this criticism. Later in this chapter we will discuss the future benefits as additional outputs of the program.

17 An example of this is a cross-sectional study of pretrial release agencies in the criminal justice system. Agencies that used the technologies of point systems and call-in requirements were far more successful at producing outputs (released defendants who appear at trial as required). See Lee S. Friedman, “Public Sector Innovations and Their Diffusion: Economic Tools and Managerial Tasks,” in A. Altshuler and R. Behn, eds., Innovation in American Government (Washington, D.C.: The Brookings Institution, 1997), pp. 332–359.

18 Wassily Leontief created an economy-wide model of production showing the flows of resources and goods across each industry. This has become known as input-output analysis. A characteristic of the model is that it assumes that inputs in each industry are always used in fixed proportions to one another (at the angle of a right-angle isoquant like that shown in Figure 9-3). See Wassily Leontief, The Structure of the American Economy, 1919–1929 (New York: Oxford University Press, 1951).


Table 9-1
Industry Production Data

                                          Output    Capital    Labor
Technically efficient suppliers
Unobserved but factual
   Supplier 1                                 80         20       20
   Supplier 2                                160         40       40
Observed
   Industry                                  240         60       60
Technically inefficient suppliers
Unobserved but factual
   Supplier 1                                 40         20       20
   Supplier 2                                200         40       40
Observed
   Industry                                  240         60       60

Note that a technically efficient organization will use inputs in only a 1:1 proportion with this technology; otherwise, it could produce the same output level with fewer resources. One could reduce K from 20 to 10 in the above example and still maintain Q = 40.

Returning to Table 9-1, we imagine that the only data available for analysis are the industry totals, and we are trying to deduce the unknown coefficients a and b of the Leontief technologies used by each industry. The example is so designed that the industry totals are identical for each industry, but the underlying production functions are not the same. In the top part of Table 9-1, we assume correctly that each of the two supplier organizations is operating at a point on its production function. This means that for each supplier (represented by subscripts 1 and 2),

aK1 = bL1 = Q1
aK2 = bL2 = Q2

and by addition we get the industry totals:

a(K1 + K2) = b(L1 + L2) = Q1 + Q2

Since we observe that K1 + K2 = 60 and Q1 + Q2 = 240, it must be that a = 4. By analogous reasoning, b = 4. Therefore, the production function is Q = min(4K, 4L), deduced from industry-level data and the assumptions we made.

If we apply the same reasoning to the suppliers in the lower part of Table 9-1, we reach the same conclusion. But in this case it is false. The truth (let us say) is that supplier 2 is operating on the production frontier with an actual production function of

Q = min(5K, 5L)


Supplier 1 is simply operating with technologically inefficient procedures. If it were operating efficiently with inputs K = 20 and L = 20, it would produce 100 units. Thus the industry as a whole is producing only 80 percent of the output it could produce with the given inputs (240/300).

The aggregate data reveal only the average relations between the inputs and outputs of the units comprising the aggregate. In the technically efficient case the average corresponds to the maximal output because each unit is attaining the maximum. But when the units are not operating at technically efficient levels, the aggregate data do not reveal the production function.

It is not easy to determine whether supplier organizations are technically efficient in actuality. The strongest arguments for assuming efficiency are generally made with reference to firms in competitive, private industries. In this environment, it is argued, the only firms that can survive are those that produce at the least possible cost, and thus they must be technically efficient. However, not all economists think actual competitive pressures are strong enough to force this behavior or that firms have the capabilities to reach and maintain technically efficient production over time.19 When one turns to different settings, such as production by public agencies, there is even less generalizable guidance. Thus to estimate the production function referred to in ordinary theory, methods that account for possible variation in technical efficiency may be very useful.20

It still can be useful to identify the actual average relations between inputs and outputs. The supported work example provides one illustration. If estimated with industry-wide observations, the average relations may be used to predict the output effects of a proportionate increase or decrease in resources to each supplier in the sector. But it also might be useful to try to improve the operating efficiency of the organizations, which is something one might overlook if a leap is made too quickly from knowing the inputs to inferring the production function.
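The aggregation pitfall can be reproduced in a few lines. This sketch uses the Table 9-1 numbers and deduces the coefficient a from industry totals under the (possibly false) assumption that every supplier is technically efficient; both industries yield a = 4 even though the second industry's true frontier is Q = min(5K, 5L).

    # Supplier-level rows from Table 9-1: (output, capital, labor)
    efficient   = [(80, 20, 20), (160, 40, 40)]    # both suppliers on Q = min(4K, 4L)
    inefficient = [(40, 20, 20), (200, 40, 40)]    # supplier 2 is on Q = min(5K, 5L)

    def deduce_a(suppliers):
        """Deduce the capital coefficient from industry totals, assuming every
        supplier operates on its production function (a * K_total = Q_total)."""
        q_total = sum(q for q, k, l in suppliers)
        k_total = sum(k for q, k, l in suppliers)
        return q_total / k_total

    print(deduce_a(efficient), deduce_a(inefficient))   # 4.0 4.0
    # The identical totals hide supplier 1's inefficiency: the aggregate data
    # reveal only the average input-output relation, not the true frontier.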

Costs

The concepts of cost are absolutely fundamental to the comparison of alternatives: indeed, the most fundamental sense in which an action, decision, or allocation is “costly” is that there are alternative uses of the resources. (“There is no such thing as a free lunch.”) In this section we will review different definitions of cost that the policy analyst encounters and must understand, and illustrate their use in a variety of predictive and normative applications.

19 See, for example, Richard Nelson and Sidney Winter, An Evolutionary Theory of Economic Change (Cambridge: Harvard University Press, 1982); see also the debate between George Stigler and Harvey Leibenstein: G. Stigler, “The Xistence of X-Efficiency,” American Economic Review, 66, March 1976, pp. 213–216, and H. Leibenstein, “X-Inefficiency Xists—Reply to an Xorcist,” American Economic Review, 68, March 1978, pp. 203–211.

20 For a general review of techniques for measuring productive efficiency, see T. Coelli, D. S. Prasada Rao, and G. Battese, An Introduction to Efficiency and Productivity Analysis (Boston: Kluwer Academic Publishers, 1997), and H. Fried, C. A. Knox Lovell, and S. Schmidt, The Measurement of Productive Efficiency: Techniques and Applications (New York: Oxford University Press, 1993). One method that has been used in many public sector applications to account for variation in technical efficiency is data envelopment analysis (DEA); see, for example, Abraham Charnes et al., eds., Data Envelopment Analysis: Theory, Methodology and Application (Boston: Kluwer Academic Publishers, 1994).


In the first part of this section we introduce the normative concept of social opportunity cost and show how the concept is applied in benefit-cost analysis. In the second part we introduce the concepts of accounting cost and private opportunity cost and compare and contrast them with social opportunity cost. In the third we demonstrate both positive and normative applications of these concepts in the benefit-cost analyses of the supported work program. In the fourth we demonstrate some linkages between cost concepts and technology. One type of linkage is in the form of a cost function, and we illustrate the use of a cost function in an analysis of regulated trucking firms. Another type of linkage occurs when two or more outputs are produced with some shared input, and we illustrate how benefit-cost reasoning can be applied to this joint cost problem to identify the most efficient allocation of resources. An appendix illustrates the relations between production functions and cost functions by using the mathematics of duality and introduces some of the cost functions commonly used in empirical analyses.

Social Opportunity Cost and Benefit-Cost Analysis

The social opportunity cost of using resources in one activity is the value forgone by not using them in the best alternative activity. In Figure 9-4a we represent a simple society with a fixed amount of resources that can be devoted to producing two outputs: food F and shelter S. The production-possibilities curve shows the maximum output combinations that are possible given the resources available and the technological possibilities. If all resources are devoted to shelter production and the best technology is used, SM will be the maximum possible shelter output. The social opportunity cost21 of producing SM is FM, the alternative output forgone.

More typically, we think of smaller changes like that from point A to B. If we are currently at point A (with outputs FA, SA), the opportunity cost of increasing shelter production by ∆S units of S is ∆F units of F. Thus the opportunity cost per unit increase in S is simply ∆F/∆S, and for a small enough change this can be interpreted as the negative of the slope of the production-possibilities curve at the current allocation. This number is called the rate of product transformation (RPTS,F): the minimum number of units of one output (food) that must be forgone in order to increase another output (shelter) by one unit.

The bowed-out shape of the production-possibilities curve can be explained intuitively as follows. The economy is endowed with a stock of factors in a certain proportion K0/L0. The technically best proportion to use with each good is unlikely to coincide with the endowment. Let us say that food is best produced with a relatively labor-intensive technology (low K/L). Then at each end point of the production-possibilities curve (FM and SM), an “inferior” K/L ratio (= K0/L0) is being employed. If we insisted that each product be produced by using the ratio K0/L0, and assumed constant returns to scale, the production-possibility frontier would be the straight line connecting the extreme points.22 However, we know we can do better than the straight line by allowing each industry to use a capital-labor ratio closer to its technically best one: only the weighted average of both has to equal K0/L0. Thus the frontier will be bowed outward.

21 This is often referred to simply as the social cost.


Figure 9-4. (a) Social opportunity costs and production possibilities. (b) The rate of product transformation (RPT) is the social marginal cost (MC).


An interesting interpretive point arises if we are currently at point C and ask, “What is the social opportunity cost of increasing shelter production by ∆S?” Point C is productively inefficient, either because some resources are currently unemployed or because the resources in production are not being used to produce the maximum possible output (for reasons of technological or input-mix inefficiency). The common interpretation is that there is zero social opportunity cost: Society need give up no unit of F, since it can move from point C to point B. However, the social opportunity cost concept seems to ask about the best use of the resources other than to increase S by ∆S. The best alternative is the one in the preceding example: to increase F by ∆F. Both interpretations are correct; they simply answer slightly different questions. The first one answers the question, “What is the change in social cost associated with the resources used for S production at point C and point B, respectively?” Since the same FM − (FA − ∆F) units are forgone by being at either C or B, the change in social cost is zero. There is no increase in social cost because at C we have already paid for the ∆S units even though we do not receive them. The second answer is a correct response to the question, “What is the social cost of those ∆S units?” However, it is not a new cost generated by the move from C to B.

Another way to express the social cost is shown geometrically in Figure 9-4b. We keep the quantity of shelter as the horizontal axis, but the vertical axis measures the RPTS,F: the number of units of food given up to obtain each additional unit of shelter. Thus the height of the solid curve drawn can be thought of as the least social marginal cost of each unit of shelter (in terms of food forgone). The area under the marginal cost curve up to a given quantity is the total social cost of that quantity, since it is simply the sum of the marginal costs of the units of shelter produced. For example, the shaded area in the diagram is the social cost of SA units, which we know equals FM − FA from Figure 9-4a. If inefficient production methods are used, so that the economy is at an allocation like point C in Figure 9-4a, the observed marginal social costs (e.g., MCI in Figure 9-4b) would be above the least marginal social costs, and the difference between them (shown as the cross-hatched area) would be the degree of production inefficiency.

Now we wish to integrate social cost considerations into the compensation principle of benefit-cost analysis. In Chapter 6, we avoided cost concepts by choosing examples in which the net change in consumer surplus revealed all that was necessary. In principle, this can always be done as long as all changes in individuals' budget constraints caused by the allocative change are taken into account. The earlier examples assumed these were held constant except for offsetting transfers. However, allocative changes will, in general, cause some nonoffsetting changes in individual budget constraints, and the question becomes how analysts find out about these.

22 If there were continuing economies of scale without limit, under these assumptions the frontier would be bowed inward, since we lose the economies as we move away from the extreme points. However, constant or decreasing returns to scale are empirically much more likely at these (imagined) extreme uses of society's resources.


It turns out that there are many circumstances in which knowledge of market prices and social costs gives the analyst a neat summary measure of these budget constraint changes. This measure is called the producers' surplus. The more general statement of the compensation principle is that a change is relatively efficient if the net change in the consumer surplus plus the producer surplus is positive. But this turns out to be identical to the difference between social benefits and social costs. Therefore, we can also state the compensation principle in this way: A change in allocation is relatively efficient if its social benefits exceed its social costs. In the simple model below we attempt to explain and clarify these concepts.

Figures 9-5a and b are similar to Figures 9-4a and b except for the addition of a demand side. Imagine as we did in Chapter 8 that we have a Robinson Crusoe economy with no Friday (i.e., a one-person economy). This allows us to draw Crusoe's indifference curves in Figure 9-5a. Crusoe has to choose a production point that, because there is no one with whom to trade, will also be his consumption point. The highest utility that he can attain is Umax, at point C, where the indifference curve is just tangent to the production-possibilities frontier. His optimal production and consumption are FC and SC. All we do below is explain this again using some new concepts that are important and useful for benefit-cost analysis in more complicated, many-person economies.

Since the two curves are tangent at point C, their slopes are equal and thus RPTS,F = MRSS,F at Crusoe's utility maximum. This feature is often used in the many-person economy to identify how much of each good to produce. It is the condition for product-mix efficiency, and it is another necessary condition for Pareto optimality that we review more carefully in Chapter 12. We show here that the rule is essentially identical with choosing the allocation that maximizes social benefits minus social costs (or, equivalently, maximizes net social benefits).

To put this in the context of a benefit-cost analysis, we need a measure of the marginal benefit to Crusoe of each unit of shelter. Through each point on the production-possibilities frontier (his effective budget constraint) there is an indifference curve that crosses it (such as the one labeled U0 crossing point A). The slope of this indifference curve at the crossing reveals just how much food Crusoe is willing to forgo consuming in order to have one additional unit of shelter—the measure of marginal benefit that we are seeking. The MRS is high (the slope is steep) near F*, and it gradually declines as we move along the frontier toward S*. For example, the slope of the indifference curve at point B is much flatter than at point A.

Let us graph on Figure 9-5b Crusoe's MRSS,F at each point on the production-possibilities frontier. His marginal benefit (measured in units of food) declines as the number of units of shelter increases. We also graph the RPTS,F as before. It has a clear interpretation as the least marginal cost (in food units) to him of each unit of shelter. It is no accident that the marginal benefit curve crosses the marginal cost curve at SC, since we know that is the point at which RPTS,F = MRSS,F.

Let us consider how Crusoe would reason as the sole recipient of benefits and bearer of costs.
Imagine him starting at the upper-left end of the frontier at F*, and considering whether marginal increases in shelter (moving rightward along the frontier) are worth the marginal decreases in food that would be required to achieve them.


Figure 9-5. Robinson Crusoe’s Pareto-optimal allocation of resources (a) is the same as his maximization of social benefits minus social costs (b).


For each unit of shelter he would ask if its marginal benefit (the maximum amount of food he is willing to forgo—determined by the slope of the indifference curve) exceeds its marginal cost (the minimum amount of food he would have to forgo—determined by the slope of the production-possibilities frontier). He will choose to produce and consume each unit of shelter for which the marginal benefit exceeds the marginal cost and thus increases his net benefits. He will not go beyond SC units of shelter, because these units have marginal cost greater than marginal benefit and would thus cause a reduction in net benefits. SC is the quantity that maximizes his net benefits, which equal the area GHJ. Thus the marginal benefit–marginal cost reasoning simply reveals utility-increasing changes, and maximizing net benefits leads to the utility maximum (which, in this economy, is the Pareto-optimal allocation).

Let us extend this reasoning, by a two-step process, to the many-person economy. Crusoe still has the same production possibilities, but now imagine that the goods must be bought or sold in the marketplace, where shelter has a price (still measured in food units). This allows him to separate the production decision from the consumption decision. He will make production decisions to maximize his budget, which he can then use to choose his most preferred consumption bundle. Assume the market price is PC—not accidentally the level at which the MRS = RPT. Crusoe the producer will produce and sell to the “market” all the units that have opportunity costs below the market price: SC units.

One way to see this is on Figure 9-5a. Suppose he chose a point on the production frontier with fewer than SC units of shelter, such as point A. Then his budget constraint in the marketplace would be the dashed line through point A with slope PC.23 Note that this line is not tangent to the frontier. He could have a higher budget constraint by choosing a production point on the frontier to the right of point A. But at point C, the budget constraint is just tangent to the frontier. It is the highest one that he can reach. Any production point on the frontier where shelter exceeds SC units, such as point B, gives a budget constraint below the one through point C.

We can see the same logic in a slightly different form in Figure 9-5b. Producing the first unit of shelter means he forgoes producing OJ units of food (his opportunity cost), but then he can sell this shelter unit in the marketplace for PC units of food (which is more than OJ). The difference between the market price and his opportunity cost (PC − OJ) is the “profit” or “surplus” that he gains by producing this shelter unit rather than food—an increment to his budget level.24 He will find it similarly beneficial to produce each unit of shelter up to SC, since each adds some “surplus” to his growing “buying power.” But the opportunity cost of producing shelter units beyond SC is too great (higher than PC). Crusoe would forgo more food in production than he would regain by producing and selling the shelter; it would reduce his buying power. Thus again we see that SC is the quantity that maximizes his purchasing power or budget constraint (which is necessary for maximizing utility).

23 For any production choice (S, F) his budget level is B = PC·S + F, where the price per unit of food is 1.

24 His initial budget level (with no shelter production) is F*. By producing one unit of shelter, he gains PC but loses RPTS,F (= OJ): F* + PC − OJ. So the net change in his budget is PC − OJ.


As a producer, Crusoe receives producer's surplus, or economic rent, defined as payments to the producer above opportunity costs. He makes the production decisions that maximize his producer surplus. This is shown as the darker shaded area JPCH. It is the increase in his budget constraint that results from producing SC in the shelter market (and FC food) instead of the next best alternative (0 shelter, F* food). Note also that if Crusoe chose inefficient production methods (i.e., was not on his production-possibilities frontier), this would show up as a higher marginal cost curve (as we saw before) and therefore a lower producer surplus and a lower budget constraint for consumption.

The consumption side in Figure 9-5b is already familiar. Crusoe the consumer buys from the market all units that have marginal benefit greater than the market price: SC units.25 Thus he makes the consumption decisions that maximize his consumer surplus. His consumer surplus is the lighter shaded area, PCGH. The main point of this illustration is to see that his total net benefit GHJ is the sum of two components: his producer and consumer surpluses. The net social benefit is the sum of the consumer and producer surpluses. In social benefit-cost analysis it makes no difference whether a $1 change occurs in the consumer or the producer surplus; both count equally.

Now let us move to step 2, where there are many people consuming and producing shelter (some people might do both, but others will participate only as demanders or suppliers). Here we simply reinterpret the marginal benefit (demand) and marginal cost (supply) curves as those applying to the many-person market and assume that there are no aggregation problems of the kind discussed in Chapter 6. Then for a given market price the market demand curve reveals the sum of all the individual consumer surpluses (from any quantity of shelter consumption), and similarly the market supply curve reveals the sum of all the individual producer surpluses (from any quantity of shelter production). The sum of the two surpluses is maximized at the quantity at which the market demand intersects the market supply (because that is also the quantity at which marginal benefit equals marginal cost).

In the above example with only two goods, the social cost of shelter is expressed in terms of food forgone. To apply the concept in a multiproduct economy, we measure the costs and benefits in terms of money. That is, the alternative to using resources for shelter is to use them for “dollars for all other things” (rather than simply food). This focuses attention just on the use of resources for shelter, and it implicitly assumes efficient resource allocation within the “dollars for all other things” part of the economy. In later chapters, we shall consider a variety of ways analysts account for different complications that may arise. Nevertheless, knowledge of market demand and market supply curves can be used in many circumstances to assess the degree to which changes in resource allocation have benefits greater than costs (i.e., increase the sum of the consumer and producer surpluses), or equivalently, increase relative efficiency according to the compensation principle.
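A small numerical sketch can confirm the claim that the sum of the surpluses peaks where the two curves intersect. The linear marginal benefit and marginal cost schedules below are invented for illustration; they cross at a quantity of 30.

    import numpy as np

    def mb(s):                  # marginal benefit (demand): illustrative
        return 100.0 - 2.0 * s

    def mc(s):                  # marginal cost (supply): illustrative
        return 10.0 + 1.0 * s

    def total_surplus(q, steps=2000):
        """Consumer plus producer surplus: the area between MB and MC up to q."""
        s = np.linspace(0.0, q, steps)
        return float(np.sum(mb(s) - mc(s)) * (s[1] - s[0]))

    candidates = np.linspace(1.0, 50.0, 197)
    best = max(candidates, key=total_surplus)
    print(best)   # 30.0, where MB = MC (100 - 2s = 10 + s gives s = 30)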

25 This model differs slightly from the earlier one with no marketplace, in that it offers prospects of gains from trading with others. In this model, Crusoe has better consumption possibilities: the straight-line budget constraint lies everywhere above the production-possibilities frontier (which was also the consumption possibilities in the previous model) except at the tangency point C. However, we have deliberately constructed this example with the one market price PC that does not offer him any gains from trade. Note for future reference that any market price P ≠ PC would allow him to reach a higher utility level by consuming a bundle unattainable by his own production (i.e., trade increases welfare).


The introduction of explicit accounting for social costs in the benefit-cost framework does not in any way alter the qualifications about its meaning introduced in Chapter 6. The same questions about the meaning of this aggregated definition of efficiency, as well as the equity implications of using it as a criterion in analysis, remain. All that we have done here is show how benefits and costs are calculated when resource suppliers receive payments other than the least opportunity costs. Each person in the society is a resource supplier as well as a consumer, and we must consider the effects on each side to know how any individual is affected by a change or how society in the aggregate is affected.

Accounting Cost and Private Opportunity Cost

All of the discussion to this point in the section has been about the concept of social cost. This is the concept of most evaluative use to the analyst, but do the available data on costs correspond to it? Often, recorded costs will differ from social costs because a different concept underlies each. The accounting concept of cost is the bookkeeper's view: that which gets recorded on the financial statements and budgets of firms and agencies. It is based on the actual price when purchased, sometimes modified by various conventions for depreciation (of durable goods). A number of examples illustrate important differences between the concepts.

When the nation switched to an all-volunteer army in the 1970s, it was recognized that higher wages and salaries would have to be offered in order to attract volunteers. Some people, thinking of the accounting impact that this would have on the government's budget, argued against the concept because it was “too expensive.” But the change in accounting costs is in the direction opposite that of the change in social costs. The social costs of the army personnel are what they could earn in their best alternatives (reflecting the value of the forgone outputs) and forgone psychic incomes (the best alternatives may offer satisfaction in addition to pay). For draftees, the opportunity costs often exceeded their military wages by substantial amounts. Because many draftees would not choose to volunteer even at the higher wages of the voluntary army, it must be that the social cost of using a given number of draftees in the military exceeds the social cost of using the same number of volunteers. For volunteers, the total social cost cannot exceed the benefits to them of military employment: If a volunteer had a better alternative, he or she presumably would not have volunteered.

There are advantages to having military wage rates set at the social cost rather than below it: (1) The nation now has a more accurate picture of the true resource cost of providing national defense, which usually influences the amount of it sought. (2) The input mix used in the production of national defense had been biased in the direction of too much labor relative to capital, because labor appeared to be so cheap. That source of distortion has been eliminated. Of course, there are many other issues to consider in conjunction with the draftee-volunteer debate. For example, is it equitable for higher-income people to be more easily able to avoid the risk to life that might arise in the military, or is military service more properly viewed as a civic obligation incumbent on all citizens?


Other examples of differences in social costs and accounting costs involving labor abound. Jury duty, for example, is similar in this respect to the military draft: The social opportunity costs are greater than the wages paid jurors. Volunteers in hospitals, election campaigns, and other nonprofit activities may receive no wages, but that does not mean there is no social cost of using their services. In the same vein, an entrepreneur who could earn $30,000 elsewhere may draw no salary while operating a business that ends up with a $25,000 accounting profit. The economist, who defines economic profit as revenues above opportunity costs, would count this as a $5000 loss.26

A third cost concept, private opportunity cost, is defined as the payment necessary to keep a resource in its current use. This is very similar to and often identical with the social cost. Differences between private and social opportunity costs can arise when the prices of resources do not reflect the social costs. In the above example of the army, the private opportunity cost to the draftee is the same as the social opportunity cost, but the private opportunity cost of a draftee to the army equals the accounting cost. If the entrepreneur in the above example produces chemicals but pollutes the neighborhood while doing so (an externality), the social cost of the production exceeds the private cost to the entrepreneur (society not only forgoes alternative uses of the regular inputs to the firm; it also forgoes having the clean air it used to have). In these two examples, the private opportunity cost to the organization equals the accounting cost even though there is a divergence from the social cost. However, the private cost often diverges from the accounting cost; for instance, the divergence occurs in the example of the entrepreneur with a $25,000 “profit” and a $30,000 alternative employment opportunity.

Another very important example, because it is quite general, is the treatment of capital resources such as machinery or buildings. The accountant uses the historical purchase price minus a certain amount of depreciation each year calculated according to a formula.27 However, the historical purchase price is a sunk cost, and it is irrelevant to decision-making. The opportunity cost of employing a machine for a year is what is given up by not selling it now to the highest bidder (the alternative user of the machine). There are two components of this opportunity cost. One is the true economic depreciation, which is the decrement in the selling price of the machine over 1 year. This reduction occurs because there is “less” machine (it obsolesces over time). Sometimes the true economic depreciation can be roughly approximated by the accountant's method of depreciation. The second component is the forgone interest that could have been earned on the money from the sale; this is the opportunity cost of the capital in the machine at the start of the period.28

26 To the extent that these costs are borne voluntarily, they must have benefits to the participants that are at least as great. The hospital volunteer, for example, must consider the benefits of volunteering to outweigh the costs. If the entrepreneur claims to prefer operating the business to a higher-salaried opportunity, there must be nonmonetary benefits that more than offset the financial loss (e.g., pleasure from being the boss).

27 Depreciation rules are somewhat arbitrary. For example, straight-line depreciation is one common method by which a reasonable life span of n years is estimated for a machine and then a fixed percentage equal to 1/n of the total cost is deducted each year until the cost has been fully deducted.


(Together, these components are also the rental value of the machine: what someone would have to pay to rent the machine for a year.) The second component is not taken into account by the bookkeepers, although it is both a private opportunity cost of the firm using the machine and a social cost. The latter point is of particular importance to policy analysts who use program budgets as one source of information about the costs of a government program. Unless the government agency actually rents (from other firms) all the capital it uses, the social costs of its capital resources will not appear in the budget. The analyst must impute them.

To summarize this discussion, the accountant's concept of cost is largely historical. It often differs sharply from the opportunity cost concepts, which refer to the value of the resource in its best alternative use. The difference between social and private opportunity costs is one of perspective. The social cost concept treats the whole society as if it were one large family, so that everything given up by the employment of a resource is counted as part of the cost. The private opportunity cost, the payment necessary to keep a resource in its current use, is the value in its next best alternative use from the perspective of the resource employer. Analysts are most interested in the opportunity cost concepts, because individual decision makers are thought to act on their perception of cost (the private opportunity cost), and the social costs are most relevant to efficiency considerations.
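For the capital-cost point specifically, a one-year opportunity cost can be computed as true economic depreciation plus forgone interest. The sketch below uses invented resale prices and an assumed 5 percent interest rate.

    def annual_capital_cost(resale_now, resale_next_year, interest_rate):
        """Opportunity cost of using a machine for one year: the decline in its
        resale value plus the interest forgone on the funds tied up in it.
        Together these equal the machine's annual rental value."""
        depreciation = resale_now - resale_next_year
        forgone_interest = interest_rate * resale_now
        return depreciation + forgone_interest

    # A machine resellable for $10,000 today and $8,500 in a year, with funds
    # otherwise earning 5 percent:
    print(annual_capital_cost(10_000, 8_500, 0.05))   # 2000.0, the annual rental value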

An Application in a Benefit-Cost Analysis

An early evaluation of New York's Wildcat Service Corporation in its experimental stage can be used to illustrate a number of points about the use of cost concepts. Four different organizational perspectives on costs (and benefits) are shown to have policy relevance.29 In Table 9-2, the social costs and benefits of the program (those known by the end of the second year of the experiment) are summarized. This social benefit-cost calculation is equivalent to a compensation test: benefits and costs are simply a convenient way to organize and then summarize a variety of the effects of the program (the policy change). In making a social benefit-cost calculation, we are asking whether the gains to the gainers (benefits) outweigh the losses to the losers (costs) among all members of the economy.30

The benefits in Table 9-2 can be thought of as the value of the output of the program. The output consists of the goods and services actually produced as a part of the program and the external effects of production.

28 The forgone interest is actually a monetary measure of what is being given up. The real change in resource use is that consumers must forgo some current consumption in order to devote resources to make the machine.

29 The material in this section is drawn from Lee S. Friedman, “An Interim Evaluation of the Supported Work Experiment,” Policy Analysis, 3, No. 2, Spring 1977, pp. 147–170.

30 Interesting questions sometimes arise about who has “standing” in the social benefit-cost calculation. For example, should the preferences of convicted felons count equally along with those of average citizens? More commonly in the global economy, should Country X count equally the benefits and costs of its actions to people in Country Y? The classic reference article on this subject is by Dale Whittington and Duncan MacRae, Jr., “The Issue of Standing in Cost-Benefit Analysis,” Journal of Policy Analysis and Management, 5, No. 4, Summer 1986, pp. 665–682.


Table 9-2
The New York Supported Work Experiment: Social Benefits and Costs per Year in the Experiment per Person

Benefits
   Value added by program to public goods and services      $4519
   Post-program experimental earnings                        1154
   Savings from crime-connected costs
      System                                                   86
      Crime reduction                                         207
   Drug program participation                                   —
   Health                                                    (285)
   Total social benefits                                    $5681
Costs
   Opportunity costs of supported work employees            $1112
   Staff and nonpersonnel expenses                           2362
   Total social costs                                       $3474
Net benefits                                                $2207

Source: Lee S. Friedman, “An Interim Evaluation of the Supported Work Experiment,” Policy Analysis, 3, No. 2, Spring 1977, p. 165.

The measured external effects consist of the increase in the future (out-of-program) earnings stream of participants, the reduction in crime by participants, the reduction in drug treatment, and the change in health of the participants. In all cases the existence of the effects is measured by comparison with an experimental control group: individuals who were found qualified to be participants but were chosen by lottery not to participate. Without the control group it would be virtually impossible to know if the program was having any effect at all.

The listed benefits were measured in such a way as to underestimate their magnitude. For example, the out-of-program earnings increases included only the difference between the experimental and control groups within the first calendar year from the onset of the experiment. Presumably this difference persisted at least to some extent into the future, and thus the true benefit is higher than that measured. This deliberate underestimation of benefits reflects an analytic judgment about how to handle uncertainty about their exact level. If there were no uncertainty, there would be no need for this procedure. Since the analysis indicates that the benefits outweigh the costs, confidence in this conclusion can be tested by deliberately making assumptions conservative (less favorable) to it. Since the conclusion still holds even with known underestimation of benefits, confidence that it is correct increases despite the uncertainty. Sometimes small changes in the assumptions may lead to opposite conclusions, and then the analyst must report that it is really ambiguous whether the benefits outweigh the costs. Of course, an important part of analytic skill is learning how to convert available data into information that minimizes the range of uncertainty about the truth.


In Table 9-2 the costs are shown to be substantially less than the benefits. The component of the cost calculation relevant to the earlier discussion in the chapter is the opportunity cost to society of employing the participants in the program. The actual wages received were substantially higher than the $1112 opportunity cost. But that is irrelevant to the opportunity cost; we wish to know the value of whatever is forgone by the employment of participants in supported work. The traditional measure of this value is the earnings that would have been received otherwise. They are measured quite precisely by the actual earnings of the control group. The measure presumably reflects the value of the marginal product that would be added by this labor. In other words, one reason that the benefits easily outweigh the costs is that the costs are low: The control group members remain unemployed for most of the time, so society gives up little by employing them in the supported environment provided by Wildcat.31

A simple summary of the measurable social benefits and costs is generally inadequate as a basis for understanding or presenting the social effects of the program. Some decision makers might be more interested in one component than another and wish more detail. For example, the value of goods and services is discussed at length in the main analytic reports, and it might be of use to explore whether there are trade-offs between future earnings and in-program output. Or it might be interesting to find out if the participants are really more ill than the controls, or if they just consume more medical services in response to a given illness.32 Other effects may be important but impossible to value in a meaningful way; an example is the effect of the program on the family lives of participants.

However, our purpose here is to emphasize the effects of different perspectives on the costs and benefits. Recall that the calculation of social benefits and costs reflects indifference in dollar terms about who is gaining or losing. However, society is not all one big family, and the costs and benefits to particular subsets of society's members can take on special importance. Through the political system, taxpayers have considerable influence on the spending decisions of government.

31. This issue is more complicated than the above discussion reveals. There is another part to the opportunity cost: Participants forgo not only their alternative earnings but their leisure as well, which must have some value to them. If the unemployment of controls were voluntary, it could be argued that their wage rate (when actually working), applied to the time equivalent of full-time employment, is the social opportunity cost. (This assumes that controls make an optimal labor-leisure trade-off.) However, most analysts accept the idea that much unemployment is involuntary because of imperfections in the labor market. Still, the analysis would be improved by an explicit accounting of that effect. Most evaluations of similar programs also have ignored the value of forgone leisure. In this particular case, accounting for the value of forgone leisure would have been extremely unlikely to affect the conclusion. All the controls were revealed to prefer program participation to nonparticipation, implying that they would forgo their leisure for less than their net increase in tangible first-year benefits of $1703. Although this excludes the fact that future earnings increases are part of the inducement, those must add more to benefits than to costs; furthermore, the participant is also forgoing the private returns to crime and perhaps better health (so all of the $1703 cannot be simply leisure's opportunity cost).
32. The study assumed conservatively that the participants were more ill, based on data that indicate they averaged slightly more time in the hospital per year. However, this average was based on relatively few hospitalizations.

Thus we can ask, from the perspective of taxpayers, how does the Wildcat program look? Table 9-3 shows the major benefits and costs to taxpayers. The primary difference between this perspective and the social perspective is that certain transfers that cancel out in the latter must be made explicit here. On the benefit side, the taxpayer will experience a reduction in welfare payments and a wider sharing of the tax burden as new taxpayers make contributions. These are not included in the social calculation because, at least as a first approximation, the payments are simply transfers of purchasing power (a dollar lost by the participant is offset by the dollar gained by the taxpayer). On the cost side, the actual wages paid the supported work employees are relevant to the taxpayer.

The taxpayer perspective is sometimes represented as the impact on the government budget. There is a difference between the concepts: The taxpayer perspective reveals private opportunity costs, whereas the impact on the government budget is measured by accounting costs. In this case, because capital assets of the program are very small, there is little difference. A more important simplification is treating taxpayers as a homogeneous group: Federal taxpayers in Ohio do not receive the public goods and services that accrue to the residents of New York City. One could further disaggregate the effects into New York taxpayers and other taxpayers; this might be useful in deciding how to share the costs of the program.

A specific example of the perspective of a particular agency was offered as a third benefit-cost perspective in the analysis. The New York City welfare department was one source of program funds. It provided $1.19 per participant-hour in supported work on the theory that this represented a payment it would have to make (to the participants) if the program did not exist. This added up to $1237 per year, and the department also provided certain direct benefits to the participants valued at $842, for a total of $2079. However, the welfare department had to pay out $2639 in benefits to the average control during the same period. Thus the department was getting a bargain: For every $1.00 it put into supported work, it received $1.27 in reduced claims for welfare.

Finally, a fourth important perspective on benefits and costs is that of the participant, for whom the change in disposable income was calculated. The average member of the experimental group received $3769 in program wages and fringe benefits and $1154 in out-of-program earnings, for a total of $4923. To receive this, he or she accepted a welfare reduction of $1797, increased taxes of $311, and forgone earnings of $1112, or $3220 in total costs. Thus the increase in disposable income was $1703. This type of calculation is relevant to determining Wildcat wages. If the net gain to participants is large, taxpayers may be asked to transfer more than is necessary to achieve the net social benefits. If the net benefit to participants is small or negative, then it will be difficult to induce those eligible for the program to apply.

Note that of the four benefit-cost calculations presented, only one has a specific normative purpose: The social benefit-cost analysis reveals whether the economy is made relatively more efficient by the program. The other three calculations can be thought of as summarizing the program's effects from the perspectives of various constituent groups.
By calculating the benefits and costs from the perspectives of different constituencies, one can, for example, predict whether these groups will favor the program.


Table 9-3 The New York Supported Work Experiment: Taxpayer Benefits and Costs per Year in the Experiment per Person

Benefits
  Public goods and services                 $4519
  Welfare reduction                          1797
  Increased income tax collected              311
  Savings from crime-connected costs
    System                                     86
    Crime reduction                           207
  Total taxpayer benefits                   $6920
Costs
  Supported work costs                      $6131
Net benefits                                $ 789

Source: Lee S. Friedman, "An Interim Evaluation of the Supported Work Experiment," Policy Analysis, 3, No. 2, Spring 1977, p. 167.

Or one might use these calculations in evaluating the equity of the program. The calculations also can suggest whether certain changes in the program will increase or reduce support from the various constituencies. In general, these benefit-cost calculations can be made for any program; they require judgment about which groups are the important constituencies.33

In this application, no linkage between the cost concepts and the production function needs to be made. But in other applications, understanding such linkages can be a key point. In the next section, we illustrate this point.
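As a purely arithmetic aid, the sketch below re-tabulates the per-person dollar figures just reported (from Tables 9-2 and 9-3 and the accompanying text) to show the four perspectives side by side; it reproduces the net benefits of $2207 (society), $789 (taxpayers), and $1703 (participants), and the welfare department's $1.27-per-dollar bargain. Only the arithmetic is shown here; the evaluation itself of course required far more.

```python
# Re-tabulating the Supported Work figures reported in this section.
# All dollar amounts are per participant-year, taken from the text.

perspectives = {
    "society": {
        "benefits": 4519 + 1154 + 86 + 207 - 285,   # = 5681 (Table 9-2)
        "costs": 1112 + 2362,                        # = 3474
    },
    "taxpayers": {
        "benefits": 4519 + 1797 + 311 + 86 + 207,    # = 6920 (Table 9-3)
        "costs": 6131,
    },
    "welfare department": {
        "benefits": 2639,            # payments avoided to the average control
        "costs": 1237 + 842,         # = 2079: funding plus direct benefits
    },
    "participants": {
        "benefits": 3769 + 1154,     # program wages/fringes + outside earnings
        "costs": 1797 + 311 + 1112,  # lost welfare + taxes + forgone earnings
    },
}

for group, bc in perspectives.items():
    net = bc["benefits"] - bc["costs"]
    print(f"{group:20s} benefits ${bc['benefits']:5,}  "
          f"costs ${bc['costs']:5,}  net ${net:5,}")
```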

Cost-Output Relations

In this section we will consider how opportunity costs (with private equal to social) vary as a function of the output level. Both producing agencies and analysts are often interested in least-cost production, and most of the standard cost curves are drawn on the assumption that production is or will be at least cost. However, actual observed costs in many situations are not the minimum possible.

Nevertheless, the attempt to identify consistent relations between observed costs and the production function is extremely useful. The primary reason is that it may be much easier to obtain data on costs than on all the different input quantities.

33. During the 1970s one top federal analyst was reported to have directed all his associates to make at least two benefit-cost calculations: one from the social perspective and the other from the perspective of those living in Louisiana. The analyst was from the North, but the powerful chairman of the Senate Finance Committee from 1965 to 1980, Russell Long, was from Louisiana. An interesting study characterizes several different but common political perspectives that cause perceptions of benefits and costs to be skewed in particular ways. See A. Boardman, A. Vining, and W. G. Waters, II, "Costs and Benefits through Bureaucratic Lenses: Example of a Highway Project," Journal of Policy Analysis and Management, 12, No. 3, Summer 1993, pp. 532–555.


We give two examples of decisions that depend on determining aspects of the returns-to-scale characteristic of the production function. The first case involves trucking deregulation, and the second is concerned with the national supported work program. In both cases inferences about scale are made by estimating cost functions rather than production functions.

In the case of trucking, the nature of scale economies in the industry bears on whether the industry ought to be regulated or not.34 If there are increasing returns to scale over a large portion of the total demand for trucking services, then the way to meet that demand and give up the fewest resources in doing so is to have only a few firms or agencies as supply agents. However, if the supply were by private profit-maximizing firms, there would be little effective competition to ensure that the prices charged are close to the opportunity costs or that enough services are actually provided. The typical response to situations like this has been to regulate the prices charged by firms, as with utility companies. In 1994, 41 states regulated their intrastate trucking, and prior to 1980 the Interstate Commerce Commission (ICC) regulated the prices of interstate trucking. Federal legislation has largely done away with these practices.

Under federal regulation there were a large number of interstate trucking firms, not the small number associated with large economies of scale.35 Proponents of regulation argued that the ICC had maintained prices just high enough for many firms to survive and compete, and that without regulation the industry would become extremely concentrated (i.e., have only a few firms) and would present the problems mentioned above. Opponents of regulation argued that there were no significant economies of scale, that a large number of competing firms would therefore continue to exist without price regulation, and that the result would be lower prices to consumers and services essentially unchanged. Under the assumption that these firms operated on their production frontiers (i.e., were technically efficient), knowledge of the returns to scale of the production function would reveal the expected degree of concentration. This knowledge was obtained without ever estimating the production function, simply by studying the cost functions of the firms.

Let us illustrate this point. In general, the total cost TC of producing any output level is the sum of the opportunity costs of the inputs used to produce that level. The average cost AC is simply the total cost divided by the quantity of output. The marginal cost MC is the opportunity cost of the additional resources necessary to produce one additional unit of output. Suppose we have a very simple production function Q = F(L); that is, labor is the only type of input. Let us also assume that a firm can hire all the labor it wishes at the current wage rate w. Then the TC of producing any quantity is wL, and AC is simply:

    AC = TC/Q = wL/Q = w(1/AP_L)

If the firm is technically efficient, this can be rewritten

    AC = wL/F(L)

In this formulation the firm's average cost clearly varies inversely with the average productivity of the one and only factor. The quantity of output at which the AP_L is at a maximum must also be the quantity at which the average cost is a minimum. This illustrates that the output quantity at which the firm's average cost is minimized depends upon the shape of the production function (and technical efficiency).

The relation between a firm's average cost curve and the production function is more complicated when there is more than one input. However, the fact that there is a relation extends to the general case of many inputs. Suppose the inputs used are X_1, X_2, . . . , X_n with associated prices P_1, P_2, . . . , P_n. Imagine three possible production functions that vary in their respective returns to scale. Using superscripts I, C, and D to indicate increasing, constant, and decreasing returns to scale and m to represent some positive constant, let us suppose that the functions are represented by

    m^2 Q = F^I(mX_1, mX_2, . . . , mX_n)
    mQ = F^C(mX_1, mX_2, . . . , mX_n)
    √m Q = F^D(mX_1, mX_2, . . . , mX_n)

When m = 1, all three functions have the same output level. Since the inputs are the same, they have the same total cost (TC_0) and the same average cost (AC_0):

    AC_0 = TC_0/Q = (Σ_{i=1}^n P_i X_i)/Q

But now let us ask what happens to AC if we expand production by multiplying all inputs by some m greater than 1. Then,

    AC^I = TC/(m^2 Q) = (Σ_{i=1}^n P_i mX_i)/(m^2 Q) = m(Σ_{i=1}^n P_i X_i)/(m^2 Q) = AC_0/m < AC_0

Similarly,

    AC^C = TC/(mQ) = (Σ_{i=1}^n P_i mX_i)/(mQ) = AC_0

and

    AC^D = TC/(√m Q) = (Σ_{i=1}^n P_i mX_i)/(√m Q) = √m AC_0 > AC_0

Thus, for a firm or agency that would produce at least cost for given input prices, as the scale of operations increases the AC increases, stays the same, or decreases depending on whether the production function has decreasing, constant, or increasing returns to scale.

One other related concept, crucial to understanding multiproduct organizations, is that of economies of scope: when the cost of producing two (or more) products within one firm is less than the cost of producing the same quantity of each in separate firms. If we denote the least total cost of producing outputs Q_1 and Q_2 in a single firm as C(Q_1, Q_2), then we can say economies of scope are present if

    C(Q_1, Q_2) < C(Q_1, 0) + C(0, Q_2)

Examples of scope economies are common: most automobile firms also produce trucks; most bakeries bake cakes, cookies, bread, and rolls; most computer software companies produce more than one software product; most police departments provide routine patrols as well as investigative services; most universities provide both teaching and research services. In all of these cases (presumably), the producers economize on some of their inputs (management expertise, ovens, programmers, knowledge of criminal behavior, professors) so that it is less expensive to provide multiple products through one organization rather than more.36 One can also inquire, of course, about the presence of "size" economies in an industry composed of multiproduct firms (i.e., can "few" multiproduct firms produce at lower cost than "many" of them).

To relate this to the trucking issue, the size of the firm (measured by the quantity of output it produces) necessary for least-cost production depends on the returns to scale of the production function: The greater the returns to scale, the larger the firms in the industry should be. Furthermore, the observed relation between average cost and output level (for an organization that produces at least cost, and holding input prices constant) reveals the returns to scale: As firm quantity increases, the change in AC will vary inversely with the returns to scale. Therefore, we can look for evidence of the returns to scale by examining the cost-output relationship.

This is precisely what is done in a study by Spady and Friedlaender of the regulated trucking industry.37 There are some problems in measuring the output of each firm—similar to those of the Wildcat example.

34. The behavior of monopolies will be treated in the next chapter, and public policy with respect to natural monopolies will be treated in Chapter 18. Here we present only a bare-bones summary in order to motivate the cost analyst.
35. According to Thomas Moore, there were 14,648 regulated trucking firms in 1974. See p. 340 in T. Moore, "The Beneficiaries of Trucking Regulation," Journal of Law and Economics, 21, October 1978, pp. 327–343.
36. We can measure the degree of scope economies (φ_s) as

    φ_s = [C(Q_1, 0) + C(0, Q_2) − C(Q_1, Q_2)]/C(Q_1, Q_2)

For a good general reference on scope economies and related issues, see W. Baumol, J. Panzar, and R. Willig, Contestable Markets and the Theory of Industry Structure (San Diego: Harcourt Brace Jovanovich, 1988).
37. See Richard H. Spady and Ann F. Friedlaender, "Hedonic Cost Functions for the Regulated Trucking Industry," Bell Journal of Economics, 9, No. 1, Spring 1978, pp. 159–179.
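A quick numerical check of the scale result may be helpful. The sketch below is not from the text: it uses three Cobb-Douglas technologies of the form Q = (KL)^b, whose returns to scale are 2b, and hypothetical input prices. Doubling every input lowers, preserves, or raises average cost exactly as the derivation above predicts.

```python
# A sketch (hypothetical parameters) checking the scale result numerically.
# Q = (K*L)**b has returns to scale 2b, so b = 0.75, 0.50, and 0.25 give
# increasing, constant, and decreasing returns, respectively.

PK, PL = 9.0, 4.0    # hypothetical input prices

def avg_cost(b, K, L):
    """Average cost of the bundle (K, L) under Q = (K*L)**b."""
    Q = (K * L) ** b
    return (PK * K + PL * L) / Q

for b, label in [(0.75, "increasing"), (0.50, "constant"), (0.25, "decreasing")]:
    ac0 = avg_cost(b, 20, 45)     # a base input bundle
    ac1 = avg_cost(b, 40, 90)     # every input doubled (m = 2)
    print(f"{label:10s} returns: AC moves from {ac0:6.2f} to {ac1:6.2f}")
```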


The number of ton-miles carried is the unadjusted measure, but it must be converted into standardized ton-miles, which account for the length of a haul, the shipment size, and other quality factors that lead to differences in the effective output. For example, one firm may make one trip of 1000 miles on open highway, and another firm may make 1000 trips of 1 mile each. The ton-miles are the same, but the outputs are really quite different, and they should not be expected to have either the same value to consumers or the same costs. Similarly, a firm that handles shipment sizes that fill up the truck is producing a different output than one that picks up many small loads to fill the truck. Since these factors vary continuously, rather than each firm producing a small discrete set of different output types, one way to account for them is to create a continuously adjusted effective output measure (similar to the Wildcat example of adjusting the square footage of cleaned structures for job difficulty, so that one very tall building is not necessarily the same output as an equal amount of square footage in the form of two short buildings). An equivalent way to view their analysis is to think of it as estimating the cost function of a multiproduct firm.38

The results of the Spady-Friedlaender analysis are shown graphically in Figure 9-6. The figure shows two alternative specifications of the AC function: one assuming that the output is hedonic (quality variable) and that its quality attributes must be included in the estimation procedure, and the other assuming that the unadjusted ton-miles measure is sufficient (the output is nonhedonic). The results of testing them statistically indicate that the nonhedonic specification is erroneous. One of the crucial insights from the analysis is the importance of not making this specification error. Under the erroneous specification, it appears that there are significant economies of scale: The AC declines as firm size increases from its current average level to that of the largest firm. But under the maintained hypothesis (not rejected statistically as false) of the hedonic specification, the average-size firm is currently very close to the minimum AC, and the lack of further scale economies would discourage expansion of firm size.

The Spady-Friedlaender analysis supported the policy arguments in favor of trucking deregulation. Of course, there are many other aspects of this policy that were considered. Our purpose is simply to illustrate the relevance of the cost-output relation to a policy decision. In 1980 the Motor Carrier Act was enacted, which removed regulatory entry barriers to the trucking industry and significantly reduced the rate-making (price-fixing) authority of the ICC.39 In 1994, Congress passed further legislation barring the states (effective in 1995) from regulating prices, routes, or services for all intrastate trucking services except household moving.40

38. We provide more technical details about cost functions in the appendix to this chapter.
39. For one estimate of the efficiency gains from this decision, see J. Ying, "The Inefficiency of Regulating a Competitive Industry—Productivity Gains in Trucking Following Reform," Review of Economics and Statistics, 72, No. 2, May 1990, pp. 191–201. For a general review of the economics of trucking regulation, see W. Viscusi, J. Vernon, and J. Harrington, Jr., Economics of Regulation and Antitrust (Lexington, Mass.: D. C. Heath and Company, 1992), Chapter 17. For an excellent analysis of the politics of this issue, including the role of policy analysts themselves, see Dorothy L. Robyn, Braking the Special Interests: Trucking Deregulation and the Politics of Policy Reform (Chicago: University of Chicago Press, 1987).


Figure 9-6. Average cost functions (from Richard H. Spady and Ann F. Friedlaender, "Hedonic Cost Functions for the Regulated Trucking Industry," Bell Journal of Economics, 9, No. 1, Spring 1978, p. 172). Copyright © 1978. Reprinted by permission of RAND.

A second interesting application of a cost function bears on the Wildcat experiment and can be explained briefly. On the basis of the interim success of the New York program, a consortium of federal agencies decided to sponsor an expansion to fifteen cities known as the national Supported Work experiment. After the second year of the national program (when start-up costs were not a factor), the average cost per participant-year was significantly greater than in New York alone ($13,562 versus $9853). This was puzzling until analysts decided to look at annual costs in relation to the scale of each site as measured by annual participant-years.41 There turned out to be a clear cost advantage for larger sites, primarily because management expenses of a relatively "fixed" nature could be spread over the larger number of participants. Annual management expenses ranged from over $6000 per participant-year at smaller sites to under $3000 at the larger ones. Thus, many of the sites were not operating at the minimum of the average-cost curve.

In this example the analysts recognized that there are no obvious policy conclusions without further information. Perhaps most importantly, the output of supported work is not measured by the participant-years, and it is possible that the social value of a participant-year at one site was quite different from that at another.

40. At least one study indicated that state regulations were responsible for raising rates above competitive levels. See T. Daniel and A. Kleit, "Disentangling Regulatory Policy: The Effects of State Regulations on Trucking Rates," Journal of Regulatory Economics, 8, No. 3, November 1995, pp. 267–284.
41. See David A. Long and others, An Analysis of Expenditures in the National Supported Work Demonstration (Princeton, N.J.: Mathematica Policy Research, Inc., March 6, 1980).


This is simply another illustration of the need to control for quality when measuring output. Second, the number of participants desirable in each program is best thought of as determined by marginal benefit–marginal cost considerations. Even so, the analysis was useful because it provided some information on the cost side of this calculation and, more importantly, called attention to the issue.

To make sure this last point is clear, in Figures 9-7a and b we contrast the supported work example with the trucking discussion. In both cases it is desirable that each unit of output be produced if its marginal benefit (the height of the demand curve) exceeds its marginal cost. The optimal levels of output are denoted as Q_E in each diagram. In Figure 9-7a, representing supported work, the assumption as drawn is that it is more efficient for one agency to supply the demand in any location (a natural monopoly). This may be, for example, because there are only a limited number of potential employees for whom this kind of employment will lead to social benefits; this shows up in the diagram as a demand curve closer to rather than further away from the origin. Given that one agency has a cost structure like AC and MC, the most efficient level of output is determined by the intersection of the demand curve with the marginal cost curve.42 Like our earlier productivity example of crew size on sanitation trucks, it does not matter whether AC could be improved by a different quantity: The value of any increase in output level from Q_E would not exceed its marginal cost.

In Figure 9-7b, representing trucking, the assumption is that many firms should supply the demand for trucking. Each U-shaped curve represents the AC over the quantity of output one firm can supply. As drawn, one firm would encounter significant diseconomies of scale well before it reached output levels that satisfied market demand. On the other hand, the industry can expand at constant AC simply by adding identical firms. Obviously the least-cost way of producing in this situation is for each firm to operate at the minimum of its AC curve. At the risk of oversimplification, the Spady-Friedlaender analysis was attempting to find out if the trucking cost structure looked more like Figure 9-7a or b.

Joint Costs and Peak-Load Pricing

An additional cost-output relation that arises frequently in public policy analysis is that of joint costs. It occurs when two or more discrete outputs are made from some of the same inputs, and the problem is deducing whether the marginal cost of an output is greater or less than its marginal benefit. A standard example is that both wool and mutton are obtained from lambs. One may know precisely the marginal benefit of additional wool, but how is one to decide on the division of the marginal cost of a lamb between wool and mutton? The resolution lies in cleverly avoiding this question and instead comparing the sum of the two marginal benefits to the marginal cost.

42. This is a simplification by which it is assumed that the total benefits outweigh the total costs and second-best arguments are ignored. The latter are discussed in Chapters 11 (Ramsey optimal pricing) and 15.


Figure 9-7. Contrasting natural monopoly with a competitive industry: (a) Supported work may be a natural monopoly. (b) Trucking may be naturally competitive.


From a public policy perspective, a more interesting example of the same problem is often referred to as peak-load pricing. We will illustrate this using as our joint input an electricity generating plant providing two discrete outputs: day electricity (the peak) and night electricity. These two products, because they are supplied by the plant at different times, are nonrivalrous: an increase or decrease in the amount of plant used to provide one (say, day electricity) does not change the amount of plant available to supply the other (night electricity). Given the demand for day and night electricity and the costs of building and operating the generating plant, what should we supply of each to make marginal benefits equal marginal costs? What prices should be charged so that demanders will utilize the supply? Other similar examples of joint costs are roads, bridges, buses, airport landings, and telecommunications and Internet lines, each providing nonrivalrous services to peak and off-peak users.43

There are many uses of peak-load pricing principles in practice: for example, hotel rates for "in season" and "off season," and higher telephone rates during weekdays compared to evenings and weekends. However, they are less common in the public sector. Interesting exceptions include peak tolls in Orange County, California, for a highway developed through a public-private partnership, tunnels in Marseilles and Oslo, and road use in Singapore.44 It is unfortunate that the public sector does not make more use of this concept. In the California electricity crisis of 2000–2001, for example, the shortages that caused rolling blackouts were due in part to the fact that most consumers did not face higher peak-period prices and thus had too little incentive to conserve and reduce their demands.

Let us assume for our electricity illustration that the (separable) operating costs per kilowatt-hour are 3 cents and the joint cost of providing each kilowatt-hour of capacity is 4 cents per day (the capital costs spread out over the life of the plant). The principle we follow is to provide to consumers all units of capacity the marginal benefits of which outweigh the marginal costs. To identify them, we must proceed in two steps. First, for each group we must find out how much willingness to pay per unit will be left after the operating expenses are paid.

43. The nonrivalrous aspect is what distinguishes a joint cost from a common cost. A common cost is the cost of an input that is used to make several different products, such as a potato-peeling plant that provides the potatoes used for several different products (e.g., potato chips, instant mashed potatoes). The multiproduct firm that owns the plant has to decide how to allocate its costs to the different products. However, these services are rivalrous: if the plant is peeling potatoes for use in making chips, it must stop in order to peel potatoes for use in an instant food. In the normal case, efficiency requires a constant charge per use of the facility (e.g., per peeled potato) regardless of the product for which it is being used. The electricity plant has a common cost aspect to it. Utilities often think of themselves as selling residential, commercial, and industrial electricity (three different products), and have to decide how much of the generating plant to charge to each service. Like the potato-peeling plant, the generating charge per megawatt-hour within a given time period ought to be the same no matter which customer gets it. Unlike peeled potatoes, there are very distinct demands for electricity at different times (e.g., night and day, summer and winter). Thus, the electricity plant is a joint cost across time periods, and a common cost within time periods. Railroad tracks are somewhat like electricity plants in this respect: they are a common cost of freight and passenger service, but a joint cost across time periods (passengers have quite different demands for day and night service).
44. The Singapore road pricing application has been the subject of several studies. See, for example, S. Phang and R. Toh, "From Manual to Electronic Road Congestion Pricing: The Singapore Experience and Experiment," Transportation Research Part E—Logistics and Transportation Review, 33, No. 2, June 1997, pp. 97–106.


For example, if a day demander is willing to pay (has a marginal benefit of) 10 cents for an incremental kilowatt-hour, then 3 cents of this must cover operating expenses, and therefore 7 cents (= 10 − 3) is the residual willingness to pay that could be applied to cover capacity costs. Think of this 7 cents as the marginal benefit available for capacity. Second, we add together the willingness to pay for capacity of the day and the night demanders (since they can share each unit of capacity) in order to see if the sum exceeds the marginal cost and to identify the quantity of capacity at which this sum just equals the marginal cost. We illustrate this in Figures 9-8a and b by using the following two demand curves (one for each 12-hour half day) for day D and night N electricity:45

    P_D = 10 − (1/250)D      0 ≤ D ≤ 2500
    P_N = 8 − (1/250)N       0 ≤ N ≤ 2000

Recall that a demand curve may also be interpreted as a marginal benefit curve, so we may express them as:

    MB_D = 10 − (1/250)D     0 ≤ D ≤ 2500
    MB_N = 8 − (1/250)N      0 ≤ N ≤ 2000

In Figure 9-8a we graph the marginal benefit curves and the marginal operating costs (but not yet the joint capital cost). For each group, the marginal benefit available for capacity is the vertical distance between its (full) marginal benefit curve and the marginal operating cost. For example, the marginal benefit of the 750th kilowatt-hour is 7 cents for day users, and therefore the marginal benefit available for capacity is 4 cents (= 7 − 3). Similarly, the marginal benefit available for the 750th unit of capacity from night users is 2 cents (= 5 − 3). Since the 6-cent sum of these exceeds the 4-cent cost of capacity, it increases efficiency to provide the capacity used to produce the 750th kilowatt-hour for each group.

In Figure 9-8b we graph the marginal benefits available for capacity of each group (MB^C_D, MB^C_N). We also graph the vertical sum of those two curves to show the total marginal benefit available for capacity (MB^C). The amount of capacity to provide is thus found where the total marginal benefit available for capacity intersects the 4-cent capital cost per unit of capacity, or 1000 kilowatt-hours. We can find the same solution numerically. The individual equations are found by subtracting the 3-cent operating costs from the original marginal benefit equations, here denoting quantity as quantity of capacity Q^C:

45. For simplicity, we make the unrealistic assumption that the demand during either time period is independent of the price in the other time period. This eases the exposition but is not necessary to illustrate how such costs bear on allocative decisions.

Figure 9-8. Efficiency with joint costs—peak-load pricing of electricity: (a) The demands for day and night electricity. (b) The most efficient capacity (1000) is where MB^C (= MB^C_D + MB^C_N) equals the marginal capacity cost. (c) If the marginal cost is only 1 cent, then the most efficient capacity is 1500.

    MB^C_D = 7 − (1/250)Q^C    for 0 ≤ Q^C ≤ 1750;    0 otherwise
    MB^C_N = 5 − (1/250)Q^C    for 0 ≤ Q^C ≤ 1250;    0 otherwise

We then sum these to find the total marginal benefit available for capacity (MB^C):46

    MB^C = 12 − (2/250)Q^C    for 0 ≤ Q^C ≤ 1250
         = 7 − (1/250)Q^C     for 1250 ≤ Q^C ≤ 1750
         = 0                  otherwise

To find the quantity of capacity at which marginal benefit MB^C equals marginal cost, we use the part of the expression relevant for MB^C = 4 cents:47

    MB^C = 12 − (2/250)Q^C = 4

whence Q^C = 1000. Thus the most efficient capacity to provide is 1000 units. But how will it be utilized, and what prices should be charged? To answer these questions, we enter the 1000-unit capacity figure into our equations to solve for the other unknowns:

    MB^C_D = 7 − (1/250)(1000) = 3
    MB^C_N = 5 − (1/250)(1000) = 1

Note that by design these sum to 4 cents, the marginal cost of capacity. Continuing to work backward through the equations, note that if the marginal benefit available for capacity from a day user is 3 cents, then the ordinary marginal benefit (MB_D) must be 6 cents (adding the 3-cent operating cost back in), and similarly MB_N must be 4 cents (= 1 + 3). In the original demand equations, the corresponding prices P_D of 6 cents and P_N of 4 cents cause day and night demand each to equal precisely 1000 kilowatt-hours. Thus capacity is fully utilized both day and night, the (peak) day price is 6 cents, the (off-peak) night price is 4 cents, and when operating costs are subtracted this leaves capacity contributions of 3 cents from day users and 1 cent from night users. In this particular problem, because marginal capacity costs are assumed to be constant at 4 cents, these contribution shares exactly cover the costs.

An interesting insight can be had if we make a minor change in the problem. Suppose that the marginal cost of a unit of capacity is only 1 cent rather than 4 cents (see Figure 9-8c). Then the relevant part of the market willingness-to-pay equation is the one in which night demanders have zero marginal willingness to pay:

    MB^C = 7 − (1/250)Q^C = 1

46. Note that to add demand curves in the usual horizontal way, we have quantity on the left of the equation and a price expression on the right. We add to find the total quantity for a given price. To find the vertical sum, however, we write the equation so that price is on the left and a quantity expression is on the right. This gives us the total willingness to pay for an incremental unit at any given quantity. We do this only when different consumers receive benefits from the same resource.
47. If one substitutes MB^C = 4 and uses the other line segment, 7 − (1/250)Q^C, one finds Q^C = 750. But this quantity is not in the range 1250 ≤ Q^C ≤ 1750 where that segment applies. Thus, it is not a valid solution.


whence Q^C = 1500 and MB^C_N = 0. That is, night demanders do not get allocated any portion of the capacity costs: The marginal capacity is there only because the day demanders are willing to pay for it. Of course, both groups must pay the marginal operating costs they impose, so P_D = 4 cents and P_N = 3 cents. Thus, the capacity is fully utilized during the day (D = 1500), but at night there is unused capacity (N = 1250).
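The whole calculation can be checked with a short search procedure. In the sketch below the demands and costs are the chapter's; only the code itself is new. It expands capacity one kilowatt-hour at a time as long as the summed willingness to pay for capacity covers its marginal cost, and it reproduces both cases: capacity of 1000 with prices of 6 and 4 cents, and capacity of 1500 with prices of 4 and 3 cents.

```python
# Recomputing the peak-load solution (prices in cents per kilowatt-hour).

OPERATING = 3.0   # separable operating cost, each period

def mb_day(q):
    return max(10 - q / 250, 0.0)

def mb_night(q):
    return max(8 - q / 250, 0.0)

def mb_capacity(q):
    # Total willingness to pay for the q-th unit of capacity: each period
    # contributes its marginal benefit net of operating cost, floored at zero.
    return max(mb_day(q) - OPERATING, 0.0) + max(mb_night(q) - OPERATING, 0.0)

def efficient_capacity(capacity_cost, step=1):
    q = 0
    while mb_capacity(q + step) >= capacity_cost:
        q += step
    return q

for capacity_cost in (4.0, 1.0):
    q = efficient_capacity(capacity_cost)
    p_day = mb_day(q)
    p_night = max(mb_night(q), OPERATING)   # night users never pay below operating cost
    print(f"capacity cost {capacity_cost:.0f}: capacity {q}, "
          f"P_day {p_day:.0f}, P_night {p_night:.0f}")
```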

Summary

In this chapter we examined the role of technology and costs as constraints on an organization's supply decisions. Policy analysts use these concepts in a variety of predictive and evaluative ways. One important way arises in considering technical efficiency in the public sector. For example, to discover if the skills or productivity of participants in a job-training program are improving, it may be necessary to estimate the observed technical relation between inputs and outputs in order to isolate the changing contribution of the trainee. To do this requires a theoretical understanding of production functions, a good working knowledge of the operations being studied, and certain statistical skills (not covered here).

We explained how production functions may be characterized in terms of their returns to scale and their elasticities of substitution. At a purely theoretical level it is helpful to understand the differences between concepts of efficiency and productivity. We illustrated this by examining the tempting mistake of maximizing productivity that a city manager or mayor might make.

Numerous practical difficulties stand in the way of discovering the empirical truth about input-output relations. One of these, discussed in the examples of masonry cleaning and trucking, is the importance of accounting for quality variations in the output. The hedonic method of accounting for these variations was illustrated at a simplified level. Another difficulty, common in the public sector, is having any good measure of output; we illustrated this with education but could raise the same question in other areas (e.g., how should we measure the output of public fire protection?). In looking at the relation between costs and participant-years in the national supported work experiment, analysts were able to identify the possibility of substantial managerial scale economies while being sensitive to the inadequacy of participant-years as an output measure.

The concepts of cost are fundamental to economic choice. They are used in virtually every policy analysis that considers alternatives. We explained how social costs are relevant to the compensation principle as embodied in social benefit-cost analysis. The important distinctions among social opportunity costs, private opportunity costs, and accounting costs were reviewed. The use of the different cost concepts was illustrated in the analysis of the New York supported work experiment; these included the social benefit-cost calculation


(used to measure the change in relative efficiency) and other benefit-cost calculations from the perspective of different constituent groups that could be used to predict their responses to the program. The latter do not speak to efficiency, but can be very helpful in designing feasible and fair programs.

The opportunity cost concepts can have very important linkages to production functions. These linkages are most likely in environments where the supply organization produces at a technically efficient level. They are often assumed to hold for private, profit-making firms, such as those in the regulated trucking industry. Then the relation between a firm's cost function and its production function can simplify certain analytic tasks. We illustrated this, at a simplified level, by showing how the Spady-Friedlaender study inferred the technical returns to scale of trucking firms from an analysis of their cost functions. The analysis, which indicated that no significant scale economies were available by expansion of the average firm, lent support to the arguments in favor of trucking deregulation. In the appendix, we provide more technical details about this duality relation between cost and production functions.

We also examined one other type of cost-output relation: the joint cost problem. We illustrated the most efficient solution to joint costs in the context of peak-load pricing problems, as might arise in utility pricing, bridge and tunnel crossings, commuter railroads, or roads. Benefit-cost reasoning was used to identify the solution.

Exercises

9-1 A public employment program in San Francisco for recently released ex-offenders paid each employee $225 per week. However, a study showed that each employee only caused output to go up by $140 per week. Therefore, the social benefits of the program are less than the social costs.
a. Criticize the social cost measure.
b. Criticize the social benefit measure.

9-2 The director of public service employment for a small city funded two different programs last year, each with a different constant-returns-to-scale production function. The director was not sure of the specification of the production functions last year but hopes to allocate resources more wisely this year. Production data were gathered for each program during three periods last year and are listed in the accompanying table.

             Program A               Program B
Period    K_A   L_A   Q_A        K_B   L_B   Q_B
  1       24    26    48         25    25    50
  2       24    28    48         25    36    60
  3       24    22    44         25    16    40


a. Program A operates with a fixed-proportions production function. What is it? [Answer: Q = min(2K, 2L).] In period 3 what are the marginal products of capital and labor, respectively? In period 2 what is the elasticity of output with respect to labor? (Answer: 0.)
b. Program B operates with a Cobb-Douglas production function. What is it? (Answer: Q = 2K^{1/2}L^{1/2}.) In period 3 what are the marginal products of capital and labor? In period 3 what is the elasticity of output with respect to capital?
c. Suppose in the third period that the capital was fixed in each program but you were free to allocate the 38 labor units between the two programs any way you wished. If each unit of Q_B is equal in value to each unit of Q_A, how would you allocate labor to maximize the total value of outputs? (Answer: L_A = 24; L_B = 14.)
d. Suppose you could use two Cobb-Douglas production processes (C and D) to produce the same output:

    Q_C = K_C^{1/3} L_C^{2/3}
    Q_D = K_D^{1/2} L_D^{1/2}

If you then had 100 units of capital and 105 units of labor, how would you allocate them between the two processes to maximize total output? (Answer: K_C = 47, L_C = 67, K_D = 53, L_D = 38.)
e. Suppose your budget were large enough to employ 100 units of either labor or capital, and the cost of a unit of labor was the same as a unit of capital. The production function is Q_D = K_D^{1/2} L_D^{1/2}. Given that output must be at least 30, what is the maximum number of people you could employ? (Answer: L = 90.)

9-3 You are an analyst for a metropolitan transportation authority. You are asked if it would improve efficiency to buy more buses, and if so, how many more should be bought. Currently, there are eighty buses. The operating cost of a bus is $30 during the day and $60 during the night, when higher wages must be paid to drivers and other workers. The daily capital cost of a bus, whether or not it is used, is $10. The demands for buses aggregated over persons and stops during the 12 hours of day and night, respectively D and N, are

    Q_D = 160 − P_D
    Q_N = 80 − P_N

What is the efficient number of buses? What prices should be charged to induce efficient ridership? Will all the buses be in use at night? (Answer: 120 buses; P_D = $40, P_N = $60; no.)
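The stated answers to exercise 9-3 can be verified with the same joint-cost logic used in the electricity example; the following sketch is one such check (the search procedure is new, not part of the exercise).

```python
# Checking the stated answers to exercise 9-3.

DAY_OP, NIGHT_OP, CAPITAL = 30, 60, 10

def wtp_capacity(q):
    # Willingness to pay for the q-th bus, net of operating costs in each period.
    day = max((160 - q) - DAY_OP, 0)
    night = max((80 - q) - NIGHT_OP, 0)
    return day + night

buses = 0
while wtp_capacity(buses + 1) >= CAPITAL:
    buses += 1

p_day = 160 - buses              # price at which day riders fill all buses
q_night = 80 - NIGHT_OP          # at P_N = $60, only 20 buses demanded at night
print(buses, p_day, q_night)     # 120 40 20 -> night use falls short of capacity
```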


APPENDIX
DUALITY—SOME MATHEMATICAL RELATIONS BETWEEN PRODUCTION AND COST FUNCTIONS

In this appendix we examine briefly some of the mathematical relations between production functions and costs when it can be assumed that the supplier will operate at least cost. This material is helpful for understanding (and undertaking) work such as the Spady-Friedlaender analysis.

The problem of choosing the cost-minimizing inputs given a certain production function is usually explained by a diagram much like Figure 9A-1. Given a production function Q = F(K, L) and input prices P_K and P_L, suppose we are told to produce an output of Q = 30 at the least cost. The isoquant for Q = 30 is shown in the figure. An isocost line is a locus of inputs such that P_K K + P_L L is constant. This line has the slope −P_L/P_K. Thus, geometrically we wish to be on the lowest possible isocost line that reaches the isoquant where Q = 30. This occurs where the isocost line is just tangent to the isoquant. At that point the marginal rate of technical substitution (RTS_{L,K}, the negative of the slope of the isoquant) equals the input price ratio P_L/P_K.

Recall that the RTS is equal to the ratio of the marginal products at any point. To see this, remember that the change in output dQ along an isoquant is zero. That is, Q = F(K, L) and along an isoquant (taking the total differential)

    dQ = (∂F/∂K)dK + (∂F/∂L)dL = 0

These terms can be rearranged so that

    (−dK/dL)|_{Q=const} = (∂F/∂L)/(∂F/∂K) = MP_L/MP_K

The term on the left-hand side is the negative of the slope of an isoquant, and thus it is by definition equal to RTS_{L,K}. Thus,

    RTS_{L,K} = MP_L/MP_K

For the general problem of the least-cost (C) input choice for the production of Q units of output, we may formulate it in calculus terms as

    C = P_K K + P_L L + λ[Q − F(K, L)]


Figure 9A-1. The least-cost method of producing a given output level.

Thus, the first-order conditions for a cost minimum are

    ∂C/∂K = P_K − λ(∂F/∂K) = 0
    ∂C/∂L = P_L − λ(∂F/∂L) = 0
    ∂C/∂λ = Q − F(K, L) = 0

Dividing the second equation by the first after rearranging slightly gives us the calculus proof of the geometric argument:

    P_L/P_K = (∂F/∂L)/(∂F/∂K) = MP_L/MP_K = RTS_{L,K}

This simply reiterates that the least-cost input choice will be the point on the isoquant whose slope is minus the ratio of the input prices.

This reasoning allows us to find the cost-minimizing input choice given a specific production function, input prices, and a desired output level. For example, suppose the production function is Cobb-Douglas:

    Q = K^{1/2} L^{1/2}

and P_K = 9, P_L = 4, and the desired output is 30. We know the correct point on the isoquant will have slope −4/9. To find an expression for the isoquant slope in terms of K and L, let us find the marginal productivity equations directly from the production function and then combine them:

    MP_L = ∂Q/∂L = (1/2) K^{1/2} L^{−1/2}
    MP_K = ∂Q/∂K = (1/2) K^{−1/2} L^{1/2}
    RTS_{L,K} = MP_L/MP_K = K/L

Thus K/L must equal 4/9, or K = 4L/9. Now we can substitute directly into the production function and solve for Q = 30:

    30 = K^{1/2} L^{1/2} = (4L/9)^{1/2} L^{1/2} = (2/3)L

whence L = 45, K = 20, and C = P_K K + P_L L = 360.

Refer back to the general calculus formulation of this problem:

    C = P_K K + P_L L + λ[Q − F(K, L)]

Note that ∂C/∂Q = λ; that is, λ can be interpreted as the marginal cost of increasing the output level by one unit. We know from the first (or second) equation of the first-order conditions:

    λ = P_K/(∂F/∂K)

Thus in our specific problem we can identify the marginal cost by substituting the correct expressions:

    λ = 9/[(1/2)(20^{−1/2})(45^{1/2})] = 9/[(1/2)(1.5)] = 12

That is, it would cost $12 to expand output by one more unit (in the least-cost way).
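The same answer can be confirmed by direct numerical optimization. The sketch below assumes the scipy library is available and simply minimizes total cost subject to the output constraint; it recovers K = 20, L = 45, and C = 360.

```python
# Verifying the appendix example numerically:
# minimize 9K + 4L subject to K^(1/2) L^(1/2) = 30.

import numpy as np
from scipy.optimize import minimize

PK, PL, Q_TARGET = 9.0, 4.0, 30.0

result = minimize(
    lambda x: PK * x[0] + PL * x[1],                        # total cost
    x0=[10.0, 10.0],
    bounds=[(1e-6, None), (1e-6, None)],
    constraints={"type": "eq",
                 "fun": lambda x: np.sqrt(x[0] * x[1]) - Q_TARGET},
)
K, L = result.x
print(f"K = {K:.1f}, L = {L:.1f}, C = {result.fun:.1f}")    # 20.0, 45.0, 360.0
```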


Now, one of the problems with the above method of solution, simply a calculus version of the usual geometric argument, is that it can be somewhat tedious. It certainly would be nice if there were a simpler way to find the answers. Economists using the mathematics of duality have associated a cost function with each "well-behaved" production function, from which it is often much simpler to find the answers to problems like the one just solved. Of course, if one had to derive (as we will shortly) the cost function from the production function each time, there would be no gain. But the general equation for the cost function can be expressed just like the general equation for a production function. For example, it is no more difficult to remember (or look up) the Cobb-Douglas cost function than the Cobb-Douglas production function.48

Let us first give the general definition of a cost function: A cost function C(Q, P_1, P_2, . . . , P_n) is a relation that associates with each output level and set of input prices the least total cost of producing that output level. Once a cost function is known, it is easy to derive the standard cost curves from it. For example:

    MC(Q) = ∂C/∂Q

and

    AC(Q) = C(Q, P_1, P_2, . . . , P_n)/Q

To get a better understanding of the cost function, let us derive it for Cobb-Douglas production technology:

    Q = K^α L^{1−α}

To find the cost function, we must solve for the cost minimum in the general problem:49

    C = P_K K + P_L L + λ(Q − K^α L^{1−α})

The first-order conditions are:

    ∂C/∂K = P_K − αλK^{α−1} L^{1−α} = 0        (i)
    ∂C/∂L = P_L − (1 − α)λK^α L^{−α} = 0       (ii)
    ∂C/∂λ = Q − K^α L^{1−α} = 0                (iii)

48. The dual approach on the supply side is analogous to the dual approach to consumer demand reviewed in the appendix to Chapter 6. The textbook by Hal R. Varian, Microeconomic Analysis (New York: W. W. Norton & Company, Inc., 1992) provides a good introductory approach. A more advanced reference is M. Fuss and D. McFadden, eds., Production Economics: A Dual Approach to Theory and Applications (Amsterdam: North-Holland, 1978).
49. Note that this is the same formulation one would use to find the expenditure function associated with a Cobb-Douglas utility function.


The solution requires that we express the cost C as a function of only the output level and the input prices (and the fixed parameters). Since C = P_K K + P_L L, we will use the first-order conditions to substitute for K and L in this equation. By dividing (i) by (ii) after rearranging, we see that

    P_K/P_L = [α/(1 − α)](L/K)

or

    P_K K = [α/(1 − α)]P_L L

or

    K = [α/(1 − α)]P_L L/P_K

Therefore,

    C = P_K K + P_L L = P_L L[1 + α/(1 − α)] = P_L L[1/(1 − α)]

Now we have only to rid ourselves of the L in the above expression. To do so, we use condition (iii), Q = K^α L^{1−α}. On substituting our expression for K derived from (i) and (ii), we have

    Q = {[α/(1 − α)](P_L/P_K)}^α L^α L^{1−α}
    Q = {[α/(1 − α)](P_L/P_K)}^α L
    L = Q[α/(1 − α)]^{−α}(P_L/P_K)^{−α}

Now we may substitute this in the expression for the cost:

    C = P_L L[1/(1 − α)] = P_L(P_L/P_K)^{−α}[α/(1 − α)]^{−α}[1/(1 − α)]Q
      = P_L^{1−α} P_K^α α^{−α}(1 − α)^{α−1} Q

or, letting δ = α^{−α}(1 − α)^{α−1},

    C = δP_L^{1−α} P_K^α Q


Of course, this was tedious to derive, but the point is that the derivation need not be repeated: This expression is no more difficult to remember (or refer to) than the production function itself. Now let us resolve our problem, where we are given Q = 30, P_K = 9, P_L = 4, and α = 1/2 (δ = 2). The least cost is obtained by simply plugging in the formula:

    C = 2(4^{1/2})(9^{1/2})(30) = 360

But what are the inputs? They are simply the values of the partial derivatives of the cost function with respect to prices! That is, if we let X_i(Q, P_1, P_2, . . . , P_n) denote generally the optimal level of the ith input given output Q and input prices, we have Shephard's lemma:50

    ∂C(Q, P_1, P_2, . . . , P_n)/∂P_i = X_i(Q, P_1, P_2, . . . , P_n)

In other words, this simple derivative property of the cost function can be used to reveal the derived demand curve for any factor holding the output level and other prices constant (i.e., it is a "compensated" derived demand curve). To make this more concrete, let us apply Shephard's lemma to the Cobb-Douglas case:

    L = ∂C/∂P_L = (1 − α)δP_L^{−α} P_K^α Q
    K = ∂C/∂P_K = αδP_L^{1−α} P_K^{α−1} Q

These functions are the equations for optimal input demand conditional on the level of Q. For our specific example, assume everything is given but P_L. Then the derived demand curve (holding Q at 30 and P_K at 9) is

    L = (1/2)(2)P_L^{−1/2}(9^{1/2})(30) = 90P_L^{−1/2}

50. A proof of Shephard's lemma offered by Varian, Microeconomic Analysis, p. 74, is instructive. Let X̂ be the vector of inputs that is cost-minimizing at prices P̂ and output level Q. Now imagine considering other cost-minimizing input vectors X that are associated with different prices P but the same output level. Define the cost difference between the X and X̂ input vectors at prices P as CD(P):

    CD(P) = C(P, Q) − PX̂

Since C is the minimum cost at prices P, C is less than PX̂ for all P except P̂ (where they are equal). Thus this function CD(P) attains its maximum value (of zero) at P̂, and its partial derivatives must all equal zero at that point (the ordinary first-order conditions for optimization). Thus,

    ∂CD(P̂)/∂P_i = ∂C(P̂, Q)/∂P_i − X̂_i = 0
    ∂C(P̂, Q)/∂P_i = X̂_i


This derived demand curve tells us the optimal level of L for any price $P_L$ (conditional on the other factors). Thus, when $P_L = 4$, $L = 45$. Similarly, we find the optimal K in our problem:

$$K = \tfrac{1}{2}(2)(4^{1/2})(9^{-1/2})(30) = 20$$

We can also derive the standard cost curves for the Cobb-Douglas with about as little effort:

$$MC(Q) = \frac{\partial C}{\partial Q} = \delta P_L^{1-\alpha} P_K^{\alpha}$$

or, for our specific function,

$$MC(Q) = 2(4^{1/2})(9^{1/2}) = 12$$

That is, the marginal cost curve associated with Cobb-Douglas technology is constant. Of course, this is always true for constant-returns-to-scale production functions. The average cost must thus have the same equation:

$$AC(Q) = \frac{C}{Q} = \delta P_L^{1-\alpha} P_K^{\alpha}$$

In the main part of the text it was mentioned that most empirical studies of production assume Cobb-Douglas or CES technology. While these may often be good approximations, their use is due more to their ease of statistical estimation than to any strong belief that technology has a constant elasticity of substitution. However, a new freedom arises with the cost function approach: several functional forms have been discovered that are easily estimable statistically but are less restrictive in terms of the type of production function that might underlie them. Two will be mentioned briefly here. The first is the generalized Leontief cost function:51

$$C(Q, P_1, \ldots, P_n) = Q \sum_{i=1}^{n} \sum_{j=1}^{n} a_{ij} P_i^{1/2} P_j^{1/2}$$

where $a_{ij} = a_{ji}$. The $a_{ij}$'s are fixed parameters of the function. For a two-factor technology this may be written

$$C(Q, P_K, P_L) = Q(a_K P_K + a_L P_L + 2a_{LK} P_L^{1/2} P_K^{1/2})$$

This generalized function is linear in the parameters, so it could easily be estimated by statistical methods. It corresponds to the fixed-proportions Leontief technology when $a_{ij} = 0$ for $i \neq j$.52

51 This was derived by W. Diewert, "An Application of the Shephard Duality Theorem: A Generalized Leontief Production Function," Journal of Political Economy, 79, No. 3, May/June 1971, pp. 481–507.
52 The reader may wish to prove this as an exercise; a symbolic sketch follows this note. It can be done by finding the two derived demand curves and using them to identify the relation between L and K. This is an isoquant, since the level of Q is constant and the same for each derived demand curve. The isoquant collapses to a right angle when $a_{LK} = 0$.
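As a supplement to the exercise suggested in note 52, here is one way to carry it out symbolically (an illustrative sketch, not from the text; it assumes sympy). Shephard's lemma applied to the two-factor generalized Leontief cost function yields the two derived demands, and setting $a_{LK} = 0$ makes each demand proportional to Q alone, which is exactly fixed proportions (a right-angle isoquant).

```python
# Derived demands for the two-factor generalized Leontief cost function
# C = Q*(aK*P_K + aL*P_L + 2*aLK*sqrt(P_L*P_K)), via Shephard's lemma.
# (Illustrative sketch; not from the text.)
import sympy as sp

Q, P_K, P_L, aK, aL, aLK = sp.symbols("Q P_K P_L aK aL aLK", positive=True)
C = Q * (aK * P_K + aL * P_L + 2 * aLK * sp.sqrt(P_L * P_K))

K = sp.simplify(sp.diff(C, P_K))  # K = Q*(aK + aLK*sqrt(P_L/P_K))
L = sp.simplify(sp.diff(C, P_L))  # L = Q*(aL + aLK*sqrt(P_K/P_L))

# With aLK = 0, the input demands no longer depend on prices at all:
print(K.subs(aLK, 0))  # aK*Q, so K/Q is fixed
print(L.subs(aLK, 0))  # aL*Q, so L/Q is fixed: a right-angle isoquant
```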


The second cost function of quite general use is the translog cost function, the one used by Spady and Friedlaender in their study of trucking:

$$\ln C(Q, P_1, \ldots, P_n) = \ln Q + a_0 + \sum_{i=1}^{n} a_i \ln P_i + \frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n} a_{ij} \ln P_i \ln P_j$$

where all the a's are parameters subject to the following restrictions:

$$\sum_{i=1}^{n} a_i = 1, \qquad \sum_{i=1}^{n} a_{ij} = 0, \qquad a_{ij} = a_{ji}$$

If it turns out that all aij = 0, the translog function collapses to the Cobb-Douglas.
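That collapse is quick to verify numerically. In the sketch below (an illustrative addition, not from the text; it assumes numpy, and the parameter values are taken from the earlier Cobb-Douglas example), all $a_{ij}$ are set to zero, with $a_0 = \ln\delta$, $a_L = 1 - \alpha$, and $a_K = \alpha$; the translog then reproduces the cost of 360 computed earlier.

```python
# Check that the translog with all a_ij = 0 reduces to the Cobb-Douglas
# cost function from this appendix. (Illustrative sketch; not from the text.)
import numpy as np

alpha, P_L, P_K, Q = 0.5, 4.0, 9.0, 30.0
delta = alpha**(-alpha) * (1 - alpha)**(alpha - 1)

a0 = np.log(delta)
a_L, a_K = 1 - alpha, alpha          # note a_L + a_K = 1, as required

ln_C = np.log(Q) + a0 + a_L * np.log(P_L) + a_K * np.log(P_K)
print(np.exp(ln_C))                  # 360.0 (up to floating-point rounding)
```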

CHAPTER TEN
PRIVATE PROFIT-MAKING ORGANIZATIONS: OBJECTIVES, CAPABILITIES, AND POLICY IMPLICATIONS

This chapter and the following one are about the organizations that convert inputs into outputs: firms in the private profit-making sector, private nonprofit organizations such as certain hospitals and schools, public bureaus such as fire and sanitation departments, and public enterprises such as mass transit systems.1 Each of these organizations must decide what outputs to produce, the level of each, and the technologies with which to make them. In the preceding chapter we reviewed the concepts of technological possibilities and their associated costs. These serve to constrain the organization's decisions. Similarly, the organization is constrained by the factors that influence its revenues: sales, voluntary donations, or government funding. For the most part we shall defer an examination of the sources of these latter constraints: they arise in part from consumer (including government) demand for the outputs and in part from the behavior of other producer organizations that supply the same or very similar outputs. If we simply refer to all the external constraints on the organization as its environment, then we can say that the organization's behavior is a function of its objectives, capabilities, and environment. In this chapter our purpose is to examine the role of analytic assumptions about objectives and capabilities in modeling the behavior of the private, profit-making firm. We begin with models of the private firm because they have received the most attention in the professional literature and are more highly developed and tested than the models of other supplier organizations. The models of behavior we review here are useful for predicting the organization's response to such policy changes as taxes, subsidy plans, regulations, antitrust laws, and other legal requirements.

1 The distinction between a public bureau and public enterprise is that the bureau receives its funds from government revenues and the enterprise receives its funds from sale of the output. In actuality most public supply organizations receive some user fees as well as government subsidies, so the distinction is really a matter of degree.


We begin with a discussion of the concept of a firm and emphasize the importance of uncertainty, information costs, and transaction costs to explain the formation of firms. Then we review the standard model of a firm: an